Search Results

Search found 26454 results on 1059 pages for 'post parameter'.


  • Combining Shared Secret and Certificates

    - by Michael Stephenson
    As discussed in the introduction article, this walkthrough explains how you can implement WCF security with the Windows Azure Service Bus so that you protect your endpoint in the cloud with a shared secret, but also combine this with certificates so that you can identify the sender of the message.

    Prerequisites

    As in the previous article, before going into the walkthrough I want to explain a few assumptions about the scenario we are implementing; to keep the article shorter I am not going to walk through all of the setup steps. In the solution we have a simple console application which represents the client application. There is also the services WCF application which contains the WCF service we will expose via the Windows Azure Service Bus. The WCF service application in this example was hosted in IIS 7 on Windows 2008 R2 with AppFabric Server installed and configured to auto-start the WCF listening services. I am not going to go through significant detail around the IIS setup because it should not matter in relation to this article; however, if you want to understand more about how to configure WCF and IIS for such a scenario, please refer to the following paper, which goes into a lot of detail: http://tinyurl.com/8s5nwrz

    Setting up the Certificates

    To keep the post and sample simple I am going to use the local computer store for all certificates, but this part is really just the same as setting up certificates for a WCF example that does not use the Windows Azure Service Bus. In the sample I have included two batch files which you can use to create or remove the sample certificates. You will end up with: a certificate called PocServerCert in the personal store for the local computer, which will be used by the WCF service component; a certificate called PocClientCert in the personal store for the local computer, which will be used by the client application; and a root certificate called PocRootCA in the Root store, with its associated revocation list, which is the root from which the client and server certificates were created. For the sample I'm just using development certificates as you normally would, and you can see exactly how these are configured and placed in the stores from the batch files in the solution, which use makecert and certmgr.

    The Service Component

    To begin with, let's look at the service component and how it can be configured to listen to the Service Bus using a shared secret while also accepting a certificate token from the client. In the sample the service component is called Acme.Azure.ServiceBus.Poc.Cert.Services. It has a single service based on the Visual Studio template for a WCF Service Application, so we have a service called Service1 with its Echo method. Nothing special so far! The next step is to look at the web.config file to see how the WCF service is configured. In the services section of the WCF configuration I have created my service with a local endpoint, which I simply used for a little diagnostics to check it was working, and, more importantly, the Windows Azure endpoint, which uses the ws2007HttpRelayBinding (note that this should also work just the same if you're using netTcpRelayBinding). The key points to note are the service behaviour called MyServiceBehaviour and the Service Bus endpoint behaviour called MyEndpointBehaviour.
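    The configuration itself appeared as screenshots in the original post, which don't reproduce here. As a rough stand-in, the binding security described in the next sections looks something like this when expressed in code; this is a sketch only, the API names are from the Windows Azure Service Bus SDK's relay binding, and the exact settings are an assumption rather than the author's exact configuration:

        using System.ServiceModel;
        using Microsoft.ServiceBus;

        // Sketch only: relay transport security plus a certificate message
        // credential, with ACS (RelayAccessToken) still authenticating
        // access to the Service Bus relay itself.
        var binding = new WS2007HttpRelayBinding();
        binding.Security.Mode = EndToEndSecurityMode.TransportWithMessageCredential;
        binding.Security.RelayClientAuthenticationType = RelayClientAuthenticationType.RelayAccessToken;
        binding.Security.Message.ClientCredentialType = MessageCredentialType.Certificate;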
    We will go into both behaviours in more detail below.

    The Relay Binding

    The relay binding for the service is configured to use the TransportWithMessageCredential security mode. This is the important bit: the transport security relates to the interaction between the service and the Azure Service Bus it listens to, while the message credential is where we use our certificate, as specified in the message/clientCredentialType attribute. Note also that we have left the relayClientAuthenticationType set to RelayAccessToken. This means that authentication is made against ACS for accessing the Service Bus, and messages are not accepted from any sender who has not been authenticated by ACS.

    The Endpoint Behaviour

    The endpoint behaviour is configured to use the shared secret client credential for accessing the Service Bus, and for diagnostic purposes I have also included the service registry element. If you are familiar with the Windows Azure Service Bus relay feature, this is a very common setup, and there is nothing specific to the certificate implementation here.

    The Service Behaviour

    Now we come to the part with most of the certificate work in it. In the service behaviour I have included the serviceCredentials element, set up the clientCertificate check, and specified the serviceCertificate with information on how to find the server's certificate in the store. I have also added a serviceAuthorization section where I implement my own authorization component to perform additional security checks after the service has validated that the message was signed with a good certificate, along with the usual serviceSecurityAudit configuration to log access to my service.

    My Authorization Manager

    WCF hands the message off to my authorization component before it calls the service code. This is where I can perform some logic to check whether the identity is allowed to access resources; in this case I simply reject messages from anyone except the holder of PocClientCert.

    The Client

    Now let's take a look at the client side of this solution and how we can configure the client to authenticate against ACS but also send a certificate over to the service component so it can implement additional security checks on-premise. I have a console application, and in the program class I want to use the proxy generated with Add Service Reference to send a message via the Azure Service Bus. In the WCF client configuration I have set up the details for the Azure Service Bus URL and am using the ws2007HttpRelayBinding. Next is the configuration for the relay binding: security is set to TransportWithMessageCredential, so the token from a certificate flows with the message, and the relayClientAuthenticationType is RelayAccessToken, which means the client must validate against ACS before being allowed to access the relay endpoint to send a message. After the binding we configure the endpoint behaviour. This contains the normal transportClientEndpointBehavior to set up the ACS shared secret configuration, but we have also configured the clientCertificate to look for PocClientCert.
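    Again, the client configuration appeared as screenshots in the original post. A sketch of the equivalent in code follows; the endpoint name, issuer name and key are placeholders, and the TokenProvider call is an assumption based on the Service Bus SDK, not the author's code:

        using System.Security.Cryptography.X509Certificates;
        using Microsoft.ServiceBus;

        // Sketch only: shared secret for ACS plus the client certificate
        // as the message credential. "RelayEndpoint" is a hypothetical
        // endpoint configuration name from Add Service Reference.
        var proxy = new Service1Client("RelayEndpoint");
        proxy.Endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "base64IssuerKey")
        });
        proxy.ClientCredentials.ClientCertificate.SetCertificate(
            StoreLocation.LocalMachine, StoreName.My,
            X509FindType.FindBySubjectName, "PocClientCert");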
    Finally, the client code in the console application simply creates the proxy and makes a normal WCF call in exactly the usual way; the configuration steps in and ensures that a token representing the client certificate is passed with the message.

    Conclusion

    As you can see from the walkthrough, it is not too difficult to configure a service to use both a shared secret and a certificate-based token at the same time. This gives you the power and protection offered by the Access Control Service in the cloud, but also the ability to flow additional tokens to the on-premise component so that additional security features can be implemented.

    Sample

    The sample used in this post is available at the following location: https://s3.amazonaws.com/CSCBlogSamples/Acme.Azure.ServiceBus.Poc.Cert.zip

    Read the article

  • Entity Framework 6: Alpha2 Now Available

    - by ScottGu
    The Entity Framework team recently announced the second alpha release of EF6. The alpha 2 package is available for download from NuGet. Since this is a pre-release package, make sure to select "Include Prereleases" in the NuGet package manager, or execute the following from the package manager console to install it:

        PM> Install-Package EntityFramework -Pre

    This week's alpha release includes a bunch of great improvements in the following areas:

    - Async language support is now available for queries and updates when running on .NET 4.5.
    - Custom conventions now provide the ability to override the default conventions that Code First uses for mapping types, properties, etc. to your database.
    - Multi-tenant migrations allow the same database to be used by multiple contexts, with full Code First Migrations support for independently evolving the model backing each context.
    - Using Enumerable.Contains in a LINQ query is now handled much more efficiently by EF and the SQL Server provider, resulting in greatly improved performance.
    - All features of EF6 (except async) are available on both .NET 4 and .NET 4.5. This includes support for enums and spatial types and the performance improvements that were previously only available when using .NET 4.5.
    - Start-up time for many large models has been dramatically improved thanks to improved view generation performance.

    Below are some additional details about a few of the improvements above:

    Async Support

    .NET 4.5 introduced the Task-Based Asynchronous Pattern, which uses the async and await keywords to help make writing asynchronous code easier. EF6 now supports this pattern. This is great for ASP.NET applications, as database calls made through EF can now be processed asynchronously, avoiding any blocking of worker threads. This can increase scalability on the server by allowing more requests to be processed while waiting for the database to respond. The following code shows an MVC controller that is querying a database for a list of location entities:

        public class HomeController : Controller
        {
            LocationContext db = new LocationContext();

            public async Task<ActionResult> Index()
            {
                var locations = await db.Locations.ToListAsync();
                return View(locations);
            }
        }

    Notice the call to the new ToListAsync method with the await keyword. When the web server reaches this code it initiates the database request, but rather than blocking while waiting for the results to come back, the thread that is processing the request returns to the thread pool, allowing ASP.NET to process another incoming request with the same thread. In other words, a thread is only consumed when there is actual processing work to do, allowing the web server to handle more concurrent requests with the same resources. A more detailed walkthrough covering async in EF is available with additional information and examples, as is a walkthrough showing how to use async in an ASP.NET MVC application.

    Custom Conventions

    When working with EF Code First, the default behavior is to map .NET classes to tables using a set of conventions baked into EF. For example, Code First will detect properties that end with "ID" and configure them automatically as primary keys. However, sometimes you cannot or do not want to follow those conventions and would rather provide your own. For example, maybe your primary key properties all end in "Key" instead of "Id".
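    As an illustration of that idea, a convention treating "Key"-suffixed properties as primary keys might look like the sketch below. This is not from the announcement: the entity shape and the availability of IsKey in the alpha's lightweight convention API are assumptions, although the Properties()/Configure pattern matches the example that follows.

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Treat any property whose name ends in "Key" as the primary
            // key of its entity, in place of the built-in "Id"/"ID" match.
            modelBuilder.Properties()
                .Where(p => p.Name.EndsWith("Key"))
                .Configure(p => p.IsKey());
        }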
    Custom conventions allow the default conventions to be overridden, or new conventions to be added, so that Code First can map by convention using whatever rules make sense for your project. The following code demonstrates using custom conventions to set the precision of all decimals to 5. As with other Code First configuration, this code is placed in the OnModelCreating method, which is overridden on your derived DbContext class:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Properties<decimal>()
                .Configure(x => x.HasPrecision(5));
        }

    But what if there are a couple of places where a decimal property should have a different precision? Just as with all the existing Code First conventions, this new convention can be overridden for a particular property simply by explicitly configuring that property using either the fluent API or a data annotation. A more detailed description of custom Code First conventions is available here.

    Community Involvement

    I blogged a while ago about EF being released under an open source license. Since then a number of community members have made contributions, and these are included in EF6 alpha 2. Two examples of community contributions are:

    - AlirezaHaghshenas contributed a change that increases the startup performance of EF for larger models by improving the performance of view generation. The change means that it is less often necessary to use pre-generated views.
    - UnaiZorrilla contributed the first community feature to EF: the ability to load all Code First configuration classes in an assembly with a single method call like the following:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Configurations
                .AddFromAssembly(typeof(LocationContext).Assembly);
        }

    This code will find and load all the classes that inherit from EntityTypeConfiguration<T> or ComplexTypeConfiguration<T> in the assembly where LocationContext is defined. This reduces the amount of coupling between the context and the Code First configuration classes, and is also a very convenient shortcut for large models.

    Other Upcoming Features in EF6

    Lots of information about the development of EF6 can be found on the EF CodePlex site, including a roadmap showing the other features that are planned for EF6. One of the nice upcoming features is connection resiliency, which will automate the process of retrying database operations on transient failures common in cloud environments and with databases such as Windows Azure SQL Database. Another often-requested feature that will be included in EF6 is the ability to map stored procedures to query and update operations on entities when using Code First.

    Summary

    EF6 is the first open source release of Entity Framework being developed in CodePlex. The alpha 2 preview release of EF6 is now available on NuGet and contains some really great features for you to try. The EF team is always looking for feedback from developers, especially on new features such as custom Code First conventions and async support. To provide feedback you can post a comment on the EF6 alpha 2 announcement post, start a discussion, or file a bug on the CodePlex site.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Why JSF Matters (to You)

    - by reza_rahman
          "Those who have knowledge, don’t predict. Those who predict, don’t have knowledge."                                                                                                    – Lao Tzu You may have noticed Thoughtworks recently crowned the likes AngularJS, etc imminent successors to server-side web frameworks. They apparently also deemed it necessary to single out JSF for righteous scorn. I have to say as I was reading the analysis I couldn't help but remember they also promptly jumped on the Ruby, Rails, Clojure, etc bandwagon a good few years ago seemingly similarly crowing these dynamic languages imminent successors to Java. I remember thinking then as I do now whether the folks at Thoughtworks are really that much smarter than me or if they are simply more prone to the Hipster buzz of the day. I'll let you make the final call on that one. I also noticed mention of "J2EE" in the context of JSF and had to wonder how up-to-date or knowledgeable the person writing the analysis actually was given that the term was basically retired almost a decade ago. There's one thing that I am absolutely sure about though - as a long time pretty happy user of JSF, I had no choice but to speak up on what I believe JSF offers. If you feel the same way, I would encourage you to support the team behind JSF whose hard work you may have benefited from over the years. True to his outspoken character PrimeFaces lead Cagatay Civici certainly did not mince words making the case for the JSF ecosystem - his excellent write-up is well worth a read. He specifically pointed out the practical problems in going whole hog with bare metal JavaScript, CSS, HTML for many development teams. I'll admit I had to smile when I read his closing sentence as well as the rather cheerful comments to the post from actual current JSF/PrimeFaces users that are apparently supposed to be on a gloomy death march. In a similar vein, OmniFaces developer Arjan Tijms did a great job pointing out the fact that despite the extremely competitive server-side Java Web UI space, JSF seems to manage to always consistently come out in either the number one or number two spot over many years and many data sources - do give his well-written message in the JAX-RS user forum a careful read. I don't think it's really reasonable to expect this to be the case for so many years if JSF was not at least a capable if not outstanding technology. If fact if you've ever wondered, Oracle itself is one of the largest JSF users on the planet. As Oracle's Shay Shmeltzer explains in a recent JSF Central interview, many of Oracle's strategic products such as ADF, ADF Mobile and Fusion Applications itself is built on JSF. There are well over 3,000 active developers working on these codebases. I don't think anyone can think of a more compelling reason to make sure that a technology is as effective as possible for practical development under real world conditions. Standing on the shoulders of the above giants, I feel like I can be pretty brief in making my own case for JSF: JSF is a powerful abstraction that brings the original Smalltalk MVC pattern to web development. This means cutting down boilerplate code to the bare minimum such that you really can think of just writing your view markup and then simply wire up some properties and event handlers on a POJO. The best way to see what this really means is to compare JSF code for a pretty small case to other approaches. 
    You should then multiply the additional work across a typical enterprise project to understand what the productivity trade-offs are. This alone is reason for me to personally never take any other approach seriously as my primary web UI solution unless it can match the sheer productivity of JSF. Thanks to JSF's focus on components from the ground up, JSF has an extremely strong ecosystem that includes projects like PrimeFaces, RichFaces, OmniFaces, ICEfaces and of course ADF Faces/Mobile. These component libraries, taken together, constitute perhaps the largest widget set ever developed and optimized for a single web UI technology. To begin to grasp what this really means, just briefly browse the excellent PrimeFaces showcase and consider that you can readily use the widgets there with some simple markup, knowing next to nothing about AJAX, JavaScript or CSS. JSF also has the fair and legitimate advantage of being an open, vendor-neutral standard. This means that no single company, individual or insular clique controls JSF - openness, transparency, accountability, plurality, collaboration and inclusiveness are virtually guaranteed by the standards process itself. You have the option to choose between compatible implementations, escape any form of lock-in, or even create your own compatible implementation!

    As you might gather from the quote at the top of the post, I am not a fan of crystal-ball gazing and certainly don't want to engage in it myself. Who knows? However far-fetched it may seem, maybe AngularJS is the only future we all have after all. If that is the case, so be it. Unlike what you might have been told, Java EE is about choice at heart, and it can certainly work extremely well as a back-end for AngularJS. Likewise, you are also most certainly not limited to JSF for working with Java EE - you have a rich set of choices like Struts 2, Vaadin, Errai, VRaptor 4, Wicket, or perhaps even the new action-oriented web framework being considered for Java EE 8 based on the work in Jersey MVC...

    Please note that any views expressed here are my own only and certainly do not reflect the position of Oracle as a company.

    Read the article

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
    During my ongoing tenure at Oracle, I have met all types of DBAs: happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation; they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled "Database as a Service". In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches, and as often as I get tempted to propose Enterprise Manager 12c, I try to be objective. I neither want to dampen the spirit of the happy ones, nor stoke the pain of the unhappy ones. As I mentioned in my previous post, I don't deny that vanilla automation can be useful, and I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder:

    1. Does it make life easier for your end users?

    Database-as-a-Service can have several types of end users - junior DBAs, QA engineers, developers - each with their own skill set. The objective of DBaaS is to make their lives simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a developer using Oracle Application Express (APEX), you want to deal with schemas, objects and PL/SQL code, not with datafiles or listener configuration. If you are a QA engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with, a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system? Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor.

    2. Does it make life easier for your administrators?

    Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end-user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a service catalog exposes DBaaS of specific database versions and configurations, it by its very nature enforces a certain discipline and standardization within the IT environment. What if, instead of specific database configurations, the cloud allowed each end user to create databases of their liking, resulting in hundreds of version and patch levels and thousands of individual databases? The right question to ask, therefore, is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and that can result in a management nightmare.
    That leads us to our next question.

    3. Does it satisfy your security officers and compliance auditors?

    Compliance auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom to tamper with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with VMs as they keep getting reconfigured and moved around. This leads to the proverbial needle-in-the-haystack problem, and all it takes is one needle to cause a serious compliance issue in the enterprise. The bottom line is that flexibility and agility should not come at the expense of compliance, and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a one-size-fits-all approach, i.e. OS-level isolation, can we think smartly about database isolation or schema-based isolation? This is where appropriate resource modeling needs to be applied. The usual systems management vendors out there, with their heterogeneous common-denominator approach, have compromised on these semantics. If you follow Enterprise Manager's DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database should be governed by the needs of the applications, not by putting compliance considerations on the back burner.

    4. Does it satisfy your CIO?

    Finally, does it satisfy your higher-ups? As the sponsor of the cloud initiative, the CIO is expected to lead an IT transformation project, not merely run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation flattens out at 20-25%. The question then becomes: what next? As we go higher up the stack, the need to virtualize, segregate and optimize shifts to the layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, "the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service-level assurance. Last but not least, a lot of CIOs get miffed if we ask them to throw away their existing hardware investments to implement DBaaS. At Oracle, we always emphasize freedom in choosing a platform; hence Enterprise Manager's DBaaS solution is platform-neutral: it can work on any operating system (that the agent is certified on), on Oracle hardware as well as third-party hardware.

    As a parting note, I urge you to remember these four questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

  • A Simple Collapsible Menu with jQuery

    - by Vincent Maverick Durano
    In this post I'll demonstrate how to make a simple collapsible menu using jQuery. To get started, let's fire up Visual Studio and create a new WebForm. Now let's build our menu by adding some div, p and anchor tags. Since I'm using a master page, the ASPX mark-up should look something like this:

        1: <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        2: <div id="Menu">
        3: <p>CARS</p>
        4: <div class="section">
        5: <a href="#">Car 1</a>
        6: <a href="#">Car 2</a>
        7: <a href="#">Car 3</a>
        8: <a href="#">Car 4</a>
        9: </div>
        10: <p>BIKES</p>
        11: <div class="section">
        12: <a href="#">Bike 1</a>
        13: <a href="#">Bike 2</a>
        14: <a href="#">Bike 3</a>
        15: <a href="#">Bike 4</a>
        16: <a href="#">Bike 5</a>
        17: <a href="#">Bike 6</a>
        18: <a href="#">Bike 7</a>
        19: <a href="#">Bike 8</a>
        20: </div>
        21: <p>COMPUTERS</p>
        22: <div class="section">
        23: <a href="#">Computer 1</a>
        24: <a href="#">Computer 2</a>
        25: <a href="#">Computer 3</a>
        26: <a href="#">Computer 4</a>
        27: </div>
        28: <p>OTHERS</p>
        29: <div class="section">
        30: <a href="#">Other 1</a>
        31: <a href="#">Other 2</a>
        32: <a href="#">Other 3</a>
        33: <a href="#">Other 4</a>
        34: </div>
        35: </div>
        36: </asp:Content>

    As you can see, there's nothing fancy about the mark-up above. Now let's create a simple CSS to set the look and feel of our menu. Just for the simplicity of this demo, add the following CSS under the <head> section of the page or, if you are using a master page, add it in the content head. Here's the CSS:

        1: <asp:Content ID="Content1" ContentPlaceHolderID="HeadContent" runat="server">
        2: <style type="text/css">
        3: #Menu{
        4: width:300px;
        5: }
        6: #Menu > p{
        7: background-color:#104D9E;
        8: color:#F5F7FA;
        9: margin:0;
        10: padding:0;
        11: border-bottom-style: solid;
        12: border-bottom-width: medium;
        13: border-bottom-color:#000000;
        14: cursor:pointer;
        15: }
        16: #Menu .section{
        17: padding-left:5px;
        18: background-color:#C0D9FA;
        19: }
        20: a{
        21: display:block;
        22: color:#0A0A07;
        23: }
        24: </style>
        25: </asp:Content>

    Now let's add the collapsible effects to our menu using jQuery. To start using jQuery, register the following script at the very top of the <head> section of the page or, if you are using a master page, at the very top of the content head section:

        <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.4/jquery.min.js"></script>

    As you can see, I'm using the Google AJAX API CDN to host the jQuery file. You can also download jQuery here and host it on your server if you'd like. Okay, here's the jQuery script for adding the collapsible effects:

        1: <script type="text/javascript">
        2: $(function () {
        3: $("a").mouseover(function () { $(this).addClass("highlightRow"); })
        4: .mouseout(function () { $(this).removeClass("highlightRow"); });
        5:
        6: $(".section").hide();
        7: $("#Menu > p").click(function () {
        8: $(this).next().slideToggle("Slow");
        9: });
        10: });
        11: </script>

    Okay, to give you a little bit of explanation: at line 3, what it does is look for all the <a> anchor elements on the page and attach the mouseover and mouseout events. On mouseover, the highlightRow CSS class is added to the <a> element, and on mouseout we remove the class to revert the style to its default look. At line 6 we hide all the elements that have the class name "section"; if you look at the mark-up above, that refers to the <div> elements right after each <p> element. At line 7,
    what it does is look for the <p> elements that are direct children of the element with the ID "Menu" and attach a click event that toggles the visibility of the following section.

    Here's how it looks in the page (shown as screenshots in the original post): on initial load, all sections are collapsed; after clicking a section header, that section expands.

    That's it! I hope someone finds this post useful!

    Technorati Tags: ASP.NET,JQuery,Master Page,JavaScript

    Read the article

  • Using custom DataContractResolver in WCF, to transport inheritance trees involving generics

    - by Benson
    I've got a WCF service in which there are operations that accept a non-generic base class as a parameter:

        [DataContract]
        class Foo { ... }

    This base class is in turn inherited by generic classes such as:

        [DataContract]
        class Bar<T> : Foo { ... }

    To get this to work, I'd previously have to register KnownTypes for the Foo class and have these include all possible closed variations of Bar (such as Bar<int>, Bar<string> and even Bar<List<string>>). With the DataContractResolver in .NET 4, however, I should be able to build a resolver which properly stores (and restores) the classes. My questions:

    - Are DataContractResolvers typically only used on the service side, and not by the client? If so, how would that be useful in this scenario?
    - Am I wrong to write a DataContractResolver which serializes the fully qualified type name of a generic type, such as Bar`1[List`1[string, mscorlib], mscorlib]? Couldn't the same DataContractResolver on the client side restore these types?
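    For illustration, here is a minimal sketch of the kind of resolver being described, round-tripping the assembly-qualified type name. This is an assumption about the intent, not the asker's actual code, and the namespace URI is a placeholder:

        using System;
        using System.Runtime.Serialization;
        using System.Xml;

        // Sketch: emit each type's assembly-qualified name so closed
        // generics round-trip without KnownType registrations.
        class QualifiedNameResolver : DataContractResolver
        {
            public override bool TryResolveType(Type type, Type declaredType,
                DataContractResolver knownTypeResolver,
                out XmlDictionaryString typeName, out XmlDictionaryString typeNamespace)
            {
                var dictionary = new XmlDictionary();
                typeName = dictionary.Add(type.AssemblyQualifiedName);
                typeNamespace = dictionary.Add("http://tempuri.org/types"); // placeholder
                return true;
            }

            public override Type ResolveName(string typeName, string typeNamespace,
                Type declaredType, DataContractResolver knownTypeResolver)
            {
                // Fall back to the default resolution if Type.GetType fails.
                return Type.GetType(typeName)
                    ?? knownTypeResolver.ResolveName(typeName, typeNamespace, declaredType, null);
            }
        }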

    Read the article

  • DataContractJsonSerializer ReadObject Exception

    - by Dan Appleyard
    I am following the accepted answer of "ASP.NET MVC: How to pass a JSON object from View to Controller as a parameter". Like the original question, I have a simple POCO. Everything works fine for me up until the DataContractJsonSerializer.ReadObject method, where I get the following exception:

        Expecting element 'root' from namespace ''.. Encountered 'None' with name '', namespace ''.

        Public Overrides Sub OnActionExecuting(ByVal filterContext As ActionExecutingContext)
            If filterContext.HttpContext.Request.ContentType.Contains("application/json") Then
                Dim s As System.IO.Stream = filterContext.HttpContext.Request.InputStream
                Dim o = New DataContractJsonSerializer(RootType).ReadObject(s)
                filterContext.ActionParameters(Param) = o
            Else
                Dim xmlRoot = XElement.Load(New StreamReader(filterContext.HttpContext.Request.InputStream, filterContext.HttpContext.Request.ContentEncoding))
                Dim o As Object = New XmlSerializer(RootType).Deserialize(xmlRoot.CreateReader)
                filterContext.ActionParameters(Param) = o
            End If
        End Sub

    Any ideas? Thanks

    Read the article

  • encodingStyle usage in XmlSerializer.Serialize

    - by Vishal Seth
    Can somebody please explain the use of the 4th parameter of:

        public void Serialize(
            XmlWriter xmlWriter,
            Object o,
            XmlSerializerNamespaces namespaces,
            string encodingStyle
        )

    My issue is this: I have the following string in one of the fields of my object:

        "reviewed ?" // music notation

    When I serialize it, it becomes "& # x E ;" (written with spaces here because the forum won't render the entity as one word), and it fails when I try to transform this .NET-generated XML through another XSL file. Is it happening because it's serializing using UTF-16? Is there any way I can make it use UTF-8 and make this error go away?
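    As an aside, one way to control the output encoding is to hand Serialize an XmlWriter created with explicit settings. This is a sketch: MyType, obj and the output path are placeholder names, and whether it resolves the particular entity problem above is untested.

        using System.Text;
        using System.Xml;
        using System.Xml.Serialization;

        // Sketch: force UTF-8 output instead of the writer's default
        // (often UTF-16 when writing to a StringWriter).
        var settings = new XmlWriterSettings { Encoding = Encoding.UTF8 };
        using (var writer = XmlWriter.Create("output.xml", settings))
        {
            new XmlSerializer(typeof(MyType)).Serialize(writer, obj);
        }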

    Read the article

  • Split List into Sublists with LINQ

    - by Felipe Lima
    Hi all, I believe this is another easy one for you LINQ masters out there. Is there any way I can split a List<SomeObject> into several separate lists of SomeObject, using the item index as the delimiter of each split? Let me exemplify: I have a List<SomeObject> and I need a List<List<SomeObject>> or List<SomeObject>[], so that each of the resulting lists contains a group of 3 items of the original list (sequentially).

    e.g.:

        Original list:   [a, g, e, w, p, s, q, f, x, y, i, m, c]
        Resulting lists: [a, g, e], [w, p, s], [q, f, x], [y, i, m], [c]

    I'd also need the resulting list size to be a parameter of this function. Is it possible? Thanks!
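    For reference, one common shape such a function can take is a sketch using GroupBy on the item index; this is an illustration, not necessarily the most efficient approach for large lists:

        using System.Collections.Generic;
        using System.Linq;

        static List<List<T>> Split<T>(List<T> source, int chunkSize)
        {
            // Group items by index / chunkSize: 0,1,2 -> group 0; 3,4,5 -> group 1; ...
            return source
                .Select((item, index) => new { item, index })
                .GroupBy(x => x.index / chunkSize)
                .Select(g => g.Select(x => x.item).ToList())
                .ToList();
        }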

    Read the article

  • C# convert an IOrderedEnumerable<KeyValuePair<string, int>> into a Dictionary<string, int>

    - by Kache4
    I was following the answer to another question, and I got:

        // itemCounter is a Dictionary<string, int>, and I only want to keep
        // key/value pairs with the top maxAllowed values
        if (itemCounter.Count > maxAllowed)
        {
            IEnumerable<KeyValuePair<string, int>> sortedDict =
                from entry in itemCounter
                orderby entry.Value descending
                select entry;
            sortedDict = sortedDict.Take(maxAllowed);
            itemCounter = sortedDict.ToDictionary<string, int>(/* what do I do here? */);
        }

    Visual Studio is asking for a parameter Func<string, int> keySelector. I tried following a few semi-relevant examples I've found online and put in k => k.Key, but that gives a compiler error:

        'System.Collections.Generic.IEnumerable<System.Collections.Generic.KeyValuePair<string,int>>' does not contain a definition for 'ToDictionary' and the best extension method overload 'System.Linq.Enumerable.ToDictionary<TSource,TKey>(System.Collections.Generic.IEnumerable<TSource>, System.Func<TSource,TKey>)' has some invalid arguments
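    A sketch of where the type arguments go wrong: ToDictionary's first type argument is the source element type, so with explicit arguments the call would be ToDictionary<KeyValuePair<string, int>, string, int>. It is usually simpler to let the compiler infer them:

        // Sketch: key and value selectors with inferred type arguments.
        itemCounter = sortedDict.ToDictionary(pair => pair.Key, pair => pair.Value);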

    Read the article

  • C# TCP Hole Punch (NAT Traversal) Library or something?

    - by user293531
    I want to do TCP hole punching (NAT traversal) in C#. It can be done with a rendezvous server if needed. I found http://sharpstunt.codeplex.com/ but cannot get this to work. Ideally I need some method to which I give a port number (int) as a parameter, and after the call that port is available ("port forwarded") at the NAT. It would also be OK if the method just returned some port number which is then available at the NAT. Has anybody done this in C#? Can you give me working examples for sharpstunt or something else? Thank you

    Read the article

  • Visual Studio 2010, TlbImp generates .net 4.0 interops in 2.0 projects

    - by DJScrib
    In a C# project, we add a reference to a COM object via the Add Reference dialog, which results in the IDE auto-generating the interop assembly. This is fine and good, but we are building against .NET 3.5 SP1, i.e. CLR 2.0, and the generated interops are using the 4.0 CLR, making them incompatible. Is there a way to prevent this? I assume the other option is to configure our build script to use tlbimp.exe with the /reference parameter pointing to mscorlib v2.0? Anyhow, I'm hoping there's a flag somewhere to allow this.

    Read the article

  • "Could not authenticate you." -error when using Twitter OAuth.

    - by Martti Laine
    Hello, I'm building my first system using Twitter's OAuth and have some issues. I'm using Abraham's Twitter OAuth class and have followed this tutorial. However, I get these lines on my callback.php:

        Warning: array_merge() [function.array-merge]: Argument #2 is not an array in C:\xampp\htdocs\twitter\twitterOAuth\OAuth.php on line 301
        Warning: strtoupper() expects parameter 1 to be string, array given in C:\xampp\htdocs\twitter\twitterOAuth\OAuth.php on line 373
        Oops - an error has occurred. SimpleXMLElement Object ( [request] => /account/verify_credentials.xml [error] => Could not authenticate you. )

    Is this a problem with the Twitter class, or am I doing something wrong? I have my consumer key and consumer secret in config.php as the tutorial says, but should I store something else? Martti Laine

    Read the article

  • Explanation of SendMessage message numbers? (C#, Winforms)

    - by John
    I've successfully used the Windows SendMessage function to do various things in my text editor, but each time I am just copying and pasting code suggested by others, and I don't really know what it means. There is always a cryptic message number passed as a parameter. How do I find out what these message numbers mean, so that I can actually understand what is happening and (hopefully) be a little more self-sufficient in the future? Thanks. Recent example:

        using System.Runtime.InteropServices;

        [DllImport("user32.dll")]
        static extern int SendMessage(IntPtr hWnd, uint wMsg, UIntPtr wParam, IntPtr lParam);

        SendMessage(myRichTextBox.Handle, (uint)0x00B6, (UIntPtr)0, (IntPtr)(-1));
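    An aside: these numbers are Win32 window message IDs, defined in WinUser.h in the Windows SDK and documented on MSDN under their symbolic names. A sketch of the same call with a named constant, assuming 0x00B6 in the snippet above is meant as the edit-control message EM_LINESCROLL (it reuses the SendMessage declaration already shown):

        // EM_LINESCROLL (0x00B6 in WinUser.h) scrolls an edit control:
        // wParam = characters to scroll horizontally, lParam = lines vertically.
        const uint EM_LINESCROLL = 0x00B6;

        // Equivalent to the call above: scroll the text up one line.
        SendMessage(myRichTextBox.Handle, EM_LINESCROLL, (UIntPtr)0, (IntPtr)(-1));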

    Read the article

  • How to get an input value or JavaScript variable into an ASP.NET MVC Ajax.ActionLink

    - by achu
    I want to pass an input control value (say textbox1.value, or a JavaScript variable) to a controller action method as a parameter, without a form post, using Ajax.ActionLink. Please see the code below. Is it possible to assign something like new { name = textbox1.value } in Ajax.ActionLink?

    View:

        <input type="text" id="textbox1" />
        <% =Ajax.ActionLink("mylink", "linkfunction", new {name = textbox1.value}, new AjaxOptions { UpdateTargetId = "result"})%>
        <span id="result"></span>

    and the controller action is:

        public string linkfunction(string name)
        {
            return DateTime.Now.ToString();
        }

    Read the article

  • jqGrid pagination with a complex query

    - by bsreekanth
    Hello, I recently started experimenting with jqGrid and would much appreciate any guidance on the use case below. I need to implement an (advanced) search feature whose results are loaded into the jqGrid. When using pagination, how do I include a complex query in the post data? On the server side (Grails) the query is represented as an object, mocked below:

        class searchCommand {
            String val1
            List<Long> ids // from the multiple selection
        }

    The members above can be null if the user doesn't select anything. Without saving state on the server, I guess the only way to make pagination work is to pass the query object back and forth with the correct offset, index, etc. If that is the case, how best can it be represented on the jqGrid side? I saw a parameter, postData, for setting additional values, but I'm not sure how to represent the data in it (JSON?). Any code snippet on retaining the last query and converting it to postData would be helpful. Thanks in advance.

    Read the article

  • MATLAB matrix replacement assignment gives error

    - by Gulcan
    I tried to update part of a matrix and got the following error message:

        ??? Assignment has fewer non-singleton rhs dimensions than non-singleton subscripts

    My code tries to update some values of a matrix that represents a binary image. The code is:

        outImage(3:5,2:4,1) = max(imBinary(3:5,2:4,1));

    When I delete the last subscript (the 1), I get the same error. I guess there is a mismatch between dimensions, but I cannot see it. outImage is a new variable created at that time (I tried creating it beforehand, but nothing changed). What may be wrong?

    Read the article

  • Frame buffers won't work with pyglet

    - by Matthew Mitchell
    I have this code:

        def setup_framebuffer(surface):
            # Create texture if not done already
            if surface.texture is None:
                create_texture(surface)

            # Render child to parent
            if surface.frame_buffer is None:
                surface.frame_buffer = glGenFramebuffersEXT(1)
            glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, surface.frame_buffer)
            glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, surface.texture, 0)
            glPushAttrib(GL_VIEWPORT_BIT)
            glViewport(0, 0, surface._scale[0], surface._scale[1])
            glMatrixMode(GL_PROJECTION)
            glLoadIdentity()  # Load the projection matrix
            gluOrtho2D(0, surface._scale[0], 0, surface._scale[1])
            glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, surface.frame_buffer)

    Despite the second parameter printing as 1 in a test I did, I get an error on this call:

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, surface.frame_buffer)

    I only got this after switching to pyglet. GLUT is too limited. Thank you.

    Read the article

  • Is there a way to serialize a .NET MailMessage object

    - by Matt Dawdy
    I am trying to write a procedure that will take a MailMessage object as a parameter and split it apart to store the subject, body, to addresses, from address, and attachments (the hard part) in a database, so the email can be sent at some point in the future. My first take was to rip out the parts I need and store them in a database, and that works great except for attachments. I can't figure out how to loop through the collection and then actually do anything with the items. Is there an easy way to serialize a MailMessage object that will actually take the content of the attachments with it? Am I doing this all wrong? Has anyone done this before?
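    For what it's worth, here is a sketch of reading attachment bytes for storage; message is a placeholder MailMessage variable, and it assumes .NET 4's Stream.CopyTo is available:

        using System.IO;
        using System.Net.Mail;

        // Sketch: copy each attachment's content into a byte array for
        // storage, keeping the name and MIME type alongside it.
        foreach (Attachment attachment in message.Attachments)
        {
            using (var buffer = new MemoryStream())
            {
                attachment.ContentStream.CopyTo(buffer);
                byte[] bytes = buffer.ToArray();
                string name = attachment.Name;
                string mediaType = attachment.ContentType.MediaType;
                // ... persist bytes/name/mediaType to the database ...
            }
        }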

    Read the article

  • Active Directory Services: PrincipalContext -- What is the DN of a "container" object?

    - by Ranger Pretzel
    I'm currently trying to authenticate via Active Directory Services using the PrincipalContext class. I would like my application to authenticate to the domain using sealed and SSL contexts. In order to do this, I have to use the following constructor of PrincipalContext (link to MSDN page):

        public PrincipalContext(
            ContextType contextType,
            string name,
            string container,
            ContextOptions options
        )

    Specifically, I'm using the constructor like so:

        PrincipalContext domainContext = new PrincipalContext(
            ContextType.Domain,
            domain,
            container,
            ContextOptions.Sealing | ContextOptions.SecureSocketLayer);

    MSDN says about "container":

        The container on the store to use as the root of the context. All queries are performed under this root, and all inserts are performed into this container. For Domain and ApplicationDirectory context types, this parameter is the distinguished name (DN) of a container object.

    What is the DN of a container object? How do I find out what my container object is? Can I query the Active Directory (or LDAP) server for this?
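    By way of illustration, with entirely hypothetical names: a container DN is the LDAP-style, comma-separated path of the container, e.g. the default Users container of a domain corp.example.com:

        // Sketch with hypothetical names: the DN strings are examples only.
        var domainContext = new PrincipalContext(
            ContextType.Domain,
            "corp.example.com",
            "CN=Users,DC=corp,DC=example,DC=com",   // DN of the Users container
            ContextOptions.Sealing | ContextOptions.SecureSocketLayer);

        // An OU works the same way, e.g. "OU=Service Accounts,DC=corp,DC=example,DC=com"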

    Read the article

  • WCF Binding Created In Code

    - by Daniel
    Hello, I need to create a WCF service with a parameter. I'm following this: http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/8f18aed8-8e34-48ea-b8be-6c29ac3b4f41

    The first issue is that I don't know how to set this custom behavior, "MyServiceBehavior", in the Web.config of the ASP.NET MVC app that will host the service. As far as I know, behaviors must be declared in a behaviors section of the WCF configuration. How can I add a reference there to my behavior class from the service assembly? The second thing is that in the example they create a local host, but how can I add the headers used in the constructor when I use a service reference, which will already have created an instance of the web service, right? Regards, Daniel Skowronski

    Read the article

  • How to access the FirstData web service integration WSDL file?

    - by rcampbell
    FirstData has horrendous customer support, but I have to integrate with their Global Gateway web service for a project I'm working on. I'm simply trying to run the Axis2 wsdl2java tool according to the instructions in their manual, which basically consist of adding the keyStore and keyStorePassword JVM parameters. I've done both, but I continue to get connection reset errors when trying to run:

        wsdl2java.bat -uri https://www.staging.linkpointcentral.com/fdggwsapi/order.wsdl -S C:\

    When I try to access the URL with my browser, I get "Error 101 (net::ERR_CONNECTION_RESET): Unknown error." I assume there are developers out there who have completed a FirstData web service integration. What am I doing wrong? I've also tried connecting via cURL:

        C:\curl-7.19.7-ssl-sspi-zlib-static-bin-w32>curl --cert C:\FDGGWS\WSXXXXXXXXXX._.1.pem --key C:\FDGGWS\WSXXXXXXXXXX._.1.key --insecure https://www.staging.linkpointcentral.com/fdggwsapi/order.wsdl
        Enter PEM pass phrase:
        curl: (52) SSL read: error:00000000:lib(0):func(0):reason(0), errno 10054

    I know I'm entering the correct key password, because when I enter a fake one I get:

        curl: (58) unable to set private key file: 'C:\FDGGWS\WSXXXXXXXXXX._.1.key' type PEM

    Read the article

  • Flash 10 (AS3): Pass Parameters to fscommand (from Projector)?

    - by yar
    Here is my shell script (fscommand/blah.sh):

        open -a /Applications/TextMate.app/ $1

    and here is my ActionScript 3.0:

        flash.system.fscommand("exec", "blah.sh blah.txt");

    This does not work. If I remove the $1 from the shell script and the blah.txt from the call, it works fine. How can I pass parameters to the shell script? (Yes, the shell script works fine with the parameter when called from the command line.) Note: this is on OS X, but I need it to work on Windows as well. Edit: "Doesn't work" also means no trace, no error :)

    Read the article

  • SSRS & asp.net - passing parameters from .net to ssrs in report viewer

    - by Ricardo Deano
    Hello all. I am about to embark on using a report viewer in my .NET page. I have a page that searches for a category; on a button click, the chosen category is passed into the parameter of the report viewer. Now, given that I am a newbie to both SSRS and .NET, I'd just like a bit of advice on how to tackle this. Should I build the report in SSRS first and include the parameters in it, or can I build the report without the parameters specified and then set them programmatically in the code-behind? Basically, I know what I would like to do, but I'm not sure of the best approach to take. If anyone can offer advice, I would be most grateful.

    Read the article
