Search Results

Search found 9370 results on 375 pages for 'wcf rest contrib'.


  • RESTful API: How to model 'request new password'?

    - by Jan P.
    I am designing a RESTful API for a booking application and was quite happy to see I could map all details of the application to the four HTTP methods:
    /users - GET, POST
    /users/({id}|myself) - GET, POST, PUT, DELETE
    /users/({id}|myself)/bookings - GET, POST
    /users/({id}|myself)/bookings/{id} - GET, POST, PUT, DELETE
    Example: updating my own user uses a PUT to /users/myself. But now I have found that one thing is missing: the possibility to request a new password if I have forgotten my old one. Any idea how I could add this?
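    One option (a sketch only, with hypothetical names, shown with WCF WebHttp attributes purely for illustration): treat the reset itself as a resource, so "request a new password" becomes a POST that creates a new password-reset belonging to the user, instead of a verb bolted onto /users.

        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public interface IPasswordResets
        {
            // POST /users/{userId}/password-resets
            // (the body could carry the e-mail address for the anonymous "I forgot my password" case)
            [OperationContract]
            [WebInvoke(Method = "POST", UriTemplate = "/users/{userId}/password-resets")]
            void RequestReset(string userId);
        }

    The server then mails out a token, and completing the reset becomes an update of the created password-reset resource.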

    Read the article

  • What are the Rails best practices for javascript templates in restful/resourceful controllers?

    - by numbers1311407
    First, two common (basic) approaches:
    # returning from some FoosController method
    respond_to do |format|
      # 1. render the javascript directly
      format.js { render :json => @foo.to_json }
      # 2. render the default template, say update.js.erb
      format.js { render }
    end
    # in update.js.erb
    $('#foo').html("<%= escape_javascript(render(@foo)) %>")
    These are obviously simple cases, but I wanted to illustrate what I'm talking about. I believe these are also the cases expected by the default responder in Rails 3 (either the action-named default template or calling to_#{format} on the resource). The Issues: With 1, you have total flexibility on the view side with no worries about the template, but you have to manipulate the DOM directly via JavaScript. You lose access to helpers, partials, etc. With 2, you have partials and helpers at your disposal, but you're tied to the one template (by default at least). All your views that make JS calls to FoosController use the same template, which isn't exactly flexible. Three other approaches (none really satisfactory):
    1.) Escaping the partials/helpers I need into JavaScript beforehand, then inserting them into the page afterwards, using string replacement to tailor them to the results returned (subbing in name, id, etc.).
    2.) Putting view logic in the templates. For example, looking for a particular DOM element and doing one thing if it exists, another if it does not.
    3.) Putting logic in the controller to render different templates. For example, in a polymorphic belongs_to where update might be called for either comments/foo or posts/foo, rendering comments/foos/update.js.erb versus posts/foos/update.js.erb.
    I've used all of these (and probably others I'm not thinking of), often in the same app, which leads to confusing code. Are there best practices for this sort of thing? It seems like a common enough use case that you'd want to call controller actions via Ajax from different views and expect different things to happen (without having to do tedious things like escaping and string-replacing partials and helpers client-side). Any thoughts?

    Read the article

  • More HTTP verbs with AS3?

    - by tedw4rd
    I'm developing a social game in Flash with a team of developers. Our server-side guy has developed a really slick RESTful API for the Flash client to talk to. A lot of the client-server interactions involve adding and removing objects from a persistent world, so the API makes extensive use of the PUT and DELETE verbs. The problem is, the URLRequest object in AS3 only supports the GET and POST verbs. We're on a strict schedule, and we'd really rather not have to rewrite the whole API to just use GET and POST. Has anyone come up with a good way to get Flash to send other verbs?

    Read the article

  • How to enforce that HTTP client uses conditional requests for updates?

    - by Day
    In a (proper RMM level 3) RESTful HTTP API, I want to enforce the fact that clients should make conditional requests when updating resources, in order to avoid the lost update problem. What would be an appropriate response to return to clients that incorrectly attempt unconditional PUT requests? I note that the (abandoned?) mod_atom returns a 405 Method Not Allowed with an Allow header set to GET, HEAD when an unconditional update is attempted. This seems slightly misleading - to me this implies that PUT is never a valid method to attempt on the resource. Perhaps the response just needs to have an entity body explaining that If-Match or If-Unmodified-Since must be used to make the PUT request conditional, in which case it would be allowed? Or perhaps a 400 Bad Request with a suitable explanation in the entity body would be a better solution? But again, this doesn't feel quite right, because it's using a 400 response for a violation of application-specific semantics when RFC 2616 says (my emphasis): "The request could not be understood by the server due to malformed syntax." But then again, I think that using 400 Bad Request for application-specific semantics is becoming a widely accepted pragmatic solution (citation needed!), and I'm just being overly pedantic.
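    For what it's worth, a later RFC (6585) added 428 Precondition Required for exactly this case. A minimal sketch of rejecting an unconditional PUT (WCF WebHttp flavour, all type and member names hypothetical):

        using System.Net;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public class ResourceService
        {
            [OperationContract]
            [WebInvoke(Method = "PUT", UriTemplate = "/resources/{id}")]
            public void UpdateResource(string id, ResourceRepresentation body)
            {
                var request = WebOperationContext.Current.IncomingRequest;
                bool conditional = request.Headers[HttpRequestHeader.IfMatch] != null
                                || request.Headers[HttpRequestHeader.IfUnmodifiedSince] != null;
                if (!conditional)
                {
                    var response = WebOperationContext.Current.OutgoingResponse;
                    response.StatusCode = (HttpStatusCode)428;   // Precondition Required (RFC 6585)
                    response.StatusDescription = "Precondition Required";
                    return;   // the entity body should explain that If-Match / If-Unmodified-Since is required
                }
                // ... compare the ETag and apply the update ...
            }
        }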

    Read the article

  • Elegant methods for caching search results from a RESTful service?

    - by Paul
    I have a RESTful web service which I access from the browser using JavaScript. As an example, say that this web service returns a list of all the Message resources assigned to me when I send a GET request to /messages/me. For performance reasons, I'd like to cache this response so that I don't have to re-fetch it every time I visit my Manage Messages web page. The cached response would expire after 5 minutes. If a Message resource is created "behind my back", say by the system admin, it's possible that I won't know about it for up to 5 minutes, until the cached search response expires and is re-fetched. This is acceptable, because it creates no confusion for me. However if I create a new Message resource which I know should be part of the search response, it becomes confusing when it doesn't appear on my Manage Messages page immediately. In general, when I knowingly create/delete/update a resource that invalidates a cached search response, I need that cached response to be expired/flushed immediately. The core problem which I can't figure out: I see no simple way of connecting the task of creating/deleting/updating a resource with the task of expiring the appropriate cached responses. In this example it seems simple, I could manually expire the cached search response whenever I create/delete/update a(ny) Message resource. But in a more complex system, keeping track of which search responses to expire under what circumstances will get clumsy quickly. If someone could suggest a simple solution or some clarifying thoughts, I'd appreciate it.

    Read the article

  • MVC2 JSON action: if I want to be RESTful, should I allow GET, POST, or both?

    - by Yads
    The project I'm currently working on has a whole bunch of JSON actions in order to populate cascading dropdowns via Ajax calls. Since they're technically SELECT queries and we're trying to be RESTful, we've been marking these actions with the HttpGet attribute. However, by default JsonResult does not allow returning results via a GET, so we've had to explicitly call Json(data, JsonRequestBehavior.AllowGet). What I'm wondering is: is this bad practice? Should we only be allowing POST requests to our JSON actions? If it makes a difference, this is an enterprise application that requires a log-in to a particular environment before it can be accessed.
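    GET is blocked by default mainly to mitigate JSON hijacking of array responses in older browsers; for non-sensitive lookup data behind authentication, opting in is generally considered acceptable. A minimal sketch (ASP.NET MVC 2, names hypothetical):

        using System.Web.Mvc;

        public class LookupController : Controller
        {
            private readonly ILookupService _lookupService;   // hypothetical injected lookup service

            [HttpGet]
            public JsonResult States(int countryId)
            {
                var states = _lookupService.GetStatesFor(countryId);   // hypothetical call
                return Json(states, JsonRequestBehavior.AllowGet);     // explicit opt-in to GET
            }
        }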

    Read the article

  • How did I break :method => :delete?

    - by Tallboy
    I refactored my entire application and gave it a whole new design. The one thing that seems to be broken is that all the original link_to calls I had with :method => :delete are now getting sent as GET requests. The only thing I can remember doing that might have caused it is deleting jquery-rails from the Gemfile (I'm just loading jQuery from Google's CDN instead). Does anyone have any other ideas about what I could have deleted?

    Read the article

  • RESTful API: How to model JSON representation?

    - by Jan P.
    I am designing a RESTful API for a booking application. You can request a list of accommodations, and that's where I don't really know how to design the JSON representation. This is my XML representation:
    <?xml version="1.0" encoding="utf-8"?>
    <accommodations>
      <accommodation>
        <name>...</name>
        <category>couch</category>
      </accommodation>
      <accommodation>
        <name>...</name>
        <category>room</category>
      </accommodation>
    </accommodations>
    My first try at converting this to JSON resulted in this output (1):
    { "0": { "name": "...", "category": "couch" }, "1": { "name": "...", "category": "room" } }
    But as I looked at how other APIs do it, I found something looking more like this (2):
    [ { "name": "...", "category": "couch" }, { "name": "...", "category": "room" } ]
    I know version 1 is an object and version 2 an array, but which one is better in this case?
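    For what it's worth, version 2 (a JSON array) is the conventional shape for a homogeneous collection, and it is also what a serializer typically produces from a list without any extra work. A minimal sketch (hypothetical types, WCF's DataContractJsonSerializer):

        using System.Collections.Generic;
        using System.IO;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Json;
        using System.Text;

        [DataContract]
        public class Accommodation
        {
            [DataMember(Name = "name")]     public string Name { get; set; }
            [DataMember(Name = "category")] public string Category { get; set; }
        }

        public static class AccommodationJson
        {
            public static string ToJson(List<Accommodation> accommodations)
            {
                var serializer = new DataContractJsonSerializer(typeof(List<Accommodation>));
                using (var stream = new MemoryStream())
                {
                    serializer.WriteObject(stream, accommodations);
                    // e.g. [{"category":"couch","name":"..."},{"category":"room","name":"..."}]
                    return Encoding.UTF8.GetString(stream.ToArray());
                }
            }
        }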

    Read the article

  • RESTful web service, PUTting an unnamed resource?

    - by James L
    I have a back-end service that creates unique identifiers for resources. The general idea is that resources are saved and versioned, so you can perform:
    GET http://service/sales/targets/7818181919/latest
    or
    GET http://service/sales/targets/7818181919/4
    for version 4, and so on. My question is about the most correct way to upload these resources in the first place. How about:
    PUT http://service/sales/targets/
    returning 303 See Other /service/sales/targets/
    It seems a little wrong, as you should PUT and GET from exactly the same place using a resource-oriented interface, but I can't think of a better option. Any ideas?
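    A common alternative, sketched below (WCF WebHttp flavour, hypothetical names): when the server mints the identifier, POST to the collection and answer 201 Created with a Location header pointing at the new, versioned resource, keeping PUT for URIs the client already knows.

        using System.Net;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public class SalesTargetService
        {
            private readonly ITargetStore _store;   // hypothetical persistence dependency

            [OperationContract]
            [WebInvoke(Method = "POST", UriTemplate = "/sales/targets/")]
            public void CreateTarget(SalesTarget target)
            {
                string id = _store.Save(target);                     // hypothetical call returning the new id
                var response = WebOperationContext.Current.OutgoingResponse;
                response.StatusCode = HttpStatusCode.Created;        // 201
                response.Headers["Location"] = "http://service/sales/targets/" + id + "/1";   // first version
            }
        }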

    Read the article

  • Should an edit of a comment be sent through POST or PUT?

    - by the_drow
    I have the following URI: Posts/{postId}/Comments/{commentId}. I would like to enable users to edit a comment through my API; should the edit be done with POST or PUT? On one hand POST updates the contents of a resource, which makes sense, but on the other hand PUT replaces it with a new one. So if I understand correctly, with POST I need to send only what needs to be updated, and with PUT I send the whole resource. Usually in edit forms the whole resource is loaded anyway, so what's the point of using POST? If I take one approach or the other, what are the differences?

    Read the article

  • Formulate POST request in curl

    - by user1867256
    I'm using curl to send a POST request to a web service at http://localhost:2325/Service. How can I deserialize the body of the POST request into a variable that I can then access within my POST method? Can someone give me an example? This is my method:
    [WebInvoke(RequestFormat = WebMessageFormat.Json, UriTemplate = "/user", Method = "POST")]
    public void Create(User us)
    The User class contains user_id and user_name. Can anyone please help? All I need is an example of how to formulate a POST request in curl. Thanks
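    A sketch of both halves (contract shape taken from the question, everything else assumed): with a [DataContract] for User and RequestFormat = Json, WCF deserializes the JSON body into the parameter for you, provided the request carries a JSON Content-Type.

        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [DataContract]
        public class User
        {
            [DataMember(Name = "user_id")]   public int UserId { get; set; }
            [DataMember(Name = "user_name")] public string UserName { get; set; }
        }

        [ServiceContract]
        public class UserService
        {
            [OperationContract]
            [WebInvoke(RequestFormat = WebMessageFormat.Json, UriTemplate = "/user", Method = "POST")]
            public void Create(User us)
            {
                // us.UserId and us.UserName are populated from the JSON body here
            }
        }

        // matching curl call (host, port and path assumed from the question):
        //   curl -H "Content-Type: application/json"
        //        -d "{\"user_id\":1,\"user_name\":\"jane\"}"
        //        http://localhost:2325/Service/user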

    Read the article

  • RESTful enums: string or ID?

    - by GazTheDestroyer
    I have a RESTful service that exposes enums. Should I expose them as localised strings, or plain integers? My leaning is toward integers for easy conversion at the service end, but in that case the client needs to grab a list of localised strings from somewhere in order to know what the enums mean. Am I just creating extra steps for nothing? There seems to be little information I can find about what is commonly done in RESTful APIs. EDIT: OK, let's say I'm writing a website that stores information about people's pets. I could have an AnimalType enum: 0 = Dog, 1 = Cat, 2 = Rabbit, etc. When people grab a particular pet resource, say /pets/1, I can either provide a meaningful localised string for the animal type, or just provide the ID and force them to do another lookup via a /pets/types resource. Or should I provide both?
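    One middle ground, sketched below with WCF data contract attributes (hypothetical names): expose a stable, non-localised string token for the enum and keep localisation purely a client/presentation concern, so the payload stays self-describing without a second lookup. How the token appears on the wire still depends on the serializer in use.

        using System.Runtime.Serialization;

        [DataContract]
        public enum AnimalType
        {
            [EnumMember(Value = "dog")]    Dog,
            [EnumMember(Value = "cat")]    Cat,
            [EnumMember(Value = "rabbit")] Rabbit
        }

        [DataContract]
        public class Pet
        {
            [DataMember(Name = "name")]       public string Name { get; set; }
            [DataMember(Name = "animalType")] public AnimalType AnimalType { get; set; }
        }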

    Read the article

  • RESTful client on CodeIgniter issue

    - by user1852837
    This is weird. I don't know what the problem is with my website. It works on my local server but not on the live server. The login page works on the first sign-in, but after logging out and logging in again the message says "invalid username and password", even though it worked on the first attempt. While debugging I found that http://xxxxx.com/api/authentication/sign is not found - it displays a 404 Page Not Found. Sometimes you can log in and sometimes not; locally it always works. I contacted the web server admin and asked about the status of sessions on the server and how it executes its web requests (sockets, file_get_contents, curl?). They said no problems could be reproduced with server sessions and that PHP cURL works fine. I know it's weird, but can somebody here help me figure out what the problem behind it is?

    Read the article

  • How to solve "The ChannelDispatcher is unable to open its IChannelListener" error?

    - by kyrisu
    Hi, I'm trying to communicate between a WCF service hosted in a Windows Service and my service GUI. The problem is that when I try to execute an OperationContract method I get "The ChannelDispatcher at 'net.tcp://localhost:7771/MyService' with contract(s) '"IContract"' is unable to open its IChannelListener." My app.config looks like this:
    <configuration>
      <system.serviceModel>
        <bindings>
          <netTcpBinding>
            <binding name="netTcpBinding">
              <security>
                <transport protectionLevel="EncryptAndSign" />
              </security>
            </binding>
          </netTcpBinding>
        </bindings>
        <behaviors>
          <serviceBehaviors>
            <behavior name="MyServiceBehavior">
              <serviceMetadata httpGetEnabled="true" httpGetUrl="http://localhost:7772/MyService" />
              <serviceDebug includeExceptionDetailInFaults="true" />
            </behavior>
          </serviceBehaviors>
        </behaviors>
        <services>
          <service behaviorConfiguration="MyServiceBehavior" name="MyService.Service">
            <endpoint address="net.tcp://localhost:7771/MyService" binding="netTcpBinding" bindingConfiguration="netTcpBinding" name="netTcp" contract="MyService.IContract" />
          </service>
        </services>
      </system.serviceModel>
    Port 7771 is listening (checked using netstat) and svcutil is able to generate configs for me. Any suggestions would be appreciated. Stack trace from the exception:
    Server stack trace:
    at System.ServiceModel.Channels.ServiceChannel.ThrowIfFaultUnderstood(Message reply, MessageFault fault, String action, MessageVersion version, FaultConverter faultConverter)
    at System.ServiceModel.Channels.ServiceChannel.HandleReply(ProxyOperationRuntime operation, ProxyRpc& rpc)
    at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
    at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs)
    at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
    at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
    There's one inner exception (not under Exception.InnerException but under Exception.Detail.InnerException - the ToString() method doesn't show it): "A registration already exists for URI 'net.tcp://localhost:7771/MyService'." But my service specifies this URI only in the app.config file, nowhere else. In the entire solution this URI appears once on the server and once on the client.

    Read the article

  • How to serialize parameters in Web Service method

    - by Georgi
    Hi, I have this problem. I have a WCF .NET C# web service with this method:
    public interface IMyService
    {
        // TODO: Add your service operations here
        [OperationContract]
        ListOfRequests GetListOfRequests(string par1, string par2, string par3, DateTime par4, DateTime par5, string par6, string par7);
    }
    public class MyService : IMyService
    {
        public ListOfRequests GetListOfRequests(string par1, string par2, string par3, DateTime par4, DateTime par5, string par6, string par7)
        {
            // .... web method code here;
        }
    }
    The problem is that when I generate the web service, the WSDL schema returns this:
    <xs:element name="GetListOfRequests">
      <xs:complexType>
        <xs:sequence>
          <xs:element minOccurs="0" name="par1" nillable="true" type="xs:string" />
          <xs:element minOccurs="0" name="par2" nillable="true" type="xs:string" />
          <xs:element minOccurs="0" name="par3" nillable="true" type="xs:string" />
          <xs:element minOccurs="0" name="par4" type="xs:dateTime" />
          <xs:element minOccurs="0" name="par5" type="xs:dateTime" />
          <xs:element minOccurs="0" name="par6" nillable="true" type="xs:string" />
          <xs:element minOccurs="0" name="par7" nillable="true" type="xs:string" />
        </xs:sequence>
      </xs:complexType>
    </xs:element>
    and I want my parameters to be required and non-nullable, like this:
    <xs:element name="GetListOfRequests">
      <xs:complexType>
        <xs:sequence>
          <xs:element minOccurs="1" name="par1" nillable="false" type="xs:string" />
          <xs:element minOccurs="1" name="par2" nillable="false" type="xs:string" />
          <xs:element minOccurs="1" name="par3" nillable="false" type="xs:string" />
          <xs:element minOccurs="1" name="par4" type="xs:dateTime" />
          <xs:element minOccurs="1" name="par5" type="xs:dateTime" />
          <xs:element minOccurs="1" name="par6" nillable="false" type="xs:string" />
          <xs:element minOccurs="0" name="par7" nillable="true" type="xs:string" />
        </xs:sequence>
      </xs:complexType>
    </xs:element>
    How can I serialize the parameters to achieve this? Thank you in advance for your help. Regards, Georgi
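    A sketch of one approach (names hypothetical): wrap the parameters in a request data contract and mark the members IsRequired = true, which makes the data contract serializer emit minOccurs="1" in the generated schema. Whether a string also becomes nillable="false" depends on the serializer in use, so treat this as a starting point rather than a guarantee.

        using System;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class GetListOfRequestsRequest
        {
            [DataMember(IsRequired = true)] public string   Par1 { get; set; }
            [DataMember(IsRequired = true)] public string   Par2 { get; set; }
            [DataMember(IsRequired = true)] public string   Par3 { get; set; }
            [DataMember(IsRequired = true)] public DateTime Par4 { get; set; }
            [DataMember(IsRequired = true)] public DateTime Par5 { get; set; }
            [DataMember(IsRequired = true)] public string   Par6 { get; set; }
            [DataMember]                    public string   Par7 { get; set; }   // still optional
        }

        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            ListOfRequests GetListOfRequests(GetListOfRequestsRequest request);
        }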

    Read the article

  • Svcutil generating bad config with multiple endpoints

    - by vfilby
    I have a WCF service that has exposed a soap and an xml endpoint. When I use svcutil to generate the proxy code on the client side the generated configuration contains two endpoints which causes the client to fail. If I edit the web.config file and remove the second endpoint (with the custom binding) all works as expected. Is there a way I can get svcutil to generate a config that just works so that I don't need to hand edit the file everytime? Client-side error: An endpoint configuration section for contract 'MyNamespace.ITestService' could not be loaded because more than one endpoint configuration for that contract was found. Please indicate the preferred endpoint configuration section by name. Svcutil command: svcutil http://api.local/Test.svc /reference:bin\MyNamespace.Interface.dll /config:web.config /mergeConfig /out:"Service References\TestService.cs" /n:*,MyNamespace Generated client config: <system.serviceModel> <bindings> <basicHttpBinding> <binding name="BasicHttpBinding_ITestService" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="None"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> <customBinding> <binding name="CustomBinding_ITestService"> <textMessageEncoding maxReadPoolSize="64" maxWritePoolSize="16" messageVersion="Soap12" writeEncoding="utf-8"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> </textMessageEncoding> </binding> </customBinding> </bindings> <client> <endpoint address="http://api2.local/Test.svc/soap" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_ITestService" contract="MyNamespace.ITestService" name="BasicHttpBinding_ITestService" /> <endpoint binding="customBinding" bindingConfiguration="CustomBinding_ITestService" contract="MyNamespace.ITestService" name="CustomBinding_ITestService" /> </client> </system.serviceModel>
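    A workaround that avoids hand-editing the config (the proxy class name is hypothetical; the endpoint name is the one from the generated client config above): pick the endpoint configuration explicitly by name when constructing the client.

        // svcutil-generated proxies expose a constructor taking the endpoint configuration name:
        var client = new TestServiceClient("BasicHttpBinding_ITestService");

        // or, when using the contract interface directly:
        var factory = new ChannelFactory<MyNamespace.ITestService>("BasicHttpBinding_ITestService");
        var channel = factory.CreateChannel();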

    Read the article

  • KISS: Simple C# application which communicates with a RESTful web service.

    - by Workshop Alex
    Following the KISS principle, I suddenly realised the following: in .NET, you can use the Entity Framework to wrap around a database. This model can be exposed as a web service through WCF. This web service would have a very standardized definition. A client application could be created which could consume any such RESTful web service. I don't want to re-invent the wheel, and it wouldn't surprise me if someone has already done this, so my question is simple: has anyone already created a simple (desktop, not web) client application that can consume a RESTful service that's based on the Entity Framework and which will allow the user to read and write data directly to this service? Otherwise, I'll just have to "invent" this myself. :-)
    Problem is, the database layer and RESTful service are already finished. The RESTful service will only stay in the project during its development phase, since we can use the database-layer assembly directly from the web applications that are built around it. When the web application is deployed, the RESTful services are just kept out of the deployment. But the database has a lot of data to manage, across nearly 50 tables. When developing against a local database, we have direct access to the database, so I wouldn't need this tool for that. When it's deployed, the web application would be the only way to access the data, so I could not use this tool. But we also have a test phase where the database is stored on another system outside the local domain, and this database is not available to developers. Only administrators have direct access to this database, making tests a bit more complex. However, through the RESTful service, I can still access the data directly. Thus, when some test goes wrong, I can repair the data through this connection or just create a copy of the data for tests on my local system. There's plenty of other functionality, and it's even possible to just open the URL to a table service straight in Excel or XMLSpy to see the contents. But when I want to write something back, I have to write special code to do just that. A generic tool that would allow me to access the data and modify it would be easier. Since it's a generic setup around ADO.NET Data Services, this should be reasonably easy too. Thus, I can do it, but I hoped someone else had already done something similar. It appears that no such tool has been made yet...
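    For the data-fix scenario described, a small hand-rolled tool over the ADO.NET Data Services client library is often enough; a minimal sketch (service URI, entity set and entity type are all hypothetical, with the entity class assumed to be generated via DataSvcUtil / Add Service Reference):

        using System;
        using System.Data.Services.Client;
        using System.Linq;

        class DataFixTool
        {
            static void Main()
            {
                var ctx = new DataServiceContext(new Uri("http://testserver/MyData.svc"));

                // read: pull a page of entities from one of the exposed sets
                var customers = ctx.CreateQuery<Customer>("Customers").Take(50).ToList();

                // modify and write back
                var c = customers.First();
                c.Name = "Corrected during test run";
                ctx.UpdateObject(c);
                ctx.SaveChanges();
            }
        }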

    Read the article

  • XML reader creation problem

    - by diver-d
    Hi there, I am having some troubles trying to allow my WCF service to process a larger string. I keep getting the following error. The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader My App.config looks like this <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.web> <compilation debug="true" /> </system.web> <!-- When deploying the service library project, the content of the config file must be added to the host's app.config file. System.Configuration does not support config files for libraries. --> <system.serviceModel> <bindings> <wsHttpBinding> <binding name="wsHttpBindingSettings" maxReceivedMessageSize="2147483647"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> </binding> </wsHttpBinding> </bindings> <services> <service name="WCFLongString.Service1" > <endpoint address="" binding="wsHttpBinding" contract="WCFLongString.IService1" bindingConfiguration="wsHttpBindingSettings"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> <host> <baseAddresses> <add baseAddress="http://localhost:8732/Design_Time_Addresses/WCFLongString/Service1/" /> </baseAddresses> </host> </service> </services> <behaviors> <serviceBehaviors> <behavior> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="True"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="False" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> Am I missing something here?
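    The service-side config above looks reasonable; the same quotas usually also have to be raised on the client binding, since each side enforces its reader quotas independently. A minimal programmatic sketch of the client side (contract and address taken from the question, otherwise an assumption):

        using System.ServiceModel;

        static class LongStringClient
        {
            public static WCFLongString.IService1 Create()
            {
                var binding = new WSHttpBinding();
                binding.MaxReceivedMessageSize = int.MaxValue;
                binding.ReaderQuotas.MaxStringContentLength = int.MaxValue;
                binding.ReaderQuotas.MaxArrayLength = int.MaxValue;

                var factory = new ChannelFactory<WCFLongString.IService1>(binding,
                    new EndpointAddress("http://localhost:8732/Design_Time_Addresses/WCFLongString/Service1/"));
                return factory.CreateChannel();
            }
        }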

    Read the article

  • .NET unit test runner outputting FaultException.Detail

    - by Adam
    Hello, I am running some unit tests on a WCF service. The service is configured to include exception details in the fault response (with the following in my service configuration file). <serviceDebug includeExceptionDetailInFaults="true" /> If a test causes an unhandled exception on the server the fault is received by the client with a fully populated server stack trace. I can see this by calling the exception's ToString() method. The problem is that this doesn't seem to be output by any of the test runners that I have tried (xUnit, Gallio, MSTest). They appear to just output the Message and the StackTrace properties of the exception. To illustrate what I mean, the following unit test run by MSTest would output three sections: Error Message Error Stack Trace Standard Console Output (contains the information I would like, e.g. "Fault Detail is equal to An ExceptionDetail, likely created by IncludeExceptionDetailInFaults=true, whose value is: ..." try { service.CallMethodWhichCausesException(); } catch (Exception ex) { Console.WriteLine(ex); // this outputs the information I would like throw; } Having this information will make the initial phase of testing and deployment a lot less painful. I know I can just wrap each unit test in a generic exception handler and write the exception to the console and rethrow (as above) within all my unit tests but that seems a very long-winded way of achieving this (and would look pretty awful). Does anyone know if there's any way to get this information included for free whenever an unhandled exception occurs? Is there a setting that I am missing? Is my service configuration lacking in proper fault handling? Perhaps I could write some kind of plug-in / adapter for some unit testing framework? Perhaps theres a different unit testing framework which I should be using instead! My actual set-up is xUnit unit tests executed via Gallio for the development environment, but I do have a separate suite of "smoke tests" written which I would like to be able to have our engineers run via the xUnit GUI test runner (or Gallio or whatever) to simplify the final deployment. Thanks. Adam
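    The wrap-and-rethrow approach mentioned at the end can at least be centralised in one small helper instead of being repeated inside every test; a sketch (helper name hypothetical, framework-agnostic):

        using System;
        using System.ServiceModel;

        public static class FaultReporting
        {
            public static void Call(Action serviceCall)
            {
                try
                {
                    serviceCall();
                }
                catch (FaultException<ExceptionDetail> fault)
                {
                    Console.WriteLine(fault.Detail);   // server-side exception detail, including the stack trace
                    throw;
                }
            }
        }

        // in a test:
        //   FaultReporting.Call(() => service.CallMethodWhichCausesException());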

    Read the article

  • How to maximise the largest contiguous block of memory in the Large Object Heap

    - by Unsliced
    The situation is that I am making a WCF call to a remote server which returns an XML document as a string. Most of the time this return value is a few K, sometimes a few dozen K, very occasionally a few hundred K, but very rarely it could be several megabytes (the first problem is that there is no way for me to know). It's these rare occasions that are causing grief. I get a stack trace that starts:
    System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
    at System.Xml.BufferBuilder.AddBuffer()
    at System.Xml.BufferBuilder.AppendHelper(Char* pSource, Int32 count)
    at System.Xml.BufferBuilder.Append(Char[] value, Int32 start, Int32 count)
    at System.Xml.XmlTextReaderImpl.ParseText()
    at System.Xml.XmlTextReaderImpl.ParseElementContent()
    at System.Xml.XmlTextReaderImpl.Read()
    at System.Xml.XmlTextReader.Read()
    at System.Xml.XmlReader.ReadElementString()
    at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderMDRQuery.Read2_getMarketDataResponse()
    at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer2.Deserialize(XmlSerializationReader reader)
    at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
    at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle)
    at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
    at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
    I've read around and it is because the Large Object Heap is just getting too fragmented, so even preceding the call with a quick check of StringBuilder.EnsureCapacity just causes the OutOfMemoryException to be thrown earlier (and because I'm guessing at what's needed, it might not actually need that much, so my check is causing more problems than it is solving). Some opinions are that there's not much I can do about it. Some of the questions I've asked myself:
    Use less memory - have you checked for leaks? Yes. The memory usage goes up and down, but there's no fundamental growth that guarantees this will happen. Some of the times it fails, it succeeded at that stage previously.
    Transfer smaller amounts? Not an option; this is a third-party web service over which I have no control (or at least it would take a long time to resolve, and in the meantime I still have a problem).
    Can you do something to the LOH to make it less likely to fail? ... now this is the most fruitful course. It's a 32-bit process (it has to be, for various political, technical and boring reasons) but there's normally hundreds of meg free (multiples of the largest amount for which we've seen failures).
    Can we monitor the LOH? Using perfmon I can track the size of the heaps, but I don't think there's a way to monitor the largest available contiguous block of memory.
    The question is: any advice or suggestions for things to try?

    Read the article

  • Using jQuery and OData to Insert a Database Record

    - by Stephen Walther
    In my previous blog entry, I explored two ways of inserting a database record using jQuery. We added a new Movie to the Movie database table by using a generic handler and by using a WCF service. In this blog entry, I want to take a brief look at how you can insert a database record using OData. Introduction to OData The Open Data Protocol (OData) was developed by Microsoft to be an open standard for communicating data across the Internet. Because the protocol is compatible with standards such as REST and JSON, the protocol is particularly well suited for Ajax. OData has undergone several name changes. It was previously referred to as Astoria and ADO.NET Data Services. OData is used by Sharepoint Server 2010, Azure Storage Services, Excel 2010, SQL Server 2008, and project code name “Dallas.” Because OData is being adopted as the public interface of so many important Microsoft technologies, it is a good protocol to learn. You can learn more about OData by visiting the following websites: http://www.odata.org http://msdn.microsoft.com/en-us/data/bb931106.aspx When using the .NET framework, you can easily expose database data through the OData protocol by creating a WCF Data Service. In this blog entry, I will create a WCF Data Service that exposes the Movie database table. Create the Database and Data Model The MoviesDB database is a simple database that contains the following Movies table: You need to create a data model to represent the MoviesDB database. In this blog entry, I use the ADO.NET Entity Framework to create my data model. However, WCF Data Services and OData are not tied to any particular OR/M framework such as the ADO.NET Entity Framework. For details on creating the Entity Framework data model for the MoviesDB database, see the previous blog entry. Create a WCF Data Service You create a new WCF Service by selecting the menu option Project, Add New Item and selecting the WCF Data Service item template (see Figure 1). Name the new WCF Data Service MovieService.svc. Figure 1 – Adding a WCF Data Service Listing 1 contains the default code that you get when you create a new WCF Data Service. There are two things that you need to modify. Listing 1 – New WCF Data Service File using System; using System.Collections.Generic; using System.Data.Services; using System.Data.Services.Common; using System.Linq; using System.ServiceModel.Web; using System.Web; namespace WebApplication1 { public class MovieService : DataService< /* TODO: put your data source class name here */ > { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc. // Examples: // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead); // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } First, you need to replace the comment /* TODO: put your data source class name here */ with a class that represents the data that you want to expose from the service. In our case, we need to replace the comment with a reference to the MoviesDBEntities class generated by the Entity Framework. Next, you need to configure the security for the WCF Data Service. By default, you cannot query or modify the movie data. We need to update the Entity Set Access Rule to enable us to insert a new database record. 
The updated MovieService.svc is contained in Listing 2: Listing 2 – MovieService.svc using System.Data.Services; using System.Data.Services.Common; namespace WebApplication1 { public class MovieService : DataService<MoviesDBEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("Movies", EntitySetRights.AllWrite); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } That’s all we have to do. We can now insert a new Movie into the Movies database table by posting a new Movie to the following URL: /MovieService.svc/Movies The request must be a POST request. The Movie must be represented as JSON. Using jQuery with OData The HTML page in Listing 3 illustrates how you can use jQuery to insert a new Movie into the Movies database table using the OData protocol. Listing 3 – Default.htm <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>jQuery OData Insert</title> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> </head> <body> <form> <label>Title:</label> <input id="title" /> <br /> <label>Director:</label> <input id="director" /> </form> <button id="btnAdd">Add Movie</button> <script type="text/javascript"> $("#btnAdd").click(function () { // Convert the form into an object var data = { Title: $("#title").val(), Director: $("#director").val() }; // JSONify the data var data = JSON.stringify(data); // Post it $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "MovieService.svc/Movies", data: data, dataType: "json", success: insertCallback }); }); function insertCallback(result) { // unwrap result var newMovie = result["d"]; // Show primary key alert("Movie added with primary key " + newMovie.Id); } </script> </body> </html> jQuery does not include a JSON serializer. Therefore, we need to include the JSON2 library to serialize the new Movie that we wish to create. The Movie is serialized by calling the JSON.stringify() method: var data = JSON.stringify(data); You can download the JSON2 library from the following website: http://www.json.org/js.html The jQuery ajax() method is called to insert the new Movie. Notice that both the contentType and dataType are set to use JSON. The jQuery ajax() method is used to perform a POST operation against the URL MovieService.svc/Movies. Because the POST payload contains a JSON representation of a new Movie, a new Movie is added to the database table of Movies. When the POST completes successfully, the insertCallback() method is called. The new Movie is passed to this method. The method simply displays the primary key of the new Movie: Summary The OData protocol (and its enabling technology named WCF Data Services) works very nicely with Ajax. By creating a WCF Data Service, you can quickly expose your database data to an Ajax application by taking advantage of open standards such as REST, JSON, and OData. In the next blog entry, I want to take a closer look at how the OData protocol supports different methods of querying data.

    Read the article

  • maxItemsInObjectGraph ignored...

    - by Palantir
    Hi! I have a problem with a WCF service, which tries to serialize too much data. From the trace I get an error which says that the maximum number of elements that can be serialized or unserialized is '65536', try to increment the MaxItemsInObjectGraph quota. So I went and modified this value, but it is just ignored (the error is the same, with the same number). All this is server-side. I am calling the service via wget for the moment. My web config is like this: <system.serviceModel> <behaviors> <serviceBehaviors> <behavior name="AlgoMap.Web.MapService.MapServiceBehavior"> <dataContractSerializer maxItemsInObjectGraph="131072" /> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> <bindings> <customBinding> <binding name="customBinding0" closeTimeout="00:02:00" openTimeout="00:02:00" receiveTimeout="00:02:00"> <binaryMessageEncoding> <readerQuotas maxDepth="64" maxStringContentLength="16384" maxArrayLength="16384" maxBytesPerRead="16384" maxNameTableCharCount="16384" /> </binaryMessageEncoding> <httpTransport /> </binding> </customBinding> </bindings> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" /> <services> <service behaviorConfiguration="AlgoMap.Web.MapService.MapServiceBehavior" name="AlgoMap.Web.MapService.MapService"> <endpoint address="" binding="customBinding" bindingConfiguration="customBinding0" contract="AlgoMap.Web.MapService.MapService" /> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> </system.serviceModel> Version 2, not working either: <system.serviceModel> <behaviors> <endpointBehaviors> <behavior name="AlgoMap.Web.MapService.MapServiceEndpointBehavior"> <dataContractSerializer maxItemsInObjectGraph="131072" /> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="AlgoMap.Web.MapService.MapServiceBehavior"> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> <bindings> <customBinding> <binding name="customBinding0" closeTimeout="00:02:00" openTimeout="00:02:00" receiveTimeout="00:02:00"> <binaryMessageEncoding> <readerQuotas maxDepth="64" maxStringContentLength="16384" maxArrayLength="16384" maxBytesPerRead="16384" maxNameTableCharCount="16384" /> </binaryMessageEncoding> <httpTransport /> </binding> </customBinding> </bindings> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" /> <services> <service behaviorConfiguration="AlgoMap.Web.MapService.MapServiceBehavior" name="AlgoMap.Web.MapService.MapService"> <endpoint address="" binding="customBinding" bindingConfiguration="customBinding0" contract="AlgoMap.Web.MapService.MapService" behaviorConfiguration="AlgoMap.Web.MapService.MapServiceEndpointBehavior" /> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" behaviorConfiguration="AlgoMap.Web.MapService.MapServiceEndpointBehavior" /> </service> </services> </system.serviceModel> Can anyone help?? Thanks!!
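    The object-graph quota is enforced per endpoint on both the service and the client, so it is worth ruling out a configuration-name mismatch by setting it programmatically on the proxy; a minimal client-side sketch (contract interface and endpoint configuration name are hypothetical):

        using System.ServiceModel;
        using System.ServiceModel.Description;

        static class MapServiceClientFactory
        {
            public static IMapService Create()
            {
                var factory = new ChannelFactory<IMapService>("MapServiceEndpoint");   // hypothetical endpoint config name
                foreach (OperationDescription op in factory.Endpoint.Contract.Operations)
                {
                    var dcs = op.Behaviors.Find<DataContractSerializerOperationBehavior>();
                    if (dcs != null)
                        dcs.MaxItemsInObjectGraph = int.MaxValue;   // raise the quota on every operation
                }
                return factory.CreateChannel();
            }
        }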

    Read the article

  • Transport Security with Certificate Authentication

    - by Brian T
    I'm getting the following error when I access my webservice localhost/MyService/MyService.svc The SSL settings for the service 'SslRequireCert' does not match those of the IIS 'Ssl, SslNegotiateCert'. I've following the web.config examples as specified in http://msdn.microsoft.com/en-us/library/ms731074.aspx Here is my wcf server web.config: <?xml version="1.0" encoding="UTF-8"?> <configuration xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0"> <appSettings /> <system.web> <identity impersonate="false" /> <roleManager enabled="true" /> <authentication mode="Windows" /> <customErrors mode="Off" /> <webServices> <protocols> <add name="HttpGet" /> <add name="HttpPost" /> </protocols> </webServices> </system.web> <system.webServer> <directoryBrowse enabled="true" /> <validation validateIntegratedModeConfiguration="false" /> <security> <authorization> <remove users="*" roles="" verbs="" /> <add accessType="Allow" users="*" roles="" /> </authorization> </security> </system.webServer> <system.serviceModel> <services> <service name="AspNetSqlProviderService" behaviorConfiguration="MyServiceBehavior"> <endpoint binding="wsHttpBinding" contract="Interface1" bindingConfiguration="CertificateWithTransportWSHttpBinding" /> <endpoint binding="wsHttpBinding" contract="Interface2" bindingConfiguration="CertificateWithTransportWSHttpBinding" /> <endpoint address="mex" binding="wsHttpBinding" bindingConfiguration="CertificateWithTransportWSHttpBinding" name="Metadata_Exchange" contract="IMetadataExchange" /> </service> </services> <behaviors> <serviceBehaviors> <behavior name="MyServiceBehavior"> <serviceDebug includeExceptionDetailInFaults="True" /> <serviceMetadata /> <serviceCredentials> <clientCertificate> <authentication trustedStoreLocation="LocalMachine" revocationMode="Online"/> </clientCertificate> </serviceCredentials> </behavior> </serviceBehaviors> </behaviors> <bindings> <wsHttpBinding> <binding name="CertificateWithTransportWSHttpBinding"> <security mode="Transport"> <transport clientCredentialType="Certificate" /> </security> </binding> </wsHttpBinding> </bindings> </system.serviceModel> </configuration> I've configured IIS as follows: https binding added using self signed certificate Under SSL settings, require SSL and accept client certificates is checked The self signed certificate has been added to the Local Computer Trusted Root CA. I can browse and execute the .asmx service definition, but the .svc gives me the error described above.

    Read the article

  • Different Service behaviors per endpoint

    - by Preben Huybrechts
    The situation We are implementing different sort of security on some WCF service. ClientCertificate, UserName & Password and Anonymous. We have 2 ServiceBehaviorConfigurations, one for httpBinding and one for wsHttpBinding. (We have custom authorization policies for claim based security) As a requirement we need different endpoints for each service. 3 endpoints with httpBinding and 1 with wsHttpBinding. Example for one service: basicHttpBinding : Anonymous basicHttpBinding : UserNameAndPassword basicHttpBinding : BasicSsl wsHttpBinding : BasicSsl The Problem Part 1: We cannot specify the same service twice, once with the http service configuration and once with the wsHttp service configuration. Part 2: We cannot specify service behaviors on an endpoint. (Throws and exception, No endpoint behavior was found... Service behaviors cant be set to endpoint behaviours) The Config For part 1: <services> <service name="Namespace.MyService" behaviorConfiguration="securityBehavior"> <endpoint address="http://server:94/MyService.svc/Anonymous" contract="Namespace.IMyService" binding="basicHttpBinding" bindingConfiguration="Anonymous"> </endpoint> <endpoint address="http://server:94/MyService.svc/UserNameAndPassword" contract="Namespace.IMyService" binding="basicHttpBinding" bindingConfiguration="UserNameAndPassword"> </endpoint> <endpoint address="https://server/MyService.svc/BasicSsl" contract="Namespace.IMyService" binding="basicHttpBinding" bindingConfiguration="BasicSecured"> </endpoint> </service> <service name="Namespace.MyService" behaviorConfiguration="wsHttpCertificateBehavior"> <endpoint address="https://server/MyService.svc/ClientCert" contract="Namespace.IMyService" binding="wsHttpBinding" bindingConfiguration="ClientCert"/> </service> </services> Service Behavior configuration: <serviceBehaviors> <behavior name="securityBehavior"> <serviceAuthorization serviceAuthorizationManagerType="Namespace.AdamAuthorizationManager,Assembly"> <authorizationPolicies> <add policyType="Namespace.AdamAuthorizationManager,Assembly" /> </authorizationPolicies> </serviceAuthorization> </behavior> <behavior name="wsHttpCertificateBehavior"> <serviceMetadata httpGetEnabled="false" httpsGetEnabled="true"/> <serviceAuthorization serviceAuthorizationManagerType="Namespace.AdamAuthorizationManager,Assembly"> <authorizationPolicies> <add policyType="Namespace.AdamAuthorizationManager,Assembly" /> </authorizationPolicies> </serviceAuthorization> <serviceCredentials> <clientCertificate> <authentication certificateValidationMode="PeerOrChainTrust" revocationMode="NoCheck"/> </clientCertificate> <serviceCertificate findValue="CN=CertSubject"/> </serviceCredentials> </behavior> How can we specify a different service behaviour on the WsHttpBinding endpoint? Or how can we apply our authorization policy in a different way for wsHttpBinding then basicHttpBinding. We would use endpoint behavior but we can't specify our authorization policy on an endpoint behavior

    Read the article

  • WebLogic Server Weekly for March 26th, 2012: WLS 1211 Update, Java 7 Certification, Galleria, WebLogic for DBAs, REST and Enterprise Architecture, Singleton Services

    - by Steve Button
    WebLogic Server 12c Certified with Java 7 for Production Use WebLogic Server 12c (12.1.1) has been certified with JDK 7 for development usage since December and we have now completed JDK 7 certification for use with production systems. In doing so, we have updated the WebLogic Server 12c (12.1.1) distributions incorporating fixes associated with JDK 7 support as well as some bundled patches that address several issues that have been discovered since the initial release. These updated distributions are available for download from OTN and will be beneficial for all WebLogic Server 12c (12.1.1) users in general. What's New Release Notes Download Here! Updated Oracle WebLogic Server 12.1.1.0 distribution Never one to miss a trick, Markus Eisele was one of the first to notice the WebLogic Server 12c update and post a blog about it. Sources told me that as of Friday last week you have an updated version of WebLogic Server 12c on OTN. http://blog.eisele.net/2012/03/updated-oracle-weblogic-server-12110.html Using WebLogic Server 12c with Java 7 - Video To illustrate the use of Java 7 with WebLogic Server 12c, I put together a screen cam showing the creation of a domain using Java 7 and then build and deploy a simple web application that uses Java 7 syntax to show it working. Ireland OUG Presentation: WebLogic for DBAs Simon Haslam posted his slides from a presentation he gave Dublin on 21/3/12 at the OUG Ireland conference. In this presentation, he explains the core concepts and ideas behind WebLogic Server, walks through an installation and offers some tips and common gotcha's to avoid. Simon also covers some aspects of installing and use Enterprise Manager 12c. Note: I usually install the JVM and use the generic .jar installer rather than using an installer bundled with a JVM. http://www.slideshare.net/Veriton/weblogic-for-dbas-10h Slightly Retro: Jeff West on Enterprise Architecure and REST In this weeks flashback, we look at Jeff West's blog from early 2011 where he provides some thoughtful opinions on enterprise architecture and innovation, then jumps into his views on REST. After I progressed in my career and did more team-leading and architecture type roles I was ‘educated’ on what it meant to have Asynchronous and Long-Running processes as part of your Enterprise Application architecture. If I had a synchronous process then I needed a thread available to service the request and then provide the response. https://blogs.oracle.com/jeffwest/entry/weblogic_integration_wli_web_services_and_soap_and_rest_part_1 Starting Managed Servers without an Administration Server using Node Manager and WLST Blogger weblogic-tips shows how to start a managed server without going through the Administration Server, using the Node Manager and WLST. Connect WLST to a Node Manager by entering the nmConnect command. http://www.weblogic-tips.com/2012/02/18/starting-managed-servers-without-an-administration-server-using-node-manager-and-wlst/ Using WebLogic Server Singleton Services WebLogic Server has supported the notion of a Singleton Service for a number of releases, in which WebLogic Server will maintain a single instance of a configured singleton service on one managed server within a cluster. This blog demonstrates how the singleton service can be accessed and used from applications deployed on the cluster. http://buttso.blogspot.com.au/2012/03/weblogic-server-singleton-services.html

    Read the article
