Search Results

Search found 5793 results on 232 pages for 'requests'.

Page 30/232

  • How can I disable keep-alive on ASP.NET Web Service client requests?

    - by Matthew Brindley
    I have a few web servers behind an Amazon EC2 load balancer. I'm using TCP balancing on port 80 (rather than HTTP balancing). I have a client polling a Web Service (running on all web servers) for new items every few seconds. However, the client seems to stay connected to one server and polls that same server each time. I've tried using ServicePointManager to disable KeepAlive, but that didn't change anything. The outgoing connection still had its "connection: keep-alive" HTTP header, and the server kept the TCP connection open. I've also tried adding an override of GetWebRequest to the proxy class created by VS, which inherits from SoapHttpClientProtocol, but I still see the keep-alive header. If I kill the client's process and restart, it'll connect to a new server via the load balancer, but it'll continue polling that new server forever. Is there a way to force it to connect to a random server each time? I want the load from the one client to be spread across all of the web servers. The client is written in C# (as is the server) and uses a Web Reference (not a Service Reference), which points to the load balancer.
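    For reference, the GetWebRequest override described above is usually written like the sketch below (class name assumed, not a verified fix); downgrading ProtocolVersion to HTTP/1.0 is one extra lever worth trying, since HTTP/1.0 connections are non-persistent by default, which forces a new TCP connection per poll and lets the TCP balancer re-pick a server each time:

        using System;
        using System.Net;
        using System.Web.Services.Protocols;

        // Sketch, not a verified fix: a proxy base class that turns persistent
        // connections off. KeepAlive = false emits "Connection: close", and
        // dropping to HTTP/1.0 avoids keep-alive semantics altogether, so each
        // poll opens a fresh TCP connection through the load balancer.
        public class NonPersistentSoapClient : SoapHttpClientProtocol
        {
            protected override WebRequest GetWebRequest(Uri uri)
            {
                var request = (HttpWebRequest)base.GetWebRequest(uri);
                request.KeepAlive = false;
                request.ProtocolVersion = HttpVersion.Version10;
                return request;
            }
        }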

    Read the article

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async, event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to calling the WCF services directly.

    Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written. In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, all the client caches are invalidated and they all need the same new data. Then the service will get hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.

    We could use ASP.NET output caching with WCF to solve the repeated-processing problem, but the server would still be thread-bound on incoming requests, and getting the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF call and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.

    I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. Here's how the architecture looks: [architecture diagram]

    The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header, the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body of a 200 response, which includes the current ETag in the header.
    If the proxy does not have a local cached file for the service response, it calls the service, and writes the WCF response to the local cache file and to the body of a 200 response for the client. So the WCF service is only troubled if both the client and the proxy have stale (or no) caches.

    The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service. It does this completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:

        Key                                                 Value
        /WcfSampleService/PersonService.svc/rest/fetch/3    "28cd4796-76b8-451b-adfd-75cb50a50fa6"
        "28cd4796-76b8-451b-adfd-75cb50a50fa6"              /WcfSampleService/PersonService.svc/rest/fetch/3

    In Node we read the cache using the incoming URL path as the key, and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file, which contains the original service response headers, so the proxy response is exactly the same as the underlying service's). When the data is updated, we need to invalidate the ETag cache - which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.

    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with a concurrency of 100, and not sending any ETag headers in the requests, with the Node proxy I get 102 requests handled per second and an average response time of 975 milliseconds, with 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second and a mean response time of 1853 milliseconds, with 90% of responses served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20Mb memory; IIS maxed at 60% CPU and 100Mb memory.

    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison; but for similar scenarios, where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.

    * - actually, the accelerator will work nicely for any HTTP request where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") - which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
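    For illustration, the dual-entry tag cache described above can be sketched in a few lines of C# - here a Dictionary stands in for memcached, and all names are assumptions rather than code from the sample:

        using System.Collections.Generic;

        // Sketch of the dual-entry ETag cache described above. A Dictionary
        // stands in for memcached; all names are assumptions, not sample code.
        public static class TagCache
        {
            static readonly Dictionary<string, string> cache = new Dictionary<string, string>();

            // Read service: store both directions when a new ETag is issued.
            public static void Store(string readUrl, string etag)
            {
                cache[readUrl] = etag;  // forward: URL -> current ETag
                cache[etag] = readUrl;  // reverse: ETag -> URL, used on invalidation
            }

            // Update service: it only knows the entity's old ETag.
            public static void Invalidate(string oldEtag, string newEtag)
            {
                string readUrl;
                if (!cache.TryGetValue(oldEtag, out readUrl)) return;
                cache[readUrl] = newEtag;  // repoint the URL at the new tag
                cache[newEtag] = readUrl;  // store the new reverse lookup
                cache.Remove(oldEtag);     // delete the stale reverse entry
            }
        }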

    Read the article

  • Jersey non-blocking client

    - by Pavel Bucek
    Although Jersey already has support for making asynchronous requests, it is implemented in the standard blocking way - every asynchronous request is handled by one thread, and that thread is released only after the request is completely processed. That is OK for lots of cases, but imagine how that will work when you need to make lots of parallel requests. Of course you can limit the number of threads used for asynchronous requests (and that is really a wise thing to do - you do want to control your resources), but you'll get another, maybe unpleasant, consequence: obviously, processing time will increase. There are a few projects trying to deal with that problem, commonly known as async HTTP clients. I didn't want to "re-implement the wheel", so I decided to use AHC - Async Http Client made by Jeanfrancois Arcand. There is also an interesting implementation from Apache - HttpAsyncClient - but it is still in "very early stages of development", and the others weren't in similar or better shape than AHC. How does this work? Non-blocking clients allow users to make the same asynchronous requests as with the standard approach, but the implementation is different - threads are better utilized and don't spend most of their time in an idle state. Simply described: when you make a request (send it over the network), you are waiting for a reply from the other side. And there comes the main advantage of the non-blocking approach - it uses those threads for further work, like making other requests or processing responses. Idle time is minimized and your resources (threads) are far better used. Who should consider using this? Everyone who is making lots of asynchronous requests. I haven't done a proper benchmark yet, but some simple tests show a huge improvement in cases where lots of concurrent asynchronous requests are made in a short period. Last but not least - this module is still experimental, so if you don't like something, have ideas for improvements or any feedback, feel free to comment on this blog post, send mail to [email protected] or contact me personally. All feedback is greatly appreciated!

    maven dependency (will be present in the java.net maven 2 repo by the end of the day): http://download.java.net/maven/2/com/sun/jersey/experimental/jersey-non-blocking-client

        <dependency>
            <groupId>com.sun.jersey.experimental</groupId>
            <artifactId>jersey-non-blocking-client</artifactId>
            <version>1.9-SNAPSHOT</version>
        </dependency>

    code snippet:

        ClientConfig cc = new DefaultNonBlockingClientConfig();
        cc.getProperties().put(NonBlockingClientConfig.PROPERTY_THREADPOOL_SIZE, 10); // default value, feel free to change
        Client c = NonBlockingClient.create(cc);
        AsyncWebResource awr = c.asyncResource("http://oracle.com");
        Future<ClientResponse> responseFuture = awr.get(ClientResponse.class);
        // or
        awr.get(new TypeListener<ClientResponse>(ClientResponse.class) {
            @Override
            public void onComplete(Future<ClientResponse> f) throws InterruptedException {
                ...
            }
        });

    javadoc (temporary location, won't be updated): http://anise.cz/~paja/jersey-non-blocking-client/

    Read the article

  • [Ruby] Why do I have to URI.encode even safe characters for Net::HTTP requests?

    - by Matthias
    I was trying to send a GET request to Twitter (user ID replaced for privacy reasons) using Net::HTTP:

        url = URI.parse("http://api.twitter.com/1/friends/ids.json?user_id=12345")
        resp = Net::HTTP.get_response(url)

    this throws an exception in Net::HTTP:

        NoMethodError: undefined method `empty?' for #<URI::HTTP:0x59f5c04>
        from /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/http.rb:1470:in `initialize'

    just by coincidence, I stumbled upon a similar code snippet, which used URI.encode prior to URI.parse, so I copied that and tried again:

        url = URI.parse(URI.encode("http://api.twitter.com/1/friends/ids.json?user_id=12345"))
        resp = Net::HTTP.get_response(url)

    now it works fine, but why? There are no reserved characters that need escaping in the URL I mentioned, so why do I have to call URI.encode for get_response to succeed?

    Read the article

  • Would a Socket Connection Outperform an Interval-Based Database Sweep and Requests?

    - by Jascha
    I'm building a small chat application to add to an existing framework. There will only be 20-50 users MAX at any one time. I was wondering if I could get away with updating a cache file containing (semi) live chat data for whichever users happen to be chatting, just by performing timed queries and regular AJAX refreshes for new data, as opposed to learning how to open and maintain a socket connection. I'm sure there are existing chat plug-ins out there, but I just had a hell of a time installing one, and I could see building the whole damn thing taking just as much time as plugging one in. Am I off to a bad start? Thanks in advance -J (P.S. this is a semi-closed network behind a PHP login, so security isn't a great concern)

    Read the article

  • Simplest Way to Process Basic HTTPS GET File Requests?

    - by stormin986
    All I need to do is download some basic text-based and image files from a web server that has a self-signed SSL certificate. I have been trying to figure out how to use HttpClient to do this, but getting the SSL to work is a nightmare that seems to be way too much trouble for such a simple task. Is there a better way to perform these file downloads? Perhaps through a WebView or Browser feature? Reinventing the wheel of making a simple HTTPS GET request is a major pain, and is significantly holding up my development schedule.

    Read the article

  • How do I ensure that SOAP requests from a flash client to my ASP server are coming from the flash client?

    - by Gary Benade
    I have a Flash-based game that has a high-score system implemented with a SOAP service. There are prizes involved, and I want to prevent someone from using Firebug or similar to discover the web service path and submit fake scores. I considered using some kind of encryption on the data, but I am aware that someone could decompile the swf and work out how I did it. I also considered using an IP whitelist, but since the incoming data will come from the user's IP and not the server's, that won't work. (I'm sure I'm missing something obvious here...) I know that there is a tried and tested solution for this, but I don't seem to be asking Google the right questions to get to it. Any help and suggestions will be appreciated, thank you

    Read the article

  • How to do IIS 7 URL rewrite to avoid processing static HTML file requests for an ASP.NET MVC application?

    - by ROHITH
    Currently I am developing an ASP.NET MVC application on IIS 5.5, so I have given it a wildcard mapping. I know that if we use wildcard mapping, all requests go through the ASP.NET execution pipeline (correct me if I am wrong), and in IIS 7 or above we have a built-in URL rewrite engine. What I want to know is: in IIS 7 or above, will a request for a static HTML page go through the ASP.NET MVC execution pipeline? (A sketch of one way to exclude static files follows below.)
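    For context: ASP.NET routing skips requests that map to files which physically exist on disk (RouteCollection.RouteExistingFiles defaults to false), so on IIS 7 a static .html file is normally served by the static-file handler rather than the MVC pipeline. You can also exclude patterns explicitly; a hedged sketch (route patterns are illustrative, not from the question):

        using System.Web.Mvc;
        using System.Web.Routing;

        // Sketch (route patterns illustrative): routing already skips URLs that
        // map to files that exist on disk, because RouteExistingFiles defaults
        // to false; IgnoreRoute makes the exclusion explicit for .html files.
        public static class RouteConfig
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
                routes.IgnoreRoute("{staticfile}.html"); // top-level *.html goes to the static handler

                routes.MapRoute(
                    "Default",                            // route name
                    "{controller}/{action}/{id}",         // URL pattern
                    new { controller = "Home", action = "Index", id = "" });
            }
        }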

    Read the article

  • How do I sign requests reliably for the Last.fm api in C#?

    - by Arda Xi
    I'm trying to implement authorization through Last.fm. I'm submitting my arguments as a Dictionary to make the signing easier. This is the code I'm using to sign my calls:

        public static string SignCall(Dictionary<string, string> args)
        {
            IOrderedEnumerable<KeyValuePair<string, string>> sortedArgs = args.OrderBy(arg => arg.Key);
            string signature = sortedArgs.Select(pair => pair.Key + pair.Value)
                                         .Aggregate((first, second) => first + second);
            return MD5(signature + SecretKey);
        }

    I've checked the output in the debugger, and it's exactly how it should be; however, I'm still getting WebExceptions every time I try. Here's the code I use to generate the URL, in case it'll help:

        public static string GetSignedURI(Dictionary<string, string> args, bool get)
        {
            var stringBuilder = new StringBuilder();
            if (get)
                stringBuilder.Append("http://ws.audioscrobbler.com/2.0/?");
            foreach (var kvp in args)
                stringBuilder.AppendFormat("{0}={1}&", kvp.Key, kvp.Value);
            stringBuilder.Append("api_sig=" + SignCall(args));
            return stringBuilder.ToString();
        }

    And sample usage to get a SessionKey:

        var args = new Dictionary<string, string>
        {
            {"method", "auth.getSession"},
            {"api_key", ApiKey},
            {"token", token}
        };
        string url = GetSignedURI(args, true);

    EDIT: Oh, and the code references an MD5 function implemented like this:

        public static string MD5(string toHash)
        {
            byte[] textBytes = Encoding.UTF8.GetBytes(toHash);
            var cryptHandler = new System.Security.Cryptography.MD5CryptoServiceProvider();
            byte[] hash = cryptHandler.ComputeHash(textBytes);
            return hash.Aggregate("", (current, a) => current + a.ToString("x2"));
        }

    Read the article

  • How to store or share live data between PHP Requests?

    - by Devyn
    Hi, I want to start a project for Facebook, and the application will be a real-time multiplayer chess game. The problem I'm having is that I have no idea how to store the data when a player moves a piece, and how to update the new position in player 2's browser. I'm going to use PHP and MySQL on the server side and jQuery for client rendering. The simplest idea is to store the data in XML or MySQL and re-send the result to player 2's browser, but I know that when thousands of players are playing, that will not be an efficient way. Since I don't have time to study a new language for this project, I'm going to have to stick with PHP. I'm not going to use Flash either, because I want my client side lightweight and Flash-free. So is there any way to solve my problems?

    Read the article

  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other Java application server, provides resources that your applications use to provide services. Unfortunately, none of these resources are unlimited, and they must be managed carefully. One of these resources is threads, which are pooled to provide better throughput and performance along with fast response times, and to avoid deadlocks. Threads are the execution points through which WebLogic Server delivers its power and executes work. Managing threads is very important because it can affect the overall performance of the entire system. In releases prior to WebLogic Server 9.0 we had multiple execute queues and user-defined thread pools. There were different queues for different types of work, each with a fixed number of execute threads. Tuning these thread pools and finding the proper number of threads was time consuming and required many trials. WebLogic Server 9.0 and the following releases use a single thread pool and a single priority-based execute queue. All types of work are executed in this single thread pool. Its size (thread count) is automatically decreased or increased (self-tuned). The new "self-tuning" system simplifies getting the proper number of threads and utilizing them.

    A work manager allows your applications to run concurrently in multiple threads. A work manager is a mechanism that lets you manage and utilize threads and create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline or priority for a request with a work manager and then associate this work manager with one or more applications. At run time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. A default work manager is provided, and it should be sufficient for most applications. However, there can be cases where you want to change this default configuration; your application(s) may be providing services that need a mixture of fast response times and long-running processes like batch updates. Be aware, though, that a wrong work manager configuration can bring a performance penalty where you expected an improvement.

    We can define/configure work managers at:
    • Domain level: config.xml
    • Application level: weblogic-application.xml
    • Component level: weblogic-ejb-jar.xml or weblogic.xml (for a specific web application, use weblogic.xml)

    We can use the following predefined rules/constraints to manage the work:
    • Fair Share Request Class: Specifies the average thread-use time required to process requests. The default is 50.
    • Response Time Request Class: Specifies a response time goal in milliseconds.
    • Context Request Class: Assigns request classes to requests based on context information.
    • Min Threads Constraint: Guarantees the number of threads the server will allocate to requests.
    • Max Threads Constraint: Limits the number of concurrent threads executing requests.
    • Capacity Constraint: Causes the server to reject requests only when it has reached its capacity.

    Let's create a work manager for a long-running piece of work in our application. Go to the WebLogic console and select Environment | Work Managers from the domain structure tree. Click the New button, select Work Manager and click Next. Enter the name for the work manager and click Next. Then select the managed server instance(s) or cluster(s) from the available targets (the ones your long-running application is deployed to) and finish.
    Click on MyWorkManager, open the Configuration tab, check Ignore Stuck Threads and save. This will prevent WebLogic from treating long-running processes (those taking more than a specified time) as stuck, and will let them run to completion.
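    For reference, a work manager can also be declared in a deployment descriptor instead of the console. A rough sketch of the weblogic.xml form (element names follow the WebLogic descriptor schema, but treat this as a sketch to check against the documentation rather than a drop-in config):

        <weblogic-web-app>
            <work-manager>
                <name>MyWorkManager</name>
                <!-- limit concurrent threads for this manager's requests -->
                <max-threads-constraint>
                    <name>MyMaxThreads</name>
                    <count>8</count>
                </max-threads-constraint>
            </work-manager>
            <!-- route this web app's requests to the work manager -->
            <wl-dispatch-policy>MyWorkManager</wl-dispatch-policy>
        </weblogic-web-app>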

    Read the article

  • [Symfony 1.2: ckWebServicePlugin 3.0.0] Module name in SOAP requests, how to get rid of it?

    - by Henri
    When I generate a WSDL file with ./symfony webservice:generate-wsdl <application> <module> <url> (where <application> is 'frontend', <module> is 'soap' and <url> is 'http://localhost') I get a nice soap.wsdl file which works like it should. Except that the methods are not named 'justAMethod' but 'soapService_justAMethod' (where soapService is the module which holds the SOAP methods). How do I omit the module name in the SOAP method names? I know this is possible, since the previous release of the software had no module names in the SOAP method names.

    Read the article

  • How do you determine an acceptable response time for DB requests?

    - by qiq
    According to this discussion of Google App Engine on Hacker News: "A DB (read) request takes over 100ms on the datastore. That's insane and unusable for about 90% of applications." How do you determine what is an acceptable response time for a DB read request? I have been using App Engine without noticing any issues with DB responsiveness. But, on the other hand, I'm not sure I would even know what to look for in that regard :)

    Read the article

  • ASP.NET MVC - How to save some data between requests?

    - by Tony
    Hi, I'm trying to solve the following problem: when a user logs into a website (via a user control .ascx stored on the master page), their name is stored in the Page.User.Identity.Name property. OK, but how do I retrieve that user name in the controller? Is it possible without referencing the System.Security.Principal namespace in the controller? In other words, the controller must know which user wants to do some action (e.g. change account data). I could store the name in an Html.Hidden control on each view, but I don't want to have a mess in my views
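    For reference, ASP.NET MVC exposes the same principal through the Controller base class, so the name can be read without any extra namespace import or hidden field. A minimal sketch (controller and action names are assumed, not from the question):

        using System.Web.Mvc;

        // Sketch: Controller.User is the same IPrincipal the page sees as
        // Page.User, so no using directive for System.Security.Principal
        // and no Html.Hidden field is needed.
        public class AccountController : Controller
        {
            public ActionResult ChangeAccountData()
            {
                string userName = User.Identity.Name;  // the logged-in user's name
                ViewData["UserName"] = userName;
                return View();
            }
        }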

    Read the article

  • Can Tomcat provide separate (or HTTPS-only) sessions for HTTPS requests?

    - by Joe
    I have a web application which contains both secure (SSL) and non-secure pages. A user can log in to the site and must appear logged-in in both the SSL and non-SSL areas. (NB. SSL isn't implemented via Tomcat, but via Apache HTTPD servers which sit in front of Tomcat - so Tomcat has no SSL configuration.) The logged-in state is currently maintained via a servlet session (using Tomcat's vanilla session management). The obvious issue with this approach is that the JSESSIONID cookie is transported over both HTTP and HTTPS connections, meaning that it's potentially possible to intercept it and hijack the session. Are there any solutions to this without rolling our own session management (i.e. does Tomcat cater for this situation)? I'm prepared to implement our own session management, but don't want to reinvent something that may already be supported.

    Read the article

  • Handle HTTPS Requests in a Proxy Server in C#

    - by Masoud Zohrabi
    I'm trying to write a home proxy server in C# and I have almost succeeded, but I have a problem handling HTTPS (CONNECT) requests. I don't really know how to handle this type of request. In my studies I realized that for these requests we must connect the client to the target host directly. The steps for these requests (as I understand them):
    1. Receive the first request from the client (CONNECT https://www.example.ltd:443 HTTP/1.1) and send it to the target host
    2. Send HTTP/1.1 200 Connection Established\r\n\r\n to the client
    3. Listen on both sockets (client and target host) and forward whatever each one receives to the other
    4. Listen until one of the sockets disconnects
    Is this correct? If it is, how?
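    Those steps match how CONNECT tunnelling is usually done. A minimal C# sketch of steps 2-4 (method and variable names are illustrative, not from the original post; parsing the host and port out of the CONNECT line in step 1 is assumed to be done already):

        using System.Net.Sockets;
        using System.Text;
        using System.Threading.Tasks;

        static class ConnectTunnel
        {
            // Sketch of steps 2-4: reply to CONNECT, then relay raw bytes in
            // both directions until either side closes the connection.
            public static async Task TunnelAsync(TcpClient client, string host, int port)
            {
                using (var server = new TcpClient())
                {
                    await server.ConnectAsync(host, port);
                    NetworkStream clientStream = client.GetStream();
                    NetworkStream serverStream = server.GetStream();

                    // Step 2: tell the client the tunnel is up.
                    byte[] ok = Encoding.ASCII.GetBytes("HTTP/1.1 200 Connection Established\r\n\r\n");
                    await clientStream.WriteAsync(ok, 0, ok.Length);

                    // Steps 3-4: pump bytes both ways until one side disconnects.
                    Task upstream = clientStream.CopyToAsync(serverStream);
                    Task downstream = serverStream.CopyToAsync(clientStream);
                    await Task.WhenAny(upstream, downstream);
                }
            }
        }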

    Read the article

  • HTTP Post requests using HttpClient take 2 seconds, why?

    - by pableu
    Update: You might want to hold off on this for a bit; I just noticed it could be my fault after all. Working on this all afternoon, and then I find a flaw ten minutes after posting here, ts. Hi, I'm currently coding an Android app that submits stuff in the background using HTTP POST and AsyncTask. I use the org.apache.http.client package for this. I based my code on this example. Basically, my code looks like this:

        public void postData() {
            // Create a new HttpClient and Post Header
            HttpClient httpclient = new DefaultHttpClient();
            HttpPost httppost = new HttpPost("http://192.168.1.137:8880/form");
            try {
                List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(2);
                nameValuePairs.add(new BasicNameValuePair("id", "12345"));
                nameValuePairs.add(new BasicNameValuePair("stringdata", "AndDev is Cool!"));
                httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs));
                // Execute HTTP Post Request
                HttpResponse response = httpclient.execute(httppost);
            } catch (ClientProtocolException e) {
                Log.e(TAG, e.toString());
            } catch (IOException e) {
                Log.e(TAG, e.toString());
            }
        }

    The problem is that the httpclient.execute(..) line takes around 1.5 to 3 seconds, and I do not understand why. Just requesting a page with HTTP GET takes around 80 ms or so, so the problem doesn't seem to be the network latency itself. The problem doesn't seem to be on the server side either; I have also tried POSTing data to http://www.disney.com/ with similarly slow results. And Firebug shows a 1 ms response time when POSTing data to my server locally. This happens on the emulator and on my Nexus One (both with Android 2.2). If you want to look at the complete code, I've put it on GitHub. It's just a dummy program that does an HTTP POST in the background using AsyncTask at the push of a button. It's my first Android app, and my first Java code for a long time. And incidentally, also my first question on Stack Overflow ;-) Any ideas why httpclient.execute(httppost) takes so long?

    Read the article

  • Mobile My Oracle Support 6.3 Release is live!

    - by JanSyss
    We released Mobile My Oracle Support 6.3 last Saturday (13-Oct-2012), including 10 enhancements and almost 40 bug fixes. Mobile My Oracle Support is My Oracle Support's web application optimized for mobile devices to manage your Service Requests and your On Demand Requests for Change (RFCs), search over Support's Knowledge Base, Bug database or Sun System Handbook, and manage your pending user requests (CUA). You can find the application at http://support.oracle.mobi or get redirected from http://support.oracle.com when using a mobile device.

    Overall
    • Several UI optimizations in different screens.

    Service Request Area
    • Show the platinum icon for Platinum SRs and the restore status for Platinum Sev 1s.
    • Email sent with the Share functionality now contains links to Mobile MOS and the Full Site.

    Knowledge Management Area
    • Ability in Advanced Search to search the Sun System Handbook.
    • Better rendering of the KB documents to avoid horizontal scrolling where possible.

    Don't hesitate to share your feedback, comments or even requests.

    Read the article

  • How can I implement a jQuery AJAX form which requests information from a web API via a PHP request?

    - by Jacob Schweitzer
    I'm trying to implement a simple API request to the SEOmoz Linkscape API. It works if I only use the PHP file, but I want to do an AJAX request using jQuery to make it faster and more user friendly. Here is the JavaScript code I'm trying to use:

        $(document).ready(function() {
            $('#submit').click(function() {
                var url = $('#url').val();
                $.ajax({
                    type: "POST",
                    url: "api_sample.php",
                    data: url,
                    cache: false,
                    success: function(html) {
                        $("#results").append(html);
                    }
                });
            });
        });

    And here is the part of the PHP file where it takes the value:

        $objectURL = $_POST['url'];

    I've been working on it all day and can't seem to find the answer... I know the PHP code works, as it returns a valid JSON object and displays correctly when I do it that way. I just can't get the AJAX to show anything at all!!

    Read the article

  • What is the current standard for authenticating HTTP requests (REST, XML over HTTP)?

    - by CodeToGlory
    The standard should address the following authentication challenges:
    • Replay attacks
    • Man-in-the-middle attacks
    • Plaintext attacks
    • Dictionary attacks
    • Brute force attacks
    • Spoofing by counterfeit servers
    I have already looked at Amazon Web Services, and that is one possibility. More importantly, there seem to be two common approaches:
    1. Use an apiKey which is encoded in a similar fashion to AWS but is passed as a POST parameter on the request.
    2. Use the HTTP Authorization header with an AWS-style signature. The signature is typically obtained by signing a date stamp with an encrypted shared secret (see the sketch below).
    The signature is therefore passed either as an apiKey or in the HTTP Authorization header. I would like the community to weigh in on both options - those who may have used one or both - and I would also like to explore other options I am not considering. I would also use HTTPS to secure my services.
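    For illustration, a minimal C# sketch of the second approach - signing a date stamp with a shared secret using HMAC-SHA256 (the exact string-to-sign and header layout here are assumptions, not any particular standard):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        // Sketch of an AWS-style request signature: HMAC a timestamp with a
        // shared secret and send the Base64 result as the api_sig parameter
        // or in the Authorization header.
        static class RequestSigner
        {
            public static string Sign(string secretKey, DateTime utcNow)
            {
                string stringToSign = utcNow.ToString("s") + "Z"; // sortable ISO 8601 timestamp
                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secretKey)))
                {
                    byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign));
                    return Convert.ToBase64String(hash);
                }
            }
        }

    The server recomputes the same HMAC from the transmitted timestamp and rejects requests whose timestamp is stale, which is what blunts replay attacks in this scheme.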

    Read the article
