Search Results

Search found 5793 results on 232 pages for 'requests'.


  • Add JSON support to Rails app

    - by Meltemi
    I am experimenting with Rails and was wondering what's needed to allow/add support for JSON requests. I have a vanilla installation of Rails 2.3.5, and the default scaffolding seems to provide support for HTML and XML requests but not JSON:

        class EventsController < ApplicationController
          # GET /events
          # GET /events.xml
          def index
            @events = Event.all
            respond_to do |format|
              format.html # index.html.erb
              format.xml  { render :xml => @events }
            end
          end

          # GET /events/1
          # GET /events/1.xml
          def show
            @event = Event.find(params[:id])
            respond_to do |format|
              format.html # show.html.erb
              format.xml  { render :xml => @event }
            end
          end
          ...

    I'm new to this, but it would appear that I need to add a format line in each method along the lines of format.js { render :js => @event.json }. Couldn't this be done automatically? Perhaps there's a template somewhere I need to update, or a flag I can set? Or perhaps, and most likely, I've missed the boat entirely?!?
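
    A minimal sketch of the manual route, assuming the default to_json serialization is acceptable (the block below is illustrative, not the scaffold's generated code): register a json format in each respond_to block.

        # GET /events/1.json
        def show
          @event = Event.find(params[:id])
          respond_to do |format|
            format.html
            format.xml  { render :xml  => @event }
            format.json { render :json => @event } # serializes via @event.to_json
          end
        end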

    Read the article

  • Creating Mock object of Interface with type-hint in method fails on PHPUnit

    - by Mark
    I created the following interface:

        <?php
        interface Action {
            public function execute(\requests\Request $request, array $params);
        }

    Then I try to make a mock object of this interface with PHPUnit 3.4, but I get the following error:

        Fatal error: Declaration of Mock_Action_b389c0b1::execute() must be compatible with that of Action::execute() in D:\Xampp\xampp\php\PEAR\PHPUnit\Framework\TestCase.php(1121) : eval()'d code on line 2

    I looked through the stack trace I got from PHPUnit and found that it creates a mock object that implements the interface Action, but declares the execute method as:

        <?php
        public function execute($request, array $params)

    As you can see, PHPUnit carries over the array type-hint but forgets about \requests\Request, which obviously leads to an error. Does anyone know a workaround for this error? I also tried it without namespaces, but I still get the same error.
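
    One workaround is to skip the generated mock for this interface and hand-write a small test double that honours the real signature (a sketch; the class name and the recorded-call bookkeeping are my own invention):

        <?php
        // Hand-written stub implementing the interface with the correct type-hints.
        class RecordingAction implements Action
        {
            public $calls = array();

            public function execute(\requests\Request $request, array $params)
            {
                // Record the arguments so a test can assert on them afterwards.
                $this->calls[] = array($request, $params);
            }
        }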

    Read the article

  • How/is data shared between fastCGI processes?

    - by Josh the Goods
    I've written a simple Perl script that I'm running via FastCGI on Apache. The application loads a set of XML data files which are used to look up values based upon the query parameters of an incoming request. As I understand it, if I want to increase the number of concurrent requests my application can handle, I need to allow FastCGI to spawn multiple processes. Will each of these processes have to hold a duplicate copy of the XML data in memory? Is there a way to set things up so that I can have one copy of the XML data loaded in memory while increasing the capacity to handle concurrent requests?
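
    Separate FastCGI processes do not share Perl data structures, so one option is to keep the parsed lookup data in an external shared cache that all workers consult. A sketch using memcached via Cache::Memcached (the key name and the load_xml_lookup helper are placeholders):

        use Cache::Memcached;

        my $memd = Cache::Memcached->new({ servers => ['127.0.0.1:11211'] });

        # Every worker checks the shared cache first and only parses the XML on a miss.
        my $lookup = $memd->get('xml_lookup_table');
        unless ($lookup) {
            $lookup = load_xml_lookup();                    # hypothetical: parse the XML files
            $memd->set('xml_lookup_table', $lookup, 3600);  # cache for an hour
        }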

    Read the article

  • How to make an HTTP request in a separate thread with timeout?

    - by Vitaly
    Hi, I haven't programmed in Delphi for a while and frankly didn't think I'd ever have to, but here I am, desperately trying to find some information on the matter, and it's so scarce nowadays that I can't find anything. So maybe you guys could help me out. Currently my application uses the Synapse library to make HTTP calls, but it doesn't allow for setting a timeout. Usually that's not a big problem, but now I absolutely must have a timeout to handle any connectivity issues nicely. What I'm looking for is a library (synchronous or not) that will make HTTP requests absolutely transparent for the user, with no visible or hidden delays. I can't immediately kill a thread right now, and with the possibility of many frequent requests to a server that is not responding, that's no good.

    Read the article

  • Multiple records with one request in RESTful system

    - by keithjgrant
    All the examples I've seen regarding a RESTful architecture have dealt with a single record. For example, a GET request to mydomain.com/foo/53 to get foo 53, or a POST to mydomain.com/foo to create a new Foo. But what about multiple records? Requesting a series of Foos by id, or posting an array of new Foos, would generally be more efficient as a single API request than as dozens of individual requests. Would you "overload" mydomain.com/foo to handle requests for both single and multiple records? Or would you add a mydomain.com/foo-multiple to handle plural POSTs and GETs? I'm designing a system that may potentially need to get many records at once (something akin to mydomain.com/foo/53,54,66,86,87). But since I haven't seen any examples of this, I'm wondering if there's something I'm just not getting about a RESTful architecture that makes this approach "wrong".
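
    For what it's worth, one common convention treats /foo as the collection resource and lets a query string or comma-separated id list select a subset (illustrative only; REST itself does not prescribe these URL shapes):

        GET  /foo/53                      -> a single record
        GET  /foo?ids=53,54,66,86,87      -> several records in one request
        POST /foo                         -> body carries an array of new Foos;
                                             respond with the ids of the created records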

    Read the article

  • Caching web page data, Database or File

    - by Mahdi
    I am creating an RSS reader application that requests the RSS from my server. That is, the RSS is first downloaded to my server, and then the application downloads it from my server. I want to create an RSS cache for this, where, for example, each feed would be refreshed every minute. So if 10 users request the RSS of example.com within 1 minute, my server will download it only the first time, and for the other 9 requests the RSS will be loaded from the cache. My question is: should I use a database (MSSQL) for this purpose, or should I use files? I have no limit on database size nor on file size... EDIT: I'm using ASP.NET on the server.
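
    Since the cache only needs to live for a minute, a third option worth weighing is the in-memory ASP.NET cache with an absolute expiration, falling back to a download on a miss. A minimal sketch (the DownloadFeed helper is a placeholder for however the feed is actually fetched):

        using System;
        using System.Net;
        using System.Web;
        using System.Web.Caching;

        public static class FeedCache
        {
            // Returns the feed XML, downloading it at most once per minute per URL.
            public static string GetFeed(string feedUrl)
            {
                string key = "rss:" + feedUrl;
                string xml = HttpRuntime.Cache[key] as string;
                if (xml == null)
                {
                    xml = DownloadFeed(feedUrl);
                    HttpRuntime.Cache.Insert(key, xml, null,
                        DateTime.UtcNow.AddMinutes(1), Cache.NoSlidingExpiration);
                }
                return xml;
            }

            private static string DownloadFeed(string url)
            {
                using (var client = new WebClient())
                {
                    return client.DownloadString(url);
                }
            }
        }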

    Read the article

  • Most Efficient Way of calling an external webservice in Java?

    - by Sudheer
    In one of our applications we need to call the Yahoo SOAP web service to get weather and other related info. I used the wsdl2java tool from Axis 1.4, generated the required stubs, and wrote a client. I use JSP's useBean to include the client bean and call methods defined in the client, which call the Yahoo web service in turn. Now the problem: when users make calls to the JSP, the response time of the web service differs greatly; for one user it took less than 10 seconds, and for another on the same network it took more than a minute. I was just wondering whether Axis 1.4 queues the requests even though the JSPs are multithreaded. And finally, is there an efficient way of calling the web service (Yahoo weather)? Typically I get around 200 simultaneous requests from my users.
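
    One knob worth checking is the per-call timeout on the generated stub, so a slow remote call cannot hold a request for a minute. A sketch (the locator and port type names are stand-ins for whatever wsdl2java actually generated for the Yahoo WSDL):

        // Generated names below are placeholders.
        WeatherServiceLocator locator = new WeatherServiceLocator();
        WeatherPort port = locator.getWeatherPort();

        // Axis 1.4 generated stubs extend org.apache.axis.client.Stub,
        // which exposes a per-call timeout in milliseconds.
        ((org.apache.axis.client.Stub) port).setTimeout(10 * 1000);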

    Read the article

  • How best to pre-install OR pre-load OR cache JavaScript library to optimize performance?

    - by Kabeer
    Hello. I am working on an intranet application, so I have some control over the client machines. The JavaScript library I am using is fairly large. I would like to pre-install, pre-load, or cache the JavaScript library on each machine (and in each browser) so that it does not travel over the wire for each request. I know that browsers cache a JavaScript library for subsequent requests, but I would like the library to be cached once for all subsequent requests, sessions and users. What is the best mechanism to achieve this?
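
    If the library is served by Apache, for example, a far-future expiry header on the script (combined with a versioned file name so updates are still picked up) keeps it in each browser's cache across sessions. An illustrative snippet, assuming mod_expires and mod_headers are enabled:

        <FilesMatch "\.js$">
            ExpiresActive On
            ExpiresDefault "access plus 1 year"
            Header set Cache-Control "public, max-age=31536000"
        </FilesMatch>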

    Read the article

  • How do I make a request using HTTP basic authentication with PHP curl?

    - by Bedwyr Humphreys
    I'm building a REST web service client in PHP, and at the moment I'm using curl to make requests to the service. How do I use curl to make authenticated (HTTP basic) requests? Do I have to add the headers myself? If so, I've got some other questions: is there a REST library for PHP? Or is there a wrapper for curl that makes it a bit more REST-friendly? Or am I going to have to continue to roll my own? Thanks.
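
    curl can add the Authorization header for you; a minimal sketch (URL and credentials are placeholders):

        <?php
        $ch = curl_init('https://example.com/api/resource');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
        curl_setopt($ch, CURLOPT_USERPWD, 'username:password'); // becomes the Authorization: Basic header
        $response = curl_exec($ch);
        curl_close($ch);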

    Read the article

  • Generating unobtrusive JS

    - by nico
    I have read quite a bit about unobtrusive JS and how to generate it and all that jazz... My problem is this: I have a website that relies heavily on mod_rewrite, so essentially all page requests are sent to index.php, which generates the main structure of the page and then includes the appropriate page. Now, there are different sections in the site, and each section uses different JavaScript functions (e.g. for different AJAX requests). If I just attached a function to the onload of the page, the thing would obviously not work, as I do not have to initialise the same things for each page... so what is the best way to handle this situation? I hope the situation is clear; I'll be happy to clarify if needed.
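
    One common pattern (a sketch; the body ids and initialiser names are made up): have index.php emit a section-specific id on the body element, and dispatch from a single onload handler.

        // index.php renders e.g. <body id="section-gallery">
        var initializers = {
            "section-gallery": function () { /* wire up gallery AJAX */ },
            "section-forum":   function () { /* wire up forum AJAX */ }
        };

        window.onload = function () {
            var init = initializers[document.body.id];
            if (init) {
                init();
            }
        };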

    Read the article

  • Excluding files from web logs

    - by Ray
    Looking through my web logs, I see a lot of entries that don't interest me. Some of them are commonly used images, CSS files, and scripts, which I can easily exclude by unchecking the 'log visits' check box in IIS for the folder properties. I would also like to exclude log entries for certain common requests which are not in their own folders: mostly 'favicon.ico', 'ScriptResource.axd', and 'WebResource.axd'. These (especially ScriptResource.axd) make up almost a third of a typical log file on my site. So, the question is, how do I tell IIS not to log these requests? And is there any reason that this is a bad idea?
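
    On IIS 7 one avenue to investigate is the system.webServer/httpLogging element, which can be switched off for a specific URL via a location tag (sketch only; this section is often locked at the server level, in which case the setting has to go in applicationHost.config):

        <!-- Turn off logging for a single well-known URL -->
        <location path="favicon.ico">
          <system.webServer>
            <httpLogging dontLog="true" />
          </system.webServer>
        </location>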

    Read the article

  • Why are packets injected with libpcap duplicated?

    - by r0u1i
    I'm using SharpPcap in order to send packets as part of a monitoring system. Usually it works well, but I've encountered the strangest bug on a hosted Vista machine, and I would like your help. On that virtual Vista machine, injected packets are duplicated. That is, if I send a ping request using libpcap, it somehow gets duplicated and I get two requests on the destination machine. The two requests are almost identical byte-wise; the only difference between them is that the second packet's TTL field is one less than the original packet's value. Using Wireshark I can see that the packet gets duplicated before it (and its clone) leaves the Vista machine. The problem manifests even when using other tools that inject packets with libpcap (namely PlayCap). Any ideas?

    Read the article

  • Multiple xmlns attributes affect page performance?

    - by Geuis
    We are working on adding some Facebook Connect functionality to our site. Part of Facebook's requirements for FB Connect is adding several additional xmlns attributes to the html element. We are likely to have 5 or 6 of their custom attributes by the time we're done, and I want to know whether this will negatively affect our page performance. That is, will these be additional resources that the browser has to download? I have checked in Firebug and I don't see additional requests, but I don't know if that is because the requests are not made by the browser, or because Firebug simply doesn't track them.

    Read the article

  • Simulate Incorrect Content-Length Headers for HTTP in C#

    - by cfeduke
    We are building a comprehensive integration test framework in C# for our application, which sits on top of HTTP and uses IIS7 to host our applications. As part of our integration tests, we want to test incoming requests which will result in EndOfStreamExceptions ("Unable to read beyond end of stream"); these occur when a client sends an HTTP header indicating a larger body size than it actually transmits as the body. We want to test our error-recovery code for this condition, so we need to simulate these sorts of requests. I am looking for a .NET Fx-based socket library or custom HttpWebRequest replacement that specifically allows developers to simulate such conditions, to add to our integration test suite. Does anyone know of any such libraries? A scriptable solution would work as well.
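
    A plain TcpClient can also produce the condition without any special library, by declaring a Content-Length larger than the bytes actually written and then closing the connection. A sketch (host, port, and path are placeholders):

        using System.Net.Sockets;
        using System.Text;

        class ShortBodyRequest
        {
            static void Main()
            {
                using (var client = new TcpClient("localhost", 80))
                using (NetworkStream stream = client.GetStream())
                {
                    string body = "short";                       // 5 bytes actually sent
                    string request =
                        "POST /test HTTP/1.1\r\n" +
                        "Host: localhost\r\n" +
                        "Content-Length: 1000\r\n" +             // deliberately overstated
                        "Connection: close\r\n" +
                        "\r\n" + body;

                    byte[] bytes = Encoding.ASCII.GetBytes(request);
                    stream.Write(bytes, 0, bytes.Length);
                }   // disposing here leaves the server waiting for the missing bytes
            }
        }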

    Read the article

  • Apache or Nginx for php behind Varnish

    - by Macindy
    We are managing a heavy-traffic PHP site (driven by vBulletin). To reduce load I use the caching proxy Varnish. It works very well: with Varnish the load was reduced by 50%. But now I am thinking about which web server to use behind Varnish. 90% of the requests reaching the web server are PHP requests. So is there a difference between Apache/mpm-prefork with mod_php and nginx/php-fpm? Is Apache perhaps better because it doesn't have the TCP overhead? Thanks for any replies; benchmarks would be great! macindy

    Read the article

  • Addressing "Access Denied" Exception with WMI Calls

    - by Joe
    I'm getting an exception with a message of "Access Denied" when executing a WMI request. Some WMI requests appear to require higher security privileges than others. Ultimately my goal is to monitor process launches within the system and log them. Regardless of whether there is a better approach, it has now become a vendetta to get this WMI approach to work. I've attempted the code at Security Tools - WMI Programming Using C#.Net and still receive the exception. If you copy the code found in the blog entry, you can reproduce my issue. Another post on a similar topic can be found at link text, but again, try the code and you'll see the same security exception. How do I permit my code to execute these WMI requests? I'm running Windows 7 Pro and VS 2010 in a new C# command-line project.
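
    For the specific goal of watching process launches, a sketch along these lines may help isolate the permission problem (System.Management assumed; Win32_ProcessStartTrace queries generally need the program to run elevated, which on Windows 7 with UAC is one common cause of "Access Denied"):

        using System;
        using System.Management;

        class ProcessWatch
        {
            static void Main()
            {
                var scope = new ManagementScope(@"\\.\root\cimv2");
                scope.Options.EnablePrivileges = true;

                var query = new WqlEventQuery("SELECT * FROM Win32_ProcessStartTrace");
                using (var watcher = new ManagementEventWatcher(scope, query))
                {
                    watcher.EventArrived += (sender, e) =>
                        Console.WriteLine("Started: " + e.NewEvent.Properties["ProcessName"].Value);
                    watcher.Start();          // throws if the caller lacks the required privileges
                    Console.ReadLine();       // keep watching until Enter is pressed
                    watcher.Stop();
                }
            }
        }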

    Read the article

  • Avoiding cookies while requesting static content

    - by Abdel Olakara
    I just did an audit of one of my web application pages (built using ASP.NET and running on the development server) using Google Chrome's developer tools. One particular warning caught my eye: "Serve static content from a cookieless domain (5)". Here is my screenshot (http://yfrog.com/7eauditresultp) as well. I would like to know whether it is possible to avoid cookies for these kinds of requests. I see that there are no cookie requests for the JavaScript files. Is it possible to avoid cookies in the headers for these files as well? And why did the browser attach cookies for the CSS and image requests but not for the JavaScript files? Any thoughts and suggestions are welcome.

    Read the article

  • Google app engine - what is the lifecycle of PersistenceManager?

    - by Domchi
    What is the preferred way of using the GAE datastore PersistenceManager in a web app? The GAE instructions are a bit ambiguous on the matter. Do I instantiate a PersistenceManagerFactory for each RPC call, or do I use only one factory for all requests? Do I call PMF.get().getPersistenceManager(), or do I call PMF.get().getPersistenceManagerProxy()? Do I close the PM after each RPC call, or do I leave it open? What are you guys doing? Furthermore, I'm not certain how GAE handles the 30-second-per-request limit. Is it even possible to reference the same PM between requests?
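
    For what it's worth, a commonly used pattern is a single application-wide PersistenceManagerFactory and a short-lived PersistenceManager per request, closed in a finally block. A sketch (the "transactions-optional" name assumes the default jdoconfig.xml; the per-request snippet is illustrative):

        import javax.jdo.JDOHelper;
        import javax.jdo.PersistenceManager;
        import javax.jdo.PersistenceManagerFactory;

        public final class PMF {
            // Creating the factory is expensive, so do it once per application.
            private static final PersistenceManagerFactory INSTANCE =
                JDOHelper.getPersistenceManagerFactory("transactions-optional");

            private PMF() {}

            public static PersistenceManagerFactory get() {
                return INSTANCE;
            }
        }

        // Inside each request / RPC call:
        PersistenceManager pm = PMF.get().getPersistenceManager();
        try {
            // ... load or persist entities for this request ...
        } finally {
            pm.close();   // don't keep a PersistenceManager across requests
        }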

    Read the article

  • DLL dependent on curllib.dll - How can I fix this?

    - by haraldo
    Hi there, I'm new to developing in C++. I've developed a DLL where I'm using libcurl to make HTTP requests. When running the DLL through depends.exe it tells me that my DLL now depends on curllib.dll. This simply doesn't work for me. My DLL is set up as static, not shared, and will be distributed on its own. I cannot rely on a user having libcurl.dll installed. I thought that by including libcurl in my project this would be all that was needed and my DLL could be independent. If this is impossible to resolve, is there an alternative method I can use to make HTTP requests? Obviously I would prefer to use libcurl. Thanks in advance.
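
    If the intent is to link libcurl statically so the DLL carries no curllib.dll dependency, the usual ingredients are a static libcurl build (.lib), the CURL_STATICLIB define before the header is included, and libcurl's own Windows dependencies on the linker line. A sketch (the library names are the typical ones and can differ between curl builds):

        // Must be defined before curl.h so the header doesn't mark symbols as dllimport.
        #define CURL_STATICLIB
        #include <curl/curl.h>

        // Visual C++ style linker inputs, assuming a static libcurl build:
        #pragma comment(lib, "libcurl.lib")
        #pragma comment(lib, "ws2_32.lib")   // Winsock, used by libcurl
        #pragma comment(lib, "wldap32.lib")  // LDAP support compiled into stock builds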

    Read the article

  • ASP.NET OutPutCache VaryByParam and VaryByHeader with AJAX

    - by DennyDotNet
    I'm trying to do some caching using VaryByParam and VaryByHeader. When an AJAX request comes in, I return partial XHTML. When a regular request comes in, I send the partial XHTML page with header/footer. I tried to cache the page by doing:

        [OutputCache( Duration = 5, VaryByParam = "nickname,page", VaryByHeader = "X-Requested-With" )]

    However, this doesn't work... if I do a regular request first and then run the AJAX call, I get the full cached page instead of the partial, and vice versa. It seems like VaryByHeader is being ignored. Is it because X-Requested-With is omitted on normal requests? Or perhaps it's doing VaryByParam OR VaryByHeader? My obvious way around this is for AJAX requests to call a different method which only returns partial pages; however, I'd like to avoid that if possible. I'm using ASP.NET MVC 1.0 with the OutputCacheAttribute.
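
    One workaround worth trying (a sketch, not verified against MVC 1.0 specifically) is to fold the AJAX-ness of the request into the cache key with VaryByCustom, which is handled by GetVaryByCustomString in Global.asax:

        // Global.asax.cs
        public override string GetVaryByCustomString(HttpContext context, string custom)
        {
            if (custom == "ajax")
            {
                // AJAX requests send X-Requested-With: XMLHttpRequest; normal requests omit it.
                string header = context.Request.Headers["X-Requested-With"];
                return header ?? "none";
            }
            return base.GetVaryByCustomString(context, custom);
        }

        // Controller action
        [OutputCache(Duration = 5, VaryByParam = "nickname,page", VaryByCustom = "ajax")]
        public ActionResult Show(string nickname, int page)
        {
            // ... return the partial or full view as before ...
            return View();
        }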

    Read the article

  • StackExchange API key

    - by user21289
    I am working on a project with the StackExchange API. The problem is that at some point I get this exception on the Eclipse console:

        java.io.IOException: Server returned HTTP response code: 400 for URL: https://api.stackexchange.com/2.1/questions?order=desc&sort=votes&tagged=OSM&site=stackoverflow
            at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
            at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(Unknown Source)
            at br.inf.pucrio.sog.StackOverflowAcessor.getQuestionsIds(StackOverflowAcessor.java:41)

    After verifying the same link in the browser, I get this error message:

        {"error_id":502,"error_name":"throttle_violation","error_message":"too many requests from this IP, more requests available in 74089 seconds"}

    I am wondering whether this is due to the limited number of queries per day. If it is, what do I need to do to get a key? And if it is not, how can I resolve the problem?
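
    If the daily anonymous quota is the culprit, an application key raises it; the key is passed as an ordinary query parameter (sketch only; the key value is a placeholder, and obtaining one is done by registering the application with Stack Exchange rather than in code):

        String url = "https://api.stackexchange.com/2.1/questions"
                   + "?order=desc&sort=votes&tagged=OSM&site=stackoverflow"
                   + "&key=YOUR_APP_KEY";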

    Read the article

  • Stop 2 identical queries from executing almost simultaneously?

    - by James Simpson
    I have developed an AJAX-based game where there is a bug caused (very rarely, but in volume it happens at least once per hour) by two requests being sent to the processing page almost simultaneously (in the last case I tracked, the requests were 0.0001 ms apart). There is a check right before the query is executed to make sure that it doesn't get executed twice, but since the difference is so small, the check hasn't finished before the next query gets executed. I'm stumped; how can I prevent this? It is causing serious problems in the game. Just to be clearer, the query starts a new round in the game, so when it executes twice it starts two rounds at the same time, which breaks the game. So I need to be able to stop the script from executing if the previous round isn't over, even if that previous round started 0.0001 ms ago.
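
    The check and the action need to be a single atomic step rather than two separate queries. One way to get that (a sketch assuming MySQL and a games table with a round_open flag, both of which are my own invention for illustration) is a conditional UPDATE: both requests run it, but only one affects a row, and only that one proceeds to start the round.

        -- Run by every "start round" request.
        UPDATE games
           SET round_open = 1,
               round_started_at = NOW()
         WHERE id = ?            -- the game in question
           AND round_open = 0;   -- succeeds only if no round is in progress

        -- In the application: if the affected-row count is 0, another request won
        -- the race and already started the round, so this request does nothing.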

    Read the article

  • Can the JVM recover from an OutOfMemoryError without a restart

    - by askullhead
    Can the JVM recover from an OutOfMemoryError without a restart if it gets a chance to run the GC before more object allocation requests come in? Do the various JVM implementations differ in this respect? EDIT: My question was about the JVM recovering, not about the user program trying to recover by catching the error. In other words, if an OOME is thrown in an application server (JBoss/WebSphere/...), do I have to restart it? Or can I let it run if further requests seem to work without a problem? Sorry if that wasn't clear.

    Read the article

  • stub webserver for integration testing

    - by Frank Schwieterman
    I have some integration tests where I want to verify that certain requests are made against a third-party web server. I was thinking I would replace the third-party server with a stub server that simply logs the calls made to it. The calls do not need to succeed, but I do need a record of the requests made (mainly just the path + query string). I was considering just using IIS for this. I could 1) set up an empty site, 2) modify the system's hosts file to redirect requests to that site, and 3) parse the log file at the end of each test. This is problematic because the IIS log files are not written immediately, and they are written to continuously: I'd need to locate the file, read the contents before the test, wait a nondeterministic amount of time after the test, read the updated contents, etc. Can someone think of a simpler way?
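
    An in-process HttpListener can play the stub and hand the recorded paths straight to the test, with no log files involved. A sketch (the class name, port, and always-200 response are choices of mine, not requirements):

        using System;
        using System.Collections.Generic;
        using System.Net;
        using System.Threading;

        // Records the path + query string of every request it receives.
        public sealed class RecordingStubServer : IDisposable
        {
            private readonly HttpListener listener = new HttpListener();
            public readonly List<string> Requests = new List<string>();

            public RecordingStubServer(string prefix)   // e.g. "http://localhost:8123/"
            {
                listener.Prefixes.Add(prefix);
                listener.Start();
                new Thread(Loop) { IsBackground = true }.Start();
            }

            private void Loop()
            {
                while (listener.IsListening)
                {
                    try
                    {
                        HttpListenerContext ctx = listener.GetContext();
                        lock (Requests) { Requests.Add(ctx.Request.Url.PathAndQuery); }
                        ctx.Response.StatusCode = 200;
                        ctx.Response.Close();
                    }
                    catch (HttpListenerException) { break; }   // listener was stopped
                }
            }

            public void Dispose() { listener.Stop(); }
        }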

    Read the article

  • How to block the UI during asynchronous operations in WPF

    - by mcintyre321
    We have a WPF app (actually a VSTO WPF app). On certain controls there are multiple elements which, when clicked, load data from a web service and update the UI. Right now we carry out these web requests synchronously, blocking the UI thread until the response comes back. This prevents the user from clicking around the app while the data is loading and potentially putting it into an invalid state for handling the data when it is returned. Of course, the app becomes unresponsive if the request takes a long time. Ideally, we'd like to have the cancel button active during this time, but nothing else. Is there a clever way of doing this, or will we have to switch the requests to execute asynchronously using BackgroundWorker and write something that disables all the controls apart from the cancel button while a request is in progress?
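
    The "disable everything except cancel" part can be cheap if the cancel button lives outside one container that holds everything else, so only two IsEnabled flips are needed. A BackgroundWorker sketch (RootPanel, CancelButton, CallWebService and ApplyResult are placeholder names):

        private BackgroundWorker worker;

        private void LoadData()
        {
            RootPanel.IsEnabled = false;     // disables every control inside the panel
            CancelButton.IsEnabled = true;   // cancel sits outside RootPanel, so it stays clickable

            worker = new BackgroundWorker { WorkerSupportsCancellation = true };
            worker.DoWork += (s, e) => e.Result = CallWebService();   // runs off the UI thread
            worker.RunWorkerCompleted += (s, e) =>
            {
                RootPanel.IsEnabled = true;
                CancelButton.IsEnabled = false;
                if (e.Error == null && !e.Cancelled)
                    ApplyResult(e.Result);   // back on the UI thread here
            };
            worker.RunWorkerAsync();
        }

        private void CancelButton_Click(object sender, RoutedEventArgs e)
        {
            worker.CancelAsync();   // cooperative: CallWebService must notice the cancellation
        }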

    Read the article
