Search Results

Search found 5793 results on 232 pages for 'requests'.


  • RESTful WCF Data Service Authentication

    - by Adrian Grigore
    Hi, I'd like to add a REST API to an existing ASP.NET MVC website. I've managed to set up WCF Data Services so that I can browse my data, but now the question is how to handle authentication. Right now the data service is secured via the site's built-in forms authentication, and that's fine when accessing the service from AJAX forms. However, it's not ideal for a RESTful API. What I would like as an alternative to forms authentication is for users to simply embed the user name and password into the URL of the web service, or pass them as request parameters. For example, if my web service is usually accessible as http://localhost:1234/api.svc, I'd like to be able to access it using the URL http://localhost:1234/api.svc/{login}/{password}. So, my questions are as follows: Is this a sane approach? If yes, how can I implement it? It seems trivial to redirect GET requests so that the login and password are attached as GET parameters, and I also know how to inspect the HTTP context and use those parameters to filter the results. But I am not sure if / how the same approach could be applied to POST, PUT and DELETE requests. Thanks, Adrian
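
    To make the idea concrete, here's roughly the kind of URL parsing I have in mind (the helper name and exact URL layout are placeholders, not an existing WCF Data Services API). Since the verb only changes the request body, the same parsing would apply to GET, POST, PUT and DELETE alike:

        using System;
        using System.Web;

        public static class UrlCredentials
        {
            // Expects URLs of the form .../api.svc/{login}/{password}/...
            public static bool TryParse(HttpRequest request, out string login, out string password)
            {
                login = password = null;
                string[] segments = request.Url.AbsolutePath.Trim('/').Split('/');
                int svcIndex = Array.FindIndex(segments,
                    s => s.EndsWith(".svc", StringComparison.OrdinalIgnoreCase));
                if (svcIndex < 0 || segments.Length < svcIndex + 3)
                    return false;
                login = segments[svcIndex + 1];
                password = segments[svcIndex + 2];
                return true;
            }
        }

    One thing I'm aware of: credentials embedded in the URL tend to end up in proxy and server logs, which is part of why I'm asking whether this is sane at all.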

    Read the article

  • Flash Security Error Accessing URL with crossdomain.xml

    - by user163757
    Hello, I recently deployed a Flash application to a server, and am now experiencing errors when making HTTPService requests. I have put what I believe to be the most permissive crossdomain.xml possible in the wwwroot folder, and still get the errors. Interestingly enough, the error only seems to occur when the request is made from a direct user interaction (i.e. a button click). The application makes other requests that are initiated by other means (i.e. creationComplete), and they seem to work as expected. Does anyone see anything wrong with the crossdomain.xml, or have any other suggestions?

    Error message:

        [RPC Fault faultString="Security error accessing url"
         faultCode="Channel.Security.Error" faultDetail="Destination: DefaultHTTP"]
            at mx.rpc::AbstractInvoker/http://www.adobe.com/2006/flex/mx/internal%3A%3AfaultHandler()
            at mx.rpc::Responder/fault()
            at mx.rpc::AsyncRequest/fault()
            at DirectHTTPMessageResponder/securityErrorHandler()
            at flash.events::EventDispatcher/dispatchEventFunction()
            at flash.events::EventDispatcher/dispatchEvent()
            at flash.net::URLLoader/redirectEvent()

    crossdomain.xml:

        <!DOCTYPE cross-domain-policy SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
            <site-control permitted-cross-domain-policies="all" />
            <allow-access-from domain="*" secure="false" />
            <allow-http-request-headers-from domain="*" headers="*" secure="false" />
        </cross-domain-policy>

    Read the article

  • Simulating Google Appengine's Task Queue with Gearman

    - by sotangochips
    One of the characteristics I love most about Google's Task Queue is its simplicity. More specifically, I love that it takes a URL and some parameters and then posts to that URL when the task queue is ready to execute the task. This structure means that the tasks are always executing the most current version of the code. Conversely, my gearman workers all run code within my Django project -- so when I push a new version live, I have to kill off the old worker and run a new one so that it uses the current version of the code. My goal is to have the task queue be independent from the code base so that I can push a new live version without restarting any workers. So, I got to thinking: why not make tasks executable by URL, just like the Google App Engine task queue? The process would work like this:

    1. A user request comes in and triggers a few tasks that shouldn't be blocking.
    2. Each task has a unique URL, so I enqueue a gearman task to POST to the specified URL.
    3. The gearman server finds a worker and passes the URL and POST data to it.
    4. The worker simply posts to the URL with the data, thus executing the task.

    Assume the following: each request from a gearman worker is signed somehow so that we know it's coming from a gearman server and not a malicious request (see the sketch below), and tasks are limited to run in less than 10 seconds (there would be no long tasks that could time out). What are the potential pitfalls of such an approach? Here's one that worries me: the server can potentially get hammered with many requests all at once that are triggered by a previous request, so one user request might entail 10 concurrent HTTP requests. I suppose I could have a single worker with a sleep before every request to rate-limit. Any thoughts?
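
    To make the "signed somehow" assumption concrete, what I have in mind is an HMAC over the target URL and POST body, using a secret shared between the app and the task endpoints. A rough sketch (in C# purely for illustration; the header name and the exact string that gets signed are placeholders):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public static class TaskSigner
        {
            // Sent by the worker, e.g. in an X-Task-Signature header (name is arbitrary).
            public static string Sign(string url, string postBody, byte[] sharedSecret)
            {
                using (var hmac = new HMACSHA256(sharedSecret))
                {
                    byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(url + "\n" + postBody));
                    return Convert.ToBase64String(mac);
                }
            }

            // The task URL recomputes the signature and rejects the request if it doesn't match.
            public static bool Verify(string url, string postBody, byte[] sharedSecret, string signature)
            {
                return Sign(url, postBody, sharedSecret) == signature;
            }
        }

    Adding a timestamp or nonce to the signed string would also guard against someone replaying an old, validly signed task request.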

    Read the article

  • GAE Task Queue oddness

    - by b3nw
    I have been testing the task queue with mixed success. Currently I am using the default queue, with default settings, etc. I have a test URL set up which inserts about 8 tasks into the queue. In short order, all 8 are completed properly. So far so good. The problem comes up when I reload that URL twice within, say, a minute. Watching the task queue, all the tasks are added properly, but it seems only the first batch executes, even though the "Run in Last Minute" count shows the right number of tasks being run. The request logs tell a different story: they show only the first set of 8 running, but all of the task-creation URLs completing successfully. The odd thing is that if I wait, say, a minute between the task-creation requests, it works fine. Changing the bucket_size or execution rate does not seem to help; only the first batch is executed. I have also reduced the number of requests all the way down to 2, and still found only the first 2 execute. Any others added show the same behaviour as above. Any suggestions? Thanks

    Read the article

  • RESTful idempotence

    - by DutrowLLC
    I'm designing a RESTful web service utilizing ROA (Resource Oriented Architecture). I'm trying to work out an efficient way to guarantee idempotence for PUT requests that create new resources in cases where the server designates the resource key. From my understanding, the traditional approach is to create a type of transaction resource such as /CREATE_PERSON. The client-server interaction for creating a new person resource would then be in two parts:

    Step 1: Get a unique transaction id for creating the new PERSON resource:

        Client request:
            GET /CREATE_PERSON
        Server response:
            200 OK
            transaction-id: "as8yfasiob"

    Step 2: Create the new person resource in a request guaranteed to be unique by using the transaction id:

        Client request:
            PUT /CREATE_PERSON/{transaction_id}
            first_name="Big bubba"
        Server response:
            201 Created
            PersonKey="398u4nsdf"

    (If the request is a duplicate, the server would send this same response without creating a new resource. It would perhaps send an error response if the transaction id was used on a non-duplicate request, but I have control over the client, so I can guarantee that this won't happen.)

    The problem I see with this approach is that it requires sending two requests to the server in order to do the single operation of creating a new PERSON resource. This creates a performance issue, increasing the chance that the user will be waiting around for the client to complete their request. I've been trying to hash out ideas for eliminating the first step, such as pre-sending transaction ids with each request, but most of my ideas have other issues or involve sacrificing the statelessness of the application. Is there a way to do this?
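
    For reference, the duplicate check in step 2 only has to map a transaction id to the result it already produced. A rough in-memory sketch of that check (names are mine; in a real service the lookup and the insert would be a single database transaction so the mapping survives restarts):

        using System;
        using System.Collections.Generic;

        public class PersonCreator
        {
            // transaction id -> key of the person that id already created
            private readonly Dictionary<string, string> _completed = new Dictionary<string, string>();
            private readonly object _gate = new object();

            // Returns the same PersonKey every time the same transaction id is replayed.
            public string CreatePerson(string transactionId, string firstName)
            {
                lock (_gate)
                {
                    string existingKey;
                    if (_completed.TryGetValue(transactionId, out existingKey))
                        return existingKey;

                    string personKey = Guid.NewGuid().ToString("N");
                    // ... insert the person record here ...
                    _completed[transactionId] = personKey;
                    return personKey;
                }
            }
        }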

    Read the article

  • MVC Areas - View not found

    - by user314827
    Hi, I have a project that is using MVC areas. The area has the entire project in it, while the main Views/Controllers/Models folders outside the Areas are empty barring a dispatch controller I have set up that routes default incoming requests to the Home controller in my area. This controller has one method, as follows:

        public ActionResult Index(string id)
        {
            return RedirectToAction("Index", "Home", new { area = "xyz" });
        }

    I also have a default route set up to use this controller, as follows:

        routes.MapRoute(
            "Default", // Default route
            "{controller}/{action}/{id}",
            new { controller = "Dispatch", action = "Index", id = UrlParameter.Optional }
        );

    Any default requests to my site are appropriately routed to the relevant area. The area's RegisterArea method has a single route:

        context.MapRoute(
            "xyz_default",
            "xyz/{controller}/{action}/{id}",
            new { action = "Index", id = UrlParameter.Optional }
        );

    My area has multiple controllers with a lot of views. Any call to a specific view in these controller methods, like return View("blah"), renders the correct view. However, whenever I try to return a view along with a model object passed in as a parameter, I get the following error:

        Server Error in '/DeveloperPortal' Application.
        The view 'blah' or its master was not found. The following locations were searched:
        ~/Views/Profile/blah.aspx
        ~/Views/Profile/blah.ascx
        ~/Views/Shared/blah.aspx
        ~/Views/Shared/blah.ascx

    It looks like whenever a model object is passed as a parameter to the View() method [e.g. return View("blah", obj)] it searches for the view in the root of the project instead of in the area-specific view folder. What am I missing here? Thanks in advance.

    Read the article

  • Execute remote Lua Script

    - by Bruno Lee
    Hi, I want to make an application that executes a remote script. The user can create a script (probably a Lua script) and then store it on the server. He can then use an API to execute the script. I was thinking that API could be a web service. So my questions are: I need high performance to execute the script, so my first choice was a Lua script. Does anyone have another suggestion? Because I need high performance, I'm wondering whether a web service is the best solution. Maybe I could create a TCP/IP Windows service that handles the users' requests. It is important to say that I will have many users executing scripts at the same time, so I will have a concurrency problem. My scripts will query a database. I will use Tokyo Cabinet or Tokyo Tyrant; I think Tokyo Tyrant is the only solution because I will have many requests. For performance, do I need connection pooling? Is there any way to share variables between web service requests? To build the web service or the Windows service I was thinking of using C++. Can someone help with these questions? Thanks

    Read the article

  • java.net.SocketException: Software caused connection abort: recv failed; Causes and cures?

    - by IVR Avenger
    Hi, all. I've got an application running on Apache Tomcat 5.5 on a Win2k3 VM. The application serves up XML to be consumed by some telephony appliances as part of our IVR infrastructure. The application, in turn, receives its information from a handful of SOAP services. This morning, the SOAP services were timing out intermittently, causing all sorts of exceptions. Once these stopped, I noticed that our application was still performing very slowly, in that it took a long time to render and deliver pages. This sluggishness was noticed both on the appliances that consume the Tomcat output and from a simple test of requesting some static documents from my web browser. Restarting Tomcat immediately resolved the issue. Cracking open the localhost log, I see a ton of these errors, right up until I restarted Tomcat:

        WARNING: Exception thrown whilst processing POSTed parameters
        java.net.SocketException: Software caused connection abort: recv failed

    After a bit of Googling, my working theory is that the SOAP issue caused my users to get errors, which caused them to make more requests, which put an increased load on the application. This caused it to run out of available sockets to handle incoming requests. So, here's my quandary:

    1. Is this a valid hypothesis, or am I just in over my head with HTTP and Tomcat?
    2. If this is a valid hypothesis, is there a way to increase the size of the "socket queue", so that this doesn't happen in the future?

    Thanks! IVR Avenger

    Read the article

  • What is the best way to process XML sent to WCF 3.5

    - by CRM Junkie
    I have to develop a WCF application in 3.5. The input will be sent in the form of XML, and the response will be sent as XML as well. An ASP.NET application will be consuming the WCF service and sending/receiving data in XML format. Now, as per my understanding, when consuming WCF from an ASP.NET application, we just add a reference to the service, create an object of the service, pack all the necessary data (Data Members in WCF) into the input object (an object of the Data Contract) and call the necessary function. It happens that the ASP.NET application is being developed by a separate party and they are hell-bent on receiving and sending data in XML format. What I can perceive from this is that the WCF service will take the XML string (a single string Data Member) as input and send out an XML string (again a single string Data Member) as output. I have created WCF applications earlier where requests and responses were sent out in XML/JSON format when consumed by jQuery AJAX calls. In those cases, the XML tags were automatically mapped to the different Data Members defined. What approach should I take in this case? Should I just take a string as input (basically the XML string), or is there any way WCF/.NET 3.5 will automatically map the XML tags to the Data Members for requests and responses, so that I would not need to parse the XML string separately?
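
    To illustrate the two shapes I'm weighing (the contract and member names below are made up): a typed Data Contract that WCF serializes for me, versus a raw string I parse myself.

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class OrderRequest
        {
            [DataMember] public string CustomerId { get; set; }
            [DataMember] public int Quantity { get; set; }
        }

        [DataContract]
        public class OrderResponse
        {
            [DataMember] public string OrderId { get; set; }
        }

        [ServiceContract]
        public interface IOrderService
        {
            // WCF maps the request/response XML to these Data Members automatically.
            [OperationContract]
            OrderResponse PlaceOrder(OrderRequest request);

            // Taking a raw XML string instead means parsing and validating it myself.
            [OperationContract]
            string PlaceOrderRaw(string xml);
        }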

    Read the article

  • Locking issues with replacing files on a website

    - by Moe Sisko
    I want to replace existing files on an IIS website with updated versions. Say these files are large PDF documents, which can be accessed via hyperlinks. The site is up 24x7, so I'm concerned about locking issues when a file is being updated at exactly the same time that someone is trying to read it. The files are updated using C# code run on the server. I can think of two options for opening the file for writing.

    Option 1: open the file for writing using FileShare.Read:

        using (FileStream stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.Read))

    While this file is open and a user requests the same file for reading in a web browser via a hyperlink, the document opens up as a blank page.

    Option 2: open the file for writing using FileShare.None:

        using (FileStream stream = new FileStream(path, FileMode.Create, FileAccess.Write, FileShare.None))

    While this file is open and a user requests the same file for reading in a web browser via a hyperlink, the browser shows an error. In IE 8 you get HTTP 500, "The website cannot display the page", and in Firefox 3.5 you get: "The process cannot access the file because it is being used by another process."

    The browser behaviour kind of makes sense, and seems reasonable. I guess it's highly unlikely that a user will attempt to read a file at exactly the same time you are updating it. It would be nice if, somehow, the file update was atomic, like updating a database with SQL wrapped in a transaction. I'm wondering if you guys worry about this sort of thing, and prefer either of the above options, or even have other options of your own for updating files.
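
    What I'd ideally like is something closer to an atomic swap, e.g. writing the new version to a temp file in the same directory and swapping it in with File.Replace. A rough sketch (I realize this narrows the window rather than removing it, since a reader that still has the old file open can make the swap fail with a sharing violation):

        using System.IO;

        public static class FilePublisher
        {
            // Write newContent to a temp file next to 'path', then swap it into place.
            public static void Publish(string path, byte[] newContent)
            {
                string temp = path + ".tmp";
                File.WriteAllBytes(temp, newContent);

                if (File.Exists(path))
                    File.Replace(temp, path, path + ".bak"); // near-atomic swap on NTFS
                else
                    File.Move(temp, path);
            }
        }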

    Read the article

  • PHP Performance Metrics

    - by bigstylee
    I am currently developing a PHP MVC framework for a personal project. While I am developing the framework I am interested to see whether different optimization techniques produce any notable performance difference. I have implemented a crude Benchmark class that logs microtime. The problem is I have no frame of reference for execution times. I am very near the beginning of this project, with a database connection and a few queries but no output (bar some debugging text and the Benchmark log). I have a current execution time of 0.01917 seconds. I was expecting this to be lower, but as I said before I have no frame of reference. I appreciate there are many variables to take into account when judging performance, but I am hoping to find some sort of metric to a) measure performance, for example requests per second, and b) compare results, for example how a "moderately" sized PHP application on a "standard" webserver will perform. I appreciate "moderately" and "standard" are very subjective words, so perhaps a table of known execution times for a particular application (e.g. Stack Overflow's execution time). What other techniques of measuring performance are there besides execution time? When looking at the MVC Framework Performance Comparison, it talks about Requests Per Second (RPS). How is this calculated? I am guessing that my current execution time of 0.01917 seconds means I can handle about 52 RPS (= 1 / 0.01917). This seems to be significantly lower than what's quoted on the graph, especially when you consider my current limited functionality.

    Read the article

  • PageMethod with Custom Handler

    - by Mister Cook
    I have a page-based web method which works fine, using [WebMethod], i.e.

        [WebMethod]
        public static void DoSomethingOnTheServer() { }

    I also have a custom page handler to make SEO-friendly URLs possible, e.g.

        http://localhost/MySite/A_nice_url.ext = http://localhost/MySite/page.aspx?id=1

    I am using Server.Transfer in my ProcessRequest handler to achieve this. This all works fine. Also, my page method works fine when the user requests a URL such as http://localhost/MySite/page.aspx?id=1. However, when the user requests the custom handled URL, i.e. http://localhost/MySite/A_nice_url.ext, the client reports that the PageMethod has completed successfully, but it has not been called at all. I'm guessing it has something to do with my custom handler, so I have modified it to include PathInfo:

        public void ProcessRequest(HttpContext context)
        {
            // ... (code to determine id) ...
            // Transfer to the required page
            string actualPath = "~/page.aspx" + context.Request.PathInfo + "?id=" + determinedId;
            context.Server.Transfer(actualPath);
        }

    So now if a PageMethod is called, it will result in http://localhost/MySite/page.aspx/DoSomethingOnTheServer?id=1. I'm wondering if this is the correct syntax for calling a PageMethod. When I try it, Server.Transfer reports:

        Error executing child request for /MySite/page.aspx/DoSomethingOnTheServer

    I've experimented with HttpContext.RewritePath but am not sure how to actually make it perform the request. This does not seem to work; any ideas?

    Read the article

  • Apache with JBOSS using AJP (mod_jk) giving spikes in thread count.

    - by Beginner
    We used Apache with JBOSS for hosting our application, but we found some issues related to mod_jk's thread handling. Our website is a low-traffic site with at most 200-300 concurrent users during peak activity. As the traffic grows (not in terms of concurrent users, but in terms of cumulative requests reaching our server), the server stops serving requests for a long time; it doesn't crash, but it cannot serve requests for up to 20 minutes. The JBOSS server console showed that 350 threads were busy on both servers, although there was enough free memory, say more than 1-1.5 GB (two 64-bit JBOSS servers were used, with 4 GB RAM allocated to JBOSS). To investigate, we used the JBOSS and Apache web consoles, and we saw threads sitting in the S state for minutes at a time, although our pages take around 4-5 seconds to be served. We took a thread dump and found that the threads were mostly in the WAITING state, which means they were waiting indefinitely. These threads were not from our application classes but from the AJP 8009 port. Could somebody help me with this? Somebody else might have hit this issue and solved it somehow. If any more information is required, let me know. Also, is mod_proxy better than mod_jk, or are there other problems with mod_proxy which could be fatal for me if I switch? The versions I used are as follows: Apache 2.0.52, JBOSS 4.2.2, mod_jk 1.2.20, JDK 1.6, operating system RHEL 4. Thanks for the help.

    Read the article

  • eventmachine and external scripts via backticks

    - by Maciek
    I have a small HTTP server script I've written using eventmachine which needs to call external scripts/commands and does so via backticks (``). When serving up requests which don't run backticked code, everything is fine, however, as soon as my EM code executes any backticked external script, it stops serving requests and stops executing in general. I noticed eventmachine seems to be sensitive to sub-processes and/or threads, and appears to have the popen method for this purpose, but EM's source warns that this method doesn't work under Windows. Many of the machines running this script are running Windows, so I can't use popen. Am I out of luck here? Is there a safe way to run an external command from an eventmachine script under Windows? Is there any way I could fire off some commands to be run externally without blocking EM's execution? edit: the culprit that seems to be screwing up EM the most is my usage of the Windows start command, as in: start java myclass. The reason I'm using start is because I want those external scripts to start running and keep running after the EM request is served

    Read the article

  • Nonetype object has no attribute '__getitem__'

    - by adohertyd
    I am trying to use an API wrapper downloaded from the net to get results from the new Azure Bing API. I'm trying to implement it as per the instructions but getting the runtime error:

        Traceback (most recent call last):
          File "bingwrapper.py", line 4, in <module>
            bingsearch.request("affirmative action")
          File "/usr/local/lib/python2.7/dist-packages/bingsearch-0.1-py2.7.egg/bingsearch.py", line 8, in request
            return r.json['d']['results']
        TypeError: 'NoneType' object has no attribute '__getitem__'

    This is the wrapper code:

        import requests

        URL = 'https://api.datamarket.azure.com/Data.ashx/Bing/SearchWeb/Web?Query=%(query)s&$top=50&$format=json'
        API_KEY = 'SECRET_API_KEY'

        def request(query, **params):
            r = requests.get(URL % {'query': query}, auth=('', API_KEY))
            return r.json['d']['results']

    The instructions are:

        >>> import bingsearch
        >>> bingsearch.API_KEY='Your-Api-Key-Here'
        >>> r = bingsearch.request("Python Software Foundation")
        >>> r.status_code
        200
        >>> r[0]['Description']
        u'Python Software Foundation Home Page. The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to ...'
        >>> r[0]['Url']
        u'http://www.python.org/psf/'

    This is my code that uses the wrapper (as per the instructions):

        import bingsearch

        bingsearch.API_KEY = 'abcdefghijklmnopqrstuv'
        r = bingsearch.request("affirmative+action")

    Read the article

  • Questions about the MVC architecture

    - by ah123
    I started coding a considerably complicated web application, and it became quite a mess, so I figured I'd try to organize it in a better way. MVC seemed appropriate. I've never used MVC before, and while researching it I'm trying to consolidate a better perception of it (my questions obviously reflect what I think I've learned so far). My questions are slightly JavaScript oriented:

    1. What object should make "AJAX" requests, the Controller or the Model? (Separation: should the Model just store/manipulate the data and not care/know where the data came from, or should it be the one fetching it?)
    2. Should the Model call View functions, providing them with data as arguments, or should the View query (reference) the Model within itself? (With separation principles in mind, "the View shouldn't care/know where it gets the data from"; is that correct?)
    3. In general, should the View "know" of the Model's existence, and vice versa? Is the Controller the only thing gluing them together, or is that simply incorrect? (I really doubt that statement is generally correct.)

    There's a good chance I'd want to port this into a desktop/mobile application, so I would like to separate components in a way that will allow me to achieve that: replacing the current source of the data (HTTP requests) with DB access, and replacing the View. Maybe every approach that I've asked about is still "valid" MVC and it's just up to me to choose. I understand that nothing is set in stone; I'm just trying to have a (better) general idea in my head.

    Read the article

  • From ASPX to WCF

    - by Barguast
    I'm hoping someone can advise me on how to solve my networking scenario. Both the client and server are to be C# / .NET based. I basically want to invoke some kind of web service from my client in order to retrieve both binary data (e.g. files) and serialised objects and lists of objects (e.g. database query results). At the moment, I'm using ASPX pages, using the query string to provide parameters, and I get back either the binary data or the binary data of the serialised messages. This affords me a lot of flexibility, and I can choose how to transmit the data, perform simultaneous requests, cancel ongoing requests, etc. Since I can control the serialised format, I can also deserialise lists of objects as they are received, which is crucial. My problem isn't a problem as such, but this feels a little hack-ish and I can't help but wonder if there are better ways to go about it. I'm considering moving on to WCF or perhaps another technology to see if it helps. However, I need to know if it helps with my scenarios above, that is: Can a WCF method return a list of objects, and can the client receive the items of this list as they arrive, as opposed to getting the entire list on completion (i.e. streaming)? Does anyone know of any examples of this? Am I likely to get any performance benefits from this? I don't know how well ASPX pages are tuned for this, as it surely isn't their primary purpose. Are there any other approaches I should consider? Thanks for your time spent reading this. I hope you can help.
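
    To be concrete, the WCF shape I'm asking about looks roughly like this, as far as I can tell (names made up); the binding has to opt in to streamed transfer for the client to start reading before the server finishes writing:

        using System.IO;
        using System.ServiceModel;

        [ServiceContract]
        public interface IResultService
        {
            // Returning a Stream (rather than a List<T>) is what would let the client
            // consume the response incrementally instead of waiting for completion.
            [OperationContract]
            Stream GetResults(string query);
        }

        // Binding configuration (app.config / web.config):
        //
        //   <basicHttpBinding>
        //     <binding name="streamed" transferMode="StreamedResponse"
        //              maxReceivedMessageSize="2147483647" />
        //   </basicHttpBinding>

    As I understand it, with the default buffered transfer, returning a List<T> means the whole list is serialised and received before any items are visible to the client, which is exactly what I'm trying to avoid.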

    Read the article

  • what webserver / mod / technique should I use to serve everything from memory?

    - by reinier
    I have lots of lookup tables from which I'll generate my web response. I think IIS with ASP.NET enables me to keep static lookup tables in memory which I can use to serve up my responses very fast. Are there, however, also non-.NET solutions which can do the same? I've looked at FastCGI, but I think this starts X processes, each of which can handle Y requests. But the processes are by definition shielded from each other. I could configure FastCGI to use just one process, but does this have scalability implications? Anything using PHP or any other interpreted language won't fly because it is also CGI or FastCGI bound, right? I understand memcache could be an option, though this would require another (local) socket connection, which I'd rather avoid since everything in memory would be much faster. The solution can work under Windows or Unix; it doesn't matter too much. The only thing which matters is that there will be a lot of requests (100/sec now and growing to 500/sec in a year), and I want to reduce the number of webservers needed to process them. The current solution is done using PHP and memcache (and the occasional hit to the SQL Server backend). Although it is fast (for PHP anyway), Apache has real problems once 50/sec is passed. I've put a bounty on this question since I've not seen enough responses to make a wise choice. At the moment I'm considering either ASP.NET or FastCGI with C(++).

    Read the article

  • implementing a download manager that supports resuming

    - by Idan K
    Hi, I intend to write a small download manager in C++ that supports resuming (and multiple connections per download). From the info I've gathered so far, when sending the HTTP request I need to add a header field with a key of "Range" and the value "bytes=startoff-endoff". The server then returns an HTTP response with the data between those offsets. So roughly what I have in mind is to split the file according to the number of allowed connections per file and send an HTTP request per part with the appropriate "Range". So if I have a 4 MB file and 4 allowed connections, I'd split the file into 4 parts and have 4 HTTP requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets are already downloaded and simply not requesting those.

    1. Is this the right way to do this? What if the web server doesn't support resuming? (My guess is it will ignore the "Range" and just send the entire file; the sketch below is how I'd check which happened.)
    2. When sending the HTTP requests, should I specify the entire part size in the range, or ask for smaller pieces, say 1024 KB per request?
    3. When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks.
    4. Should I use a memory-mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it memory-wise? What if I have several downloads running simultaneously?
    5. If I'm not using a memory-mapped file, should I open the file once per allowed connection, or simply seek when needing to write? (If I did use a memory-mapped file this would be really easy, since I could simply have several pointers.)

    Note: I'll probably be using Qt, but this is a general question so I left code out of it.
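
    Purely to illustrate the Range mechanics I described (sketched in C# only for illustration, since I'll actually be writing this in C++/Qt; the URL and file names are placeholders), here is a request for one part of a file and a check on whether the server honoured the range:

        using System;
        using System.IO;
        using System.Net;

        class RangeDownloadSketch
        {
            static void Main()
            {
                // Ask for bytes 0..1048575 (the first 1 MB) of the file.
                var request = (HttpWebRequest)WebRequest.Create("http://example.com/big-file.bin");
                request.AddRange(0, 1048575);

                using (var response = (HttpWebResponse)request.GetResponse())
                using (Stream body = response.GetResponseStream())
                using (FileStream part = File.Create("big-file.part0"))
                {
                    // 206 PartialContent: the server honoured the Range header.
                    // 200 OK: it ignored the header and is sending the whole file.
                    Console.WriteLine(response.StatusCode);

                    // Buffer in memory and flush to disk in reasonably sized chunks.
                    byte[] buffer = new byte[64 * 1024];
                    int read;
                    while ((read = body.Read(buffer, 0, buffer.Length)) > 0)
                        part.Write(buffer, 0, read);
                }
            }
        }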

    Read the article

  • Throttling outbound API calls generated by a Rails app

    - by Sharpie
    I am not a professional web developer, but I like to wrench on websites as a hobby. Recently, I have been playing with developing a Rails app as a project to help me learn the framework. The goal of my toy app is to harvest data from another service through their API and make it available for me to query using a search function. However, the service I want to pull data from imposes a rate limit on the number of API calls that may be executed per minute. I plan on having my app run a daily update which may generate a burst of API calls that far exceeds the limit provided by the external service. I wish to respect the performance of the external site and so would like to throttle the rate at which my app executes the calls. I have done a little bit of searching and the overwhelming amount of tutorial material and pre-built libraries I have found cover throttling inbound API calls to a web app and I can find little discussion of controlling the flow of outbound calls. Being both an amateur web developer and a rails newbie, it is entirely possible that I have been executing the wrong searches in the wrong places. Therefore my questions are: Is there a nice website out there aggregating Rails tutorials that has material related to throttling outbound API requests? Are there any ruby gems or other libraries that would help me throttle the requests? I have some ideas of how I might go about writing a throttling system using a queue-based worker like DelayedJob or Resque to manage the API calls, but I would rather spend my weekends building the rest of the site if there is a good pre-built solution out there already.
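
    The core of what I'm after is just enforcing a minimum interval between outbound calls from whatever worker ends up making them (DelayedJob, Resque, or similar). A rough sketch of that logic, written in C# purely for illustration and assuming a single process makes all the calls:

        using System;
        using System.Threading;

        public class OutboundRateLimiter
        {
            private readonly TimeSpan _minInterval;
            private DateTime _nextAllowed = DateTime.MinValue;
            private readonly object _gate = new object();

            public OutboundRateLimiter(int callsPerMinute)
            {
                _minInterval = TimeSpan.FromMinutes(1.0 / callsPerMinute);
            }

            // Blocks the calling worker until it is allowed to make the next API call.
            public void WaitForSlot()
            {
                TimeSpan wait;
                lock (_gate)
                {
                    DateTime now = DateTime.UtcNow;
                    if (_nextAllowed < now) _nextAllowed = now;
                    wait = _nextAllowed - now;
                    _nextAllowed += _minInterval;
                }
                if (wait > TimeSpan.Zero) Thread.Sleep(wait);
            }
        }

    The same structure drops into a queue-based worker: each job calls WaitForSlot (or its Ruby equivalent) before hitting the external API, so a burst of enqueued jobs drains at the permitted rate.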

    Read the article

  • How should I implement lazy session creation in PHP?

    - by Adam Franco
    By default, PHP's session handling mechanisms set a session cookie header and store a session even if there is no data in the session. If no data is set in the session, then I don't want a Set-Cookie header sent to the client in the response, and I don't want an empty session record stored on the server. If data is added to $_SESSION, then the normal behavior should continue. My goal is to implement lazy session creation behavior of the sort used by Drupal 7 and Pressflow, where no session is stored (and no session cookie header is sent) unless data is added to the $_SESSION array during application execution. The point of this behavior is to allow reverse proxies such as Varnish to cache and serve anonymous traffic while letting authenticated requests pass through to Apache/PHP. Varnish (or another proxy server) is configured to pass through any requests without cookies, assuming correctly that if a cookie exists then the request is for a particular client. I have ported the session handling code from Pressflow that uses session_set_save_handler() and overrides the implementation of session_write() to check for data in the $_SESSION array before saving; I will write this up as a library and add an answer here if this is the best/only route to take. My question: while I can implement a fully custom session_set_save_handler() system, is there an easier way to get this lazy session creation behavior in a relatively generic way that would be transparent to most applications?

    Read the article

  • Form values in a list item

    - by Tri
    Here is the site mock-up I'm working on for my job: http://dev.arm.gov/~noensie/dqhands/cgi-bin/explorer. I'm still a novice in web development and I need help with placing form values in a list item to pass on to another page. I'd rather not go into great detail about the purpose of this website, but in terms of its basic use, you select the parameters for a request in the middle column and add it to the list in the right column by pressing the "add request" button when it appears. After the list has been populated, a user would submit the selected requests, which will direct them to another page based on the requests selected (or added to the list, same thing). That last sentence is where I'm having a problem. Right now, each request in the list in the right column is an <li> element, and I assign them attribute values that need to be passed on to the next page. I tried inserting a hidden input with the same values, but I'm still not sure how to utilize that; I'm not even sure using the hidden input is the correct way. Also, the "submit request" button is located outside the <form> block. I was going to use JavaScript and jQuery to enable the button to serialize the values in the <form> block, but I don't know quite how to do that. Go ahead and take a look at my JavaScript code (index.js) and slay me, or, I mean, my code. It's still pretty elementary and short (~260 lines, that's short right?). I will take any help for this problem (as a matter of fact, if you see any other problems or a better way of implementing something, go ahead and mention that too); tips, advice, code samples, or whatever else you can contribute will be greatly appreciated. Tri

    Read the article

  • How to File Transfer client to Server?

    - by Phsika
    I try to receive a file from the server, but it gives me an error on server.Start():

        An attempt was made to access a socket in a way forbidden by its access permissions.

    How can I solve it?

        private void btn_Recieve_Click(object sender, EventArgs e)
        {
            TcpListener server = null;
            // Set the TcpListener on port 13000.
            Int32 port = 13000;
            IPAddress localAddr = IPAddress.Parse("192.168.1.201");

            // TcpListener server = new TcpListener(port);
            server = new TcpListener(localAddr, port);

            // Start listening for client requests.
            server.Start();

            // Buffer for reading data
            Byte[] bytes = new Byte[277577];
            String data;
            data = null;

            // Perform a blocking call to accept requests.
            // You could also use server.AcceptSocket() here.
            TcpClient client = server.AcceptTcpClient();
            NetworkStream stream = client.GetStream();

            int i;
            i = stream.Read(bytes, 0, 277577);

            BinaryWriter writer = new BinaryWriter(File.Open("GoodLuckToMe.jpg", FileMode.Create));
            writer.Write(bytes);
            writer.Close();
            client.Close();
        }

    Read the article

  • silverlight security with WCF service, Forms Authentication and Custom Form Ticket

    - by user74825
    I have a Silverlight application with a login on the Silverlight page. It uses forms authentication with the WCF authentication service and a custom Membership Provider, something like: http://blogs.msdn.com/phaniraj/archive/2009/09/10/using-the-ado-net-data-services-silverlight-client-library-in-x-domain-and-out-of-browser-scenarios-ii-forms-authentication.aspx. So, the SL login page calls the WCF authentication service, which validates against the DB and brings back the username and password. Now, in each subsequent call (in Global.asax, in Authenticate_Request, I get HttpContext.User.IsAuthenticated and HttpContext.User.UserName). I have all of this working properly. But I don't want just the username; I want more information surrounding the user, like UserId, UserAddress, UserAssociateCustomer, etc. I tried a couple of different approaches:

    1. Use HttpContext.Cache as a dictionary to save the item and get it back based on HttpContext.User's name; the problem is the cache can be evicted if memory is being used heavily.
    2. Use a custom forms auth ticket: when forms authentication writes a ticket, I intercept the CreatingCookie method and write additional info into the forms authentication ticket so that I can read it in subsequent requests. I am having problems with this approach; I don't find the ticket in subsequent requests.

    I read about how we should use Response.Redirect, but where do I redirect the user from a WCF call? How do you implement the above scenario? Any best practices? Any issues you see with going over HTTPS? All the examples (or most of them) just explain simple forms authentication with an "I am logged in" message. Any suggestions?
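
    For approach 2, what I'm trying to do is carry the extra fields in the ticket's UserData, roughly like this (helper names are mine; the idea would be to write the cookie along these lines in the CreatingCookie handler and read it back on later requests):

        using System;
        using System.Web;
        using System.Web.Security;

        public static class AuthTicketHelper
        {
            // Issue a forms ticket whose UserData carries extra fields
            // (here a simple pipe-separated string such as "userId|customer").
            public static void IssueTicket(HttpResponse response, string userName, string userData)
            {
                var ticket = new FormsAuthenticationTicket(
                    1, userName, DateTime.Now, DateTime.Now.AddMinutes(30),
                    false, userData, FormsAuthentication.FormsCookiePath);

                var cookie = new HttpCookie(FormsAuthentication.FormsCookieName,
                                            FormsAuthentication.Encrypt(ticket));
                response.Cookies.Add(cookie);
            }

            // On later requests (e.g. in Authenticate_Request), read the extra fields back.
            public static string ReadUserData(HttpRequest request)
            {
                HttpCookie cookie = request.Cookies[FormsAuthentication.FormsCookieName];
                if (cookie == null) return null;

                FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);
                return ticket == null ? null : ticket.UserData;
            }
        }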

    Read the article
