Search Results

Search found 5793 results on 232 pages for 'requests'.

Page 148/232

  • MySQL: First and last record of a grouped record (aggregate functions)

    - by Jimmy
    I am trying to fetch the first and the last record of a 'grouped' result. More precisely, I am doing a query like this: SELECT MIN(low_price), MAX(high_price), open, close FROM symbols WHERE date BETWEEN(.. ..) GROUP BY YEARWEEK(date), but I'd like to get the first and the last record of each group. It could be done with tons of separate requests, but I have a quite large table. Is there a way (with low processing time, if possible) to do this with MySQL?
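    If a pure-SQL solution proves awkward, the grouping can also be done in application code on rows already sorted by date; a rough Python sketch of that fallback (the column and field names below are assumptions, not from the question):

        # Fallback sketch: compute per-week open/close/low/high in application
        # code from rows sorted ascending by 'date'. Column names are assumed.
        from itertools import groupby

        def weekly_summary(rows):
            # rows: iterable of dicts with 'date', 'open', 'close',
            # 'low_price', 'high_price', sorted ascending by 'date'.
            summaries = []
            for (year, week), group in groupby(rows, key=lambda r: r["date"].isocalendar()[:2]):
                group = list(group)
                summaries.append({
                    "year": year, "week": week,
                    "open": group[0]["open"],       # first record of the week
                    "close": group[-1]["close"],    # last record of the week
                    "low": min(r["low_price"] for r in group),
                    "high": max(r["high_price"] for r in group),
                })
            return summaries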

    Read the article

  • Waiting on multiple asynchronous calls to complete before continuing

    - by Chad
    So, I have a page that loads and, through jQuery.get, makes several requests to populate drop-downs with their values: $(function() { LoadCategories($('#Category')); LoadPositions($('#Position')); LoadDepartments($('#Department')); LoadContact(); }); It then calls LoadContact(), which does another call, and when it returns it populates all the fields on the form. The problem is that the drop-downs often aren't all populated yet, so it can't set them to the correct value. What I need to be able to do is somehow have LoadContact execute only once the other methods are complete and their callbacks have finished executing. But I don't want to put a bunch of flags at the end of the drop-down population callbacks that I then check with a recursive setTimeout call before calling LoadContact(). Is there something in jQuery that allows me to say, "Execute this when all of these are done"?

    Read the article

  • IIS reveals internal IP address in content-location field - fix

    - by saille
    Referring to http://support.microsoft.com/kb/q218180/, there is a known issue in IIS 4/5/6 whereby it reveals the internal IP of a web server in the content-location field of the HTTP header. We have IIS 6. I have tried the suggested fix, but it has not worked. The website is configured to send all requests to ASP.NET, and I am wondering if this is why the fix, which addresses IIS configuration, has not worked for us. If this is the case, how would we fix this in ASP.NET? We need to fix this issue in order to pass a security audit.

    Read the article

  • Can EC2 instances be set up to come from different IP ranges?

    - by Joshua Frank
    I need to run a web crawler and I want to do it from EC2 because I want the HTTP requests to come from different IP ranges so I don't get blocked. So I thought distributing this on EC2 instances might help, but I can't find any information about what the outbound IP range will be. I don't want to go to the trouble of figuring out the extra complexity of EC2 and distributed data, only to find that all the instances use the same address block and I get blocked by the server anyway. NOTE: This isn't for a DoS attack or anything. I'm trying to harvest data for a legitimate business purpose, I'm respecting robots.txt, and I'm only making one request per second, but the host is still shutting me down. Edit: Commenter Paul Dixon suggests that the act of blocking even my modest crawl indicates that the host doesn't want me to crawl them and therefore that I shouldn't do it (even assuming I can work around the blocking). Do people agree with this?

    Read the article

  • Is this a correct Interlocked synchronization design?

    - by Dan Bryant
    I have a system that takes Samples. I have multiple client threads in the application that are interested in these Samples, but the actual process of taking a Sample can only occur in one context. It's fast enough that it's okay for it to block the calling process until Sampling is done, but slow enough that I don't want multiple threads piling up requests. I came up with this design (stripped down to minimal details):

        public class Sample
        {
            private static Sample _lastSample;
            private static int _isSampling;

            public static Sample TakeSample(AutomationManager automation)
            {
                //Only start sampling if not already sampling in some other context
                if (Interlocked.CompareExchange(ref _isSampling, 0, 1) == 0)
                {
                    try
                    {
                        Sample sample = new Sample();
                        sample.PerformSampling(automation);
                        _lastSample = sample;
                    }
                    finally
                    {
                        //We're done sampling
                        _isSampling = 0;
                    }
                }
                return _lastSample;
            }

            private void PerformSampling(AutomationManager automation)
            {
                //Lots of stuff going on that shouldn't be run in more than one context at the same time
            }
        }

    Is this safe for use in the scenario I described?
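    For comparison only, the same "one thread samples, everyone else reuses the last result" pattern can be sketched in Python with a non-blocking lock; this illustrates the intended behaviour and is not a translation of the C# above (perform_sampling is an assumed placeholder):

        # One thread takes the sample; concurrent callers skip the work and
        # just return the most recent result.
        import threading

        _sampling_lock = threading.Lock()
        _last_sample = None

        def take_sample():
            global _last_sample
            if _sampling_lock.acquire(blocking=False):   # only one sampler at a time
                try:
                    _last_sample = perform_sampling()    # assumed slow, single-context work
                finally:
                    _sampling_lock.release()
            return _last_sample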

    Read the article

  • Facebook-like on-demand meta content scraper

    - by Tobias
    Have you ever noticed that Facebook scrapes the link you post (in a status, message, etc.) live, right after you paste it into the link field, and displays various metadata: a thumbnail of the image, various images from the linked page, or a video thumbnail for a video-related link (like YouTube)? Any ideas how one would copy this function? I'm thinking about a couple of Gearman workers, or even better just JavaScript that does XHR requests and parses the content based on regexes or something similar. Any ideas? Any links? Has someone already tried to do the same and wrapped it in a nice class? Thanks!
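    A minimal sketch of the server-side fetch-and-parse step in Python, using only the standard library; which meta tags matter and how errors are handled are assumptions about what a link preview needs:

        # Minimal link-preview scraper sketch: fetch a page and pull out
        # <title> and Open Graph meta tags (og:title, og:image, ...).
        from html.parser import HTMLParser
        from urllib.request import urlopen

        class MetaScraper(HTMLParser):
            def __init__(self):
                super().__init__()
                self.meta = {}          # collected og:* / description tags
                self._in_title = False

            def handle_starttag(self, tag, attrs):
                attrs = dict(attrs)
                if tag == "title":
                    self._in_title = True
                elif tag == "meta":
                    key = attrs.get("property") or attrs.get("name")
                    if key and ("og:" in key or key == "description"):
                        self.meta[key] = attrs.get("content") or ""

            def handle_endtag(self, tag):
                if tag == "title":
                    self._in_title = False

            def handle_data(self, data):
                if self._in_title:
                    self.meta.setdefault("title", data.strip())

        def preview(url):
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
            scraper = MetaScraper()
            scraper.feed(html)
            return scraper.meta     # e.g. {'title': ..., 'og:image': ...}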

    Read the article

  • How to assign permissions for Copy/Paste on Windows

    - by jalchr
    Well, as everyone knows, there is no way you can assign permissions for Copy/Paste of files on the Windows platform. I need to control the copy process from a central file server, in a way that helps me know: which user performed the copy, which files were copied, where he pasted them, the total size of the data copied, and the time of the copy operation. If the user exceeds the allowed "copy limit", a dialog box asks him to enter administrative credentials or denies the operation (as configured). All this data should be stored in a file for later review or sent by email. I need to collect this data by putting a utility program on the server itself, without any other installation on client computers. I know about monitoring the clipboard, but which clipboard would it be: the user's clipboard or the server's clipboard? And what about drag-and-drop operations, which don't even pass through the clipboard? Any knowledge of whether FileSystemWatcher is useful in such a case? Any ideas?

    Read the article

  • How to generate WSDL using GroovyWS

    - by James Black
    I am implementing SOAP web services for a commercial application, and I am using GroovyWS to speed up the development. But when I deploy it on Tomcat I am not using Grails, as the software has its own J2EE framework, so how do I get it to react to WSDL requests? Do I need to write a Groovy-based servlet? Ideally I would like the WSDL generated upon request, so I can easily change the interface and see the change. It seems I will miss the annotations that JAX-WS provides, though, to help fine-tune the WSDL.

    Read the article

  • Artificial Intelligence in online game using Google App Engine

    - by Hortinstein
    I am currently in the planning stages of a game for Google App Engine, but cannot wrap my head around how I am going to handle AI. I intend to have persistent NPCs that will move about the map, but short of writing a program that generates the same XML requests I use to control player actions and running it on another server, I am stuck on how to do it. I have looked at the Task Queue feature, but because long-running processes are not an option on App Engine, I am a little stuck. I intend to run multiple server instances with 200+ persistent NPC entities that I will need to update. Most of the action is slowly roaming around based on player movements/concentrations, and attacking close-range players... (you can probably guess the type of game I'm developing).
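    One common way around a short request deadline is to update the NPCs in small batches and have each batch enqueue the next one. A Python sketch of that idea follows; load_npc, save_npc and enqueue_task are assumed helpers standing in for whatever datastore and task-queue APIs are available, not App Engine calls:

        # Sketch of chunked NPC updates sized for short request deadlines.
        BATCH_SIZE = 20

        def update_npc_batch(npc_ids, offset=0):
            batch = npc_ids[offset:offset + BATCH_SIZE]
            for npc_id in batch:
                npc = load_npc(npc_id)      # assumed datastore helper
                npc.step()                  # one small AI tick: move, pick a target
                save_npc(npc)               # persist the new state
            next_offset = offset + BATCH_SIZE
            if next_offset < len(npc_ids):
                # Re-enqueue ourselves instead of looping, so no single
                # request runs long enough to hit the deadline.
                enqueue_task(update_npc_batch, npc_ids, next_offset)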

    Read the article

  • ASP.NET website not responding while processing a long-running operation

    - by Ammar
    Dear programmers, when my application faces a long-running process, e.g. fetching a query (SELECT a, b, c FROM d) that needs 10 seconds to complete in SQL Server Management Studio, and the ASP.NET application tries to fetch it, the server refuses to return any response to any other requests made to it. I am hosting my application on a VPS server with good specifications, and I am giving the (SELECT a, b, c FROM d) example just to illustrate the issue; it can be any process, maybe processing a movie, or fetching some data through an external API that is experiencing a slowdown, or whatever. Any help or suggestions would be highly appreciated.

    Read the article

  • iPhone: is there any secure way to establish 2-way SSL from an application?

    - by pmilosev
    Hi, I need to establish an HTTPS 2-way SSL connection from my iPhone application to the customer's server. However, I don't see any secure way to deliver the client-side certificates to the application (it's an e-banking app, so security is really an issue). From what I have found so far, the only ways the app would be able to access the certificate are to provide it pre-bundled with the application itself, or to expose a URL from which it could be fetched (http://stackoverflow.com/questions/2037172/iphone-app-with-ssl-client-certs). The thing is that neither of these two approaches prevents some third party from getting the certificate, which, if accepted as a risk, eliminates the need for 2-way SSL (since anyone can have the client certificate). The whole security protocol should look like this:
    - HTTPS 2-way SSL to authenticate the application
    - OTP (token) based user registration (client-side key pair generated at this step)
    - SOAP / WSS XML-Signature (requests signed by the keys generated earlier)
    Any idea on how to establish the first layer of security (HTTPS)? Regards

    Read the article

  • How to block non-browser clients from submitting a request?

    - by Thomas Kohl
    I want to block non-browser clients from accessing certain pages / successfully making a request. The website content is served to authenticated users. What happens is that our user gives his credentials for our website to a 3rd party - it can be another website or a mobile application - that performs requests on his behalf. Say there is a form that the user fills out to send a message. Can I protect this form so that the server processing the submission can tell whether the user submitted it directly from the browser or not? I don't want to use CAPTCHA for usability reasons. Can I do it with some JavaScript?
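    A JavaScript-set, server-verified token only raises the bar; it cannot reliably distinguish a real browser from a client that emulates one. Purely as an illustration of the idea, a short Python sketch of issuing and checking a signed, short-lived form token (the secret and max age are placeholders):

        # The page's JS echoes the token back in the form; the server checks
        # the signature and age before accepting the submission.
        import hashlib
        import hmac
        import time

        SECRET = b"server-side-secret"      # placeholder

        def issue_token(session_id):
            ts = str(int(time.time()))
            sig = hmac.new(SECRET, f"{session_id}:{ts}".encode(), hashlib.sha256).hexdigest()
            return f"{ts}:{sig}"

        def check_token(session_id, token, max_age=300):
            try:
                ts, sig = token.split(":")
                age = time.time() - int(ts)
            except ValueError:
                return False
            expected = hmac.new(SECRET, f"{session_id}:{ts}".encode(), hashlib.sha256).hexdigest()
            return hmac.compare_digest(sig, expected) and 0 <= age < max_age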

    Read the article

  • Memcache failover and consistent hashing

    - by Industrial
    Hi everyone, I am trying to work out a good way to handle offline/down memcached servers in my current web application, which is built with PHP. I just found this link that shows an approach for doing what I want, I think: http://cmunezero.com/2008/08/11/consistent-memcache-hashing-and-failover-with-php/ Anyhow, I get confused when I start working with it and reading the PHP documentation about failover with Memcache. Why are offline memcache servers added to the $realInstance server pool together with the online servers? Reading the Memcache documentation confuses me even more: http://www.php.net/manual/en/memcache.addserver.php "status: Controls if the server should be flagged as online. Setting this parameter to FALSE and retry_interval to -1 allows a failed server to be kept in the pool so as not to affect the key distribution algorithm. Requests for this server will then failover or fail immediately depending on the memcache.allow_failover setting. Default to TRUE, meaning the server should be considered online." Thanks,
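    The reason downed servers stay in the pool is the key distribution: with consistent hashing, removing a server remaps only the keys that lived on it, while keeping it flagged offline avoids reshuffling everything. A rough Python sketch of a consistent-hash ring, for illustration only (server addresses and replica count are made up):

        # Keys keep mapping to the same server unless that server is removed,
        # which is the property the failover discussion above relies on.
        import hashlib
        from bisect import bisect

        class HashRing:
            def __init__(self, servers, replicas=100):
                self.ring = {}                  # point on the ring -> server
                for server in servers:
                    for i in range(replicas):
                        self.ring[self._hash(f"{server}:{i}")] = server
                self.sorted_points = sorted(self.ring)

            @staticmethod
            def _hash(key):
                return int(hashlib.md5(key.encode()).hexdigest(), 16)

            def get_server(self, key):
                idx = bisect(self.sorted_points, self._hash(key)) % len(self.sorted_points)
                return self.ring[self.sorted_points[idx]]

        ring = HashRing(["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"])
        print(ring.get_server("user:42"))       # same key always lands on the same server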

    Read the article

  • How can I get read-ahead bytes?

    - by Bruno Martinez
    Operating systems read more from disk than a program actually requests, because the program is likely to need nearby information in the future. In my application, when I fetch an item from disk, I would like to show an interval of information around the element. There's a trade-off between how much information I request and show, and speed. However, since the OS already reads more than I requested, accessing these bytes that are already in memory is free. What API can I use to find out what's in the OS caches? Alternatively, I could use memory-mapped files. In that case, the problem reduces to finding out whether a page is swapped to disk or not. Can this be done in any common OS?

    Read the article

  • Problem with jQuery and ASP.NET MVC 2

    - by robert_d
    I have a problem with jQuery. Here is how my web app works: the Search.aspx page contains a form and a jQuery script, and posts data to the Search() action in the Home controller after the user clicks the button1 button. Search.aspx:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<GLSChecker.Models.WebGLSQuery>" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Title
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2>Search</h2>
            <% Html.EnableClientValidation(); %>
            <% using (Html.BeginForm()) {%>
            <fieldset>
                <div class="editor-label"> <%: Html.LabelFor(model => model.Url) %> </div>
                <div class="editor-field">
                    <%: Html.TextBoxFor(model => model.Url, new { size = "50" } ) %>
                    <%: Html.ValidationMessageFor(model => model.Url) %>
                </div>
                <div class="editor-label"> <%: Html.LabelFor(model => model.Location) %> </div>
                <div class="editor-field">
                    <%: Html.TextBoxFor(model => model.Location, new { size = "50" } ) %>
                    <%: Html.ValidationMessageFor(model => model.Location) %>
                </div>
                <div class="editor-label"> <%: Html.LabelFor(model => model.KeywordLines) %> </div>
                <div class="editor-field">
                    <%: Html.TextAreaFor(model => model.KeywordLines, 10, 60, null)%>
                    <%: Html.ValidationMessageFor(model => model.KeywordLines)%>
                </div>
                <p> <input id ="button1" type="submit" value="Search" /> </p>
            </fieldset>
            <% } %>
            <script src="../../Scripts/jquery-1.4.1.js" type="text/javascript"></script>
            <script type="text/javascript">
                jQuery("#button1").click(function (e) {
                    window.setInterval(refreshResult, 5000);
                });
                function refreshResult() {
                    jQuery("#divResult").load("/Home/Refresh");
                }
            </script>
            <div id="divResult"> </div>
        </asp:Content>

    The Search() action in the Home controller:

        [HttpPost]
        public ActionResult Search(WebGLSQuery queryToCreate)
        {
            if (!ModelState.IsValid)
                return View("Search");
            queryToCreate.Remote_Address = HttpContext.Request.ServerVariables["REMOTE_ADDR"];
            Session["Result"] = null;
            SearchKeywordLines(queryToCreate);
            Thread.Sleep(15000);
            return View("Search");
        }//Search()

    After the button1 button is clicked, the script above runs. The Search() action in the controller runs for a longer period of time; I simulate this in testing by putting Thread.Sleep(15000); in Search(). Five seconds after the submit button is pressed, the jQuery script above calls the Refresh() action in the Home controller:

        public ActionResult Refresh()
        {
            ViewData["Result"] = DateTime.Now;
            return PartialView();
        }

    Refresh() renders this partial:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <%= ViewData["Result"] %>

    The problem is that in Internet Explorer 8 there is only one request to /Home/Refresh; in Firefox 3.6.3 all requests to /Home/Refresh are made, but nothing is displayed on the web page. Another problem with Firefox is that requests to /Home/Refresh are made every second, not every 5 seconds. I noticed that after I clear the Firefox cache the script works well the first time button1 is pressed, but after that it doesn't. I would be grateful for helpful suggestions.

    Read the article

  • Guidelines for good webcrawler 'Etiquette'

    - by Harry
    I'm building a search engine (for fun) and it has just struck me that my little project might potentially wreak havoc by clicking on ads and causing all sorts of problems. So what are the guidelines for good web crawler etiquette? Things that spring to mind:
    - Observe robots.txt instructions
    - Limit the number of simultaneous requests to the same domain
    - Don't follow ad links?
    Stopping the crawler from clicking on ads is particularly on my mind at the moment... how do I stop my bot from 'clicking' on ads? If it goes straight to the URL in the ad, is that counted as a click?
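    The first two points above can be handled with very little code; a Python sketch using the standard library's robots.txt parser plus a per-host delay (the user-agent string and delay value are placeholders):

        # Polite-crawler sketch: honor robots.txt and pace requests per host.
        import time
        from urllib.parse import urlparse
        from urllib.robotparser import RobotFileParser

        USER_AGENT = "MyLittleCrawler"   # illustrative name
        MIN_DELAY = 1.0                  # seconds between hits to the same host

        _robots = {}      # host -> RobotFileParser
        _last_hit = {}    # host -> timestamp of last request

        def allowed(url):
            host = urlparse(url).netloc
            if host not in _robots:
                rp = RobotFileParser(f"http://{host}/robots.txt")
                try:
                    rp.read()
                except OSError:
                    pass     # fetch failed; the parser then refuses everything, the safe side
                _robots[host] = rp
            return _robots[host].can_fetch(USER_AGENT, url)

        def wait_my_turn(url):
            host = urlparse(url).netloc
            elapsed = time.time() - _last_hit.get(host, 0)
            if elapsed < MIN_DELAY:
                time.sleep(MIN_DELAY - elapsed)
            _last_hit[host] = time.time()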

    Read the article

  • Rails and image gallery: how to make it flexible?

    - by user305270
    Hi! In my app, every profile has a gallery, and this gallery has images. A gallery can have 1 or 1000 images, so when I show the gallery I only need a part of the images, let's say 12. Then, in my Ajax gallery, more images are loaded when someone requests them. The gallery is SEO-friendly, so the main URL is "/profile-url/", and there are also URLs like "/profile-url/images/1", which loads the image with id=1 in the database. The gallery has to show only 6 images per stack, so when I request the page for image.id=64 it only has to show images 60 to 72, and selecting 64 is done with JS. How may I add a request like this, and how should I order them? Thanks

    Read the article

  • Multiple Broadcast Messages with Less Data or Fewer Broadcast Messages with More Data?

    - by niko
    Hi, I am developing an application which communicates with a server. The application can perform log-in and get different parameters from the server. It consists of a RESTful client (a custom class for making requests), a communication service (which runs in the background), and the main activity. For now I have created multiple broadcast messages and multiple broadcast receivers in the main activity, so when the application performs the login operation a receiver (loginBroadcastReceiver) in the main activity receives a message, and when another parameter is received from the server a different message is broadcast and another receiver handles it. This way, however, the application performance is poor, but I am not sure whether that is due to the multiple broadcast receivers. Does anyone know the best way to exchange data between a service and the main activity: is it better to create a single broadcast receiver and retrieve all parameters from one message, or to initialize multiple broadcast receivers for multiple parameters? I would appreciate it if you could provide any useful resources about the topic, because I'm writing my thesis and it would be good if the solution could be explained.

    Read the article

  • Ideas on simulating webservices for local automated testing.

    - by novice123
    I am testing an app which talks to different web services over the internet. For my automated testing, I don't want to go over the network. To achieve this, I need to simulate the web service on my machine using another app. My initial thought is to record all the requests and responses between the client and the web service, and then just write a simulation app which replays these responses. The disadvantage is that every time the web service protocol changes a bit, I have to modify all my recorded responses, so I am looking for a more elegant solution. Has anyone solved a similar problem? Any thoughts or suggestions are appreciated.
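    A small local stub server is often enough for this kind of replay; a Python sketch using only the standard library (the paths, port, and payloads are invented for the example):

        # Record/replay stub sketch: serve canned responses keyed by request
        # path, so tests never leave the machine.
        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        CANNED = {
            "/api/user/42": {"id": 42, "name": "Test User"},
            "/api/status":  {"ok": True},
        }

        class StubHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                body = CANNED.get(self.path)
                self.send_response(200 if body is not None else 404)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(json.dumps(body or {"error": "no canned response"}).encode())

        if __name__ == "__main__":
            # Point the app under test at http://localhost:8025 instead of the real service.
            HTTPServer(("localhost", 8025), StubHandler).serve_forever()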

    Read the article

  • Huge amount of time sending data with suds and proxy

    - by Roman
    Hi everyone, I have the following code to send data through a proxy using suds:

        import urllib2
        import suds

        t = suds.transport.http.HttpTransport()
        proxy = urllib2.ProxyHandler({'http': 'http://192.168.3.217:3128'})
        opener = urllib2.build_opener(proxy)
        t.urlopener = opener
        ws = suds.client.Client('http://xxxxxxx/web.asmx?WSDL', transport=t)
        req = ws.factory.create('ActionRequest.request')
        req.SerialNumber = 'asdf'
        req.HostName = 'hola'
        res = ws.service.ActionRequest(req)

    I don't know why, but it can spend 2 or 3 minutes or more sending data, and it sometimes raises a "Gateway timeout" exception. If I don't use the proxy, it takes around 2 seconds or less. Here is the SOAP reply:

        (ActionResponse){
            Id = None
            Action = "Action.None"
            Objects = ""
        }

    The proxy works fine with other requests made through urllib2, or with normal web browsers like Firefox. Does anyone have any idea what's happening here with suds? Thanks a lot in advance!

    Read the article

  • Tomcat Clustering and HTTPS Issue

    - by Angelo
    Hi, I have two instances of Tomcat 6, with some content accessible via HTTP and other pages via HTTPS. I have configured the instances this way: 1) instance one listens on port 8080 (HTTP) and 8443 (HTTPS); 2) instance two listens on port 7080 (HTTP) and 7443 (HTTPS). I have mod_proxy configured with Apache 2.2 to do the clustering. The requests come in properly and all works well for HTTP traffic, but when you are in the app and it switches to HTTPS, I get "page cannot be found" when Tomcat tries to serve the page. If I access the two Tomcat instances directly, bypassing the load balancer, then everything is fine. So HTTP/HTTPS is configured properly on Tomcat but not on Apache. I have a feeling I must configure Apache (or mod_proxy) to handle this. Thanks,

    Read the article

  • Overriding properties of child view controller vs setting them via parent view controller

    - by robinjam
    If you want to modify the default behaviour of a View Controller by changing the value of one of its properties, is it considered better form to instantiate the class and set its property directly, or subclass it and override the property? With the former it would become the parent View Controller's responsibility to configure its children, whereas with the latter the children would effectively configure themselves. EDIT: Some more information: The class I am referring to is FetchedTableViewController, a subclass of UITableViewController that I made to display the results of a Core Data fetch operation. There are two places I want to display the results of a fetch, and they each have different fetch requests. I'm trying to decide whether it's better to create a subclass for each one, and override the fetchRequest property, or make it the responsibility of the parent controller to set the fetchRequest property for its children.

    Read the article

  • mod_rewrite in conjunction with "options indexes"

    - by Travis
    I have a directory ("files") where sub-directories and files are going to be created and stored over time. The directories also need to deliver a directory listing, using "Options Indexes", but only if a user is authenticated and authorized. I have that part built and working, by doing the following:

        <Directory /var/www/html/files>
            Options Indexes
            IndexOptions FancyIndexing SuppressHTMLPreamble
            HeaderName /includes/autoindex/auth.php
        </Directory>

    Now I need to take care of file delivery. To force authentication for files, I have built the following:

        RewriteCond %{REQUEST_URI} -f
        RewriteRule /files/(.*) /auth.php

    I also tried:

        RewriteCond %{REQUEST_URI} !-d
        RewriteRule /files/(.*) /auth.php

    Both directives are redirecting to auth.php when I request foo.com/files/bar/ or foo.com/files/bar/baz. I am outputting the SERVER global on auth.php during testing and it is showing the requests as I made them (I thought Apache may have been doing something behind the scenes by adding something like "index.html" to the end with "Options Indexes" being on). Ideas?

    Read the article

  • How do I throttle my site's API users?

    - by scotts
    The legitimate users of my site occasionally hammer the server with API requests that cause undesirable results. I want to institute a limit of no more than say one API call every 5 seconds or n calls per minute (haven't figured out the exact limit yet). I could obviously log every API call in a DB and do the calculation on every request to see if they're over the limit, but all this extra overhead on EVERY request would be defeating the purpose. What are other less resource-intensive methods I could use to institute a limit? I'm using PHP/Apache/Linux, for what it's worth.
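    A cheap alternative to logging every call in the database is keeping short-lived counters in memory (APC or memcached would play that role in a PHP setup); the idea, sketched in Python with placeholder limits:

        # Fixed-window rate limiter sketch: in-memory counters instead of a
        # DB log per request. Reject with HTTP 429 when allow() returns False.
        import time

        WINDOW = 60          # seconds
        LIMIT = 12           # calls allowed per window per API key

        _buckets = {}        # api_key -> (window_start, count)

        def allow(api_key):
            now = time.time()
            start, count = _buckets.get(api_key, (now, 0))
            if now - start >= WINDOW:
                start, count = now, 0      # new window begins
            if count >= LIMIT:
                return False               # over the limit for this window
            _buckets[api_key] = (start, count + 1)
            return True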

    Read the article

  • How to deal with routing when developing a custom CMS in Codeigniter

    - by Ashley Ward
    Hi all - I'm a recent user of CodeIgniter and am developing a simple backend CMS to manage pages. Given a URL (in this example I have hidden "index.php") such as mysite.com/pagename, I would like the system to detect whether there is a value of "pagename" in my database. If there is, I need the system to re-route to a custom controller (e.g. Pagemaker), and if there is no record called pagename, just do its normal thing (i.e. find a controller called pagename). Currently I have: $route['(:any)'] = "pagemaker/create/$1"; whereby all requests are forwarded to my custom function. However, I want to change this structure so that if the page does NOT exist in the db, the traditional CodeIgniter request process is followed. Can anyone offer any advice about how to accomplish this? Or any advice about routing custom CMSs in CodeIgniter in general?

    Read the article
