Search Results

Search found 22689 results on 908 pages for 'bad request'.

Page 23/908

  • Detect aborted connection during ASIO request

    - by Tim Sylvester
    Is there an established way to determine whether the other end of a TCP connection is closed in the asio framework, without sending any data? Using Boost.asio for a server process, if the client times out or otherwise disconnects before the server has responded to a request, the server doesn't find this out until it has finished the request and generated a response to send, at which point the send immediately generates a connection-aborted error. For some long-running requests, this can lead to clients canceling and retrying over and over, piling up many instances of the same request running in parallel, making them take even longer and "snowballing" into an avalanche that makes the server unusable. Essentially, hitting F5 over and over is a denial-of-service attack. Unfortunately I can't start sending a response until the request is complete, so "streaming" the result out is not an option; I need to be able to check at key points during the request processing and stop that processing if the client has given up.

    Read the article

  • SOAPUI Extract data from SOAP Response and use in REST request

    - by Adrian
    I have been looking at the answer to this question: Pulling details from response to new request SoapUI, which is similar to what I am looking for, but I can't get it to work. I have a small SoapUI test suite and I need to extract a value from the response of a SOAP request and then use this value in a subsequent REST request. The response to my SOAP request is:

        <ns0:session xmlns:ns0="http://www.someurl.com/la/la/v1_0">
            <token>AQIC5wM2xAAIwMg==#</token>
        </ns0:session>

    so I need the token to use in my REST request. I know it involves using Property Transfer and some XPath / XQuery, but I just can't get it right. At the moment my property transfer window points to Source: SOAP test, Property: Response, and has data(/session/token/text()) in the text box. In the target it has Target: REST testcase, Property: newProp, and I have Use XQuery checked. Any help greatly appreciated. Thanks, Adrian

    Read the article

  • URL to HTTP request object

    - by takeshin
    I need to convert a string like this:

        $url = 'module/controller/action/param1/param1value/paramX/paramXvalue';

    to a URL respecting the current router (including translation and so on). Usually I generate the target URLs using the url view helper, but for this I would need to specify all params, so I would have to manually explode the string. I tried to use the request object, like this:

        $request = new Zend_Controller_Request_Http();
        // some code here passing the $url
        Zend_Debug::dump($request->getControllerName()); // null instead of 'controllers'
        Zend_Debug::dump($request->getParams());         // null instead of array

    but this seems suspect. Do I need to dispatch this request? How do I handle this case well?

    Read the article

  • How to Bypass Request Validation

    - by GIbboK
    Hi, I have a GridView and I need to update some data by inserting HTML code; I would need this data to be stored encoded and decoded on request. I cannot in any way disable "Request Validation" globally, nor at page level, so I would need a solution to disable "Request Validation" at control level. At the moment I am using a script which should Html.Encode every value being updated, but it seems that "Request Validation" starts its job before the RowUpdating event, so I get the error "A potentially dangerous Request.Form ...". Any idea how to solve it? Thanks

        protected void GridView1_RowUpdating(object sender, GridViewUpdateEventArgs e)
        {
            foreach (DictionaryEntry entry in e.NewValues)
            {
                e.NewValues[entry.Key] = Server.HtmlEncode(entry.Value.ToString());
            }
        }

    PS: I use Web Forms controls, not MVC.
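    As an aside, the decode half of the "stored encoded, decoded on request" round trip described above could be handled when rows are bound for display. A hedged sketch follows; the Label control and its name are hypothetical, and this does not by itself change when request validation runs:

        protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType != DataControlRowType.DataRow)
                return;

            // Hypothetical: the encoded HTML is displayed in a Label inside a TemplateField.
            Label lblHtml = (Label)e.Row.FindControl("lblHtml");
            if (lblHtml != null)
            {
                lblHtml.Text = Server.HtmlDecode(lblHtml.Text);
            }
        }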

    Read the article

  • ASP.NET request extension type

    - by Krishna
    Hello, I am working on a large web application from which I have recently removed tons of .aspx pages. To avoid "page not found" errors, I added these entries to an XML file, which came to around 300+ in count. I wrote an HTTP module that checks the request URL against the XML entries and, if a match is found, redirects the request to the respective new page. Everything works great, but my collection is getting iterated for all requests, I mean for each and every .jpg, .css, .js, .ico, .pdf etc. Is there any object or property in .NET that can tell the type of file the user requested, something like HttpContext.Request.Type, so that I can avoid checking the request for all unwanted file types?
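    One way to avoid scanning the collection for static files is to look at the requested path's extension before consulting the mapping, and to keep the mapping in a dictionary keyed by path rather than iterating it. A rough sketch of such a module is below; the mapping contents and redirect targets are hypothetical, and in practice the dictionary would be loaded from the XML file:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Web;

        public class LegacyPageRedirectModule : IHttpModule
        {
            // Hypothetical mapping; load it once from the XML file in a real module.
            private static readonly Dictionary<string, string> Redirects =
                new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
                {
                    { "/OldPage.aspx", "/NewPage.aspx" }
                };

            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    HttpContext context = ((HttpApplication)sender).Context;

                    // Only .aspx requests are candidates; .jpg/.css/.js/.ico/.pdf etc. fall through untouched.
                    string extension = Path.GetExtension(context.Request.Path);
                    if (!string.Equals(extension, ".aspx", StringComparison.OrdinalIgnoreCase))
                        return;

                    string target;
                    if (Redirects.TryGetValue(context.Request.Path, out target))
                        context.Response.Redirect(target);
                };
            }

            public void Dispose() { }
        }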

    Read the article

  • Pattern to iterate Request Params

    - by NOOBie
    My view is not a strongly typed view, and I need to iterate through the request params in the controller action to determine the values posted. Is there a better way to iterate through the NameValueCollection AllKeys? I am currently looping through Request.Params and setting values appropriately.

        foreach (var key in Request.Params.AllKeys)
        {
            if (key.Equals("CustomerId"))
                queryObject.CustomerId = Request.Params[key];
            else if (key.Equals("OrderId"))
                queryObject.OrderId = Request.Params[key];
            // and so on
        }

    I see a considerable amount of repetition in this code. Is there a better way to handle this?
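    Since the NameValueCollection indexer simply returns null for keys that were not posted, the loop can be dropped in favour of direct reads, or the name-to-setter mapping can be made data-driven. A hedged sketch, as a fragment inside the action and assuming queryObject has string properties (requires using System; using System.Collections.Generic;):

        // Direct reads: missing keys just come back null.
        queryObject.CustomerId = Request.Params["CustomerId"];
        queryObject.OrderId = Request.Params["OrderId"];

        // Or, if many fields follow the same pattern, drive the assignments from a map of setters:
        var setters = new Dictionary<string, Action<string>>
        {
            { "CustomerId", v => queryObject.CustomerId = v },
            { "OrderId",    v => queryObject.OrderId = v }
        };
        foreach (var pair in setters)
        {
            string value = Request.Params[pair.Key];
            if (value != null)
                pair.Value(value);
        }

    In ASP.NET MVC, the default model binder (or TryUpdateModel) can often populate such an object from the posted values directly, which removes the manual lookup entirely.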

    Read the article

  • Uploading large XML to WCF REST service -> 400 Bad Request

    - by glenn.danthi
    I am trying to upload large XML files to a REST service... I have tried almost all the methods suggested on Stack Overflow and Google, but I still can't find out where I am going wrong... I cannot upload a file greater than 64 KB! I have specified the maxRequestLength:

        <httpRuntime maxRequestLength="65536"/>

    and my binding config is as follows:

        <bindings>
          <webHttpBinding>
            <binding name="RESTBinding"
                     maxBufferSize="67108864"
                     maxReceivedMessageSize="67108864"
                     openTimeout="00:10:00"
                     receiveTimeout="00:10:00"
                     sendTimeout="00:10:00">
              <readerQuotas maxDepth="2147483647"
                            maxStringContentLength="2147483647"
                            maxArrayLength="2147483647"
                            maxBytesPerRead="2147483647"
                            maxNameTableCharCount="2147483647"/>
            </binding>
          </webHttpBinding>
        </bindings>

    On the C# client side I am doing the following:

        WebRequest request = HttpWebRequest.Create(@"http://localhost.:2381/RepositoryServices.svc/deviceprofile/AddDdxml");
        request.Credentials = new NetworkCredential("blah", "blah");
        request.Method = "POST";
        request.ContentType = "application/xml";
        request.ContentLength = byteArray.LongLength;
        using (Stream postStream = request.GetRequestStream())
        {
            postStream.Write(byteArray, 0, byteArray.Length);
        }

    There is no special configuration done on the client side...
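    Worth noting: httpRuntime maxRequestLength is measured in kilobytes (so 65536 above is 64 MB), while webHttpBinding's default maxReceivedMessageSize is 65536 bytes, i.e. exactly the 64 KB ceiling described, and that default applies whenever the endpoint does not reference the named binding. A hedged sketch of an endpoint entry that does reference it (service and contract names here are guesses):

        <services>
          <service name="RepositoryServices">
            <endpoint address=""
                      binding="webHttpBinding"
                      bindingConfiguration="RESTBinding"
                      contract="IRepositoryServices" />
          </service>
        </services>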

    Read the article

  • DBAN not working because disk has bad sectors?

    - by canadiancreed
    I am attempting to wipe the drive of a laptop before it is sold, and normally use DBAN to do so. However, this time it starts and then finishes instantly with the following message: "DBAN finished with non-fatal errors. This is usually caused by disks with bad sectors." I have tried multiple flags such as noverify to force it to skip this check (the disk doesn't show bad sectors in the OS scan in Windows), but the error always comes back. This is the only time I've seen this message; every other drive I've used this software on has taken 3-5 hours to do its job.

    Read the article

  • Why does chkdsk always report errors after a bad shutdown?

    - by rep_movsd
    Once in a while, Windows XP hangs on my laptop (usually when going into standby or hibernate, and occasionally on startup) and I have to forcefully power off. Usually chkdsk never runs automatically (I thought it should know that the partitions have not been unmounted and do that). I religiously run chkdsk without /F after bad shutdowns like this, and invariably it reports that the drive has unfixed errors and must be checked with /F, so I do that; more often than not, the chkdsk that runs on startup does not report fixing anything. I have had times in the past (and not only on this system) when not running chkdsk led to some strange errors, like files not opening even though they exist and the inability to save certain files, so I make it a point to always run chkdsk after a bad shutdown. I never understood why this is: isn't the whole point of a journalling filesystem like NTFS to avoid file system corruption and endless chkdsks? I even tried disabling write caching once to see if it made any difference, but to no avail.

    Read the article

  • Problem calling Request using RequestBuilder

    - by Tushar Ahirrao
    Hi, my code is:

        String url = "http://gd.geobytes.com/gd?after=-1&variables=GeobytesCountry,GeobytesCity";
        RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, URL.encode(url));
        try {
            Request request = builder.sendRequest(null, new RequestCallback() {
                public void onError(Request request, Throwable exception) {
                    // Couldn't connect to server (could be timeout, SOP violation, etc.)
                }

                public void onResponseReceived(Request request, Response response) {
                    System.out.println(response.getText() + "Response");
                    if (200 == response.getStatusCode()) {
                        Window.alert(response.getText());
                    } else {
                        Window.alert(response.getText());
                    }
                }
            });
        } catch (RequestException e) {
            e.printStackTrace();
        }

    I receive the following error:

        com.google.gwt.http.client.RequestPermissionException: The URL http://gd.geobytes.com/gd?after=-1&variables=GeobytesCountry,GeobytesCity is invalid or violates the same-origin security restriction
            at com.google.gwt.http.client.RequestBuilder.doSend(RequestBuilder.java:378)
            at com.google.gwt.http.client.RequestBuilder.sendRequest(RequestBuilder.java:254)
            at com.ip.client.IpAddressTest.onModuleLoad(IpAddressTest.java:46)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.google.gwt.dev.shell.ModuleSpace.onLoad(ModuleSpace.java:369)
            at com.google.gwt.dev.shell.OophmSessionHandler.loadModule(OophmSessionHandler.java:185)
            at com.google.gwt.dev.shell.BrowserChannelServer.processConnection(BrowserChannelServer.java:380)
            at com.google.gwt.dev.shell.BrowserChannelServer.run(BrowserChannelServer.java:222)
            at java.lang.Thread.run(Thread.java:619)
        Caused by: com.google.gwt.http.client.RequestException: (NS_ERROR_DOM_BAD_URI): Access to restricted URI denied

    Read the article

  • [ASP.NET] Odd HttpRequest behaviour

    - by barguast
    I have a web service which runs within an HttpHandler class. In this class, I inspect the request stream for form / query string parameters. In some circumstances, it seemed as though these parameters weren't getting through. After a bit of digging around, I came across some behaviour I don't quite understand. See below:

        // The request contains 'a=1&b=2&c=3'

        // TEST ONLY: Read the entire request
        string contents;
        using (StreamReader sr = new StreamReader(context.Request.InputStream))
        {
            contents = sr.ReadToEnd();
        }
        // Here 'contents' is usually correct - containing 'a=1&b=2&c=3'.
        // Sometimes it is empty.

        string a = context.Request["a"];
        // Here, a = null, regardless of whether the 'contents' variable above is correct

    Can anyone explain to me why this might be happening? I'm using a .NET WebClient and UploadDataAsync to perform the request on the client, if that makes any difference. If you need any more information, please let me know.
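    For what it's worth, when a handler needs both the raw body and the parsed values, a common precaution is to rewind InputStream after reading it, and the form collection is only populated when the client sends a form content type. A hedged sketch of the handler side, which may or may not be this poster's root cause:

        // Read the raw body, then rewind so later consumers (Request.Form / Request["a"])
        // can still parse it from the start.
        context.Request.InputStream.Position = 0;
        var reader = new StreamReader(context.Request.InputStream);
        string contents = reader.ReadToEnd();   // deliberately not disposed: disposing would close InputStream
        context.Request.InputStream.Position = 0;

        string a = context.Request["a"];

    On the client side, WebClient does not set a content type by itself, so something like client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded" before UploadDataAsync would be needed for Request["a"] to see form values at all.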

    Read the article

  • How to remove request blocking on apache reverse proxy after failure of backend before asking backend again

    - by matnagel
    I am working on an apache2 reverse proxy vhost. When the server behind apache is down, the first request to apache shows the error page, of course. But on subsequent requests it seems apache delays for some time before asking the backend server again. During all this time (which is short, but in development I don't want a delay at all) only the apache error page is shown to the browser, although the backend server is already up. Where is this setting in apache, what is this behaviour, and how can I set the delay time to zero?

    Edit: I am not trying to change the timeout for a single request. I want to change the blocking time. It is my experience that apache blocks further requests for a certain time before asking a backend server again that has failed once.

    Edit 2: This is what apache delivers:

        Service Temporarily Unavailable
        The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
        Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.7 with Suhosin-Patch proxy_html/3.0.0 Server at localhost Port 80

    After hitting Ctrl-R in firefox for 60 seconds the page finally appears.
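    This blocking sounds like mod_proxy's per-worker retry timeout, which defaults to 60 seconds after a backend error. A hedged sketch of setting it to zero on the ProxyPass line (paths and backend address are made up):

        # Ask a failed backend again immediately instead of keeping the worker
        # in its error state for the default 60 seconds.
        ProxyPass        /app http://127.0.0.1:8080/app retry=0
        ProxyPassReverse /app http://127.0.0.1:8080/app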

    Read the article

  • Unusual request URL in ASP.NET health monitoring event

    - by Troy Hunt
    I'm seeing a rather strange occurrence in the request information section of an ASP.NET health monitoring email, and I hope someone can shed some light on it. This is a publicly facing website which runs on infrastructure at an Indian hosting provider. Health monitoring is notifying us of server errors via automated email, but every now and then the requested URL appears as a totally different website. For example:

        Request information:
            Request URL: http://www.baidu.com/Default.aspx
            Request path: /Default.aspx
            User host address: 221.13.128.175
            User:
            Is authenticated: False
            Authentication Type:
            Thread account name: NT AUTHORITY\NETWORK SERVICE

    Obviously the site in question is not Baidu, and obviously this attribute is not the referrer either; the "Request URL" value is the path which has generated the error. The IP address is located in Beijing (coincidental, given the Baidu address?) and in this instance it looks like the SQL Server backend was not accessible (I haven't included the entire error message for security's sake). What would cause the request URL attribute to be arbitrarily changed to that of another site? I've never seen this occur in a health monitoring event before. Thanks!
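    For context, ASP.NET reconstructs the request URL from the Host header the client sent, so a scanner that connects to the server's IP while claiming Host: www.baidu.com shows up in the report under that foreign URL. A hedged sketch of rejecting such requests early in Global.asax (the accepted host name is hypothetical):

        // Global.asax.cs: turn away requests whose Host header is not one of ours.
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            string host = Request.Url.Host;   // built from the client's Host header
            if (!host.EndsWith("example-site.com", StringComparison.OrdinalIgnoreCase))
            {
                Response.StatusCode = 400;
                Response.End();
            }
        }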

    Read the article

  • Request bursting from web application Load Tests

    - by MaseBase
    I'm migrating our web and database hosting to a new environment on all new machines. I've recently performed a load test using WAPT to generate load from multiple distributed clients. The server has plenty of room to handle the traffic load, but I'm seeing an odd pattern of incoming traffic during the load tests. Here is the gist of our setup:

    - Firewall server running MS Forefront TMG 2010 on Win 2k8 server
    - Request routing done by IIS Application Request Routing on the firewall machine
    - Web server is a Hyper-V VM on the database server (which is the host OS)
    - These machines are hefty, with dual CPUs of six cores each (12 total procs)
    - Web server running IIS 7.5
    - Web applications built in ASP.NET 2.0, with 1 ISAPI filter (Url Rewrite) in front

    What I'm seeing during the load tests is that the requests all come through in bursts. Even though I have 7 different distributed clients sending traffic loads, the requests come through about 300-500 requests at a time. The performance monitor shows nearly all of the counters moving through this pattern: when a burst of requests comes in, the req/sec jumps to 70, the queued requests jump to 500, the current requests jump up, the CPU jumps up, everything. Then once it has handled that group of requests, there is a lull for nearly 10 seconds where nearly nothing is happening: 0-5 req/sec, 0 queued requests, minimal CPU usage. Then after 10 seconds of inactivity, another burst comes through, spiking all of the counters once again. What I can't figure out is why the requests are coming through in bursts when I know that the load being generated is not sent that way, especially considering the various load-generating clients send traffic at different intervals with random think times between each request. Is there something in the layers between Hyper-V or perhaps in the hardware which might cause this coalescing of requests? Here is what I'm looking at: the highlighted metric is Requests/sec, but the other critical counters go with it, e.g. Requests Queued (which I'd obviously like to keep as close to 0 as possible). Any ideas on this?

    Read the article

  • Why does calling abort() on ajax request cause error in ASP.Net MVC (IE8)

    - by user169867
    I use jQuery to post to an MVC controller action that returns a table of information. The user of the page triggers this by clicking on various links. In the event the user decides to click a bunch of these links in quick succession, I wanted to cancel any previous ajax request that may not have finished. I've found that when I do this (although it's fine from the client's POV) I will get errors on the web application saying "The parameters dictionary contains a null entry for parameter 'srtCol' of non-nullable type 'System.Int32'". Now the ajax post definitely passes in all the parameters, and if I don't try to cancel the ajax request it works just fine. But if I do cancel the request by calling abort() on the XMLHttpRequest object that ajax() returns before it finishes, I get the error from ASP.NET MVC. Example:

        // Cancel any previous request
        if (req) {
            req.abort();
            req = null;
        }

        // Make new request
        req = $.ajax({
            type: 'POST',
            url: "/Myapp/GetTbl",
            data: { srtCol: srt, view: viewID },
            success: OnSuccess,
            error: OnError,
            dataType: "html"
        });

    I've noticed this only happens in IE8. In FF it doesn't seem to cause a problem. Does anyone know how to cancel an ajax request in IE8 without causing errors for MVC? Thanks for any help.
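    One hedged way to make the server tolerant of aborted posts is to let the action accept nullable parameters and bail out when they are missing. A sketch, under the assumption that the action looks roughly like this (not necessarily the only fix):

        // Nullable parameters survive an aborted request whose body never fully arrived.
        [HttpPost]
        public ActionResult GetTbl(int? srtCol, int? view)
        {
            if (srtCol == null || view == null)
            {
                // The client gave up; return an empty result instead of throwing.
                return new EmptyResult();
            }

            // ... build and return the table for srtCol.Value / view.Value ...
            return View();
        }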

    Read the article

  • Will disk cloning resolve bad stripes on RAID?

    - by user13323
    Hi. We have a logical RAID 1 drive in the "bad stripes" state, which kept that status even after replacing and rebuilding both drives, and which produces errors in the Windows logs about failures writing to the disk. IBM support suggests erasing and re-creating the RAID, then re-installing Windows. The resulting downtime is unacceptable for us, so we want to clone the RAID (via Acronis True Image), erase and re-create the RAID, then dump the cloned data back. Following IBM's logic that erasing and re-creating the RAID resets all the RAID metadata, this should clear the bad-stripes status and start from a blank page. The question is whether such a strategy is possible and will produce the desired effect. Any idea is appreciated - thanks in advance!

    Read the article

  • Idle processes and high memory bad? uwsgi/django

    - by JimJimThe3rd
    I have a VPS with 256MB of RAM. I'm running nginx, uWSGI and PostgreSQL on Ubuntu 12.04 for a soon-to-be Django site. About 200MB of RAM is being used despite the website not being active; the uWSGI processes seem to just be idling. Is this bad? I once heard that having a bunch of free memory isn't necessarily a good metric, because memory in use can often easily be freed up. I mean, it is possible that the server is storing commonly used "stuff" in case it is accessed, but is more than happy to dump it if the RAM is needed. But I'm really not sure, hence me asking this question. If it is bad, I could set some of the application loading options for uWSGI like "cheap" or "idle" mode. Screenshot of my htop.

    Read the article

  • NHibernate: how to handle entity-based validation using session-per-request pattern, without controller knowledge of the ISession

    - by Seth Petry-Johnson
    What is the best way to do entity-based validation (each entity class has an IsValid() method that validates its internal members) in ASP.NET MVC, with a "session-per-request" model, where the controller has zero (or limited) knowledge of the ISession? Here's the pattern I'm using:

    1. Get an entity by ID, using an IFooRepository that wraps the current NH session. This returns a connected entity instance.
    2. Load the entity with potentially invalid data, coming from the form post.
    3. Validate the entity by calling its IsValid() method.
    4. If valid, call IFooRepository.Save(entity). Otherwise, display an error message.

    The session is currently opened when the request begins and flushed when the request ends. Since my entity is connected to a session, flushing the session attempts to save the changes even if the object is invalid. What's the best way to keep validation logic in the entity class, limit controller knowledge of NH, and avoid saving invalid changes at the end of a request?

    - Option 1: Explicitly evict on validation failure, implicitly flush. If the validation fails, I could manually evict the invalid object in the action method. If successful, I do nothing and the session is automatically flushed. Con: error prone and counter-intuitive ("I didn't call .Save(), why are my invalid changes being saved anyway?")
    - Option 2: Explicitly flush, do nothing by default. By default I can dispose of the session on request end, only flushing if the controller indicates success. I'd probably create a SaveChanges() method in my base controller that sets a flag indicating success, and then query this flag when closing the session at request end. Pro: more intuitive to troubleshoot if a dev forgets this step [relative to option 1]. Con: I have to call IRepository.Save(entity) and SaveChanges().
    - Option 3: Always work with disconnected objects. I could modify my repositories to return disconnected/transient objects, and modify the Repo.Save() method to re-attach them. Pro: most intuitive, given that controllers don't know about NH. Con: does this defeat many of the benefits I'd get from NH?
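    A minimal sketch of option 2 follows, assuming a hypothetical base controller and a session-teardown helper that runs at end of request; the names are made up and this is only one way to wire it:

        using System.Web;
        using System.Web.Mvc;
        using NHibernate;

        // Hypothetical base controller: an action calls SaveChanges() only after IsValid() passes.
        public abstract class NHibernateController : Controller
        {
            protected void SaveChanges()
            {
                HttpContext.Items["nh.flush"] = true;   // flag read back when the request ends
            }
        }

        // Where the session is opened per request:
        //     session.FlushMode = FlushMode.Never;     // nothing is written unless explicitly flushed

        // Where the session is torn down at end of request:
        public static class SessionTeardown
        {
            public static void Close(HttpContextBase context, ISession session)
            {
                if (true.Equals(context.Items["nh.flush"]))
                    session.Flush();                    // persist only explicitly approved changes
                session.Dispose();                      // invalid, un-flushed changes are discarded
            }
        }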

    Read the article

  • Any HTTP proxies with explicit, configurable support for request/response buffering and delayed connections?

    - by Carlos Carrasco
    When dealing with mobile clients it is very common to have multi-second delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app server logic is done in 5ms. I am looking for an HTTP server, balancer or proxy server that supports the following:

    1. A request arrives at the proxy. The proxy starts buffering the request in RAM or on disk, including headers and POST/PUT bodies. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.
    2. The proxy server stops buffering the request when:
       - a size limit has been reached (say, 4KB), or
       - the request has been received completely, headers and body.
    3. Only now, with (part of) the request in memory, a connection is opened to the backend and the request is relayed.
    4. The backend sends back the response. Again the proxy server starts buffering it immediately (up to a more generous size, say 64KB).
    5. Since the proxy has a big enough buffer, the backend response is stored completely in the proxy server in a matter of milliseconds, and the backend process/thread is free to process more requests. The backend connection is immediately closed.
    6. The proxy sends back the response to the mobile client, as fast or as slow as it is capable of, without having a connection to the backend tying up resources.

    I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? Any of them that is not a dinosaur like Squid and that supports a lean single-process, asynchronous, event-based execution model?

    (Side rant: I would be using nginx but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh)

    Read the article

  • What are the strategies available to minimise badblocks on an encrypted partition?

    - by David Andreoletti
    Let me explain my backup strategy and the problem I am facing.

    My current backup strategy: open the encrypted container and run Carbon Copy Cloner on it at least once a week; rotate backup disks.

    Problem: I have a TrueCrypt partition on my first external hard disk. I recently found out that some files on this encrypted partition cannot be read due to bad blocks (reported by Antonio Diaz's GNU 'ddrescue'). My backup strategy is ineffective in this scenario because bad blocks are only discovered during backup.

    Possible strategies:

    - Strategy #0: have the encrypted partition on a RAID 1 with 2 disks. Is this a suitable strategy?
    - Strategy #1: do you think of any other one?

    Environment: Mac OS X 10.8, external 2.5" hard disk (SATA), no RAID.

    Read the article

  • Most awesomely bad hack

    - by Zypher
    As I sit watching one of my latest dirty, dirty hacks run, I started wondering what kind of dirty hacks you have created that are so bad they are awesome. We all have a few of them in our past, and they are probably still running in production somewhere, chugging along, somehow still working. Which reminds me of the hack we had to put into place when we were moving data centers. Our IVRs had to keep running, as the data center we were moving from was the primary DC, and the new primary wasn't quite ready to take traffic. So what did we do? Well, we answered the calls in DC1, then shipped the SIP stream over the internet to DC2, 1900 miles away... that just felt oh so wrong. So the question is: what is one (or more) of your awesomely bad hacks?

    Read the article

  • Is zip's encryption really bad?

    - by Nifle
    The standard advice for many years regarding compression and encryption has been that the encryption strength of zip is bad. Is this really the case in this day and age? I read this article about WinZip (it has had the same bad reputation). According to that article, the problem is removed provided you follow a few rules when choosing your password:

    - at least 12 characters in length
    - random, not containing any dictionary words, common words or names
    - at least one upper-case character
    - at least one lower-case character
    - at least one numeric character
    - at least one special character, e.g. $, £, *, %, &, !

    This would result in roughly 475,920,314,814,253,000,000,000 possible combinations to brute force. Please provide recent (say, past five years) links to back up your information.

    Read the article

  • How to post a request using node.js

    - by Mr JSON
    I am trying to post some JSON to a URL. I saw various other questions about this on Stack Overflow but none of them seemed to be clear or work. This is how far I got; I modified the example in the API docs:

        var http = require('http');
        var google = http.createClient(80, 'server');
        var request = google.request('POST', '/get_stuff',
            {'host': 'server', 'content-type': 'application/json'});
        request.write(JSON.stringify(some_json), encoding='utf8'); // possibly need to escape as well?
        request.end();
        request.on('response', function (response) {
            console.log('STATUS: ' + response.statusCode);
            console.log('HEADERS: ' + JSON.stringify(response.headers));
            response.setEncoding('utf8');
            response.on('data', function (chunk) {
                console.log('BODY: ' + chunk);
            });
        });

    When I post this to the server I get an error telling me that it's not of the JSON format or that it's not UTF-8, which it should be. I tried to pull the request URL but it is null. I am just starting with node.js, so please be nice.

    Read the article

  • SQL Server 2000 reporting bad values to ASP.NET application

    - by Ben
    I have an instance of SQL Server 2000 (8.0.2039) with a rather simple table. We recently had users complain about an application I wrote returning bad values for some of the dates in the database. When I query the table directly via Server Management Studio, it returns the correct values; however, identical queries from my application report the wrong values, but only for a couple of dates. I have been over the code, and it is solid: if the error were in the code, all of the dates reported should be wrong. I have also run the code against an identical test database, and everything is reported properly. I believe the problem may lie in the SQL instance itself, which is why I am posting on Server Fault. My question is: has anyone heard of a database reporting bad (incorrect) date values when queried via a web application? It should be noted that this particular server was once manually rebuilt after having a cluster clean run on it.

    Read the article

  • "Bad response to Storage command" when scheduling job with Bacula

    - by Joril
    I have a Bacula setup with 9 clients, and it's working happily. Today I had to add another client, so I went and copied and adapted the existing configuration files from another client, but when I schedule a job for the new client, I get these errors:

        20-Mar 17:50 tools-dir JobId 39: Start Backup JobId 39, Job=BackupPresenze2.2012-03-20_17.50.49_04
        20-Mar 17:50 tools-dir JobId 39: Using Device "FileStorage"
        20-Mar 17:50 presenze2-fd JobId 39: Fatal error: Failed to connect to Storage daemon: bacula.mylan.local:9103
        20-Mar 17:50 tools-dir JobId 39: Fatal error: Bad response to Storage command: wanted 2000 OK storage , got 2902 Bad storage

    From the client I can telnet to bacula.mylan.local:9103 just fine, and jobs for other clients work successfully... What could I check? (Server and client run Ubuntu 10.04, if it's relevant.)

    Read the article
