Search Results

Search found 57458 results on 2299 pages for 'http response codes'.

Page 168/2299

  • Rails 3 get raw post data and write it to tmp file

    - by Andrew
    I'm working on implementing Ajax-Upload for uploading photos in my Rails 3 app. The documentation says:

        For IE6-8, Opera, older versions of other browsers you get the file as you normally do with regular form-based uploads.
        For browsers which upload the file with a progress bar, you will need to get the raw post data and write it to the file.

    So, how can I receive the raw post data in my controller and write it to a tmp file so my controller can then process it? (In my case the controller is doing some image manipulation and saving to S3.) Some additional info: as I'm configured right now, the post is passing these parameters:

        Parameters: {"authenticity_token"=>"...", "qqfile"=>"IMG_0064.jpg"}

    ... and the create action looks like this:

        def create
          @attachment = Attachment.new
          @attachment.user = current_user
          @attachment.file = params[:qqfile]
          if @attachment.save!
            respond_to do |format|
              format.js { render :text => '{"success":true}' }
            end
          end
        end

    ... but I get this error:

        ActiveRecord::RecordInvalid (Validation failed: File file name must be set.):
          app/controllers/attachments_controller.rb:7:in `create'
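
    A minimal sketch of one way to handle both upload paths in the create action. This is not from the original post; it assumes Ajax-Upload's XHR path sends the file bytes as the raw request body and only the filename in params[:qqfile], as the logged parameters suggest:

        def create
          if params[:qqfile].is_a?(String)
            # XHR upload: params[:qqfile] is just the filename, the bytes are the raw body
            file = Tempfile.new('qqfile')
            file.binmode
            file.write(request.raw_post)      # or request.body.read
            file.rewind
          else
            # regular multipart upload (IE6-8, Opera, older browsers)
            file = params[:qqfile]
          end
          @attachment = Attachment.new(:user => current_user, :file => file)
          # ... validate, manipulate the image, push to S3 as before ...
        end

    Whether Attachment#file= accepts a Tempfile directly depends on the attachment library in use (Paperclip, CarrierWave, etc.), so treat this as a starting point rather than a drop-in fix.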

    Read the article

  • Why is curl in Ruby slower than command-line curl?

    - by Stiivi
    I am trying to download more than 1m pages (URLs ending in a sequence ID). I have implemented a kind of multi-purpose download manager with a configurable number of download threads and one processing thread. The downloader downloads files in batches:

        curl = Curl::Easy.new
        batch_urls.each { |url_info|
          curl.url = url_info[:url]
          curl.perform
          file = File.new(url_info[:file], "wb")
          file << curl.body_str
          file.close
          # ... some other stuff
        }

    I have tried to download an 8000-page sample. When using the code above, I get 1000 pages in 2 minutes. When I write all URLs into a file and do this in a shell:

        cat list | xargs curl

    I get all 8000 pages in two minutes. The thing is, I need to have it in Ruby code, because there is other monitoring and processing code. I have tried:

    Curl::Multi - it is somehow faster, but misses 50-90% of files (does not download them and gives no reason/code)
    multiple threads with Curl::Easy - around the same speed as single threaded

    Why is a reused Curl::Easy slower than subsequent command-line curl calls, and how can I make it faster? Or what am I doing wrong? I would prefer to fix my download manager code rather than do the downloading for this case in a different way. Before this, I was calling command-line wget, which I provided with a file listing the URLs. However, not all errors were handled, and it was also not possible to specify the output file for each URL separately when using a URL list. Now it seems to me that the best way would be to use multiple threads with a system call to the 'curl' command. But why, when I can use Curl directly in Ruby? Code for the download manager is here, if it might help: Download Manager (I have played with timeouts, from not setting them to various values; it did not seem to help.) Any hints appreciated.
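
    One pattern worth trying (a sketch, not the poster's download manager): give each download thread its own Curl::Easy handle and reuse it across requests, so connections stay alive per thread without a handle ever being shared. The names batch_urls and url_info follow the question; the thread count is an arbitrary example:

        require 'curb'
        require 'thread'

        queue = Queue.new
        batch_urls.each { |u| queue << u }

        workers = Array.new(4) do
          Thread.new do
            curl = Curl::Easy.new              # one reused handle per thread
            loop do
              begin
                url_info = queue.pop(true)     # non-blocking pop; raises when drained
              rescue ThreadError
                break
              end
              curl.url = url_info[:url]
              curl.perform
              File.open(url_info[:file], "wb") { |f| f << curl.body_str }
            end
          end
        end
        workers.each(&:join)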

    Read the article

  • Best Approach to process images in Django

    - by primalpop
    I have an application with an Android front end and Django as the back end. As part of the answers here, I'm confused over the approach I should take to send images to the Django server. I have 2 options at my disposal, as Piro pointed out there:
    1) Sending images as a multipart entity
    2) Sending images as a string after encoding them using Base64
    So I am considering the approach that would make them easiest to process in Django. The images are small in size (<200kb) and number (<10). Any suggestions or pointers are most welcome.
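
    For comparison, a minimal sketch of a Django view that accepts either form (not from the original question; the Photo model and the field names are made up for illustration):

        import base64

        from django.core.files.base import ContentFile
        from django.http import HttpResponse

        def upload_photo(request):
            if 'photo' in request.FILES:
                # Option 1: multipart upload -- Django hands you a file object directly.
                content = request.FILES['photo'].read()
            else:
                # Option 2: Base64 string in a regular POST field -- decode it first.
                content = base64.b64decode(request.POST['photo_b64'])
            Photo.objects.create(image=ContentFile(content, name='upload.jpg'))
            return HttpResponse('ok')

    The multipart route avoids the roughly 33% size overhead of Base64 and keeps the view trivial, which is usually the deciding factor even for small images.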

    Read the article

  • Tomcat fails on first request in combination with jsvc

    - by Roalt
    I have a web application where the first request may take a few seconds as some singletons are initialised. I've used the mod_proxy and jsvc construction mentioned in this question and described on this page to connect apache with tomcat (data is served via SSL). For the sample Tomcat application, everything works as it should. However, when using my application I get the following errors in my apache log:

        [Wed Feb 10 09:48:29 2010] [error] [client 130.12.1.26] (70014)End of file found: proxy: error reading status line from remote server localhost
        [Wed Feb 10 09:48:29 2010] [error] [client 130.12.1.26] proxy: Error reading from remote server returned by /MyWebApp/MyWebApp.faces

    and I get the following error in my tomcat output:

        10/02/2010 09:48:29 9947 jsvc.exec error: Service exit with a return value of 1

    I'm not an expert on this, so I would like to know what the cause of the problem is and where I should look for an answer.
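
    Two settings that are commonly involved when a slow first request dies behind mod_proxy (a sketch, not taken from the linked pages; the backend address and timeout value are placeholders): give the proxy more patience, and do as much initialisation as possible at deploy time rather than on the first hit.

        # httpd.conf (example values; assumes an HTTP proxy to Tomcat on port 8080)
        ProxyTimeout 300
        ProxyPass        /MyWebApp http://localhost:8080/MyWebApp retry=0
        ProxyPassReverse /MyWebApp http://localhost:8080/MyWebApp

        <!-- web.xml: load the JSF servlet at startup so its setup cost is paid at deploy time -->
        <servlet>
            <servlet-name>Faces Servlet</servlet-name>
            <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
        </servlet>

    Application-level singletons may additionally need a ServletContextListener that warms them up at startup, since load-on-startup only covers the servlet itself.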

    Read the article

  • 2D game collision response: SAT & minimum displacement along a given axis?

    - by Archagon
    I'm trying to implement a collision system in a 2D game I'm making. The separating axis theorem (as described by metanet's collision tutorial) seems like an efficient and robust way of handling collision detection, but I don't quite like the collision response method they use. By blindly displacing along the axis of least overlap, the algorithm simply ignores the previous position of the moving object, which means that it doesn't collide with the stationary object so much as it enters it and then bounces out.

    Here's an example of a situation where this would matter: according to the SAT method described above, the rectangle would simply pop out of the triangle perpendicular to its hypotenuse. However, realistically, the rectangle should stop at the lower right corner of the triangle, as that would be the point of first collision if it were moving continuously along its displacement vector.

    Now, this might not actually matter during gameplay, but I'd love to know if there's a way of efficiently and generally attaining accurate displacements in this manner. I've been racking my brains over it for the past few days, and I don't want to give up yet! (Cross-posted from StackOverflow, hope that's not against the rules!)
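
    One general way to get the "stop at first contact" behaviour is a swept version of SAT: instead of resolving overlap after the fact, project the displacement onto each candidate axis and solve for the first time at which all projections overlap. A rough sketch, not from metanet's tutorial; shapes are convex polygons and the caller supplies the axes (the edge normals of both shapes):

        type Vec = { x: number; y: number };
        const dot = (a: Vec, b: Vec) => a.x * b.x + a.y * b.y;

        function project(verts: Vec[], axis: Vec): [number, number] {
          const ps = verts.map(v => dot(v, axis));
          return [Math.min(...ps), Math.max(...ps)];
        }

        // Fraction of `disp` the moving shape can travel before first contact,
        // or null if the shapes never touch during this move.
        function sweptSAT(moving: Vec[], still: Vec[], disp: Vec, axes: Vec[]): number | null {
          let tEnter = 0, tExit = 1;
          for (const axis of axes) {
            const [minA, maxA] = project(moving, axis);
            const [minB, maxB] = project(still, axis);
            const speed = dot(disp, axis);          // closing speed along this axis
            if (speed === 0) {
              if (maxA < minB || maxB < minA) return null;   // separated, not closing
              continue;
            }
            let t0 = (minB - maxA) / speed;         // projections start overlapping
            let t1 = (maxB - minA) / speed;         // projections stop overlapping
            if (t0 > t1) [t0, t1] = [t1, t0];
            tEnter = Math.max(tEnter, t0);
            tExit = Math.min(tExit, t1);
            if (tEnter > tExit) return null;        // a gap remains on some axis
          }
          return tEnter;                            // move by disp * tEnter, then slide or stop
        }

    The result tells you how far along the displacement vector the shapes first touch; the minimum-overlap push from the tutorial can still be kept as a fallback for shapes that already overlap at the start of the step.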

    Read the article

  • Cookie not renewing/overwriting in IE

    - by deceze
    I have a weird quirk with cookies in IE. When a user logs into the site, I'm generating a new session id and hence need to overwrite the cookie. The flow is basically:
    1. Client goes to the https://secure.example.com/users/login page, automatically receiving a session id.
    2. Client POSTs login credentials to the same address.
    3. Client receives the following headers together with a 302 redirect to https://secure.example.com/users/mypage:

        CAKEPHP=deleted; expires=Sun, 05-Apr-2009 04:50:35 GMT; path=/
        CAKEPHP=98hnIO23...; expires=Mon, 12 Apr 2010 04:50:36 GMT; path=/; secure

    4. Client is supposed to visit https://secure.example.com/users/mypage, presenting the new session id.
    This works in all browsers, except IE (tested in 7 & 8). IE retains the old, unauthenticated session id, and is redirected back to the login page. It works on my local test environment (using a self-signed certificate at https://localhost:8443/...), but not on the live server. I'm using CakePHP and simply issue a $this->Session->renew(), which produces the above cookie headers. Any ideas how to get IE to accept the new cookie?

    Read the article

  • Tomcat reporting 404 error on all of newly deployed WAR files?

    - by dacracot
    I deployed a WAR file into $TOMCAT_HOME/webapps by copying the file into the directory, just like I've done a thousand times before. Tomcat detects the WAR and inflates it. I can traverse the directory tree on my server at the command line (it's Fedora). But when I address the webapp from my client machine's browser, I get nothing but 404 errors.

    This has happened to the last two deployments of completely separate WARs. The first was a replacement of an existing WAR. I first deleted the WAR and its inflated directory, and then copied in the WAR, which inflated... 404. I deleted everything again and put back the previously working WAR from backup. It inflated and worked. The second was a completely new, never before deployed WAR... nothing but 404. Other WARs are working, but now I'm afraid to change anything until I know what is going on. Any clues?

    Edit: From my comment you can see that the logs included "SEVERE: Error listenerStart" after the WAR was deployed by Tomcat. There were no stack traces or other errors reported.
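
    "Error listenerStart" means a ServletContextListener in the webapp threw during startup, and by default Tomcat swallows the stack trace. One common way to surface it (a sketch; the logger names are the standard Tomcat JULI ones, adjust the host element if yours is not localhost) is to drop a logging.properties into the failing webapp's WEB-INF/classes:

        handlers = java.util.logging.ConsoleHandler
        java.util.logging.ConsoleHandler.level = FINE
        org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = FINE
        org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = java.util.logging.ConsoleHandler

    With that in place, catalina.out should show the exception behind listenerStart, typically a missing class or a listener that assumes a resource present on the machine where the working WARs were built.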

    Read the article

  • <audio> element autobuffers no matter what

    - by pthulin
    I'm trying to make a web-based media player using the HTML5 audio element implemented in Firefox 3.5 and Chrome. Reading Mozilla's documentation, omitting the autobuffer attribute should result in the audio src not being requested:

        if specified, the audio will automatically begin being downloaded, even if not set to automatically play. This continues until the media cache is full, or the entire audio file has been downloaded, whichever comes first

    However, on the server side I notice the files are being requested anyway. My sample page is very simple:

        <html>
          <body>
            <audio src="1.ogg"></audio>
            <audio src="2.ogg"></audio>
          </body>
        </html>
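
    For what it's worth, autobuffer was only a hint and browsers were free to fetch the source anyway; the attribute was later replaced in the spec by preload. A sketch of the spelling that later Firefox and Chrome releases honour (not applicable to Firefox 3.5 itself):

        <!-- preload="none" asks the browser not to fetch the source until playback is requested -->
        <audio src="1.ogg" preload="none" controls></audio>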

    Read the article

  • How do you get Lighttpd to compress CodeIgniter's "clean urls"?

    - by ocdcoder
    I was looking at PageSpeed on my test website and noticed that Lighttpd wasn't compressing my HTML (but was compressing my JavaScript and CSS files). I'm assuming this is because I'm using CodeIgniter and its clean-URL system, and since the requests don't have file extensions, Lighttpd doesn't have a rule to compress them. That being the case, how do I get Lighttpd to compress my HTML? Is this something I shouldn't be doing? Or something I need to specially configure Lighttpd for?
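
    One common workaround, sketched here rather than prescribed: Lighttpd's mod_compress only handles static, cacheable files, so for dynamically generated HTML it is often simpler to let PHP gzip the output, e.g. in php.ini:

        zlib.output_compression = On
        zlib.output_compression_level = 6

    CodeIgniter also ships a $config['compress_output'] option in config.php that wraps the response in an output-compression handler, which achieves much the same thing at the framework level.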

    Read the article

  • Not Seeing Ajax Requests In Firebug If Header Has Been Modified

    - by FluidFoundation
    Hey braintrust, I'm making an ajax call using jQuery's library to an API, which requires a username and password encoded to base64 to be added to the header. Here's a basic example:

        $.ajax({
          type: "GET",
          contentType: 'application/json',
          beforeSend: function(xhr){
            xhr.setRequestHeader("Authentication", "Basic " + base64EncodedValue);
          },
          url: 'https://api.company.com/uri/',
          complete: function(result) {
            alert(result);
          }
        });

    But when this fires off, I get a blank alert box, so it doesn't appear as if anything is coming back. There is no log in the Firebug console that a GET ajax request was done. However, if I remove the beforeSend option, I do see the ajax request get logged, but the request gets back a 'not authorized', so it definitely hit the right place. Any ideas on why it's not showing up in Firebug, so I can verify the headers are being sent out correctly?
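
    Two things worth checking, shown as a hypothetical corrected call rather than a known fix: the standard request header for HTTP Basic credentials is Authorization (not Authentication), and a request from your page's origin to https://api.company.com is cross-origin, which the browser may abort before it ever reaches Firebug's console.

        $.ajax({
          type: "GET",
          url: 'https://api.company.com/uri/',
          contentType: 'application/json',
          beforeSend: function (xhr) {
            xhr.setRequestHeader("Authorization", "Basic " + base64EncodedValue);
          },
          complete: function (xhr) {
            alert(xhr.status);   // e.g. 200 or 401; more informative than alerting the object
          }
        });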

    Read the article

  • Best way to implement a 404 in ASP.NET

    - by Ben Mills
    I'm trying to determine the best way to implement a 404 page in a standard ASP.NET web application. I currently catch 404 errors in the Application_Error event in the Global.asax file and redirect to a friendly 404.aspx page. The problem is that the request sees a 302 redirect followed by the "page missing" 404. Is there a way to bypass the redirect and respond with an immediate 404 containing the friendly error message? Does a web crawler such as Googlebot care if the request for a non-existent page returns a 302 followed by a 404?
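
    A sketch of one common pattern (the TrySkipIisCustomErrors line assumes the IIS 7 integrated pipeline): render the friendly page in place with Server.Transfer so the client sees a single response, and set the 404 status on that response so crawlers don't index the error page.

        // Global.asax.cs -- serve the friendly page without a 302
        protected void Application_Error(object sender, EventArgs e)
        {
            var httpEx = Server.GetLastError() as HttpException;
            if (httpEx != null && httpEx.GetHttpCode() == 404)
            {
                Server.ClearError();
                Server.Transfer("~/404.aspx");
            }
        }

        // 404.aspx.cs -- keep the status code honest for crawlers
        protected void Page_Load(object sender, EventArgs e)
        {
            Response.StatusCode = 404;
            Response.TrySkipIisCustomErrors = true;   // IIS 7+ only
        }

    On the crawler question: search engines generally treat a 302 that lands on a 200 page as a soft 404 at best, so returning the 404 status directly is the safer choice.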

    Read the article

  • are there any negative implications of sourcing a javascript file that does not actually exist?

    - by dreftymac
    If you do <script src="/path/to/nonexistent/file.js"></script> in an HTML file and call that in a browser, and there are no dependencies or resources anywhere else in the HTML file that expect the file or code therein to actually exist, is there anything inherently bad-practice about doing this?

    Yes, it is an odd question. The rationale is that the developer is dealing with a CMS that allows custom (self-contained) javascript files to be provided in certain circumstances. The problem is that the CMS is not very flexible when it comes to creating conditional includes for javascript. Therefore it is easier to just make references to the self-contained js files regardless of whether they are actually at the specified path. Since no errors are displayed to the user, should this practice be considered a viable option?

    Read the article

  • 500 error on function call in Codeigniter

    - by Ilia Lev
    I am using a just-installed CI 2.1.3. Following the phpacademy tutorial, I wrote in routes.php:

        $route['default_controller'] = "site";

    (instead of: $route['default_controller'] = "welcome";) and in controllers/site.php:

        <?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');

        class Site extends CI_Controller {

            public function index() {
                echo "default function started.<br/>";
            }

            public function hello(){
                echo "hello function started.<br/>";
            }
        }

    After uploading it to the server and going to [www.mydomain.ext] it works ok (it writes "default function started."), BUT if I add 'this-hello();' to the index() function it throws a 500 error. Why does this happen and how can I resolve it? Thank you in advance.
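
    A guess worth ruling out first (the post doesn't confirm it): as quoted, this-hello(); is not valid object syntax in PHP, so it is parsed as an undefined constant minus a call to an undefined function and dies with a fatal error, which the server surfaces as a bare 500 when display_errors is off. The method-call form would be:

        public function index()
        {
            echo "default function started.<br/>";
            $this->hello();   // note the $this-> prefix
        }

    If the 500 persists with the corrected syntax, raising CodeIgniter's $config['log_threshold'] or enabling the server's error display will show the real error message instead of the bare 500.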

    Read the article

  • Location-detection techniques for IP addresses

    - by ilhan
    What are the location-detection techniques for IP addresses? I know of:

    looking at $_SERVER['HTTP_ACCEPT_LANGUAGE'] (not accurate, but often useful for detecting location; for example, if an IP range's users set French in their browsers, the range probably belongs to France)
    gethostbyaddr($_SERVER['REMOTE_ADDR']), and then maybe a whois lookup on the hostname it returns
    sometimes $HTTP_USER_AGENT (Firefox's user agent string has a language code; not accurate, but it can often be used to guess the location)

    But what about cities?
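
    For city-level results, the header and user-agent hints above only suggest a locale; a GeoIP database maps the address itself to a city. A sketch assuming the PECL geoip extension and a city-level database (e.g. MaxMind GeoLite City) are installed:

        <?php
        // geoip_record_by_name() returns false (with a notice) when the address is unknown.
        $record = @geoip_record_by_name($_SERVER['REMOTE_ADDR']);
        if ($record !== false) {
            echo $record['country_name'] . ' / ' . $record['city'];
        }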

    Read the article

  • "[object Object]" passed instead of the actual object as parameter

    - by Andrew Latham
    I am using Heroku with a Ruby on Rails application, and running from Safari. I have the following Ajax call:

        $.ajax({
          type     : 'POST',
          url      : '/test_page',
          data     : {stuff: arr1},
          dataType : 'script'
        });

    arr1 is supposed to be an array of objects. There's a console.log right before that, and it is:

        [Object, Object, Object, Object, Object, ...]

    However, I got an error on the server side when I made this ajax call. The logs showed:

        2012-10-01T03:13:34+00:00 app[web.1]: Parameters: {"stuff"=>"[object Object]"}
        2012-10-01T03:13:34+00:00 app[web.1]: WARNING: Can't verify CSRF token authenticity
        2012-10-01T03:13:34+00:00 app[web.1]: NoMethodError (undefined method `to_hash' for "[object Object]":String):
        2012-10-01T03:13:34+00:00 app[web.1]: Completed 500 Internal Server Error in 1ms

    I'm unable to replicate the error. It's really confusing to me - what would cause that string to sometimes be passed to the server instead of the object?
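
    One way to take that failure mode off the table entirely (a sketch, not a diagnosis of the Safari-specific case): serialize the array yourself, so the parameter can never fall back to JavaScript's default "[object Object]" string conversion.

        $.ajax({
          type     : 'POST',
          url      : '/test_page',
          data     : { stuff: JSON.stringify(arr1) },   // always a well-defined JSON string
          dataType : 'script'
        });

    On the Rails side, params[:stuff] then always arrives as a string and can be decoded explicitly with JSON.parse, making the intermittent case impossible rather than merely rare.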

    Read the article

  • browser cookie issue

    - by George2
    Hello everyone, in my previous understanding, only a logged-in user of a web site (no matter what login/authentication approach is used) could have a cookie as a persistent identifier, so that if the user closes the browser and opens it again to go to the same web site, the web site can remember the user. But I learned recently that even for a non-logged-in user there can still be a cookie associated with the user (after the user closes the browser and then opens it again to go to the same web site, the web site can remember the user), and that this is called a browser cookie.

    Is that true? If it is true, who is responsible for setting the browser cookie? i.e. does it need some coding/configuration on the web server side, client browser configuration (without coding from the server side), or both? How could the web server access such a cookie? I'd appreciate any code samples.

    thanks in advance, George
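
    For the code-sample request, a minimal PHP sketch (the cookie name and lifetime are arbitrary examples): any site can hand an anonymous visitor a persistent cookie via the Set-Cookie response header; the browser then sends it back on every later visit until it expires, and the server reads it from the request.

        <?php
        // Set a persistent identifier for anonymous visitors and read it back on return visits.
        if (!isset($_COOKIE['visitor_id'])) {
            $visitorId = md5(uniqid('', true));                              // hypothetical id scheme
            setcookie('visitor_id', $visitorId, time() + 60*60*24*365, '/'); // ~1 year
        } else {
            $visitorId = $_COOKIE['visitor_id'];                             // returning visitor
        }

    The same thing can also be done from the client side with document.cookie in JavaScript; either way, no login system is involved.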

    Read the article
