Search Results

Search found 21350 results on 854 pages for 'url parsing'.


  • twitter streaming api instead of search api

    - by user1711576
    I am using Twitter's Search API to view all the tweets that use a particular hashtag. However, I want to use the streaming function instead, so that I only get recent tweets and can then store them.

        <?php
        global $total, $hashtag;
        $hashtag = $_POST['hash'];
        $total = 0;

        function getTweets($hash_tag, $page) {
            global $total, $hashtag;
            $url = 'http://search.twitter.com/search.json?q=' . urlencode($hash_tag) . '&';
            $url .= 'page=' . $page;
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
            $json = curl_exec($ch);
            curl_close($ch);
            echo "<pre>";
            $json_decode = json_decode($json);
            print_r($json_decode->results);
            $json_decode = json_decode($json);
            $total += count($json_decode->results);
            if ($json_decode->next_page) {
                $temp = explode("&", $json_decode->next_page);
                $p = explode("=", $temp[0]);
                getTweets($hashtag, $p[1]);
            }
        }

        getTweets($hashtag, 1);
        echo $total;
        ?>

    The above code is what I have been using to search for the tweets I want. What do I need to change so that I can stream the tweets instead? I know I would have to use the streaming URL https://api.twitter.com/1.1/search/tweets.json, but I don't know what needs to change after that. Obviously, I know I'll need to write the database SQL, but first I just want to capture the stream and view it. How would I do this? Is the code I have been using no good for capturing the stream?
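    For reference, a minimal sketch of how the connection pattern changes for streaming (this is not the original poster's code, and the OAuth header is only a placeholder): the Streaming API's statuses/filter endpoint keeps a single HTTP connection open and delivers newline-delimited JSON to a write callback, rather than returning one finished response.

        <?php
        // Hypothetical sketch: stream tweets matching a hashtag and print them.
        // Real use requires signed OAuth credentials, which are not shown here.
        $hashtag = 'example';
        $ch = curl_init('https://stream.twitter.com/1.1/statuses/filter.json');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, 'track=' . urlencode('#' . $hashtag));
        curl_setopt($ch, CURLOPT_HTTPHEADER, array('Authorization: OAuth ...')); // placeholder
        // The stream never "finishes"; each received chunk is handed to this callback.
        curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, $data) {
            // In practice chunks should be buffered and split on newlines, since one
            // chunk is not guaranteed to contain exactly one complete tweet.
            $tweet = json_decode($data);
            if ($tweet && isset($tweet->text)) {
                echo $tweet->text . "\n";   // later: insert into the database instead
            }
            return strlen($data);           // tell curl the chunk was consumed
        });
        curl_exec($ch);
        curl_close($ch);
        ?>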

    Read the article

  • Android - BitmapFactory.decodeByteArray - OutOfMemoryError (OOM)

    - by Bob Keathley
    I have read hundreds of articles about the OOM problem. Most are in regard to large bitmaps. I am doing a mapping application where we download 256x256 weather overlay tiles. Most are totally transparent and very small. I just got a crash on a bitmap stream that was 442 bytes long while calling BitmapFactory.decodeByteArray(....). The exception states:

        java.lang.OutOfMemoryError: bitmap size exceeds VM budget
        (Heap Size=9415KB, Allocated=5192KB, Bitmap Size=23671KB)

    The code is:

        protected Bitmap retrieveImageData() throws IOException {
            URL url = new URL(imageUrl);
            InputStream in = null;
            OutputStream out = null;
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();

            // determine the image size and allocate a buffer
            int fileSize = connection.getContentLength();
            if (fileSize < 0) {
                return null;
            }
            byte[] imageData = new byte[fileSize];

            // download the file
            //Log.d(LOG_TAG, "fetching image " + imageUrl + " (" + fileSize + ")");
            BufferedInputStream istream = new BufferedInputStream(connection.getInputStream());
            int bytesRead = 0;
            int offset = 0;
            while (bytesRead != -1 && offset < fileSize) {
                bytesRead = istream.read(imageData, offset, fileSize - offset);
                offset += bytesRead;
            }

            // clean up
            istream.close();
            connection.disconnect();

            Bitmap bitmap = null;
            try {
                bitmap = BitmapFactory.decodeByteArray(imageData, 0, bytesRead);
            } catch (OutOfMemoryError e) {
                Log.e("Map", "Tile Loader (241) Out Of Memory Error " + e.getLocalizedMessage());
                System.gc();
            }
            return bitmap;
        }

    Here is what I see in the debugger: bytesRead = 442. So the bitmap data is 442 bytes. Why would it be trying to create a 23671KB bitmap and running out of memory?
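    Not an answer from the original thread, but a hedged sketch of two things worth checking in code like this: decodeByteArray has an overload taking BitmapFactory.Options, which can subsample oversized or corrupt tiles, and the length argument should be the total number of bytes read (offset), not the return value of the last read() call.

        // Sketch only -- same variables as in the method above.
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inSampleSize = 1;   // raise to 2, 4, ... to decode at a fraction of the size
        // 'offset' holds the total bytes copied into imageData; 'bytesRead' only holds
        // the size of the final read() and can even be -1 at this point.
        Bitmap bitmap = BitmapFactory.decodeByteArray(imageData, 0, offset, opts);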

    Read the article

  • C++: Get char after space character instead of carriage return.

    - by Kzone272
    Okay, this is similar to my last question, but what I ended up doing was way too complex for something as simple as this. I simply need to get a single character or number (I will know which of these I am receiving) from the console after I press space, instead of pressing enter. I'm sure there must be a way to have the terminal read input after a space instead of a '\n'. I need to read inputs from the console where the succeeding data types vary depending on what the first input is, and I think reading the entire line, parsing it into strings, then parsing some of those into ints is a bit unnecessary. So is this actually not possible in C++ or have I just not found it yet?
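    A hedged aside on the standard-library behaviour being asked about: formatted extraction with operator>> already treats any whitespace (space, tab, or newline) as a delimiter, so values can be read token by token without reading and splitting a whole line. Note that the console still line-buffers, so the tokens only arrive once Enter is pressed; reacting to the space keypress itself needs platform-specific unbuffered input (e.g. _getch() on Windows or termios on POSIX).

        #include <iostream>
        #include <string>

        int main() {
            char command;                     // the single selector character
            std::cin >> command;              // stops at the first whitespace

            if (command == 'n') {             // hypothetical: 'n' means a number follows
                int value;
                std::cin >> value;            // skips the separating space automatically
                std::cout << "number: " << value << '\n';
            } else {
                std::string word;
                std::cin >> word;             // next whitespace-delimited token
                std::cout << "word: " << word << '\n';
            }
            return 0;
        }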

    Read the article

  • A question on webpage representation in Java

    - by Gemma
    Hello there. I've followed a tutorial and came up with the following method to read webpage content into a CharSequence:

        public static CharSequence getURLContent(URL url) throws IOException {
            URLConnection conn = url.openConnection();
            String encoding = conn.getContentEncoding();
            if (encoding == null) {
                encoding = "ISO-8859-1";
            }
            BufferedReader br = new BufferedReader(new InputStreamReader(conn.getInputStream(), encoding));
            StringBuilder sb = new StringBuilder(16384);
            try {
                String line;
                while ((line = br.readLine()) != null) {
                    sb.append(line);
                    sb.append('\n');
                }
            } finally {
                br.close();
            }
            return sb;
        }

    It returns a representation of the webpage specified by the URL. However, this representation is hugely different from what I see with "view page source" in Firefox, and since I need to scrape data from the original webpage (some data segment in the original "view page source" file), it always fails to find the required text in this Java representation. Did I go wrong somewhere? I need your advice, thanks a lot for helping!

    Read the article

  • Java Applet 411 Content Length

    - by user1903006
    I am new to Java. I wrote an applet with a GUI that sends results (int w and int p) to a server, and I get the "411 Length Required" error. What am I doing wrong? How do you set a Content-Length? This is the method that communicates with the server:

        public void sendPoints1(int w, int p) {
            try {
                String url = "http://somename.com:309/api/Results";
                String charset = "UTF-8";
                String query = String.format("?key=%s&value=%s",
                        URLEncoder.encode(String.valueOf(w), charset),
                        URLEncoder.encode(String.valueOf(p), charset));
                String length = String.valueOf((url + query).getBytes("UTF-8").length);
                HttpURLConnection connection = (HttpURLConnection) new URL(url + query).openConnection();
                connection.setRequestMethod("POST");
                connection.setRequestProperty("Content-Length", length);
                connection.connect();
                System.out.println("Responce Code: " + connection.getResponseCode());
                System.out.println("Responce Message: " + connection.getResponseMessage());
            } catch (Exception e) {
                System.err.println(e.getMessage());
            }
        }
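    Not part of the original post, but a hedged sketch of the usual fix (assuming the usual java.net/java.io imports and that the endpoint accepts a form-encoded body): a 411 means the server insists on a request body with a known length, and with HttpURLConnection the simplest way to get one is to write the data through setDoOutput(true), which also takes care of Content-Length. The value computed above was the length of the URL, which is not what the header describes; even an empty body written this way would satisfy the 411 if the server really wants the data in the query string.

        public void sendPoints1(int w, int p) {
            try {
                String url = "http://somename.com:309/api/Results";
                String charset = "UTF-8";
                String body = String.format("key=%s&value=%s",
                        URLEncoder.encode(String.valueOf(w), charset),
                        URLEncoder.encode(String.valueOf(p), charset));

                HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
                connection.setRequestMethod("POST");
                connection.setDoOutput(true);   // a request body will be sent
                connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

                OutputStream out = connection.getOutputStream();
                out.write(body.getBytes(charset));   // Content-Length is derived from this body
                out.close();

                System.out.println("Response Code: " + connection.getResponseCode());
                System.out.println("Response Message: " + connection.getResponseMessage());
            } catch (Exception e) {
                System.err.println(e.getMessage());
            }
        }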

    Read the article

  • How to get checkout-able revision info from Subversion?

    - by zhongshu
    I want to check an SVN URL and get the latest revision, then check it out. I don't want to use HEAD because I will compare the latest revision against others, so I use "svn info" to get the "Last Changed Rev" for the URL, like this:

        D:\Project>svn info svn://.../branches/.../path
        Path: ...
        URL: svn://.../branches/.../path
        Repository Root: svn://yt-file-srv/
        Repository UUID: 9ed5ffd7-7585-a14e-96b2-4aab7121bb21
        Revision: 2400
        Node Kind: directory
        Last Changed Author: xxx
        Last Changed Rev: 2396
        Last Changed Date: 2010-03-12 09:31:52 +0800

    However, I found that revision 2396 is not checkout-able for the branch, because this path is in a branch copied from trunk and 2396 is the revision modified in the trunk. So when I use svn checkout -r 2396, I get a working copy of the path in the trunk, and then I cannot check in to the branch.

        D:\Project>svn checkout svn://.../branches/.../path -r 2396 workcopy
        .....
        .....
        D:\Project>svn info workcopy
        Path: workcopy
        URL: svn://.../trunk/.../path
        Repository Root: svn://yt-file-srv/
        Repository UUID: 9ed5ffd7-7585-a14e-96b2-4aab7121bb21
        Revision: 2396
        Node Kind: directory
        Schedule: normal
        Last Changed Author: xxx
        Last Changed Rev: 2396
        Last Changed Date: 2010-03-12 09:31:52 +0800

    So my question is how to get a checkout-able revision for the branch path; for this example I want to get 2397 (because 2397 is the revision at which the copy occurred). I know "svn log" can give this information, but its output may be very long and parsing it is harder than parsing "svn info". I just want to know the latest checkout-able revision for the path.
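    One hedged possibility (not from the original post): ask the log for the branch path's own history and stop at the copy point, which keeps the output short enough to parse easily. The oldest revision listed is the copy revision, 2397 in the example above.

        REM Copy revision of the branch (last revision line of the -q output):
        svn log -q --stop-on-copy svn://.../branches/.../path

        REM Newest revision that actually changed the branch path itself:
        svn log -q --limit 1 svn://.../branches/.../path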

    Read the article

  • Simple search form passing the searched string through GET

    - by Brian Roisentul
    Hi, I'd like my search form to return the following URL after submit:

        /anuncios/buscar/the_text_I_searched

    My form is the following:

        <% form_for :announcement, :url => search_path(:txtSearch) do |f| %>
          <div class="searchBox" id="basic">
            <%= text_field_tag :txtSearch,
                params[:str_search].blank? ? "Busc&aacute; tu curso r&aacute;pido y f&aacute;cil." : params[:str_search],
                :maxlength => 100,
                :class => "basicSearch_inputField",
                :onfocus => "if (this.value=='Busc&aacute; tu curso r&aacute;pido y f&aacute;cil.') this.value=''",
                :onblur => "if(this.value=='') { this.value='Busc&aacute; tu curso r&aacute;pido y f&aacute;cil.'; return false; }" %>
            <div class="basicSearch_button">
              <input type="submit" value="BUSCAR" class="basicSearch_buttonButton" />
              <br /><a href="#" onclick="javascript:jQuery('#advance').modal({opacity:60});">Busqueda avanzada</a>
            </div>
          </div>
        <% end %>

    My routes' line for search_path is this:

        map.search '/anuncios/buscar/:str_search', :controller => 'announcements', :action => 'search'

    This works if I manually type the URL I want into the browser, but if you look at the form's URL you'll find a ":txtSearch" parameter, which does not give me the actual value of the text field when the form is submitted. And that's what I'd like to get! Could anybody help me with this?

    Read the article

  • Help me make a jQuery AJAXed div's links work like an iframe.

    - by Dave
    I want to make a few divs on the same page work similarly to iframes. Each will load a URL which contains links. When you click on those links, I want an AJAX request to go out and replace the div's HTML with new HTML from the page of the clicked link. It will be very similar to surfing a page inside an iframe. Here is my code to initially load the divs (this code works):

        // onload:
        $.ajax({
            url: "http://www.foo.com/videos.php",
            cache: false,
            success: function(html){ $("#HowToVideos").replaceWith(html); }
        });
        $.ajax({
            url: "http://www.foo.com/projects.php",
            cache: false,
            success: function(html){ $("#HowToProjects").replaceWith(html); }
        });

    This is a sample of code that I'm not quite sure how to implement, but it explains the concept. Could I get some help with the selectors (surrounded in ?'s) and/or let me know what the correct way of doing this is? I also want to display a loading icon, and I need to know where the right place to put that function is.

        $(".ajaxarea a").click(function(){
            var linksURL = this.href;
            // var ParentingAjaxArea = $(this).closest(".ajaxarea");
            $.ajax({
                url: linksURL,
                cache: false,
                success: function(html){ $(ParentingAjaxArea).replaceWith(html); }
            });
            return false;
        });

        $(".ajaxarea").ajaxStart(function(){
            // show loading icon
        });
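    A hedged sketch of one way to wire this up (assuming the loaded pages keep a container carrying the ajaxarea class): a delegated click handler keeps working for links that arrive in later AJAX loads, and filling the container with .html() instead of replaceWith() keeps the container itself alive for the next click. Same-origin restrictions still apply to the URLs being loaded.

        // .on() needs jQuery 1.7+; older versions can use .delegate() the same way.
        $(document).on('click', '.ajaxarea a', function () {
            var linksURL = this.href;
            var $area = $(this).closest('.ajaxarea');   // the div acting as the "iframe"

            $area.addClass('loading');                  // hook for a CSS loading icon
            $.ajax({
                url: linksURL,
                cache: false,
                success: function (html) {
                    $area.html(html);                   // keep the container, swap its contents
                },
                complete: function () {
                    $area.removeClass('loading');
                }
            });
            return false;                               // cancel the normal navigation
        });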

    Read the article

  • how to connect to MSSQL using activerecord, JDBC, JTDS and Integrated Security

    - by Rob
    As per the above, I've tried:

        establish_connection(:adapter => "jdbcmssql",
          :url => "jdbc:jtds:sqlserver://myserver:1433/mydatabase;domain='mynetwork';",
          :username => 'user', :password => 'pass')

        establish_connection(:adapter => "jdbcmssql",
          :url => 'jdbc:jtds:sqlserver://myserver:1433/mydatabase;domain="mynetwork";user="mynetwork\user"')

        establish_connection(:adapter => "jdbcmssql",
          :url => "jdbc:jtds:sqlserver://myserver:1433/mydatabase;domain='mynetwork';",
          :username => 'user')

        establish_connection(:adapter => "jdbcmssql",
          :url => "jdbc:jtds:sqlserver://myserver:1433/mydatabase;domain='mynetwork';integratedSecurity='true'",
          :username => 'user')

    ... and various other combinations. Each time I get:

        net/sourceforge/jtds/jdbc/SQLDiagnostic.java:368:in `addDiagnostic':
        java.sql.SQLException: Login failed for user ''.
        The user is not associated with a trusted SQL Server connection. (NativeException)

    Any tips? Thanks.

        activerecord (2.3.5)
        activerecord-jdbc-adapter (0.9.6)
        activerecord-jdbcmssql-adapter (0.9.6)
        jdbc-jtds (1.2.5)
        jruby 1.4.0 (ruby 1.8.7 patchlevel 174) (2009-11-02 69fbfa3) (Java HotSpot(TM) Client VM 1.6.0_18) [x86-java]

    Read the article

  • Post values and upload image to PHP server in Android

    - by lawat
    I am trying to upload an image from an Android phone to a PHP server along with additional values; the method is POST. My PHP file looks like this:

        if($_POST['val1']){
            if($_POST['val2']){
                if($_FILE['image']){
                    ......
                }
            }
        }else{
            echo "Value not found";
        }

    What I am doing on the Android side is:

        URL url = new URL("http://www/......../myfile.php");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setDoInput(true);
        con.setDoOutput(true);
        con.setUseCaches(false);
        con.setRequestMethod("POST"); // enable HTTP POST
        con.setRequestProperty("Connection", "Keep-Alive");
        con.setRequestProperty("Content-Type", "multipart/form-data;boundary=" + "****");
        connection.setRequestProperty("uploaded_file", imagefilePath);
        DataOutputStream ostream = new DataOutputStream(con.getOutputStream());
        String res = ("Content-Disposition: form-data; name=\"val1\"" + val1 + "****"
                + "Content-Disposition: form-data; name=\"val2\"" + val2 + "****"
                + "Content-Disposition: form-data; name=\"image\";filename=\"" + imagefilePath + "\"" + "****");
        outputStream.writeBytes(res);

    My actual problem is that the values are not posted, so the first if condition is false and the else branch is executed, i.e. it prints "Value not found". Please help me.
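    Not from the original post, but a hedged sketch of what a well-formed multipart body looks like with the same variables (and the usual java.io imports): every part begins with "--" plus the boundary on its own line, the part headers are separated from the value by a blank line, and the body ends with "--" + boundary + "--". On the PHP side the file then arrives in $_FILES['image'] (note the S), while val1/val2 land in $_POST.

        String boundary = "****";           // a longer, unique boundary is safer in practice
        String CRLF = "\r\n";
        DataOutputStream out = new DataOutputStream(con.getOutputStream());

        // plain fields -> $_POST['val1'], $_POST['val2']
        out.writeBytes("--" + boundary + CRLF);
        out.writeBytes("Content-Disposition: form-data; name=\"val1\"" + CRLF + CRLF + val1 + CRLF);
        out.writeBytes("--" + boundary + CRLF);
        out.writeBytes("Content-Disposition: form-data; name=\"val2\"" + CRLF + CRLF + val2 + CRLF);

        // the file part -> $_FILES['image']
        out.writeBytes("--" + boundary + CRLF);
        out.writeBytes("Content-Disposition: form-data; name=\"image\"; filename=\"" + imagefilePath + "\"" + CRLF);
        out.writeBytes("Content-Type: application/octet-stream" + CRLF + CRLF);
        FileInputStream file = new FileInputStream(imagefilePath);
        byte[] buffer = new byte[4096];
        int read;
        while ((read = file.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        file.close();
        out.writeBytes(CRLF + "--" + boundary + "--" + CRLF);   // closing boundary
        out.flush();
        out.close();

        int responseCode = con.getResponseCode();   // actually performs the request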

    Read the article

  • initWithContentsOfURL seems to have issues with "long" URLs

    - by samsam
    Hi there, I'm facing a rather strange issue when trying to load data from an XML web service. The web service allows me to pass separated identifiers within the URL request, so it is possible for the URL to become rather long (240 characters). If I open said URL in Firefox, the response arrives as planned; if I execute the following code, xmlData remains empty.

        NSString *baseUrl = [[NSString alloc] initWithString:
            [[[[kSearchDateTimeRequestTV stringByReplacingOccurrencesOfString:@"{LANG}" withString:appLanguageCode]
                stringByReplacingOccurrencesOfString:@"{IDENTIFIERS}" withString:myIdentifiers]
                stringByReplacingOccurrencesOfString:@"{STARTTICKS}" withString:[NSString stringWithFormat:@"%@", [[startTime getTicks] descriptionWithLocale:nil]]]
                stringByReplacingOccurrencesOfString:@"{ENDTICKS}" withString:[NSString stringWithFormat:@"%@", [[endTime getTicks] descriptionWithLocale:nil]]]];
        NSLog(baseUrl); // looks good; if opened in a browser, the return value is OK

        urlRequest = [NSURL URLWithString:baseUrl];
        NSString *xmlData = [NSString stringWithContentsOfURL:urlRequest
                                                     encoding:NSUTF8StringEncoding
                                                        error:&err];
        // err is nil, therefore I guess everything must be OK... :(
        NSLog(xmlData); // nothing...

    Is there any sort of URL length restriction? Has the same problem happened to anyone else? What's a good workaround? Thanks for your help. Sam

    Read the article

  • Can I keep git from pushing the master branch to all remotes by default?

    - by Curtis
    I have a local git repository with two remotes ('origin' is for internal development, and 'other' is for an external contractor to use). The master branch in my local repository tracks the master in 'origin', which is correct. I also have a branch 'external' which tracks the master in 'other'. The problem I have now is that my master branch ALSO wants to push to the master in 'other', which is an issue. Is there any way I can specify that the local master should NOT push to other/master? I've already tried updating my .git/config file to include:

        [branch "master"]
            remote = origin
            merge = refs/heads/master
        [branch "external"]
            remote = other
            merge = refs/heads/master
        [push]
            default = upstream

    But remote show still shows that my master is pushing to both remotes:

        toko:engine cmlacy$ git remote show origin
        Password:
        * remote origin
          Fetch URL: <REPO LOCATION>
          Push  URL: <REPO LOCATION>
          HEAD branch: master
          Remote branches:
            master       tracked
            refresh-hook tracked
          Local branch configured for 'git pull':
            master merges with remote master
          Local ref configured for 'git push':
            master pushes to master (up to date)

    Those are all correct.

        toko:engine cmlacy$ git remote show other
        Password:
        * remote other
          Fetch URL: <REPO LOCATION>
          Push  URL: <REPO LOCATION>
          HEAD branch: master
          Remote branch:
            master tracked
          Local branch configured for 'git pull':
            external merges with remote master
          Local ref configured for 'git push':
            master pushes to master (local out of date)

    That last section is the problem. 'external' should merge with other/master, but master should NEVER push to other/master. It's never going to work.
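    Two hedged possibilities, neither from the original thread and both depending on the git version in use: be explicit about the destination whenever pushing, or record a dedicated push remote for master so a bare "git push" on master can only ever go to origin.

        # Always explicit -- works on any git version:
        git push origin master
        git push other external:master

        # Or record a push remote for master (understood by git 1.8.3 and later):
        git config branch.master.pushRemote origin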

    Read the article

  • Can you make a python script behave differently when imported than when run directly?

    - by futuraprime
    I often have to write data parsing scripts, and I'd like to be able to run them in two different ways: as a module and as a standalone script. So, for example:

        def parseData(filename):
            # data parsing code here
            return data

        def HypotheticalCommandLineOnlyHappyMagicFunction():
            print json.dumps(parseData(sys.argv[1]), indent=4)

    The idea here is that in another Python script I can call import dataparser and have access to dataparser.parseData in my script, or on the command line I can just run python dataparser.py and it would run my HypotheticalCommandLineOnlyHappyMagicFunction and shunt the data as JSON to stdout. Is there a way to do this in Python?
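    A minimal sketch of the standard idiom for exactly this, assuming the file is named dataparser.py: module-level code guarded by the __name__ check runs only when the file is executed directly, not when it is imported.

        import json
        import sys


        def parseData(filename):
            # real parsing code goes here; a placeholder result keeps the sketch runnable
            return {"file": filename}


        def main():
            # reached for "python dataparser.py somefile", skipped for "import dataparser"
            print(json.dumps(parseData(sys.argv[1]), indent=4))


        if __name__ == "__main__":
            main()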

    Read the article

  • How to parse an XML string value using jQuery?

    - by Vijay
    Hi all, I am new to jQuery. I am trying to parse an XML string using jQuery. I found this sample code:

        $(function() {
            $.get('data.xml', function(d) {
                var data = "";
                var startTag = "<table border='1' id='mainTable'><tbody><tr><td style=\"width: 120px\">Name</td><td style=\"width: 120px\">Link</td></tr>";
                var endTag = "</tbody></table>";
                $(d).find('url').each(function() {
                    var $url = $(this);
                    var link = $url.find('link').text();
                    var name = $url.find('name').text();
                    data += '<tr><td>' + name + '</td>';
                    data += '<td>' + link + '</td></tr>';
                })
                $("#content").html(startTag + data + endTag);
            });
        });

    In this case, I am able to parse and fetch the values from the XML file. But now what I am looking for is, instead of reading the file from disk, I want to read the XML from a string. Say, instead of data.xml I want to parse a string which consists of well-formed XML. Does anyone have any idea about this? Thanks in advance.
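    A hedged sketch of one way to do it (jQuery 1.5 and later ship $.parseXML; the sample markup below is made up): parse the string into an XML document first, then wrap the document with $() and reuse exactly the same find()/each() traversal as above.

        var xmlString =
            '<urls>' +
            '  <url><name>Example</name><link>http://example.com</link></url>' +
            '</urls>';

        var xmlDoc = $.parseXML(xmlString);       // string -> XML document

        $(xmlDoc).find('url').each(function () {
            var $url = $(this);
            var link = $url.find('link').text();
            var name = $url.find('name').text();
            // build the table rows exactly as in the file-based version
            console.log(name + ' -> ' + link);
        });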

    Read the article

  • CSS Sprite for images which have vertical as well as horizontal repeats

    - by Rachel
    I have four images, one of which has the background-repeat property in the horizontal direction and three of which repeat in the vertical direction. I have different CSS classes which currently use these images as follows:

        .sb_header_dropdown {
            background: url(images/shopping_dropdown_bg.gif) repeat-y top left;
            padding: 8px 3px 8px 15px;
        }
        .shopping_basket_dropdown .sb_body {
            background: url(images/shopping_dropdown_body_bg.png) repeat-y top left;
            margin: 0;
            padding: 5px 9px 5px 8px;
            position: relative;
            z-index: 99999;
        }
        .checkout_cart .co_header_left {
            background: url(images/bg.gif) repeat-x 0 -150px;
            overflow: hidden;
            padding-left: 3px;
        }
        .sb_dropdown_footer {
            background: url(images/shopping_dropdown_footer_bg.png) repeat-y top left;
            clear: both;
            height: 7px;
            font-size: 0;
        }

    So here I am making 4 HTTP requests, and I want to implement a CSS sprite for all 4 images so that I can reduce the number of HTTP requests from 4 to 1. The thing to keep in mind is that all 4 images have background-repeat, in either the x-direction or the y-direction, so how should the sprite be created, and how can it be used in the CSS to reduce the number of HTTP requests? I hope this question is clear.

    Read the article

  • Time out when creating a site collection

    - by Daeko
    I am trying to create a site collection programmatically. It worked for about 6 months, but after the servers were updated (various patches) it doesn't work anymore (we have 3 servers: 1 development, 1 test, 1 production). It is still working in my development environment, which hasn't been updated, but not on the other two. I don't receive any error messages; it just hangs at the code that is supposed to add the site collection (see code below). I am using Windows Server 2003 R2 and SharePoint 2007 (version 12.0.0.6421). It doesn't give me any errors, it just hangs until Internet Explorer comes back with a "request timed out" response. If I try to debug the code, the code just stops there and nothing happens. No error messages or anything.

        public static string CreateSPAccountSite(string siteName)
        {
            string url = "";
            SPSecurity.RunWithElevatedPrivileges(delegate()
            {
                SPWeb web = SPContext.Current.Web;
                using (SPSite siteCollectionOuter = new SPSite(web.Site.ID))
                {
                    SPWebApplication webApp = siteCollectionOuter.WebApplication;
                    SPSiteCollection siteCollection = webApp.Sites;
                    SPSite site = siteCollection.Add("sites/" + siteName, siteName,
                        "Auto generated Site collection.", 1033, "STS#0",
                        siteCollectionOuter.Owner.LoginName,
                        siteCollectionOuter.Owner.Name,
                        siteCollectionOuter.Owner.Email);   // Hangs here
                    site.PortalName = "Portal";
                    site.PortalUrl = mainUrl;               // https://www.ourdomain.net
                    url = site.Url;
                }
            });
            return url; // Should be "https://www.outdomain.net/sites/siteName"
        }

    Read the article

  • Android: Gzip/Http supported by default?

    - by OneWorld
    I am using the code shown below to get data from our server, where gzip is turned on. Does my code already support gzip (maybe this is already done by Android and not by my Java program), or do I have to add/change something? How can I check that it's using gzip? In my opinion the download is kind of slow.

        private static InputStream OpenHttpConnection(String urlString) throws IOException {
            InputStream in = null;
            int response = -1;
            URL url = new URL(urlString);
            URLConnection conn = url.openConnection();
            if (!(conn instanceof HttpURLConnection))
                throw new IOException("Not an HTTP connection");
            try {
                HttpURLConnection httpConn = (HttpURLConnection) conn;
                httpConn.setAllowUserInteraction(false);
                httpConn.setInstanceFollowRedirects(true);
                httpConn.setRequestMethod("GET");
                httpConn.connect();
                response = httpConn.getResponseCode();
                if (response == HttpURLConnection.HTTP_OK) {
                    in = httpConn.getInputStream();
                    if (in == null)
                        throw new IOException("No data");
                }
            } catch (Exception ex) {
                throw new IOException("Error connecting");
            }
            return in;
        }
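    A hedged sketch of handling gzip explicitly rather than relying on the platform (whether HttpURLConnection asks for gzip on its own varies with the Android version): advertise support in the request, then only wrap the stream when the server says it actually compressed the response. Logging getContentEncoding() on the existing connection is also a quick way to check what is happening today.

        import java.io.IOException;
        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.zip.GZIPInputStream;

        private static InputStream openGzipAwareConnection(String urlString) throws IOException {
            HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Accept-Encoding", "gzip");   // advertise gzip support
            conn.connect();

            InputStream in = conn.getInputStream();
            String encoding = conn.getContentEncoding();          // what did the server do?
            if (encoding != null && encoding.equalsIgnoreCase("gzip")) {
                in = new GZIPInputStream(in);                     // decompress transparently
            }
            return in;
        }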

    Read the article

  • Automator / AppleScript to process incoming emails in Mac Mail

    - by mark
    Hello all, I'm designing an app that allows users to email me crash reports if my app ever crashes. I'd like to leave Mac Mail running on a computer, and when an email comes through, an Automator script / AppleScript runs to process the contents of the body of the email. I've got the entire parsing/processing done in a Python script, except that I have to manually copy the contents of the email into a file and then run my parser on that file. What's the best way to set this up so the contents of the email can be pushed into my parsing script? Many thanks!

    Read the article

  • optimized search using ajax and keypress

    - by ooo
    I have the following code, as I want to search a database as the user is typing into a textbox. The code below works fine, but it seems a little inefficient: if a user is typing really fast, I am potentially doing many more searches than necessary. So if a user is typing in "sailing", I am searching on "sail", "saili", "sailin", and "sailing". I wanted to see if there was a way to detect the time between keypresses so I only search if the user stops typing for 500 milliseconds or so. Is there a best practice for something like this?

        $('#searchString').keypress(function(e) {
            if (e.keyCode == 13) {
                var url = '/Tracker/Search/' + $("#searchString").val();
                $.get(url, function(data) {
                    $('div#results').html(data);
                    $('#results').show();
                });
            } else {
                var existingString = $("#searchString").val();
                if (existingString.length > 2) {
                    var url = '/Tracker/Search/' + existingString;
                    $.get(url, function(data) {
                        $('div#results').html(data);
                        $('#results').show();
                    });
                }
            }
        });
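    A hedged sketch of the usual pattern for this (often called debouncing): restart a timer on every keystroke and only issue the request after ~500 ms of silence, so intermediate prefixes like "sail" and "saili" never reach the server. keyup is used here so the field's value already contains the latest character.

        var searchTimer = null;

        function runSearch(term) {
            var url = '/Tracker/Search/' + encodeURIComponent(term);
            $.get(url, function (data) {
                $('div#results').html(data);
                $('#results').show();
            });
        }

        $('#searchString').keyup(function (e) {
            var term = $(this).val();

            if (e.keyCode == 13) {          // Enter: search immediately
                clearTimeout(searchTimer);
                runSearch(term);
                return;
            }
            clearTimeout(searchTimer);      // still typing, restart the clock
            if (term.length > 2) {
                searchTimer = setTimeout(function () {
                    runSearch(term);        // fires only after 500 ms without a keystroke
                }, 500);
            }
        });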

    Read the article

  • Help redirecting to a page with adverts for 5 seconds, and then redirecting to another page.

    - by XcodeDev
    Hey, I am trying to redirect a page to another page, and that was working successfully. However, now I am trying to redirect the first page to a page with adverts, and that page should then redirect to another page after five seconds. I am trying to do it like this:

        <?php include('ads.php'); ?>
        <?php
        sleep(2);
        $url = $_GET['url'];
        header("Location: ".$url."");
        exit;
        ?>

    It shows the advert from ads.php perfectly, but it is not redirecting after five seconds. I am receiving this error in my web browser:

        Warning: Cannot modify header information - headers already sent by
        (output started at /home/nucleusi/public_html/adverts/ads.php:1)
        in /home/nucleusi/public_html/adverts/index.php on line 7

    A typical link I would be redirecting to would be this:

        http://nucleusiphone.com/adverts/index.php/?url=http%3A%2F%2Fitunes.apple.com%2Fmx%2Falbum%2Fstill-got-the-blues%2Fid14135178%3Fi%3D14135158

    Thanks in advance. PS: I don't know any PHP, so any code helps!
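    A hedged sketch of the usual workaround (not from the original post): once ads.php has produced output, header("Location: ...") can no longer be sent, so let the ad page trigger the delayed redirect on the client side instead, with a meta refresh and/or its JavaScript equivalent. Note that echoing a visitor-supplied URL like this allows redirects to arbitrary sites, so the target should really be validated first.

        <?php include('ads.php'); ?>
        <?php $url = isset($_GET['url']) ? $_GET['url'] : '/'; ?>
        <!-- redirect after 5 seconds without needing another HTTP header -->
        <meta http-equiv="refresh" content="5;url=<?php echo htmlspecialchars($url); ?>">
        <script type="text/javascript">
            // JavaScript fallback doing the same thing after 5000 ms
            setTimeout(function () {
                window.location.href = <?php echo json_encode($url); ?>;
            }, 5000);
        </script>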

    Read the article

  • Pipelining String in Powershell

    - by ChvyVele
    I'm trying to make a simple PowerShell function to have a Linux-style ssh command, such as:

        ssh username@url

    I'm using plink to do this, and this is the function I have written:

        function ssh {
            param($usernameAndServer)
            $myArray = $usernameAndServer.Split("@")
            $myArray[0] | C:\plink.exe -ssh $myArray[1]
        }

    If entered correctly by the user, $myArray[0] is the username and $myArray[1] is the URL. Thus, it connects to the URL, and when you're prompted for a username, the username is streamed in using the pipeline. Everything works perfectly, except the pipeline keeps feeding the username ($myArray[0]), and it is entered as the password over and over. Example:

        PS C:\Users\Mike> ssh xxxxx@yyyyy
        login as:
        xxxxx@yyyyy's password:
        Access denied
        xxxxx@yyyyy's password:
        Access denied
        xxxxx@yyyyy's password:
        Access denied
        xxxxx@yyyyy's password:
        Access denied
        xxxxx@yyyy's password:
        Access denied
        xxxxx@yyyyy's password:
        FATAL ERROR: Server sent disconnect message type 2 (protocol error):
        "Too many authentication failures for xxxxx"

    Here the username has been substituted with xxxxx and the URL has been substituted with yyyyy. Basically, I need to find out how to stop the script from piping in the username ($myArray[0]) after it has been entered once. Any ideas? I've looked all over the internet for a solution and haven't found anything.
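    A hedged sketch that sidesteps the pipeline entirely (not from the original post): plink accepts the username as part of user@host, or separately via -l, so nothing needs to be piped in and the password prompt is left for the user to answer normally.

        function ssh {
            param($usernameAndServer)

            # plink understands the user@host form directly, so just pass it through.
            # Equivalent: split on '@' and call  & C:\plink.exe -ssh -l $user $server
            & C:\plink.exe -ssh $usernameAndServer
        }

        # usage:
        #   ssh xxxxx@yyyyy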

    Read the article

  • How to pass a value to a controller?

    - by rajesh
    When I try to pass a URL value to a controller action, the action is not getting the required value. I'm sending the value like this:

        function value(url, id) {
            alert(url);
            document.getElementById('rating').innerHTML = id;
            var params = 'artist=' + id;
            alert(params);
            // var newurl = 'http://localhost/songs_full/public/eslresult/ratesong/userid/1/id/27';
            var myAjax = new Ajax.Request(newurl, {method: 'post', parameters: params, onComplete: loadResponse});
            //var myAjax = new Ajax.Request(url, {method: 'POST', parameters: params, onComplete: load});
            //alert(myAjax);
        }

        function load(http) {
            alert('success');
        }

    And in the controller I have:

        public function ratesongAction()
        {
            $user = $_POST['rating'];
            echo $user;
            $post = $this->getRequest()->getPost();
            //echo $post;
            $ratesongid = $this->_getParam('id');
        }

    But I am still not getting the result. I am using Zend Framework.

    Read the article

  • Better way to download a binary file?

    - by geoff
    I have a site where a user can download a file. Some files are extremely large (the largest being 323 MB). When I test it and try to download this file, I get an out-of-memory exception. The only way I know to download the file is shown below. The reason I'm using this code is that the URL is encoded and I can't let the user link directly to the file. Is there another way to download this file without having to read the whole thing into a byte array?

        FileStream fs = new FileStream(context.Server.MapPath(url), FileMode.Open, FileAccess.Read);
        BinaryReader br = new BinaryReader(fs);
        long numBytes = new FileInfo(context.Server.MapPath(url)).Length;
        byte[] bytes = br.ReadBytes((int) numBytes);
        string filename = Path.GetFileName(url);

        context.Response.Buffer = true;
        context.Response.Charset = "";
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.BinaryWrite(bytes);
        context.Response.Flush();
        context.Response.End();
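    A hedged sketch of streaming the response instead of buffering it (same context and url variables as above): Response.TransmitFile hands the file to IIS without loading it into managed memory, and the commented alternative copies it through in small chunks for cases where the bytes must pass through your own code.

        string path = context.Server.MapPath(url);
        string filename = Path.GetFileName(url);

        context.Response.Buffer = false;    // do not accumulate the body in memory
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.ContentType = "application/x-rar-compressed";
        context.Response.AddHeader("content-disposition", "attachment;filename=" + filename);
        context.Response.AddHeader("content-length", new FileInfo(path).Length.ToString());

        // Option 1: let IIS stream the file from disk.
        context.Response.TransmitFile(path);

        // Option 2: manual chunked copy.
        // using (FileStream fs = File.OpenRead(path))
        // {
        //     byte[] buffer = new byte[64 * 1024];
        //     int read;
        //     while ((read = fs.Read(buffer, 0, buffer.Length)) > 0
        //            && context.Response.IsClientConnected)
        //     {
        //         context.Response.OutputStream.Write(buffer, 0, read);
        //         context.Response.Flush();
        //     }
        // }

        context.Response.End();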

    Read the article

  • Nginx - Treats PHP as binary

    - by Think Floyd
    We are running Nginx + FastCGI as the backend for our Drupal site. Everything seems to work fine, except for this one URL:

        http:///sites/all/modules/tinymce/tinymce/jscripts/tiny_mce/plugins/smimage/index.php

    (We use the TinyMCE module in Drupal, and the URL above is invoked when a user tries to upload an image.) When we were using Apache, everything worked fine. However, nginx treats the above URL as binary and tries to download it. (We've verified that the file pointed to by the URL is a valid PHP file.) Any idea what could be wrong here? I think it's something to do with the nginx configuration, but I'm not entirely sure what. Any help is greatly appreciated. Here's the relevant snippet from the nginx configuration file:

        root /var/www/;
        index index.php;

        if (!-e $request_filename) {
            rewrite ^/(.*)$ /index.php?q=$1 last;
        }
        error_page 404 index.php;

        location ~* \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ {
            deny all;
        }

        location ~* ^.+\.(jpg|jpeg|gif|png|ico)$ {
            access_log off;
            expires 7d;
        }

        location ~* ^.+\.(css|js)$ {
            access_log off;
            expires 7d;
        }

        location ~ .php$ {
            include /etc/nginx/fcgi.conf;
            fastcgi_pass 127.0.0.1:8888;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
        }

        location ~ /\.ht {
            deny all;
        }

    Read the article
