Search Results

Search found 11646 results on 466 pages for 'progressive download'.


  • Curl download image not working

    - by mark
    I would like to check whether a remote image is not older than 2 days and then download it. The image is not downloaded in any case. What is wrong here?

        $ch = curl_init($file_source); // the file we are downloading
        curl_setopt($ch, CURLOPT_TIMEOUT, 15);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_FILETIME, true);
        curl_setopt($ch, CURLOPT_HEADER, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, false);
        curl_exec($ch);
        $headers = curl_getinfo($ch);
        $last_modified = $headers['filetime'];
        if ($last_modified != -1) {
            if ($last_modified > time() - 86400*2) {
                $ch2 = curl_init($file_source);
                $wh = fopen($file_target, 'wb') or errorIMG('003');
                curl_setopt($ch2, CURLOPT_FILE, $wh);
                curl_setopt($ch2, CURLOPT_TIMEOUT, 25);
                curl_setopt($ch2, CURLOPT_FOLLOWLOCATION, true);
                curl_setopt($ch2, CURLOPT_HEADER, true);
                curl_setopt($ch2, CURLOPT_RETURNTRANSFER, true);
                curl_exec($ch2);
                curl_close($ch2);
                fclose($wh);
            }
        }
        curl_close($ch);
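    A possible simplification (a sketch only, reusing the $file_source and $file_target variables from the question, and assuming the server honours If-Modified-Since): let curl apply the age check itself with a time condition, so the body is fetched and written in a single request only when the remote image was modified within the last two days.

        <?php
        // Sketch: download $file_source into $file_target only if the remote
        // image was modified within the last 2 days (variables from the question).
        $ch = curl_init($file_source);
        $wh = fopen($file_target, 'wb');
        curl_setopt($ch, CURLOPT_FILE, $wh);                 // write the body straight to the file
        curl_setopt($ch, CURLOPT_HEADER, false);             // keep headers out of the saved file
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 25);
        curl_setopt($ch, CURLOPT_TIMECONDITION, CURL_TIMECOND_IFMODSINCE);
        curl_setopt($ch, CURLOPT_TIMEVALUE, time() - 86400 * 2);
        curl_exec($ch);
        $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);       // 304 means "not modified since"
        curl_close($ch);
        fclose($wh);
        if ($code == 304) {
            unlink($file_target);                            // nothing newer than 2 days was sent
        }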


  • Download Remote File

    - by Abs
    Hello all, I have a function that will be passed a link. The link is to a remote image. I thought I could just use the extension of the file in the URL to determine the type of image, but some URLs won't have extensions in the URL. They probably just push headers to the browser, and therefore I do not have an extension to parse from the URL. How can I test if the URL has an extension and, if not, read the headers to determine the file type? Am I over-complicating things here? Is there an easier way to do this? I am making use of CodeIgniter; maybe there is something already built in to do this? All I really want to do is download an image from a URL with the correct extension. This is what I have so far.

        function get_image($image_link) {
            $remoteFile = $image_link;
            $ext = ''; // some URLs might not have an extension
            $file = fopen($remoteFile, "r");
            if (!$file) {
                return false;
            } else {
                $line = '';
                while (!feof($file)) {
                    $line .= fgets($file, 4096);
                }
                $file_name = time().$ext;
                file_put_contents($file_name, $line);
            }
            fclose($file);
        }

    Thanks all for any help
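    One way to do the header check (a minimal sketch, assuming the server sends a usable Content-Type header; the guess_extension() helper and its MIME-to-extension map are only illustrative and far from complete):

        <?php
        // Sketch: take the extension from the URL path if present, otherwise
        // map the Content-Type response header to an extension.
        function guess_extension($url) {
            $path_ext = strtolower(pathinfo(parse_url($url, PHP_URL_PATH), PATHINFO_EXTENSION));
            if ($path_ext !== '') {
                return '.'.$path_ext;
            }
            $map = array('image/jpeg' => '.jpg', 'image/png' => '.png', 'image/gif' => '.gif');
            $headers = get_headers($url, 1);                 // associative array of response headers
            $type = isset($headers['Content-Type']) ? $headers['Content-Type'] : '';
            if (is_array($type)) {                           // redirects can yield several values
                $type = end($type);
            }
            $parts = explode(';', $type);                    // strip "; charset=..." etc.
            $type = strtolower(trim($parts[0]));
            return isset($map[$type]) ? $map[$type] : '';
        }

        // Usage in get_image(): $file_name = time().guess_extension($image_link);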


  • Nginx - Treats PHP as binary

    - by Think Floyd
    We are running Nginx+FastCGI as the backend for our Drupal site. Everything seems to work fine, except for this one URL: http:///sites/all/modules/tinymce/tinymce/jscripts/tiny_mce/plugins/smimage/index.php (we use the TinyMCE module in Drupal, and the URL above is invoked when the user tries to upload an image). When we were using Apache, everything was working fine. However, nginx treats the above URL as binary and tries to download it. (We've verified that the file pointed to by the URL is a valid PHP file.) Any idea what could be wrong here? I think it's something to do with the nginx configuration, but I'm not entirely sure what that is. Any help is greatly appreciated. Config: here's the snippet from the nginx configuration file:

        root /var/www/;
        index index.php;
        if (!-e $request_filename) {
            rewrite ^/(.*)$ /index.php?q=$1 last;
        }
        error_page 404 index.php;
        location ~* \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ {
            deny all;
        }
        location ~* ^.+\.(jpg|jpeg|gif|png|ico)$ {
            access_log off;
            expires 7d;
        }
        location ~* ^.+\.(css|js)$ {
            access_log off;
            expires 7d;
        }
        location ~ .php$ {
            include /etc/nginx/fcgi.conf;
            fastcgi_pass 127.0.0.1:8888;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
        }
        location ~ /\.ht {
            deny all;
        }


  • Download Canvas Image PNG Chrome/Safari

    - by user2639176
    Works in Firefox, but won't work in Safari or Chrome.

        function loadimage() {
            var canvas = document.getElementById("canvas");
            if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
                xmlhttp2 = new XMLHttpRequest();
            } else { // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
                xmlhttp2 = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    rasterizeHTML.drawHTML(xmlhttp.responseText, canvas);
                    var t = setTimeout(function(){ copy() }, 3000);
                }
            }
            xmlhttp.open("GET", "/sm/<?=$sm[0];?>", true);
            xmlhttp.send();
        }

        function copy() {
            var canvas = document.getElementById("canvas");
            var img = canvas.toDataURL("image/png");
            document.getElementById('dl').href = img;
            document.getElementById('dl').innerHTML = "Download";
        }

    Now I didn't write this, so I don't know too much JavaScript, but the script works in Firefox. In Chrome I'm getting "Uncaught SecurityError: An attempt was made to break through the security policy of the user agent." for toDataURL("image/png").


  • Serving files (800MB) results in an empty file

    - by azz0r
    Hello, with the following code small files are served fine; however, large files (say, 800MB and above) result in empty files! Would I need to do something with Apache to solve this?

        <?php
        class Model_Download {

            function __construct($path, $file_name) {
                $this->full_path = $path.$file_name;
            }

            public function execute() {
                if ($fd = fopen($this->full_path, "r")) {
                    $fsize = filesize($this->full_path);
                    $path_parts = pathinfo($this->full_path);
                    $ext = strtolower($path_parts["extension"]);
                    switch ($ext) {
                        case "pdf":
                            header("Content-type: application/pdf"); // add here more headers for diff. extensions
                            header("Content-Disposition: attachment; filename=\"".$path_parts["basename"]."\""); // use 'attachment' to force a download
                            break;
                        default;
                            header("Content-type: application/octet-stream");
                            header("Content-Disposition: filename=\"".$path_parts["basename"]."\"");
                            break;
                    }
                    header("Content-length: $fsize");
                    header("Cache-control: private"); // use this to open files directly
                    while (!feof($fd)) {
                        $buffer = fread($fd, 2048);
                        echo $buffer;
                    }
                }
                fclose($fd);
                exit;
            }
        }
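    One thing that often produces exactly this symptom (an assumption here, not a confirmed diagnosis) is PHP output buffering or the execution-time limit, so the 800MB never reaches the client. Below is a minimal sketch of the streaming part only, with buffering disabled and each chunk flushed; the header logic above stays as it is.

        <?php
        // Sketch of the streaming part only (headers as above). Assumes the
        // symptom comes from output buffering / execution-time limits.
        set_time_limit(0);                    // long downloads can exceed max_execution_time
        while (ob_get_level() > 0) {
            ob_end_clean();                   // drop any active output buffers before streaming
        }
        $fd = fopen($this->full_path, 'rb');
        while (!feof($fd)) {
            echo fread($fd, 8192);
            flush();                          // push each chunk to the client immediately
        }
        fclose($fd);
        exit;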


  • select value of td and download content of selected tds

    - by user1272145
    I have this table:

        <table class="results" id="summary_results">
            <tr>
                <td>select all</td>
                <td>name</td>
                <td>id</td>
                <td>address</td>
                <td>url</td>
            </tr>
            <tr>
                <td><input type="checkbox"></td>
                <td>john doe</td>
                <td>1</td>
                <td>33.85 some address</td>
                <td>http://www.domain.com</td>
            </tr>
            <tr>
                <td><input type="checkbox"></td>
                <td>jane doe</td>
                <td>2</td>
                <td>34.85 some address</td>
                <td>http://www.domain2.com</td>
            </tr>
            <tr>
                <td><input type="checkbox"></td>
                <td>sam</td>
                <td>3</td>
                <td>33.86 some address</td>
                <td>http://www.domain3.com</td>
            </tr>
        </table>

    I would like to select all the rows and then download the content of the URLs, knowing that each URL is linked to the id. For example, the first URL will be www.domain.com?id=1&report=report


  • Is it possible to download a large database using a MySQL query

    - by Rose
    I am downloading files from the server using WinSCP. Is it possible to write a query to download a large database using a MySQL query, or using any other method? I have tried with this code but I am not able to get the whole database structure:

        <?php
        if (file_exists('backup_sql/my_backup.zip')) {
            unlink('backup_sql/my_backup.zip');
        }
        $tables = '*';
        $host = 'MY HOST NAME';
        $user = 'MY_USERNAME';
        $pass = 'MYPASSWORD';
        $name = 'MY_DB_NAME';
        $link = mysql_connect($host, $user, $pass);
        mysql_select_db($name, $link);
        // get all of the tables
        if ($tables == '*') {
            $tables = array();
            $result = mysql_query('SHOW TABLES');
            while ($row = mysql_fetch_row($result)) {
                $tables[] = $row[0];
            }
        } else {
            $tables = is_array($tables) ? $tables : explode(',', $tables);
        }
        $return = '';
        // cycle through
        foreach ($tables as $table) {
            $result = mysql_query('SELECT * FROM '.$table);
            $num_fields = mysql_num_fields($result);
            //$return .= 'DROP TABLE '.$table.';';
            $row2 = mysql_fetch_row(mysql_query('SHOW CREATE TABLE '.$table));
            $return .= "\n\n".$row2[1].";\n\n";
            for ($i = 0; $i < $num_fields; $i++) {
                while ($row = mysql_fetch_row($result)) {
                    $return .= 'INSERT INTO '.$table.' VALUES(';
                    for ($j = 0; $j < $num_fields; $j++) {
                        $row[$j] = addslashes($row[$j]);
                        //$row[$j] = ereg_replace("\n","\\n",$row[$j]);
                        if (isset($row[$j])) {
                            $return .= '"'.$row[$j].'"';
                        } else {
                            $return .= '""';
                        }
                        if ($j < ($num_fields-1)) {
                            $return .= ',';
                        }
                    }
                    $return .= ");\n";
                }
            }
            $return .= "\n\n\n";
        }
        $rand_var = time();
        $files_to_zip = array(
            "'backup_sql/db-backup-'.$rand_var.'.sql'",
        );
        $name = 'db-backup-'.$rand_var.'.sql';
        $data = $return;
        ?>

    Anyone, please help me... thank you
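    Note that $return (and $data) is never actually written to disk in the code above. A sketch of the missing step follows - either persisting the generated SQL, or letting mysqldump produce the whole dump (this assumes shell access and mysqldump being installed on the server; $db_name and $file_name are illustrative names, not from the question):

        <?php
        // Sketch only: write the generated dump to a file, or shell out to mysqldump.
        $db_name   = 'MY_DB_NAME';                          // illustrative, as in the question
        $file_name = 'backup_sql/db-backup-'.time().'.sql';

        // Option 1: persist the $return string built above
        file_put_contents($file_name, $return);

        // Option 2: let mysqldump dump structure and data in one go
        $cmd = sprintf('mysqldump --host=%s --user=%s --password=%s %s > %s',
                       escapeshellarg($host), escapeshellarg($user),
                       escapeshellarg($pass), escapeshellarg($db_name),
                       escapeshellarg($file_name));
        exec($cmd, $output, $status);                       // $status is 0 on success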


  • Is there a way I can choose what individual files to download within a .torrent?

    - by Valamas
    Is there a way to get the list of files under the Files tab without the download starting? Sometimes I am after a single file within a torrent; however, to see the list, I need to start downloading. I then select the only file I want to download, but the other files which started to download and were then stopped will still appear on the completed side, even if only 1k/600k of that file has downloaded. You can imagine the mess and confusion this can cause. So, is there a way I can instruct my BitTorrent/uTorrent program to just grab the file list? Solution: you must download the .torrent file, which contains the list of files. If you download using a magnet link, you do not get the list of files until the torrent starts.


  • How do I catch this WPF Bitmap loading exception?

    - by mmr
    I'm developing an application that loads bitmaps off of the web using .NET 3.5 SP1 and C#. The loading code looks like:

        try
        {
            CurrentImage = pics[unChosenPics[index]];
            bi = new BitmapImage(CurrentImage.URI);
            // BitmapImage.UriSource must be in a BeginInit/EndInit block.
            bi.DownloadCompleted += new EventHandler(bi_DownloadCompleted);
            AssessmentImage.Source = bi;
        }
        catch
        {
            System.Console.WriteLine("Something broke during the read!");
        }

    and the code to load on bi_DownloadCompleted is:

        void bi_DownloadCompleted(object sender, EventArgs e)
        {
            try
            {
                double dpi = 96;
                int width = bi.PixelWidth;
                int height = bi.PixelHeight;
                int stride = width * 4; // 4 bytes per pixel
                byte[] pixelData = new byte[stride * height];
                bi.CopyPixels(pixelData, stride, 0);
                BitmapSource bmpSource = BitmapSource.Create(width, height, dpi, dpi, PixelFormats.Bgra32, null, pixelData, stride);
                AssessmentImage.Source = bmpSource;
                Loading.Visibility = Visibility.Hidden;
                AssessmentImage.Visibility = Visibility.Visible;
            }
            catch
            {
                System.Console.WriteLine("Exception when viewing bitmap.");
            }
        }

    Every so often, an image comes along that breaks the reader. I guess that's to be expected. However, rather than being caught by either of those try/catch blocks, the exception is apparently getting thrown outside of where I can handle it. I could handle it using global WPF exceptions, like this SO question. However, that will seriously mess up the control flow of my program, and I'd like to avoid that if at all possible. I have to do the double source assignment because it appears that many images are lacking width/height parameters in the places where the Microsoft bitmap loader expects them to be. So, the first assignment appears to force the download, and the second assignment gets the DPI/image dimensions to happen properly. What can I do to catch and handle this exception?
    Stack trace:

        at MS.Internal.HRESULT.Check(Int32 hr)
        at System.Windows.Media.Imaging.BitmapFrameDecode.get_ColorContexts()
        at System.Windows.Media.Imaging.BitmapImage.FinalizeCreation()
        at System.Windows.Media.Imaging.BitmapImage.OnDownloadCompleted(Object sender, EventArgs e)
        at System.Windows.Media.UniqueEventHelper.InvokeEvents(Object sender, EventArgs args)
        at System.Windows.Media.Imaging.LateBoundBitmapDecoder.DownloadCallback(Object arg)
        at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
        at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
        at System.Windows.Threading.DispatcherOperation.InvokeImpl()
        at System.Threading.ExecutionContext.runTryCode(Object userData)
        at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Windows.Threading.DispatcherOperation.Invoke()
        at System.Windows.Threading.Dispatcher.ProcessQueue()
        at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
        at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
        at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o)
        at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
        at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
        at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Boolean isSingleParameter)
        at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam)
        at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg)
        at System.Windows.Threading.Dispatcher.TranslateAndDispatchMessage(MSG& msg)
        at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame)
        at System.Windows.Application.RunInternal(Window window)
        at LensComparison.App.Main() in C:\Users\Mark64\Documents\Visual Studio 2008\Projects\LensComparison\LensComparison\obj\Release\App.g.cs:line 48
        at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
        at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Threading.ThreadHelper.ThreadStart()


  • There's some HTML content in a file which is downloaded using PHP scripts on Firefox

    - by chqiu
    I use the following to download a file with PHP:

        ob_start();
        $browser = id_browser();
        header('Content-Type: '.(($browser=='IE' || $browser=='OPERA') ? 'application/octetstream' : 'application/octet-stream'));
        header('Expires: '.gmdate('D, d M Y H:i:s').' GMT');
        header('Content-Transfer-Encoding: binary');
        header('Content-Length: '.filesize(realpath($fullpath)));
        //header("Content-Encoding: none");
        if ($browser == 'IE') {
            header('Content-Disposition: attachment; filename="'.$file.'"');
            header('Cache-Control: must-revalidate, post-check=0, pre-check=0');
            header('Pragma: public');
        } else {
            header('Content-Disposition: attachment; filename="'.$file.'"');
            header('Cache-Control: no-cache, must-revalidate');
            header('Pragma: no-cache');
        }
        //@set_time_limit( 0 );
        ReadFileChunked(utf8_decode($fullpath));
        ob_end_flush();

    The source code of ReadFileChunked is:

        function ReadFileChunked($filename, $retbytes = true)
        {
            $chunksize = 1*(1024*1024);
            $remainFileSize = filesize($filename);
            if ($remainFileSize < $chunksize)
                $chunksize = $remainFileSize;
            $buffer = '';
            $cnt = 0;
            // $handle = fopen($filename, 'rb');
            //echo $filename."<br>";
            $handle = fopen($filename, 'rb');
            if ($handle === false) {
                //echo 1;
                return false;
            }
            //echo 2;
            while (!feof($handle)) {
                //echo "current remain file size $remainFileSize<br>";
                //echo "current chunksize $chunksize<br>";
                $buffer = fread($handle, $chunksize);
                echo $buffer;
                sleep(1);
                ob_flush();
                flush();
                if ($retbytes) {
                    $cnt += strlen($buffer);
                }
                $remainFileSize -= $chunksize;
                if ($remainFileSize == 0)
                    break;
                if ($remainFileSize < $chunksize) {
                    $chunksize = $remainFileSize;
                }
            }
            $status = fclose($handle);
            if ($retbytes && $status) {
                return $cnt; // return num. bytes delivered like readfile() does.
            }
            return $status;
        }

    The question is: the downloaded file will contain some HTML tags, which are the content of the HTML generated by the PHP page. The error happens when downloading a txt file with a file size smaller than 4096 bytes. Please help me to solve this problem, thank you very much! Chu
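    A likely cause (an assumption, not a confirmed diagnosis): ob_start() captures whatever HTML the page has already emitted, and ob_end_flush() then sends that buffered markup along with the file body. A minimal sketch of discarding any buffered page output before streaming, reusing $file, $fullpath and ReadFileChunked() from the question:

        <?php
        // Sketch: discard anything already buffered (stray HTML from the page)
        // before sending the download headers and the file body.
        while (ob_get_level() > 0) {
            ob_end_clean();                  // drop buffered HTML instead of flushing it
        }
        header('Content-Type: application/octet-stream');
        header('Content-Disposition: attachment; filename="'.$file.'"');
        header('Content-Length: '.filesize($fullpath));
        ReadFileChunked(utf8_decode($fullpath));
        exit;                                // make sure no trailing page HTML is emitted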


  • Which open-source license should I release my new YouTube video download project under, so no one can say the project is against YouTube?

    - by John Smiith
    I have designed a new open-source YouTube video downloader, but downloading videos from YouTube is against YouTube's terms. How can I make this type of message: "This is an open-source project that is used to download videos from YouTube, but downloading videos from YouTube is against YouTube, and use of this project is for study purposes only." Or which open-source license should I release it under so that no one can say that this is against YouTube?


  • Ubuntu 12.10 not updating. Failed to download repository information

    - by vinay
    I recently tried to update using the Update Manager. The update stopped and I received the error message "Failed to download repository information":

        W:Failed to fetch http://ppa.launchpad.net/deluge-team/ppa/ubuntu/dists/quantal/main/source/Sources  404  Not Found
        W:Failed to fetch http://ppa.launchpad.net/deluge-team/ppa/ubuntu/dists/quantal/main/binary-i386/Packages  404  Not Found
        E:Some index files failed to download. They have been ignored, or old ones used instead.

    What is the problem?


  • How to share files and folders on a forum so that anyone can download without having an Ubuntu One account?

    - by ashok.biollay
    I just began to discover Ubuntu One. I uploaded a video to my account and put it in a folder. I would like to know whether it is possible to share this folder or this file simply by giving the URL in a forum (in a message), so that anyone can download the file, with no need to have an Ubuntu One account (a bit like PhotoBucket, where you can share pictures, videos and folders with people with or without PhotoBucket, and anyone can download them). Thank you in advance for your answers.


  • How can I force apt to optimize the dependency tree for minimal download size?

    - by ObsessiveSSO?
    Some background information: as you may know, in a Debian package there may be alternative dependencies, written in the CONTROL file as Depends: apache2|something-else, for example. How does apt select which dependencies to choose, and how can I override this so I can minimize download size? I'm on a slow connection in some locations and need it to use the smallest total download size. How can I force it to do so?


  • WebClient on WP7 throws "A request with this method cannot have a request body"

    - by Peter Hansen
    If I execute this code in a console app it works fine:

        string uriString = "http://url.com/api/v1.0/d/" + Username + "/some?amount=3&offset=0";
        WebClient wc = new WebClient();
        wc.Headers["Content-Type"] = "application/json";
        wc.Headers["Authorization"] = AuthString.Replace("\\", "");
        string responseArrayKvitteringer = wc.DownloadString(uriString);
        Console.WriteLine(responseArrayKvitteringer);

    But if I move the code to my WP7 project like this:

        string uriString = "http://url.com/api/v1.0/d/" + Username + "/some?amount=3&offset=0";
        WebClient wc = new WebClient();
        wc.Headers["Content-Type"] = "application/json";
        wc.Headers["Authorization"] = AuthString.Replace("\\", "");
        wc.DownloadStringCompleted += new DownloadStringCompletedEventHandler(wc_DownloadStringCompleted);
        wc.DownloadStringAsync(new Uri(uriString));

        void wc_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
        {
            MessageBox.Show(e.Result);
        }

    I get the exception: "A request with this method cannot have a request body." Why? The solution is to remove the Content-Type header:

        string uriString = "http://url.com/api/v1.0/d/" + Username + "/some?amount=3&offset=0";
        WebClient wc = new WebClient();
        //wc.Headers["Content-Type"] = "application/json";
        wc.Headers["Authorization"] = AuthString.Replace("\\", "");
        wc.DownloadStringCompleted += new DownloadStringCompletedEventHandler(wc_DownloadStringCompleted);
        wc.DownloadStringAsync(new Uri(uriString));

        void wc_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
        {
            MessageBox.Show(e.Result);
        }


  • File upload and download in a web app using Struts2

    - by lakshmanan
    Hi, in Struts2 upload methods, can I choose where the uploaded file must be saved? I mean, all the examples on the web ask me to store it in WEB-INF, which surely is not a good idea. I want to be able to store the uploaded file in any place on my disk. How should I do it? Can I do it with the help of the ServletContextAware interceptor?


  • [C#] A problem downloading a webpage's HTML source

    - by Nam Gi VU
    I use System.Net.WebClient.DownloadString(url) to get the HTML source of http://kqxs.vn, but what I received is a caution text from the web server, which says in Vietnamese: "Xin loi. Chung toi khong the dap ung yeu cau truy cap cua ban... Vui long lien he : [email protected]. Chao ban", which translates to English as "Sorry. We cannot respond to your request... Please contact... Good bye." This is strange, because when I use a WebControl to get the HTML (by calling .Navigate(url) and then .DocumentText), I receive different HTML code - which in turn is exactly what I see when I open the website in Firefox and view the source code from Firefox. I read "DownloadData() is downloading source that is completely wrong. Source view in Firefox different than that downloaded." on Stack Overflow and found the answer to my symptom, but I don't know how to set the User-Agent. Please help.


  • Response.TransmitFile() with UNC share (ASP.NET)

    - by frankadelic
    In the comments of this page: http://msdn.microsoft.com/en-us/library/12s31dhy.aspx ...it says that TransmitFile() cannot be used with UNC shares. As far as I can tell, this is the case; I get this error in the Event Log when I attempt it:

        TransmitFile failed. File Name: \\myshare1\e$\file.zip, Impersonation Enabled: 0, Token Valid: 1, HRESULT: 0x8007052e

    The suggested alternative is to use WriteFile(); however, this is problematic because it loads the file into memory. In my application the files are 200MB, so this is not going to scale. Is there a method in ASP.NET for streaming files to users that is scalable (doesn't read the entire file into RAM or occupy ASP.NET threads) and works with UNC shares? Mapping a network drive as a virtual directory is not an option for us. I would like to avoid copying the file to the local web server as well. Thanks


  • New preg_replace for YouTube

    - by marc
    Welcome. I noticed that YouTube made some changes to their website code. Does anyone have an idea how to make this work today? Here's my script (it doesn't work anymore):

        preg_match('/"video_id": "(.*?)"/', $page, $match);
        $var_id = $match[1];
        preg_match('/"t": "(.*?)"/', $page, $match);
        $var_t = $match[1];

    Look at the source of an example YouTube video page: http://www.youtube.com/watch?v=w_J27GxPNM0 (yes, I like that song very much). Now the t variable can be found under:

        <script>
        (function() {
            var isIE = /*@cc_on!@*/false;

    I don't paste the full source because it's very long. Regards
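    For what it's worth, a more tolerant pattern (a sketch only - the key names are assumptions, since YouTube's page source keeps changing, so verify them against the current markup first):

        <?php
        // Sketch only: tolerate whitespace variations around the "key": "value"
        // pairs. The key names below are assumptions, not verified against the
        // current YouTube markup.
        if (preg_match('/"video_id"\s*:\s*"([^"]+)"/', $page, $match)) {
            $var_id = $match[1];
        }
        if (preg_match('/"t"\s*:\s*"([^"]+)"/', $page, $match)) {
            $var_t = stripslashes($match[1]);
        }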


  • Open-stackoverflow installation/download info

    - by fmsf
    Hey, you probably already know about this; I guess it's a programming question. I've been trying to find the info with Google, but the amount of results for "open stackoverflow" pointing to questions here on Stack Overflow makes the search a bit hard. So what better place to ask than here? Where can I find information/documentation on how to install/use/extend the SO Q&A system? I want to create a clone for a local community in my city.

