Search Results

Search found 12327 results on 494 pages for 'attachment download'.


  • [PHP] Using cURL to download large XML files

    - by ndg
    I'm working with PHP and need to parse a number of fairly large XML files (50-75MB uncompressed). The issue, however, is that these XML files are stored remotely and will need to be downloaded before I can parse them. Having thought about the issue, I think using a system() call in PHP in order to initiate a cURL transfer is probably the best way to avoid timeouts and PHP memory limits. Has anyone done anything like this before? Specifically, what should I pass to cURL to download the remote file and ensure it's saved to a local folder of my choice?
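
    Rather than shelling out with system(), PHP's own cURL extension can stream the response straight to disk, so the 50-75MB body never has to fit inside PHP's memory limit. A minimal sketch; the URL, target path and timeout are placeholders:

    ```php
    <?php
    // Stream a large remote file directly to disk with the cURL extension,
    // so the download never has to be held in PHP's memory.
    $url  = 'http://example.com/feed.xml';   // placeholder remote URL
    $dest = '/path/to/local/feed.xml';       // placeholder local target

    $fp = fopen($dest, 'w');
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_FILE, $fp);            // write the body straight to the file handle
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
    curl_setopt($ch, CURLOPT_TIMEOUT, 600);         // generous timeout for a 50-75MB transfer

    if (curl_exec($ch) === false) {
        echo 'Download failed: ' . curl_error($ch);
    }
    curl_close($ch);
    fclose($fp);
    ```

    If the system() route is preferred anyway, the CLI equivalent is roughly curl -sS --max-time 600 -o /path/to/local/feed.xml http://example.com/feed.xml.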

    Read the article

  • Download office document without the web server trying to render it

    - by Dan Revell
    I'm trying to download an InfoPath template that's hosted on SharePoint. If I hit the URL in Internet Explorer it asks me where to save it, and I get the correct file on my disk. If I try to do this programmatically with WebClient or HttpWebRequest, then I get HTML back instead. How can I make my request so that the web server returns the actual .xsn file and doesn't try to render it as HTML? If Internet Explorer can do this, it's logical to think that I can too. I've tried setting the Accept property of the request to application/x-microsoft-InfoPathFormTemplate, but that hasn't helped. It was a shot in the dark.
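
    One commonly suggested workaround (a hedged sketch, not verified against this particular SharePoint setup) is to send the WebDAV-style Translate: f header, which asks IIS/SharePoint for the raw source file instead of a rendered page, together with the caller's Windows credentials:

    ```csharp
    using System.IO;
    using System.Net;

    // Hedged sketch: the URL and local path are placeholders; "Translate: f" is a
    // commonly cited IIS/SharePoint hint to return the file's source bytes.
    var request = (HttpWebRequest)WebRequest.Create(
        "http://sharepoint/site/FormTemplates/template.xsn");   // placeholder URL
    request.UseDefaultCredentials = true;                        // or set request.Credentials explicitly
    request.Headers.Add("Translate", "f");                       // ask for the raw file, not a rendering

    using (var response = (HttpWebResponse)request.GetResponse())
    using (var remote = response.GetResponseStream())
    using (var local = File.Create(@"C:\temp\template.xsn"))     // placeholder local path
    {
        remote.CopyTo(local);                                    // .NET 4+; loop Read/Write on older versions
    }
    ```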

    Read the article

  • Android HTML download and parse error

    - by Brahadeesh
    I am trying to download the HTML file using the URL of the page. I am using Jsoup. This is my code: TextView ptext = (TextView) findViewById(R.id.pagetext); Document doc = null; try { doc = (Document) Jsoup.connect(mNewLinkUrl).get(); } catch (IOException e) { Log.d(TAG, e.toString()); e.printStackTrace(); } NodeList nl = doc.getElementsByTagName("meta"); Element meta = (Element) nl.item(0); String title = meta.attr("title"); ptext.append("\n" + mNewLinkUrl); When running it, I am getting an error saying attr is not defined for the type Element. What have I done wrong? Pardon me if this seems trivial.
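
    The compile error suggests Element (and NodeList) are being resolved to the org.w3c.dom types rather than Jsoup's own classes; getElementsByTagName is not part of Jsoup's API. Staying entirely within Jsoup looks roughly like this (the "title" attribute lookup mirrors the original code and may not exist on the page's meta tags; network calls should also run off the UI thread on Android):

    ```java
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    // Minimal sketch using only Jsoup types.
    Document doc = Jsoup.connect(mNewLinkUrl).get();      // mNewLinkUrl as in the original code
    Element meta = doc.select("meta").first();            // first <meta> tag; may be null
    String title = (meta != null) ? meta.attr("title") : "";
    // doc.title() returns the <title> tag's text, if that is what is actually wanted.
    ptext.append("\n" + mNewLinkUrl + "\n" + title);
    ```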

    Read the article

  • Download file in iframe in IE

    - by Estelle
    In a web page I have a link to let the user download a file, such as "showfile.aspx?filename=xxx". In showfile.aspx, I send the file using the Response.OutputStream.Write method. Now I get a problem when somebody puts this web page in an IFrame and opens it in IE: as I checked the code, showfile.aspx is requested twice when the link is clicked, and the second time the authorization and session ID cookies are missing. I tried adding the P3P header, but it isn't working. My question is: is this how IE is designed to behave with iframes? Is there any way to work around it? Thanks.
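
    IE's default privacy settings drop third-party cookies inside an iframe unless the responses carry a P3P compact policy, so the header usually has to be emitted on every response that sets or relies on those cookies, showfile.aspx included. A hedged ASP.NET sketch; the compact-policy tokens and filename are illustrative placeholders only:

    ```csharp
    // In showfile.aspx (and any page that issues the auth/session cookies):
    // send a P3P compact policy so IE keeps the cookies in a third-party iframe.
    Response.AddHeader("P3P", "CP=\"CAO PSA OUR\"");   // placeholder policy tokens

    // then stream the file as before
    Response.ContentType = "application/octet-stream";
    Response.AddHeader("Content-Disposition", "attachment; filename=report.pdf"); // placeholder name
    Response.OutputStream.Write(fileBytes, 0, fileBytes.Length);                  // fileBytes loaded elsewhere
    Response.End();
    ```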

    Read the article

  • While making an RSS reader which saves articles, how can I prevent duplicates?

    - by Koning Baard
    Let's say I have an RSS feed which lists the 3 newest questions on SO. At 1 o'clock, the feed looks like this: While making an RSS reader which saves articles, how can I prevent duplicates? Convert char array to UNICODE in MFC C++ How to deploy a Java Swing application with an embedded JavaDB database? At 2 o'clock, the feed looks like this: django url from another template than the one associated with the view-function While making an RSS reader which saves articles, how can I prevent duplicates? Convert char array to UNICODE in MFC C++ (the last two articles appear in both snapshots). I want to download the RSS feed every 5 minutes, parse it and save the articles that aren't already saved, but I do not want duplicates (items that remain in the new, updated feed, as in the example above). What can I use to determine whether an article is already saved? Thanks
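
    The usual approach is to key each item on its guid, falling back to the link, and skip anything whose key has already been stored. One concrete illustration in Python, using the feedparser library purely as an example (the feed URL and the in-memory set stand in for whatever storage the reader actually uses):

    ```python
    import feedparser

    seen = set()  # in practice, persist this (database table, file, ...) between runs

    def fetch_new_items(feed_url):
        """Return only the items not stored before, keyed on guid (falling back to link)."""
        new_items = []
        for entry in feedparser.parse(feed_url).entries:
            key = entry.get("id") or entry.get("link")  # guid if present, else the link
            if key and key not in seen:
                seen.add(key)
                new_items.append(entry)
        return new_items

    # Poll every 5 minutes (scheduling omitted):
    # for item in fetch_new_items("http://stackoverflow.com/feeds"): save(item)
    ```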

    Read the article

  • Downloading HTTP URLs asynchronously in C++

    - by Joey Adams
    What's a good way to download HTTP URLs (e.g. http://0.0.0.0/foo.htm ) in C++ on Linux? I strongly prefer something asynchronous. My program will have an event loop that repeatedly initiates multiple (very small) downloads and acts on them when they finish (either by polling or by being notified somehow). I would rather not have to spawn multiple threads/processes to accomplish this; that shouldn't be necessary. Should I look into libraries like libcurl? I suppose I could implement it manually with non-blocking TCP sockets and select() calls, but that would likely be less convenient.
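
    libcurl's multi interface is built for exactly this: many small transfers driven from one event loop in a single thread. A minimal sketch, with error handling trimmed and placeholder URLs (curl_multi_wait needs a reasonably recent libcurl; older versions use curl_multi_fdset plus select()):

    ```cpp
    #include <curl/curl.h>
    #include <cstdio>

    // Discard the body; a real program would buffer it per handle via the userdata pointer.
    static size_t on_data(char* /*ptr*/, size_t size, size_t nmemb, void* /*userdata*/) {
        return size * nmemb;
    }

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURLM* multi = curl_multi_init();

        const char* urls[] = {"http://example.com/a.htm", "http://example.com/b.htm"}; // placeholders
        for (const char* url : urls) {
            CURL* easy = curl_easy_init();
            curl_easy_setopt(easy, CURLOPT_URL, url);
            curl_easy_setopt(easy, CURLOPT_WRITEFUNCTION, on_data);
            curl_multi_add_handle(multi, easy);
        }

        int still_running = 0;
        do {
            curl_multi_perform(multi, &still_running);
            curl_multi_wait(multi, nullptr, 0, 1000, nullptr);    // wait on the sockets, 1s max

            CURLMsg* msg;
            int left;
            while ((msg = curl_multi_info_read(multi, &left))) {  // completed transfers show up here
                if (msg->msg == CURLMSG_DONE)
                    std::printf("done: %s\n", curl_easy_strerror(msg->data.result));
            }
        } while (still_running > 0);

        curl_multi_cleanup(multi);   // real code would also remove and clean up each easy handle
        curl_global_cleanup();
    }
    ```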

    Read the article

  • Avoid application becoming unresponsive on download

    - by baron
    Hi, I have an application which downloads a file from a network location and then displays that file's information. It uses WebClient.DownloadFile to achieve this. The problem is, when the user clicks a button to start the download, the application becomes unresponsive until the file has downloaded. This gives the impression the application may have hung. I would like to hear ideas that would avoid this scenario. Though my solution will need to be 'as quick and cheap as possible', I would be interested in hearing about custom loaders that have been built, or some sort of loading symbol / progress bar / hourglass thingy : ) Thanks for reading.
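
    WebClient has an asynchronous counterpart, DownloadFileAsync, which keeps the UI thread free and raises progress and completion events that map directly onto a progress bar. A minimal sketch; the URL, path, progress control and ShowFileInformation() are placeholders for the existing application:

    ```csharp
    using System;
    using System.Net;

    // Hedged sketch: DownloadFileAsync returns immediately; the events are raised
    // back on the UI thread, so updating a progress control from them is safe.
    var client = new WebClient();

    client.DownloadProgressChanged += (s, e) =>
        progressBar1.Value = e.ProgressPercentage;          // assumed progress-bar control

    client.DownloadFileCompleted += (s, e) =>
    {
        if (e.Error != null)
        {
            // surface e.Error however the application reports failures
            return;
        }
        ShowFileInformation();                              // placeholder for the existing display logic
    };

    client.DownloadFileAsync(
        new Uri(@"\\server\share\somefile.dat"),            // placeholder network location
        @"C:\temp\somefile.dat");                           // placeholder local path
    ```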

    Read the article

  • View returned file from Webservice method

    - by gafda
    I already have a method in my web service that returns a byte[] containing only the bytes of the file being downloaded. The invocation is something like: http://www.mysite.com/myWebservice.asmx with: string fileId = "123"; byte[] fileContent = myWebservice.Download(fileId); What I want to do is invoke this method (or another, to be made) from an .aspx web page and be able to open a browser window containing the real content of the file, i.e. most files are TXT and PDF. (Assuming the client has the PDF plugin that allows him/her to view PDFs in the browser.)
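
    A small viewer page can fetch the bytes from the web service and write them to the response with an inline Content-Disposition and the right content type, which is what lets the browser (or its PDF plugin) display the file instead of saving it. A hedged sketch; the page name, query-string parameter and GetMimeType helper are hypothetical:

    ```csharp
    // Code-behind of a viewer page, e.g. ViewFile.aspx?fileId=123 (names are placeholders).
    protected void Page_Load(object sender, EventArgs e)
    {
        string fileId = Request.QueryString["fileId"];
        byte[] fileContent = myWebservice.Download(fileId);        // the existing web service call

        Response.Clear();
        Response.ContentType = GetMimeType(fileId);                // e.g. "application/pdf" or "text/plain"
        Response.AddHeader("Content-Disposition", "inline; filename=" + fileId + ".pdf");
        Response.BinaryWrite(fileContent);
        Response.End();
    }

    // Hypothetical helper: resolve the MIME type however the system stores it.
    private string GetMimeType(string fileId) { return "application/pdf"; }
    ```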

    Read the article

  • Android Source code download error

    - by user351850
    Hi all, I have followed the instructions on the Android website on how to download the latest Android source code files, but it gives errors when I run this command: repo init -u git://android2.git.kernel.org/platform/manifest.git It gives the following error: Getting repo ... from git://android.git.kernel.org/tools/repo.git android.git.kernel.org[0: 199.6.1.176]: errno=Connection refused android.git.kernel.org[0: 130.239.17.12]: errno=Connection refused fatal: unable to connect a socket (Connection refused) On checking forums for a resolution, I was told that port 9418 was being blocked. I use Ubuntu 10.04 and ensured that the firewall wasn't blocking the port, and I also enabled the port and the above IP addresses. I also spoke to the networking people, who assured me that no traffic from the internet is being blocked. I would be glad if I could get directions on how to proceed next. Many thanks as you respond. Saheed.

    Read the article

  • No download dialog with FileResult

    - by majkinetor
    I am returning a File result from an action triggered by the form post event, but I can't get a download dialog. Instead, if I use: return File(Encoding.UTF8.GetBytes(reportPath), "text/plain", "Report.csv"); I get the path to the file in the target div upon Ajax execution. When I use return File(reportPath, "text/plain", "Report.csv"); I get the content of the file in the target div. Any thoughts? The action is declared as [HttpPost] public virtual ActionResult ExportFilter(Model model) { string outputFile = CreateReport(model); return File(....) }
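
    A FileResult only triggers the browser's download dialog when the browser itself navigates to the action; if the form is posted via Ajax, the bytes just land in the target div. One hedged workaround is to keep the export as a plain (non-Ajax) request, for example a GET action the browser is sent to directly:

    ```csharp
    // Controller sketch: an action the browser navigates to, not one called from Ajax.
    [HttpGet]
    public virtual ActionResult ExportFilter(/* filter values bound from the query string */)
    {
        string outputFile = CreateReport(/* rebuild the model here */);   // existing helper
        byte[] bytes = System.IO.File.ReadAllBytes(outputFile);
        return File(bytes, "text/csv", "Report.csv");   // third argument forces Content-Disposition: attachment
    }
    ```

    On the client, trigger it with a normal navigation (for example window.location = exportUrl built from the filter values) instead of $.post, so the response never passes through the Ajax success handler.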

    Read the article

  • Rails streaming file download

    - by Leonard Teo
    I'm trying to implement a file download with Rails. I want to eventually migrate this code to using S3 to serve the file. I've copied the Rails send_file code almost verbatim and I cannot seem to get it to stream a file to the user. What happens is that it sends 'a' file to the user, but the downloaded file itself simply contains the inspect output of the Proc: # What am I doing wrong here? options = {} options[:length] = File.size(file.path) options[:filename] = File.basename(file.path) send_file_headers! options render :status => 200, :text => Proc.new { |response, output| len = 4096 File.open(file.path, 'rb') do |fh| while buf = fh.read(len) output.write(buf) end end } P.S.: I've read in a number of other posts that it's not advisable to send files through the Rails stack, and that if possible you should serve them from the web server or, in the case of S3, use the hashed URL it can provide. Yes, we really do want to serve the file through the Rails stack.
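
    Since the goal is to serve the file through Rails, a hedged alternative is to let send_file itself do the work; it sets the same headers and already reads and writes the file in chunks, so the hand-rolled Proc and send_file_headers! call are not needed. A sketch (the way the file is located is a placeholder):

    ```ruby
    # Hedged sketch: send_file sets the download headers and streams the file
    # in buffer_size chunks, replacing the manual Proc approach above.
    def download
      file_path = file.path                        # placeholder: however the file is located
      send_file file_path,
                :filename    => File.basename(file_path),
                :type        => 'application/octet-stream',
                :disposition => 'attachment',
                :stream      => true,              # read and write in chunks
                :buffer_size => 4096
    end
    ```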

    Read the article

  • Automate downloads from password protected website

    - by Andrew
    I need some help with a work project I have been assigned. At the moment we manually go to the site, log on and then download 2 Excel files from a supplier's website every month. The files are then loaded into SQL. We want to automate this process. Now, the loading of the files into SQL I can do, but I am not sure how I can automate logging onto the website, entering my user details and collecting the files. I mostly deal with SQL and have very little .NET experience, so any code samples would be most appreciated.
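
    If the supplier site uses HTTP (Basic/Windows) authentication, a WebClient with a NetworkCredential is usually enough; if it is a login form, the request instead has to POST the form fields and carry the resulting cookies (HttpWebRequest plus a CookieContainer). A hedged sketch of the simpler case, with placeholder URLs, paths and credentials:

    ```csharp
    using System.Net;

    // Hedged sketch for an HTTP-authenticated site; a form-based login would need
    // an HttpWebRequest with a CookieContainer and a POST of the login fields first.
    var client = new WebClient();
    client.Credentials = new NetworkCredential("myUser", "myPassword");    // placeholder credentials

    client.DownloadFile("https://supplier.example.com/reports/file1.xls",  // placeholder URLs
                        @"C:\Imports\file1.xls");                          // placeholder local paths
    client.DownloadFile("https://supplier.example.com/reports/file2.xls",
                        @"C:\Imports\file2.xls");
    ```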

    Read the article

  • iPhone SDK: Downloading large files from a server into the app bundle.

    - by Jessica
    Hi, I am building an app that plays multiple video files, but I would like to know: how do you download a video file (100MB - 300MB) from a server into the application's bundle so it can later be referred to locally in code? The reason I want this type of setup in my app is that I don't want the app binary to be made unnecessarily large by including videos some users may not want. Also, does this violate any of Apple's terms? And would it be simple to implement a progress view with this kind of setup, and if so, how? Any help is appreciated.

    Read the article

  • Getting all PDF files from a domain (for example *.adomain.com)

    - by Zack
    I need to download all the PDF files from a certain domain. There are about 6000 PDFs on that domain and most of them don't have an HTML link (either they have removed the link or they never put one in the first place). I know there are about 6000 files because I'm googling: filetype:pdf site:*.adomain.com However, Google lists only the first 1000 results. I believe there are two ways to achieve this: a) Use Google. However, how can I get all 6000 results from Google? Maybe a scraper? (tried scroogle, no luck) b) Skip Google and search directly on the domain for PDF files. How do I do that when most of them are not linked?

    Read the article

  • Using PHP to download a file from a given URL, passing a username and password for HTTP authentication

    - by Acharya
    Hi all, I need to download a text file using PHP code. The file is behind HTTP authentication. What procedure should I use for this? Should I use fsockopen or cURL, or is there any other way to do this? I am using fsockopen, but it does not seem to work. $fp=fsockopen("www.example.com",80,$errno,$errorstr); $out = "GET abcdata/feed.txt HTTP/1.1\r\n"; $out .= "User: xyz \r\n"; $out .= "Password: xyz \r\n\r\n"; fwrite($fp, $out); while(!feof($fp)) { echo fgets($fp,1024); } fclose($fp); Here fgets is returning false. Any help!!!
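
    The hand-rolled User:/Password: lines are not how HTTP authentication works; the credentials travel in an Authorization header, which cURL builds for you via CURLOPT_USERPWD. A hedged sketch, with placeholder URL, credentials and target path:

    ```php
    <?php
    // Hedged sketch: let cURL add the Authorization header and stream the body to disk.
    $url = 'http://www.example.com/abcdata/feed.txt';    // placeholder URL
    $fp  = fopen('/tmp/feed.txt', 'w');                  // placeholder target path

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_USERPWD, 'xyz:xyz');        // user:password for HTTP auth
    curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);    // Basic, Digest, NTLM... whatever the server requests
    curl_setopt($ch, CURLOPT_FILE, $fp);                 // write the response body to the file
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);

    if (curl_exec($ch) === false) {
        echo 'Download failed: ' . curl_error($ch);
    }
    curl_close($ch);
    fclose($fp);
    ```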

    Read the article

  • How do I get a directory listing for a folder on the web?

    - by JimDel
    How do I get a directory listing for a folder on the web? I'm looking to download a group of small files from a folder on the web. I can do it easily with a single file, but I'm not sure how to do it for multiple files. If there were something similar to the code below, but for a folder on the web, I think I could do it. private void button1_Click(object sender, EventArgs e) { DirectoryInfo di = new DirectoryInfo("c:/myFolder"); FileInfo[] rgFiles = di.GetFiles("*.*"); foreach (FileInfo fi in rgFiles) { //Do Something with each of them } } Thanks
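
    HTTP has no GetFiles() equivalent; enumeration only works if the server exposes something listable, typically an auto-generated HTML index page (or an FTP endpoint, where FtpWebRequest's ListDirectory does this natively). A hedged sketch that scrapes link targets out of a plain Apache/IIS-style directory index; the URL is a placeholder and the regex assumes a simple listing page:

    ```csharp
    using System;
    using System.Net;
    using System.Text.RegularExpressions;

    // Hedged sketch: only works when the server has directory browsing enabled
    // and returns a simple HTML index of the folder.
    string baseUrl = "http://example.com/myFolder/";          // placeholder folder URL

    using (var client = new WebClient())
    {
        string html = client.DownloadString(baseUrl);

        // Pull relative href targets out of the index page; the leading character class
        // skips query-string sort links and absolute paths.
        foreach (Match m in Regex.Matches(html, @"href\s*=\s*""([^""?/][^""]*)""",
                                          RegexOptions.IgnoreCase))
        {
            string fileName = m.Groups[1].Value;
            client.DownloadFile(baseUrl + fileName, fileName); // saved next to the executable
        }
    }
    ```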

    Read the article

  • Groovy: connect to proxy, then download files

    - by senzacionale
    I want to grab the Grapes, but I am behind a proxy so I cannot download anything. How can I connect to the proxy before downloading? import groovy.text.SimpleTemplateEngine import java.security.MessageDigest import org.apache.commons.cli.OptionBuilder import org.apache.commons.cli.Options import org.apache.commons.cli.PosixParser import org.apache.commons.io.FileUtils import org.apache.ivy.core.settings.IvySettings import org.apache.ivy.plugins.parser.m2.PomModuleDescriptorParser import org.apache.tools.ant.Project import org.apache.tools.ant.ProjectHelper import org.apache.tools.ant.types.Path import org.apache.commons.cli.HelpFormatter //First grab the grapes we need for the script and create a few beans to hold some values @Grab(group = 'org.apache.ant', module = 'ant', version = '1.7.1') @Grab(group = 'commons-io', module = 'commons-io', version = '1.4') @Grab(group = 'commons-cli', module = 'commons-cli', version = '1.2') @Grab(group = 'org.apache.ivy', module = 'ivy', version = '2.1.0')
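
    @Grab resolves through Ivy, which honours the standard JVM proxy system properties, so the usual fix is to set them when launching groovy rather than inside the script (by the time script code runs, the grapes have already been resolved). A hedged sketch; the proxy host and port are placeholders, and an authenticating proxy may need additional properties:

    ```sh
    # Placeholders: replace proxy.mycompany.com / 8080 with your proxy details.
    export JAVA_OPTS="-Dhttp.proxyHost=proxy.mycompany.com -Dhttp.proxyPort=8080 \
                      -Dhttps.proxyHost=proxy.mycompany.com -Dhttps.proxyPort=8080"
    groovy myScript.groovy

    # or equivalently, pass the -D flags straight to the groovy command:
    groovy -Dhttp.proxyHost=proxy.mycompany.com -Dhttp.proxyPort=8080 myScript.groovy
    ```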

    Read the article

  • JavaScript: How to download JS asynchronously?

    - by Teddyk
    On my web site, I'm trying to achieve the fastest page load possible. I've noticed that my JavaScript files do not appear to be loading asynchronously (see the screenshot in the original post). How my web site works is that it needs to load two external JavaScript files: the Google Maps v3 JavaScript and the jQuery JavaScript. Once it loads these external JavaScript files, it then, and only then, can dynamically render the page. The reason my page can't load until both Google Maps and jQuery are loaded is that my page, based on the user's geolocation (using Gmaps), will display content based on where they are located (e.g. New York, San Francisco, etc.). Meaning, two people in different cities viewing my site will see different front pages. Question: How can I get my JavaScript files to download asynchronously so that my overall page load time is quicker?
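
    Scripts referenced with plain <script src> tags block and run in document order; injecting them dynamically lets the browser fetch them in parallel, and a small counter can hold back the init code until both have arrived. A hedged sketch, with placeholder URLs for the Maps and jQuery files (the Maps v3 loader also supports an explicit callback= query parameter for asynchronous loading):

    ```javascript
    // Hedged sketch: fetch both libraries in parallel, then initialise the page
    // only after the second onload has fired.
    function loadScript(src, onLoaded) {
      var s = document.createElement('script');
      s.src = src;
      s.async = true;             // hint that execution order does not matter
      s.onload = onLoaded;        // (very old IE needs onreadystatechange instead)
      document.getElementsByTagName('head')[0].appendChild(s);
    }

    var pending = 2;
    function scriptReady() {
      if (--pending === 0) {
        initPage();               // placeholder: the geolocation + rendering logic
      }
    }

    loadScript('http://maps.google.com/maps/api/js?sensor=false', scriptReady);                  // placeholder URL
    loadScript('http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js', scriptReady);  // placeholder URL
    ```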

    Read the article

  • ClickOnce file will not associate on Open from a web browser download

    - by mstrickland
    I have a ClickOnce program that associates with a given extension, and that works fine if the file is located on the file system. My problem comes in when this file is downloaded from a website. I have a web handler that prompts the user to click to download the file. Upon clicking the link, the user is presented with an Open or Save dialog. If the user chooses Open, the program will not launch. If the user saves the file to their hard drive and then clicks the file, the association works. Any advice on getting the association to work on the prompt when the user clicks Open, or is a Save required? Edit: Tested this on both IE8 and Chrome with the same result.

    Read the article

  • Create a directory, 'cd' to it and download a file: a pipeline in Perl

    - by neversaint
    I have a file that looks like this: ftp://url1/files1.tar.gz dir1 ftp://url2/files2.txt dir2 .... many more... What I want to do are these steps: Create a directory based on column 2 Unix 'cd' to that directory Download the file with 'wget' based on column 1 But why doesn't this approach of mine work? while(<>) { chomp; my ($url,$dir) = split(/\t/,$_); system("mkdir $dir"); system("cd $dir"); # Fail here system("wget $url"); # here too } What's the right way to do it?
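
    Each system() call runs in its own child shell, so the cd only changes that child's working directory and is gone by the time the next system() runs. Either chdir within Perl itself, or skip the cd entirely and tell wget where to write with -P. A hedged sketch of the second option:

    ```perl
    use strict;
    use warnings;

    while (<>) {
        chomp;
        my ($url, $dir) = split /\t/, $_;

        unless (-d $dir) {
            mkdir $dir or warn "mkdir $dir failed: $!";
        }

        # -P / --directory-prefix makes wget save into $dir, so no cd is needed.
        system('wget', '-P', $dir, $url) == 0
            or warn "wget failed for $url (exit status $?)";
    }
    ```

    The alternative is chdir $dir inside the loop (and chdir back afterwards), which changes the Perl process's own working directory instead of a throwaway shell's.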

    Read the article

  • Download large files using Java

    - by angelina
    Dear all, I am building an application in which I want to download large files to a handset (mobile), but if the size of the file is large I get a SocketException: broken pipe. inputStream = new FileInputStream(path); byte[] buffer = new byte[1024]; int bytesRead = 0; do { bytesRead = inputStream.read(buffer, offset, buffer.length); resp.getOutputStream().write(buffer, 0, bytesRead); } while (bytesRead == buffer.length); resp.getOutputStream().flush(); }
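
    A broken pipe means the client closed the connection mid-transfer, but the copy loop also has a bug of its own: the termination test bytesRead == buffer.length drops the final partial chunk, and read() may legitimately return fewer bytes than requested at any point. The conventional loop, sketched for the server side only ('path' and 'resp' as in the original code):

    ```java
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.io.OutputStream;

    // Conventional copy loop: read until end-of-stream (-1), writing exactly
    // as many bytes as were read each time.
    InputStream in = new FileInputStream(path);
    OutputStream out = resp.getOutputStream();
    try {
        byte[] buffer = new byte[8192];
        int bytesRead;
        while ((bytesRead = in.read(buffer)) != -1) {
            out.write(buffer, 0, bytesRead);
        }
        out.flush();
    } finally {
        in.close();
    }
    // Setting Content-Length up front (the file size) also helps mobile clients
    // that drop the socket on very large responses of unknown length.
    ```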

    Read the article

  • iPhone SDK: Downloading large files from a server into the app's documents.

    - by Jessica
    Hi, I am building an app that plays multiple video files, but I would like to know: how do you download a video file (100MB - 300MB) from a server into the application's documents directory so it can later be referred to locally in code? The reason I want this type of setup in my app is that I don't want the app binary to be made unnecessarily large by including videos some users may not want. Also, does this violate any of Apple's terms? And would it be simple to implement a progress view with this kind of setup, and if so, how? Any help is appreciated.
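
    A hedged, modern sketch: URLSession's download task (the successor to the NSURLConnection delegate approach of the question's era) downloads to a temporary file, reports progress, and the result can then be moved into the Documents directory. The video URL and file name are placeholders:

    ```swift
    import Foundation

    // Hedged sketch using the modern URLSession API; placeholder URL and file name.
    let videoURL = URL(string: "https://example.com/videos/movie1.mp4")!

    let task = URLSession.shared.downloadTask(with: videoURL) { tempURL, response, error in
        guard let tempURL = tempURL, error == nil else { return }

        let documents = FileManager.default.urls(for: .documentDirectory,
                                                 in: .userDomainMask)[0]
        let destination = documents.appendingPathComponent("movie1.mp4")

        // Move the finished download out of its temporary location before it is cleaned up.
        try? FileManager.default.moveItem(at: tempURL, to: destination)
    }
    task.resume()

    // For a progress view, observe task.progress, or adopt URLSessionDownloadDelegate
    // and update the bar from didWriteData / totalBytesExpectedToWrite.
    ```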

    Read the article

  • how do I download a large file (via HTTP) in .NET

    - by nickcartwright
    I need to download a LARGE file (2GB) over HTTP in a C# console app. Problem is, after about 1.2GB, the app runs out of memory. Here's the code I'm using: WebClient request = new WebClient(); request.Credentials = new NetworkCredential(username, password); byte[] fileData = request.DownloadData(baseURL + fName); As you can see... I'm reading the file directly into memory. I'm pretty sure I could solve this if I were to read the data back from HTTP in chunks and write it to a file on disk. Does anyone know how I could do this?
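
    The simplest change is WebClient.DownloadFile (as opposed to DownloadData), which writes to disk as it reads; the explicit chunked version with HttpWebRequest looks like the hedged sketch below and gives a natural place to report progress. It reuses the baseURL, fName, username and password names from the code above; the local path is a placeholder:

    ```csharp
    using System.IO;
    using System.Net;

    // Hedged sketch: stream the response to disk in 64 KB chunks instead of
    // buffering the whole 2 GB body in memory.
    var req = (HttpWebRequest)WebRequest.Create(baseURL + fName);
    req.Credentials = new NetworkCredential(username, password);

    using (var response = req.GetResponse())
    using (var remote = response.GetResponseStream())
    using (var local = File.Create(@"C:\downloads\" + fName))      // placeholder local path
    {
        var buffer = new byte[64 * 1024];
        int read;
        while ((read = remote.Read(buffer, 0, buffer.Length)) > 0)
        {
            local.Write(buffer, 0, read);
            // progress could be reported here, using response.ContentLength as the total
        }
    }
    ```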

    Read the article

  • cURL: download image only if not older than 2 days

    - by mark
    I want to download an image from a remote server, but only if it is not older than 2 days. I have the construction below now. Is this right? I want to know the last-modified date first, before downloading. $ch = curl_init($file_source); // the file we are downloading curl_setopt($ch, CURLOPT_TIMEOUT, 20); curl_setopt($ch, CURLOPT_FILE, $wh); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); curl_setopt($ch, CURLOPT_FILETIME, true); curl_exec($ch); $headers = curl_getinfo($ch); $last_modified = $headers['filetime']; if ($last_modified != -1) { // unknown echo date("Y-m-d", $last_modified); //etc } curl_close($ch); fclose($wh);
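
    To check the timestamp before downloading, a separate HEAD request (CURLOPT_NOBODY plus CURLOPT_FILETIME) works; cURL can also do it in one round trip with a time condition, which becomes an If-Modified-Since header so the server only sends the body when the image changed within the last two days. A hedged sketch of the one-request version, reusing $file_source and $wh from the code above:

    ```php
    <?php
    // Hedged sketch: the server sends the image only if it was modified in the
    // last 2 days; on 304 Not Modified nothing is written to $wh.
    $ch = curl_init($file_source);
    curl_setopt($ch, CURLOPT_TIMEOUT, 20);
    curl_setopt($ch, CURLOPT_FILE, $wh);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_TIMECONDITION, CURL_TIMECOND_IFMODSINCE);
    curl_setopt($ch, CURLOPT_TIMEVALUE, strtotime('-2 days'));   // cutoff: two days ago
    curl_exec($ch);

    if (curl_getinfo($ch, CURLINFO_HTTP_CODE) == 304) {
        echo 'Remote image is older than 2 days; nothing downloaded.';
        // the (empty) local file could be deleted here
    }
    curl_close($ch);
    fclose($wh);
    ```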

    Read the article

  • Disable download of my paid app on Android

    - by Boy
    I have a paid app in the store which removes the ads in another app when it is installed on that device. Now I want to remove this 'remove ads' app, as I want to have an in-app payment for this instead (or maybe I'll just keep the ad-supported version only). But the problem is, if I unpublish the app, people who bought it will not be able to download it again when they get a new phone or reset their phone. How do I keep the app in the Play Store, but prevent people from buying it? Is this possible? My backup plan is to make the app cost 10,000 euros and put a message in the description that this app should not be bought anymore. But I don't like that...

    Read the article
