Search Results

Search found 82718 results on 3309 pages for 'large file download'.

Page 94/3309

  • Read a text file in Google GWT?

    - by ipkiss
    Hello all, I am writing a web page using GWT. I need to read a text file and display its contents on the page, but I have no idea how to do that with GWT. It would also be very nice if there were a way in GWT to read a .properties file. Does anyone have an idea, please? Thanks.
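
    One common approach, sketched below purely as an illustration (the file name data.txt, the class name, and the layout are assumptions, not part of the original question), is to host the text file next to the compiled module and fetch it over HTTP with GWT's RequestBuilder, since client-side GWT code runs in the browser and cannot read files from disk directly:

        // Minimal sketch: fetch a text file served from the module's base URL
        // and show its contents on the page. All names here are placeholders.
        import com.google.gwt.core.client.EntryPoint;
        import com.google.gwt.core.client.GWT;
        import com.google.gwt.http.client.*;
        import com.google.gwt.user.client.ui.Label;
        import com.google.gwt.user.client.ui.RootPanel;

        public class ReadTextFile implements EntryPoint {
            public void onModuleLoad() {
                String url = GWT.getModuleBaseURL() + "data.txt";
                RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, url);
                try {
                    builder.sendRequest(null, new RequestCallback() {
                        public void onResponseReceived(Request request, Response response) {
                            // Display the raw file contents in the page.
                            RootPanel.get().add(new Label(response.getText()));
                        }
                        public void onError(Request request, Throwable exception) {
                            RootPanel.get().add(new Label("Failed to load file: " + exception));
                        }
                    });
                } catch (RequestException e) {
                    RootPanel.get().add(new Label("Request could not be sent: " + e));
                }
            }
        }

    A .properties file hosted the same way could be fetched with the same code and then split on newlines and '=' characters on the client side.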

    Read the article

  • Free-as-in-beer binary file format inspector

    - by fbrereto
    I am looking for a utility that gives me the ability to specify a binary file format and then interpret a file of bytes according to that format. (Something along the lines of the 010 Editor, but infinitely more cost-effective). Something that runs on Mac OS X would be preferred, but I'm interested to see what all is out there in general (while more of a hassle I'd be willing to run a tool on Windows if it were superior.) What's your preference?

    Read the article

  • How to use a selector when clicking an item in a list

    - by notenking
    I have run into a problem while developing an Android application. Each item in a list has two images that can be downloaded from the server: one image is shown normally, and the other should be shown when the user selects or clicks the item. If I only start downloading the second image when the user clicks, the click is over before the download finishes, so the user never has time to see it. I therefore want to download both images up front, and when the item is clicked again, the app should load the second image from local storage instead of the network. But I do not know how to load that locally stored image (see the sketch below). Any suggestion would be appreciated. Thanks.
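
    One way to do this, shown here only as a rough sketch (the class and file names are invented, and in a real app the download must run off the UI thread, e.g. in an AsyncTask or background thread), is to save both images into the app's cache directory when the row is first populated and then decode the cached file with BitmapFactory on click:

        // Rough sketch: download an image into the cache directory up front,
        // then on click load the cached copy from disk instead of the network.
        import android.content.Context;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;
        import android.widget.ImageView;
        import java.io.*;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class ImageCache {

            // Download a URL into the cache directory and return the local file.
            public static File fetchToCache(Context ctx, String urlString, String fileName)
                    throws IOException {
                File target = new File(ctx.getCacheDir(), fileName);
                HttpURLConnection conn =
                        (HttpURLConnection) new URL(urlString).openConnection();
                try (InputStream in = conn.getInputStream();
                     OutputStream out = new FileOutputStream(target)) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        out.write(buf, 0, n);
                    }
                } finally {
                    conn.disconnect();
                }
                return target;
            }

            // Called from the item's click listener: swap in the cached image.
            public static void showCached(ImageView view, File cachedFile) {
                Bitmap bmp = BitmapFactory.decodeFile(cachedFile.getAbsolutePath());
                if (bmp != null) {
                    view.setImageBitmap(bmp);
                }
            }
        }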

    Read the article

  • IE browser doesn't close and file download popup needs focus

    - by jkranthi
    Hi all, I am trying to click on a link once it becomes active, which then produces a File Download popup. I have two problems here.

    1) I start the code and leave it. After a long-running process it waits for the link to become active; once the link is active it clicks on it, a download popup opens (if everything goes well), and then it hangs there (flashing yellow in the task bar, which means I have to click on the Internet Explorer window before it will process whatever comes next). Every time the download popup appears I have to click on IE. Is there a way to handle this, or am I doing something wrong?

    2) The next problem is that even after I click on IE, the browser does not close even though I call ie.close. My code is below:

        # if the link is active
        ie.link(:text, a).click_no_wait
        prompt_message = "Do you want to open or save this file?"
        window_title = "File Download"
        save_dialog = WIN32OLE.new("AutoItX3.Control")
        save_dialog.WinGetText(window_title)
        save_dialog_obtained = save_dialog.WinWaitActive(window_title)
        save_dialog.WinKill(window_title)
        # end

        # some more code - normal puts statements

        ie.close

    ie is hanging up for some strange reason..?

    Read the article

  • Import and Export for CSV are both broken in Mathematica

    - by dreeves
    Consider the following 2 by 2 array:

        x = {{"a b c", "1,2,3"}, {"i \"comma-heart\" you", "i \",heart\" u, too"}}

    If we Export that to CSV and then Import it again we don't get the same thing back:

        Import[Export["tmp.csv", x]]

    Looking at tmp.csv, it's clear that the Export didn't work, since the quotes are not escaped properly. According to the RFC, which I presume is summarized correctly in Wikipedia's entry on CSV, the right way to export the above array is as follows:

        a b c, "1,2,3"
        "i ""heart"" you", "i "",heart"" u, too"

    Importing the above does not yield the original array either, so Import is broken as well. I've reported these bugs to [email protected], but I'm wondering if others have workarounds in the meantime. One workaround is to just use TSV instead of CSV. I tested the above with TSV and it seems to work (even with tabs embedded in the entries of the array).

    Read the article

  • Most efficient way to search the last x lines of a file in python

    - by Harley
    I have a file and I don't know how big it's going to be (it could be quite large, but the size will vary greatly). I want to search the last 10 lines or so to see if any of them match a string. I need to do this as quickly and efficiently as possible, and was wondering if there's anything better than:

        s = "foo"
        last_bit = fileObj.readlines()[-10:]
        for line in last_bit:
            if line == s:
                print "FOUND"

    Read the article

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap, and that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back...

    Some more background. This is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we de-serialize them.

    The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new ones.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large object heap fragmentation when we de-serialize the object, and I expect there are other problems with large object heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states, hence the saved state is stored in a database BLOB rather than a file. As the speed of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked:
    - How to Stream data from/to SQL Server BLOB fields?
    - Is there a SqlFileStream like class that works with Sql Server 2005?

    Read the article

  • Clipboard bug in Wordpad in Windows 7 (accidentally pasting large file into application)

    - by frenchglen
    In Win7 I use Wordpad, and I really like it. For my needs it's lean and fast, yet it has the formatting features I'm after when working on my TXT/RTF files on a daily basis. I don't intend to change text editors. There's a really bad bug which has ALWAYS plagued me: if you have a large file on the clipboard, like a 238 MB FLAC file, and you accidentally paste it into Wordpad for whatever reason, it hangs the application for a VERY long time (like 2 hours; it depends on how big the file is, because it tries to 'handle' it). You either have to close the application and lose any unsaved changes, or go do something else until the item has finished pasting into Wordpad (it eventually drops the file's icon into the document, just as it appears in Windows Explorer). It's a Windows bug, a Wordpad bug. Is there some solution for this? Or is the problem fixed in Windows 8 (if anyone can tell me)? I'm not going to try out Win8 myself merely to answer this question - that's what I'm asking it on Super User for! I'm really hoping it's one of those little-yet-big things that they've fixed in Win8 (like removing the 255-character file path limit in Explorer, which is awesome). Thank you for your help, if you have Win8 handy and can test this. :)

    Read the article

  • Download all images or create a zip file of all the uploads contained in a gallery

    - by Arpit Vaishnav
    I am working on a photo sharing site, and I want to add functionality to download all the images available in a gallery. The gallery is set up as an association, so I can get all the images with @gallery.uploads. Now I want to download all of these files, or, if possible, create a zip file so that the user can download one file containing all the uploads in the gallery. Thanks.

    Read the article

  • Slow download speeds on MacBook Pro

    - by Austin
    Just as the title says, I am getting very low download speeds on my MacBook Pro. I did a speed test at speedtest.net and am getting 7 Mbps down, 0.5 Mbps up. However, I can only seem to get 270 KB/s max (averaging 100 KB/s), whether on my school's network or on my home network, wired or wireless. I am on Mac OS X 10.5.8, with Google Chrome. My Ethernet settings (under System Preferences - Network - Ethernet Connection - Advanced - Ethernet) are set to "Configure Automatically", "Speed: 100TX", "Duplex: full-duplex, flow-control", and "MTU: Standard (1500)". As far as I can tell, there are no throttles or anything between here and the ISP, so... Any ideas on why I'm getting such low download speeds?

    Read the article

  • Reading a file N lines at a time in ruby

    - by Sam
    I have a large file (hundreds of megs) that consists of filenames, one per line. I need to loop through the list of filenames, and fork off a process for each filename. I want a maximum of 8 forked processes at a time and I don't want to read the whole filename list into RAM at once. I'm not even sure where to begin, can anyone help me out?

    Read the article

  • Removing file locks in Windows and Java

    - by Jack
    I have a Java program that opens a file using the RandomAccessFile class. I'd like to be able to rename that file while it is opened by Java. In Unix, this isn't a problem. Does anyone know how I can do this in Windows? Should I set Java to open it a certain way? Thanks in advance.
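
    Plain java.io does not expose Windows share-mode flags, so one common workaround, sketched below with placeholder file names rather than anything from the original program, is to close the RandomAccessFile around the rename and reopen it afterwards at the saved position (on Unix the rename succeeds even while the file is open, which is why the difference only shows up on Windows):

        // Minimal sketch: release the Windows lock by closing the handle,
        // rename the file, then reopen it and seek back to where we were.
        import java.io.File;
        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class RenameWhileOpen {
            public static void main(String[] args) throws IOException {
                File original = new File("data.bin");     // hypothetical name
                File renamed  = new File("data-old.bin"); // hypothetical name

                RandomAccessFile raf = new RandomAccessFile(original, "rw");
                // ... work with the file ...
                long position = raf.getFilePointer(); // remember the offset
                raf.close();                          // releases the lock

                if (!original.renameTo(renamed)) {
                    throw new IOException("Rename failed: " + original);
                }

                // Reopen under the new name and continue where we left off.
                raf = new RandomAccessFile(renamed, "rw");
                raf.seek(position);
                // ... continue working ...
                raf.close();
            }
        }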

    Read the article

  • Upload a file to a web server in cocoa

    - by Cam
    Hello, I was wondering about the best way to upload a file to a web server in Cocoa. I can't seem to get my curl code to work, even though it works when run from Terminal. curl code:

        system(@"curl -T /file.txt http://webserevertouploadto.com")

    Thanks for any help.

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like:

        public double[] runMyAlgorithm(double[] inputData);

    or alternatively a reference could be passed to the array that will store the output data:

        public void runMyAlgorithm(double[] inputData, double[] outputData);

    Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:

    - Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.
    - Pre-allocate temporary arrays and store them as final fields in the algorithm object. The big downside is that only one thread could run the algorithm at any one time.
    - Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously. (A sketch of this option follows below.)
    - Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it makes the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space...
    - Allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array, so that different threads use a different area of the array independently. This will obviously require some code to manage the offsets and the allocation of the array ranges.

    Any thoughts on which approach would be best (and why)?
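
    To make the ThreadLocal option concrete, here is a small sketch (the buffer size, the stand-in computation, and the use of Java 8's ThreadLocal.withInitial are illustrative assumptions, not part of the original question): each thread lazily gets its own fixed scratch array, so simultaneous runs of the algorithm never share or reallocate temporary storage.

        // Sketch of option 3: per-thread pre-allocated scratch space.
        public class MyAlgorithm {

            private static final int TEMP_SIZE = 100_000; // assumed scratch size

            // One scratch array per thread, created on first use and then reused.
            private static final ThreadLocal<double[]> TEMP =
                    ThreadLocal.withInitial(() -> new double[TEMP_SIZE]);

            public double[] runMyAlgorithm(double[] inputData) {
                double[] temp = TEMP.get();   // no allocation after the first call
                double[] output = new double[inputData.length];
                for (int i = 0; i < inputData.length; i++) {
                    // Stand-in computation; a real algorithm would use temp
                    // as working storage for its intermediate results.
                    temp[i % TEMP_SIZE] = inputData[i] * 2.0;
                    output[i] = temp[i % TEMP_SIZE];
                }
                return output;
            }
        }

    The trade-off is that the scratch arrays live as long as each worker thread does, so a bounded thread pool keeps the total temporary footprint predictable.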

    Read the article

  • Can't download YouTube video

    - by dsaccount1
    I'm having trouble retrieving the YouTube video automatically; here's the code. The problem is the last part: download = urllib.request.urlopen(download_url).read()

        # Youtube video download script
        # 10n1z3d[at]w[dot]cn

        import urllib.request
        import sys

        print("\n--------------------------")
        print(" Youtube Video Downloader")
        print("--------------------------\n")

        try:
            video_url = sys.argv[1]
        except:
            video_url = input('[+] Enter video URL: ')

        print("[+] Connecting...")

        try:
            if video_url.endswith('&feature=related'):
                video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=related')[0]
            elif video_url.endswith('&feature=dir'):
                video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=dir')[0]
            elif video_url.endswith('&feature=fvst'):
                video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=fvst')[0]
            elif video_url.endswith('&feature=channel_page'):
                video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=channel_page')[0]
            else:
                video_id = video_url.split('www.youtube.com/watch?v=')[1]
        except:
            print("[-] Invalid URL.")
            exit(1)

        print("[+] Parsing token...")

        try:
            url = str(urllib.request.urlopen('http://www.youtube.com/get_video_info?&video_id=' + video_id).read())
            token_value = url.split('video_id=' + video_id + '&token=')[1].split('&thumbnail_url')[0]
            download_url = "http://www.youtube.com/get_video?video_id=" + video_id + "&t=" + token_value + "&fmt=18"
        except:
            url = str(urllib.request.urlopen('www.youtube.com/watch?v=' + video_id))
            exit(1)

        v_url = str(urllib.request.urlopen('http://' + video_url).read())
        video_title = v_url.split('"rv.2.title": "')[1].split('", "rv.4.rating"')[0]
        if '&quot;' in video_title:
            video_title = video_title.replace('&quot;', '"')
        elif '&amp;' in video_title:
            video_title = video_title.replace('&amp;', '&')

        print("[+] Downloading " + '"' + video_title + '"...')

        try:
            print(download_url)
            file = open(video_title + '.mp4', 'wb')
            download = urllib.request.urlopen(download_url).read()
            print(download)
            for line in download:
                file.write(line)
            file.close()
        except:
            print("[-] Error downloading. Quitting.")
            exit(1)

        print("\n[+] Done. The video is saved to the current working directory (cwd).\n")

    Read the article
