Search Results

Search found 82718 results on 3309 pages for 'large file download'.


  • C#: Efficiently search a large string for occurrences of other strings

    - by Jon
    Hi, I'm using C# to continuously search for multiple string "keywords" within large strings, which are 4 KB or larger. This code is constantly looping, and sleeps aren't cutting down CPU usage enough while maintaining a reasonable speed. The bottleneck is the keyword-matching method. I've found a few possibilities, and all of them give similar efficiency. 1) http://tomasp.net/articles/ahocorasick.aspx - I do not have enough keywords for this to be the most efficient algorithm. 2) Regex, using an instance-level, compiled regex - provides more functionality than I require, and not quite enough efficiency. 3) String.IndexOf - I would need to do a "smart" version of this for it to provide enough efficiency; looping through each keyword and calling IndexOf doesn't cut it. Does anyone know of any algorithms or methods that I can use to attain my goal?
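    One way to read option 3's "smart" IndexOf is sketched below: group the keywords by their first character so that each position in the text is compared only against keywords that could start there. This is an illustrative sketch, not the asker's code; the keyword set and the tuple return type are assumptions.

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    static class KeywordScanner
    {
        // Build the lookup once and reuse it across the polling loop.
        public static ILookup<char, string> Index(IEnumerable<string> keywords) =>
            keywords.ToLookup(k => k[0]);

        // Single pass over the text; only keywords whose first character matches
        // the current position are compared in full (ordinal, no allocations).
        public static IEnumerable<(int Position, string Keyword)> Scan(
            string text, ILookup<char, string> index)
        {
            for (int i = 0; i < text.Length; i++)
            {
                foreach (string keyword in index[text[i]])
                {
                    if (i + keyword.Length <= text.Length &&
                        string.CompareOrdinal(text, i, keyword, 0, keyword.Length) == 0)
                    {
                        yield return (i, keyword);
                    }
                }
            }
        }
    }
    ```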

    Read the article

  • HTTPS with Self-Signed Certificate Issues... Solution or better way?

    - by stormin986
    All I need to do is download some basic text-based and image files from a web server that has a self-signed SSL certificate. I have been trying to figure out how to use HttpClient to do this, but getting the SSL to work is a nightmare that seems to be way too much trouble for such a simple task. Is there a better way to perform these file downloads? Perhaps through a WebView or Browser feature? Reinventing the wheel of making a simple HTTPS GET request is a major pain, and is significantly holding up my development schedule. ** Updated title to more accurately reflect question / solution **
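    One commonly suggested alternative to fighting HttpClient is to trust the one known certificate explicitly using the standard javax.net.ssl classes. The sketch below is an assumption about how that could look (the certificate path and URL are placeholders), not a drop-in solution for this app:

    ```java
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.net.URL;
    import java.security.KeyStore;
    import java.security.cert.CertificateFactory;
    import javax.net.ssl.HttpsURLConnection;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManagerFactory;

    public class SelfSignedDownload {
        // Open an HTTPS stream that trusts exactly one known self-signed certificate.
        public static InputStream open(String url, String certPath) throws Exception {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
            ks.load(null, null);                               // empty in-memory keystore
            try (FileInputStream in = new FileInputStream(certPath)) {
                ks.setCertificateEntry("server", cf.generateCertificate(in));
            }
            TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(ks);
            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, tmf.getTrustManagers(), null);

            HttpsURLConnection conn = (HttpsURLConnection) new URL(url).openConnection();
            conn.setSSLSocketFactory(ctx.getSocketFactory());
            return conn.getInputStream();
        }
    }
    ```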

    Read the article

  • Faster forking of large processes on Linux?

    - by timday
    What's the fastest, best way on modern Linux of achieving the same effect as a fork-execve combo from a large process? My problem is that the forking process is ~500 MB big, and a simple benchmarking test achieves only about 50 forks/s from that process (cf. ~1600 forks/s from a minimally sized process), which is too slow for the intended application. Some googling turns up vfork as having been invented as the solution to this problem... but also warnings not to use it. Modern Linux seems to have acquired the related clone and posix_spawn calls; are these likely to help? What's the modern replacement for vfork? I'm using 64-bit Debian Lenny on an i7 (the project could move to Squeeze if posix_spawn would help).
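    As a point of comparison, a minimal posix_spawn sketch is shown below (the spawned command is a placeholder). Whether it avoids fork's per-call cost depends on the glibc version, which may implement posix_spawn on top of vfork/clone rather than a plain fork, so it is worth benchmarking rather than taking on faith:

    ```c
    #include <spawn.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    extern char **environ;

    /* Spawn a placeholder command and wait for it to finish. */
    int spawn_child(void)
    {
        pid_t pid;
        char *argv[] = { "/bin/true", NULL };   /* placeholder command */

        int err = posix_spawn(&pid, argv[0], NULL, NULL, argv, environ);
        if (err != 0) {
            fprintf(stderr, "posix_spawn failed: %d\n", err);
            return -1;
        }
        return waitpid(pid, NULL, 0) == pid ? 0 : -1;
    }
    ```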

    Read the article

  • Speeding up inner joins between a large table and a small table

    - by Zaid
    This may be a silly question, but it may shed some light on how joins work internally. Let's say I have a large table L and a small table S (100K rows vs. 100 rows). Would there be any difference in terms of speed between the following two options? Option 1: SELECT * FROM L INNER JOIN S ON L.id = S.id; Option 2: SELECT * FROM S INNER JOIN L ON L.id = S.id; Notice that the only difference is the order in which the tables are joined. I realize performance may vary between different SQL implementations. If so, how would MySQL compare to Access?
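    A quick way to answer this for a specific engine is to compare the query plans rather than guess. On MySQL, something like the sketch below (table names as in the question) typically shows identical plans, because the optimizer is free to reorder inner joins:

    ```sql
    -- Compare the optimizer's plan for both join orders (MySQL syntax).
    EXPLAIN SELECT * FROM L INNER JOIN S ON L.id = S.id;
    EXPLAIN SELECT * FROM S INNER JOIN L ON L.id = S.id;
    ```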

    Read the article

  • Write large PDFs with Java sequentially

    - by Benjamin Muschko
    I am looking for a Java library that lets you write large PDFs sequentially with a minimum amount of memory. Most of the libraries I have looked at have to build up the document in memory before you can actually write it. The problem I have to deal with is OutOfMemoryErrors. It would be great if I could flush the writer programmatically whenever needed, e.g. for each page. Does anyone have any recommendations? I need something with a license along the lines of the LGPL (so not the GPL or the Affero GPL that iText uses).

    Read the article

  • Problem with large Canvas in Silverlight

    - by Fury
    Hi, I am developing (using Silverlight 3) an application that creates a kind of timeline and places objects on it. For this purpose I need a really large Canvas (up to 2,000,000 pixels wide) with long lines on it, but whenever I create a Canvas even 40,000 pixels wide it behaves very strangely, randomly disappearing. I have found a post with a description of exactly the same problem on the Silverlight forums and another one here on Stack Overflow. It seems to be a known problem since Silverlight 2, but I can't find any good workaround. Does anybody know such a workaround, or can check whether it is still an issue in Silverlight 4? Thanks in advance.

    Read the article

  • ruby on rails: audio/mp3 content header download

    - by bandhunt
    How do you set the headers for downloads in Ruby/Rails? In PHP I'd set the headers for an mp3 download like this: header("Content-Transfer-Encoding: binary"); header("Content-type: audio/mp3"); header("Content-Disposition: attachment; filename=\"$songname.mp3\""); header("Content-Length: " . $size); @readfile("http://example.com/12345.mp3"); It seems like there should be an easy solution. I did find this: response.headers['Content-type'] = 'Content-type: audio/mp3' But I'm not sure how/where the readfile would come into play, or about the other headers. Thanks!
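    On the Rails side, the usual route is send_file or send_data rather than hand-written headers; it sets Content-Type, Content-Disposition and Content-Length itself. The controller below is a hedged sketch (the controller/action names and the file location are placeholders):

    ```ruby
    class SongsController < ApplicationController
      def download
        # Placeholder path; the file could equally be fetched from a remote host first.
        path = File.join(Rails.root, "private", "12345.mp3")

        send_file path,
                  :type        => "audio/mpeg",
                  :disposition => "attachment",
                  :filename    => "#{params[:songname]}.mp3"
      end
    end
    ```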

    Read the article

  • How do you pronounce large hex numbers?

    - by warrenm
    This question might be subjective, but I'm hoping there's some consensus that I just don't know about. Short hex numbers are relatively easy to spell out (e.g., 0xC4A might be "cee-four-ay"). Hex numbers ending with a multiple of three zeros are likewise pretty easy (e.g., 0xC000 might be "cee-thousand"). But is there a concise way to pronounce 0xFFFF0000 or 0xCA000000? Magic numbers like 0xDEADBEEF are popular for their pronounceability, but I'm mostly asking about large-ish, round numbers that seem like they should have a more concise pronunciation.

    Read the article

  • Download office document without the web server trying to render it

    - by Dan Revell
    I'm trying to download an InfoPath template that's hosted on SharePoint. If I hit the URL in Internet Explorer, it asks me where to save it and I get the correct file on my disk. If I try to do this programmatically with WebClient or HttpWebRequest, then I get HTML back instead. How can I make my request so that the web server returns the actual xsn file and doesn't try to render it as HTML? If Internet Explorer can do this, then it's logical to think that I can too. I've tried setting the Accept property of the request to application/x-microsoft-InfoPathFormTemplate, but that hasn't helped. It was a shot in the dark.
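    One thing that is often worth trying against SharePoint/IIS is the WebDAV "Translate: f" request header, which asks the server for the stored file rather than a rendered version. The sketch below is an assumption that this applies to the xsn in question (URL, credentials and output path are placeholders):

    ```csharp
    using System.IO;
    using System.Net;

    static class XsnDownloader
    {
        static void Download(string url, string outputPath)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.UseDefaultCredentials = true;      // pass the current Windows identity
            request.Headers.Add("Translate", "f");     // "give me the raw file" hint

            using (var response = request.GetResponse())
            using (var body = response.GetResponseStream())
            using (var file = File.Create(outputPath))
            {
                body.CopyTo(file);                     // .NET 4+; use a manual buffer loop otherwise
            }
        }
    }
    ```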

    Read the article

  • Technical reasons for not having large background images in websites

    - by kees-kist
    Most websites tend to have either a solid color as background, or a small image that is repeated. Why aren't more websites using a large image (such as a photo) as background? I can think of the following reasons: 1) Problems with different screen resolutions. Too small and gaps start to appear on the left and/or right side for higher resolutions, too big and lower resolutions only show part of the image. 2) Bandwidth. Although this is unlikely to be a problem for most websites. Are there any other reasons why such backgrounds are not being used more often?

    Read the article

  • android html download and parse error

    - by Brahadeesh
    I am trying to download the HTML of a page using its URL. I am using Jsoup. This is my code: TextView ptext = (TextView) findViewById(R.id.pagetext); Document doc = null; try { doc = (Document) Jsoup.connect(mNewLinkUrl).get(); } catch (IOException e) { Log.d(TAG, e.toString()); e.printStackTrace(); } NodeList nl = doc.getElementsByTagName("meta"); Element meta = (Element) nl.item(0); String title = meta.attr("title"); ptext.append("\n" + mNewLinkUrl); When running it, I am getting an error saying attr is not defined for the type Element. What have I done wrong? Pardon me if this seems trivial.
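    For reference, the same lookup written purely against Jsoup's own API looks roughly like the sketch below (the "content" attribute is illustrative). The error in the question comes from mixing Jsoup's Document with the org.w3c.dom NodeList/Element types, which have different methods:

    ```java
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    public class MetaFetcher {
        public static String fetchMetaContent(String url) throws java.io.IOException {
            Document doc = Jsoup.connect(url).get();     // Jsoup's Document, not org.w3c.dom
            Element meta = doc.select("meta").first();   // first <meta> element, may be null
            return meta == null ? null : meta.attr("content");
        }
    }
    ```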

    Read the article

  • download file in iframe in IE

    - by Estelle
    In a webpage I have a link to let the user download a file, such as "showfile.aspx?filename=xxx". In showfile.aspx, I send the file using the Response.OutputStream.Write method. Now I get a problem when somebody puts this webpage in an iframe and opens it in IE: as I checked, showfile.aspx is requested twice when the link is clicked, and the second time the cookies for authorization and the session ID are missing. I tried to add the P3P header, but it's not working. My question is: is this how IE is designed to behave with iframes? Is there any way to work around it? Thanks.

    Read the article

  • Upload Large files (1GB) - ASP.NET

    - by Ramya Raj
    I need to upload large files of at least 1GB in size. I am using ASP.NET, C# and IIS 5.1 as my development platform. I am using HIF.PostedFile.InputStream.Read(fileBytes,0,HIF.PostedFile.ContentLength) before using File.WriteAllBytes(filePath, fileByteArray) (it doesn't get there but throws a System.OutOfMemoryException). Currently I have set the httpRuntime to executionTimeout="999999" maxRequestLength="2097151" (that's 2GB!) useFullyQualifiedRedirectUrl="true" minFreeThreads="8" minLocalRequestFreeThreads="4" appRequestQueueLimit="5000" enableVersionHeader="true" requestLengthDiskThreshold="8192". I have also set maxAllowedContentLength="2097151" (I guess that's only for IIS7). I have changed the IIS connection timeout to 999,999 secs too. I am unable to upload files of even 4578KB (Ajaz-Uploader.zip).
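    The OutOfMemoryException typically comes from materialising the whole upload as one byte array; a hedged sketch of streaming it to disk in small chunks instead is shown below (HIF is assumed to be the HtmlInputFile control from the question, and the target path is a placeholder). Note that this only changes the memory profile of the save step, not the IIS/ASP.NET request-size limits:

    ```csharp
    using System.IO;
    using System.Web.UI.HtmlControls;

    static class UploadSaver
    {
        // Copy the posted stream to disk in 64 KB chunks instead of building a
        // single 1 GB byte[] in memory.
        public static void Save(HtmlInputFile HIF, string filePath)
        {
            byte[] buffer = new byte[64 * 1024];

            using (Stream input = HIF.PostedFile.InputStream)
            using (FileStream output = File.Create(filePath))
            {
                int read;
                while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, read);
                }
            }
        }
    }
    ```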

    Read the article

  • C# Importing Large Volume of Data from CSV to Database

    - by guazz
    What's the most efficient method to load large volumes of data from CSV (3 million+ rows) into a database? The data needs to be formatted (e.g. the name column needs to be split into first name and last name, etc.). I need to do this as efficiently as possible, i.e. within time constraints. I am siding with the option of reading, transforming and loading the data row by row using a C# application. Is this ideal? If not, what are my options? Should I use multithreading?
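    If the target happens to be SQL Server (the question doesn't say), one common pattern is to transform rows in C# while streaming them into SqlBulkCopy in batches instead of issuing row-by-row INSERTs. The sketch below makes that assumption; the table, column names and naive CSV split are placeholders:

    ```csharp
    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    static class CsvBulkLoader
    {
        public static void Load(string csvPath, string connectionString)
        {
            var table = new DataTable();
            table.Columns.Add("FirstName", typeof(string));
            table.Columns.Add("LastName", typeof(string));

            using (var bulk = new SqlBulkCopy(connectionString)
                              { DestinationTableName = "People", BatchSize = 10000 })
            using (var reader = new StreamReader(csvPath))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    string[] fields = line.Split(',');        // naive split: no quoted fields
                    string[] name = fields[0].Split(' ');     // "name" column -> first / last
                    table.Rows.Add(name[0], name.Length > 1 ? name[1] : "");

                    if (table.Rows.Count == 10000)            // push a batch and reuse the table
                    {
                        bulk.WriteToServer(table);
                        table.Clear();
                    }
                }
                if (table.Rows.Count > 0) bulk.WriteToServer(table);
            }
        }
    }
    ```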

    Read the article

  • Mirror a website with httrack while executing javascript

    - by Martin
    I want to save a mirror of www.youtube.com/tv. I obviously do not want to save the videos. I want the code running the website in a local copy; everything else can stay remote. The code I want is mainly contained in 2 files: live.js and app-prod.js. I tried using httrack, but it doesn't parse the JavaScript, so it loads nothing past the first file, live.js. The %P parameter does not help: httrack www.youtube.com/tv +* -r6 --mirror -%P -j It doesn't go further than live.js because some JavaScript needs to be executed to load the next file. I know I can do this manually with any browser. I want to automate the process. Is httrack able to do this by itself? If yes, how?

    Read the article

  • While making an RSS reader which saves articles, how can I prevent duplicates?

    - by Koning Baard
    Let's say I have an RSS feed which lists the 3 newest questions on SO. At 1 o'clock, the feed looks like this: "While making an RSS reader which saves articles, how can I prevent duplicates?"; "Convert char array to UNICODE in MFC C++"; "How to deploy a Java Swing application with an embedded JavaDB database?". At 2 o'clock, the feed looks like this: "django url from another template than the one associated with the view-function"; "While making an RSS reader which saves articles, how can I prevent duplicates?"; "Convert char array to UNICODE in MFC C++" (the last two items are the duplicates carried over from 1 o'clock). I want to download the RSS feed every 5 minutes, parse it and save the articles that aren't already saved, but I do not want duplicates (items that remain in the new, updated feed, as in the example above). What can I use to determine whether an article is already saved? Thanks
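    The usual identity for a feed item is its GUID, falling back to the link when the GUID is missing: keep the identities of saved articles and skip anything already seen. A minimal sketch, assuming the feedparser library and an in-memory set standing in for the article store:

    ```python
    import feedparser

    seen_ids = set()   # in practice: the GUIDs/links of articles already in the database


    def fetch_new_items(feed_url):
        """Return only the entries that have not been saved before."""
        new_items = []
        for entry in feedparser.parse(feed_url).entries:
            identity = entry.get("id") or entry.get("link")  # GUID first, link as fallback
            if identity and identity not in seen_ids:
                seen_ids.add(identity)
                new_items.append(entry)
        return new_items

    # Run every 5 minutes; duplicates carried over from the previous poll are skipped.
    ```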

    Read the article

  • download mail attachment with Java

    - by Don
    Hi, I had a look in the reference doc, and Spring seems to have pretty good support for sending mail. However, I need to login to a mail account, read the messages, and download any attachments. Is downloading mail attachments supported by the Spring mail API? I know you can do this with the Java Mail API, but in the past I've found that very verbose and unpleasant to work with. EDIT: I've received several replies pointing towards tutorials that describe how to send mail with attachments, but what I'm asking about is how to read attachments from received mail. Cheers, Don
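    Spring's mail support is geared towards sending, so reading attachments generally means dropping down to the JavaMail API it builds on. Below is a hedged sketch of walking an IMAP inbox and saving attachments; the host, credentials and flat output file names are placeholders:

    ```java
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.util.Properties;
    import javax.mail.BodyPart;
    import javax.mail.Folder;
    import javax.mail.Message;
    import javax.mail.Multipart;
    import javax.mail.Part;
    import javax.mail.Session;
    import javax.mail.Store;

    public class AttachmentFetcher {
        public static void fetch(String host, String user, String password) throws Exception {
            Session session = Session.getInstance(new Properties());
            Store store = session.getStore("imaps");
            store.connect(host, user, password);
            Folder inbox = store.getFolder("INBOX");
            inbox.open(Folder.READ_ONLY);

            for (Message message : inbox.getMessages()) {
                Object content = message.getContent();
                if (!(content instanceof Multipart)) continue;
                Multipart multipart = (Multipart) content;
                for (int i = 0; i < multipart.getCount(); i++) {
                    BodyPart part = multipart.getBodyPart(i);
                    if (Part.ATTACHMENT.equalsIgnoreCase(part.getDisposition())) {
                        // Save the attachment under its original name (flat, for brevity).
                        try (InputStream in = part.getInputStream();
                             FileOutputStream out = new FileOutputStream(new File(part.getFileName()))) {
                            byte[] buffer = new byte[8192];
                            int read;
                            while ((read = in.read(buffer)) != -1) out.write(buffer, 0, read);
                        }
                    }
                }
            }
            inbox.close(false);
            store.close();
        }
    }
    ```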

    Read the article

  • Unable to upload large files to Google Docs

    - by Preeti
    Hi, I am uploading a document to Google Docs as: DocumentsService myService = new DocumentsService(""); myService.setUserCredentials("[email protected]", password ); DocumentEntry newEntry = myService.UploadDocument(@"C:\Sample.txt", "Sample.txt"); But when I try to upload a file of 3 MB it results in an exception: An unhandled exception of type 'Google.GData.Client.GDataRequestException' occurred in Google.GData.Client.dll Additional information: Execution of request failed: http://docs.google.com/feeds/documents/private/full How can I upload a large file to Google Docs? I am using Google API ver 2. Thanx

    Read the article

  • Downloading HTTP URLs asynchronously in C++

    - by Joey Adams
    What's a good way to download HTTP URLs (such as http://0.0.0.0/foo.htm) in C++ on Linux? I strongly prefer something asynchronous. My program will have an event loop that repeatedly initiates multiple (very small) downloads and acts on them when they finish (either by polling or by being notified somehow). I would rather not have to spawn multiple threads/processes to accomplish this. That shouldn't be necessary. Should I look into libraries like libcurl? I suppose I could implement it manually with non-blocking TCP sockets and select() calls, but that would likely be less convenient.
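    libcurl's multi interface is a reasonable fit: it drives many transfers from a single thread without blocking, and it can be folded into an existing select()/poll() loop via curl_multi_fdset(). A rough sketch (URLs are placeholders and completion handling is elided):

    ```cpp
    #include <curl/curl.h>
    #include <string>
    #include <vector>

    // Append each chunk of response data to the std::string passed via WRITEDATA.
    static size_t collect(char* data, size_t size, size_t nmemb, void* userdata) {
        static_cast<std::string*>(userdata)->append(data, size * nmemb);
        return size * nmemb;
    }

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURLM* multi = curl_multi_init();

        std::vector<std::string> urls = {"http://example.com/a", "http://example.com/b"};
        std::vector<std::string> bodies(urls.size());
        std::vector<CURL*> handles;
        for (size_t i = 0; i < urls.size(); ++i) {
            CURL* easy = curl_easy_init();
            curl_easy_setopt(easy, CURLOPT_URL, urls[i].c_str());
            curl_easy_setopt(easy, CURLOPT_WRITEFUNCTION, collect);
            curl_easy_setopt(easy, CURLOPT_WRITEDATA, &bodies[i]);
            curl_multi_add_handle(multi, easy);
            handles.push_back(easy);
        }

        int still_running = 0;
        do {
            curl_multi_perform(multi, &still_running);
            int numfds = 0;
            curl_multi_wait(multi, nullptr, 0, 1000, &numfds);  // block until activity or 1s
            // curl_multi_info_read() would report finished transfers here.
        } while (still_running > 0);

        for (CURL* easy : handles) {
            curl_multi_remove_handle(multi, easy);
            curl_easy_cleanup(easy);
        }
        curl_multi_cleanup(multi);
        curl_global_cleanup();
        return 0;
    }
    ```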

    Read the article

  • Avoid application becoming unresponsive on download

    - by baron
    Hi, I have an application which downloads a file from a network location and then displays that file's information. It uses WebClient.DownloadFile to achieve this. The problem is, when the user clicks a button to start the download, the application becomes unresponsive until the file has downloaded. This gives the impression the application may have hung. I would like to hear ideas which would avoid this scenario. Though my solution will need to be 'as quick and cheap as possible', I would be interested in hearing about custom loaders that have been built, or some sort of loading symbol / progress bar / sand timer thingy : ) Thanks for reading.
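    One cheap option is WebClient's own asynchronous variant: DownloadFileAsync returns immediately and raises progress and completion events back on the UI thread, which is enough to drive a progress bar. A sketch, with the progress bar control and the existing display method as assumed names:

    ```csharp
    using System;
    using System.Net;

    // Inside the form/window class; progressBar and ShowFileInformation are
    // assumed to exist already in the application.
    void StartDownload(Uri source, string destinationPath)
    {
        var client = new WebClient();
        client.DownloadProgressChanged += (s, e) =>
            progressBar.Value = e.ProgressPercentage;        // 0-100
        client.DownloadFileCompleted += (s, e) =>
            ShowFileInformation(destinationPath);            // existing display logic
        client.DownloadFileAsync(source, destinationPath);   // returns immediately
    }
    ```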

    Read the article

  • How do I include extremely long literals in C++ source?

    - by BillyONeal
    Hello everyone :) I've got a bit of a problem. Essentially, I need to store a large list of whitelisted entries inside my program, and I'd like to include such a list directly -- I don't want to have to distribute other libraries and such, and I don't want to embed the strings into a Win32 resource, for a bunch of reasons I don't want to go into right now. I simply included my big whitelist in my .cpp file, and was presented with this error: 1>ServicesWhitelist.cpp(2807): fatal error C1091: compiler limit: string exceeds 65535 bytes in length The string itself is about twice the limit allowed by VC++. What's the best way to include such a large literal in a program?
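    Since C1091 counts the whole string even after adjacent literals are concatenated, one workaround is to keep the whitelist as an array of smaller literals and either iterate over it directly or join it once at startup. A sketch with placeholder entries:

    ```cpp
    #include <string>

    namespace {

    // Each chunk stays comfortably under the per-string limit; placeholder data.
    const char* const kWhitelistChunks[] = {
        "first-entry;second-entry;third-entry;",
        "fourth-entry;fifth-entry;",
        // ... more chunks generated from the real whitelist ...
        "last-entry",
    };

    // Join the chunks once at startup if a single std::string is more convenient
    // than iterating over the array.
    std::string JoinedWhitelist() {
        std::string joined;
        for (const char* chunk : kWhitelistChunks) joined += chunk;
        return joined;
    }

    }  // namespace
    ```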

    Read the article

  • How to execute a large PHP script?

    - by atif089
    Well, basically I may want to execute a script that takes as much as an hour. What I really want to do is send SMS to my users using a third-party API. So it's basically like I supply my script with an array of phone numbers and fire the method to send the SMS. However, assuming it takes 5 seconds to send 1 SMS and I want to send 1000 SMS, that is roughly 1 - 2 hours. I can't use set_time_limit() because I am on a shared host. One way to do this is to store the numbers in a session and execute each SMS, using JavaScript to refresh that page until the end. That way I need to keep my browser open, and the execution will stop if my internet connection is disconnected. So, is there any better way to do this? I hope I am clear enough in explaining what I want: I want to execute a large script that may take hours, without it timing out.
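    On a shared host the usual escape hatch is to stop doing the work inside one web request at all: queue the numbers in a table and let a cron job send a small batch on each run. A hedged sketch; send_sms(), the table/columns and the schedule are placeholders for the third-party API and the actual setup:

    ```php
    <?php
    // send_sms_batch.php -- run from cron, e.g. once a minute (placeholder entry):
    //   * * * * * php /path/to/send_sms_batch.php
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    // Pick a small batch of numbers that have not been sent yet.
    $batch = $pdo->query(
        "SELECT id, phone, message FROM sms_queue WHERE sent_at IS NULL LIMIT 10"
    )->fetchAll(PDO::FETCH_ASSOC);

    $mark = $pdo->prepare("UPDATE sms_queue SET sent_at = NOW() WHERE id = ?");

    foreach ($batch as $row) {
        send_sms($row['phone'], $row['message']);   // placeholder for the provider's API call
        $mark->execute(array($row['id']));
    }
    ```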

    Read the article

  • Rails: large amount of data in a single insert - ActiveRecord gave out

    - by Nik
    So I have, I think, around 36,000 rows (just to be safe), a number I wouldn't think was too large for a modern SQL database like MySQL. Each record has just two attributes. So I collected them into one single insert statement: sql = "INSERT INTO tasks (attrib_a, attrib_b) VALUES (c1,d1),(c2,d2),(c3,d3)...(c36000,d36000);" ActiveRecord::Base.connection.execute sql This fails with the following backtrace: from C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract_adapter.rb:219:in `log' from C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute_without_analyzer from c:/r/projects/vendor/plugins/rails-footnotes/lib/rails-footnotes/notes/queries_note.rb:130:in `execute' from C:/Ruby/lib/ruby/1.8/benchmark.rb:308:in `realtime' from c:/r/projects/vendor/plugins/rails-footnotes/lib/rails-footnotes/notes/queries_note.rb:130:in `execute' from (irb):53 from C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/vendor/tzinfo-0.3.12/tzinfo/time_or_datetime.rb:242 I don't know if the above info is enough; please do ask for anything that I didn't provide here. So, any idea what this is about? THANK YOU!!!!
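    Without the exception message it's hard to be sure, but a single INSERT carrying 36,000 value tuples commonly trips MySQL's max_allowed_packet. A hedged sketch of sending the same rows in smaller batches (pairs is assumed to be the array of [attrib_a, attrib_b] values, and the batch size is arbitrary):

    ```ruby
    # pairs is assumed: an array of [attrib_a, attrib_b] pairs, ~36,000 long.
    conn = ActiveRecord::Base.connection

    pairs.each_slice(1000) do |batch|
      values = batch.map { |a, b| "(#{conn.quote(a)}, #{conn.quote(b)})" }.join(",")
      conn.execute("INSERT INTO tasks (attrib_a, attrib_b) VALUES #{values}")
    end
    ```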

    Read the article

  • Android Source code download error

    - by user351850
    Hi all, I have followed the instructions on the Android website on how to download the latest Android source code, but I get errors when I run this command: repo init -u git://android2.git.kernel.org/platform/manifest.git It gives the following error: Getting repo ... from git://android.git.kernel.org/tools/repo.git android.git.kernel.org[0: 199.6.1.176]: errno=Connection refused android.git.kernel.org[0: 130.239.17.12]: errno=Connection refused fatal: unable to connect a socket (Connection refused) On checking forums for a resolution, I was told that port 9418 was being blocked. I use Ubuntu 10.04 and ensured that the firewall wasn't blocking the port, and I also enabled the port and the above IP addresses. I also spoke to the networking people, who ensured that no traffic from the internet is being blocked. I would be glad to get directions on how to proceed next. Many thanks as you respond. Saheed.

    Read the article

  • Enable export to XML via HTTP on a large number of models with child relations

    - by Vasil
    I've a large number of models (120+) and I would like to let users of my application export all of the data from them in XML format. I looked at django-piston, but I would like to do this with minimum code. Basically I'd like to have something like this: GET /export/applabel/ModelName/ would stream all instances of ModelName in applabel, together with its tree of related objects. I'd like to do this without writing code for each model. What would be the best way to do this?
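    django.core.serializers already knows how to dump any queryset to XML, so a single generic view can cover all 120+ models. The sketch below is a hedged, Django 1.x-era assumption (get_model, the mimetype argument and the URL pattern are not from the question); note that the serializer only emits related objects as foreign-key ids, so a full child tree would still need extra handling:

    ```python
    from django.core import serializers
    from django.db.models import get_model          # Django 1.x-era API
    from django.http import HttpResponse, Http404


    def export_model(request, app_label, model_name):
        """Serialize every instance of <app_label>.<model_name> to XML."""
        model = get_model(app_label, model_name)
        if model is None:
            raise Http404("Unknown model")
        xml = serializers.serialize("xml", model.objects.all())
        return HttpResponse(xml, mimetype="application/xml")
    ```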

    Read the article
