Search Results

Search found 82718 results on 3309 pages for 'large file download'.

  • How do I export a large table into 50 smaller csv files of 100,000 records each

    - by Eddie
    I am trying to export one field from a very large table - containing 5,000,000 records, for example - into a CSV list, but not all together; rather, 100,000 records into each .csv file created, without duplication. How can I do this, please? I tried:

        SELECT field_name FROM table_name
        WHERE certain_conditions_are_met
        INTO OUTFILE '/tmp/name_of_export_file_for_first_100000_records.csv'
        LINES TERMINATED BY '\n'
        LIMIT 0, 100000

    That gives the first 100,000 records, but nothing I do gets the other 4,900,000 records exported into 49 other files - and how do I specify the other 49 filenames? For example, I tried the following, but the SQL syntax is wrong:

        SELECT field_name FROM table_name
        WHERE certain_conditions_are_met
        INTO OUTFILE '/home/user/Eddie/name_of_export_file_for_first_100000_records.csv'
        LINES TERMINATED BY '\n'
        LIMIT 0, 100000
        INTO OUTFILE '/home/user/Eddie/name_of_export_file_for_second_100000_records.csv'
        LINES TERMINATED BY '\n'
        LIMIT 100001, 200000

    That did not create the second file. What am I doing wrong, please, and is there a better way to do this? Should the LIMIT 0, 100000 be put before the first INTO OUTFILE clause, with the entire command then repeated from SELECT for the second 100,000 records, and so on? Thanks for any help. Eddie
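
    One common approach (a sketch, not Eddie's original code) is to run one complete SELECT ... INTO OUTFILE statement per chunk, varying only the offset and the output filename. The table and field names below are the placeholders from the question, and an ORDER BY is assumed so the chunks stay disjoint:

        # Generate the 50 export statements; feed each one to the mysql client.
        # Note MySQL's LIMIT syntax is "LIMIT offset, count", so the second
        # chunk starts at offset 100000, not 100001.
        CHUNK = 100_000
        TOTAL = 5_000_000
        for i in range(TOTAL // CHUNK):
            print(
                "SELECT field_name FROM table_name "
                "WHERE certain_conditions_are_met "
                "ORDER BY field_name "
                f"LIMIT {i * CHUNK}, {CHUNK} "
                f"INTO OUTFILE '/tmp/export_{i + 1:02d}.csv' "
                "LINES TERMINATED BY '\\n';"
            )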

    Read the article

  • Download HTML and Images with WGet without first few lines

    - by St. John Johnson
    I'm attempting to use wget with the -p option to download specific documents and the images linked in the HTML. The problem is that the site hosting the HTML has some non-HTML information preceding the HTML. This causes wget not to interpret the document as HTML, so it never searches for images. Is there a way to have wget strip the first X lines and/or force it to search for images?

    Example URL: http://www.sec.gov/Archives/edgar/data/13239/000119312510070346/ds4.htm

    First lines of content:

        <DOCUMENT>
        <TYPE>S-4
        <SEQUENCE>1
        <FILENAME>ds4.htm
        <DESCRIPTION>FORM S-4
        <TEXT>
        <HTML><HEAD>
        <TITLE>Form S-4</TITLE>

    Last lines of content:

        </BODY></HTML>
        </TEXT>
        </DOCUMENT>
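
    One workaround (a rough sketch, not a wget feature) is to fetch the raw document once, cut out everything outside the <HTML>...</HTML> block, and save the cleaned copy for a second pass; the output filename here is made up:

        import urllib.request

        URL = ("http://www.sec.gov/Archives/edgar/data/13239/"
               "000119312510070346/ds4.htm")

        raw = urllib.request.urlopen(URL).read().decode("latin-1")
        start = raw.find("<HTML>")                 # skip the SGML wrapper
        end = raw.find("</HTML>") + len("</HTML>")
        with open("ds4_clean.htm", "w", encoding="latin-1") as f:
            f.write(raw[start:end])
        # The <img src=...> URLs in the cleaned file can then be extracted
        # and fetched directly, since wget -p only sees the remote copy.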

    Read the article

  • *Client* scalability for large numbers of remote web service calls

    - by Yuriy
    Hey guys, I was wondering if you could share best practices and common mistakes when it comes to making large numbers of time-sensitive web service calls. In my case, I have a SOAP-based and an XML-RPC-based web service to which I'm constantly making calls. I predict that this will soon become an issue as the number of calls per second grows. At a higher level, I was thinking of batching those calls and submitting them to the web services every 100 ms. Could you share what else works? On the lower-level side of things, I use the Apache XML-RPC client and the standard javax.xml.soap.* packages for my client implementations. Are you aware of any client-scalability-related tricks/tips/warnings with these packages? Thanks in advance, Yuriy
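
    As an illustration of the 100 ms batching idea (a minimal sketch; send_batch is a hypothetical stand-in for one combined SOAP/XML-RPC request): calls are queued as they arrive and a single worker flushes the queue on a fixed interval:

        import queue
        import threading
        import time

        pending = queue.Queue()

        def send_batch(calls):
            # Placeholder: issue one request carrying all queued calls.
            print("sending batch of %d calls" % len(calls))

        def flusher(interval=0.1):
            while True:
                time.sleep(interval)
                batch = []
                while not pending.empty():
                    batch.append(pending.get_nowait())
                if batch:
                    send_batch(batch)

        threading.Thread(target=flusher, daemon=True).start()
        for i in range(250):              # simulate a burst of incoming calls
            pending.put({"call": i})
        time.sleep(0.5)                   # let the flusher drain the queue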

    Read the article

  • System.OverflowException - int32 is too large or small

    - by LonnieBest
    I need a little advice. I've got a Windows service that runs at night. In my development environment it runs without exception, but when I run it installed on other machines, I come in the morning to be welcomed by a System.OverflowException saying that I've set an int32 to a value that is too large or too small. I've carefully combed the service's C# code, and I have try/catch statements around everything that should catch any error and write it to a log without completely stopping my service with this overflow exception. But still, it occurs and stops the service. I'd appreciate any conceptual advice on how to pinpoint what's causing an error such as this.

    Read the article

  • PycURL RESUME_FROM

    - by excid3
    I can't seem to get the RESUME_FROM option to work. Here's some example code that I have been testing with:

        import os
        import pycurl
        import sys

        def progress(total, existing, upload_t, upload_d):
            try:
                frac = float(existing) / float(total)
            except:
                frac = 0
            sys.stdout.write("\r%s %3i%%" % ("file", frac * 100))

        url = "http://launchpad.net/keryx/stable/0.92/+download/keryx_0.92.4.tar.gz"
        filename = url.split("/")[-1].strip()

        def test(debug_type, debug_msg):
            print "debug(%d): %s" % (debug_type, debug_msg)

        c = pycurl.Curl()
        c.setopt(pycurl.URL, url)
        c.setopt(pycurl.FOLLOWLOCATION, 1)
        c.setopt(pycurl.MAXREDIRS, 5)

        # Setup writing
        if os.path.exists(filename):
            f = open(filename, "ab")
            c.setopt(pycurl.RESUME_FROM, os.path.getsize(filename))
        else:
            f = open(filename, "wb")
        c.setopt(pycurl.WRITEDATA, f)

        #c.setopt(pycurl.VERBOSE, 1)
        c.setopt(pycurl.DEBUGFUNCTION, test)
        c.setopt(pycurl.NOPROGRESS, 0)
        c.setopt(pycurl.PROGRESSFUNCTION, progress)
        c.perform()

    Read the article

  • Indy FTP, large files and NAT routers

    - by Lobuno
    Hello! I have been using Indy to transfer files via FTP for years now, but have not been able to find a satisfactory solution for the following problem. When a user behind a router is uploading a large file, sometimes the following happens: the file is uploaded OK, but in the meantime the command channel gets disconnected because of a timeout. Normally this doesn't happen with a direct connection to the server, because the server "knows" that a transfer is taking place on the data channel. Some routers are not aware of this, though, and the command channel is closed. Many programs send a NOOP command periodically to keep the command channel alive, even though this is not part of the standard FTP specification. My question: how do I do that? Do I send the NOOP command in the OnWork event? Does this cause any collateral damage in some way - for example, do I need to process some response? How do I best solve this problem?
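
    For what the keepalive looks like in practice, here is a rough sketch using Python's ftplib rather than Indy (host, credentials, and file are made up, and how servers handle mid-transfer NOOPs varies); the Indy equivalent would indeed live in something like OnWork, and note that each NOOP does produce a reply that has to be drained before the transfer's final 226:

        import ftplib
        import time

        ftp = ftplib.FTP("ftp.example.com")        # hypothetical server
        ftp.login("user", "password")

        conn, _size = ftp.ntransfercmd("RETR big.iso")
        noops, last = 0, time.time()
        with open("big.iso", "wb") as out:
            while True:
                chunk = conn.recv(8192)
                if not chunk:
                    break
                out.write(chunk)
                if time.time() - last > 60:        # keep control channel busy
                    ftp.putcmd("NOOP")             # send without waiting
                    noops += 1
                    last = time.time()
        conn.close()
        for _ in range(noops):
            ftp.getresp()                          # queued NOOP replies
        ftp.voidresp()                             # the transfer's 226 reply
        ftp.quit()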

    Read the article

  • "code too large" compilation error in java

    - by trinity
    Hello all, Is there any maximum size for code in Java? I wrote a function with more than 10,000 lines. Actually, each line assigns a value to an array variable:

        arts_bag[10792] = "newyorkartworld";
        arts_bag[10793] = "leningradschool";
        arts_bag[10794] = "mailart";
        arts_bag[10795] = "artspan";
        arts_bag[10796] = "watercolor";
        arts_bag[10797] = "sculptures";
        arts_bag[10798] = "stonesculpture";

    And while compiling, I get this error: code too large. How do I overcome this?

    Read the article

  • finding a string of random characters (with possible errors) within a large string of random characters

    - by mike
    I am trying to search a large string without spaces for a smaller string of characters. Using regex I can easily find perfect matches, but I can't figure out how to find partial matches. By partial matches I mean one or two extra characters in the string, or one or two characters that have been changed, or one of each. The first and last characters will always match, though. This would be similar to a spell checker, but there are no spaces and the strings don't contain actual words, just random hex digits. I figured out a way to find the string if there are no extra characters, using indexOf(string.charAt(0)) and indexOf(string.charAt(string.length()-1)) and looping through the characters between the two indexes. But this can be problematic when dealing with randomized characters, because of the possibility of finding the first and last characters at the correct spacing but with none of the middle characters matching. I've been scratching my head for hours on this issue. Any ideas?
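
    One way to frame this is approximate matching with a small edit-distance budget. A minimal sketch (Python rather than the question's Java, and not tuned for speed): slide windows of length len(needle) plus or minus 2 across the haystack, and accept the first window whose first and last characters match and whose edit distance is at most 2:

        def edit_distance(a, b):
            # Classic dynamic-programming Levenshtein distance.
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1,                 # deletion
                                   cur[j - 1] + 1,              # insertion
                                   prev[j - 1] + (ca != cb)))   # substitution
                prev = cur
            return prev[-1]

        def fuzzy_find(haystack, needle, max_errors=2):
            n = len(needle)
            for length in range(n - max_errors, n + max_errors + 1):
                for i in range(len(haystack) - length + 1):
                    w = haystack[i:i + length]
                    if (w[0] == needle[0] and w[-1] == needle[-1]
                            and edit_distance(w, needle) <= max_errors):
                        return i, w
            return None

        print(fuzzy_find("03fa9b2c71d44e", "9b2d7"))   # -> (4, '9b2c7')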

    Read the article

  • How to organize a large number of objects

    - by shane
    We have a large number of documents, plus metadata (XML files) associated with these documents. What is the best way to organize them? Currently we put them into a series of nested folders:

        /repository/category/date(when they were loaded into our db)/document_number.pdf and .xml

    We use the path as a unique identifier for the document in our system. This is more versatile than putting them all in a single flat folder; it is also independent from our database/application, so we can reload the documents in case of failure. Yet it introduces some limitations: for example, we can't move the files once they've been placed in this structure, and it takes work to arrange them this way. What is the best practice? How do websites such as Scribd deal with this problem?
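
    One pattern that sidesteps both limitations (a sketch under assumed names, not a description of what Scribd actually does) is hash-based sharding: derive the storage path from a stable document ID, so the path can always be recomputed and files never need to move:

        import hashlib
        import os

        def shard_path(root, doc_id, ext):
            digest = hashlib.sha1(doc_id.encode("utf-8")).hexdigest()
            # Two levels of fan-out keep each directory small:
            # /repository/ab/cd/abcd....pdf
            return os.path.join(root, digest[:2], digest[2:4], digest + ext)

        print(shard_path("/repository", "category/2010-04-21/1234", ".pdf"))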

    Read the article

  • Free Large datasets to experiment with Hadoop

    - by Sundar
    Do you know any large datasets for experimenting with Hadoop that are free/low cost? Any related pointers/links are appreciated. Preference: at least one GB of data; production log data of a web server. A few of them which I found so far:

        http://dumps.wikimedia.org/enwiki/20100130/
        http://wiki.freebase.com/wiki/Data_dumps
        http://aws.amazon.com/publicdatasets/

    Also, can we run our own crawler to gather data from sites, e.g. Wikipedia? Any pointers on how to do this are appreciated as well.

    Read the article

  • RichFaces rich:insert takes a long time to output large files

    - by Mark Lewis
    Hello, I'm using a RichFaces <rich:insert> like this:

        <rich:panel header="my head">
          <a4j:outputPanel ajaxRendered="true">
            <rich:insert src="#{MyBacking.myPath}" highlight="groovy" />
          </a4j:outputPanel>
        </rich:panel>

    If I have a 60k file to output, it takes 23 seconds. I've got a requirement to output the contents of some larger files than that, and obviously the larger the file, the longer the wait for content. The recommendation in the answer to another related question is to introduce paging. I will, but the question is: why does it take so long to output 60k of text using JSF/RichFaces? This is reading off a local disk on a Windows XP SP2 PC - I can see from the log that the data has already been written to disk from the network. Other scripting languages appear to be faster than this - is it something to do with the JSF lifecycle having to handle the text, maybe? Thanks

    Read the article

  • Howto: Download local copy of Google's Pacman game

    - by macek
    It looks like this is HTML+JavaScript. Is there a way I can download a copy so I can continue playing after they take it down? Thanks for any help :)

    Edit: Ok, ok, I wasn't completely forthcoming. Not only would I like to continue playing it, I kinda want to look at the source code, too... I was able to find this: Google pacman10-hp.2.js (see it reformatted on Github). Thanks @SteD

    Github repo: I set up a github repo: macek/google_pacman. Check out the README, I think we're very close! Send me a pull request if you make any progress. Put any useful details in the README. Let's get this working! :)

    Read the article

  • serving large file using select, epoll or kqueue

    - by xask
    Nginx uses epoll or other multiplexing techniques (select) for handling multiple clients, i.e. it does not spawn a new thread for every request, unlike Apache. I tried to replicate the same in my own test program using select. I could accept connections from multiple clients by creating a non-blocking socket and using select to decide which client to serve. My program simply echoes their data back to them. It works fine for small data transfers (some bytes per client). The problem occurs when I need to send a large file over a connection to a client. Since I have only one thread to serve all clients, until I am finished reading the file and writing it over to the socket I cannot resume serving the other clients. Is there a known solution to this problem, or is it best to create a thread for every such request?
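
    The usual single-threaded answer is to make the file send itself event-driven: keep a per-connection offset and write only one chunk each time the socket is reported writable, so no client can monopolize the loop. A minimal sketch using Python's selectors module (which wraps epoll/kqueue/select; error handling omitted, and the file is held in memory for brevity):

        import selectors
        import socket

        sel = selectors.DefaultSelector()
        DATA = open("big.bin", "rb").read()       # the large file to serve
        CHUNK = 64 * 1024

        def accept(server):
            conn, _ = server.accept()
            conn.setblocking(False)
            # The per-connection offset is what lets us resume later.
            sel.register(conn, selectors.EVENT_WRITE, {"offset": 0})

        srv = socket.socket()
        srv.bind(("", 8000))
        srv.listen()
        srv.setblocking(False)
        sel.register(srv, selectors.EVENT_READ, None)

        while True:
            for key, _events in sel.select():
                if key.data is None:              # the listening socket
                    accept(key.fileobj)
                    continue
                off = key.data["offset"]
                sent = key.fileobj.send(DATA[off:off + CHUNK])
                key.data["offset"] = off + sent
                if key.data["offset"] >= len(DATA):
                    sel.unregister(key.fileobj)
                    key.fileobj.close()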

    Read the article

  • Doing a large number of upserts as fast as possible

    - by Jason Swett
    My app (which uses MySQL) is doing a large number of successive upserts. Right now my SQL looks like this:

        INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL','123','123','123')
        INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL','123','123','123')
        INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('VICTOR H KINDELL OR','123','123','123')
        INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('TRACY L WALTER PERSONAL REP FOR','123','123','123')
        INSERT IGNORE INTO customer (name,customer_number,social_security_number,phone) VALUES ('TRACY L WALTER PERSONAL REP FOR','123','123','123')

    So far I've found INSERT IGNORE to be the fastest way to achieve upserts. Selecting a record to see if it exists and then either updating it or inserting a new one is too slow. Even this is not as fast as I'd like, because I need to issue a separate statement for each record. Sometimes I'll have around 50,000 of these statements in a row. Is there a way to take care of all of these in just one statement, without deleting any existing records?
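
    MySQL's INSERT IGNORE does accept a multi-row VALUES list, so the 50,000 statements can be collapsed into a handful. A sketch of building the batched statements with parameter binding (the cursor is a hypothetical MySQLdb-style cursor, and rows stands in for the real data):

        def upsert_customers(cursor, rows, batch_size=1000):
            head = ("INSERT IGNORE INTO customer "
                    "(name, customer_number, social_security_number, phone) "
                    "VALUES ")
            for i in range(0, len(rows), batch_size):
                batch = rows[i:i + batch_size]
                placeholders = ", ".join(["(%s, %s, %s, %s)"] * len(batch))
                params = [v for row in batch for v in row]  # flatten tuples
                cursor.execute(head + placeholders, params)

        rows = [
            ("VICTOR H KINDELL", "123", "123", "123"),
            ("VICTOR H KINDELL OR", "123", "123", "123"),
            ("TRACY L WALTER PERSONAL REP FOR", "123", "123", "123"),
            # ... tens of thousands more ...
        ]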

    Read the article

  • latex large division sign in a math formula

    - by Anna
    Hi, I have been looking for an answer for some time now; I hope you could give me a quick tip. I have an equation with many divisions inside, e.g.:

        $\frac{\frac{a_1}{a_2}}{\frac{b_1}{b_2}}$

    To make it more readable, I decided to change the large fraction into a "/" sign, i.e.:

        $\frac{a_1}{a_2} / \frac{b_1}{b_2}$

    The problem is that the "/" sign remains small, and it is quite ugly. How do I change the "/" sign to have a big font? How do I make it more readable? Thanks.
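
    Two standard ways to get a full-size slash (sketches; the second relies on the e-TeX \middle primitive, available in any modern LaTeX):

        % Fixed-size variant: \big/, \Big/, \bigg/ pick the size manually.
        $\frac{a_1}{a_2} \Big/ \frac{b_1}{b_2}$

        % Self-sizing variant: \middle/ grows to match the delimited material.
        $\left.\frac{a_1}{a_2}\middle/\frac{b_1}{b_2}\right.$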

    Read the article

  • Send failed from rails server to download in the browser

    - by Markus
    Hi everybody, I have a web application which has some multimedia files stored in a user-protected area. To make these files available to logged-in customers, I am considering using the x-sendfile plugin:

        x_send_file(path, :type => 'application/pdf')

    It is just strange that every time I run this function, an empty file gets sent to the browser download. I checked the path, which is correct (the app fails if I change it to a nonexistent file). Actually, if I use the Rails-internal send_file method, the same error occurs... Any help is appreciated! Markus

    Read the article

  • SQL Server 2000 tables

    - by klork
    We currently have an SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting the table into one table per user. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647, considering just positive values. My questions: Does anyone have any experience with a SQL Server (2000/2005) installation with millions of tables? What are the implications of this architecture with regard to maintenance and access using Query Analyzer, Enterprise Manager, etc.? What are the implications of having such a large number of indexes in a database instance? All comments are appreciated. Thanks

    Read the article

  • Split ExtJS for incremental (on demand) download.

    - by Kabeer
    Hello. I had earlier asked whether I can remove unutilized JavaScript code from the ExtJS library; JSBuilder was the answer. What about being able to download widgets on demand? I ask this because I have discovered the technique in the markup generated by Coolite (an ASP.NET framework that wraps ExtJS). So do I have to go through a meticulous process of splitting the library myself, or is there a better way? BTW, I'd like to avoid using Coolite.

    Read the article

  • OutOfMemoryException Processing Large File

    - by Krip
    We are loading a large flat file into BizTalk Server 2006 (original release, not R2) - about 125 MB. We run a map against it and then take each row and make a call out to a stored procedure. We receive an OutOfMemoryException during orchestration processing; the Windows service restarts, uses the full 2 GB of memory, and crashes again. The server is 32-bit and set to use the /3GB switch. I've also separated the flow into three hosts - one for receive, another for orchestration, and a third for sends. Anyone have any suggestions for getting this file to process without error? Thanks, Krip

    Read the article

  • Create Zip file from stream and download it

    - by Navid Farhadi
    I have a DataTable that I want to convert to XML and then zip, using DotNetZip; finally, the user can download it via an ASP.NET web page. My code is below:

        dt.TableName = "Declaration";
        MemoryStream stream = new MemoryStream();
        dt.WriteXml(stream);

        ZipFile zipFile = new ZipFile();
        zipFile.AddEntry("Report.xml", "", stream);

        Response.ClearContent();
        Response.ClearHeaders();
        Response.AppendHeader("content-disposition", "attachment; filename=Report.zip");
        zipFile.Save(Response.OutputStream);
        //Response.Write(zipstream);
        zipFile.Dispose();

    The XML file in the zip file is empty.

    Read the article

  • Converting a large SQL Server Database to Azure Storage

    - by Laith
    Hi guys, I have a very large database structure (the data is not important at this point; I can migrate the info in the db pretty easily once the structure is done), all residing in SQL Server. I even published it to SQL Azure, but thinking about SQL Azure's size limitation made me decide to switch most of the tables that do not need all the bells and whistles of SQL Azure to Azure Table and Blob storage. I was thinking of creating a TT template that does that, but was wondering if there is a tool that already does it. Any ideas or thoughts? The only tables that I would keep in SQL Azure would be anything related to transactions, like payments. Appreciate your thoughts and advice.

    Read the article

  • JsonResult shows up as a file download in the browser

    - by joshb
    I'm trying to use jQuery.ajax to post data to an ASP.NET MVC2 action method that returns a JsonResult. Everything works great, except that when the response gets back to the browser it is treated as a file download instead of being passed into the success handler. Here's my code.

    JavaScript:

        <script type="text/javascript">
            $(document).ready(function () {
                $("form[action$='CreateEnvelope']").submit(function () {
                    $.ajax({
                        url: $(this).attr("action"),
                        type: "POST",
                        data: $(this).serialize(),
                        dataType: "json",
                        success: function (envelopeData) {
                            alert("test");
                        }
                    });
                });
                return false;
            });
        </script>

    Action method on controller:

        public JsonResult CreateEnvelope(string envelopeTitle, string envelopeDescription)
        {
            //create an envelope object and return
            return Json(envelope);
        }

    If I open the downloaded file, the JSON is exactly what I'm looking for, and the MIME type is shown as application/json. What am I missing to make the jQuery.ajax call receive the JSON returned?

    Read the article

  • Optimizing a large iteration of PHP objects (EAV-based)

    - by Aron Rotteveel
    I am currently working on a project that utilizes the EAV model. This turns out to work quite well, but like many others I am now stumbling upon some performance issues. The data set in this particular case consists of approximately 2,500 entities, each with approx. 150 attributes. Each entity and each attribute is represented by a PHP object. Since most parts of the application only iterate through a filtered set of entities, we have not had very large issues yet. Now, however, I am working on an algorithm that requires iteration over the entire dataset, which causes a major impact on performance. This information is perhaps not very much to work with, but since this is an architectural problem, I am hoping for an architectural pattern to help me on the way as well. Each entity, including its attributes, takes up approx. 500KB of memory.
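
    A common architectural move for the full-dataset pass (a sketch with hypothetical table and column names, in Python rather than the project's PHP) is to bypass the per-attribute objects entirely: stream the raw value rows once and fold them into plain per-entity dictionaries, which are far cheaper than ~375,000 objects:

        from collections import defaultdict

        def load_all_entities(cursor):
            cursor.execute(
                "SELECT entity_id, attribute_code, value FROM entity_values")
            entities = defaultdict(dict)
            for entity_id, code, value in cursor:   # one pass over all rows
                entities[entity_id][code] = value
            return entities   # {entity_id: {attribute_code: value}}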

    Read the article

  • Building dictionary of words from large text

    - by LiorH
    I have a text file containing posts in English/Italian. I would like to read the posts into a data matrix so that each row represents a post and each column a word. The cells in the matrix are the counts of how many times each word appears in the post. The dictionary should consist of all the words in the whole file, or a non-exhaustive English/Italian dictionary. I know this is a common, essential preprocessing step for NLP. Does anyone know of a tool/project that can perform this task? Someone mentioned Apache Lucene; do you know if a Lucene index can be serialized to a data structure similar to my needs?
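
    For scale, the step itself is small enough to sketch directly (assuming one post per line; the tokenizer here is a crude \w+ split, not a real English/Italian analyzer):

        from collections import Counter
        import re

        with open("posts.txt", encoding="utf-8") as f:
            posts = [re.findall(r"\w+", line.lower()) for line in f]

        vocab = sorted({word for post in posts for word in post})
        index = {word: i for i, word in enumerate(vocab)}

        matrix = []
        for post in posts:
            row = [0] * len(vocab)                  # one column per word
            for word, count in Counter(post).items():
                row[index[word]] = count
            matrix.append(row)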

    Read the article

  • Upload/Download images to FTP without bothering the user

    - by Dan B
    Hi, I know a lot of posts have been made regarding FTP, but none have led me to what I need. I'm trying to upload a picture to a server (currently attempting FTP), but do it without requiring the user to be involved. I want to be able to seamlessly upload/download the image when a certain user action occurs, but I don't want to use a third-party app like AndFTP. The idea is that a user will upload a picture, and then another user will be able to grab that picture based on which user put it up. No user will know where it's going or where it came from, nor will they navigate the FTP. Alternatively, does anyone have thoughts on a better way to do this? I thought of using the imgur API, but it can't be used commercially. It would, however, be perfect for my purposes. Is there a similar open-source alternative? Any help is greatly appreciated. Dan

    Read the article
