Search Results

Search found 37915 results on 1517 pages for 'text editor'.


  • Importing a large delimited file to a MySQL table

    - by Tom
    I have this large (and oddly formatted) txt file from the USDA's website: the NUT_DATA.txt file. The problem is that it is almost 27 MB! I successfully imported a few other, smaller files, but my method used file_get_contents, so it makes sense that an error would be thrown when I try to hold 27+ MB in RAM. How can I import this massive file into my MySQL DB without running into timeout and RAM issues? I've tried getting one line at a time from the file, but that ran into timeout issues. I'm using PHP 5.2.0. Here is the old script (the field names in the DB are just numbers because I could not figure out which number represented which nutrient; I found this data very poorly documented; sorry about the ugliness of the code):

        <?php
        $file = "NUT_DATA.txt";
        $data = split("\n", file_get_contents($file)); // split into lines
        $link = mysql_connect("localhost", "username", "password");
        mysql_select_db("database", $link);
        for ($i = 0, $e = sizeof($data); $i < $e; $i++) {
            $sql = "INSERT INTO `USDA` (`1`,`2`,`3`,`4`,`5`,`6`,`7`,`8`,`9`,`10`,`11`,`12`,`13`,`14`,`15`,`16`,`17`) VALUES(";
            $row = split("\^", trim($data[$i])); // split each line by caret
            for ($j = 0, $k = sizeof($row); $j < $k; $j++) {
                // strip the tildes and replace empty strings with 0s
                $val = trim($row[$j], '~');
                $sql .= ((empty($val)) ? 0 : $val) . ',';
            }
            $sql = rtrim($sql, ',') . ");";
            mysql_query($sql) or die(mysql_error());
        }
        echo "Finished inserting data into database.\n";
        mysql_close($link);
        ?>
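
    A minimal sketch of one fix (untested; it reuses the table and credentials from the post above and assumes the table's columns line up with the file's 17 fields): stream the file with fgets() so only one line is ever in memory, and lift the script timeout with set_time_limit(0). For a one-off job, MySQL's LOAD DATA INFILE would likely be faster still.

        <?php
        set_time_limit(0); // lift PHP's max execution time for the long import
        $fp = fopen("NUT_DATA.txt", "r") or die("cannot open file");
        $link = mysql_connect("localhost", "username", "password");
        mysql_select_db("database", $link);
        while (($line = fgets($fp)) !== false) { // one line at a time: flat memory use
            $vals = array();
            foreach (explode("^", trim($line)) as $val) {
                $val = trim($val, '~');
                $vals[] = ($val === '') ? '0' : "'" . mysql_real_escape_string($val) . "'";
            }
            mysql_query("INSERT INTO `USDA` VALUES(" . implode(',', $vals) . ")", $link)
                or die(mysql_error());
        }
        fclose($fp);
        mysql_close($link);
        ?>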

    Read the article

  • Selecting a radio button when a dropdown value changes

    - by Tim F.
    My setup is something like this:

        radiobutton1 - selection1
        radiobutton2 - selection2
                       textinput1

    I want radiobutton1 to be selected whenever selection1 is changed, and I want radiobutton2 to be selected whenever either selection2 or textinput1 is changed. There has to be a simple JavaScript solution here... I just can't find it. Here's my actual code ($semopt is a string holding the HTML for the options of the first select input):

        <input type='radio' name='semester' value='existing'/>Existing semester:
        <select name='sem_id'>$semopt</select><br/>
        <input type='radio' name='semester' value='new'/>New Semester:
        <select name='sem_name'>
          <option value='Fall'>Fall</option>
          <option value='Spring'>Spring</option>
        </select>
        <input name='sem_year' value='$thisyear' size='5'/><br/>
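
    A small sketch of one way to wire this up, assuming the markup above (the helper name is made up): each control's onchange ticks the matching radio button by value.

        <script type="text/javascript">
        function pickSemester(value) {
            // check the radio in the 'semester' group whose value matches
            var radios = document.getElementsByName('semester');
            for (var i = 0; i < radios.length; i++) {
                radios[i].checked = (radios[i].value == value);
            }
        }
        </script>

        <select name='sem_id' onchange="pickSemester('existing')">$semopt</select>
        <select name='sem_name' onchange="pickSemester('new')">...</select>
        <input name='sem_year' onchange="pickSemester('new')" value='$thisyear' size='5'/>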

    Read the article

  • Reading email content

    - by Steve
    Hi, hope someone may be able to help. What I am looking to do is create a small WinForms app in C# that reads the content of emails from a POP account and uploads key values to a SQL database automatically. The email format is always the same for each email, e.g.:

        First name :
        Last name :
        Phone number :
        etc...

    Currently the emails are stored in a POP3 account, and I want to stop keying the information into SQL by hand. Can anyone advise how I would go about doing this, or recommend some guides? Thanks. Steve
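
    A rough sketch of the parsing-and-insert half (the table and column names are invented; fetching the raw message text is assumed to be handled by a third-party POP3 library, since the .NET Framework has no built-in POP3 client):

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        static class EmailImporter
        {
            public static void Import(string body, string connectionString)
            {
                // split "Field name : value" lines into a lookup table
                var fields = new Dictionary<string, string>();
                foreach (string line in body.Split('\n'))
                {
                    int colon = line.IndexOf(':');
                    if (colon > 0)
                        fields[line.Substring(0, colon).Trim()] = line.Substring(colon + 1).Trim();
                }

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "INSERT INTO Leads (FirstName, LastName, Phone) VALUES (@f, @l, @p)", conn))
                {
                    // parameterized values keep email content from breaking the SQL
                    cmd.Parameters.AddWithValue("@f", fields["First name"]);
                    cmd.Parameters.AddWithValue("@l", fields["Last name"]);
                    cmd.Parameters.AddWithValue("@p", fields["Phone number"]);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }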

    Read the article

  • Perl, efficient parsing of csv file

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl, and I am looking to make things more efficient. My approach has been to split() the file by lines first, and then split() each line again by commas to get the fields. But this is suboptimal, since at least two passes over the data are required (once to split into lines, then once more for each line). This is a very large file, so cutting processing in half would be a significant improvement to the entire application. My question is: what is the most time-efficient way to parse a large CSV file using only built-in tools? Note: each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also, we can assume fields contain only alphanumeric ASCII data (no special characters or other tricks). Finally, I don't want to get into parallel processing, although that might work effectively.
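
    A minimal sketch of the single-pass version (the filename is a placeholder): Perl's line-reading operator already does the "split into lines" step incrementally, so each byte is only scanned once by the comma split. Outside the built-ins-only constraint, Text::CSV_XS would be the usual choice.

        #!/usr/bin/perl
        use strict;
        use warnings;

        open(my $fh, '<', 'data.csv') or die "cannot open: $!";
        while (my $line = <$fh>) {              # reads one line per iteration
            chomp $line;
            my @fields = split /,/, $line, -1;  # -1 keeps trailing empty fields
            # ... process @fields here ...
        }
        close($fh);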

    Read the article

  • sending SMS with VBA

    - by I__
    Does anyone know if this is possible? I was able to successfully run HyperTerminal and use it to send texts through my phone, which is attached by USB. Does anyone know how to do the same in VBA?
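
    A rough, untested sketch of the idea: VBA can open the phone's virtual COM port directly and send the same AT commands HyperTerminal was sending. The port, baud rate, and phone number here are placeholders, and a real script would need to wait for the modem's prompts between commands.

        Sub SendSms()
            Open "COM3:9600,N,8,1" For Output As #1
            Print #1, "AT+CMGF=1"                   ' put the phone in text mode
            Print #1, "AT+CMGS=""+15551234567"""    ' recipient number
            Print #1, "Hello from VBA" & Chr$(26)   ' Ctrl-Z ends the message
            Close #1
        End Sub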

    Read the article

  • I'm trying to build a query to search against a fulltext index in mysql

    - by Rockinelle
    The table's schema is pretty simple. I have a child table that stores a customer's information, like address and phone number. The columns are user_id, fieldname and fieldvalue, so each row holds one item such as a phone number, address or email. This allows an unlimited number of each type of information for each customer. The people on the phones need to look these customers up quickly as they call into our call center. I have experimented with LIKE '%...%' queries, and I'm working with a FULLTEXT index now. My queries work, but I want to make them more useful: if someone searches for a telephone area code like 805, that will bring up many people, and if they then add the name Bill to narrow it down ('805 Bill'), it still shows EVERY customer that has 805 OR Bill. I want it to do AND searches across multiple rows within each customer. Currently I'm using the query below to grab the user_ids; later I do another query to fetch all the details for each user and build their complete record.

        SELECT DISTINCT `user_id`
        FROM `user_details`
        WHERE MATCH (`fieldvalue`) AGAINST ('805 Bill')

    Again, I want to do the above query against groups of rows that belong to a single user, but those users have to match all the search keywords. What should I do?
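
    One possible shape for the query, as a sketch (the application would add one self-join per keyword): each keyword must match in some row belonging to the same user, which gives AND semantics across rows. Note that MySQL's default ft_min_word_len of 4 would also need lowering for three-character tokens like 805.

        SELECT DISTINCT a.`user_id`
        FROM `user_details` a
        JOIN `user_details` b ON b.`user_id` = a.`user_id`
        WHERE MATCH (a.`fieldvalue`) AGAINST ('805')
          AND MATCH (b.`fieldvalue`) AGAINST ('Bill');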

    Read the article

  • How to index a string like "aaa.bbb.ddd-fff" in Lucene?

    - by user46703
    Hi, I have to index a lot of documents that contain reference numbers like "aaa.bbb.ddd-fff". The structure can change, but it's always some arbitrary numbers or characters combined with "/", "-", "_" or some other delimiter. The users want to be able to search for any of the substrings, like "aaa" or "ddd", and also for combinations like "aaa.bbb" or "ddd-fff". The best I have been able to come up with is to create my own token filter, modeled after the synonym filter in "Lucene in Action", which emits multiple terms for each input. In my case I return "aaa.bbb", "bbb.ddd", "bbb.ddd-fff" and all other combinations of the substrings. This works pretty well, but when I index large documents (100 MB) that contain lots of such strings, I tend to get out-of-memory exceptions because my filter returns multiple terms for each input string. Is there a better way to index these strings?
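
    A sketch of one alternative that keeps the term count linear instead of quadratic: index only the atomic parts ("aaa", "bbb", ...) as consecutive tokens, and turn a query like "aaa.bbb" into a phrase of its parts at search time, which matches exactly the adjacent-substring combinations. The field name "ref" is invented, and the PhraseQuery API shown is the classic Lucene 2.x/3.x form.

        // query side: split the user's input on the same delimiters as indexing
        String[] parts = userQuery.split("[./_-]+");
        PhraseQuery phrase = new PhraseQuery(); // terms must appear in adjacent positions
        for (String part : parts) {
            phrase.add(new Term("ref", part.toLowerCase()));
        }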

    Read the article

  • How do I keep a scanner from throwing exceptions when the wrong type is entered? (java)

    - by David
    Here's some sample code:

        import java.util.Scanner;

        class In {
            public static void main(String[] arg) {
                Scanner in = new Scanner(System.in);
                System.out.println("how many are invading?");
                int a = in.nextInt();
                System.out.println(a);
            }
        }

    If I run the program and give it an int like 4, then everything goes fine. If, on the other hand, I answer too many, it doesn't laugh at my funny joke. Instead I get this (as expected):

        Exception in thread "main" java.util.InputMismatchException
            at java.util.Scanner.throwFor(Scanner.java:819)
            at java.util.Scanner.next(Scanner.java:1431)
            at java.util.Scanner.nextInt(Scanner.java:2040)
            at java.util.Scanner.nextInt(Scanner.java:2000)
            at In.main(In.java:9)

    Is there a way to make it either ignore entries that aren't ints or re-prompt with "how many are invading?"? I'd like to know how to do both of these.
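
    A small sketch that does both at once: Scanner.hasNextInt() peeks at the next token without throwing, and next() discards a token that isn't an int before re-prompting.

        import java.util.Scanner;

        class InSafe {
            public static void main(String[] arg) {
                Scanner in = new Scanner(System.in);
                System.out.println("how many are invading?");
                while (!in.hasNextInt()) {
                    in.next(); // throw away the offending token, e.g. "too many"
                    System.out.println("how many are invading?"); // re-prompt
                }
                int a = in.nextInt(); // guaranteed not to throw now
                System.out.println(a);
            }
        }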

    Read the article

  • FORMSOF Thesaurus in SQL Server

    - by Coolcoder
    Has anyone done any performance measurements on this, in terms of speed, where there is a high number of substitutes for any given word? For instance, I want to use this to store common misspellings, expecting to have 4-10 variations of a word:

        <expansion>
          <sub>administration</sub>
          <sub>administraton</sub>
          <sub>aministraton</sub>
        </expansion>

    When you run a fulltext search, how does performance degrade with that number of variations? For instance, I assume it has to perform a separate fulltext search for each one, OR'd together? Also, does having say 20-30K entries in the thesaurus XML file impact performance?
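
    For reference, a query of the shape under discussion (table and column names invented); each <sub> in the expansion effectively adds another OR'd term to the search:

        SELECT DocID
        FROM Documents
        WHERE CONTAINS(Body, 'FORMSOF(THESAURUS, administration)');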

    Read the article

  • Problems when going from SQL 2005 to SQL 2008

    - by Nezdet
    Hi! I recently moved from SQL Server 2005 to 2008, and doing so gave me some problems with fulltext search. My site is based on fulltext search. It hits more deadlocks, the search is slower, and sometimes it returns empty result lists; I don't know why. A lot of people have been writing about having this problem with 2008, but I haven't found any explanation of why 2005 worked better for my application. Please help me out!

    Read the article

  • How can I make keyword order more relevant in my search?

    - by Atomiton
    In my database, I have a keywords field that stores a comma-delimited list of keywords. For example, a Shrek doll might have the following keywords:

        ogre, green, plush, hero, boys' toys

    A "Beanie Baby" doll (that happens to be an ogre) might have:

        beanie baby, kids toys, beanbag toys, soft, infant, ogre

    (That's a completely contrived example.) What I'd like is that if the consumer searches for "ogre", the "Shrek" doll comes up higher in the search results. My content administrator feels that if the keyword is earlier in the list, it should get a higher ranking. (This makes sense to me, and it makes it easy for me to let them control the search-result relevance.) Here's a simplified query:

        SELECT p.ProductID AS ContentID
             , p.ProductName AS Title
             , p.ProductCode AS Subtitle
             , 100 AS Rank
             , p.ProductKeywords AS Keywords
        FROM Products AS p
        WHERE FREETEXT( p.ProductKeywords, @SearchPredicate )

    I'm thinking of something along the lines of replacing the Rank with:

        , 200 - INDEXOF(@SearchTerm) AS Rank

    This "should" rank the keyword results by their relevance. I know INDEXOF isn't a SQL command, but it's something like that I would like to accomplish. Am I approaching this the right way? Is it possible to do something like this? Does this make sense?
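
    A sketch of that idea in T-SQL, where CHARINDEX plays the role of INDEXOF: an earlier position in the keyword list yields a higher computed rank. (Note that CHARINDEX returns 0 when the term is absent, so production code would want a CASE expression for that case.)

        SELECT p.ProductID AS ContentID
             , p.ProductName AS Title
             , p.ProductCode AS Subtitle
             , 200 - CHARINDEX(@SearchTerm, p.ProductKeywords) AS [Rank]
             , p.ProductKeywords AS Keywords
        FROM Products AS p
        WHERE FREETEXT( p.ProductKeywords, @SearchPredicate )
        ORDER BY [Rank] DESC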

    Read the article

  • Javascript: Make Rows Draggable Through Input Field Handles

    - by Corey O.
    I have created a table with draggable rows. Unfortunately, most of each row is covered by a large textbox input element, so to drag a row you have to grab it at the very edge, just outside the textbox. Is there a way to let the rows be grabbed through the textboxes without destroying the textbox functionality? (I.e., relay the mouse drag events, but not the click event?)
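
    One possible approach, as a plain-JavaScript sketch: listen for mousedown on the textbox, but only hand off to the row-drag code once the pointer has moved a few pixels, so an ordinary click still focuses the input. startRowDrag() is a stand-in for whatever drag routine the table already uses.

        function makeInputDraggable(input, row) {
            input.onmousedown = function (e) {
                var startX = e.clientX, startY = e.clientY;
                document.onmousemove = function (e2) {
                    // past a small threshold, treat it as a drag, not a click
                    if (Math.abs(e2.clientX - startX) > 4 ||
                        Math.abs(e2.clientY - startY) > 4) {
                        document.onmousemove = null;
                        startRowDrag(row, e2); // hypothetical hand-off to the drag code
                    }
                };
                document.onmouseup = function () {
                    document.onmousemove = null; // plain click: leave focus alone
                    document.onmouseup = null;
                };
            };
        }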

    Read the article

  • Search filenames in MySQL database table restricted by filetype?

    - by ju
    Hello. I have a MySQL database that I replicate from another server. The database contains a table with the columns ID, FileName and FileSize, and the table has more than 4,000,000 records. I want to make searches in the FileName (varchar) column fast, and I found that I can use the Sphinx search engine for this. The problem is that I also want to restrict searches by filetype. Do I have to extract the file extension for every row, and if so, how (triggers?)? Maybe I have to create another table (because this one is replicated) and join them in a 1:1 relation? Can you give me some advice please :)
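
    A sketch of the trigger-plus-side-table idea (the names are illustrative, and the replicated main table stays untouched): MySQL's SUBSTRING_INDEX pulls out everything after the last dot.

        CREATE TABLE FileExt (
            ID  INT PRIMARY KEY,
            Ext VARCHAR(16),
            INDEX (Ext)
        );

        CREATE TRIGGER trg_file_ext AFTER INSERT ON Files
        FOR EACH ROW
            INSERT INTO FileExt (ID, Ext)
            VALUES (NEW.ID, LOWER(SUBSTRING_INDEX(NEW.FileName, '.', -1)));

    With Sphinx itself, the extension could instead be indexed as a filterable attribute.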

    Read the article

  • How well does Solr scale over large number of facet values?

    - by Continuation
    I'm using Solr, and I want to facet over a field "group". Since "group" values are created by users, there can potentially be a huge number of them. Would Solr be able to handle a use case like this, or is Solr not really appropriate for facet fields with a large number of values? I understand that I can set facet.limit to restrict the number of values returned for a facet field. Would this help in my case? Say there are 100,000 matching values for "group" in a search and I set facet.limit to 50: would that speed up the query, or would the query still be slow because Solr still needs to process and sort through all the facet values to return the top 50? Any tips on how to tune Solr for a large number of facet values? Thanks.
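
    For concreteness, a request of the kind being discussed (host and parameters are placeholders); on Solr 1.4-era installs, facet.method=fc (the field-cache method) is the usual starting point for tuning high-cardinality facet fields:

        http://localhost:8983/solr/select?q=*:*&facet=true&facet.field=group&facet.limit=50&facet.method=fc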

    Read the article

  • Recommended way to perform Lucene search without limit

    - by Thomas
    The Lucene documentation tells me that "Hits" will be removed from the API in Lucene 3.0:

        Deprecated. Hits will be removed in Lucene 3.0. Use search(Query, Filter, int) instead.

    The proposed overload limits the number of documents returned to the value of the int. So my question is: what is the recommended way to perform a search in Lucene with no limit on the number of documents to be returned?
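
    A minimal sketch of the common workaround (Lucene 2.9/3.x-era API): an index can never return more documents than it contains, so passing maxDoc() as the limit behaves like "no limit". For very large result sets, a custom Collector avoids building a scored priority queue of that size.

        IndexSearcher searcher = new IndexSearcher(reader);
        TopDocs results = searcher.search(query, null, reader.maxDoc()); // (Query, Filter, int)
        ScoreDoc[] hits = results.scoreDocs;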

    Read the article

  • Correct way to write /* and */

    - by billpg
    Hi everyone. I'd like to know, please, the correct way to write the symbols that the C family of languages uses to begin and end comments. Before you all respond "a slash followed by an asterisk", I mean the correct way to write them on paper. (I.e., how many points should the asterisk have? What angle should the slash be at? Etc.) Everything I need so I can sit down and draw correct comment start and end symbols. Please note, I'm looking for the correct standard way. If there is no industry standard, please respond with "there is no standard" and I will accept that answer.

    Read the article

  • What's the fastest way to strip and replace a document of high unicode characters using Python?

    - by Rhubarb
    I am looking to replace all high-Unicode characters in a large document, such as accented Es and left and right quotes, with "normal" counterparts in the low range, such as a regular E and straight quotes. I need to perform this on a very large document rather often. I see an example of this in what I think might be Perl here: http://www.designmeme.com/mtplugins/lowdown.txt Is there a fast way of doing this in Python without using s.replace(...).replace(...).replace(...)...? I've tried this with just a few characters to replace, and the document stripping became really slow.
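
    A sketch of two standard single-pass tricks (Python 2-era, matching the question): NFKD-decompose and drop the combining marks to de-accent letters, and use unicode.translate() with a mapping table for one-off symbols like curly quotes.

        # -*- coding: utf-8 -*-
        import unicodedata

        PUNCT_MAP = {
            0x2018: u"'", 0x2019: u"'",   # left/right single quotes
            0x201c: u'"', 0x201d: u'"',   # left/right double quotes
            0x2013: u"-", 0x2014: u"-",   # en and em dashes
        }

        def lowdown(text):
            text = text.translate(PUNCT_MAP)  # dict form works on unicode strings
            # NFKD splits accented letters into base letter + combining mark
            decomposed = unicodedata.normalize('NFKD', text)
            return u''.join(c for c in decomposed if not unicodedata.combining(c))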

    Read the article

  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
    Hi all, I have a question regarding the implementation of smart-search features, for example something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database, and, depending on the field for which the query will be created, you present different options to the end user. For the moment, let's assume a Subject, Verb, Object approach. For instance, say you have the following:

        SUBJECTs: message, to_address, from_address, subject, date_received
        VERBs: contains, does_not_contain, is_equal_to, greater_than, less_than
        OBJECTs: ???????

    Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XMLesque file of some sort) to store, and later retrieve and present, my criteria for smart searches/mailboxes. As an example, using SVO I could easily store and then reconstruct a query for "date between two dates": simply combine "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand, not necessarily in the query creation (which is rather simplistic), but in the option presentation and storage mechanism. Perhaps I need to think on a more granular level. Perhaps I need to simply allow the user to select AND or OR for each entry independently, instead of making it an all-or-nothing smart search (i.e., instead of MATCH ALL or MATCH ANY, simply let them choose per criterion; I just don't want it to turn into a Hydra). Any input would be most appreciated. My apologies if the question is a bit incoherent. It is late, and my brain is toast. Best.
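
    One possible storage shape for the per-criterion AND/OR idea, as a sketch (all names are illustrative): each criterion row carries the conjunction joining it to the previous row, so mixed AND/OR rules can be stored and replayed in order.

        CREATE TABLE smart_search (
            id    INTEGER PRIMARY KEY,
            name  VARCHAR(100)
        );

        CREATE TABLE smart_search_criterion (
            id           INTEGER PRIMARY KEY,
            search_id    INTEGER REFERENCES smart_search(id),
            position     INTEGER,      -- evaluation order within the rule
            subject      VARCHAR(50),  -- e.g. 'date_received'
            verb         VARCHAR(50),  -- e.g. 'greater_than'
            object       VARCHAR(255), -- e.g. '2010-01-01'
            conjunction  VARCHAR(3)    -- 'AND' or 'OR', joining to the previous row
        );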

    Read the article

  • zlib gzgets extremely slow?

    - by monkeyking
    I'm doing stuff related to parsing huge globs of text files, and was testing which input method to use. There is not much of a difference between C++ std::ifstream and C's FILE. According to the documentation of zlib, it supports uncompressed files and will read such a file without decompression. I'm seeing a difference from 12 seconds using non-zlib to more than 4 minutes using zlib.h. I've tested this over multiple runs, so it's not a disk cache issue. Am I using zlib in some wrong way? Thanks.

        #include <zlib.h>
        #include <cstdio>
        #include <cstdlib>
        #include <fstream>

        #define LENS 1000000

        size_t fg(const char *fname) {
          fprintf(stderr, "\t-> using fgets\n");
          FILE *fp = fopen(fname, "r");
          size_t nLines = 0;
          char *buffer = new char[LENS];
          while (NULL != fgets(buffer, LENS, fp))
            nLines++;
          fprintf(stderr, "%lu\n", nLines);
          return nLines;
        }

        size_t is(const char *fname) {
          fprintf(stderr, "\t-> using ifstream\n");
          std::ifstream is(fname, std::ios::in);
          size_t nLines = 0;
          char *buffer = new char[LENS];
          while (is.getline(buffer, LENS))
            nLines++;
          fprintf(stderr, "%lu\n", nLines);
          return nLines;
        }

        size_t iz(const char *fname) {
          fprintf(stderr, "\t-> using zlib\n");
          gzFile fp = gzopen(fname, "r");
          size_t nLines = 0;
          char *buffer = new char[LENS];
          while (0 != gzgets(fp, buffer, LENS))
            nLines++;
          fprintf(stderr, "%lu\n", nLines);
          return nLines;
        }

        int main(int argc, char **argv) {
          if (atoi(argv[2]) == 0) fg(argv[1]);
          if (atoi(argv[2]) == 1) is(argv[1]);
          if (atoi(argv[2]) == 2) iz(argv[1]);
          return 0;
        }
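
    A possible explanation and workaround, as a sketch: older zlib builds implement gzgets() with character-at-a-time reads, which would account for the slowdown. Reading large blocks with gzread() and counting newlines manually keeps the path buffered; newer zlib (1.2.4+) also offers gzbuffer() to enlarge the internal buffer.

        size_t iz_fast(const char *fname) {
          fprintf(stderr, "\t-> using zlib with gzread\n");
          gzFile fp = gzopen(fname, "r");
          size_t nLines = 0;
          char *buffer = new char[LENS];
          int nRead;
          // pull big blocks and scan them ourselves instead of calling gzgets
          while ((nRead = gzread(fp, buffer, LENS)) > 0)
            for (int i = 0; i < nRead; i++)
              if (buffer[i] == '\n')
                nLines++;
          delete[] buffer;
          gzclose(fp);
          fprintf(stderr, "%lu\n", nLines);
          return nLines;
        }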

    Read the article
