Search Results

Search found 68407 results on 2737 pages for 'text files'.

  • Highlight Read-Along Text (in a storybook type app for iPhone)

    - by outtoplayinc
    I love this feature (well, my son loves it), and I would like to implement it in a kids' book app I am doing for iPhone, but I'm clueless about where to begin. I'm using Cocos2d for all the animated sprite/transition work, but I'm not sure how to approach highlighting text as it is narrated. Example: "Jack and Jill, drank their fill, and were too drunk to go for water." As the text is narrated (an .mp3 plays on each page), the text would be highlighted word by word. I considered investigating Core Animation, but I'm more familiar with Cocos2d at this point (tenuously at best). If someone has a clue, I'd really appreciate it. Brendang
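
    A rough sketch of one approach, assuming cocos2d-iphone: render each word as its own label, keep a hand-built array of word start times matching the narration .mp3, and tint the current word on a scheduled tick. The wordLabels/wordTimes/elapsed/currentWord members are hypothetical; you would populate them per page.

        // Sketch only: wordLabels is an NSArray of CCLabelTTF (one per word),
        // wordTimes an NSArray of NSNumber start times in seconds; both are
        // hypothetical instance variables, as are elapsed and currentWord.
        - (void)onEnter {
            [super onEnter];
            elapsed = 0.0f;
            currentWord = 0;
            [self schedule:@selector(tick:) interval:0.05f];
        }

        - (void)tick:(ccTime)dt {
            elapsed += dt;
            while (currentWord < [wordTimes count] &&
                   elapsed >= [[wordTimes objectAtIndex:currentWord] floatValue]) {
                CCLabelTTF *label = [wordLabels objectAtIndex:currentWord];
                label.color = ccc3(255, 255, 0);   // tint the narrated word yellow
                currentWord++;
            }
        }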

  • Import text file crunching library for Java/Groovy?

    - by devdude
    In a lot of real-life applications we face the requirement to import some kind of (text) file. Usually we implement some (hardcoded?) logic to validate the file (e.g. proper header, proper number of delimiters, proper date/time values, etc.). Eventually we also need to check for the existence of related data in a table (e.g. the value of field 1 in the text file must have an entry in some basic data table). While XML solves this (to some extent) with XSD and DTD, we end up hacking this again and again for proprietary text file formats. Is there any library or framework that allows the creation of templates similar to the XSD approach? This would make it far more flexible to react to file format changes or to implement new formats. Thanks for any hints. Sven
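
    The "template" idea is easy to sketch in plain Java even without a library; all class and field names below are hypothetical, just to illustrate a declarative per-field spec that can be swapped out when the format changes.

        import java.util.List;
        import java.util.regex.Pattern;

        class FieldSpec {
            final String name;
            final Pattern pattern;
            FieldSpec(String name, String regex) {
                this.name = name;
                this.pattern = Pattern.compile(regex);
            }
        }

        class LineTemplate {
            private final String delimiter;
            private final List<FieldSpec> fields;
            LineTemplate(String delimiter, List<FieldSpec> fields) {
                this.delimiter = delimiter;
                this.fields = fields;
            }
            /** Returns null if the line is valid, otherwise an error description. */
            String validate(String line) {
                String[] parts = line.split(delimiter, -1);
                if (parts.length != fields.size())
                    return "expected " + fields.size() + " fields, got " + parts.length;
                for (int i = 0; i < parts.length; i++)
                    if (!fields.get(i).pattern.matcher(parts[i]).matches())
                        return "field '" + fields.get(i).name + "' rejected: " + parts[i];
                return null;   // cross-table lookups could be added as extra specs
            }
        }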

  • ASP.NET TextBox versus input type="text" behavior

    - by harrije
    I notice that with ASP.NET, if the server-side TextBox control is used without AutoPostBack, it will not submit (or post back) the form when the typed text ends with Enter, which is different from the behavior of plain old HTML pages. Fine, I can set AutoPostBack to get the behavior I want after the Enter key. However, AutoPostBack will also cause a submit (or postback) when the typed text does not end with Enter but focus changes (i.e. via Tab or a mouse click), which again differs from plain old HTML pages. How can I get an ASP.NET page to behave the same as a plain old HTML page with respect to text input, regardless of whether the Enter key is pressed or focus changes?
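
    One sketch, assuming ASP.NET 2.0 or later: leave AutoPostBack off and wrap the inputs in a Panel whose DefaultButton fires on Enter, so Enter submits but a focus change does not. Control IDs and the btnSave_Click handler are placeholders.

        <asp:Panel ID="pnlForm" runat="server" DefaultButton="btnSave">
            <asp:TextBox ID="txtName" runat="server" />
            <asp:Button ID="btnSave" runat="server" Text="Save" OnClick="btnSave_Click" />
        </asp:Panel>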

  • Full-Text Search in SQL Server Express Won't Recognize Latest IFilters

    - by Brandon King
    I'm having difficulty getting full-text search working in SQL Server 2008 Express with Advanced Services. I have a table loaded with .DOCX files as varbinary(MAX) data that I want to use for a full-text catalog, but it doesn't seem to recognize the .DOCX format. Here are the steps that I've taken:

    1. Installed the latest Filter Pack 2.0
    2. Exec sp_fulltext_service 'load_os_resources', 1
    3. Exec sys.sp_help_fulltext_system_components 'all' (NOTE: .DOCX is not shown as a filter)
    4. Building the full-text catalog fails to identify any key words

    I initially thought there might be a conflict between x86 SQL Express and the x64 Filter Pack on my Windows 7 machine, but I just tried it with everything x86 in a Windows XP virtual machine and got the same result.
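
    For what it's worth, a commonly suggested sequence after installing a new filter pack looks like the sketch below; treat it as a hint rather than a verified fix, and note it requires a SQL Server service restart afterwards.

        EXEC sp_fulltext_service 'load_os_resources', 1;   -- pick up OS-registered IFilters
        EXEC sp_fulltext_service 'verify_signature', 0;    -- allow unsigned filter DLLs
        -- restart the SQL Server service, then re-check the registered filters:
        EXEC sys.sp_help_fulltext_system_components 'filter';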

  • n-grams from text in PostgreSQL

    - by harshsinghal
    I am looking to create n-grams from a text column in PostgreSQL. I currently split the data (sentences) in a text column into an array on whitespace:

        select regexp_split_to_array(sentenceData, E'\\s+') from tableName;

    Once I have this array, how do I go about creating a loop to find n-grams and write each to a row in another table? Using unnest I can obtain all the elements of all the arrays on separate rows, and maybe I can then think of a way to get n-grams from a single column, but I'd lose the sentence boundaries, which I wish to preserve. Sample SQL code for PostgreSQL to emulate the above scenario:

        create table tableName(sentenceData text);
        INSERT INTO tableName(sentenceData) VALUES('This is a long sentence');
        INSERT INTO tableName(sentenceData) VALUES('I am currently doing grammar, hitting this monster book btw!');
        INSERT INTO tableName(sentenceData) VALUES('Just tonnes of grammar, problem is I bought it in TAIWAN, and so there aint any englihs, just chinese and japanese');

        select regexp_split_to_array(sentenceData, E'\\s+') from tableName;
        select unnest(regexp_split_to_array(sentenceData, E'\\s+')) from tableName;
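
    A sketch of bigram extraction that keeps sentence boundaries, assuming PostgreSQL 9.3 or later for LATERAL (on older versions the subscript generation has to move into a subquery):

        SELECT w[i] || ' ' || w[i + 1] AS bigram
        FROM (SELECT regexp_split_to_array(sentenceData, E'\\s+') AS w
              FROM tableName) AS t
        CROSS JOIN LATERAL generate_series(1, array_length(w, 1) - 1) AS i;
        -- wrap in INSERT INTO other_table (bigram) SELECT ... to materialize the rows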

  • "Intercepting" user input into text box and removing it

    - by James P
    I have a text box that I would like to do some validation on. At the moment I have this code:

        function updateChanger() {
            // Validate input
            var likeMessage = validateInput($("#like").val());
            alert(likeMessage);
        }

        function validateInput(input) {
            input = input.replace(/[^a-zA-Z0-9:\(\/\)\s\.,!~]/g, "");
            return input;
        }

    This successfully trims out unwanted characters in the likeMessage variable, but the character still gets entered into the text box. I would like to stop that from happening. I know it will have something to do with $("#like").val(), but the only thing I can think of is just chopping the end character off the text box value; would this suffice? Thanks for any help!
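
    A sketch that writes the sanitized value straight back on each keystroke, so disallowed characters never persist in the box (assumes jQuery 1.7+ for .on; older versions can use .bind or .keyup instead):

        $("#like").on("keyup input", function () {
            var clean = validateInput($(this).val());
            if (clean !== $(this).val()) {
                $(this).val(clean);   // overwrite the box with the filtered text
            }
        });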

  • Extracting text from PDF with Poppler (C++)

    - by nico
    I'm trying to find my way through Poppler and its (lack of) documentation. What I want to do is a very simple thing: open a PDF file and read the text in it. I'm then going to process the text, but that doesn't really matter here. So... I saw the poppler_page_get_text function, and it kind of works, but I have to specify a selection rectangle, which is not very handy. Isn't there just a simple function that would output the PDF text in order (maybe line by line)? Thank you, Nicola
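
    A sketch using the poppler-cpp frontend, which exposes whole-page text without a selection rectangle (assuming a reasonably recent Poppler built with the cpp wrapper):

        #include <iostream>
        #include <string>
        #include <poppler-document.h>
        #include <poppler-page.h>

        int main() {
            poppler::document *doc = poppler::document::load_from_file("input.pdf");
            if (!doc) return 1;
            for (int i = 0; i < doc->pages(); ++i) {
                poppler::page *p = doc->create_page(i);
                poppler::byte_array raw = p->text().to_utf8();   // full page text
                std::cout << std::string(raw.begin(), raw.end()) << "\n";
                delete p;
            }
            delete doc;
            return 0;
        }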

  • SQL Server - Percent-based Full-Text Search

    - by Sukhminder Singh
    Hi, I want to search a particular column of a table in such a way that the returned result set satisfies the following two conditions:

    1. The result set should contain records where 90% of the characters match the given search text.
    2. The result set should contain records where 70% of the consecutive characters match the given search text.

    This implies that when the 10-character word Sukhminder is searched, it should return records like Sukhmindes, ukhminder and Sukhmindzr, because they fulfil both of the above conditions. But it should not return records like Sukhmixder, because it does not fulfil the second condition. Likewise, it should not return the record Sukhminzzz, because it does not fulfil the first condition. I am trying to use the Full-Text Search feature of SQL Server, but I could not formulate the required query yet. Kindly reply ASAP.
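
    As far as I know, SQL Server full-text search has no percent-similarity predicate; this kind of matching is usually done with an edit-distance function instead. A sketch of the first condition, where MyTable/MyColumn are placeholders and dbo.Levenshtein is a hypothetical UDF you would supply (in T-SQL or SQL CLR); the consecutive-character condition could use a longest-common-substring UDF the same way.

        DECLARE @search nvarchar(100);
        SET @search = N'Sukhminder';

        SELECT *
        FROM MyTable          -- placeholder table; dbo.Levenshtein is hypothetical
        WHERE 1.0 - dbo.Levenshtein(MyColumn, @search) * 1.0 / LEN(@search) >= 0.9;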

  • HTML tag for identifying text

    - by ravi
    I am not very familiar with HTML. If we look at the source code of a page, we can see which HTML tags wrap which pieces of text. Is there a group or class of HTML tags whose purpose is to mark the main text of a page? I mean, a tag like <input type="radio" name="option"> tells us there will be a radio button; similarly, is there a group of HTML tags that consists only of a text part, so that by looking at the tag alone, and not at the content, we can say that between the start tag and the end tag there is text?
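
    There is no single "text" tag, but a sketch of the usual text-level markup; with these elements the tag name itself tells you the content is flowing text:

        <p>A paragraph: by definition its content is flowing text.</p>
        <blockquote><p>A quoted passage of text.</p></blockquote>
        <h1>A heading, also pure text.</h1>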

  • PHP: Loop through a text file and isolate lines with a specific "starting point"

    - by Mestika
    Hi everyone, I'm trying to reduce some editing time within some text files where there are approximately 10,000 lines of text, but I only need around 200 of them. The text file follows an almost fixed pattern, though it deviates from time to time; my "focus" for selecting the right lines to keep is that each such line always starts with z3455 and then has something variable afterwards, e.g.:

        z3455 http://url.com/data1/data1.1/data1.3/ (342kb)

    I have an algorithm to capture the URL and its content, but now I need some way to loop through the text file, deleting all lines except those that start with z3455, and then "push" them together so they are listed underneath each other. I've tried different approaches for this in PHP but can't seem to find a correct function. I can "isolate" a specific line number, but when the file deviates I can't use this approach fully. I hope that someone can help me, either by providing the code or by pointing me in the right direction to solve this problem. Thanks in advance. Sincerely - Mestika
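
    A minimal PHP sketch (input/output filenames are placeholders): read the file line by line, keep only lines starting with z3455, and write them back out consecutively.

        <?php
        $kept = array();
        foreach (file('input.txt') as $line) {       // 10k lines fits in memory fine
            if (strpos($line, 'z3455') === 0) {      // line starts with the marker
                $kept[] = rtrim($line, "\r\n");
            }
        }
        file_put_contents('output.txt', implode("\n", $kept) . "\n");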

  • reStructuredText - Escape a headline?

    - by Rory
    I have a reStructuredText document. rST uses =====, etc. as heading markup. However I need to include some lines that contain this text, e.g.:

        some of my text

        =====
        stuff
        =====

        some more of my text

    And I don't want the ===== to be interpreted as a heading, i.e. I don't want "stuff" to become a heading. I would rather that those equal signs are just displayed as-is. Is this possible in rST?
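
    One sketch: wrap the lines in a literal block, which rST renders verbatim (in monospace) instead of parsing them as section titles; a line block (each line prefixed with |) would keep the normal font instead.

        some of my text

        ::

            =====
            stuff
            =====

        some more of my text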

  • Copy files in Linux, skipping files that already exist in the destination

    - by user10826
    Hi, I need to copy a /home/user folder from one hard disk to another. It has 100,000 files and is around 10 GB in size. I use

        cp -r /origin /destination

    Sometimes I get errors due to broken links, permissions and so on. So I fix the error and need to start the copy again. I wonder how I could tell cp, once it tries to copy again, not to copy files that already exist in the destination folder. Thanks
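
    Two common sketches, assuming GNU cp and rsync are available: cp's no-clobber flag skips files already present, and rsync can skip based on existence (note the trailing slashes, which affect how rsync maps directories).

        # skip files that already exist at the destination (GNU cp)
        cp -r -n /origin /destination

        # or: only copy what is missing, preserving attributes
        rsync -a --ignore-existing /origin/ /destination/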

  • CSS - vertically align text that can span multiple lines

    - by Sniffer
    Hi all, I've been given a design by a graphic designer which I'm trying to put into HTML and CSS. One of the issues I'm facing is on a user input form. In the design the labels for each input are a fixed width, say 100px. The container for each label/input pair is fixed at 2em. The design asks that the text for each label is vertically centered. So the structure is like this:

        <containerTag>
            <label />
            <input />
        </containerTag>

    No problems as long as the text is on one line (I would have just used a line-height of 2em to match the container), but some of the label text wraps to two or even three lines. Is there a semantic and clean way to get around this problem? I need something that works in IE6-9, Firefox 3.5+, Chrome and Safari, although I am using progressive enhancement, so a solution that only works in later browsers but doesn't break the older ones would be acceptable. Any help gratefully received! Thanks for your time, S
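
    A sketch using table-cell display, which centers multi-line labels in IE8+, Firefox, Chrome and Safari; IE6/7 would fall back to top alignment rather than breaking. The .containerTag class is a placeholder for whatever the container element actually is.

        .containerTag {
            display: table;
            height: 2em;
        }
        .containerTag label {
            display: table-cell;
            width: 100px;
            vertical-align: middle;   /* centers even when the text wraps */
        }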

  • Python: How to read huge text file into memory

    - by asmaier
    I'm using Python 2.6 on a Mac Mini with 1GB RAM. I want to read in a huge text file:

        $ ls -l links.csv; file links.csv; tail links.csv
        -rw-r--r--  1 user  user  469904280 30 Nov 22:42 links.csv
        links.csv: ASCII text, with CRLF line terminators
        4757187,59883
        4757187,99822
        4757187,66546
        4757187,638452
        4757187,4627959
        4757187,312826
        4757187,6143
        4757187,6141
        4757187,3081726
        4757187,58197

    So each line in the file consists of a tuple of two comma-separated integer values. I want to read in the whole file and sort it according to the second column. I know that I could do the sorting without reading the whole file into memory. But I thought for a file of 500MB I should still be able to do it in memory, since I have 1GB available. However, when I try to read the file in, Python seems to allocate a lot more memory than is needed by the file on disk. So even with 1GB of RAM I'm not able to read the 500MB file into memory. My Python code for reading the file and printing some information about the memory consumption is:

        #!/usr/bin/python
        # -*- coding: utf-8 -*-
        import sys

        infile=open("links.csv", "r")
        edges=[]
        count=0
        # count the total number of lines in the file
        for line in infile:
            count=count+1
        total=count
        print "Total number of lines: ",total

        infile.seek(0)
        count=0
        for line in infile:
            edge=tuple(map(int,line.strip().split(",")))
            edges.append(edge)
            count=count+1
            # for every million lines print memory consumption
            if count%1000000==0:
                print "Position: ", edge
                print "Read ",float(count)/float(total)*100,"%."
                mem=sys.getsizeof(edges)
                for edge in edges:
                    mem=mem+sys.getsizeof(edge)
                    for node in edge:
                        mem=mem+sys.getsizeof(node)
                print "Memory (Bytes): ", mem

    The output I got was:

        Total number of lines:  30609720
        Position:  (9745, 2994)
        Read  3.26693612356 %.
        Memory (Bytes):  64348736
        Position:  (38857, 103574)
        Read  6.53387224712 %.
        Memory (Bytes):  128816320
        Position:  (83609, 63498)
        Read  9.80080837067 %.
        Memory (Bytes):  192553000
        Position:  (139692, 1078610)
        Read  13.0677444942 %.
        Memory (Bytes):  257873392
        Position:  (205067, 153705)
        Read  16.3346806178 %.
        Memory (Bytes):  320107588
        Position:  (283371, 253064)
        Read  19.6016167413 %.
        Memory (Bytes):  385448716
        Position:  (354601, 377328)
        Read  22.8685528649 %.
        Memory (Bytes):  448629828
        Position:  (441109, 3024112)
        Read  26.1354889885 %.
        Memory (Bytes):  512208580

    Already after reading only 25% of the 500MB file, Python consumes 500MB. So it seems that storing the content of the file as a list of tuples of ints is not very memory-efficient. Is there a better way to do it, so that I can read my 500MB file into my 1GB of memory?
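
    A sketch of a more compact layout, assuming numpy is available: a preallocated two-column int32 array costs 8 bytes per row (roughly 245 MB for the 30.6M rows counted above), leaving headroom for the sort, whereas a list of tuples of Python ints carries dozens of bytes of object overhead per row.

        import numpy as np

        # two int32 columns: 8 bytes per row, ~245 MB for the 30.6M rows above
        rows = np.empty((30609720, 2), dtype=np.int32)   # line count known from above
        f = open("links.csv")
        for i, line in enumerate(f):
            a, b = line.split(",")
            rows[i, 0] = int(a)
            rows[i, 1] = int(b)
        f.close()

        rows = rows[rows[:, 1].argsort()]   # reorder rows by the second column
        print rows[:5]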

  • Reading a large text file into memory in C++

    - by NoneType
    Is there a way to read a large text file (~60MB) into memory at once (like a compiler flag to increase the program's memory limit)? Currently, ifstream's open function causes a segmentation fault while trying to read this file:

        ifstream fis;
        fis.open("my_large_file.txt"); // Segfaults here

    The file just consists of rows of the form

        number_1<tabspace>number_2

    i.e., two numbers separated by a tab.
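
    For reference, a sketch that slurps a whole file into a std::string via the stream buffer; 60 MB is far below any default process limit, so the segfault most likely comes from something else (e.g. overflowing a fixed-size buffer while parsing), not from opening the file.

        #include <fstream>
        #include <sstream>
        #include <string>
        #include <iostream>

        int main() {
            std::ifstream fis("my_large_file.txt");
            if (!fis) {
                std::cerr << "cannot open file\n";
                return 1;
            }
            std::ostringstream buf;
            buf << fis.rdbuf();                 // read the whole file in one go
            std::string contents = buf.str();   // ~60 MB string
            std::cout << contents.size() << " bytes read\n";
            return 0;
        }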

  • Autosaving files in Emacs or XEmacs (preferably on loss of focus)

    - by Spencer
    Ideally I want to replicate in Emacs some functionality from TextMate, whereby on loss of focus (i.e. when I click away from the buffer) my file is saved. If this isn't possible, I want to customize Emacs so that it saves the file after every character I type. By this I don't mean autosaving to the ~ backup files; I want to save the file I am currently working on. I am working on a Fedora VM. Note I am not looking for a backup or autosave. I want the file I am actually in to save, so that if I loaded the HTML file I am editing in a web browser it would reflect my new changes without me having to explicitly save.
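
    A sketch in Emacs Lisp, assuming a newer Emacs (24.4+) where focus-out-hook exists; it silently saves all modified file-visiting buffers whenever the frame loses focus:

        ;; save everything when Emacs loses focus (focus-out-hook needs Emacs 24.4+)
        (add-hook 'focus-out-hook
                  (lambda ()
                    (save-some-buffers t)))   ; t = don't ask, just save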

  • Awk filtering values between two files when regions intersect (any solutions welcome)

    - by user964689
    This is building upon an earlier question, "Awk conditional filter one file based on another (or other solutions)". I have an awk program that outputs a column from rows in a text file refGene.txt if values in that row match 2 out of 3 values in another text file. I need to include an additional criterion for finding a match between the two files: a row should also count as a match if the range of the two numerical values specified in each row of file 1 overlaps with the range of the two values in a row of refGene.txt. An example of two lines in file 1:

        chr1 10 20
        chr2 10 20

    and an example line in file 2 (refGene.txt) for the matching columns ($3, $5, $6):

        chr1 5 30

    Currently the awk program does not treat this as a match, because although the first column matches, neither the 2nd nor the 3rd column does. But I would like to treat it as a match, because the region 10-20 in file 1 is WITHIN the range 5-30 in refGene.txt. The second line in file 1 should NOT match, because the first column does not match, which is necessary. A way to include cases where any of the range in file 1 overlaps with any of the range in refGene.txt would be really helpful. It should also replace the conditional statements below, since it would cover all the cases they currently catch. Please let me know if my question is unclear. Any help is really appreciated, thanks in advance! (Solutions do not have to be in awk.) Rubal

        FILES=/files/*txt
        for f in $FILES ; do
            awk '
                BEGIN { FS = "\t"; }
                FILENAME == ARGV[1] { pair[ $1, $2, $3 ] = 1; next; }
                { if ( pair[ $3, $5, $6 ] == 1 ) { print $13; } }
            ' $(basename $f) /files/refGene.txt > /files/results/$(basename $f) ;
        done
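
    A sketch of the interval test in awk, keeping the same two-file invocation as above: store every file-1 range per chromosome, then print $13 whenever a refGene row's $5-$6 range intersects any stored range for the same chromosome (two ranges [a,b] and [c,d] intersect exactly when a <= d and b >= c).

        BEGIN { FS = "\t" }
        # first file: remember each range, keyed by chromosome
        FILENAME == ARGV[1] {
            n[$1]++
            lo[$1, n[$1]] = $2
            hi[$1, n[$1]] = $3
            next
        }
        # refGene.txt: match if any stored range for $3 overlaps [$5, $6]
        {
            for (i = 1; i <= n[$3]; i++)
                if (lo[$3, i] <= $6 && hi[$3, i] >= $5) { print $13; break }
        }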

  • Writing binary files and desired precision

    - by yCalleecharan
    Hi, I'm printing a variable, say z1, which is a 1-D array containing floating-point numbers, to a text file so that I can import it into Matlab or gnuplot for plotting. I've heard that binary files (.dat) are smaller than .txt files. The definition that I currently use for printing to a .txt file is:

        void create_out_file(const char *file_name, const long double *z1, size_t z_size){
            FILE *out;
            size_t i;

            if((out = _fsopen(file_name, "w+", _SH_DENYWR)) == NULL){
                fprintf(stderr, "***> Open error on output file %s", file_name);
                exit(-1);
            }

            for(i = 0; i < z_size; i++)
                fprintf(out, "%.16Le\n", z1[i]);
            fclose(out);
        }

    I have three questions:

    1. Are binary files really more compact than text files?
    2. If yes, I would like to know how to modify the above code so that I can print the values of the array z1 to a binary file. I've read that fprintf has to be replaced with fwrite. My output file, say dodo.dat, should contain the values of array z1 with one floating-point number per line.
    3. I have %.16Le in my code, but I think %.15Le is right, as I have 15 precision digits with long double. I have put a dot (.) in the width position, as I believe this allows expansion to an arbitrary field to hold the desired number. Am I right? As an example, with %.16Le I can get output like 1.0047914240730432e-002, which gives me 16 precision digits, and the field has the right width to display the number correctly. Is placing a dot (.) in the width position instead of a width value good practice?

    Thanks a lot...
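
    A sketch of the fwrite variant (using plain fopen rather than _fsopen for portability); note a binary file has no "lines": it is just z_size raw long double values back to back, and whatever reads it must use the same element size and byte order.

        #include <stdio.h>
        #include <stdlib.h>

        void create_binary_out_file(const char *file_name, const long double *z1, size_t z_size){
            FILE *out = fopen(file_name, "wb");   /* "b": binary mode */
            if(out == NULL){
                fprintf(stderr, "***> Open error on output file %s", file_name);
                exit(-1);
            }
            /* one call stores the whole array as raw bytes:
               sizeof(long double) bytes per value, no text formatting */
            fwrite(z1, sizeof(long double), z_size, out);
            fclose(out);
        }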

  • How can I quickly parse large (>10GB) files?

    - by Andrew
    Hi, I have to process text files 10-20GB in size of the format:

        field1 field2 field3 field4 field5

    I would like to parse the data from each line of field2 into one of several files; the file each line gets pushed into is determined by the value in field4. There are 25 different possible values in field4, and hence 25 different files the data can get parsed into. I have tried using Perl (slow) and awk (faster but still slow); does anyone have any suggestions or pointers toward alternative approaches? FYI, here is the awk code I was trying to use; note I had to revert to going through the large file 25 times because I wasn't able to keep 25 output files open at once in awk:

        chromosomes=(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25)
        for chr in ${chromosomes[@]}
        do
            awk < my_in_file_here -v pat="$chr" '{
                if ($4 == pat)
                    for (i = $2; i <= $2+52; i++)
                        print i
            }' >> my_out_file_"$chr".query
        done
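
    A sketch of a single-pass version, assuming gawk (which manages its own pool of open output files, so 25 simultaneous targets are no problem): build the output filename from $4 and the file only passes through the data once.

        {
            out = "my_out_file_" $4 ".query"
            for (i = $2; i <= $2 + 52; i++)
                print i >> out    # gawk keeps each file open after the first write
        }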

  • Batch deletion of smaller files from group of files via unix command line

    - by artlung
    I have a large number (more than 400) of directories full of photos. What I want to do is keep the larger sizes of these photos. Each directory has 31 to 66 files in it. Each directory has thumbnails and larger versions, plus a file called example.jpg. I dispatched the example.jpg files easily with:

        rm */example.jpg

    I initially thought it would be easy to delete the thumbnails too, but the problem is they are not consistently named. The typical pattern was photo1.jpg and photo1s.jpg. I did

        rm */photo*s.jpg

    but it turned out that some of the files named photoXs.jpg were actually the larger ones, not the smaller. Argh. So what I want to do is scan each directory by file size and delete (or move) the thumbnails. I initially thought I'd just ls -R every file, extract the size of each, and delete those under a threshold. The problem? In one directory the large version is 1.1 MB and the thumb is 200k; in another the large is 200k and the thumb 30k. Even worse, the files really are mostly named photo1.jpg, so simply putting them all in the same folder, sorting by size, and deleting in groups would not work without renaming first, and if possible I'd prefer to keep them in their folders. I was almost resolved to doing this all manually, but then thought I'd ask here. How would you do this task?
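
    A sketch in bash, assuming GNU stat and the photoN.jpg / photoNs.jpg pairing described above: compare each pair inside its own directory and delete whichever member is smaller, so the per-directory size differences stop mattering.

        #!/bin/bash
        for d in */; do
            for a in "$d"photo*s.jpg; do
                [ -e "$a" ] || continue
                b="${a%s.jpg}.jpg"                 # partner file without the trailing s
                [ -e "$b" ] || continue
                if [ "$(stat -c %s "$a")" -lt "$(stat -c %s "$b")" ]; then
                    rm -- "$a"                     # the "s" file is the thumbnail here
                else
                    rm -- "$b"                     # sometimes the "s" file is the big one
                fi
            done
        done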

  • Android TextView Justify Text

    - by Peter
    Hey, how do you get the text of a TextView to be justified (with text flush on both the left- and right-hand sides)? I found a possible solution here, but it does not work (even if you change vertical-center to center_vertical, etc.). Cheers, Pete
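
    TextView itself has no justification flag (at least in Android versions of that era), so a common workaround is to render the text in a WebView instead; a minimal sketch, where the activity name and the sample text are placeholders:

        import android.app.Activity;
        import android.os.Bundle;
        import android.webkit.WebView;

        public class JustifiedTextActivity extends Activity {
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                WebView view = new WebView(this);
                String html = "<body style=\"text-align:justify\">"
                            + "The long paragraph you want flush on both sides..."
                            + "</body>";
                view.loadData(html, "text/html", "utf-8");
                setContentView(view);
            }
        }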

  • Sitecore Rich Text Html Editor Profile - set global default

    - by misteraidan
    OK, I can't believe this can't be found anywhere, so I'm asking the question. Is there a way to set the default Html Editor Profile in Sitecore, so I don't have to override the Source field on each individual Rich Text field? E.g. I want to make this the default option for the Html editor: /sitecore/system/Settings/Html Editor Profiles/Rich Text Medium
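
    If I remember the Sitecore configuration correctly, there is an HtmlEditor.DefaultProfile setting in web.config that Rich Text fields without an explicit Source fall back to; a sketch of the change (the exact setting name is worth verifying against your Sitecore version):

        <!-- web.config: point the default Rich Text profile at Rich Text Medium -->
        <setting name="HtmlEditor.DefaultProfile"
                 value="/sitecore/system/Settings/Html Editor Profiles/Rich Text Medium" />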
