Search Results

Search found 46505 results on 1861 pages for 'full text'.


  • Convert Audio File to text using System.Speech

    - by Kushal Kalambi
    I am looking to convert a .wav file recorded on an Android phone at 16000 Hz to text using C#, specifically the System.Speech namespace. My code is below:

        recognizer.SetInputToWaveFile(Server.MapPath("~/spoken.wav"));
        recognizer.LoadGrammar(new DictationGrammar());
        RecognitionResult result = recognizer.Recognize();
        label1.Text = result.Text;

    This works perfectly with the sample "Hello world" .wav file. However, when I record something on the phone and try to convert it on the PC, the converted text is nowhere close to what I recorded. Is there some way to make sure the audio file is transcribed accurately?
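
    One diagnostic sketch that may help narrow the problem down (not a fix): inspect the recognizer's confidence and alternates for the phone recording. The recognizer, Server.MapPath and label1 names come from the question; everything else is illustrative.

        recognizer.SetInputToWaveFile(Server.MapPath("~/spoken.wav"));
        recognizer.LoadGrammar(new DictationGrammar());

        RecognitionResult result = recognizer.Recognize();
        if (result == null)
        {
            label1.Text = "(nothing recognized)";
        }
        else
        {
            // A consistently low Confidence points to an audio-quality or format
            // problem (noise, compression, sample rate) rather than a coding error.
            label1.Text = string.Format("{0} (confidence {1:P0})", result.Text, result.Confidence);
            foreach (RecognizedPhrase alternate in result.Alternates)
            {
                System.Diagnostics.Debug.WriteLine(
                    string.Format("alternate: {0} ({1:P0})", alternate.Text, alternate.Confidence));
            }
        }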

    Read the article

  • How to overwrite specific lines on text files

    - by iTayb
    I have two text files. I'd like to copy a specific part of the first text file and use it to replace a part of the second text file. This is how I read the files:

        List<string> PrevEp = File.ReadAllLines(string.Format(@"{0}naruto{1}.ass", url, PrevEpNum)).ToList();
        List<string> Ep = File.ReadAllLines(string.Format(@"{0}naruto{1}.ass", url, EpNum)).ToList();

    The part of PrevEp that I need: from the start until it reaches a line that includes Creditw,,0000,0000,0000. The part I would like to overwrite in Ep: from the start to a line which is exactly Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text. I'm not quite sure how to do it. Could you lend me a hand? Thank you very much, gentlemen.
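
    A minimal sketch of one way to do the splice, assuming the marker lines appear exactly as described. The PrevEp, Ep, url and EpNum names come from the question; whether the marker lines themselves should be kept is a guess, so adjust the boundaries as needed.

        // Requires System.Linq.
        int creditIndex = PrevEp.FindIndex(line => line.Contains("Creditw,,0000,0000,0000"));
        int formatIndex = Ep.FindIndex(line =>
            line == "Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text");

        if (creditIndex >= 0 && formatIndex >= 0)
        {
            // Keep PrevEp up to and including its credit marker, then append Ep
            // from its Format line onwards; adjust +1 / Skip if the markers
            // should be excluded instead.
            List<string> merged = PrevEp.Take(creditIndex + 1)
                                        .Concat(Ep.Skip(formatIndex))
                                        .ToList();
            File.WriteAllLines(string.Format(@"{0}naruto{1}.ass", url, EpNum), merged.ToArray());
        }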

    Read the article

  • jQuery getting text-shadow variable

    - by Mircea
    Hi, I want to get 4 variables when I click on a span that has the CSS3 text-shadow property. So for a CSS property of text-shadow: -4px 11px 8px rgb(30, 43, 2);, my code would be:

        $("#element").click(function () {
            var textShadow = $("#element").css("text-shadow");
        });

    Would it be possible to get it split like:

        var y = "-4px";
        var x = "11px";
        var blur = "8px";
        var color = "rgb(30, 43, 2)";

    I need to somehow split the first variable to get this data. Thanks

    Read the article

  • Reading a text file in c++

    - by Yavuz Karacabey
        string numbers;
        string fileName = "text.txt";
        ifstream inputFile;
        inputFile.open(fileName.c_str(), ios_base::in);
        inputFile >> numbers;
        inputFile.close();
        cout << numbers;

    And my text.txt file is: 1 2 3 4 5, basically a set of integers separated by tabs. The problem is the program only reads the first integer in the text.txt file and ignores the rest for some reason. If I remove the tabs between the integers it works fine, but with tabs between them it won't work. What causes this? As far as I know it should ignore any whitespace characters, or am I mistaken? If so, is there a better way to get each of these numbers from the text file?

    Read the article

  • Text wrap in a <canvas> element

    - by Gwood
    I am trying to add text on an image using the <canvas> element. First the image is drawn, and then the text is drawn on top of it. So far so good. But the problem I am facing is that if the text is too long, it gets cut off at the start and end by the canvas. I don't plan to resize the canvas, but I was wondering how to wrap the long text into multiple lines so that all of it gets displayed. Can anyone point me in the right direction?

    Read the article

  • Conditionally strip the last line from a text file

    - by fraXis
    Hello, I posted this yesterday on SO, and I received an answer that works great, but I need to change it around and I don't know how. Here is my original message: I need to strip the last line from a text file. I know how to open and save text files in C#, but how would I strip the last line of the text file? The text files will always be different sizes (some have 80 lines, some have 20). Can someone please show me how to do this? Here is the code that someone gave me to do this (which works fine):

        // Delete the last line from the file. This line could be 8174, 10000, or anything. This is from SO.
        string tempfile = @"C:\junk_temp.txt";
        using (StreamReader reader2 = new StreamReader(newfilename))
        {
            using (StreamWriter writer2 = new StreamWriter(tempfile))
            {
                string line = reader2.ReadLine();
                while (!reader2.EndOfStream)
                {
                    writer2.WriteLine(line);
                    line = reader2.ReadLine();
                } // by reading ahead, the last line is never written to the file
            }
        }
        File.Delete(newfilename);
        File.Move(tempfile, newfilename);
        File.Delete(tempfile);

    How would I change this to only delete the last line of the text file if it is a 4- or 5-digit string (such as 8001 or 99999)? If it is anything other than that, such as a %, then I don't want to delete the last line. Can someone please modify the above code to do this for me? Thanks so much.
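
    A sketch of how the removal could be made conditional, assuming the whole file fits comfortably in memory. The newfilename variable comes from the question; the 4-or-5-digit check via Regex.IsMatch is the only real addition.

        // Requires System.IO, System.Linq and System.Text.RegularExpressions.
        string[] lines = File.ReadAllLines(newfilename);
        if (lines.Length > 0 && Regex.IsMatch(lines[lines.Length - 1].Trim(), @"^\d{4,5}$"))
        {
            // The last line is a 4- or 5-digit number: drop it and rewrite the file.
            File.WriteAllLines(newfilename, lines.Take(lines.Length - 1).ToArray());
        }
        // Otherwise (for example, a last line containing %) the file is left untouched.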

    Read the article

  • MS SQL Server Text Datatype Maxlength = 65,535?

    - by craigmj
    The software I'm working with uses a text field to store XML. From my searches online, the text datatype is supposed to hold 2^31 - 1 characters. Currently SQL Server is truncating the XML at 65,535 characters every time. I know this is caused by SQL Server, because if I add a 65,536th character to the field directly in Management Studio, it states that it will not update because characters would be truncated. Is the max length really 65,535, or could this be because the database was designed in an earlier version of MS SQL Server (2000) and it's using the legacy text datatype instead of 2005's? If this is the case, will altering the datatype to text in SQL Server 2005 fix this issue?

    Read the article

  • Justify Text in an HTML/XHTML TextArea

    - by Matt
    I am currently trying to justify text in a textarea, but unfortunately the CSS text-align: justify; doesn't work on the text the way center, left and right do. I've tried this in both Firefox 3 and IE 7 with no luck. Is there any way around this?

    Read the article

  • How to convert Xml files to Text Files

    - by John
    Hi all, I have around 8000 XML files that need to be converted into text files. Each text file must contain the title, description and keywords of the XML file, without the tags and with the other elements and attributes removed as well. In other words, I need to create 8000 text files containing the title, description and keywords of each XML file. I need code to do this systematically. Any help would be greatly appreciated. Thanks in advance.
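
    A rough sketch of one way to do this in C# with LINQ to XML, assuming each file contains <title>, <description> and <keywords> elements. The element names and the input/output folders are placeholders, since the question does not show the XML structure.

        // Requires System.IO, System.Linq and System.Xml.Linq.
        foreach (string xmlPath in Directory.GetFiles(@"C:\xml-input", "*.xml"))
        {
            XDocument doc = XDocument.Load(xmlPath);

            // Take the first occurrence of each element; the cast yields null if it is missing.
            string title = (string)doc.Descendants("title").FirstOrDefault();
            string description = (string)doc.Descendants("description").FirstOrDefault();
            string keywords = (string)doc.Descendants("keywords").FirstOrDefault();

            string txtPath = Path.Combine(@"C:\txt-output",
                Path.GetFileNameWithoutExtension(xmlPath) + ".txt");
            File.WriteAllLines(txtPath, new[] { title ?? "", description ?? "", keywords ?? "" });
        }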

    Read the article

  • How to justify text on a TextView made easy- Android

    - by Juan
    I'm looking for a simple way to forget that I'm using a WebView to get justified text in my TextView. Has someone made a custom view for this? I'm well aware that I can do something like this:

        WebView view = new WebView(this);
        view.loadData("my html with text justification", "text/html", "utf-8");

    But it gets ugly when you want to set the size, the color or other common properties of the TextView; there must be a more convenient way of doing it.

    Read the article

  • loading Data in VBA from a text file

    - by omegayen
    I am not very familiar with VBA, but I need to use it for a new software program I am using (not Microsoft related). I have a text file that has columns of data I would like to read into VBA. Specifically, the text file has 4 entries per row, so I would like to load in the column vectors (N by 1). The entries in the text file are separated by spaces. So, for example, I want to load in column one and save it as array A, then column two and save it as array B, then column three and save it as array C, and then column four and save it as array D. The code snippet below, found at http://www.tek-tips.com/faqs.cfm?fid=482, can load text into an array, but I need to adapt it to save the columns as different arrays as specified above...

        Open "MyFile.txt" For Input As #1
        ReDim Txt$(0)
        Do While Not EOF(1)
            ReDim Preserve Txt$(UBound(Txt$) + 1)
            Input #1, Txt$(UBound(Txt$))
        Loop
        Close #1

    Read the article

  • predict location of single-line text from a UITextView

    - by William Jockusch
    Is this possible? Specifically, I have a UITextView, and the text is short enough that it will fit on a single line. I want to predict where it will appear, so that (for example) if I wanted to, I could set up a UILabel that rendered the text in exactly the same location. Once I get that figured out, what I really want to do is pick contentInset and/or contentOffset so that the text of the UITextView and the left-justified UILabel will render in the same location. But I figure the above will let me do that. EDIT: In response to a comment, the fundamental problem I am trying to get around is that UITextField does not let you set the location of the cursor. It appears a lot of people have tried to get around this without success. I need to be able to move the cursor -- inserting/deleting text there is not enough. Related questions:
    - Control cursor position in UITextField
    - Insert string at cursor position of UITextField
    - Moving the cursor to the beginning of UITextField
    - iOS -- dealing with the inability to set the cursor position in a UITextField

    Read the article

  • Replacement Text Syntax for JavaScript’s String.replace()

    - by Jan Goyvaerts
    A RegexBuddy user told me that he couldn't easily find a detailed explanation of the replacement text syntax supported by the String.replace() function in JavaScript. I had to admit that my own web page about JavaScript's regular expression support was also lacking. I've now added a new Replacement Syntax section that has all the details. I'll summarize it here:
    $1: Text matched by the first capturing group, or the literal text $1 if the regex has no capturing groups.
    $99: Text matched by the 99th capturing group if the regex has 99 or more groups. Text matched by the 9th capturing group followed by a literal 9 if the regex has 9 or more but fewer than 99 groups. The literal text $99 if the regex has fewer than 9 groups.
    $+: Text matched by the highest-numbered capturing group. Replaced with nothing if the highest-numbered group didn't participate in the match.
    $&: Text matched by the entire regex. You cannot use $0 for this.
    $` (backtick): Text to the left of the regex match.
    $' (single quote): Text to the right of the regex match.
    $_: The entire subject string.

    Read the article

  • WYSIWYG editor for structured text (suitable for SVN versioning)

    - by chris_l
    I'm looking for an open source, cross-platform WYSIWYG editor that I can use to write documentation. I'm not looking for a web based solution - i.e. it should work without a web server, and I want to save my files directly to disk. The result could be any structured format, like Wiki markup, reStructuredText, DocBook, or a small subset of HTML. But it's important that Subversion diff can be used to see differences between the versions easily (this wouldn't work with .odt or .rtf files, for example). I'm currently thinking about using OpenOffice and saving the files as HTML, but is there a better solution?

    Read the article

  • Good text editors or viewers for large log files

    - by Kristopher Johnson
    Log files and other textual data files are often tens or hundreds of megabytes in size, and some editors choke when you try to open something so large. What are some good applications for viewing large files? Bonus points for apps that can open compressed files, search for things with regular expressions, parse output lines, etc.

    Read the article

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we have continued to consolidate and develop the InnoDB Full-Text Search feature. There is one recent improvement that is worth blogging about. It is a joint effort with the MySQL Optimizer team that simplifies the query plans of some common queries and dramatically shortens the query time. I will describe the issue, our solution and the end result, with some performance numbers to demonstrate our continuing efforts to enhance the Full-Text Search capability.

    The Issue: As we discussed in previous blogs, InnoDB implements the Full-Text index as inverted auxiliary tables. The query, once parsed, is reinterpreted into several queries against the related auxiliary tables, and the results are then merged and consolidated to come up with the final result. So at the end of the query we'll have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL's optimizer and query processing were initially designed for the MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple of examples:

    Case 1: Query result ordered by rank with only top N results:

        mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE FROM articles ORDER BY score DESC LIMIT 1;

    In this query, the user tries to retrieve the single record with the highest ranking. It should have a quick answer once we have all the matching documents on hand, especially since they are already ranked. However, before this change, MySQL would retrieve rankings for almost every row in the table, sort them, and then come up with the top-ranked result. This whole retrieve-and-sort is quite unnecessary given that InnoDB already has the answer. In a real-life case, the user could have millions of rows, so under the old scheme MySQL would retrieve and sort millions of rows' rankings even if the FTS had already found only 3 matching rows. The retrieval of millions of rankings is done in vain: it should just ask for the 3 matched rows' rankings, since all other rows' rankings are 0, and if it wants the top ranking it can simply take the first record from our already-sorted result.

    Case 2: SELECT COUNT(*) on matching records:

        mysql> SELECT COUNT(*) FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

    In this case, the InnoDB search can find the matching rows quickly and will have all of them on hand. However, before our change, MySQL requested every row in the table one by one, just to check whether its ranking is larger than 0, and then came up with a count. In fact, there is no need for MySQL to fetch all rows; InnoDB already has all the matching records, and the only thing needed is to call an InnoDB API to retrieve the count. The difference can be huge. The following query output shows how big it can be:

        mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people');
        +----------+
        | count(*) |
        +----------+
        |   666877 |
        +----------+
        1 row in set (16 min 17.37 sec)

    So the query took over 16 minutes. Let's see how long it takes InnoDB to come up with the result.
    In InnoDB, you can obtain extra diagnostic printout by turning on "innodb_ft_enable_diag_print"; this prints extra query info to the error log:

        keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
        UNION: Searching: 'people'
        Processing time: 2 secs: row(s) 666877: error: 10
        ft_init()
        ft_init_ext()
        keynr=2, 'people' NL search Total docs: 10954826 Total words: 0
        UNION: Searching: 'people'
        Processing time: 3 secs: row(s) 666877: error: 10

    The output shows it took InnoDB only 3 seconds to get the result, while the whole query took over 16 minutes to finish. A large amount of time was wasted on unneeded row fetching.

    The Solution: The solution is obvious. MySQL can skip some of its steps, optimize its plan and obtain useful information directly from InnoDB. Some of the savings from doing this include:

    1) Avoid redundant sorting. Since InnoDB has already sorted the result according to ranking, the MySQL query processing layer does not need to sort again to get the top matching results.

    2) Avoid row-by-row fetching to get the matching count. InnoDB provides all the matching records; all rows not in the result list have a ranking of 0 and do not need to be retrieved. InnoDB also has a count of the total matching records on hand, so there is no need to recount.

    3) Covered index scan. InnoDB results always contain the matching records' Document IDs and their rankings. So if only the Document ID and ranking are needed, there is no need to go to the user table to fetch the record itself.

    4) Narrow the search result early to reduce user-table access. If the user wants the top N matching records, we do not need to fetch all matching records from the user table. We can first select the top N matching Doc IDs, and then fetch only the records corresponding to those Doc IDs.

    Performance Results and Comparison with MyISAM: The effect of this change is very clear. I include six test results, performed by Alexander Rubin, to demonstrate how fast the InnoDB query now is compared with MyISAM Full-Text Search. These tests are based on English Wikipedia data of 5.4 million rows, an approximately 16 GB table. The tests were performed on a machine with one dual-core CPU, an SSD drive, 8 GB of RAM, and the InnoDB buffer pool set to 8 GB.

    Table 1: SELECT with LIMIT clause

        mysql> SELECT si_title, match(si_title, si_text) against('family') as rel FROM si WHERE match(si_title, si_text) against('family') ORDER BY rel desc LIMIT 10;

                            InnoDB     MyISAM            Times Faster
        Time for the query  1.63 sec   3 min 26.31 sec   127

    You can see that for this particular query (retrieve the top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM.

    Table 2: SELECT COUNT query

        mysql> select count(*) from si where match(si_title, si_text) against('family');
        +----------+
        | count(*) |
        +----------+
        |   293955 |
        +----------+

                            InnoDB     MyISAM            Times Faster
        Time for the query  1.35 sec   28 min 59.59 sec  1289

    In this particular case, where there are 293k matching results, InnoDB took only 1.35 seconds to get all of them, while it took MyISAM almost half an hour, about 1289 times faster!
    Table 3: SELECT ID with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

        Term                                        InnoDB time   MyISAM time   Times Faster
        family                                      0.5 sec       5.05 sec      10.1
        family film                                 0.95 sec      25.39 sec     26.7
        Pizza restaurant orange county California   0.93 sec      32.03 sec     34.4
        President united states of America          2.5 sec       36.98 sec     14.8

    Table 4: SELECT title and text with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, si_title, si_text, ... as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

        Term                                        InnoDB time   MyISAM time   Times Faster
        family                                      0.61 sec      41.65 sec     68.3
        family film                                 1.15 sec      47.17 sec     41.0
        Pizza restaurant orange county california   1.03 sec      48.2 sec      46.8
        President united states of america          2.49 sec      44.61 sec     17.9

    Table 5: SELECT ID with ORDER BY and LIMIT clause for selected terms

        mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

        Term                                        InnoDB time   MyISAM time   Times Faster
        family                                      0.5 sec       5.05 sec      10.1
        family film                                 0.95 sec      25.39 sec     26.7
        Pizza restaurant orange county california   0.93 sec      32.03 sec     34.4
        President united states of america          2.5 sec       36.98 sec     14.8

    Table 6: SELECT COUNT(*)

        mysql> SELECT count(*) FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) LIMIT 10;

        Term                                        InnoDB time   MyISAM time   Times Faster
        family                                      0.47 sec      82 sec        174.5
        family film                                 0.83 sec      131 sec       157.8
        Pizza restaurant orange county california   0.74 sec      106 sec       143.2
        President united states of america          1.96 sec      220 sec       112.2

    Again, Tables 3 to 6 all show InnoDB consistently outperforming MyISAM in these queries by a large margin. It is obvious that InnoDB has a great advantage over MyISAM in handling searches over large data sets.

    Summary: These results demonstrate the great performance we can achieve by coupling the MySQL optimizer and InnoDB Full-Text Search more tightly. I think there are still many cases where InnoDB's result info is not fully taken advantage of, which means we still have great room to improve. We will continue to explore this area and deliver more dramatic results for InnoDB full-text searches. Jimmy Yang, September 29, 2012

    Read the article

  • Looking for Linux text editor

    - by Daniel
    I'm looking for a VIM replacement. My key points are:
    - Extensible in a sane language (such as Python, Ruby, or even Lua; after vimscript anything will do). The GUI part should be extensible too, so no SublimeText2.
    - GUI, preferably GTK+.
    - Lightweight. I don't understand IDEs like Eclipse/NetBeans consuming up to 1G of RAM.
    - File browser panel.
    - Splits, tabs and windows. There should be the ability to split views/tabs an infinite number of times (or as long as they fit on screen).
    - VCS support (optional; especially Git).
    - Snippets & autocompletion (not mandatory, but I would very much love to have those).
    Any ideas?

    Read the article

  • Modify Sublime Text 2 whitespace representation?

    - by Mike Grace
    Is there a way to modify the whitespace representation characters so I can change them from dots and dashes to something else? I currently have whitespace characters always being drawn, so it looks like this. I don't need it turned off; I'm just interested in changing how it's represented. I like how TextMate shows invisible characters, but I would be OK with just being able to change the spaces to show a blank space instead of a dot.

    Read the article

  • Recovering text files in terminal using grep on Mac OS X Snow Leopard

    - by littlejim84
    I foolishly removed some source code from my Mac OS X Snow Leopard machine with rm -rf when doing something with buildout. I want to try and recover these files. I haven't touched the system since, while trying to find an answer. I found this article and it seems like the grep method is the way to go, but when running it on my machine I'm getting 'Resource busy' when trying to run it on the disk. I'm using this command:

        sudo grep -a -B1000 -A1000 'video_output' /dev/disk0s2 > file.txt

    where /dev/disk0s2 is what came up when I ran df. I get this when running it:

        grep: /dev/disk0s2: Resource busy

    I'm not an expert with this stuff, I'm trying my best. Please can anyone help me further? I'm on the verge of losing two days of source code work! Thank you

    Read the article

  • Indexing text file content with command line query

    - by Drew Carlton
    I take daily notes in a plaintext file labeled with the date in YYYYMMDD format. These files are no more than 100 lines long and are written in a blog-style format. I'd like to be able to search these files as if they were blog posts indexed by Google, with some phrase query returning the most relevant/recent filenames, with a snippet containing the relevant part. Ideally it would be something like this:

        #searchindex "laptop no sound"

    returns:

        20100909.txt: ... laptop sound isn't working...
        20100101.txt: ... sound is too loud... debating what laptop to buy...

    and so on and so forth. I'm working on a Linux platform (Debian with GNOME). I've looked at Beagle and Tracker, but they just seem like complete overkill for what I want.

    Read the article
