Search Results

Search found 34935 results on 1398 pages for 'text coloring'.


  • Removing a pattern from the beginning and end of a string in Ruby

    - by seaneshbaugh
    So I found myself needing to remove <br /> tags from the beginning and end of strings in a project I'm working on. I made a quick little method that does what I need it to do, but I'm not convinced it's the best way to go about this sort of thing. I suspect there's probably a handy regular expression I can use to do it in only a couple of lines. Here's what I've got:

        def remove_breaks(text)
          if text != nil and text != ""
            text.strip!
            index = text.rindex("<br />")
            while index != nil and index == text.length - 6
              text = text[0, text.length - 6]
              text.strip!
              index = text.rindex("<br />")
            end
            text.strip!
            index = text.index("<br />")
            while index != nil and index == 0
              text = text[6, text.length]
              text.strip!
              index = text.index("<br />")
            end
          end
          return text
        end

    Now the "<br />" could really be anything, and it'd probably be more useful to make a general-purpose function that takes the string to be stripped from the beginning and end as an argument. I'm open to any suggestions on how to make this cleaner, because it just seems like it can be improved.
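
    One possible direction (a sketch, not code from the question; the helper name strip_pattern and the sample call are illustrative): Regexp.escape makes the marker safe to embed in a pattern, after which two anchored gsub calls strip any number of leading or trailing copies along with the surrounding whitespace.

        # Minimal sketch: repeatedly strip `pattern` (plus surrounding whitespace)
        # from both ends of `text`.
        def strip_pattern(text, pattern = "<br />")
          return text if text.nil? || text.empty?
          escaped = Regexp.escape(pattern)
          text = text.gsub(/\A(?:\s*#{escaped}\s*)+/, "")
          text = text.gsub(/(?:\s*#{escaped}\s*)+\z/, "")
          text.strip
        end

        puts strip_pattern("<br /> <br />hello world<br />")   # => "hello world"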

    Read the article

  • Scalable way to store files on server (PHP)?

    - by Nathaniel Bennett
    I'm creating my first web application - a really simplistic online text editor. What I need to do is find the best way to store a lot of text-based files. These text files can be over 10,000 words in size (text words, not computer words); in essence I want the text documents to be limitless in size.

    I was thinking about storing the text files in my MySQL database, but thought there was a better way. Instead I'm planning on storing the text files in an XML-based format in a directory on my server. The rows in the database define the name of the XML-based text file and the user who created the text, along with basic metadata. An ID is generated using a v4 GUID generator, which gives the text an ID and stores the text in the "/store" directory on my server. The text definitions on my server contain this ID, and the Android app I'm developing gets the contents of the text file by retrieving the text definition and then downloading the text to the local device using the GUID in the text definition.

    I just think this is a botch job. How can I improve this system? There have been cases of GUIDs colliding, and I don't want this to happen. A "slim" possibility isn't good enough - I need to make sure there is absolutely no chance of a GUID collision. I was planning on checking the database for texts that have the same ID before storing the text with a particular ID; however, I believe that with over 20,000 pieces of text in my database this would take a long time and produce unneeded stress on the server.

    How can I make GUIDs safe? What happens when a GUID collides? The server backend is going to be written in PHP.
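
    A sketch of one way to make collisions a non-issue on the PHP side: let the database enforce uniqueness and retry on the (astronomically unlikely) conflict. Everything here is illustrative rather than taken from the question: the documents table and its UNIQUE guid column are assumptions, and it presumes PHP 7+ (for random_bytes) with PDO configured to throw exceptions.

        <?php
        // Generate an RFC 4122 version-4 GUID from 16 random bytes.
        function generate_guid_v4(): string {
            $bytes    = random_bytes(16);                       // PHP 7+
            $bytes[6] = chr((ord($bytes[6]) & 0x0f) | 0x40);    // set version 4
            $bytes[8] = chr((ord($bytes[8]) & 0x3f) | 0x80);    // set RFC 4122 variant
            return vsprintf('%s%s-%s-%s-%s-%s%s%s', str_split(bin2hex($bytes), 4));
        }

        // Insert a row keyed by a fresh GUID; if the UNIQUE key ever collides,
        // generate another GUID and try again.
        function store_document(PDO $pdo, int $userId, string $name): string {
            $stmt = $pdo->prepare('INSERT INTO documents (guid, user_id, name) VALUES (?, ?, ?)');
            while (true) {
                $guid = generate_guid_v4();
                try {
                    $stmt->execute([$guid, $userId, $name]);
                    return $guid;                               // safe to use as the file name in /store
                } catch (PDOException $e) {
                    if ($e->getCode() != '23000') {             // 23000 = integrity constraint violation
                        throw $e;                               // a real error, not a duplicate key
                    }
                    // duplicate GUID: loop and try a new one
                }
            }
        }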

    Read the article

  • Could breadcrumbs be considered as rich anchor text, and drop my ranking?

    - by Gkhan14
    I have breadcrumbs on my site, and the anchor text for the links is an exact match for the keyword I want to rank for. In other words, it's "rich" anchor text. An example of my breadcrumbs could look like:

        Home  Apple Cheats  Apple Points Cheats  Apple Seeds Game Cheats

    I've also used RDF markup to construct my breadcrumbs, following the Google article "Rich snippets - Breadcrumbs", which allows Google to recognize them. So overall, do I need to worry about exact-match anchor text getting me penalized by something like Penguin? If so, I can remove the breadcrumbs easily.

    Read the article

  • How can you print a text file via gedit from the command line?

    - by dan
    I'd like to use gedit or some similar program just as a page formatter and pipe some text through it and on to the printer. Plain | lpr just doesn't cut it in the presentation department: the printed output is subpar, even if I tinker with the margin and font-size options. But I like the way text looks when printed from gedit. Is there a way to have the best of both worlds and use a command-line pipeline to print a text file with gedit-quality formatting?
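
    A sketch of one common workaround rather than a gedit-specific answer: have a dedicated text formatter produce the PostScript before it reaches lpr. This assumes the enscript package is installed and that a font such as Times-Roman is available; the file names are placeholders.

        # Format with a proportional font, no page headers, and send to the default printer.
        enscript -B -f Times-Roman12 notes.txt

        # Or keep an explicit pipeline: write PostScript to stdout and hand it to lpr.
        enscript -B -f Times-Roman12 -p - notes.txt | lpr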

    Read the article

  • How to load a text file from a server into an iPhone game with AS3 in Adobe AIR?

    - by Phil
    I'm creating an iPhone game with Adobe AIR, and I want to be able to load a simple text message into a dynamic text box on the game's front screen from my server (and then be able to update that text file on the server, so it updates automatically in the game once the game is on the App Store). How would I go about achieving that? Is it as simple as using getURL? Are there any specific issues with trying to do this on the iPhone via AIR that I should be aware of? Thanks for any advice.
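
    A sketch of the usual AS3 route (getURL is gone in AS3): URLLoader fetches the file at runtime. The URL and the msgField TextField are assumptions for illustration; the AIR application sandbox generally isn't bound by the browser's crossdomain.xml rules, but the device does need connectivity, so the error handler matters.

        import flash.net.URLLoader;
        import flash.net.URLRequest;
        import flash.events.Event;
        import flash.events.IOErrorEvent;

        var loader:URLLoader = new URLLoader();
        loader.addEventListener(Event.COMPLETE, onMessageLoaded);
        loader.addEventListener(IOErrorEvent.IO_ERROR, onMessageFailed);
        loader.load(new URLRequest("http://www.example.com/frontscreen.txt")); // hypothetical URL

        function onMessageLoaded(e:Event):void {
            msgField.text = String(loader.data);   // msgField = an assumed dynamic TextField on the front screen
        }

        function onMessageFailed(e:IOErrorEvent):void {
            msgField.text = "Could not load the latest message.";
        }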

    Read the article

  • How to disable drag and drop of text within a textarea? [migrated]

    - by Manoj Agarwal
    I am working on a UI design where I need to use an HTML textarea. The sample code is:

        <textarea rows="5" cols="60" spellcheck="false"
                  style="font-size: 12px; font-family: Verdana;">
        Abc
        Xyz
        Mnp
        Pqr
        </textarea>

    I don't want to disable the textarea, since there are some cross-browser issues with that. If I select 'yz' in 'Xyz' and drag it after the text 'Mnp', it ends up as 'Mnpyz'. I want to prevent this drag-and-drop of text within the textarea.
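
    A sketch of the usual script-level approach, placed after the textarea in the page (the id is an assumption, and browser behaviour varies, so treat it as a starting point rather than a guaranteed cross-browser fix): cancel drags that start inside the textarea and refuse drops into it.

        <!-- assumes the textarea above is given an id, e.g. <textarea id="notes" ...> -->
        <script type="text/javascript">
            var box = document.getElementById('notes');
            // Returning false from these handlers cancels the default drag/drop behaviour.
            box.ondragstart = function () { return false; };
            box.ondragover  = function () { return false; };
            box.ondrop      = function () { return false; };
        </script>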

    Read the article

  • Does Google pick up anchor text that is in nested elements?

    - by dangerDAN
    When Google looks at anchor text on a website, will it pick up the text if it is inside nested elements? So, for example, going from:

        <a href="http://www.google.com/">Visit Google</a>

    to:

        <a href="http://www.google.com/">
          <div class="circle">
            <span>Visit Google</span>
          </div>
        </a>

    The reason I ask is that I want to use CSS3 styling for certain links on my website, to render them as circles. But the anchor text needs to be picked up for these links, so I want to know whether or not the markup above is bad practice in this case.

    Read the article

  • Execute SQL on CSV files via JDBC

    - by Markos Fragkakis
    Dear all, I need to apply an SQL query to CSV files (comma-separated text files). My SQL is predefined by another tool and is not eligible for change. It may contain embedded selects and table aliases in the FROM part. For my task I have found two open-source (this is a project requirement) libraries that provide JDBC drivers, plus two other options:

        1. CsvJdbc
        2. XlSQL
        3. JBoss Teiid
        4. Create an Apache Derby DB, load all CSVs as tables and execute the query.

    These are the problems I encountered, in the same order:

        1. It does not accept the syntax of the SQL (it uses internal selects and table aliases). Furthermore, it has not been maintained since 2004.
        2. I could not get it to work, as it has as a dependency a SAX parser that causes exceptions when parsing other documents. Similarly, no change since 2004.
        3. I have not checked whether it supports the syntax, but it seems like an overhead. It needs several entities defined (Virtual Databases, Bindings). From the mailing list they told me that the last release supports runtime creation of the required objects. Has anyone used it for such a simple task? (Normally it can connect to several types of data, like CSV, XML or other DBs, and create a virtual, unified one.)
        4. Can this even be done easily?

    From the 4 things I considered/tried, only 3 and 4 seem viable to me. Any advice on these, or any other way in which I can query my CSV files? Cheers
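
    For what it's worth, a sketch of option 4 with an in-memory Apache Derby database (the ORDERS table, its columns and the CSV path are invented for illustration; real code would derive them from the CSV headers, and derby.jar must be on the classpath):

        import java.sql.*;

        public class CsvQueryDemo {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection("jdbc:derby:memory:csvdb;create=true");
                     Statement st = con.createStatement()) {

                    // 1. Declare a table that mirrors the CSV column layout.
                    st.executeUpdate("CREATE TABLE ORDERS (ID INT, CUSTOMER VARCHAR(100), TOTAL DECIMAL(10,2))");

                    // 2. Bulk-load the CSV (NULLs = default comma delimiter, double-quote, platform codeset).
                    try (CallableStatement cs = con.prepareCall(
                            "CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE(NULL, 'ORDERS', ?, NULL, NULL, NULL, 0)")) {
                        cs.setString(1, "/data/orders.csv");   // hypothetical path
                        cs.execute();
                    }

                    // 3. Run the externally supplied SQL, sub-selects and table aliases included.
                    try (ResultSet rs = st.executeQuery(
                            "SELECT o.CUSTOMER, SUM(o.TOTAL) FROM ORDERS o GROUP BY o.CUSTOMER")) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1) + " -> " + rs.getBigDecimal(2));
                        }
                    }
                }
            }
        }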

    Read the article

  • Remove undesired indexed keywords from SQL Server FTS index

    - by Scott
    Could anyone tell me if SQL Server 2008 has a way to prevent keywords from being indexed that aren't really relevant to the types of searches that will be performed? For example, we have the IFilters for PDF and Word hooked in, and our documents are being indexed properly as far as I can tell. These documents, however, have lots of numeric values in them that people won't really be searching for and that don't bring back meaningful results. These are still being indexed and creating lots of entries in the full-text catalog.

    Basically we are trying to optimize our search engine in any way we can, and we assumed all these unnecessary entries can't be helping performance. I want my catalog to consist of alphabetic keywords only. The current IFilters work better than anything I could write in the time I have, but they just index more than I need.

    This is an example of some of the terms from sys.dm_fts_index_keywords_by_document that I want out:

        $1,000, $100, $250, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 129, 13.1, 14, 14.12, 145, 15, 16.2, 16.4, 18, 18.1, 18.2, 18.3, 18.4, 18.5

    These are some examples from the same management view that I think are desirable for keeping and searching on:

        above, accordingly, accounts, add, addition, additional, additive

    Any help would be greatly appreciated!
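
    One hedged sketch of what SQL Server 2008 offers out of the box: a custom stoplist. It only suppresses the specific terms you list (it cannot exclude "anything numeric" wholesale), and the table name dbo.Documents and the stoplist name here are assumptions.

        -- Start from the system stoplist, then add the noise terms seen in the catalog.
        CREATE FULLTEXT STOPLIST NumericNoise FROM SYSTEM STOPLIST;

        ALTER FULLTEXT STOPLIST NumericNoise ADD '100'  LANGUAGE 1033;
        ALTER FULLTEXT STOPLIST NumericNoise ADD '101'  LANGUAGE 1033;
        ALTER FULLTEXT STOPLIST NumericNoise ADD '$250' LANGUAGE 1033;

        -- Point the existing full-text index at the stoplist and repopulate it.
        ALTER FULLTEXT INDEX ON dbo.Documents SET STOPLIST NumericNoise;
        ALTER FULLTEXT INDEX ON dbo.Documents START FULL POPULATION;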

    Read the article

  • Django sphinx works only after app restart.

    - by Lhiash
    Hi, I've set up django-sphinx in my project, and it works perfectly, but only for some time. Later it always returns an empty result set. Surprisingly, restarting the Django app fixes it, and search works again, but again only for a short time (or a very limited number of queries). Here's my sphinx.conf:

        source src_questions
        {
            # data source type
            type            = mysql

            sql_host        = xxxxxx
            sql_user        = xxxxxx   # replace with your db username
            sql_pass        = xxxxxx   # replace with your db password
            sql_db          = xxxxxx   # replace with your db name

            # these two are optional
            sql_port        = xxxxxx
            #sql_sock       = /var/lib/mysql/mysql.sock

            # pre-query, executed before the main fetch query
            sql_query_pre   = SET NAMES utf8

            # main document fetch query
            sql_query       = SELECT q.id AS id, q.title AS title, q.tagnames AS tags, q.html AS text, q.level AS level \
                              FROM question AS q \
                              WHERE q.deleted=0

            # optional - used by command-line search utility to display document information
            sql_query_info  = SELECT title, id, level FROM question WHERE id=$id

            sql_attr_uint   = level
        }

        index questions
        {
            # which document source to index
            source          = src_questions

            # this is path and index file name without extension
            # you may need to change this path or create this folder
            path            = /home/rafal/core_index/index_questions

            # docinfo (ie. per-document attribute values) storage strategy
            docinfo         = extern

            # morphology
            morphology      = stem_en

            # stopwords file
            #stopwords      = /var/data/sphinx/stopwords.txt

            # minimum word length
            min_word_len    = 3

            # uncomment next 2 lines to allow wildcard (*) searches
            min_infix_len   = 1
            enable_star     = 1

            # charset encoding type
            charset_type    = utf-8
        }

        # indexer settings
        indexer
        {
            # memory limit (default is 32M)
            mem_limit       = 64M
        }

        # searchd settings
        searchd
        {
            # IP address on which search daemon will bind and accept
            # optional, default is to listen on all addresses,
            # ie. address = 0.0.0.0
            address         = 127.0.0.1

            # port on which search daemon will listen
            port            = 3312

            # searchd run info is logged here - create or change the folder
            log             = ../log/sphinx.log

            # all the search queries are logged here
            query_log       = ../log/query.log

            # client read timeout, seconds
            read_timeout    = 5

            # maximum amount of children to fork
            max_children    = 30

            # a file which will contain searchd process ID
            pid_file        = searchd.pid

            # maximum amount of matches this daemon would ever retrieve
            # from each index and serve to client
            max_matches     = 1000
        }

    And here's my search code from views.py:

        content = Question.search.query(keywords)
        if level:
            content = content.filter(level=level)   # level is an array of integers

    There are no errors in any logs; it just isn't returning any results. All help would be most appreciated.

    Read the article

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help to shed some light on. At a high level the problem is as follows. The following query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, then it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plans for the two queries, I can see that in the second case there are two places with huge differences between the actual and estimated number of rows, these being:

        1) the FulltextMatch table-valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1670 rows before the join), and
        2) the index seek on the full-text index, where the estimate is 1 row and the actual is 13,000 rows.

    As a result of the estimates, the optimiser is choosing to use a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem either by (a) parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN) to it, or (b) by forcing a HASH JOIN to be used. In both of these cases the query returns in under 1 second and the estimates appear reasonable.

    My question really is: why are the estimates used in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes on the indexed view being used here. Any help greatly appreciated.
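
    For reference, a sketch of what workaround (b) looks like when written out; the comparison operator in the date filter is assumed here, since it was lost in the original formatting.

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'
        OPTION (HASH JOIN);   -- query-level hint: every join in the query becomes a hash join

    Workaround (a) is the same query with the date supplied as a parameter and OPTION (OPTIMIZE FOR UNKNOWN) in place of the join hint.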

    Read the article

  • How to get levels for Fry Graph readability formula?

    - by Vic
    Hi, I'm working on an application (C#) that applies some readability formulas to a text, like Gunning-Fog, Precise SMOG and Flesch-Kincaid. Now I need to implement the Fry-based grade formula in my program. I understand the formula's logic: pretty much you take three 100-word samples, calculate the average number of sentences per 100 words and syllables per 100 words, and then plot the values on a graph. Here is a more detailed explanation of how this formula works.

    I already have the averages, but I have no idea how to tell my program to "go check the graph, plot the values and give me a level." I don't have to show the graph to the user; I only have to show them the level. I was thinking that maybe I could keep all the values in memory, divided into levels, for example:

        Level 1: sentence averages between 10.0 and 25+, and syllable averages between 108 and 132.
        Level 2: sentence averages between 7.7 and 10.0, and ... so on.

    But the problem is that, so far, the only place where I have found the values that define a level is the graph itself, and they aren't very precise, so if I take the values straight from the graph my level estimates will be imprecise and the Fry-based grade will not be accurate. So maybe someone knows of a place where I can find exact values for the different levels of the Fry-based grade, or can help me think of a way to work around this. Thanks
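
    A sketch of the lookup-table idea described above, in C#. The zone boundaries below are placeholders, not real Fry graph data, and the rectangular bands are only an approximation of the graph's curved regions; the real boundaries would have to be digitised from an authoritative copy of the graph.

        using System;
        using System.Collections.Generic;

        class FryZone
        {
            public int Level;
            public double MinSentences, MaxSentences;   // sentences per 100 words
            public double MinSyllables, MaxSyllables;   // syllables per 100 words
        }

        static class FryGrade
        {
            static readonly List<FryZone> Zones = new List<FryZone>
            {
                // Placeholder boundaries for illustration only.
                new FryZone { Level = 1, MinSentences = 10.0, MaxSentences = 25.0, MinSyllables = 108, MaxSyllables = 132 },
                new FryZone { Level = 2, MinSentences = 7.7,  MaxSentences = 10.0, MinSyllables = 108, MaxSyllables = 140 },
            };

            // Returns the matching grade level, or -1 if the point falls in one of the
            // grey "invalid" regions outside every zone.
            public static int GetLevel(double avgSentences, double avgSyllables)
            {
                foreach (var zone in Zones)
                {
                    if (avgSentences >= zone.MinSentences && avgSentences <= zone.MaxSentences &&
                        avgSyllables >= zone.MinSyllables && avgSyllables <= zone.MaxSyllables)
                    {
                        return zone.Level;
                    }
                }
                return -1;
            }
        }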

    Read the article

  • MySQL FULLTEXT aggravation

    - by southof40
    Hi - I'm having problems with case sensitivity in MySQL FULLTEXT searches. I've just followed the FULLTEXT example in the MySQL documentation at http://dev.mysql.com/doc/refman/5.1/en/fulltext-boolean.html. I'll post it here for ease of reference:

        CREATE TABLE articles (
          id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
          title VARCHAR(200),
          body TEXT,
          FULLTEXT (title,body)
        );

        INSERT INTO articles (title,body) VALUES
          ('MySQL Tutorial','DBMS stands for DataBase ...'),
          ('How To Use MySQL Well','After you went through a ...'),
          ('Optimizing MySQL','In this tutorial we will show ...'),
          ('1001 MySQL Tricks','1. Never run mysqld as root. 2. ...'),
          ('MySQL vs. YourSQL','In the following database comparison ...'),
          ('MySQL Security','When configured properly, MySQL ...');

        SELECT * FROM articles
        WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

    My problem is that the example shows that SELECT returning the first and fifth rows ('..DataBase..' and '..database..'), but I only get one row ('database')! The example doesn't say what collation the table in the example had, but I have ended up with latin1_general_cs on the title and body columns of my example table. My version of MySQL is 5.1.39-log and the connection collation is utf8_unicode_ci. I'd be really grateful if someone could suggest why my experience differs from the example in the manual! Be grateful for any advice.
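
    A hedged sketch of the usual explanation: the _cs collation makes the indexed columns case-sensitive, so 'DataBase' and 'database' become different tokens. If case-sensitive matching isn't needed elsewhere, switching the columns to a case-insensitive collation (which forces the table, and with it the FULLTEXT index, to be rebuilt) should restore the documented behaviour.

        ALTER TABLE articles
          MODIFY title VARCHAR(200) CHARACTER SET latin1 COLLATE latin1_general_ci,
          MODIFY body  TEXT         CHARACTER SET latin1 COLLATE latin1_general_ci;

        -- Re-run the manual's query to verify that both rows now come back.
        SELECT * FROM articles
        WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);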

    Read the article

  • CoInitialize fails in a DLL

    - by Quandary
    Question: I have the following program, which uses COM and the Microsoft Speech API (SAPI) to take a text and output it as sound. It works fine as long as I build it as an .exe. When I load the same code as a .dll, it fails. Why? I used Dependency Walker and saw that the exe doesn't have MSVCR100D and ole32, so I loaded them like this:

        LoadLibraryA("MSVCR100D.DLL");
        LoadLibraryA("ole32.dll");

    but it didn't help... Any idea why?

        #include <windows.h>
        #include <sapi.h>
        #include <cstdlib>

        int main(int argc, char* argv[])
        {
            ISpVoice * pVoice = NULL;

            if (FAILED(::CoInitialize(NULL)))
                return FALSE;

            HRESULT hr = CoCreateInstance(CLSID_SpVoice, NULL, CLSCTX_ALL, IID_ISpVoice, (void **) &pVoice);
            if( SUCCEEDED( hr ) )
            {
                hr = pVoice->Speak(L"Noobie was fragged by GSG9 Googlebot", 0, NULL);
                hr = pVoice->Speak(L"Test Test", 0, NULL);
                hr = pVoice->Speak(L"This sounds normal <pitch middle = '-10'/> but the pitch drops half way through", SPF_IS_XML, NULL );
                pVoice->Release();
                pVoice = NULL;
            }

            ::CoUninitialize();
            return EXIT_SUCCESS;
        }
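
    One detail worth checking when this code moves into a DLL (a sketch under that assumption, not a diagnosis): the host thread may already have COM initialised, so a bare FAILED(CoInitialize(...)) check bails out on RPC_E_CHANGED_MODE even though COM is perfectly usable. The function name below is illustrative.

        #include <windows.h>
        #include <sapi.h>

        bool SpeakFromDll(LPCWSTR text)
        {
            HRESULT hrInit = ::CoInitialize(NULL);
            // S_OK or S_FALSE: COM is usable and the call must be balanced by CoUninitialize.
            // RPC_E_CHANGED_MODE: the thread already runs COM in a different apartment model;
            // COM is still usable, but we must NOT call CoUninitialize ourselves.
            if (FAILED(hrInit) && hrInit != RPC_E_CHANGED_MODE)
                return false;
            bool mustUninit = SUCCEEDED(hrInit);

            bool spoke = false;
            ISpVoice* pVoice = NULL;
            if (SUCCEEDED(CoCreateInstance(CLSID_SpVoice, NULL, CLSCTX_ALL,
                                           IID_ISpVoice, (void**)&pVoice)))
            {
                spoke = SUCCEEDED(pVoice->Speak(text, 0, NULL));
                pVoice->Release();
            }

            if (mustUninit)
                ::CoUninitialize();
            return spoke;
        }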

    Read the article

  • C# sending SMS from a computer

    - by I__
    I have this code:

        private SerialPort port = new SerialPort("COM1", 115200, Parity.None, 8, StopBits.One);

        Console.WriteLine("Incoming Data:");
        port.WriteTimeout = 5000;
        port.ReadTimeout = 5000;

        // Attach a method to be called when there is data waiting in the port's buffer
        port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived);

        // Begin communications
        port.Open();

        #region PhoneSMSSetup
        port.Write("AT+CMGF=1\r\n");
        Thread.Sleep(500);
        port.Write("AT+CNMI=2,2\r\n");
        Thread.Sleep(500);
        port.Write("AT+CSCA=\"+4790002100\"\r\n");
        Thread.Sleep(500);
        #endregion

        // Enter an application loop which keeps this thread alive
        Application.Run();

    I got it from here: http://www.experts-exchange.com/Programming/Languages/C_Sharp/Q_22832563.html

    I have a new, empty WinForms application:

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;

        namespace WindowsFormsApplication1
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                private void Form1_Load(object sender, EventArgs e)
                {
                }
            }
        }

    Can you please tell me: where exactly would I paste the code, and how do I get the code to run? I am sending AT commands to my cell phone, which is attached to the computer.
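
    A sketch of one way the snippet could be slotted into that empty form (the COM port, baud rate and service-centre number are carried over from the snippet above and are assumptions about the phone; Form1_Load is assumed to be wired to the form's Load event, as in a default project). Application.Run() is not needed here, because the project's Program.cs already calls it.

        using System;
        using System.IO.Ports;      // SerialPort lives here
        using System.Threading;
        using System.Windows.Forms;

        namespace WindowsFormsApplication1
        {
            public partial class Form1 : Form
            {
                // A field, so the port lives as long as the window does.
                private readonly SerialPort port =
                    new SerialPort("COM1", 115200, Parity.None, 8, StopBits.One);

                public Form1()
                {
                    InitializeComponent();
                }

                private void Form1_Load(object sender, EventArgs e)
                {
                    port.WriteTimeout = 5000;
                    port.ReadTimeout = 5000;
                    port.DataReceived += port_DataReceived;
                    port.Open();

                    // Put the phone into text mode and set the SMS service centre.
                    port.Write("AT+CMGF=1\r\n");                Thread.Sleep(500);
                    port.Write("AT+CNMI=2,2\r\n");              Thread.Sleep(500);
                    port.Write("AT+CSCA=\"+4790002100\"\r\n");  Thread.Sleep(500);
                }

                private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
                {
                    // Runs on a background thread; just dump the phone's replies for now.
                    Console.WriteLine(port.ReadExisting());
                }
            }
        }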

    Read the article

  • Indexing and Searching Over Word Level Annotation Layers in Lucene

    - by dmcer
    I have a data set with multiple layers of annotation over the underlying text, such as part-of-speech tags, chunks from a shallow parser, named entities, and others from various natural language processing (NLP) tools. For a sentence like "The man went to the store", the annotations might look like:

        Word    POS    Chunk    NER
        ====    ===    =====    ========
        The     DT     NP       Person
        man     NN     NP       Person
        went    VBD    VP       -
        to      TO     PP       -
        the     DT     NP       Location
        store   NN     NP       Location

    I'd like to index a bunch of documents with annotations like these using Lucene and then perform searches across the different layers. An example of a simple query would be to retrieve all documents where Washington is tagged as a person. While I'm not absolutely committed to the notation, syntactically end-users might enter the query as follows:

        Query: Word=Washington,NER=Person

    I'd also like to do more complex queries involving the sequential order of annotations across different layers, e.g. find all the documents where there's a word tagged person, followed by the words "arrived at", followed by a word tagged location. Such a query might look like:

        Query: "NER=Person Word=arrived Word=at NER=Location"

    What's a good way to go about approaching this with Lucene? Is there any way to index and search over document fields that contain structured tokens?
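
    A sketch of one common approach (not the only one): emit each word's annotations as extra tokens stacked at the same position in a single field, so span queries can mix layers and still respect word order. The field name, the "POS="/"NER=" prefixes and the Lucene 4.x-style API are all illustrative.

        import java.io.IOException;
        import org.apache.lucene.analysis.TokenStream;
        import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
        import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

        /** Emits word, "POS=..." and "NER=..." tokens, all sharing the word's position. */
        final class AnnotatedTokenStream extends TokenStream {
            private final String[] words, pos, ner;
            private int word = 0, layer = 0;
            private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
            private final PositionIncrementAttribute posIncAtt = addAttribute(PositionIncrementAttribute.class);

            AnnotatedTokenStream(String[] words, String[] pos, String[] ner) {
                this.words = words; this.pos = pos; this.ner = ner;
            }

            @Override
            public boolean incrementToken() throws IOException {
                if (word >= words.length) return false;
                clearAttributes();
                switch (layer) {
                    case 0:  termAtt.append(words[word].toLowerCase()); posIncAtt.setPositionIncrement(1); break;
                    case 1:  termAtt.append("POS=" + pos[word]);        posIncAtt.setPositionIncrement(0); break;
                    default: termAtt.append("NER=" + ner[word]);        posIncAtt.setPositionIncrement(0); break;
                }
                if (++layer == 3) { layer = 0; word++; }
                return true;
            }
        }

        // Indexing:  doc.add(new TextField("annotated", new AnnotatedTokenStream(words, pos, ner)));
        // Querying "NER=Person arrived at NER=Location" as a position-aligned sequence:
        //   new SpanNearQuery(new SpanQuery[] {
        //       new SpanTermQuery(new Term("annotated", "NER=Person")),
        //       new SpanTermQuery(new Term("annotated", "arrived")),
        //       new SpanTermQuery(new Term("annotated", "at")),
        //       new SpanTermQuery(new Term("annotated", "NER=Location")) }, 0, true);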

    Read the article

  • Pull specific information from a long list with Perl

    - by melignus
    The file that I've got to work with here is the result of an LDAP extraction, but I need to ultimately get the information into a format that a spreadsheet can use. The data is as follows:

        DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
        DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
        displayName: John Doe
        name: ##userName
        DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
        DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
        displayName: Jane Doe
        name: ##userName
        DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
        DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
        displayName: Ted Doe
        name: ##userName

    The format that I need to export to is:

        firstName   lastName   userName
        firstName   lastName   userName
        firstName   lastName   userName

    where the separators are tabs, so I can then import the file into a database. I have experience doing this in VBScript, but I'm trying to switch over to using Perl for as much server administration as possible. I'm not sure of the syntax for what I want, which is basically:

        while not endoffile {
            detect "displayName: " & $firstName & " " & $lastName
            detect "name: ##" & $userName
            write $firstName tab $lastName tab $userName to file
        }

    Also, if someone could point me to a resource specifically on the text-parsing syntax that Perl uses, I'd be very grateful. Most of the resources that I've come across haven't been very helpful.
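
    A sketch along the lines of that pseudocode (the file names are placeholders, and the regexes assume the displayName value is exactly "First Last"):

        #!/usr/bin/perl
        use strict;
        use warnings;

        open my $in,  '<', 'ldap_export.txt' or die "Cannot read input: $!";
        open my $out, '>', 'users.tsv'       or die "Cannot write output: $!";

        my ($first, $last);
        while (my $line = <$in>) {
            if ($line =~ /^displayName:\s+(\S+)\s+(.+?)\s*$/) {
                ($first, $last) = ($1, $2);           # remember the most recent displayName
            }
            elsif ($line =~ /^name:\s+##(\S+)/ && defined $first) {
                print {$out} join("\t", $first, $last, $1), "\n";
                ($first, $last) = (undef, undef);     # reset for the next record
            }
        }
        close $in;
        close $out;

    For the pattern-matching syntax itself, perldoc perlretut and perldoc perlre are the standard references.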

    Read the article

  • SpeechRecognizer causes ANR... I need help with the Android speech API

    - by Dondo Chaka
    I'm trying to use Android's speech recognition package to record user speech and translate it to text. Unfortunately, when I attempt to initiate listening, I get an ANR error that doesn't point to anything specific. As the SpeechRecognizer API indicates, a RuntimeException is thrown if you attempt to call it from anywhere other than the main thread. This would make me wonder if the processing is just too demanding... but I know that other applications use the Android API for this purpose and they are typically pretty snappy.

        java.lang.RuntimeException: SpeechRecognizer should be used only from the application's main thread

    Here is a (trimmed) sample of the code I'm trying to call from my service. Is this the proper approach? Thanks for taking the time to help. This has been a hurdle I haven't been able to get over yet.

        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, "com.domain.app");

        SpeechRecognizer recognizer = SpeechRecognizer
                .createSpeechRecognizer(this.getApplicationContext());

        RecognitionListener listener = new RecognitionListener() {
            @Override
            public void onResults(Bundle results) {
                ArrayList<String> voiceResults = results
                        .getStringArrayList(RecognizerIntent.EXTRA_RESULTS);
                if (voiceResults == null) {
                    Log.e(getString(R.string.log_label), "No voice results");
                } else {
                    Log.d(getString(R.string.log_label), "Printing matches: ");
                    for (String match : voiceResults) {
                        Log.d(getString(R.string.log_label), match);
                    }
                }
            }

            @Override
            public void onReadyForSpeech(Bundle params) {
                Log.d(getString(R.string.log_label), "Ready for speech");
            }

            @Override
            public void onError(int error) {
                Log.d(getString(R.string.log_label), "Error listening for speech: " + error);
            }

            @Override
            public void onBeginningOfSpeech() {
                Log.d(getString(R.string.log_label), "Speech starting");
            }
        };

        recognizer.setRecognitionListener(listener);
        recognizer.startListening(intent);
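
    A sketch of the usual fix, assuming the snippet above currently runs on a worker thread inside the service: marshal the SpeechRecognizer calls onto the main thread with a Handler bound to the main looper. The recognition work itself happens in a separate recognition-service process, so this should not block the UI.

        // ...inside the Service, with `intent` and `listener` built as above
        // (declared final so the anonymous Runnable can see them)...
        final Handler mainHandler = new Handler(Looper.getMainLooper());
        mainHandler.post(new Runnable() {
            @Override
            public void run() {
                // Created and driven entirely on the main thread, as the API requires.
                SpeechRecognizer recognizer =
                        SpeechRecognizer.createSpeechRecognizer(getApplicationContext());
                recognizer.setRecognitionListener(listener);
                recognizer.startListening(intent);
            }
        });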

    Read the article

  • MySQL "OR MATCH" hangs (very slow) on multiple tables

    - by Kerry
    After learning how to do MySQL full-text search, the recommended solution for searching across multiple tables was OR MATCH plus the extra database call; you can see that in my query below. When I do this, it just gets stuck in a "busy" state, and I can't access the MySQL database.

        SELECT a.`product_id`, a.`name`, a.`slug`, a.`description`,
               b.`list_price`, b.`price`,
               c.`image`, c.`swatch`,
               e.`name` AS industry,
               MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE ) AS relevance
        FROM `products` AS a
        LEFT JOIN `website_products` AS b ON (a.`product_id` = b.`product_id`)
        LEFT JOIN (
            SELECT `product_id`, `image`, `swatch`
            FROM `product_images`
            WHERE `sequence` = 0
        ) AS c ON (a.`product_id` = c.`product_id`)
        LEFT JOIN `brands` AS d ON (a.`brand_id` = d.`brand_id`)
        INNER JOIN `industries` AS e ON (a.`industry_id` = e.`industry_id`)
        WHERE b.`website_id` = %d
          AND b.`status` = %d
          AND b.`active` = %d
          AND MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE )
           OR MATCH ( d.`name` ) AGAINST ( '%s' IN BOOLEAN MODE )
        GROUP BY a.`product_id`
        ORDER BY relevance DESC
        LIMIT 0, 9

    Any help would be greatly appreciated.

    EDIT: All the tables involved are MyISAM, utf8_general_ci. Here's the EXPLAIN SELECT output:

        id  select_type  table           type    possible_keys  key         key_len  ref                     rows   Extra
        1   PRIMARY      a               ALL     NULL           NULL        NULL     NULL                    16076  Using temporary; Using filesort
        1   PRIMARY      b               ref     product_id     product_id  4        database.a.product_id   2
        1   PRIMARY      e               eq_ref  PRIMARY        PRIMARY     4        database.a.industry_id  1
        1   PRIMARY      <derived2>      ALL     NULL           NULL        NULL     NULL                    23261
        1   PRIMARY      d               eq_ref  PRIMARY        PRIMARY     4        database.a.brand_id     1      Using where
        2   DERIVED      product_images  ALL     NULL           NULL        NULL     NULL                    25933  Using where

    UPDATE: The query returns after 196 seconds (correctly, I think). The query without the multiple-table MATCH takes about 0.56 seconds (which I know is really slow; we plan on changing to Solr or Sphinx soon), but 196 seconds?? If we could instead add a number to the relevance when the term appears in the brand name (d.`name`), that would also work.
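
    One detail worth double-checking, sketched against the same query rather than offered as the certain cause of the slowness: in MySQL, AND binds more tightly than OR, so as written the brand MATCH is OR'ed against the entire filtered condition, not just the other MATCH. Parenthesising the two MATCH clauses keeps the website/status/active filters in force for both branches:

        ...
        WHERE b.`website_id` = %d
          AND b.`status` = %d
          AND b.`active` = %d
          AND ( MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE )
             OR MATCH ( d.`name` ) AGAINST ( '%s' IN BOOLEAN MODE ) )
        ...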

    Read the article

  • Replacing all GUIDs in a file with new GUIDs from the command line

    - by Josh Petrie
    I have a file containing a large number of occurrences of the string Guid="GUID HERE" (where GUID HERE is a unique GUID at each occurrence), and I want to replace every existing GUID with a new unique GUID. This is on a Windows development machine, so I can generate unique GUIDs with uuidgen.exe (which prints a GUID on stdout every time it is run). I have sed and such available (but no awk, oddly enough).

    I am basically trying to figure out whether it is possible (and if so, how) to use the output of a command-line program as the replacement text in a sed substitution expression, so that I can make this replacement with a minimum of effort on my part. I don't need to use sed - if there's another way to do it, such as some crazy vim-fu or some other program, that would work as well - but I'd prefer solutions that use a minimal set of *nix programs, since I'm not really on *nix machines.

    To be clear, if I have a file like this:

        etc etc Guid="A" etc etc Guid="B"

    I would like it to become this:

        etc etc Guid="C" etc etc Guid="D"

    where A, B, C and D are actual GUIDs, of course.

    (For example, I have seen xargs used for things similar to this, but it's not available on the machines this needs to run on either. I could install it if it's really the only way, although I'd rather not.)

    Read the article

  • C# Combining lines

    - by Mike
    Hey everybody, this is what I have going on. I have two text files; let's call them A.txt and B.txt. A.txt is a config file that contains a bunch of folder names, with only one listing per folder. B.txt is a directory listing that contains folder names and sizes, but B contains a bunch of listings, not just one entry per folder. What I need is: if a line in B contains a name from A, take all the lines in B that contain it and write them out as A|B|B|B etc. So, for example:

    A.txt:

        Apple
        Orange
        Pear
        XBSj
        HEROE

    B.txt:

        Apple|3123123
        Apple|3434
        Orange|99999999
        Orange|1234544
        Pear|11
        Pear|12
        XBSJ|43949
        XBSJ|43933

    Result.txt:

        Apple|3123123|3434
        Orange|99999999|1234544
        Pear|11|12
        XBSJ|43949|43933

    This is what I had, but it's not really doing what I need:

        string[] combineconfig = File.ReadAllLines(@"C:\a.txt");
        foreach (string ccline in combineconfig)
        {
            string[] readlines = File.ReadAllLines(@"C:\b.txt");
            if (readlines.Contains(ccline))
            {
                foreach (string rdlines in readlines)
                {
                    string[] pslines = rdlines.Split('|');
                    File.AppendAllText(@"C:\result.txt", ccline + '|' + pslines[0]);
                }
            }
        }

    I now realize it's not going to find the first "if", because it reads the entire line and can't find it. But I still believe my output file will not contain what I need.
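
    A sketch of one way to get that grouping (the file paths are the ones from the snippet above; note that B.txt has "XBSJ" where A.txt has "XBSj", so a case-insensitive comparison is assumed here):

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class CombineLines
        {
            static void Main()
            {
                // Folder names we care about, compared case-insensitively.
                var wanted = new HashSet<string>(File.ReadAllLines(@"C:\a.txt"),
                                                 StringComparer.OrdinalIgnoreCase);

                var output = File.ReadAllLines(@"C:\b.txt")
                    .Select(line => line.Split('|'))
                    .Where(parts => parts.Length == 2 && wanted.Contains(parts[0]))
                    .GroupBy(parts => parts[0], StringComparer.OrdinalIgnoreCase)
                    .Select(g => g.Key + "|" + string.Join("|", g.Select(p => p[1]).ToArray()));

                File.WriteAllLines(@"C:\result.txt", output.ToArray());
            }
        }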

    Read the article

  • DefaultStyledDocument.styleChanged(Style style) may not run in a timely manner?

    - by Paul Reiners
    I'm experiencing an intermittent problem with a class that extends javax.swing.text.DefaultStyledDocument. This document is being sent to a printer. Most of the time the formatting of the document looks correct, but once in a while it doesn't. It looks like some of the changes in the formatting have not been applied. I took a look at the DefaultStyledDocument.styleChanged(Style style) code:

        /**
         * Called when any of this document's styles have changed.
         * Subclasses may wish to be intelligent about what gets damaged.
         *
         * @param style The Style that has changed.
         */
        protected void styleChanged(Style style) {
            // Only propagate change updated if have content
            if (getLength() != 0) {
                // lazily create a ChangeUpdateRunnable
                if (updateRunnable == null) {
                    updateRunnable = new ChangeUpdateRunnable();
                }

                // We may get a whole batch of these at once, so only
                // queue the runnable if it is not already pending
                synchronized(updateRunnable) {
                    if (!updateRunnable.isPending) {
                        SwingUtilities.invokeLater(updateRunnable);
                        updateRunnable.isPending = true;
                    }
                }
            }
        }

        /**
         * When run this creates a change event for the complete document
         * and fires it.
         */
        class ChangeUpdateRunnable implements Runnable {
            boolean isPending = false;

            public void run() {
                synchronized(this) {
                    isPending = false;
                }

                try {
                    writeLock();
                    DefaultDocumentEvent dde = new DefaultDocumentEvent(0, getLength(),
                            DocumentEvent.EventType.CHANGE);
                    dde.end();
                    fireChangedUpdate(dde);
                } finally {
                    writeUnlock();
                }
            }
        }

    Does the fact that SwingUtilities.invokeLater(updateRunnable) is called, rather than invokeAndWait(updateRunnable), mean that I can't count on my formatting changes appearing in the document before it is rendered? If that is the case, is there a way to ensure that I don't proceed with rendering until the updates have occurred?
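
    A sketch of one workaround, assuming the styles are being changed on the event dispatch thread: queue the print job with invokeLater as well, so it runs after any ChangeUpdateRunnable that styleChanged() has already queued (invokeLater runnables execute in FIFO order). The printDocument call and styledDocument reference are placeholders for the application's own printing code.

        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                // By the time this runs, the pending change-update runnable (if any)
                // has already fired its changedUpdate event.
                printDocument(styledDocument);
            }
        });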

    Read the article

  • Lucene raise document score if sibling entity matches query

    - by Pitagoras
    I have the following design situation. I use Hibernate Search (Lucene in the back). The application manages ITEMs, which have a title, description and tags; these are full-text indexed. On the other hand, we have COLLECTIONs of ITEMs. The user can create a COLLECTION and add as many ITEMs as she wants, and ITEMs can also belong to many COLLECTIONs.

    I have a boosted query so that search terms that appear in the tags are more important than those in the title, and lastly in the description. But I need an additional matching criterion: a given ITEM should rank better if other documents in some COLLECTION to which the ITEM belongs also match the query. This is like saying: the title/tags/description of "fellow" items (i.e. items in some shared collection) make the item rank better.

    I was thinking that adding an ITEM to a COLLECTION could add something like "extra tags" to every other ITEM in the collection, these extra tags being the elements to match in the added ITEM. I feel a more clever solution, Lucene-wise, should exist. Any ideas/pointers are welcome. Thanks.

    Read the article

  • Basic formatting. sed, or cut, or what?

    - by dsclough
    Very new to this whole Unix thing. I'm currently using the Korn shell to try to format some lines of text. My input has a couple of lines that look something like this:

        Date/Time :- Monday June 03 00:00:00 EDT 2013
        Host Name :- HostNameHere
        PIDS :- NumbersNLetters
        Product Name :- ProductName

    The desired output would be as follows:

        Date/Time="Monday June 03 00:00:00 EDT 2013"
        HostName="HostNameHere"
        PIDS="NumbersNLetters"
        ProductName="ProductName"

    So, I need to get rid of any spaces in the leftmost column and put everything in the rightmost column between quotation marks. I've looked at the cut command and got this far:

        cut -f 1,2 -d -

    which might produce a result like Date/Time:Monday June 03 00:00:00 EDT 2013 - close to what I want, but not quite. I wasn't sure if cut would let me add the quotation marks, and it doesn't look like I can remove spaces that way either. sed seems like it might be closer to the answer, but I wasn't able to find through googling how I might just look for any pattern rather than a specific one. I apologize for the incredibly basic question, but reading documentation only gets you so far before your brain starts to ache... If there are any better resources I should be looking at, I would be happy to be pointed in the right direction. Thanks!
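
    A sed-only sketch (the file names are placeholders, and it assumes the values themselves never contain ":-"): quote the right-hand side first, then loop to squeeze the spaces out of the key to the left of the "=".

        sed -e 's/ *:- */="/' \
            -e 's/$/"/' \
            -e ':a' \
            -e 's/^\([^=]*\) \([^=]*=\)/\1\2/' \
            -e 'ta' input.txt > output.txt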

    Read the article

  • How can we change the color of text programmatically?

    - by user297535
    My code is:

        - (UIImage *)addText:(UIImage *)img text:(NSString *)text1 {
            int w = img.size.width;
            int h = img.size.height;

            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef context = CGBitmapContextCreate(NULL, w, h, 8, 4 * w, colorSpace, kCGImageAlphaPremultipliedFirst);
            CGContextDrawImage(context, CGRectMake(0, 0, w, h), img.CGImage);

            CGContextSetRGBFillColor(context, 0.0, 0.0, 1.0, 1);

            char *text = (char *)[text1 cStringUsingEncoding:NSASCIIStringEncoding];
            CGContextSelectFont(context, "Arial", 18, kCGEncodingMacRoman);
            CGContextSetTextDrawingMode(context, kCGTextFill);
            CGContextSetRGBFillColor(context, 255, 255, 255, 2);
            CGContextShowTextAtPoint(context, 10, 170, text, strlen(text));

            CGImageRef imageMasked = CGBitmapContextCreateImage(context);
            CGContextRelease(context);
            CGColorSpaceRelease(colorSpace);

            return [UIImage imageWithCGImage:imageMasked];
        }

    How can we change the color of the text programmatically? Answers will be greatly appreciated!
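
    Not a definitive fix, but one detail worth noting as a sketch: CGContextSetRGBFillColor expects each component in the 0.0-1.0 range, so the 255, 255, 255, 2 call above is effectively opaque white. Drawing the text in, say, red would mean giving the fill-color call made just before CGContextShowTextAtPoint values like these:

        CGContextSetTextDrawingMode(context, kCGTextFill);
        CGContextSetRGBFillColor(context, 1.0, 0.0, 0.0, 1.0);   // opaque red, components in 0.0-1.0
        CGContextShowTextAtPoint(context, 10, 170, text, strlen(text));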

    Read the article
