Search Results

Search found 428 results on 18 pages for 'delimited'.

Page 2 of 18

  • Fastest way to generate delimited string from 1d numpy array

    - by Abiel
    I have a program which needs to turn many large one-dimensional numpy arrays of floats into delimited strings. I am finding this operation quite slow relative to the mathematical operations in my program and am wondering if there is a way to speed it up. For example, consider the following loop, which takes 100,000 random numbers in a numpy array and joins the array into a comma-delimited string 100 times:

        import numpy as np
        x = np.random.randn(100000)
        for i in range(100):
            ",".join(map(str, x))

    This loop takes about 20 seconds to complete (total, not per cycle). In contrast, 100 cycles of something like elementwise multiplication (x*x) take less than 1/10 of a second. Clearly the string join operation creates a large performance bottleneck; in my actual application it will dominate total runtime. This makes me wonder: is there a faster way than ",".join(map(str, x))? Since map() is where almost all the processing time occurs, this comes down to the question of whether there is a faster way to convert a very large number of numbers to strings.
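
    One direction worth benchmarking (a sketch, not a definitive answer): let numpy format the whole array at once instead of calling str() per element, either via np.savetxt into an in-memory buffer or via a vectorized astype(str) followed by a single join. Note that the formatted output may differ slightly from str(float), and timings vary by numpy version.

        import io
        import numpy as np

        x = np.random.randn(100000)

        # Option 1: format the whole row with np.savetxt into a StringIO buffer.
        buf = io.StringIO()
        np.savetxt(buf, x.reshape(1, -1), fmt="%.17g", delimiter=",")
        s1 = buf.getvalue().rstrip("\n")

        # Option 2: vectorized conversion to a string array, then one join.
        s2 = ",".join(x.astype(str))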

    Read the article

  • trying to read a delimited text file from resources - but it won't run

    - by Bigfatty
    I'm having a problem where, instead of reading a text file from a location string, I changed the code to read the text file from the project resources, and that messes up my program. I've also used the Insert Snippet method to get most of this code, so it is safe to say I don't know what is going on. Could someone please help?

        'reads the text out of a delimited text file and puts the words and hints into two separate arrays

        ' this works and made the program run
        Dim filename As String = Application.StartupPath + "\ProggramingList.txt"

        'this doesn't work and brings back an "Illegal characters in path" error
        dim filename as string = My.Resources.GamesList

        Dim fields As String()
        'my text files are delimited
        Dim delimiter As String = ","
        Using parser As New TextFieldParser(filename)
            parser.SetDelimiters(delimiter)
            While Not parser.EndOfData
                ' Read in the fields for the current line
                fields = parser.ReadFields()
                ' Add code here to use data in fields variable.
                'put the result into two arrays (the fields are the arrays im talking about).
                'one holds the words, and one holds the corresponding hint
                Programingwords(counter) = Strings.UCase(fields(0))
                counter += 1
                'this is where the hint is at
                Programingwords(counter) = (fields(1))
                counter += 1
            End While
        End Using

    Read the article

  • Data Validation of a Comma Delimited List

    - by Brad
    I need a simple way of taking a comma-separated list in a cell and providing a drop-down box to select one of its values. For example, the cell could contain:

        24, 32, 40, 48, 56, 64

    And in a further cell, using Data Validation, I want to provide a drop-down list to select ONE of those values. I need to do this without VBA or macros, please. Apologies, I should add that I want this to work with Excel 2010 and later. I have been playing around with counting the number of commas in the list and then trying to split it into a number of rows of single numbers, with no joy yet.

    Read the article

  • Limit user input to allowable comma delimited words with regular expression using javascript

    - by Marc
    I want to force the user to enter any combination of the following words. The words need to be comma delimited, with no comma at the beginning or end of the string, and the user should only be able to enter each word once.

    Examples:

        admin
        basic,ectech
        admin,ectech,advanced
        basic,advanced,admin,ectech

    My attempt:

        ^((basic|advanced)|admin|ectech)((,basic|,advanced)|,admin|,ectech){0,2}$

    Thanks, Marc
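
    A sketch of one possible pattern (written and tested here in Python, but it uses only features JavaScript regexes also support): the alternation lists the allowed words, the {0,3} bound allows up to four comma-separated items, and a backreference-based negative lookahead rejects any word that appears twice. Treat it as a starting point to test, not a verified answer.

        import re

        # Word list taken from the question; everything else is a suggested pattern.
        WORDS = r"(?:basic|advanced|admin|ectech)"
        PATTERN = re.compile(
            r"^(?!.*\b(" + WORDS + r")\b.*\b\1\b)"   # no word may occur twice
            + WORDS + r"(?:," + WORDS + r"){0,3}$"
        )

        for s in ["admin", "basic,ectech", "basic,advanced,admin,ectech",
                  "basic,basic", ",admin", "admin,"]:
            print(s, bool(PATTERN.match(s)))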

    Read the article

  • Potential annoyances of tab delimited Python source?

    - by user86432
    I want to start a new project, and I want this to be my first Python project. I was looking through the style guide, http://www.python.org/dev/peps/pep-0008/, which "strongly recommends" using a 4-space indentation style for new projects. But I just hate this idea! In my opinion, tabs are better for this purpose. What annoyances could crop up one day if another developer wanted to work on my tab-delimited files?

    Read the article

  • fastest way to convert a tab-delimited file to csv in linux

    - by andrewj
    I have a tab-delimited file that has over 200 million lines. What's the fastest way in Linux to convert this to a CSV file? This file does have multiple lines of header information, which I'll need to strip out down the road, but the number of header lines is known. I have seen suggestions for sed and gawk, but I wonder if there is a "preferred" choice.
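
    Not the sed/gawk answer the question asks for, but as a correctness baseline: a plain tab-to-comma substitution breaks on fields that already contain commas, whereas the csv module quotes them. A minimal Python sketch, assuming hypothetical file names ("data.tsv", "data.csv") and a placeholder header-line count:

        import csv

        HEADER_LINES = 3  # placeholder; the question says this count is known

        with open("data.tsv", newline="") as src, open("data.csv", "w", newline="") as dst:
            reader = csv.reader(src, delimiter="\t")
            writer = csv.writer(dst)          # quotes fields that contain commas
            for i, row in enumerate(reader):
                if i < HEADER_LINES:
                    continue                  # strip the known header lines
                writer.writerow(row)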

    Read the article

  • dos batch iterate through a delimited string

    - by bjax-bjax
    I have a delimited list of IPs I'd like to process individually. The list length is unknown ahead of time. How do I split and process each item in the list?

        @echo off
        FOR /f "tokens=* delims=," %%a IN ("127.0.0.1,192.168.0.1,10.100.0.1") DO call :sub %%a
        :sub
        echo In subroutine
        echo %1
        exit /b

    Outputs:

        In subroutine
        127.0.0.1
        In subroutine
        ECHO is off.

    Read the article

  • Text viewing in a delimited space

    - by Maurizio Reginelli
    I need to display text within a delimited space. If I use a simple TextBlock I have a problem: when the text is longer than the available space, it is cropped at the end. I tried putting the TextBlock inside a Viewbox: this works for text longer than the available space, but it increases the size of the text when it is shorter. Is there a way to reduce the size of the text only when it is longer than the available space?

    Read the article

  • Passing delimited string to stored procedure to search database

    - by rs
    How can I pass a string delimited by spaces or commas to a stored procedure and filter the result? I'm trying to do something like:

        Parameter   Value
        --------------------------
        @keywords   key1 key2 key3

    Then, in the stored procedure, I want to:

        1. find all records with first or last name like key1
        2. filter step 1 with first or last name like key2
        3. filter step 2 with first or last name like key3

    Read the article

  • how to read a string from a \n delimited file

    - by Matias
    I'm trying to read a return-delimited file full of phrases and put each phrase into a string. The problem is that when I try to read the file with

        fscanf(file, "%50s\n", string);

    the string only contains one word: when it bumps into a space it stops reading the string.

    Read the article

  • Return all users from group(s) specified as comma delimited value

    - by todor
    I have the following two-table scenario:

        users
        id  groups
        1   1,2,3
        2   2,3
        3   1,3
        4   3

    and

        groups
        id
        1
        2
        3

    How do I return the IDs of all users that belong to groups 2 and 1, for example? Should I look into a join, a helper group_membership table, or a function to separate the comma delimited group IDs, to get something like this:

        group_membership
        user_id  group_id
        1        1
        1        2
        1        3
        2        2
        2        3
        ...      ...

    Read the article

  • Processing a tab delimited file with shell script

    - by Lilly Tooner
    Hello, normally I would use Python/Perl for this procedure, but I find myself (for political reasons) having to pull this off using a bash shell. I have a large tab delimited file that contains six columns, and the second column is integers. I need a shell script solution that would verify that the file indeed has six columns and that the second column is indeed integers. I am assuming that I would need to use sed/awk here somewhere. The problem is that I'm not that familiar with sed/awk. Any advice would be appreciated. Many thanks! Lilly

    Read the article

  • Lucene.Net support phrases?: What is best approach to tokenize comma-delimited data (atomically) in

    - by Pete Alvin
    I have a database with a column I wish to index that has comma-delimited names, e.g.,

        User.FullNameList = "Helen Ready, Phil Collins, Brad Paisley"

    I prefer to tokenize each name atomically (each name as a whole searchable entity). What is the best approach for this? Did I miss a simple option to set the tokenizer delimiter? Do I have to subclass or write my own class to roll my own tokenizer? Something else? ;) Or does Lucene.Net not support phrases? Or is it smart enough to handle this use case automatically? I'm sure I'm not the first person to have to do this. Googling produced no noticeable solutions.

    Read the article

  • Read tab delimited text file into MySQL table with PHP

    - by Simon S
    I am trying to read a series of tab delimited text files into existing MySQL tables. The code I have is quite simple:

        $lines = file("import/file_to_import.txt");
        foreach ($lines as $line_num => $line) {
            if ($line_num > 1) {
                $arr = explode("\t", $line);
                $sql = sprintf("INSERT INTO my_table VALUES('%s', '%s', '%s', %s, %s);",
                    trim((string)$arr[0]),
                    trim((string)$arr[1]),
                    trim((string)$arr[2]),
                    trim((string)$arr[3]),
                    trim((string)$arr[4]));
                mysql_query($sql, $database) or die(mysql_error());
            }
        }

    But no matter what I do (hence the casting before each variable in the sprintf statement) I get the "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1" error. I echo out the code, paste it into a MySQL editor, and it runs fine; it just won't execute from the PHP script. What am I doing wrong?? Si

    Read the article

  • Nhibernate ValueType Collection as delimited string in DB

    - by JWendel
    Hi, I have a legacy DB that I am mapping with NHibernate, and in several locations a list of strings or domain objects is mapped as a delimited string in the database: either 'string|string|string' in the value-type cases or 'domainID|domainID|domainID' in the reference-type cases. I know I can create a dummy property on the class and map to that field, but I would like to do it in a cleaner way, like when mapping enums to their string representation with the EnumStringType class. Is an IUserType the way to go here? Thanks in advance /Johan

    Read the article

  • C#: split a string into runs of characters, numbers and delimited strings and process it

    - by nrkn
    OK, my regex is a bit rusty and I've been struggling with this particular problem... I need to split and process a string containing any number of the following, in any order: chars (lowercase letters only), quote-delimited strings, and ints.

    The strings are pretty weird (I don't have control over them). When there's more than one number in a row in the string, they're separated by a comma. They need to be processed in the same order that they appeared in the original string. For example, a string might look like:

        abc20a"Hi""OK"100,20b

    With this particular string the resulting call stack would look a bit like:

        ProcessLetters( new[] { 'a', 'b', 'c' } );
        ProcessInts( 20 );
        ProcessLetters( 'a' );
        ProcessStrings( new[] { "Hi", "OK" } );
        ProcessInts( new[] { 100, 20 } );
        ProcessLetters( 'b' );

    What I could do is treat it a bit like CSV, where you build tokens by processing the characters one at a time, but I think it could be done more easily with a regex?
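
    A possible starting point, sketched in Python rather than C# (the tokenize helper is made up here): one alternation with named groups matches each run, finditer walks the string in order, and consecutive tokens of the same kind can then be merged before dispatching to the Process* methods. The pattern itself should translate to .NET's Regex with little change.

        import re

        # One alternative per token kind; comma-joined digit runs are kept together.
        TOKEN = re.compile(r'(?P<letters>[a-z]+)|(?P<string>"[^"]*")|(?P<ints>\d+(?:,\d+)*)')

        def tokenize(s):
            for m in TOKEN.finditer(s):
                kind = m.lastgroup
                if kind == "letters":
                    yield "letters", list(m.group())
                elif kind == "string":
                    yield "string", m.group()[1:-1]              # strip the quotes
                else:
                    yield "ints", [int(n) for n in m.group().split(",")]

        # [('letters', ['a','b','c']), ('ints', [20]), ('letters', ['a']),
        #  ('string', 'Hi'), ('string', 'OK'), ('ints', [100, 20]), ('letters', ['b'])]
        print(list(tokenize('abc20a"Hi""OK"100,20b')))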

    Read the article

  • Find duplicate lines based on some delimited fields in a line

    - by Oliv
    Hello, I have a file with lines whose fields are delimited by "|". I have to extract the lines that are identical based on some of the fields (i.e. find lines which contain the same values for fields 1, 2, 3, 12, and 13). The contents of the other fields have no importance for the search, but the extracted lines have to be output complete. Can anyone tell me how I can do that in KSH scripting (for example, a script with some arguments (order dependent) that define the field separator and the fields which have to be compared to find duplicate lines in the input file)? Thanks in advance and kind regards, Oli

    Read the article

  • Looking for a good text parsing library for C#

    - by Chris Stewart
    Has anyone run across a quality library that will parse, line by line, CSV, tab-delimited, and Excel files? I've started to do it manually but have noticed some of the intricacies in parsing a comma-delimited file, such as situations where a cell has a comma in it as part of the data (blah,"LastName, Jr.",blah,blah).

    Read the article

  • Correct escaping of delimited identifiers in SQL Server without using QUOTENAME

    - by Ross Bradbury
    Is there anything else the code must do to sanitize identifiers (table, view, column) other than wrap them in double quotation marks and "double up" any double quotation marks present in the identifier name? References would be appreciated.

    I have inherited a code base that has a custom object-relational mapping (ORM) system. SQL cannot be written in the application, but the ORM must still eventually generate the SQL to send to SQL Server. All identifiers are quoted with double quotation marks:

        string QuoteName(string identifier)
        {
            return "\"" + identifier.Replace("\"", "\"\"") + "\"";
        }

    If I were building this dynamic SQL in SQL, I would use the built-in SQL Server QUOTENAME function:

        declare @identifier nvarchar(128);
        set @identifier = N'Client"; DROP TABLE [dbo].Client; --';

        declare @delimitedIdentifier nvarchar(258);
        set @delimitedIdentifier = QUOTENAME(@identifier, '"');

        print @delimitedIdentifier;
        -- "Client""; DROP TABLE [dbo].Client; --"

    I have not found any definitive documentation about how to escape quoted identifiers in SQL Server. I have found Delimited Identifiers (Database Engine), and I also saw this stackoverflow question about sanitizing. If the ORM had to call the QUOTENAME function just to quote the identifiers, that would be a lot of traffic to SQL Server that should not be needed. The ORM seems to be pretty well thought out with regard to SQL injection. It is in C# and predates the NHibernate port and Entity Framework etc. All user input is sent using ADO.NET SqlParameter objects; it is just the identifier names that I am concerned about in this question. This needs to work on SQL Server 2005 and 2008.

    Read the article

  • Parsing tab delimited file with double quotes in Perl

    - by sfactor
    I have a data set that is tab delimited, with the user-agent strings in double quotes. I need to parse each of these columns, and based on the answer to my other post I used the Text::CSV module.

        94410634 0 GET "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB6.6; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; AskTB5.5)" 1

    The code is a simple one:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Text::CSV;

        my $csv = Text::CSV->new(sep_char => "\t");

        while (<>) {
            if ($csv->parse($_)) {
                my @columns = $csv->fields();
                print "@columns\n";
            } else {
                my $err = $csv->error_input;
                print "Failed to parse line: $err";
            }
        }

    But I get the "Failed to parse line:" error when I try it on this data set. What am I doing wrong? I need to extract the 4th column containing the user-agent strings for further processing.

    Read the article

  • Python: Parsing a colon delimited file with various counts of fields

    - by Mark
    I'm trying to parse a few files with the following format in 'clientname'.txt:

        hostname:comp1
        time: Fri Jan 28 20:00:02 GMT 2011
        ip:xxx.xxx.xx.xx
        fs:good:45
        memory:bad:78
        swap:good:34
        Mail:good

    Each section is delimited by a ":", but where lines 0, 2, and 6 have 2 fields, lines 1 and 3-5 have 3 or more fields. (A big issue I've had trouble with is the time: line, since 20:00:02 is really a time and not 3 separate fields.) I have several files like this that I need to parse. There are many more lines in some of these files with multiple fields.

        ...
        for i in clients:
            if os.path.isfile(rpt_path + i + rpt_ext):
                # if the rpt exists then do this
                rpt = rpt_path + i + rpt_ext
                l_count = 0
                for line in open(rpt, "r"):
                    s_line = line.rstrip()
                    part = s_line.split(':')
                    print part
                    l_count = l_count + 1
            else:
                # else break
                break

    First I'm checking if the file exists; if it does, then I open the file and parse it (eventually). As of now I'm just printing the output (print part) to make sure it's parsing right. Honestly, the only trouble I'm having at this point is the time: line. How can I treat that line specifically differently from all the others? The time field is ALWAYS the 2nd line in all of my report files.
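
    One possible approach, sketched as a standalone helper rather than patched into the loop above (parse_report is a made-up name): rely on the fact that the time line is always the 2nd line, take the key from the first ":" only, keep the time value whole, and split every other value on the remaining colons.

        def parse_report(path):
            record = {}
            with open(path) as rpt:
                for line_no, line in enumerate(rpt):
                    line = line.rstrip()
                    if not line:
                        continue
                    key, _, value = line.partition(':')
                    if line_no == 1:                    # the time: line is always 2nd
                        record[key] = value.strip()     # keep "Fri Jan 28 20:00:02 GMT 2011" whole
                    else:
                        record[key] = value.split(':')  # e.g. fs -> ['good', '45']
            return record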

    Read the article

  • Parsing concatenated, non-delimited XML messages from TCP-stream using C#

    - by thaller
    I am trying to parse XML messages which are sent to my C# application over TCP. Unfortunately, the protocol cannot be changed, the XML messages are not delimited, and no length prefix is used. Moreover, the character encoding is not fixed, but each message starts with an XML declaration <?xml>. The question is: how can I read one XML message at a time, using C#?

    Up to now, I have tried to read the data from the TCP stream into a byte array and use it through a MemoryStream. The problem is that the buffer might contain more than one XML message, or the first message may be incomplete. In these cases, I get an exception when trying to parse it with XmlReader.Read or XmlDocument.Load, but unfortunately the XmlException does not really allow me to distinguish the problem (except by parsing the localized error string). I tried using XmlReader.Read and counting the number of Element and EndElement nodes; that way I know when I am finished reading the first, entire XML message. However, there are several problems. If the buffer does not yet contain the entire message, how can I distinguish the XmlException from an actually invalid, non-well-formed message? In other words, if an exception is thrown before reading the first root EndElement, how can I decide whether to abort the connection with an error, or to collect more bytes from the TCP stream? If no exception occurs, the XmlReader is positioned at the start of the root EndElement. Casting the XmlReader to IXmlLineInfo gives me the current LineNumber and LinePosition; however, it is not straightforward to get the byte position where the EndElement really ends. In order to do that, I would have to convert the byte array into a string (with the encoding specified in the XML declaration), seek to LineNumber,LinePosition and convert that back to a byte offset. I tried to do that with StreamReader.ReadLine, but the stream reader gives no public access to the current byte position. All this seems very inelegant and non-robust. I wonder if you have ideas for a better solution. Thank you.

    EDIT: I looked around and think that the situation is as follows (I might be wrong, corrections are welcome):

    - I found no way for the XmlReader to continue parsing a second XML message (at least not if the second message has an XmlDeclaration). XmlTextReader.ResetState could do something similar, but for that I would have to assume the same encoding for all messages. Therefore I could not connect the XmlReader directly to the TcpStream.
    - After closing the XmlReader, the buffer is not positioned at the reader's last position, so it is not possible to close the reader and use a new one to continue with the next message. I guess the reason for this is that the reader could not successfully seek on every possible input stream.
    - When XmlReader throws an exception, it cannot be determined whether it happened because of a premature EOF or because of non-well-formed XML. XmlReader.EOF is not set in case of an exception. As a workaround I derived my own MemoryBuffer, which returns the very last byte as a single byte. This way I know that the XmlReader was really interested in the last byte, and a following exception is likely due to a truncated message. (This is kinda sloppy, in that it might not detect every non-well-formed message; however, after appending more bytes to the buffer, sooner or later the error will be detected.)

    I could cast my XmlReader to the IXmlLineInfo interface, which gives access to the LineNumber and the LinePosition of the current node.
So after reading the first message I remember these positions and use it to truncate the buffer. Here comes the really sloppy part, because I have to use the character encoding to get the byte position. I am sure you could find test cases for the code below where it breaks (e.g. internal elements with mixed encoding). But up to now it worked for all my tests. The parser class follows here -- may it be useful (I know, its very far from perfect...) class XmlParser { private byte[] buffer = new byte[0]; public int Length { get { return buffer.Length; } } // Append new binary data to the internal data buffer... public XmlParser Append(byte[] buffer2) { if (buffer2 != null && buffer2.Length > 0) { // I know, its not an efficient way to do this. // The EofMemoryStream should handle a List<byte[]> ... byte[] new_buffer = new byte[buffer.Length + buffer2.Length]; buffer.CopyTo(new_buffer, 0); buffer2.CopyTo(new_buffer, buffer.Length); buffer = new_buffer; } return this; } // MemoryStream which returns the last byte of the buffer individually, // so that we know that the buffering XmlReader really locked at the last // byte of the stream. // Moreover there is an EOF marker. private class EofMemoryStream: Stream { public bool EOF { get; private set; } private MemoryStream mem_; public override bool CanSeek { get { return false; } } public override bool CanWrite { get { return false; } } public override bool CanRead { get { return true; } } public override long Length { get { return mem_.Length; } } public override long Position { get { return mem_.Position; } set { throw new NotSupportedException(); } } public override void Flush() { mem_.Flush(); } public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); } public override void SetLength(long value) { throw new NotSupportedException(); } public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); } public override int Read(byte[] buffer, int offset, int count) { count = Math.Min(count, Math.Max(1, (int)(Length - Position - 1))); int nread = mem_.Read(buffer, offset, count); if (nread == 0) { EOF = true; } return nread; } public EofMemoryStream(byte[] buffer) { mem_ = new MemoryStream(buffer, false); EOF = false; } protected override void Dispose(bool disposing) { mem_.Dispose(); } } // Parses the first xml message from the stream. // If the first message is not yet complete, it returns null. // If the buffer contains non-wellformed xml, it ~should~ throw an exception. // After reading an xml message, it pops the data from the byte array. public Message deserialize() { if (buffer.Length == 0) { return null; } Message message = null; Encoding encoding = Message.default_encoding; //string xml = encoding.GetString(buffer); using (EofMemoryStream sbuffer = new EofMemoryStream (buffer)) { XmlDocument xmlDocument = null; XmlReaderSettings settings = new XmlReaderSettings(); int LineNumber = -1; int LinePosition = -1; bool truncate_buffer = false; using (XmlReader xmlReader = XmlReader.Create(sbuffer, settings)) { try { // Read to the first node (skipping over some element-types. // Don't use MoveToContent here, because it would skip the // XmlDeclaration too... while (xmlReader.Read() && (xmlReader.NodeType==XmlNodeType.Whitespace || xmlReader.NodeType==XmlNodeType.Comment)) { }; // Check for XML declaration. // If the message has an XmlDeclaration, extract the encoding. 
switch (xmlReader.NodeType) { case XmlNodeType.XmlDeclaration: while (xmlReader.MoveToNextAttribute()) { if (xmlReader.Name == "encoding") { encoding = Encoding.GetEncoding(xmlReader.Value); } } xmlReader.MoveToContent(); xmlReader.Read(); break; } // Move to the first element. xmlReader.MoveToContent(); // Read the entire document. xmlDocument = new XmlDocument(); xmlDocument.Load(xmlReader.ReadSubtree()); } catch (XmlException e) { // The parsing of the xml failed. If the XmlReader did // not yet look at the last byte, it is assumed that the // XML is invalid and the exception is re-thrown. if (sbuffer.EOF) { return null; } throw e; } { // Try to serialize an internal data structure using XmlSerializer. Type type = null; try { type = Type.GetType("my.namespace." + xmlDocument.DocumentElement.Name); } catch (Exception e) { // No specialized data container for this class found... } if (type == null) { message = new Message(); } else { // TODO: reuse the serializer... System.Xml.Serialization.XmlSerializer ser = new System.Xml.Serialization.XmlSerializer(type); message = (Message)ser.Deserialize(new XmlNodeReader(xmlDocument)); } message.doc = xmlDocument; } // At this point, the first XML message was sucessfully parsed. // Remember the lineposition of the current end element. IXmlLineInfo xmlLineInfo = xmlReader as IXmlLineInfo; if (xmlLineInfo != null && xmlLineInfo.HasLineInfo()) { LineNumber = xmlLineInfo.LineNumber; LinePosition = xmlLineInfo.LinePosition; } // Try to read the rest of the buffer. // If an exception is thrown, another xml message appears. // This way the xml parser could tell us that the message is finished here. // This would be prefered as truncating the buffer using the line info is sloppy. try { while (xmlReader.Read()) { } } catch { // There comes a second message. Needs workaround for trunkating. truncate_buffer = true; } } if (truncate_buffer) { if (LineNumber < 0) { throw new Exception("LineNumber not given. Cannot truncate xml buffer"); } // Convert the buffer to a string using the encoding found before // (or the default encoding). string s = encoding.GetString(buffer); // Seek to the line. int char_index = 0; while (--LineNumber > 0) { // Recognize \r , \n , \r\n as newlines... char_index = s.IndexOfAny(new char[] {'\r', '\n'}, char_index); // char_index should not be -1 because LineNumber>0, otherwise an RangeException is // thrown, which is appropriate. char_index++; if (s[char_index-1]=='\r' && s.Length>char_index && s[char_index]=='\n') { char_index++; } } char_index += LinePosition - 1; var rgx = new System.Text.RegularExpressions.Regex(xmlDocument.DocumentElement.Name + "[ \r\n\t]*\\>"); System.Text.RegularExpressions.Match match = rgx.Match(s, char_index); if (!match.Success || match.Index != char_index) { throw new Exception("could not find EndElement to truncate the xml buffer."); } char_index += match.Value.Length; // Convert the character offset back to the byte offset (for the given encoding). int line1_boffset = encoding.GetByteCount(s.Substring(0, char_index)); // remove the bytes from the buffer. buffer = buffer.Skip(line1_boffset).ToArray(); } else { buffer = new byte[0]; } } return message; } }

    Read the article

  • Regex: Comma delimited integers

    - by Metju
    Hi guys, I'm trying to create a regex that accepts an empty string, a single integer, or multiple integers separated by commas, with no starting or ending comma. I managed to find this, but I cannot understand how to remove the digit limit:

        ^\d{1,10}([,]\d{10})*$
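
    A sketch of one pattern that drops the digit limit and also allows the empty string, shown here as a small Python test (worth checking against your own cases):

        import re

        # The whole list is optional (allows ""), and \d+ removes the length cap.
        pattern = re.compile(r"^(?:\d+(?:,\d+)*)?$")

        for s in ["", "7", "12,345,6", ",1", "1,", "1,,2"]:
            print(repr(s), bool(pattern.match(s)))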

    Read the article

  • the best way to convert delimited to fixed width

    - by ehosca
    What is the BEST way to convert this:

        FirstName,LastName,Title,BirthDate,HireDate,City,Region
        Nancy,Davolio,Sales Representative,1948-12-08,1992-05-01,Seattle,WA
        Andrew,Fuller,Vice President Sales,1952-02-19,1992-08-14,Tacoma,WA
        Janet,Leverling,Sales Representative,1963-08-30,1992-04-01,Kirkland,WA
        Margaret,Peacock,Sales Representative,1937-09-19,1993-05-03,Redmond,WA
        Steven,Buchanan,Sales Manager,1955-03-04,1993-10-17,London,NULL
        Michael,Suyama,Sales Representative,1963-07-02,1993-10-17,London,NULL
        Robert,King,Sales Representative,1960-05-29,1994-01-02,London,NULL
        Laura,Callahan,Inside Sales Coordinator,1958-01-09,1994-03-05,Seattle,WA
        Anne,Dodsworth,Sales Representative,1966-01-27,1994-11-15,London,NULL

    to this:

        FirstName  LastName             Title                          BirthDate   HireDate   City            Region
        ---------- -------------------- ------------------------------ ----------- ---------- --------------- ---------------
        Nancy      Davolio              Sales Representative           1948-12-08  1992-05-01 Seattle         WA
        Andrew     Fuller               Vice President, Sales          1952-02-19  1992-08-14 Tacoma          WA
        Janet      Leverling            Sales Representative           1963-08-30  1992-04-01 Kirkland        WA
        Margaret   Peacock              Sales Representative           1937-09-19  1993-05-03 Redmond         WA
        Steven     Buchanan             Sales Manager                  1955-03-04  1993-10-17 London          NULL
        Michael    Suyama               Sales Representative           1963-07-02  1993-10-17 London          NULL
        Robert     King                 Sales Representative           1960-05-29  1994-01-02 London          NULL
        Laura      Callahan             Inside Sales Coordinator       1958-01-09  1994-03-05 Seattle         WA
        Anne       Dodsworth            Sales Representative           1966-01-27  1994-11-15 London          NULL
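
    The question doesn't name a tool, so as one illustration only (a minimal Python sketch; the column widths are hard-coded to match the sample output and the input file name is hypothetical): read the delimited file with the csv module and pad each field with str.ljust.

        import csv

        # Widths taken from the sample output above; adjust as needed.
        WIDTHS = [10, 20, 30, 11, 10, 15, 15]

        def to_fixed_width(rows, widths):
            for row in rows:
                yield " ".join(field.ljust(w) for field, w in zip(row, widths)).rstrip()

        with open("employees.csv", newline="") as f:      # hypothetical file name
            rows = list(csv.reader(f))

        header, data = rows[0], rows[1:]
        print(next(to_fixed_width([header], WIDTHS)))
        print(" ".join("-" * w for w in WIDTHS))
        for line in to_fixed_width(data, WIDTHS):
            print(line)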

    Read the article

  • Comma or semicolon-delimited AutoComplete TextBox

    - by Ecyrb
    I would like to have a TextBox that supports AutoComplete and lets users type multiple words separated by a comma or semicolon, offering suggestions for each word. I have a standard TextBox with:

        textBox.AutoCompleteCustomSource.AddRange(new[] { "apple", "banana", "carrot" });
        textBox.AutoCompleteMode = AutoCompleteMode.SuggestAppend;
        textBox.AutoCompleteSource = AutoCompleteSource.CustomSource;

    Unfortunately it will only suggest for the first word; anything typed after that and it stops suggesting. I want to be able to perform the following scenario:

        1. Type "ap"
        2. Have it suggest "apple"
        3. Press the comma
        4. Have it fill in "apple," with the cursor after the comma
        5. Type "ba"
        6. Have it suggest "banana"
        7. Press the comma
        8. Have it append "banana," resulting in "apple,banana,"

    I've tried Googling for a solution, but haven't had much luck. This seems to be a popular feature for web apps, but apparently not for WinForms. Any suggestions?

    Read the article
