Search Results

Search found 428 results on 18 pages for 'delimited'.

Page 11/18 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Code golf: Reverse Polish notation (postfix) evaluator

    - by Dario
    After having had code-golf contests on ordinary mathematical expressions, I thought it would also be quite interesting to see how short an evaluation function for postfix notation (RPN) can be. Examples of RPN: "1 2 +" == 3; "1 2 + 3 *" == 9; "0 1 2 3 + + - 2 6 * + 3 / 1 -" == 1; "3 2 / 2.0 +" == 3.5. To keep things short, only +, -, * and /, each in its binary form, must be supported. Operators and operands are delimited by a single space, and divisions by zero don't have to be handled. The resulting code should be a function that takes the postfix string as input and returns the resulting number.
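
    Not a golfed entry, but a minimal Python sketch of the stack evaluation the question describes (operands are pushed, an operator pops two values and pushes the result), just to pin down the expected behaviour:

        def eval_rpn(expr):
            """Evaluate a space-delimited postfix expression using binary +, -, * and /."""
            ops = {"+": lambda a, b: a + b,
                   "-": lambda a, b: a - b,
                   "*": lambda a, b: a * b,
                   "/": lambda a, b: a / b}
            stack = []
            for token in expr.split():
                if token in ops:
                    b, a = stack.pop(), stack.pop()   # right-hand operand is on top
                    stack.append(ops[token](a, b))
                else:
                    stack.append(float(token))
            return stack[0]

        print(eval_rpn("0 1 2 3 + + - 2 6 * + 3 / 1 -"))  # 1.0
        print(eval_rpn("3 2 / 2.0 +"))                    # 3.5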

    Read the article

  • DB2 load partitioned data in parallel

    - by Erik Paulson
    I have a 10-node DB2 9.5 database, with raw data on each machine (i.e. node1:/scratch/data/dataset.1, node2:/scratch/data/dataset.2, ... node10:/scratch/data/dataset.10). There is no shared NFS mount - none of my machines could handle all of the datasets combined. Each line of a dataset file is a long string of text, column delimited. The first column is the key. I don't know the hash function that DB2 will use, so the dataset is not pre-partitioned. Short of renaming all of my files, is there any way to get DB2 to load them all in parallel? I'm trying to do something like load from '/scratch/data/dataset' of del modified by coldel| fastparse messages /dev/null replace into TESTDB.data_table part_file_location '/scratch/data'; but I have no idea how to suggest to DB2 that it should look for dataset.1 on the first node, etc.

    Read the article

  • Best way to implement keywords for image upload gallery

    - by Dan Berlyoung
    I'm starting to spec out an image gallery type system similar to Facebook's. Members of the site will be able to create image galleries and upload images for others to view. Images will have keywords that the uploader can specify. Here's the question: what's the best way to model this? With image and keyword tables linked via a HABTM relation? Or a single image table with the keywords saved as comma-delimited values in a text field in the image record, then searched using a LIKE or FULL TEXT index? I want to be able to pull up all images containing a given keyword as well as generate a keyword cloud. I'm leaning toward the HABTM setup, but I wanted to see what everyone else thought. Thanks!!
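
    For what it's worth, a minimal sketch of the join-table (HABTM) layout, using SQLite and Python purely for illustration - the table and column names are invented, not taken from any particular framework:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE images   (id INTEGER PRIMARY KEY, filename TEXT);
            CREATE TABLE keywords (id INTEGER PRIMARY KEY, word TEXT UNIQUE);
            CREATE TABLE image_keywords (
                image_id   INTEGER REFERENCES images(id),
                keyword_id INTEGER REFERENCES keywords(id),
                PRIMARY KEY (image_id, keyword_id)
            );
        """)

        # All images tagged with a given keyword: an indexed join, no LIKE scan.
        images = db.execute("""
            SELECT i.filename
            FROM images i
            JOIN image_keywords ik ON ik.image_id = i.id
            JOIN keywords k        ON k.id = ik.keyword_id
            WHERE k.word = ?
        """, ("sunset",)).fetchall()

        # Keyword cloud: each keyword with its usage count.
        cloud = db.execute("""
            SELECT k.word, COUNT(*) AS uses
            FROM keywords k
            JOIN image_keywords ik ON ik.keyword_id = k.id
            GROUP BY k.word
            ORDER BY uses DESC
        """).fetchall()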

    Read the article

  • MySQL Export with Column Heading

    - by st4nt0n
    Hello - I am very, very new to MySQL. I've got experience in general technical terms, but not with the syntax or concepts of MySQL. I have been tasked with exporting a table from MySQL into a pipe-delimited .txt or .xls file that I can use to manually add 7,500 more records to, then import back into the table. I tried to use INTO OUTFILE, but I don't get column headings, which I need for reference to merge the new records. Is there a good resource that can explain this to a complete novice? I would usually go down to my bookstore and start learning, but I'm on a bit of a time crunch. Thanks all!
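
    SELECT ... INTO OUTFILE writes only data rows, so one common workaround is to pull the rows through a client and write the header line yourself from the cursor metadata. A rough Python sketch - the driver (PyMySQL), credentials, table and file names are all placeholders:

        import csv
        import pymysql  # any DB-API driver works the same way

        conn = pymysql.connect(host="localhost", user="me", password="secret", database="mydb")
        cur = conn.cursor()
        cur.execute("SELECT * FROM my_table")

        with open("my_table.txt", "w", newline="") as f:
            out = csv.writer(f, delimiter="|")
            out.writerow(col[0] for col in cur.description)  # column names first
            out.writerows(cur)                               # then the data rows

        conn.close()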

    Read the article

  • Excel validation range limits

    - by richardtallent
    When Excel saves a file, it attempts to combine identical Validation settings into a single rule with multiple ranges. This creates one of three issues, depending on the file type you choose to save: (1) when saving as a standard Excel file (Office 2000 BIFF), there is a maximum of 1024 non-contiguous ranges that can share the same validation setting; (2) when saving as a SpreadsheetML (Office 2002/2003 XML) file, you are limited to the number of non-contiguous ranges that can be represented, comma-delimited in R1C1 format, in 1024 characters; (3) when saving as an Office Open XML file (Office 2007 *.xlsx), there is a maximum of 511 non-contiguous ranges that can share the same validation setting. (I don't have Office 2007; I'm using the file converter for Office 2003.) Once you bust any of these limits, the remaining ranges with the same Validation settings have their Validation settings wiped. For (1) and (3), Excel warns you that it can't save all of the formatting, but for (2) it does not.

    Read the article

  • Printing array with delimiters in Perl

    - by Mark B
    I have an array in Perl I want to print with space delimiters between the elements, except after every 10th element, which should be newline delimited. There aren't any spaces in the elements, if that matters. I've written a function to do it with a for loop and a counter, but I wondered if there's a better/shorter/canonical Perl way, perhaps a special join syntax or similar. My function, to illustrate:

        sub PrintArrayWithNewlines {
            my $counter = 0;
            my $newlineIndex = shift @_;
            foreach my $item (@_) {
                ++$counter;
                print "$item";
                if ($counter == $newlineIndex) {
                    $counter = 0;
                    print "\n";
                }
                else {
                    print " ";
                }
            }
        }
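
    Not Perl, but the usual shape of the shorter answer is "chunk the array, then join twice". A Python sketch of that idea, for illustration only:

        def format_array(items, per_line=10, sep=" "):
            """Join items with `sep`, starting a new line after every `per_line` items."""
            chunks = [items[i:i + per_line] for i in range(0, len(items), per_line)]
            return "\n".join(sep.join(str(x) for x in chunk) for chunk in chunks)

        print(format_array(list(range(1, 26))))  # three lines: 10, 10 and 5 numbers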

    Read the article

  • How should I handle incomplete packet buffers?

    - by Benjamin Manns
    I am writing a client for a server that typically sends data as strings of 500 or fewer bytes. However, the data will occasionally exceed that, and a single set of data could contain 200,000 bytes for all the client knows (on initialization or significant events). However, I would prefer not to have each client running with a 50 MB socket buffer (if that's even possible). Each set of data is delimited by a null \0 character. What kind of structure should I look at for storing partially sent data sets? For example, the server may send ABCDEFGHIJKLMNOPQRSTUV\0WXYZ\0123!\0. I would want to process ABCDEFGHIJKLMNOPQRSTUV, WXYZ, and 123! independently. Also, the server could send ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890LOL123HAHATHISISREALLYLONG without the terminating character; I would want that data set stored somewhere for later appending and processing. Also, I'm using asynchronous socket methods (BeginSend, EndSend, BeginReceive, EndReceive), if that matters.
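
    Whatever the language (the question is C#), the usual pattern is a single growable buffer per connection: append each received chunk, split on the delimiter, hand off the complete messages, and keep whatever follows the last delimiter for the next receive. A small Python sketch of that idea:

        class DelimitedReceiver:
            """Accumulates received chunks and yields complete NUL-terminated messages."""

            def __init__(self):
                self._pending = b""                     # bytes seen after the last delimiter

            def feed(self, chunk):
                self._pending += chunk
                *complete, self._pending = self._pending.split(b"\0")
                return [m.decode() for m in complete]

        rx = DelimitedReceiver()
        print(rx.feed(b"ABCDEFGHIJKLMNOPQRSTUV\0WXYZ\0123!\0HALF"))  # ['ABCDEFGHIJKLMNOPQRSTUV', 'WXYZ', '123!']
        print(rx.feed(b"OFAMESSAGE\0"))                              # ['HALFOFAMESSAGE']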

    Read the article

  • Perl: parsing string enclosed by double quotes

    - by sfactor
    I need to parse tab/space-delimited files that have a lot of columns in Perl. The values are such that there are large strings enclosed within double quotes. These strings can contain any characters, such as tabs and spaces or anything else. When I try to parse them with the split function, it splits these strings as well. Now how can I make Perl understand that the strings within the " " are a single column entry? A simple example is:

        12 345546.67677 "Hello World!!!" -567.55656 0.5465767 "Hello_Again; "
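
    In Perl this kind of quote-aware splitting is usually handed to a module such as Text::ParseWords or Text::CSV rather than a bare split. Just to illustrate the behaviour being asked for, the same idea sketched in Python, where shlex keeps each quoted run as a single field:

        import shlex

        line = '12 345546.67677 "Hello World!!!" -567.55656 0.5465767 "Hello_Again; "'
        print(shlex.split(line))
        # ['12', '345546.67677', 'Hello World!!!', '-567.55656', '0.5465767', 'Hello_Again; ']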

    Read the article

  • C# pulling out columns from combobox text

    - by Mike
    What I have is four comboboxes and two files. If a column matches the combobox, I need to write it out to a file, but it has to be appended with the second combobox. So for example, Combobox1: Apple | Orange, Combobox2: Pineapple | Plum, and I have selected Apple and Plum. I need to search through a text file and find whichever columns are Apple or Plum (Orange|Pear|Peach|Turnip|Monkey|Apple|Grape|Plum) and then write out just the Apple|Plum columns to a new text file. Any help would be awesome! Better example - Combobox1 selected item: Apple, Combobox2 selected item: Plum.

        Text file:
        APPLE|Pear|Plum|Orange
        1|2|3|4
        215|3|45|98
        125|498|76|4
        4165|465|4|65

        Resulting file:
        1|3
        215|45
        125|76
        4165|4

    Thanks for the advice; I don't need help with adding to the comboboxes or reading the files, just with how to create a file from a delimited file having multiple columns.
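
    A sketch of the core step in Python (the question is C#; the file names and column names below are just the ones from the example): find the positions of the selected columns in the header row, then write only those positions for every following line:

        def extract_columns(in_path, out_path, wanted, delim="|"):
            """Copy only the columns named in `wanted` from a delimited file to a new file."""
            with open(in_path) as src, open(out_path, "w") as dst:
                header = src.readline().rstrip("\n").split(delim)
                keep = [header.index(name) for name in wanted]   # positions of the chosen columns
                for line in src:
                    fields = line.rstrip("\n").split(delim)
                    dst.write(delim.join(fields[i] for i in keep) + "\n")

        extract_columns("input.txt", "output.txt", ["APPLE", "Plum"])   # writes "1|3", "215|45", ...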

    Read the article

  • Alternative design for a synonyms table?

    - by Majid
    I am working on an app which is to suggest alternative words/phrases for input text. I have doubts about what might be a good design for the synonyms table. Design considerations: the number of synonyms is variable, i.e. football has one synonym (soccer), but "in particular" has two (particularly, specifically); if football is a synonym of soccer, the relation exists in the opposite direction as well; our goal is to query a word and find its synonyms; and we want to keep the table small and make adding new words easy. What comes to my mind is a two-column design with column A = word and column B = a delimited list of synonyms. Is there any better alternative? What about using two tables, one for words and the other for relations?
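
    One common variant of the relational direction, sketched with SQLite purely for illustration (table and column names are invented): give every synonym set a group id and map each word to its set. The relation is symmetric by construction, lookup is a single self-join, and adding a word is one insert:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE words (word TEXT PRIMARY KEY, group_id INTEGER NOT NULL)")
        db.execute("CREATE INDEX words_group ON words (group_id)")
        db.executemany("INSERT INTO words VALUES (?, ?)",
                       [("football", 1), ("soccer", 1),
                        ("in particular", 2), ("particularly", 2), ("specifically", 2)])

        # Synonyms of a word = the other words in the same group.
        rows = db.execute("""
            SELECT w2.word
            FROM words w1
            JOIN words w2 ON w2.group_id = w1.group_id AND w2.word <> w1.word
            WHERE w1.word = ?
            ORDER BY w2.word
        """, ("in particular",)).fetchall()
        print([w for (w,) in rows])   # ['particularly', 'specifically']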

    Read the article

  • Writing my own iostream utility class: Is this a good idea?

    - by Alex
    I have an application that wants to read word by word, delimited by whitespace, from a file. I am using code along these lines:

        std::istream in;
        string word;
        while (in.good()) {
            in >> word;
            // Processing, etc.
            ...
        }

    My issue is that the processing on the words themselves is actually rather light. The major time consumer is a set of MySQL queries I run. What I was thinking of is writing a buffered class that reads something like a kilobyte from the file, initializes a stringstream as a buffer, and performs extraction from that transparently, to avoid a great many IO operations. Thoughts and advice?

    Read the article

  • Export products and variants from MSSQL

    - by mickyjtwin
    I have a SQL DB that has a table of products, and another table which contains a list of the SKU variants of each product, if it has any. I want to export all the products and their SKUs into Excel. At the moment, I have a helper SQL function which performs the subquery against a product_id and concatenates all the SKUs into a comma-delimited string, e.g.:

        Product Code, Name, SKUs
        111           P1    77, 22, 11

    Is there an easier way to do this, so that each SKU is a row with the associated product code as well, i.e.:

        Product Code, Name, SKUs
        111           P1    77
        111           P1    22
        111           P1    11
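
    Since the variants already live in their own table, the one-row-per-SKU shape is a plain join rather than a concatenating helper function. A rough sketch of that query, wrapped in Python/pandas as one convenient way to land it in Excel - the connection string, table and column names are invented, and pandas needs an Excel writer such as openpyxl installed:

        import pandas as pd
        import pyodbc   # assumes an ODBC driver for SQL Server is available

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")

        query = """
            SELECT p.product_code, p.name, s.sku
            FROM   products p
            JOIN   product_skus s ON s.product_id = p.product_id
            ORDER  BY p.product_code
        """
        pd.read_sql(query, conn).to_excel("products_and_skus.xlsx", index=False)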

    Read the article

  • Can a .csv file be used as a data source in Visual Studio 2008?

    - by Kevin
    I'm pretty new to C# and Visual Studio. I'm writing a small program that will read a .csv file and then write the records read to a SQL Server database table. I can manually parse the .csv file, but I was wondering if it is possible to somehow "describe" the .csv file to Visual Studio so that I can use it as a data source? I should mention that the first two lines in the .csv file contain header information and the following lines are the actual comma-delimited data. Also, I should mention that this program is a stand-alone console program with no user interface.

    Read the article

  • How do you set the "global delimiter" in Excel using VBA (or unicorns)?

    - by DanM
    I've noticed that if I use the text-to-columns feature with comma as the delimiter, any comma-delimited data I paste into Excel after that will be automatically split into columns. This makes me think Excel must have some kind of global delimiter. If this is true, how would I set this global delimiter using Excel VBA? Is it possible to do this directly, or do I need to "trick" Excel by doing a text-to-columns on some junk data, then delete the data? My ultimate goal is to be able to paste in a bunch of data from different files using a macro, and have Excel automatically split it into columns according to the delimiter I set.

    Read the article

  • Code/Approach Golf: Find row in text file with too many columns

    - by awshepard
    Given a text file that is supposed to contain 10 tab-delimited columns (i.e. 9 tabs), I'd like to find all rows that have more than 10 columns (more than 9 tabs). Each row ends with CR-LF. Assume nothing about the data, field widths, etc., other than the above. Comments regarding the approach and/or working code would be extremely appreciated. Bonus for printing the line numbers of offending lines as well. Thanks in advance!
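
    One non-golfed way to state the check, sketched in Python: read the file named on the command line and report, with its line number, anything containing more than 9 tabs:

        import sys

        with open(sys.argv[1], newline="") as f:
            for lineno, line in enumerate(f, start=1):
                if line.rstrip("\r\n").count("\t") > 9:      # more than 10 tab-delimited columns
                    print(f"{lineno}: {line.rstrip()}")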

    Read the article

  • SQL Server union selects built dynamically from list of words

    - by Adam Tuttle
    I need to count occurrences of a list of words across all records in a given table. If I only had one word, I could do this:

        select count(id) as NumRecs where essay like '%word%'

    But my list could be hundreds or thousands of words, and I don't want to create hundreds or thousands of SQL requests serially; that seems silly. I had a thought that I might be able to create a stored procedure that would accept a comma-delimited list of words and, for each word, run the above query, then union them all together and return one huge dataset. (Sounds reasonable, right? But I'm not sure where to start with that approach...) Short of some weird thing with union, I might try to do something with a temp table -- inserting a row for each word and record count, and then returning select * from that temp table. If it's possible with a union, how? And does one approach have advantages (performance or otherwise) over the other?
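
    A sketch of the "build one big UNION" idea from the client side, in Python for illustration - the table name is an assumption, the %s placeholder style depends on the database driver, and the word list itself stays parameterised rather than being spliced into the SQL text:

        def word_count_query(words, table="essays", column="essay"):
            """Build a single UNION ALL query counting matching records for every word."""
            one = f"SELECT %s AS word, COUNT(id) AS NumRecs FROM {table} WHERE {column} LIKE %s"
            sql = "\nUNION ALL\n".join([one] * len(words))
            params = [p for w in words for p in (w, f"%{w}%")]   # word label, then its LIKE pattern
            return sql, params

        sql, params = word_count_query(["apple", "banana", "cherry"])
        # Execute with e.g. cursor.execute(sql, params)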

    Read the article

  • Export view data programmatically in Access/SQL Server

    - by andy
    We have an Access application front-end connected to a SQL Server 2000 database. We would like to be able to programmatically export the results of some views to whatever format we can (ideally Excel, but CSV / tab-delimited is fine). Up until now we've just hit F11, opened up the view, and hit File-Save As, but we're starting to get result sets with more than 16,000 rows, which can't be exported that way. I'd like some sort of server-side stored procedure we can trigger that will do this. I'm aware of the sp_makewebtask procedure that does this; however, it requires administrative rights on the server, and for obvious reasons we can't give that to everyone. Any ideas?

    Read the article

  • PHP or C# script to parse CSV table values to fill in one-to-many table

    - by Yaaqov
    I'm looking for an example of how to split out comma-delimited data in a field of one table and fill in a second table with those individual elements, in order to make a one-to-many relational database schema. This is probably really simple, but let me give an example. I'll start with everything in one table, WIDGET, which has a "states" field to contain the states that have that widget:

        Table: WIDGET
        =================================
        | id | unit | states            |
        =================================
        | 1  | abc  | AL,AK,CA          |
        ---------------------------------
        | 2  | lmn  | VA,NC,SC,GA,FL    |
        ---------------------------------
        | 3  | xyz  | KY                |
        =================================

    Now, what I'd like to create via code is a second table to be joined to WIDGET, called WIDGET_ST, that has widget id, widget state id, and widget state name fields, for example:

        Table: WIDGET_ST
        ==============================
        | w_id | w_st_id | w_st_name |
        ------------------------------
        | 1    | 1       | AL        |
        | 1    | 2       | AK        |
        | 1    | 3       | CA        |
        | 2    | 1       | VA        |
        | 2    | 2       | NC        |
        | 2    | 1       | SC        |
        | 2    | 2       | GA        |
        | 2    | 1       | FL        |
        | 3    | 1       | KY        |
        ==============================

    I am learning C# and PHP, so responses in either language would be great. Thanks.
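
    The core transformation is just "split the states field and number the pieces"; the resulting tuples are then inserted into WIDGET_ST. A small Python sketch of that step (numbering each state within its widget is one reasonable reading of the example's w_st_id column):

        widgets = [(1, "abc", "AL,AK,CA"),
                   (2, "lmn", "VA,NC,SC,GA,FL"),
                   (3, "xyz", "KY")]

        widget_st = [(w_id, st_id, state.strip())
                     for w_id, unit, states in widgets
                     for st_id, state in enumerate(states.split(","), start=1)]

        print(widget_st)
        # [(1, 1, 'AL'), (1, 2, 'AK'), (1, 3, 'CA'), (2, 1, 'VA'), (2, 2, 'NC'), ...]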

    Read the article

  • Export products and variants from SQL Server

    - by mickyjtwin
    I have a SQL Server DB that has a table of products, and another table which contains a list of the SKU variants of each product, if it has any. I want to export all the products and their SKUs into Excel. At the moment, I have a helper SQL function which performs the subquery against a product_id and concatenates all the SKUs into a comma-delimited string, e.g.:

        Product Code, Name, SKUs
        111           P1    77, 22, 11

    Is there an easier way to do this, so that each SKU is a row with the associated product code as well, i.e.:

        Product Code, Name, SKUs
        111           P1    77
        111           P1    22
        111           P1    11

    Read the article

  • SSIS - Can I get the column schema for a flat file source from a database?

    - by Steve Clement
    We receive a nightly data export from a vendor in the form of about 10 tab-delimited flat files without column headers. In addition, the vendor provides us with the SQL scripts for the database tables so that we can import the files into our system. Unfortunately, the vendor recently changed the schema for the flat files. Each file has upwards of 150 columns, and having to go through the DB schema and adjust column types on a Flat File Data Source in SSIS is extremely time-consuming, not to mention a royal pain. Since I know the file data layout in the database schema, is there any way I can dynamically pull that into a Flat File source to set the columns correctly? Or am I just stuck with manually setting everything?

    Read the article

  • Bash: how to suppress newlines?

    - by gilgongo
    I'm trying to extract fields from a pipe-delimited file and provide them as arguments to an external program in a loop. The file contains lines like this:

        value1|value2
        value3|value4

    So I came up with:

        while read line; do
            echo -n "${line}" | awk -F '|' '{print $1}';
            echo -n " something ";
            echo -n "${line}" | awk -F '|' '{print $2}';
            echo " somethingelse";
        done < <(cat $FILE)

    I want to see the following output:

        value1 something value2 somethingelse
        value3 something value4 somethingelse

    But instead I'm getting:

        value1
         something value2
         somethingelse
        value3
         something value4
         somethingelse

    Perhaps I shouldn't be using echo?

    Read the article

  • Split string into smaller parts with constraints [PHP RegEx HTML]

    - by Sadi
    Hello, I need to split a long string into an array with the following constraints: each part will have a limited number of characters (e.g. not more than 8000 characters); each part can contain multiple sentences (delimited by "." [full stop]) but never a partial sentence, except possibly the last part of the string (as the last part may not have any full stop). The string may contain HTML tags, and a tag itself cannot be divided (from its opening "<" to its closing ">"); that means an HTML tag should stay intact, but a starting tag and its ending tag can stay in different segments/chunks. I think a regular expression with preg_split can do it. Would you please help me with the proper regex? Thank you, Sadi
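
    Setting the HTML-tag constraint aside for a moment, the length-and-sentence part is often easier as a small loop than as a single regex: split into sentences, then greedily pack whole sentences into each chunk. A Python sketch of that idea (a single sentence longer than the limit would still come out oversized here):

        import re

        def split_text(text, limit=8000):
            """Pack whole sentences (split after a full stop) into chunks of at most `limit` chars."""
            sentences = re.split(r"(?<=\.)\s+", text)       # keep each full stop with its sentence
            chunks, current = [], ""
            for sentence in sentences:
                if current and len(current) + 1 + len(sentence) > limit:
                    chunks.append(current)
                    current = sentence
                else:
                    current = f"{current} {sentence}".strip()
            if current:
                chunks.append(current)
            return chunks

        print(split_text("One. Two. Three. Four.", limit=12))   # ['One. Two.', 'Three. Four.']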

    Read the article

  • How would I strip an '&' symbol out of an ESRI dropdown before any of its default control logic is

    - by Carter
    I have an ESRI dropdown control inside of an ESRI toolbar. One of the items in the dropdown needs to have an '&' symbol in it. As it turns out, the ESRI code builds its callback strings as &-delimited strings, so when an item is selected the parent toolbar immediately builds and handles the callback string. At one point it splits strings based on the '&', crashing the app. In effect, having an ampersand in an ESRI dropdown causes nasty stuff to happen when you select the item. What I need to do is find out how I can hop in before the callback handling starts and strip that & out. I was thinking that perhaps I'd have to create a custom ESRI toolbar control, but I'm not sure, and that'd be pretty undesirable. Any ideas?

    Read the article

  • Multiple LIKE in SQL

    - by ninumedia
    I want to search through multiple rows and obtain the row that contains a particular item. The table in MySQL is set up so each id has a unique comma-delimited list of values per row. Ex:

        id | order
        1  | 1,3,8,19,34,2,38
        2  | 4,7,2,190,38

    Now, if I wanted to pull the row that contains just the number 19, how would I go about doing this? The possibilities I could figure for where it appears in the list with a LIKE condition would be: "19,", ",19" and ",19,". I tried the following and I cannot obtain any results. Thank you for your help!

        SELECT * FROM categories WHERE order LIKE '19,%' OR '%,19%' OR '%,19%' LIMIT 0 , 30
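
    Each OR needs its own "order LIKE ..." clause, and the value has to be matched as the first item, the last item, a middle item, or the only item. A Python sketch that builds that condition for illustration (in MySQL, FIND_IN_SET(19, `order`) is usually the tidier way to express the same test, and real code should parameterise rather than interpolate values):

        def csv_item_condition(column, value):
            """WHERE fragment matching `value` as a whole item of a comma-delimited column."""
            v = str(value)
            return (f"({column} = '{v}'"
                    f" OR {column} LIKE '{v},%'"        # first item
                    f" OR {column} LIKE '%,{v}'"        # last item
                    f" OR {column} LIKE '%,{v},%')")    # somewhere in the middle

        print(csv_item_condition("`order`", 19))
        # (`order` = '19' OR `order` LIKE '19,%' OR `order` LIKE '%,19' OR `order` LIKE '%,19,%')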

    Read the article

  • PHP search and replace

    - by Dave
    I am trying to merge database fields into a document (RTF) using PHP, i.e. I have a document that starts:

        Dear Sir,
        Customer Name: [customer_name], Date of order: [order_date]

    After retrieving the appropriate database record, I can use a simple search and replace to insert the database field in the right place. So far so good. I would, however, like to have a little more control over the data before it is replaced. For example, I may wish to title-case it, or convert a delimited string into a list with carriage returns. I would therefore like to be able to add extra formatting commands to the field to be replaced, e.g.:

        Dear Sir,
        Customer Name: [customer_name, TC], Date of order: [order_date, Y/M/D]

    There may be more than one formatting command per field. Is there a way that I can now search for these strings? The format of the strings is not set in stone, so if I have to change the format then I can. Any suggestions appreciated.
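
    One way to structure it is a single placeholder pattern plus a callback that applies any trailing commands; in PHP that is the job of preg_replace_callback. The sketch below shows the same shape in Python, where the field names, the TC/LIST commands and the sample record are just made-up examples:

        import re

        record = {"customer_name": "john smith", "order_date": "2010-04-01"}

        def render(match):
            field, *commands = [p.strip() for p in match.group(1).split(",")]
            value = str(record.get(field, ""))
            for cmd in commands:                  # optional formatting commands after the field name
                if cmd == "TC":                   # Title Case
                    value = value.title()
                elif cmd == "LIST":               # delimited string -> one item per line
                    value = "\n".join(v.strip() for v in value.split(";"))
            return value

        template = "Dear Sir, Customer Name: [customer_name, TC], Date of order: [order_date]"
        print(re.sub(r"\[([^\]]+)\]", render, template))
        # Dear Sir, Customer Name: John Smith, Date of order: 2010-04-01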

    Read the article

< Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >