Search Results

Search found 428 results on 18 pages for 'delimited'.

Page 10 of 18

  • SHELL OR PERL QUESTION

    - by user150674
    I have a very large tab-delimited file named 'ColCheckMe' that you are asked to process. You are told that each line in 'ColCheckMe' has 7 columns, and that the values in the 5th column are integers. Using shell functions, indicate how you would verify that these conditions are satisfied in 'ColCheckMe'. OK, I got this far: nawk 'NF != 7 { printf("[%d] has invalid [%d] number of fields\n", FNR, NF) } $5 !~ /^[0-9]+$/ { printf("[%d] 5th field is invalid [%s]\n", FNR, $5) }' ColCheckMe Now, part 2: in the same file, you are told that each value in column 1 is unique. How would I verify that? Also, write a shell function that counts the number of occurrences of the word "SpecStr" in the file 'ColCheckMe'. Can anyone help, in shell, or with everything including the first part in Perl?
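
    A minimal sketch of the remaining two checks, assuming nawk/gawk and GNU grep are available (the function names are illustrative):

        # Report any value in column 1 that appears more than once
        check_unique_col1() {
            nawk -F'\t' 'seen[$1]++ { printf("[%d] duplicate value in column 1 [%s]\n", FNR, $1) }' ColCheckMe
        }

        # Count occurrences of the word "SpecStr"; unlike grep -c,
        # this counts multiple hits on the same line
        count_specstr() {
            grep -o -w 'SpecStr' ColCheckMe | wc -l
        }

    Note that grep -o is a GNU extension; on systems without it, an awk loop over the fields works as well.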

    Read the article

  • Importing Thawte trial certificates into a Java keystore

    - by lindelof
    Hello, I'm trying to configure a Tomcat server with SSL. I've generated a keypair thus: $ keytool -genkeypair -alias tomcat -keyalg RSA -keystore keys Next I generate a certificate signing request: $ keytool -certreq -keyalg RSA -alias tomcat -keystore keys -file tomcat.csr Then I copy-paste the contents of tomcat.csr into a form on Thawte's website, asking for a trial SSL certificate. In return I get two certificates delimited with -----BEGIN ... -----END, which I save as tomcat.crt and thawte.crt. (Thawte calls the second certificate a 'Thawte Test CA Root' certificate.) When I try to import either of them it fails: $ keytool -importcert -alias tomcat -file tomcat.crt -keystore keys Enter keystore password: keytool error: java.lang.Exception: Failed to establish chain from reply $ keytool -importcert -alias thawte -file thawte.crt -keystore keys Enter keystore password: keytool error: java.lang.Exception: Input not an X.509 certificate Adding the -trustcacerts option to either of these commands doesn't change anything either. Any idea what I am doing wrong here?
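
    Two things usually cause this pair of errors. "Input not an X.509 certificate" means the saved file isn't a clean certificate: each file must contain exactly one -----BEGIN ... -----END block with nothing around it. "Failed to establish chain from reply" means the issuing CA isn't trusted yet, so the import order matters. A sketch of the expected sequence (the 'thawteca' alias is arbitrary; the signed reply must go under the same alias as the key pair):

        # 1. Import the Thawte test CA root first, as a trusted cert
        keytool -importcert -trustcacerts -alias thawteca -file thawte.crt -keystore keys

        # 2. Then import the signed reply under the key pair's alias
        keytool -importcert -trustcacerts -alias tomcat -file tomcat.crt -keystore keys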

    Read the article

  • How to avoid OLEDB converting "."s into "#"s in column names?

    - by Andrew Miner
    I'm using the ACE OLEDB driver to read from an Excel 2007 spreadsheet, and I'm finding that any '.' character in column names gets converted to a '#' character. For example, if I have the following in a spreadsheet:

        Name     Amt. Due   Due Date
        Andrew   12.50      4/1/2010
        Brian    20.00      4/12/2010
        Charlie  1000.00    6/30/2010

    the name of the second column would be reported as "Amt# Due" when read with the following code:

        OleDbConnection connection = new OleDbConnection(
            "Provider=Microsoft.ACE.OLEDB.12.0; Data Source={0}; " +
            "Extended Properties=\"Excel 12.0 Xml;HDR=YES;FMT=Delimited;IMEX=1\"");
        OleDbCommand command = new OleDbCommand("SELECT * FROM MyTable", connection);
        OleDbDataReader dataReader = command.ExecuteReader();
        System.Console.WriteLine(dataReader.GetName(1));

    I've read through all the documentation I can find and I haven't found anything which even mentions that this will happen. Has anyone run into this before? Is there a way to fix this behavior?
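
    This appears to be long-standing Jet/ACE behavior rather than a bug: a period is not a legal character in a column name as far as the driver is concerned, so it is silently mapped to '#'. Since the mapping is deterministic, one workaround is to alias the mangled name back in the query; a sketch using the names from the example above:

        OleDbCommand command = new OleDbCommand(
            "SELECT [Name], [Amt# Due] AS AmtDue, [Due Date] FROM MyTable",
            connection);

    Alternatively, post-process dataReader.GetName(i).Replace('#', '.') if every '#' is known to have started out as a '.'.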

    Read the article

  • How can I prevent tab characters etc. from being pasted into an asp.net textbox?

    - by Neal
    Hello, I'd like to have an ASP.NET textbox that people can paste content into and that works like Notepad, i.e. no formatting or special characters get entered. I take the text and pass it to a web service which manipulates it and converts it into a tab-delimited file. The problem I've experienced is that people sometimes copy from MS Word and paste that content in, and somehow even the tab characters etc. get passed to the web service. I run routines now to strip that information out, but it would be so much easier if the textbox on the web page didn't capture anything but the text itself, i.e. visible characters (numbers, letters, punctuation). Anyone have a suggestion for a textbox that doesn't capture formatting and non-visible characters? Thank you.
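
    A client-side sketch in plain JavaScript, assuming the TextBox renders as a textarea whose client id is 'txtNotes' (keep the server-side stripping as a backstop, since script can be disabled):

        var box = document.getElementById('txtNotes');
        box.onpaste = function (e) {
            e.preventDefault();
            // Take the plain-text flavor of the clipboard and replace
            // control characters such as tabs before inserting at the caret.
            // \x0A and \x0D (newlines) are deliberately left alone.
            var text = (e.clipboardData || window.clipboardData).getData('text');
            var clean = text.replace(/[\x00-\x09\x0B\x0C\x0E-\x1F]/g, ' ');
            var start = box.selectionStart, end = box.selectionEnd;
            box.value = box.value.slice(0, start) + clean + box.value.slice(end);
            box.selectionStart = box.selectionEnd = start + clean.length;
        };

    window.clipboardData is the legacy IE form; e.clipboardData is the standard one.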

    Read the article

  • Best practices for combining Lucene.NET and a relational database?

    - by FlySwat
    I'm working on a project where I will have a LOT of data, and it will be searchable by several forms that are very efficiently expressed as SQL queries, but it also needs to be searched via natural language processing. My plan is to build an index using Lucene for this form of search. My question: if I do this and perform a search, Lucene will return the IDs of matching documents in the index, and I then have to look up those entities in the relational database. This could be done in two ways (that I can think of so far): N queries (horrible); or passing all the IDs to a stored procedure at once (perhaps as a comma-delimited parameter), which has the downsides of being limited by the max parameter size and the slow performance of a UDF splitting the string into a temporary table. I'm almost tempted to mirror everything into Lucene's index, so that I can periodically regenerate the index from the backing store but only need to access Lucene for the front end. Advice?
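
    For the second option, SQL Server 2008's table-valued parameters avoid both the parameter-size cap and the string-splitting UDF. A sketch assuming SQL Server 2008+ (names illustrative):

        CREATE TYPE dbo.IdList AS TABLE (Id INT PRIMARY KEY);
        GO
        CREATE PROCEDURE dbo.GetDocumentsByIds @Ids dbo.IdList READONLY
        AS
        BEGIN
            SELECT d.*
            FROM dbo.Documents d
            JOIN @Ids i ON i.Id = d.Id;   -- set-based, no string splitting
        END

    From ADO.NET the parameter is passed as SqlDbType.Structured with a DataTable (one Id column) as its value.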

    Read the article

  • how to do this in shell

    - by user150674
    I have a very large tab-delimited file named 'ColCheckMe' that you are asked to process. You are told that each line in 'ColCheckMe' has 7 columns, and that the values in the 5th column are integers. Using shell functions, indicate how you would verify that these conditions are satisfied in 'ColCheckMe'. Then, given that in the same file each value in column 1 is unique, how would I verify that? Also, how do I write a shell function that counts the number of occurrences of the word "SpecStr" in the file 'ColCheckMe'? I got the first part working, which checks for the valid number of fields and that the 5th field is an integer: nawk ' NF != 7 { printf("[%d] has invalid [%d] number of fields\n", FNR, NF) } $5 !~ /^[0-9]+$/ { printf("[%d] 5th field is invalid [%s]\n", FNR, $5) }' ColCheckMe Now I want to verify that the values in column 1 are unique. Also, is there a way to write a shell function that counts the occurrences of the word "SpecStr" in the file 'ColCheckMe'? Thanks a lot

    Read the article

  • Faster bulk inserts in sqlite3?

    - by scubabbl
    I have a file of about 30000 lines of data I want to load into a sqlite3 database. Is there a faster way than generating insert statements for each line of data? The data is space-delimited and maps directly to the sqlite3 table. Is there any sort of bulk insert method for adding volume data to a database? Has anyone devised some deviously wonderful way of doing this if it's not built in? I should preface this by asking: is there a C++ way to do it from the API? Thanks.
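
    There is no dedicated bulk-load call in the C/C++ API, but wrapping all the inserts in a single transaction and reusing one prepared statement typically speeds this up by orders of magnitude, since SQLite otherwise commits (and syncs to disk) once per statement. A minimal sketch, assuming a table t(a TEXT, b TEXT) and rows already split into fields (error handling omitted for brevity):

        #include <sqlite3.h>
        #include <string>
        #include <utility>
        #include <vector>

        void bulk_insert(sqlite3 *db,
                         const std::vector<std::pair<std::string, std::string>> &rows) {
            sqlite3_exec(db, "BEGIN TRANSACTION", nullptr, nullptr, nullptr);

            sqlite3_stmt *stmt = nullptr;
            sqlite3_prepare_v2(db, "INSERT INTO t (a, b) VALUES (?, ?)",
                               -1, &stmt, nullptr);
            for (const auto &row : rows) {
                sqlite3_bind_text(stmt, 1, row.first.c_str(), -1, SQLITE_TRANSIENT);
                sqlite3_bind_text(stmt, 2, row.second.c_str(), -1, SQLITE_TRANSIENT);
                sqlite3_step(stmt);    // run the insert
                sqlite3_reset(stmt);   // reuse the compiled statement
            }
            sqlite3_finalize(stmt);

            sqlite3_exec(db, "COMMIT", nullptr, nullptr, nullptr);
        }

    PRAGMA synchronous = OFF can buy a further speedup if losing the database on a crash mid-load is acceptable.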

    Read the article

  • "Simple" Text replace function

    - by YourMomzThaBomb
    I have a string which is basically a list of "words" delimited by commas. These "words" can be pretty much any characters, e.g. "Bart Simpson, Ex-girlfriend, dude, radical". I'm trying to use JavaScript, jQuery, whatever I can, to replace a word matching a search string with nothing (in essence, removing the word from the list). For example, the function is defined as such: function removeWord(myString, wordToReplace) {...}; So, passing the string listed above as myString and passing "dude" as wordToReplace would return the string "Bart Simpson, Ex-girlfriend, radical". Here's the line of code I was tinkering with; please help me figure out what's wrong with it, or suggest an alternative (better) solution: $myString.val($myString.val().replace(/wordToReplace\, /, ""));
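
    The immediate bug is that a regex literal cannot interpolate a variable: /wordToReplace\, / searches for the literal text "wordToReplace". Either build the pattern with new RegExp(wordToReplace + ', '), or skip regexes entirely and treat the value as a list; a sketch in plain JavaScript:

        function removeWord(myString, wordToReplace) {
            return myString
                .split(',')
                .map(function (w) { return w.trim(); })          // normalize spacing
                .filter(function (w) { return w !== wordToReplace; })
                .join(', ');
        }

        removeWord('Bart Simpson, Ex-girlfriend, dude, radical', 'dude');
        // -> "Bart Simpson, Ex-girlfriend, radical"

    The list approach also sidesteps having to escape regex metacharacters ('.', '-', etc.) that may appear in the words.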

    Read the article

  • named groups splitting regardless of position of match

    - by jeremy
    Having a hard time explaining what I mean, so here is what I want to do. I want any sentence to be parsed along the pattern of: text #something a few words [someothertext] For this, a matching sentence would be: Jeremy is trying #20 times to [understand this] And I would name 4 groups: text, time, who, subtitle. However, I could also write: #20 Jeremy is trying [understand this] times to and still get the tokens #20, Jeremy is trying, times to, understand this corresponding to the right groups. As long as the delimited tokens can separate the two text-only tokens, I'm fine. Is this even possible? I've tried a few regexes and failed miserably (I'm still experimenting but finding myself spending way too much time learning it).
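
    Order-independent groups are hard to express in a single regex, but the problem becomes simple if the anchored tokens (#... and [...]) are extracted first and whatever remains is kept as the free text. A sketch in Python; as noted above, splitting the free text into separate who/text groups only works when a delimited token sits between them:

        import re

        def parse(sentence):
            # Pull the two delimited tokens out wherever they occur...
            time = re.search(r'#(\S+)', sentence)
            subtitle = re.search(r'\[([^\]]+)\]', sentence)
            # ...then whatever remains, in order, is the free text.
            rest = re.sub(r'#\S+|\[[^\]]+\]', ' ', sentence).split()
            return {'time': time.group(1) if time else None,
                    'subtitle': subtitle.group(1) if subtitle else None,
                    'text': ' '.join(rest)}

        parse('Jeremy is trying #20 times to [understand this]')
        # {'time': '20', 'subtitle': 'understand this',
        #  'text': 'Jeremy is trying times to'}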

    Read the article

  • Memory-efficient import of many data files into a pandas DataFrame in Python

    - by richardh
    I import a directory of pipe-delimited .dat files into a pandas DataFrame. The following code works, but I eventually run out of RAM with a MemoryError:

        import pandas as pd
        import glob

        temp = []
        dataDir = 'C:/users/richard/research/data/edgar/masterfiles'
        for dataFile in glob.glob(dataDir + '/master_*.dat'):
            print dataFile
            temp.append(pd.read_table(dataFile, delimiter='|', header=0))
        masterAll = pd.concat(temp)

    Is there a more memory-efficient approach? Or should I go whole hog to a database? (I will move to a database eventually, but I am baby-stepping my move to pandas.) Thanks! FWIW, here is the head of an example .dat file:

        cik|cname|ftype|date|fileloc
        1000032|BINCH JAMES G|4|2011-03-08|edgar/data/1000032/0001181431-11-016512.txt
        1000045|NICHOLAS FINANCIAL INC|10-Q|2011-02-11|edgar/data/1000045/0001193125-11-031933.txt
        1000045|NICHOLAS FINANCIAL INC|8-K|2011-01-11|edgar/data/1000045/0001193125-11-005531.txt
        1000045|NICHOLAS FINANCIAL INC|8-K|2011-01-27|edgar/data/1000045/0001193125-11-015631.txt
        1000045|NICHOLAS FINANCIAL INC|SC 13G/A|2011-02-14|edgar/data/1000045/0000929638-11-00151.txt
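
    One memory-friendlier approach is to stop holding every frame at once: append each file to an on-disk store as you go, then query back only what is needed. A sketch using sqlite3, assuming a pandas version with to_sql/read_sql_query (older versions spelled this pandas.io.sql.write_frame); HDFStore works the same way:

        import glob
        import sqlite3
        import pandas as pd

        dataDir = 'C:/users/richard/research/data/edgar/masterfiles'
        conn = sqlite3.connect(dataDir + '/master.db')

        for dataFile in glob.glob(dataDir + '/master_*.dat'):
            df = pd.read_table(dataFile, delimiter='|', header=0)
            # Append this file's rows to the on-disk table, then let df go.
            df.to_sql('master', conn, if_exists='append', index=False)

        # Later, pull back only the columns/rows actually needed:
        masterAll = pd.read_sql_query('SELECT cik, ftype, date FROM master', conn)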

    Read the article

  • What is the best way to join/merge two tables by column cell matching in Excel?

    - by blunders
    I've found an Excel add-in for purchase that appears to do what I need, but I'd rather have code that's open for me to use as I wish. While a GUI is nice, it's not required. To make the question clearer, here are two sample input tables in tab-delimited form, and the resulting output table:

        SAMPLE_INPUT_TABLE_01
        NAME<tab>Location
        John<tab>US
        Mike<tab>CN
        Tom<tab>CA
        Sue<tab>RU

        SAMPLE_INPUT_TABLE_02
        NAME<tab>Age
        John<tab>18
        Mike<tab>36
        Tom<tab>54
        Mary<tab>18

        SAMPLE_OUTPUT_TABLE
        NAME<tab>Age<tab>Location
        John<tab>18<tab>US
        Mike<tab>36<tab>CN
        Tom<tab>54<tab>CA
        Sue<tab>""<tab>RU
        Mary<tab>18<tab>""

    If it matters, I'm using Office 2010 on Windows 7.
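
    Without an add-in, this is essentially an outer join on NAME, which plain worksheet formulas can approximate. A sketch, assuming table 1 sits in Sheet1!A:B, table 2 in Sheet2!A:B, and the output sheet's column A already holds the de-duplicated union of both NAME columns (paste both lists, then Data > Remove Duplicates):

        In B2 (Age), filled down:       =IFERROR(VLOOKUP($A2, Sheet2!$A:$B, 2, FALSE), "")
        In C2 (Location), filled down:  =IFERROR(VLOOKUP($A2, Sheet1!$A:$B, 2, FALSE), "")

    IFERROR requires Excel 2007 or later, so it is available in Office 2010.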

    Read the article

  • What's the easiest way to parse numbers in clojure?

    - by Rob Lachlan
    I've been using Java to parse numbers, e.g. (. Integer parseInt numberString) Is there a more clojuriffic way that would handle both integers and floats, and return Clojure numbers? I'm not especially worried about performance here; I just want to process a bunch of whitespace-delimited numbers in a file and do something with them, in the most straightforward way possible. So a file might have lines like:

        5 10 0.0002
        4 12 0.003

    And I'd like to be able to transform the lines into vectors of numbers.
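
    A sketch using read-string, which yields integers and doubles as appropriate. One caveat: plain read-string evaluates reader macros, so on untrusted input clojure.edn/read-string is the safer variant where available:

        (require '[clojure.java.io :as io]
                 '[clojure.string :as str])

        (defn parse-line [line]
          (vec (map read-string (str/split (str/trim line) #"\s+"))))

        (with-open [r (io/reader "numbers.txt")]
          (doall (map parse-line (line-seq r))))
        ;; => ([5 10 2.0E-4] [4 12 0.003])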

    Read the article

  • T-SQL: Opposite to string concatenation - how to split string into multiple records

    - by kristof
    I have seen a couple of questions related to string concatenation in SQL. I wonder how you would approach the opposite problem: splitting a comma-delimited string into rows of data. Let's say I have the tables: userTypedTags(userID, commaSeparatedTags) 'one entry per user tags(tagID, name) and want to insert data into the table userTag(userID, tagID) 'multiple entries per user Inspired by the "Which tags are not in the database?" question. EDIT Thanks for the answers; actually more than one deserves to be accepted, but I can only pick one, and the solution presented by Cade Roux with recursion seems pretty clean to me. It works on SQL Server 2005 and above. For earlier versions of SQL Server the solution provided by miies can be used. For working with the text data type, wcm's answer will be helpful. Thanks again.
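
    For reference, the recursive flavor looks roughly like this: a sketch for SQL Server 2005+, peeling one tag off the front of the string per recursion step:

        WITH split (userID, tag, rest) AS (
            SELECT userID,
                   LEFT(commaSeparatedTags, CHARINDEX(',', commaSeparatedTags + ',') - 1),
                   STUFF(commaSeparatedTags, 1, CHARINDEX(',', commaSeparatedTags + ','), '')
            FROM userTypedTags
            UNION ALL
            SELECT userID,
                   LEFT(rest, CHARINDEX(',', rest + ',') - 1),
                   STUFF(rest, 1, CHARINDEX(',', rest + ','), '')
            FROM split
            WHERE rest <> ''
        )
        INSERT INTO userTag (userID, tagID)
        SELECT s.userID, t.tagID
        FROM split s
        JOIN tags t ON t.name = LTRIM(s.tag);

    Append OPTION (MAXRECURSION 0) to the statement if a user can have more than 100 tags, since 100 is the default recursion cap.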

    Read the article

  • Reading correctly alphanumeric fields into R

    - by gd047
    A tab-delimited text file, which is actually an export (using bcp) of a database table, is of this form:

        102   1  01  e113c  3224.96    12
        102   1  01  e185   101127.25  12
        102   2  01  e185   176417.90  12
        102A  3  01  e185   26261.03   12

    I tried to import it into R with a command like:

        data <- read.delim("C:\\test.txt", header = FALSE, sep = "\t")

    The problem is that the 3rd column, which is actually a varchar (alphanumeric) field, is mistakenly read as integer (as there are no letters in the entire column), and the leading zeros disappear. The same thing happened when I imported the data directly from the database using odbcConnect; again that column was read as integer. str(data) shows:

        $ code: int 1 1 1 1 1 1 6 1 1 8 ...

    How can I import such a dataset into R correctly, so as to be able to safely populate that DB table again after doing some data manipulation?
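
    read.delim guesses each column's type from its contents; overriding the guess with colClasses keeps the leading zeros. A sketch assuming the six columns shown above:

        data <- read.delim("C:\\test.txt", header = FALSE, sep = "\t",
                           colClasses = c("character", "integer", "character",
                                          "character", "numeric", "integer"))

    For the ODBC route, RODBC's sqlQuery has an as.is argument that similarly suppresses the automatic conversion.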

    Read the article

  • How to make Excel strip ALL quotes from CSV text fields

    - by Klay
    When importing a CSV file into Excel, it only strips the double quotes from the FIRST field on the line, but leaves them on all other fields. How can I force Excel to strip the quotes from ALL strings? For instance, I have the CSV file:

        "text1", "text2", "numeric1", "numeric 2"
        "abc", "def", 123, 456
        "abc", "def", 123, 456
        "abc", "def", 123, 456
        "abc", "def", 123, 456

    I import it into Excel using Data > Import External Data > Import Data. I specify that the fields are delimited by commas, and that the text delimiter is the double-quote character. Both the data preview and the actual Excel spreadsheet columns only strip the double quotes from the first text field; all other text fields still have quotes around them. What's really strange is that Access is able to import this data correctly (i.e. strips quotes from every text field). Note that this is NOT a matter of internal commas or quotes or escape characters. This happens in Excel 2003 and Excel 2007.
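
    The space after each comma is the likely culprit: Excel's parser only honors the text qualifier when it is the very first character of a field, so ' "def"' (with the leading space) is taken as literal text, quotes and all. Stripping the spaces that follow the commas makes the qualifier work on every field:

        "text1","text2","numeric1","numeric 2"
        "abc","def",123,456

    Access tolerates the space; Excel's import parser does not.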

    Read the article

  • NSScanner scanFloat returning unexpected results

    - by E-Madd
    I'm trying to build a UIColor from a comma-delimited list of values for RGB, which is "0.45,0.53,0.65", represented here by the colorConfig object:

        NSScanner *scanner = [NSScanner scannerWithString:colorConfig];
        [scanner setCharactersToBeSkipped:
            [NSCharacterSet characterSetWithCharactersInString:@"\n, "]];
        float red, green, blue;
        return [UIColor colorWithRed:[scanner scanFloat:&red]
                               green:[scanner scanFloat:&green]
                                blue:[scanner scanFloat:&blue]
                               alpha:1];

    But my color is always coming back as black. So I logged the values to my console and I'm seeing:

        Red = -1.988804, Green = -1.988800, Blue = -1.988796

    What am I doing wrong?
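
    -[NSScanner scanFloat:] returns a BOOL indicating success and writes the parsed value through the pointer, so the code above passes YES/NO into colorWithRed:green:blue:alpha:, while the logged values are the still-uninitialized floats. Scan first, then use the variables; a sketch:

        NSScanner *scanner = [NSScanner scannerWithString:colorConfig];
        [scanner setCharactersToBeSkipped:
            [NSCharacterSet characterSetWithCharactersInString:@"\n, "]];

        float red = 0.0f, green = 0.0f, blue = 0.0f;
        // Each call returns YES on success and fills in its float.
        [scanner scanFloat:&red];
        [scanner scanFloat:&green];
        [scanner scanFloat:&blue];

        return [UIColor colorWithRed:red green:green blue:blue alpha:1.0];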

    Read the article

  • Trying to drop all tables from my schema with no rows?

    - by Vineet
    I am trying to drop all tables in my schema that have no rows, but when I execute this code I get an error. This is the code:

        create or replace procedure tester IS
            v_count NUMBER;
            CURSOR emp_cur IS select table_name from user_tables;
        BEGIN
            FOR emp_rec_cur IN emp_cur LOOP
                EXECUTE IMMEDIATE 'select count(*) from ' || emp_rec_cur.table_name INTO v_count;
                IF v_count = 0 THEN
                    EXECUTE IMMEDIATE 'DROP TABLE ' || emp_rec_cur.table_name;
                END IF;
            END LOOP;
        END tester;

    And the error:

        ERROR at line 1:
        ORA-29913: error in executing ODCIEXTTABLEOPEN callout
        ORA-29400: data cartridge error
        KUP-00554: error encountered while parsing access parameters
        KUP-01005: syntax error: found "identifier": expecting one of: "badfile,
          byteordermark, characterset, data, delimited, discardfile, exit, fields,
          fixed, load, logfile, nodiscardfile, nobadfile, nologfile, date_cache,
          processing, readsize, string, skip, variable"
        KUP-01008: the bad identifier was: DELIMETED
        KUP-01007: at line 1 column 9
        ORA-06512: at "SYS.ORACLE_LOADER", line 14
        ORA-06512: at line 1
        ORA-06512: at "SCOTT.TESTER", line 9
        ORA-06512: at line 1
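
    The stack trace points at an external table: one of the tables returned by USER_TABLES is an ORACLE_LOADER external table whose access parameters contain the misspelling DELIMETED, so the count(*) fails the moment the loader parses them. Either fix that table's access parameters, or skip external tables in the cursor; a sketch:

        CURSOR emp_cur IS
            select table_name from user_tables
            where table_name not in
                (select table_name from user_external_tables);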

    Read the article

  • large amount of data in many text files - how to process?

    - by Stephen
    Hi, I have large amounts of data (a few terabytes) and accumulating... They are contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing/averaging + additional transformations) over observations/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks, but I fear this may be a bit large. Some candidate solutions are to: 1) write the whole thing in C (or Fortran); 2) import the files (tables) into a relational database directly and then pull off chunks in R or Python (some of the transformations are not amenable to pure SQL solutions); 3) write the whole thing in Python. Would (3) be a bad idea? I know you can wrap C routines in Python, but in this case, since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks
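
    For the I/O-bound shape described, a single streaming pass in pure Python often suffices, since nothing beyond the running aggregates needs to stay in memory. A sketch (Python 2 era; the column indices and the predicate are placeholders):

        import csv
        import glob
        from collections import defaultdict

        sums = defaultdict(float)
        counts = defaultdict(int)

        for path in glob.glob('data/*.txt'):
            # csv with a tab delimiter streams one row at a time;
            # only the running aggregates are held in memory.
            with open(path, 'rb') as f:
                for row in csv.reader(f, delimiter='\t'):
                    if row[2] == 'SOME_VALUE':           # placeholder predicate
                        sums[row[0]] += float(row[4])    # placeholder columns
                        counts[row[0]] += 1

        means = dict((k, sums[k] / counts[k]) for k in sums)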

    Read the article

  • Exporting info from PS script to csv

    - by George
    Hi, this is a PowerShell/AD/Exchange question. I'm running a script against a number of users to check some of their attributes; however, I'm having trouble getting it to output to CSV. The script runs well and does exactly what I need it to do, and the output on screen is fine; I'm just having trouble getting it to export directly to CSV. The input is a comma-separated txt file of usernames (e.g. "username1,username2,username3"). I've experimented with creating custom PS objects, adding to them, and then exporting those, but it's not working... Any suggestions gratefully received. Thanks, George

        $array = Get-Content $InputPath
        # split the comma-delimited string into an array
        $arrayb = $array.Split(",")
        foreach ($User in $arrayb) {
            # find group membership
            Write-Host "AD group membership for $User"
            Get-QADMemberOf $User
            # get mailbox info
            Write-Host "Mailbox info for $User"
            Get-Mailbox $User | select ServerName, Database, EmailAddresses, PrimarySmtpAddress, WindowsEmailAddress
            # get profile details
            Write-Host "Home drive info for $User"
            Get-QADUser $User | select HomeDirectory, HomeDrive
            # add space between users
            Write-Host ""
            Write-Host "******************************************************"
        }
        Write-Host "End Script"
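
    The usual fix is to emit one object per user (instead of Write-Host text) and pipe the collection to Export-Csv. A sketch using the same Quest/Exchange cmdlets as above; the property names and output path are illustrative:

        $results = foreach ($User in $arrayb) {
            $mbx = Get-Mailbox $User
            $ad  = Get-QADUser $User
            New-Object PSObject -Property @{
                User          = $User
                MemberOf      = (Get-QADMemberOf $User | ForEach-Object { $_.Name }) -join ';'
                Database      = $mbx.Database
                PrimarySmtp   = $mbx.PrimarySmtpAddress
                HomeDirectory = $ad.HomeDirectory
                HomeDrive     = $ad.HomeDrive
            }
        }
        $results | Export-Csv -Path 'C:\temp\UserReport.csv' -NoTypeInformation

    Write-Host writes to the console only and never reaches the pipeline, which is why the on-screen output cannot be redirected to a file.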

    Read the article

  • Python - compare nested lists and append matches to new list?

    - by Seafoid
    Hi, I wish to compare two nested lists of unequal length. I am interested only in a match between the first element of each sublist. Should a match exist, I wish to add the match to another list for subsequent transformation into a tab-delimited file. Here is an example of what I am working with:

        x = [['1', 'a', 'b'], ['2', 'c', 'd']]
        y = [['1', 'z', 'x'], ['4', 'z', 'x']]
        match = []

        def find_match():
            for i in x:
                for j in y:
                    if i[1] == j[1]:
                        match.append(j)
            return match

    This results in a series of empty lists. Is it better to use tuples and/or tuples of tuples for the purposes of comparison? Any help is greatly appreciated. Regards, Seafoid.
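
    Two things stand out: the comparison uses index 1 (the second element) where the stated intent is the first element, and scanning y once per row of x is O(n*m). Matching on the first element with a set lookup; a sketch:

        x = [['1', 'a', 'b'], ['2', 'c', 'd']]
        y = [['1', 'z', 'x'], ['4', 'z', 'x']]

        def find_matches(x, y):
            keys = set(row[0] for row in x)           # first elements of x
            return [row for row in y if row[0] in keys]

        match = find_matches(x, y)   # [['1', 'z', 'x']]

    Tuples aren't required here; they only become necessary if the rows themselves need to go into a set or serve as dict keys.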

    Read the article

  • rails arguments to after_save observer

    - by ash34
    Hi, I want users to enter a comma-delimited list of logins on the form, to be notified by email when a new comment/post is created. I don't want to store this list in the database, so I would use the form tag helper 'text_area_tag' instead of the form helper 'text_field'. I have an 'after_save' observer which should send an email when the comment/post is created. As far as I am aware, the after_save event only takes the model object as its argument, so how do I pass this non-model-backed list of logins to the observer, to be passed on to the Mailer method that uses them in the cc list? Thanks
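
    Observers only receive the model, so the usual trick is to carry the extra data on the model itself as a non-persisted (virtual) attribute. A sketch, with the mailer and attribute names illustrative:

        class Post < ActiveRecord::Base
          attr_accessor :notify_logins   # set from the form, never saved
        end

        # In the controller action:
        #   @post.notify_logins = params[:notify_logins]

        class PostObserver < ActiveRecord::Observer
          def after_save(post)
            logins = post.notify_logins.to_s.split(',').map(&:strip).reject(&:empty?)
            NotificationMailer.deliver_new_post(post, logins) unless logins.empty?
          end
        end

    (deliver_* is the Rails 2.x ActionMailer style; on Rails 3 it would be NotificationMailer.new_post(post, logins).deliver.)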

    Read the article

  • Is there a way to pass a custom type from C# to Oracle using System.Data.OracleClient?

    - by Frankie Simon
    I was searching online for a solution to passing a string array to a stored procedure in Oracle. The "easy" but messy way is using a comma-delimited string. I found a sample based on which I created my own type, a TABLE OF VARCHAR2(200). I understood that I could use a constructor created behind the scenes by Oracle to give a list of values that would, in the PL/SQL, be treated as a TABLE I could iterate through. But when I got to the C# side, I saw that there is no way for me to create an OracleParameter object that would allow me to use this implicit constructor. All the samples I'm finding online deal with the Oracle Data Adapter; none say anything about System.Data.OracleClient. Is there a way to achieve this?
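
    As far as I know, System.Data.OracleClient (the Microsoft provider, deprecated as of .NET 4) never gained support for PL/SQL collection parameters; Oracle's own ODP.NET does, via associative arrays. A sketch assuming ODP.NET and a procedure whose parameter is a TYPE ... IS TABLE OF VARCHAR2(200) INDEX BY BINARY_INTEGER declared in a package (names illustrative):

        OracleCommand cmd = new OracleCommand("my_pkg.process_names", connection);
        cmd.CommandType = CommandType.StoredProcedure;

        OracleParameter p = new OracleParameter();
        p.ParameterName = "p_names";
        p.OracleDbType = OracleDbType.Varchar2;
        p.CollectionType = OracleCollectionType.PLSQLAssociativeArray;
        p.Value = new string[] { "alpha", "beta", "gamma" };
        p.Size = 3;   // number of array elements being passed
        cmd.Parameters.Add(p);
        cmd.ExecuteNonQuery();

    Note the associative-array route needs an INDEX BY table type in a package, not a schema-level TABLE OF VARCHAR2; for the latter, the fallback really is the delimited string (or ODP.NET's UDT mapping on newer clients).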

    Read the article

  • Dealing with multiple parameters in Nginx rewrite

    - by x3sphere
    I have a rewrite that nginx calls like so:

        location ~* (css)$ {
            rewrite ^(.*),(.*)$ /min/index.php?f=$1,/min/$2 last;
        }

    And it's used on pages like this: http://domain.com/min/framework.css,dropdown.css Works all fine and dandy, but it's not scalable: adding another element to the URL means I have to directly edit the nginx config. Ideally, I'd like nginx to rewrite according to however many comma-delimited parameters are passed in the URL, rather than a fixed number set in the config. Is this possible?
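
    One way out is to stop splitting in nginx at all: capture everything after /min/ in a single group and let index.php split the comma-separated list itself. A sketch, untested against this exact setup and assuming index.php can prepend the /min/ base path on its own:

        location ~* \.css$ {
            # One capture, any number of comma-delimited files.
            rewrite ^/min/(.*)$ /min/index.php?f=$1 last;
        }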

    Read the article

  • Can I use ANTLR for both two-way parsing/generating?

    - by Mike Q
    Hi all, I need to both parse incoming messages and generate outgoing messages in EDIFACT format (basically a structured delimited format). I would like to have a Java model that is generated by parsing a message, and then use the same model to create an instance and generate a message. The first half is fine; I've used ANTLR before to go from raw text to Java objects. But I've never done the reverse, or if I have, it's been custom. Does ANTLR support generating output using a grammar, or is it really just a parse-only tool?
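
    ANTLR itself is parse-only; text emission is traditionally handed to its companion library, StringTemplate (same author), so the usual pairing is one ANTLR grammar for reading and a set of templates for writing from the same model. A sketch with StringTemplate 4, the segment layout being illustrative:

        import org.stringtemplate.v4.ST;

        // Render one EDIFACT-style segment from a model object's fields.
        ST seg = new ST("<tag>+<fields; separator=\"+\">'");
        seg.add("tag", "NAD");
        seg.add("fields", new String[] { "BY", "12345", "ACME CORP" });
        System.out.println(seg.render());   // NAD+BY+12345+ACME CORP'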

    Read the article

  • How do you set the "global delimiter" in Excel using VBA?

    - by DanM
    I've noticed that if I use the text-to-columns feature with comma as the delimiter, any comma-delimited data I paste into Excel after that will be automatically split into columns. This makes me think Excel must have some kind of global delimiter. If this is true, how would I set this global delimiter using Excel VBA? Is it possible to do this directly, or do I need to "trick" Excel by doing a text-to-columns on some junk data, then delete the data? My ultimate goal is to be able to paste in a bunch of data from different files using a macro, and have Excel automatically split it into columns according to the delimiter I set.
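
    As far as I can tell there is no exposed "global delimiter" property; the setting is session state left behind by the last Text to Columns operation. So the "trick" described is essentially the supported route, and it can be hidden inside VBA; a sketch:

        Sub PrimeDelimiter()
            ' Run TextToColumns once on a scratch cell; Excel then reuses
            ' these delimiter settings for pastes later in the session.
            With Worksheets.Add
                .Range("A1").Value = "x,y"
                .Range("A1").TextToColumns Destination:=.Range("A1"), _
                    DataType:=xlDelimited, Comma:=True, Tab:=False
                Application.DisplayAlerts = False
                .Delete
                Application.DisplayAlerts = True
            End With
        End Sub

    Call PrimeDelimiter at the top of the paste macro, and subsequent Paste/PasteSpecial operations should split on commas automatically.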

    Read the article
