Search Results

Search found 6329 results on 254 pages for 'linq to csv'.

Page 162/254

  • Using FileReadFields with Wonderware

    - by hypoxide
    I suppose this is a long shot considering how few Wonderware questions I've seen on here, but anyway... The FileReadFields function in Wonderware is supposed to parse a CSV file into memory tags. Wonderware gives no debug messages when something doesn't work (not my choice of HMI software, that's for sure), so I have no idea why this call isn't working: FileReadFields("C:\NASA\Sample.csv", 0, Profile_Setup_Name, 1); Everything is cased correctly and the file is not in use. I can't figure out how to make it work.

    Read the article

  • No application is associated with the specified file exception

    - by baron
    Hi everyone, I am getting the following exception on one machine I am testing on when trying to use Process.Start to open a .csv file: UnhandledException: System.ComponentModel.Win32Exception: No application is associated with the specified file for this operation at System.Diagnostics.Process.StartWithShellExecuteEx(ProcessStartInfo startInfo) at System.Diagnostics.Process.Start() at System.Diagnostics.Process.Start(ProcessStartInfo startInfo) at System.Diagnostics.Process.Start(String fileName) I think this is happening because no file association has been set for .csv files on this box. So how would you avoid this situation? Force Process.Start to open the file in Notepad? Ideally it should be opened in Excel, but what do you do if Excel doesn't exist on that computer? Thanks
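
    A minimal sketch of the fallback the poster suggests (not from the article; the file path is hypothetical): let the shell try the registered handler first, and drop back to Notepad when Windows reports that no application is associated with the file.

        // Sketch: try the registered .csv handler, fall back to Notepad.
        using System;
        using System.ComponentModel;
        using System.Diagnostics;

        class CsvOpener
        {
            static void Main()
            {
                const string path = @"C:\temp\data.csv"; // hypothetical file

                try
                {
                    // UseShellExecute lets Windows pick the associated application.
                    Process.Start(new ProcessStartInfo(path) { UseShellExecute = true });
                }
                catch (Win32Exception)
                {
                    // No association registered for .csv on this machine,
                    // so open the file in Notepad instead.
                    Process.Start("notepad.exe", path);
                }
            }
        }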

    Read the article

  • EFCreateError with JvCsvDataSet

    - by Kim Jensen
    I have been using JvCsvDataSet with Delphi 5 and it works fine. I just moved over to Delphi 2007 and now, with the same program, I get EFCreateError: cannot create file "" (I got the error description from madExcept 3.0). Here is the code. I get the error on the 'CADDCOUNT' line, but if I comment that line out I don't get the error until I close the dataset. jvCsvDataSet1.FileName := 'C:\TEST.CSV'; jvCsvDataSet1.SaveToFile('C:\TEST.CSV'); jvCsvDataSet1.Active := True; jvCsvDataSet1.Append; jvCsvDataSet1.FieldByName('LINETYPE').AsString := 'VERSION'; jvCsvDataSet1.FieldByName('CADDCOUNT').AsString := 'Company Name and address'; jvCsvDataSet1.Post; jvCsvDataSet1.Active := False; Thanks for any help. Kim

    Read the article

  • Complex reporting on subversion (possibly Export Subversion log into database for reporting)

    - by James A. N. Stauffer
    What is the best way to do complex reporting on Subversion logs, producing the following for each file: file, directory, last revision date, previous revision date (where the previous revision is at least 30 days older than the last), and the difference in days between the two revision dates? Since Subversion allows one revision to change multiple files, I assume svn log needs to be run against each file individually. Ideas (that don't seem very good): Shell scripting to produce a CSV file to be imported into a DB. The following is a start but doesn't show the filename: find . -name "." -print | xargs -l svn log -l 2 Shell scripting to produce XML and then use XSLT to create CSV to import into a DB. It might use a similar command to the one above but would still have some of the same limitations. Write a program to parse the log for the whole directory tree, make one insert into the DB per revision/file combination, and then query the DB.
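
    A sketch of the third idea, as an assumption rather than the poster's code: have Subversion emit XML for the whole tree with svn log -v --xml, then flatten each revision/path pair into CSV rows ready to be bulk-loaded into a database where the date arithmetic can be done in SQL.

        // Sketch: flatten "svn log -v --xml" into file,revision,date CSV rows.
        using System;
        using System.Diagnostics;
        using System.Xml.Linq;

        class SvnLogToCsv
        {
            static void Main()
            {
                var psi = new ProcessStartInfo("svn", "log -v --xml")
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };

                using (Process svn = Process.Start(psi))
                {
                    XDocument log = XDocument.Load(svn.StandardOutput);

                    // One CSV row per file touched by each revision.
                    foreach (XElement entry in log.Root.Elements("logentry"))
                    {
                        string revision = (string)entry.Attribute("revision");
                        string date = (string)entry.Element("date");
                        foreach (XElement path in entry.Element("paths").Elements("path"))
                        {
                            Console.WriteLine("\"{0}\",\"{1}\",\"{2}\"", path.Value, revision, date);
                        }
                    }
                }
            }
        }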

    Read the article

  • Create Excel (.XLS and .XLSX) file from C#

    - by mistrmark
    What is the best tool for creating an Excel spreadsheet with C#? Ideally, I would like open source so I don't have to add any third-party dependencies to my code, and I would like to avoid using Excel directly to create the file (using OLE Automation). The .CSV file solution is easy, and is the current way I am handling this, but I would like to control the output formats. EDIT: I am still looking at these to see the best alternative for my solution. Interop will work, but it requires Excel to be on the machine you are using. Also, the OLEDB method is intriguing, but may not yield much more than what I can achieve with CSV files. I will look more into the 2003 XML format, but that also puts an Excel 2003 requirement on the file. I am currently looking at a port of the PEAR (PHP library) Excel Writer that will allow some pretty good XLS data and formatting, and it is in the Excel 97-compatible format that all modern versions of Excel support. The PEAR Excel Writer is here: PEAR - Excel Writer
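
    For reference, a minimal sketch of the 2003 XML (SpreadsheetML) route mentioned above, under the assumption that Excel 2003 or later is acceptable: it needs no third-party dependencies, and the workbook contents here are made up and left unescaped.

        // Sketch: hand-written SpreadsheetML (Excel 2003 XML) workbook, no dependencies.
        using System.IO;

        class SpreadsheetMlSketch
        {
            static void Main()
            {
                const string ns = "urn:schemas-microsoft-com:office:spreadsheet";
                using (var w = new StreamWriter("report.xml"))
                {
                    w.WriteLine("<?xml version=\"1.0\"?>");
                    w.WriteLine("<Workbook xmlns=\"" + ns + "\" xmlns:ss=\"" + ns + "\">");
                    w.WriteLine(" <Worksheet ss:Name=\"Sheet1\">");
                    w.WriteLine("  <Table>");
                    // One row with a string cell and a number cell (values are made up).
                    w.WriteLine("   <Row><Cell><Data ss:Type=\"String\">Total</Data></Cell>" +
                                "<Cell><Data ss:Type=\"Number\">42</Data></Cell></Row>");
                    w.WriteLine("  </Table>");
                    w.WriteLine(" </Worksheet>");
                    w.WriteLine("</Workbook>");
                }
            }
        }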

    Read the article

  • R: NA/NaN/Inf in foreign function call (arg 1)

    - by Ma Changchen
    When I use a package named HydroMe to fit a model, some data groups return the following error: Error in qr.default(.swts * attr(rhs, "gradient")) : NA/NaN/Inf in foreign function call (arg 1) Actually, there are no missing values in the data groups. The code is as follows: library(HydroMe) fortst <- read.csv(file="F:/fortst.csv") van.lis <- nlsList(y~SSvan(x, Thr, Ths, alp, scal)|Sample, data=fortst) The data are as follows:
        Sample  x         y
        1116    0.000001  0.4003
        1116    10        0.3402
        1116    20        0.3439
        1116    30        0.3432
        1116    40        0.3426
        1116    60        0.3379
        1116    90        0.3325
        1116    180       0.3212
        1116    405       0.3033
        1116    810       0.2843
        1116    1630      0.2659
        1117    0.000001  0.3785
        1117    10        0.3173
        1117    20        0.3199
        1117    30        0.3193
        1117    40        0.3179
        1117    60        0.313
        1117    90        0.308
        1117    180       0.2973
        1117    405       0.2789
        1117    810       0.2608
        1117    1630      0.2405
    The example data can be downloaded from here.

    Read the article

  • Problem fetching contacts from Yahoo! Address Book using PHP's CURL.

    - by Ravi
    Hi, I had to fetch a user's Yahoo address book with PHP's cURL when the user gave a login name and password, and it was working fine; the address book came back in CSV format. But now things have suddenly stopped working: I just get some of Yahoo's HTML code instead of the CSV. I am guessing that Yahoo has somehow restricted fetching the address book with cURL. As an experiment I imported the contacts manually from the Yahoo service, and before importing the contacts Yahoo showed a CAPTCHA to verify. I guess this CAPTCHA mechanism was added recently. Is this CAPTCHA mechanism preventing me from getting the address book when I use PHP's cURL? I do not actually want to get the address book using Yahoo OAuth or BBAuth. Does anyone have an idea?

    Read the article

  • How to create a UDF that takes a query string and returns the query's resultset

    - by Martin
    I want to create a stored procedure that takes a simple SELECT statement and returns the result set as a CSV string. So the basic idea is: get the SQL statement from user input, run it using EXEC(@stmt), and convert the result set to text using cursors. However, since SQL Server allows neither select * from storedprocedure(@sqlStmt) nor a UDF containing EXEC(@sqlStmt), I tried Insert into #tempTable EXEC(@sqlStmt), but this doesn't work either (error = "invalid object name #tempTable"). I'm stuck. Could you please shed some light on this matter? Many thanks. EDIT: Actually the output format (e.g. a CSV string) is not important. The problem is that I don't know how to assign a cursor to the result set returned by EXEC. SPs and UDFs do not work with EXEC(), while creating a temp table before inserting values is impossible without knowing the input statement. I thought of OPENQUERY but it does not accept variables as its parameters.

    Read the article

  • Korn Shell code to send attachments with mailx and uuencode?

    - by Nano Taboada
    I need to attach a file with mailx but at the moment I'm not having a lot of success. Here's my code:
        subject="Something happened"
        to="[email protected]"
        body="Attachment Test"
        attachment=/path/to/somefile.csv
        uuencode $attachment | mailx -s "$subject" "$to" << EOF
        The message is ready to be sent with the following file or link attachments:
        somefile.csv
        Note: To protect against computer viruses, e-mail programs may prevent sending or receiving certain types of file attachments. Check your e-mail security settings to determine how attachments are handled.
        EOF
    Any feedback would be highly appreciated. Update: I've added the attachment var to avoid having to use the path every time.

    Read the article

  • Python + MySQLdb executemany

    - by lhahne
    I'm using Python and its MySQLdb module to import some measurement data into a MySQL database. The amount of data we have is quite high (currently about ~250 MB of CSV files and plenty more to come). Currently I use cursor.execute(...) to import some metadata. This isn't problematic as there are only a few entries for these. The problem is that when I try to use cursor.executemany() to import larger quantities of the actual measurement data, MySQLdb raises a TypeError: not all arguments converted during string formatting My current code is:
        def __insert_values(self, values):
            cursor = self.connection.cursor()
            cursor.executemany("""
                insert into values (ensg, value, sampleid)
                values (%s, %s, %s)""", values)
            cursor.close()
    where values is a list of tuples containing three strings each. Any ideas what could be wrong with this? Edit: The values are generated by yield (prefix + row['id'], row['value'], sample_id) and then read into a list one thousand at a time, where row is an iterator coming from csv.DictReader.

    Read the article

  • C# File Exception: cannot access the file because it is being used by another process

    - by Lirik
    I'm trying to download a file from the web and save it locally, but I get an exception: The process cannot access the file 'blah' because it is being used by another process. This is my code:
        File.Create("data.csv"); // create the file
        request = (HttpWebRequest)WebRequest.CreateDefault(new Uri(url));
        request.Timeout = 30000;
        response = (HttpWebResponse)request.GetResponse();
        using (Stream file = File.OpenWrite("data.csv"), // <-- Exception here
                      input = response.GetResponseStream())
        {
            // Save the file using Jon Skeet's CopyStream method
            CopyStream(input, file);
        }
    I've seen numerous other questions with the same exception, but none of them seem to apply here. Any help?
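
    A sketch of the likely fix (my reading, not the accepted answer): File.Create returns an open FileStream, and that undisposed handle is what blocks the later File.OpenWrite call. Dropping the explicit create avoids the problem, since OpenWrite creates the file if it does not exist.

        // Sketch: no File.Create call; File.OpenWrite creates the file if needed.
        using System;
        using System.IO;
        using System.Net;

        class Downloader
        {
            static void Download(string url)
            {
                var request = (HttpWebRequest)WebRequest.CreateDefault(new Uri(url));
                request.Timeout = 30000;

                using (var response = (HttpWebResponse)request.GetResponse())
                using (Stream input = response.GetResponseStream())
                using (Stream file = File.OpenWrite("data.csv"))
                {
                    input.CopyTo(file); // .NET 4+; otherwise keep the CopyStream helper
                }
            }
        }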

    Read the article

  • Removing newline characters in fields with PHP

    - by Aruna
    Hi, I am trying to upload a CSV file exported from Excel and store its contents in a MySQL database, and I am having a problem saving the contents. My CSV file holds 2 records, of the form:
        "1","aruna","IEEE paper"
        "2","nisha","JOurnal magazine"
    and I am using this code:
        <?php
        $string = file_get_contents( $_FILES["file"]["tmp_name"] );
        //echo $string;
        foreach ( explode( "\n", $string ) as $userString ) {
            echo $userString;
        }
        ?>
    Since there is a newline inserted in the CSV record between "IEEE" and "paper", it displays 3 records. How can I remove this newline in code, so that only the newline between records 1 and 2 is treated as a record separator? Please help me.

    Read the article

  • Add columns to a DataTable in C#?

    - by Pandiya Chendur
    I have a CSV reader class that reads a .csv file and its values, and I have created a DataTable out of it. Consider that my DataTable contains three header columns: Name, EmailId, PhoneNo. The values have been added successfully. Now I want to add two columns, IsDeleted and CreatedDate, to this DataTable. I have tried this but it doesn't seem to work:
        foreach (string strHeader in headers)
        {
            dt.Columns.Add(strHeader);
        }
        string[] data;
        while ((data = reader.GetCSVLine()) != null)
        {
            dt.Rows.Add(data);
        }
        dt.Columns.Add("IsDeleted", typeof(byte));
        dt.Columns.Add(new DataColumn("CreatedDate", typeof(DateTime)));
        foreach (DataRow dr in dt.Rows)
        {
            dr["IsDeleted"] = Convert.ToByte(0);
            dr["CreatedDate"] = Convert.ToDateTime(System.DateTime.Now.ToString());
            dt.Rows.Add(dr);
        }
    When I try to add the IsDeleted values I get an error saying This row already belongs to this table.
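
    A sketch of what probably fixes it (an assumption drawn from the error text, continuing the dt variable above): rows enumerated from dt.Rows are already attached to the table, so assigning the new fields is enough, and the second dt.Rows.Add(dr) is what throws.

        // Sketch: set the new columns on existing rows; do not re-add the rows.
        dt.Columns.Add("IsDeleted", typeof(byte));
        dt.Columns.Add("CreatedDate", typeof(DateTime));

        foreach (DataRow dr in dt.Rows)
        {
            dr["IsDeleted"] = (byte)0;
            dr["CreatedDate"] = DateTime.Now;
            // no dt.Rows.Add(dr) here -- the row already belongs to the table
        }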

    Read the article

  • How do I plot the warping of a DTW result using gnuplot?

    - by Ekkmanz
    Hello, I have implemented the Dynamic Time Warping algorithm for warping two 3D trajectories. Currently gnuplot is my plotting tool of choice, and it works fine when I plot multiple trajectories at a time. However, with DTW one of the real uses for a plotting tool is to visualize the point warping, like in this picture. Currently the output of my DTW program is two time series in CSV files and another CSV file which indicates the warp (X in series 1 - Y in series 2). Is there any possible way to do that in gnuplot?

    Read the article

  • How do I convert an .hdb file, believed to be from an ACT! source?

    - by Wardy
    Any ideas? I think the original source was a GoldMine database. Looking around, it appears the file was likely built using an application called ACT!, which I gather is a huge product that I don't really want to be deploying for a one-off file whose total size is less than 5 MB. So... does anyone know of a simple tool that I can run this file through to convert it to a standard CSV or something? When looking at it in Notepad and Excel, it does appear to be in some sort of CSV-type format, but it's as if the data is encrypted somehow.

    Read the article

  • Data sync solution?

    - by user321088
    For some security reasons I'm in an environment where third-party apps can't access my DB. For this reason I need some service/tool/script (I don't know what yet; I'm open to the best option and still reading to see what I'm going to do) which enables me to generate, on a regular basis (daily, weekly, monthly), a CSV file with all new/modified records for a certain application. I should be able to automate this process and also export a new file at any time. So it should keep track, for each application, of which records it still needs. Each application will need its data in some format (csv/xls/sql), and some fields will be needed by some applications and not by others. It should be fairly flexible. What is the best option for me? Creating some custom tables for each application and extracting the modified data based on those?

    Read the article

  • Database on the fly with scripting languages

    - by afilatun
    I have a set of .csv files that I want to process. It would be far easier to process them with SQL queries. I wonder if there is some way to load a .csv file and query it with SQL from a scripting language like Python or Ruby. Loading it with something similar to ActiveRecord would be awesome. The problem is that I don't want to have to run a database server somewhere before running my script. I shouldn't need additional installations beyond the scripting language and some modules. My question is which language and which modules I should use for this task. I looked around and can't find anything that suits my needs. Is it even possible?

    Read the article

  • How do I bind a List<object> to a DataGrid in Silverlight?

    - by Ben McCormack
    I'm trying to create a simple Silverlight application that involves parsing a CSV file and displaying the results in a DataGrid. I've configured my application to parse the CSV file and return a List<CsvTransaction> whose items have the properties Date, Payee, Category, Memo, Inflow, and Outflow. The user clicks a button to select a file to parse, at which point I want the DataGrid to be populated. I'm thinking I want to use data binding, but I can't seem to figure out how to get the data to show up in the grid. My XAML for the DataGrid looks like this:
        <data:DataGrid IsEnabled="False" x:Name="TransactionsPreview">
            <data:DataGrid.Columns>
                <data:DataGridTextColumn Header="Date" Binding="{Binding Date}" />
                <data:DataGridTextColumn Header="Payee" Binding="{Binding Payee}"/>
                <data:DataGridTextColumn Header="Category" Binding="{Binding Category}"/>
                <data:DataGridTextColumn Header="Memo" Binding="{Binding Memo}"/>
                <data:DataGridTextColumn Header="Inflow" Binding="{Binding Inflow}"/>
                <data:DataGridTextColumn Header="Outflow" Binding="{Binding Outflow}"/>
            </data:DataGrid.Columns>
        </data:DataGrid>
    The code-behind for the xaml.cs file looks like this:
        private void OpenCsvFile_Click(object sender, RoutedEventArgs e)
        {
            try
            {
                CsvTransObject csvTO = new CsvTransObject.ParseCSV();
                //This returns a List<CsvTransaction> and passes it
                //to a method which is supposed to set the DataContext
                //for the DataGrid to be equal to the list.
                BindCsvTransactions(csvTO.CsvTransactions);
                TransactionsPreview.IsEnabled = true;
                MessageBox.Show("The CSV file has a valid header and has been loaded successfully.");
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message);
            }
        }

        private void BindCsvTransactions(List<CsvTransaction> listYct)
        {
            TransactionsPreview.DataContext = listYct;
        }
    My thinking is to bind the CsvTransaction properties to each DataGridTextColumn in the XAML and then set the DataContext for the DataGrid to the List<CsvTransaction> at run time, but this isn't working. Any ideas about how I might approach this (or do it better)?
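
    A sketch of one likely fix, as an assumption rather than the article's answer: a Silverlight DataGrid builds its rows from ItemsSource, so assigning the parsed list there (with the columns already declared in XAML) is usually all that is needed; setting DataContext alone does nothing unless ItemsSource is bound to it.

        // Sketch: bind the parsed transactions through ItemsSource, not DataContext.
        private void BindCsvTransactions(List<CsvTransaction> listYct)
        {
            TransactionsPreview.AutoGenerateColumns = false; // keep the XAML-declared columns
            TransactionsPreview.ItemsSource = listYct;       // the DataGrid renders its rows from this
        }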

    Read the article

  • Opening a file from a pack URI in WPF

    - by cptmorgan
    Hi All, I am looking to open a .csv file from the application pack to do some unit testing. What I would really love is some analogue of File.ReadAllText(string path) that is instead X.ReadAllText(Uri uri). I haven't been able to find one so far. Does anyone know if it is possible to read text/bytes (I don't mind which) from a file in the pack without first writing the file out to disk? Oh, and by the way, File.ReadAllText(@"pack://application:,,,/SpreadSheetEngine/Tests/Example.csv") didn't work for me. Thanks in advance. Gav
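
    A sketch of one way to get that behaviour, assuming the .csv is built into the pack with a Build Action of Resource: Application.GetResourceStream reads pack resources without touching the file system, and a StreamReader over the returned stream gives the File.ReadAllText equivalent.

        // Sketch: ReadAllText for a resource addressed relative to pack://application:,,,/
        using System;
        using System.IO;
        using System.Windows;
        using System.Windows.Resources;

        static class PackFile
        {
            public static string ReadAllText(string relativePackPath)
            {
                // Note: in a bare unit-test host the pack:// scheme may need registering
                // first (e.g. by touching System.IO.Packaging.PackUriHelper or creating
                // an Application) -- an assumption worth checking in your test runner.
                StreamResourceInfo info =
                    Application.GetResourceStream(new Uri(relativePackPath, UriKind.Relative));
                using (var reader = new StreamReader(info.Stream))
                {
                    return reader.ReadToEnd();
                }
            }
        }

        // usage, matching the path from the question:
        // string csv = PackFile.ReadAllText("/SpreadSheetEngine/Tests/Example.csv");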

    Read the article

  • C - how to get fscanf() to determine whether what it read is only digits, with no other characters

    - by hatorade
    Imagine I have a CSV where each value is an integer, so the first value is the INTEGER 100. I want fscanf() to read this line and either tell me it's an integer ONLY, or not. So it should pass 100 but fail on 100t. What I've been trying to get to work is "%d,", where the comma is the delimiter of my CSV, so the whole call is fscanf(fp, "%d,", &count). Unfortunately, this fails to fail on '100t,', works on '100', and works on 't'. So it just isn't distinguishing between 100 and 100t (all of these numbers are followed by commas, of course).

    Read the article

  • MySQL Column Value Pivot

    - by manyxcxi
    I have a MySQL InnoDB table laid out like so: id (int), run_id (int), element_name (varchar), value (text), line_order (int), column_order (int):
        `MyDB`.`MyTable` (
            `id` bigint(20) NOT NULL,
            `run_id` int(11) NOT NULL,
            `element_name` varchar(255) NOT NULL,
            `value` text,
            `line_order` int(11) default NULL,
            `column_order` int(11) default NULL
        )
    It is used to store data generated by a Java program that used to output this in CSV format, hence the line_order and column_order columns. Let's say I have 2 entries (laid out according to the table description): 1,1,'ELEMENT 1','A',0,0 and 2,1,'ELEMENT 2','B',0,1. I want to pivot this data in a view for reporting so that it would look more like the CSV would, where the output would look like this:
        ---------------------
        |ELEMENT 1|ELEMENT 2|
        ---------------------
        |    A    |    B    |
        ---------------------
    The data coming in is extremely dynamic; it can be in any order, can be any of over 900 different elements, and the value could be anything. The run ID ties them all together, and the line and column order basically tell me where the user wants that data to come back in order.

    Read the article

  • Generating an XL sheet from MySQL that also contains HTML code; need to remove the HTML code

    - by pmms
    The following is the code for generating the XL sheet from MySQL:
        <?php
        if($_POST['Submit']=='Generatexml') {
            $tblname=$_GET['genratexml'];
            //mysql_connect("localhost","root","");
            //mysql_select_db("hitnrunf_db");
            global $obj_mysql;
            $result = mysql_query("SELECT * FROM tbl_js_login");
            while($row = mysql_fetch_array($result)) {
                $csv_output .= "$row[fld_id],$row[fld_fname],$row[fld_lname]";
                $csv_output .= "\015\012";
            }
            header("Content-type: application/vnd.ms-excel");
            header("Content-disposition: csv; filename= Student_Data_". date("Y-m-d") . ".csv");
            print $csv_output;
            exit;
        }
        include_once $path."includes/jobseeker_form.php";
        ?>
    The following link shows the error we are getting: http://www.eminosoft.com/screenshot/xlsheet.JPG

    Read the article

  • The system cannot find the path specified with FileWriter

    - by Nazgulled
    Hi, I have this code:
        private static void saveMetricsToCSV(String fileName, double[] metrics) {
            try {
                FileWriter fWriter = new FileWriter(
                    System.getProperty("user.dir") + "\\output\\" +
                    fileTimestamp + "_" + fileDBSize + "-" + fileName + ".csv"
                );
                BufferedWriter csvFile = new BufferedWriter(fWriter);

                for(int i = 0; i < 4; i++) {
                    for(int j = 0; j < 5; j++) {
                        csvFile.write(String.format("%,10f;", metrics[i+j]));
                    }
                    csvFile.write(System.getProperty("line.separator"));
                }
                csvFile.close();
            } catch(IOException e) {
                System.out.println(e.getMessage());
            }
        }
    But I get this error: C:\Users\Nazgulled\Documents\Workspace\Só Amigos\output\1274715228419_5000-List-ImportDatabase.csv (The system cannot find the path specified) Any idea why? I'm using NetBeans on Windows 7, if it matters...

    Read the article

  • How to format a date when I load data from Google App Engine

    - by zjm1126
    I use remote_api to load data from Google App Engine: appcfg.py download_data --config_file=helloworld/GreetingLoad.py --filename=a.csv --kind=Greeting helloworld The setting is:
        class AlbumExporter(bulkloader.Exporter):
            def __init__(self):
                bulkloader.Exporter.__init__(self, 'Greeting',
                    [('author', str, None),
                     ('content', str, None),
                     ('date', str, None),
                    ])
        exporters = [AlbumExporter]
    In the a.csv I download, the date is not readable, whereas the date shown in the appspot.com admin console is the full date. So how do I get the full date? Thanks. I changed this to:
        class AlbumExporter(bulkloader.Exporter):
            def __init__(self):
                bulkloader.Exporter.__init__(self, 'Greeting',
                    [('author', str, None),
                     ('content', str, None),
                     ('date', lambda x: datetime.datetime.strptime(x, '%m/%d/%Y').date(), None),
                    ])
        exporters = [AlbumExporter]
    but the error is:

    Read the article

  • Huge file in Clojure and Java heap space error

    - by trzewiczek
    I posted before about a huge XML file - it's a 287 GB XML Wikipedia dump that I want to put into a CSV file (revision authors and timestamps). I managed to do that up to a point. Before, I got a StackOverflowError, but now, after solving the first problem, I get: java.lang.OutOfMemoryError: Java heap space My code (partly taken from Justin Kramer's answer) looks like this:
        (defn process-pages [page]
          (let [title (article-title page)
                revisions (filter #(= :revision (:tag %)) (:content page))]
            (for [revision revisions]
              (let [user (revision-user revision)
                    time (revision-timestamp revision)]
                (spit "files/data.csv"
                      (str "\"" time "\";\"" user "\";\"" title "\"\n")
                      :append true)))))

        (defn open-file [file-name]
          (let [rdr (BufferedReader. (FileReader. file-name))]
            (->> (:content (data.xml/parse rdr :coalescing false))
                 (filter #(= :page (:tag %)))
                 (map process-pages))))
    I don't show the article-title, revision-user and revision-timestamp functions, because they just take data from a specific place in the page or revision hash. Could anyone help me with this? I'm really new to Clojure and don't get the problem.

    Read the article
