Search Results

Search found 1639 results on 66 pages for 'csv'.

Page 33/66

  • Seeking reporting or templating tool to generate large formatted PDF reports from dataset

    - by Mr. Tacos
    Say I have some data in MySQL or a big ole CSV file. I also have a report. It's a PDF, call it 100 pages long. I need to generate variations on this PDF for slices of the data.

    More specific example: I have a CSV file with each StackOverflow user in a row and each column contains various statistics about that user. I have a report called "Your StackOverflow Performance". It's got lots of text, always the same, but each section contains something like: "You Vs. The Average StackOverflow Poster on this metric". I want a table that appears there that has the average data, which is the same in every run of the PDF, in one column. In the second column, I want your data, which is different for each PDF/row in the CSV file/user of StackOverflow.

    I'm pretty sure people use things like Crystal for this? Is there something in MS SQL Server that's good for this? An open source template language? I'm not even really sure if what I need is called a 'reporting' tool (since I don't really need to do any crunching, the data in this case is being crunched by a series of scripts and SPSS, I don't need bands and subbands and so on) or 'templating'. Is there even such a thing as templating PDFs? Natch, I'd be fine with something that generates output easily scriptable to PDF, like eps, but not something like HTML. The report formatting is fussy and done and externally determined and handed down from on high. It's print-oriented, not webby. Thanks in advance.
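    One way to read the "templating PDFs" part: render a print-oriented intermediate format once per CSV row and compile each result to PDF. The sketch below uses Jinja2 to fill a LaTeX template and shells out to pdflatex; the template path, column names and output naming are invented for illustration, not taken from the question.

        import csv
        import subprocess
        from jinja2 import Template

        # report.tex.j2 holds the fixed report text plus placeholders such as
        # {{ username }} and {{ reputation }} (hypothetical column names)
        with open("report.tex.j2") as f:
            template = Template(f.read())

        with open("users.csv") as f:
            for row in csv.DictReader(f):
                tex = template.render(**row)                      # this user's slice of the data
                tex_file = "report_%s.tex" % row["user_id"]
                with open(tex_file, "w") as out:
                    out.write(tex)
                subprocess.check_call(["pdflatex", tex_file])     # emits report_<user_id>.pdf

    LaTeX fits the "easily scriptable to PDF, but not HTML" preference; the same render-per-row loop works with any text-based print format.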

    Read the article

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem and how I tried to solve it; my question on how to improve it is at the end. I have a 100,000 line CSV file from an offline batch job and I needed to insert it into the database as its proper models. Ordinarily, if this is a fairly straightforward load, it can be trivially loaded by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous model. For example:

        Model C --> Foreign Key --> Model B --> Foreign Key --> Model A

    So, the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach:

    - instantiate a multiprocessing.Process which contains a threadpool of 50 persister threads that have a threadlocal connection to a database
    - read a line from the file using the csv DictReader
    - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values and each thread persists the models in the appropriate order

    This was faster than a non-threaded read/persist but it is way slower than bulk-loading a file into the database. The job finished persisting after about 45 minutes. For fun, I decided to write it in SQL statements; it took 5 minutes. Writing the SQL statements took me a couple of hours, though. So my question is, could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. This leads to my second question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just bulk-load that into the database? I know about str(model_object) but it does not show the interpolated values. I would appreciate any guidance for how to do this faster. Thanks!
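    On the "generate the SQL and dump it to a file" part, a sketch of two directions, assuming a SQLAlchemy Core Table for one of the three models (the table and columns below are made up). The literal_binds flag asks the compiler to inline parameter values, which is what a plain str() of the statement leaves out; it needs a reasonably recent SQLAlchemy.

        from sqlalchemy import Table, Column, Integer, String, MetaData
        from sqlalchemy.dialects import postgresql

        metadata = MetaData()
        model_a = Table("model_a", metadata,                 # hypothetical table definition
                        Column("id", Integer, primary_key=True),
                        Column("name", String))

        rows = [{"id": 1, "name": "foo"}, {"id": 2, "name": "bar"}]

        # Option 1: compile each INSERT with its values inlined and write it to a file
        with open("model_a.sql", "w") as f:
            for row in rows:
                stmt = model_a.insert().values(**row)
                sql = stmt.compile(dialect=postgresql.dialect(),
                                   compile_kwargs={"literal_binds": True})
                f.write(str(sql) + ";\n")

        # Option 2: skip the file and hand the whole list to one executemany-style call
        # (with an Engine bound), already far faster than flushing ORM objects one by one:
        # engine.execute(model_a.insert(), rows)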

    Read the article

  • Use $_FILES on a page called by .ajax

    - by RachelD
    I have two .php pages that I'm working with. Index.php has a file upload form that posts back to index.php. I can access the $_FILES no problem on index.php after submitting the form. My issue is that I want (after the form submit and the page loads) to use .ajax (jQuery) to call another .php file so that file can open and process some of the rows and return the results to ajax. The ajax then displays the results and recursively calls itself to process the next batch of rows. Basically I want to process (put in the DB etc) the CSV in chunks and display it for the user in between chunks. I'm doing it this way because the files are 400,000+ rows and the user doesn't want to wait the 10+ min for them all to be processed. I don't want to move this file (save it) because I just need to process it and throw it away, and if a user closes the page while it's processing, the file won't be thrown away. I could cron script it but I don't want to. What I would really like to do is pass the (single) $_FILES through .ajax OR save it in a $_POST or $_SESSION to use on the second page. Is there any hope for my cause? Here's the ajax code if that helps:

        function processCSV(startIndex, length) {
            $.ajax({
                url: "ajax-targets/process-csv.php",
                dataType: "json",
                type: "POST",
                data: { startIndex: startIndex, length: length },
                timeout: 60000, // 1000 = 1 sec
                success: function(data) {
                    // JQuery to display the rows from the CSV
                    var newStart = startIndex + length;
                    if (newStart <= data['csvNumRows']) {
                        processCSV(newStart, length);
                    }
                }
            });
        }
        processCSV(1, 2);
        });

    P.S. I did try this Passing $_FILES or $_POST to a new page with PHP but it's not working for me :( SOS.

    Read the article

  • Python script, runs well, but not perfectly, debugging help.

    - by S1syphus
    What it does (sort of), or is meant to: the script reads from a CSV file that contains information on sound files and creates a playlist exactly 60 minutes long. Each CSV row contains: the title, the duration (in seconds), and the minimum total time to be played (in minutes). An example is:

        Soundfoo,120,10
        Soundbar,30,6
        Sounddev,60,20
        Soundrandom,15,8

    The script works out the minimum instances of plays. Take 'Soundfoo' for example: the length of each sample is 120 seconds and the minimum time to be played is 10 minutes, so basic maths 10*60/120 gives the number of instances the song is to be played, in this case 5. It is meant to take the minimum number of instances and spread them out equally from each other, so there will never be a period where, for example, Soundbar is played twice in a row. Then, if the minimum instances of each song have been used and there is still time within the 60 min, how is it possible to tell it to go back and fill the time by selecting each sound and including it until the 60 min is filled, while remaining sparsely populated?

    Here's the issue(s)! The script fails to calculate the actual time required to play all the sounds in a file and the total time of the playlist. The thing is, though, it doesn't get it wrong all the time, maybe 3/5 times; even if I run it on the same CSV file it will give me different answers. Here is the file I shall run the script on, for the sake of ease, to see the issue:

        Sound1,60,10
        Sound2,60,10
        Sound3,60,10
        Sound4,60,10
        Sound5,60,10
        Sound6,60,10

    I'll do it three times and post the results:

        1) Required playtime in minutes: 60
           Actual time in minutes to play all required ads: 62
           Total playtime in minutes: 62.0
        2) Required playtime in minutes: 60
           Actual time in minutes to play all required ads: 71
           Total playtime in minutes: 71.0
        3) Required playtime in minutes: 60
           Actual time in minutes to play all required ads: 60
           Total playtime in minutes: 60.0

    Relevant Code: pastebin.com/demkBXk6 And finally... in context: http://pastebin.com/demkBXk6 If you made it down to here, thanks for staying and reading, kudos.
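    For a deterministic baseline, the instance arithmetic and the spreading can be checked in isolation; a sketch assuming the same three-column CSV (name, duration in seconds, minimum minutes):

        import csv
        import math

        sounds = []
        with open("sounds.csv") as f:
            for name, duration, min_minutes in csv.reader(f):
                duration, min_minutes = int(duration), int(min_minutes)
                plays = int(math.ceil(min_minutes * 60.0 / duration))   # minimum instances
                sounds.append((name, duration, plays))

        required_seconds = sum(d * p for _, d, p in sounds)
        print("Required playtime in minutes:", required_seconds / 60.0)

        # spread the copies: give each copy an evenly spaced fractional position,
        # then sort every copy of every sound by position; identical sounds end up far apart
        schedule = []
        for name, duration, plays in sounds:
            for i in range(plays):
                schedule.append(((i + 0.5) / plays, name, duration))
        schedule.sort()
        playlist = [(name, duration) for _, name, duration in schedule]

    Because nothing here is random, running it twice on the same CSV gives the same totals, which makes the "different answer each run" behaviour in the real script easier to corner.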

    Read the article

  • Java: Combine 2 List <String[]>

    - by battousai622
    I have two Lists of string arrays. I want to be able to create a new List (newList) by combining the 2 lists. But it must meet these 3 conditions:

    1) Copy the contents of store_inventory into newList.
    2) Then, if the item names in store_inventory & new_acquisitions match, just add the two quantities together and change it in newList.
    3) If new_acquisitions has a new item that does not exist in store_inventory, then add it to newList.

    The titles for the CSV list are: Item Name, Quantity, Cost, Price. The List contains a String[] of item name, quantity, cost and price for each row.

        CSVReader from = new CSVReader(new FileReader("/test/new_acquisitions.csv"));
        List<String[]> acquisitions = from.readAll();

        CSVReader to = new CSVReader(new FileReader("/test/store_inventory.csv"));
        List<String[]> inventory = to.readAll();

        List<String[]> newList;

    Any code to get me started would be great! =] This is what I have so far...

        for (int i = 0; i < acquisitions.size(); i++) {
            temp1 = acquisitions.get(i);
            for (int j = 1; j < inventory.size(); j++) {
                temp2 = inventory.get(j);
                if (temp1[0].equals(temp2[0])) {
                    //if match found... do something?
                    //break out of loop
                }
            }
            //if new item found... do something?
        }
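    A sketch of the merge using a map keyed on item name, which avoids the quadratic nested loops; it assumes column 0 is the name, column 1 is the quantity, and that any header row has already been skipped. It is meant to be dropped into the method where acquisitions and inventory are in scope (LinkedHashMap, ArrayList, List and Map come from java.util):

        Map<String, String[]> merged = new LinkedHashMap<String, String[]>();

        // start from the inventory, preserving its row order
        for (String[] row : inventory) {
            merged.put(row[0], row.clone());
        }

        // fold in the acquisitions: sum quantities for known items, append unknown ones
        for (String[] row : acquisitions) {
            String[] existing = merged.get(row[0]);
            if (existing == null) {
                merged.put(row[0], row.clone());
            } else {
                int total = Integer.parseInt(existing[1].trim()) + Integer.parseInt(row[1].trim());
                existing[1] = String.valueOf(total);
            }
        }

        List<String[]> newList = new ArrayList<String[]>(merged.values());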

    Read the article

  • Take Current Snapshot of DB and send it to FTP in same PHP Scripts: Advice Needed

    - by Rachel
    Not sure if I can do it this way. I want to get a current snapshot of the database and send it via FTP server; both of these should be implemented in PHP scripts. Here are the steps I am thinking of right now. In my PHP scripts (basically I am extending PDO into my Dao class and then preparing the query):

        $qry = SELECT * FROM MyTablename;
        $stmt = $this->prepare($qry);
        $stmt = $this->execute();

    Now I will store $stmt in a CSV file using fputcsv, or I will execute the SQL command from the script itself and then try to store the result in the $file (CSV file). Note here that I do not have any CSV file with me at this point, so basically I will have to create one; let's say it's $file, so then

        $file = fputcsv($stmt);

    or

        $file = exec("Select * from MyTablename");

    Will this put all records in the file? If yes, then I will use FTP functionality to transfer the file to the FTP folder. I am not sure if this approach would work and also have concerns regarding the need of preparing the $qry. Any suggestions or a different approach advised would be highly appreciated. Thanks !!!
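    A sketch of how these pieces usually fit together: fputcsv() wants an open file handle plus one array per row, so the statement is fetched row by row into the file, and only the finished file is pushed over FTP. Table name, paths and credentials below are placeholders, not values from the question:

        <?php
        // assumes $pdo is the PDO connection the Dao already wraps
        $stmt = $pdo->query('SELECT * FROM MyTablename');

        $path = '/tmp/snapshot_' . date('Ymd_His') . '.csv';
        $fh = fopen($path, 'w');
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            fputcsv($fh, $row);                  // one CSV line per result row
        }
        fclose($fh);

        // push the finished snapshot to the FTP folder
        $conn = ftp_connect('ftp.example.com');
        ftp_login($conn, 'user', 'password');
        ftp_pasv($conn, true);
        ftp_put($conn, 'snapshots/' . basename($path), $path, FTP_ASCII);
        ftp_close($conn);

    Preparing the query only really matters when values are bound into it; for a fixed SELECT * a plain query() is enough.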

    Read the article

  • Am I using handlers in the wrong way?

    - by superexsl
    Hey, I've never used HTTP handlers before, and I've got one working, but I'm not sure if I'm actually using it properly. I have generated a string which will be saved as a CSV file. When the user clicks a button, I want the download dialog box to open so that the user can save the file. What I have works, but I keep reading about modifying the web.config file and I haven't had to do that. My handler:

        private string _data;
        private string _title = "temp";

        public void AddData(string data)
        {
            _data = data;
        }

        public bool IsReusable
        {
            get { return false; }
        }

        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/csv";
            context.Response.AddHeader("content-disposition", "filename=" + _title + ".csv");
            context.Response.Write(_data);
            context.Response.Flush();
            context.Response.Close();
        }

    And this is from the page that allows the user to download (on button click):

        string dataToConvert = "MYCSVDATA....";
        csvHandler handler = new csvHandler();
        handler.AddData(dataToConvert);
        handler.ProcessRequest(this.Context);

    This works fine, but no examples I've seen ever instantiate the handler; they always seem to modify the web.config. Am I doing something wrong? Thanks
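    For reference, the web.config step in those examples maps a URL to the handler so ASP.NET constructs and invokes it for you; calling ProcessRequest yourself from the page, as above, works and simply skips that plumbing. A sketch of the conventional registration, with made-up type and assembly names; data would then reach the handler via query string or session rather than AddData, since the page never touches the instance ASP.NET creates:

        <!-- classic pipeline (IIS 6 / development web server) -->
        <system.web>
          <httpHandlers>
            <add verb="*" path="export.csv" type="MyApp.csvHandler, MyApp" />
          </httpHandlers>
        </system.web>

        <!-- the IIS 7 integrated pipeline uses system.webServer/handlers instead -->

    The page-side code then just redirects or links to export.csv instead of instantiating the handler.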

    Read the article

  • Object reference not set to an instance of an object.

    - by Lambo
    I have the following code:

        private static void convert()
        {
            string csv = File.ReadAllText("test.csv");
            string year = "2008";
            XDocument doc = ConvertCsvToXML(csv, new[] { "," });
            doc.Save("update.xml");
            XmlTextReader reader = new XmlTextReader("update.xml");
            XmlDocument testDoc = new XmlDocument();
            testDoc.Load(@"update.xml");
            XDocument turnip = XDocument.Load("update.xml");
            webservice.singleSummary[] test = new webservice.singleSummary[1];
            webservice.FinanceFeed CallWebService = new webservice.FinanceFeed();
            foreach (XElement el in turnip.Descendants("row"))
            {
                test[0].account = el.Descendants("var").Where(x => (string)x.Attribute("name") == "account").SingleOrDefault().Attribute("value").Value;
                test[0].actual = System.Convert.ToInt32(el.Descendants("var").Where(x => (string)x.Attribute("name") == "actual").SingleOrDefault().Attribute("value").Value);
                test[0].commitment = System.Convert.ToInt32(el.Descendants("var").Where(x => (string)x.Attribute("name") == "commitment").SingleOrDefault().Attribute("value").Value);
                test[0].costCentre = el.Descendants("var").Where(x => (string)x.Attribute("name") == "costCentre").SingleOrDefault().Attribute("value").Value;
                test[0].internalCostCentre = el.Descendants("var").Where(x => (string)x.Attribute("name") == "internalCostCentre").SingleOrDefault().Attribute("value").Value;
                MessageBox.Show(test[0].account, "Account");
                MessageBox.Show(System.Convert.ToString(test[0].actual), "Actual");
                MessageBox.Show(System.Convert.ToString(test[0].commitment), "Commitment");
                MessageBox.Show(test[0].costCentre, "Cost Centre");
                MessageBox.Show(test[0].internalCostCentre, "Internal Cost Centre");
                CallWebService.updateFeedStatus(test, year);
            }
        }

    It is coming up with the error NullReferenceException was unhandled, saying that the object reference was not set to an instance of an object. The error occurs on the first line using test[0].account. How can I get past this?
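    The exception is a symptom of how reference-type arrays start out: new webservice.singleSummary[1] allocates one slot that holds null, so test[0].account dereferences null on the first assignment. A minimal sketch of the fix, before the loop (or inside it, if each row should get a fresh instance):

        webservice.singleSummary[] test = new webservice.singleSummary[1];
        test[0] = new webservice.singleSummary();   // the slot is null until an instance is assigned
        // test[0].account = ... is now safe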

    Read the article

  • How should I organize complex SQL views in Rails?

    - by Benjamin Oakes
    I manage a research database with Ruby on Rails. The data that is entered is primarily used by scientists who prefer to have all the relevant information for a study in one single massive table for use in their statistics software of choice. I'm currently presenting it as CSV, as it's very straightforward to do and compatible with the tools people want to use. I've written many views (the SQL kind, not the Rails HTML/ERB kind) to make the output they expect a reality. Some of these views are quite large and have a fair amount of complexity behind them. I wrote them in SQL because there are many calculations and comparisons that are more easily done with SQL. They're currently loaded into the database straight from a file named views.sql. To get the requested data, I do a select * from my_view;. The views.sql file is getting quite large. Part of the problem is that we're still figuring out what the data we collect means, so there's a lot of changes being made to the views all the time -- and a ton of them are being created. Many of them need to be repeatable. I've recently run into issues organizing and testing these views. Rails works great for user interface stuff and business logic, but I'm not aware of much existing structure for handling the reporting we require. Some options I've thought of: Should I move them into the most relevant models somehow? Several of the views interact with each other, which makes this situation more complex than just doing a single find_by_sql, so I don't know if they should only be part of the model. Perhaps they should be treated as a "view" in the MVC sense? (That is, they could be moved into app/views/ and live alongside the HTML, perhaps as files named something like my_view.csv.sql which return CSV.) How would you deal with a complex reporting problem like this?
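    One pattern that helps with the "views.sql keeps growing and nothing is versioned" side of this, as a sketch: give each reporting view its own migration (or a rake task that re-runs a per-view SQL file), so changes are repeatable, diffable and testable like the rest of the app. The view and column names here are invented:

        class CreateStudySummaryView < ActiveRecord::Migration
          def self.up
            execute <<-SQL
              CREATE OR REPLACE VIEW study_summary AS
              SELECT participants.id,
                     participants.enrolled_on,
                     COUNT(observations.id) AS observation_count
              FROM participants
              LEFT JOIN observations ON observations.participant_id = participants.id
              GROUP BY participants.id, participants.enrolled_on
            SQL
          end

          def self.down
            execute "DROP VIEW study_summary"
          end
        end

    Each view then has a home in version control, and a functional test can exercise it with a find_by_sql (or a thin model pointed at the view via set_table_name) before the scientists ever see the CSV.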

    Read the article

  • Extract data from PostgreSQL DB without using pg_dump

    - by John Horton
    There is a PostgreSQL database on which I only have limited access (e.g., I can't use pg_dump). I am trying to create a local "mirror" by exporting certain tables from the database. I do not have the permissions needed to just dump a table as SQL from within psql. Right now, I just have a Python script that iterates through my table_names, selects all fields and then exports them as a CSV:

        for table_name, file_name in zip(table_names, file_names):
            cmd = """echo "\\\copy (select * from %s)" to stdout WITH CSV HEADER | psql -d remote_db | gzip > ./%s/%s.gz""" % (table_name, dir_name, file_name)
            os.system(cmd)

    I would like to not use CSV if possible, as I lose the field types and the encoding can get messed up. First best would probably be some way of getting the generating SQL code for the table using \copy. Next best would be XML, ideally with some way of preserving the field types. If that doesn't work, I think the final option might be two queries: one to get the field data types, the other to get the actual data. Any thoughts or advice would be greatly appreciated - thanks!
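    On the two-query idea, a sketch using psycopg2: pull the column definitions from information_schema first, then stream the data out server-side with copy_expert. The DSN and output layout are placeholders; this assumes ordinary SELECT rights on the tables and on information_schema:

        import gzip
        import psycopg2

        conn = psycopg2.connect("dbname=remote_db host=dbhost user=me")   # placeholder DSN
        cur = conn.cursor()

        for table_name in table_names:
            # query 1: column names and types, enough to recreate the table locally
            cur.execute("""
                SELECT column_name, data_type, is_nullable
                FROM information_schema.columns
                WHERE table_name = %s
                ORDER BY ordinal_position
            """, (table_name,))
            with open("%s.schema" % table_name, "w") as f:
                for name, dtype, nullable in cur.fetchall():
                    f.write("%s\t%s\t%s\n" % (name, dtype, nullable))

            # query 2: the data itself, streamed as CSV without shelling out to psql
            with gzip.open("%s.csv.gz" % table_name, "wb") as f:
                cur.copy_expert("COPY (SELECT * FROM %s) TO STDOUT WITH CSV HEADER" % table_name, f)

        conn.close()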

    Read the article

  • Using a function found in a different file in a loop

    - by Anders
    This question is related to BuddyPress, and is a follow-up question from this question. I have a .csv file with 790 rows and 3 columns, where the first column is the group name, the second is the group description and the last (third) the slug. As far as I've been told I can use this code:

        <?php
        $groups = array();
        if (($handle = fopen("groupData.csv", "r")) !== FALSE) {
            while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
                $group = array(
                    'group_id'     => 'SOME ID',
                    'name'         => $data[0],
                    'description'  => $data[1],
                    'slug'         => groups_check_slug(sanitize_title(esc_attr($data[2]))),
                    'date_created' => gmdate("Y-m-d H:i:s"),
                    'status'       => 'public'
                );
                $groups[] = $group;
            }
            fclose($handle);
        }
        foreach ($groups as $group) {
            groups_create_group($group);
        }

    With http://www.nomorepasting.com/getpaste.php?pasteid=35217 which is called bp-groups.php. The thing is that I can't make it work. I've created a new file with the code written above called groupgenerator.php, uploaded the .csv file to the same folder and opened groupgenerator.php in my browser. But, I get this error:

        Fatal error: Call to undefined function groups_check_slug() in

    What am I doing wrong?
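    The undefined function is usually just a question of where the script runs: groups_check_slug() and groups_create_group() only exist once WordPress and BuddyPress are loaded, and a standalone groupgenerator.php opened directly in the browser never loads them. A sketch of the common workaround, bootstrapping WordPress first; the relative path to wp-load.php is an assumption about where the script sits:

        <?php
        // groupgenerator.php sitting in the WordPress root; adjust the path otherwise
        require_once dirname(__FILE__) . '/wp-load.php';

        if (!function_exists('groups_create_group')) {
            die('BuddyPress is not active, so the groups_* functions are unavailable.');
        }

        // ... the fopen/fgetcsv loop from above, then groups_create_group($group); ...

    Wrapping the loop in a tiny plugin hooked on bp_init would achieve the same thing without the manual bootstrap.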

    Read the article

  • convert jruby 1.8 string to windows encoding?

    - by Arne
    Hey, I want to export some data from my JRuby on Rails webapp to Excel, so I create a CSV string and send it as a download to the client using

        send_data(text, :filename => "file.csv", :type => "text/csv; charset=CP1252", :encoding => "CP1252")

    The file seems to be in UTF-8, which Excel cannot read correctly. I googled the problem and found that iconv can convert encodings. I try to do that with:

        ic = Iconv.new('CP1252', 'UTF-8')
        text = ic.iconv(text)

    but when I send the converted text it does not make any difference. It is still UTF-8 and Excel cannot read the special characters. There are several solutions using iconv, so this seems to work for others. When I convert the file on the Linux shell manually with iconv it works. What am I doing wrong? Is there a better way? I'm using:

        - jruby 1.3.1 (ruby 1.8.6p287) (2009-06-15 2fd6c3d) (Java HotSpot(TM) Client VM 1.6.0_19) [i386-java]
        - Debian Lenny
        - Glassfish app server
        - Iceweasel 3.0.6
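    For what it's worth, a sketch of the conversion done immediately before the send, with the advertised charset matching the converted bytes; as far as I know send_data has no :encoding option that converts anything, so whatever string reaches it is exactly what the browser receives. The //TRANSLIT suffix (approximate characters CP1252 cannot represent) is optional, and build_csv_string is a placeholder for however the CSV text is produced:

        require 'iconv'

        def export
          csv = build_csv_string                                     # existing UTF-8 CSV text
          csv = Iconv.conv('WINDOWS-1252//TRANSLIT', 'UTF-8', csv)
          send_data csv,
                    :filename => 'file.csv',
                    :type     => 'text/csv; charset=windows-1252'
        end

    If the converted string still arrives as UTF-8, it is worth checking that no filter or middleware re-encodes the response body on the way out.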

    Read the article

  • Best way to migrate export/import from SQL Server to oracle

    - by matao
    Hi guys! I'm faced with needing access, for reporting, to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from SQL Server to Oracle and I'd like some advice on the best way to go about it... The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (Oracle) side as the firewall is in the way. The data needs to be transformed in the process from a star schema to a de-normalised table ready for reporting.

    What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data from SQL Server into a flat file using the SQL Server equivalent of sqlplus as a scheduled task, dump it into a Well Known Location, then on the Oracle side have a cron job that copies down the file, loads it with SQL*Loader and rebuilds indexes etc. This is all doable, but very manual. Is there one, or a combination of, FOSS or standard Oracle/SQL Server tools that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love to not have to write the CSV dumping detail or the SQL*Loader script: just say dump this view out to CSV on one side, and on the other truncate and insert into this table from CSV, and not worry about mapping column names and all the other arcane sqlldr voodoo... Best practices? Thoughts? Comments?

    Edit: I have about 50+ columns, all of varying types and lengths, in my dataset, which is why I'd prefer to not have to write out how to generate and map each single column...

    Read the article

  • PostgreSQL storing paths for reference in scripts

    - by Brian D.
    I'm trying to find the appropriate place to store a system path in PostgreSQL. What I'm trying to do is load values into a table using the COPY command. However, since I will be referring to the same file path regularly, I want to store that path in one place. I've tried creating a function to return the appropriate path, but I get a syntax error when I call the function in the COPY command. I'm not sure if this is the right way to go about it, but I'll post my code anyway.

    COPY command:

        COPY employee_scheduler.countries (code, name)
        FROM get_csv_path('countries.csv')
        WITH CSV;

    Function definition:

        CREATE OR REPLACE FUNCTION employee_scheduler.get_csv_path(IN file_name VARCHAR(50))
        RETURNS VARCHAR(250) AS $$
        DECLARE
            path VARCHAR(200) := E'C:\\Brian\\Work\\employee_scheduler\\database\\csv\\';
            file_path VARCHAR(250) := '';
        BEGIN
            file_path := path || file_name;
            RETURN file_path;
        END;
        $$ LANGUAGE plpgsql;

    If anyone has a different idea on how to accomplish this I'm open to suggestions. Thanks for any help!
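    The syntax error is expected: COPY takes a literal, quoted filename, not an expression or a function call. A sketch of the usual workaround, wrapping the COPY itself in a plpgsql function that assembles the statement dynamically with EXECUTE (this still needs the same server-side file permissions a plain COPY does, and the target table is hard-coded here for brevity):

        CREATE OR REPLACE FUNCTION employee_scheduler.load_countries(file_name VARCHAR)
        RETURNS void AS $$
        BEGIN
            EXECUTE 'COPY employee_scheduler.countries (code, name) FROM '
                    || quote_literal(employee_scheduler.get_csv_path(file_name))
                    || ' WITH CSV';
        END;
        $$ LANGUAGE plpgsql;

        -- usage
        SELECT employee_scheduler.load_countries('countries.csv');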

    Read the article

  • Reflection and Generics get value of property

    - by GigaPr
    Hi, I am trying to implement a generic method which allows output to a CSV file:

        public static void WriteToCsv<T>(List<T> list) where T : new()
        {
            const string attachment = "attachment; filename=PersonList.csv";
            HttpContext.Current.Response.Clear();
            HttpContext.Current.Response.ClearHeaders();
            HttpContext.Current.Response.ClearContent();
            HttpContext.Current.Response.AddHeader("content-disposition", attachment);
            HttpContext.Current.Response.ContentType = "text/csv";
            HttpContext.Current.Response.AddHeader("Pragma", "public");

            bool isFirstRow = true;
            foreach (T item in list)
            {
                //Get public properties
                PropertyInfo[] propertyInfo = item.GetType().GetProperties();
                while (isFirstRow)
                {
                    WriteColumnName(propertyInfo);
                    isFirstRow = false;
                }
                Type type = typeof (T);
                StringBuilder stringBuilder = new StringBuilder();
                foreach (PropertyInfo info in propertyInfo)
                {
                    //string value ???? I am trying to get the value of the property info for the item object
                }
                HttpContext.Current.Response.Write(stringBuilder.ToString());
                HttpContext.Current.Response.Write(Environment.NewLine);
            }
            HttpContext.Current.Response.End();
        }

    but I am not able to get the value of the object's property. Any suggestion? Thanks
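    Filling in that inner loop: PropertyInfo.GetValue reads the property from the current item (the second argument is the indexer array, which is null for ordinary, non-indexed properties). A sketch, leaving out escaping of commas inside values:

        foreach (PropertyInfo info in propertyInfo)
        {
            object value = info.GetValue(item, null);   // this property's value on the current item
            stringBuilder.Append(value == null ? string.Empty : value.ToString());
            stringBuilder.Append(",");
        }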

    Read the article

  • perl multithreading issue for autoincrement

    - by user3446683
    I'm writing a multi-threaded Perl script and storing the output in a CSV file. I'm trying to insert a field called sl. no. (serial number) in the CSV file for each row entered, but as I'm using threads, the sl. no. overlaps in most rows. Below is an idea of my code snippet:

        for ( my $count = 1 ; $count <= 10 ; $count++ ) {
            my $t = threads->new( \&sub1, $count );
            push( @threads, $t );
        }
        foreach (@threads) {
            my $num = $_->join;
        }

        sub sub1 {
            my $num   = shift;
            my $start = '...';    # distributing data based on an internal logic
            my $end   = '...';    # distributing data based on an internal logic
            my $next;
            for ( my $x = $start ; $x <= $end ; $x++ ) {
                my $count = $x + 1;
                # part of code from which I get @data which has name and age
                my $j = 0;
                if ( $x != 0 ) {
                    $count = $next;
                }
                foreach (@data) {
                    # j is required here for some extra code
                    flock( OUTPUT, LOCK_EX );
                    print OUTPUT $count . "," . $name . "," . $age . "\n";
                    flock( OUTPUT, LOCK_UN );
                    $j++;
                    $count++;
                }
                $next = $count;
            }
            return $num;
        }

    I need the count to be incremented, which is the serial number for the rows that will be inserted in the CSV file. Any help would be appreciated.
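    A sketch of one way to get a single global serial number: keep the counter in a threads::shared variable and bump it under lock() in the same critical section that writes the line, so no two threads can hand out the same number. OUTPUT is assumed to be the already-opened file handle from the original script:

        use strict;
        use warnings;
        use threads;
        use threads::shared;
        use Fcntl qw(:flock);

        my $serial :shared = 0;

        sub write_row {
            my ( $name, $age ) = @_;
            lock($serial);                       # one thread at a time past this point
            $serial++;
            flock( OUTPUT, LOCK_EX );
            print OUTPUT "$serial,$name,$age\n";
            flock( OUTPUT, LOCK_UN );
        }

    Every thread then calls write_row($name, $age) instead of printing directly, and the numbering comes out unique and gap-free regardless of which thread gets there first.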

    Read the article

  • Query crashes MS Access

    - by user284651
    THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS Access and merge them into one. The output must be in the form of a TAB or CSV file, which will then be imported into Maximizer.

    THE PROBLEM: Access is unable to perform a query that is so complex, it seems, as it crashes any time I run the query.

    ALTERNATIVES: I have thought about a few alternatives, and would like to do the least time-consuming one out of these, while also taking advantage of any opportunities to learn something new.

    1. Export each table into CSVs and import into SQLite, then write a query with it to do the same as what Access fails to do (merge 64 tables).
    2. Export each table into CSVs and write a script to access each one and merge the CSVs into a single CSV.
    3. Somehow connect to the MS Access DB (API), and write a script to pull data from each table and merge them into a CSV file.

    QUESTION: What do you recommend?
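    A sketch of alternative 3 in Python over ODBC, since it skips the 64 intermediate CSV exports; the driver string is the stock Jet/Access one on Windows, and the .mdb path and the way rows are combined are placeholders (a real merge into one Maximizer-shaped table still needs per-table column mapping):

        import csv
        import pyodbc

        conn = pyodbc.connect(r"Driver={Microsoft Access Driver (*.mdb)};Dbq=C:\data\crm.mdb;")
        cur = conn.cursor()

        table_names = [row.table_name for row in cur.tables(tableType="TABLE")]

        with open("merged.csv", "wb") as f:          # "wb" keeps the csv module happy on Windows/Python 2
            writer = csv.writer(f)
            for table in table_names:
                cur.execute("SELECT * FROM [%s]" % table)
                for row in cur.fetchall():
                    # prefix each row with its source table; the real column mapping goes here
                    writer.writerow([table] + list(row))

        conn.close()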

    Read the article

  • How can I read a continuously updating log file in Perl?

    - by Octopus
    I have an application generating logs every 5 seconds. The logs are in the format below:

        11:13:49.250,interface,0,RX,0
        11:13:49.250,interface,0,TX,0
        11:13:49.250,interface,1,close,0
        11:13:49.250,interface,4,error,593
        11:13:49.250,interface,4,idle,2994215

    and so on for other interfaces... I am working to convert these into the CSV format below:

        Time,interface.RX,interface.TX,interface.close....
        11:13:49,0,0,0,....

    Simple as of now, but the problem is I have to get the data in CSV format online, i.e. as soon as the log file is updated the CSV should also be updated. What I have tried, to read the output and build the header, is:

        #!/usr/bin/perl -w
        use strict;
        use File::Tail;

        my $head=["Time"];
        my $pos={};
        my $last_pos=0;
        my $current_event=[];
        my $events=[];
        my $file = shift;
        $file = File::Tail->new($file);
        while(defined($_=$file->read)) {
            next if $_ =~ some filters;
            my ($time,$interface,$count,$eve,$value) = split /[,\n]/, $_;
            my $key = $interface.".".$eve;
            if (not defined $pos->{$eve_key}) {
                $last_pos+=1;
                $pos->{$eve_key}=$last_pos;
                push @$head,$eve;
            }
            print join(",", @$head) . "\n";
        }

    Is there any way to do this using Perl?
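    A sketch of the pivot step on top of File::Tail: accumulate one hash of values per timestamp, keyed by interface.index.event, and emit a CSV row whenever the timestamp changes. Column order is simply the order keys are first seen, the header would need reprinting whenever a brand-new column shows up, and the filtering from the original script is left out:

        use strict;
        use warnings;
        use File::Tail;

        my $file = File::Tail->new(shift);

        my @cols;            # column order, discovered as keys are first seen
        my %seen;
        my %row;             # values for the timestamp currently being assembled
        my $current_ts = '';

        while ( defined( $_ = $file->read ) ) {
            chomp;
            my ( $time, $iface, $idx, $event, $value ) = split /,/;
            my ($ts) = split /\./, $time;                 # drop the milliseconds
            my $col  = "$iface.$idx.$event";

            if ( $ts ne $current_ts ) {
                if ($current_ts) {
                    print join( ',', $current_ts,
                        map { defined $row{$_} ? $row{$_} : '' } @cols ), "\n";
                }
                %row        = ();
                $current_ts = $ts;
            }
            push @cols, $col unless $seen{$col}++;
            $row{$col} = $value;
        }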

    Read the article

  • Pick up relevant information from a string using regular expression C#3.0

    - by Newbie
    Hi, I have a situation. I have been given some file names which can be like

        <filename>YYYYMMDD<fileextension>

    Some valid file names that will satisfy the above pattern are:

        xxx20100326.xls, xxx2v20100326.csv, x_20100326.xls, xy2z_abc_20100326_xyz.csv, abc.xyz.20100326.doc, ab2.v.20100326.doc, abc.v.20100326_xyz.xls

    Whatever the case above, I need to pick up the dates only. So for all of the cases, the output will be 20100326. I am trying to achieve the same but with no luck. Here is what I have done so far:

        string testdata = "x2v20100326.csv";
        string strYYYY = @"\d{4}";
        string strMM = @"(1[0-2]|0[1-9])";
        string strDD = @"(3[0-1]|[1-2][0-9]|0[1-9])";
        string regExPattern = @"\A" + strYYYY + strMM + strDD + @"\Z";
        Regex regex = new Regex(regExPattern);
        Match match = regex.Match(testdata);
        if (match.Success)
        {
            string result = match.Groups[0].Value;
        }

    I am using C# 3.0 and .NET Framework 3.5. Please help, it is very urgent. Thanks in advance.
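    A sketch of the likely fix: the \A and \Z anchors demand that the entire test string be the date, so the pattern can never match a whole file name. Dropping the anchors lets Regex.Match find the date anywhere in the name (guarding lookarounds such as (?<!\d) and (?!\d) can be added if stray digit runs next to the date are a worry):

        using System;
        using System.Text.RegularExpressions;

        class DateFromFileName
        {
            static void Main()
            {
                string strYYYY = @"\d{4}";
                string strMM   = @"(1[0-2]|0[1-9])";
                string strDD   = @"(3[0-1]|[1-2][0-9]|0[1-9])";

                // no \A ... \Z: the date may sit anywhere inside the file name
                string pattern = strYYYY + strMM + strDD;

                foreach (string name in new[] { "xxx20100326.xls", "xy2z_abc_20100326_xyz.csv", "abc.v.20100326_xyz.xls" })
                {
                    Match m = Regex.Match(name, pattern);
                    Console.WriteLine(m.Success ? m.Value : "no date");   // prints 20100326
                }
            }
        }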

    Read the article

  • Can someone answer this for me?

    - by Dcurvez
    Okay, I am totally stuck. I have been getting some help off and on throughout this project and am anxious to get this problem solved so I can continue on with the rest of it. I have a gridview that is set to save to a file, and has the option to import into Excel. I keep getting this error:

        InvalidCastException was unhandled. At least one element in the source array could not be cast down to the destination array type.

    Can anyone tell me, in layman's terms, what this error is speaking of? This is the code I am trying to use:

        Dim fileName As String = ""
        Dim dlgSave As New SaveFileDialog
        dlgSave.Filter = "Text files (*.txt)|*.txt|CSV Files (*.csv)|*.csv"
        dlgSave.AddExtension = True
        dlgSave.DefaultExt = "txt"
        If dlgSave.ShowDialog = Windows.Forms.DialogResult.OK Then
            fileName = dlgSave.FileName
            SaveToFile(fileName)
        End If
        End Sub

        Private Sub SaveToFile(ByVal fileName As String)
            If DataGridView1.RowCount > 0 AndAlso DataGridView1.Rows(0).Cells(0) IsNot Nothing Then
                Dim stream As New System.IO.FileStream(fileName, IO.FileMode.Append, IO.FileAccess.Write)
                Dim sw As New System.IO.StreamWriter(stream)
                For Each row As DataGridViewRow In DataGridView1.Rows
                    Dim arrLine(9) As String
                    Dim line As String
                    **row.Cells.CopyTo(arrLine, 0)**
                    line = arrLine(0)
                    line &= ";" & arrLine(1)
                    line &= ";" & arrLine(2)
                    line &= ";" & arrLine(3)
                    line &= ";" & arrLine(4)
                    line &= ";" & arrLine(5)
                    line &= ";" & arrLine(6)
                    line &= ";" & arrLine(7)
                    line &= ";" & arrLine(8)
                    sw.WriteLine(line)
                Next
                sw.Flush()
                sw.Close()
            End If

    I bolded the line where it shows in debug, and I really don't see what all the fuss is about LOL
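    In layman's terms: row.Cells is a collection of DataGridViewCell objects, and CopyTo is being asked to pour those cell objects into a String() array, which is the cast that cannot be made. A sketch of reading each cell's Value instead (keeping the semicolon separator from the original):

        For Each row As DataGridViewRow In DataGridView1.Rows
            Dim parts As New List(Of String)
            For Each cell As DataGridViewCell In row.Cells
                ' take the cell's value, not the cell object itself
                Dim text As String = ""
                If cell.Value IsNot Nothing Then text = cell.Value.ToString()
                parts.Add(text)
            Next
            sw.WriteLine(String.Join(";", parts.ToArray()))
        Next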

    Read the article

  • Updating Cells in a DataTable

    - by Maxim Z.
    I'm writing a small app to do a little processing on some cells in a CSV file I have. I've figured out how to read and write CSV files with a library I found online, but I'm having trouble: the library parses CSV files into a DataTable, but, when I try to change a cell of the table, it isn't saving the change in the table! Below is the code in question. I've separated the process into multiple variables and renamed some of the things to make it easier to debug for this question.

    Code inside the loop:

        string debug1 = readIn.Rows[i].ItemArray[numColumnToCopyTo].ToString();
        string debug2 = readIn.Rows[i].ItemArray[numColumnToCopyTo].ToString().Trim();
        string debug3 = readIn.Rows[i].ItemArray[numColumnToCopyFrom].ToString().Trim();
        string towrite = debug2 + ", " + debug3;
        readIn.Rows[i].ItemArray[numColumnToCopyTo] = (object)towrite;

    After the loop:

        readIn.AcceptChanges();

    When I debug my code, I see that towrite is being formed correctly and everything's OK, except that the row isn't updated: why isn't it working? I have a feeling that I'm making a simple mistake here: the last time I worked with DataTables (quite a long time ago), I had similar problems. If you're wondering why I'm adding another comma in towrite, it's because I'm combining a street address field with a zip code field - I hope that's not messing anything up. My code is kind of messy, as I'm only trying to edit one file to make a small fix, so sorry.
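    The likely culprit is that DataRow.ItemArray returns a copy of the row's values, so assigning into an element of that copy never reaches the table. A sketch of writing through the row's indexer instead:

        DataRow row = readIn.Rows[i];
        string towrite = row[numColumnToCopyTo].ToString().Trim()
                       + ", "
                       + row[numColumnToCopyFrom].ToString().Trim();
        row[numColumnToCopyTo] = towrite;     // indexer writes land in the table itself

    (Setting row.ItemArray = wholeModifiedArray also works, but only when the complete array is assigned back in one go.)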

    Read the article

  • Can't turn off Redirected Access on Cluster Shared Volumes 2008 R2 Failover Clustering

    - by 562networks
    I read up on LH Mode and am still boggled by what it is and what it does. I pass all validation in the Failover Cluster wizard, but in the Event Viewer I get errors for Event IDs 5121 and 1034 related to one of the disks that is in the CSV for my Hyper-V machines. We have two disks in the CSV for our Hyper-V farm. Everything seems to work just fine, but I'm worried about the Event Viewer errors. I have also read that people are having problems like mine turning off Redirected Access.

    Read the article

  • IOError: [Errno 32] Broken pipe

    - by khati
    I got "IOError: [Errno 32] Broken pipe" while writing files on Linux. I am using Python to read each line of a CSV file and then write it into a database table. My code is:

        f = open(path, 'r')
        command = command to connect to database
        p = Popen(command, shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE, env=env)
        query = " COPY myTable( id, name, address) FROM STDIN WITH DELIMITER ';' CSV QUOTE '"'; "
        p.stdin.write(query.encode('ascii'))

    Here exactly is where I get the error:

        p.stdin.write(query.encode('ascii'))
        IOError: [Errno 32] Broken pipe

    So when I run this program on Linux, I get the error "IOError: [Errno 32] Broken pipe". However, this works fine when I run it on Windows 7. Do I need to do some configuration on the Linux server? Any suggestions will be appreciated. Thank you.
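    A broken pipe here just means the psql process on the other end exited (or never accepted input) before the write happened; the real reason is in its stderr. A sketch of feeding the query with communicate() so the exit code and error text are surfaced instead of the secondary IOError:

        out, err = p.communicate(query.encode('ascii'))   # writes stdin, closes it, waits for exit
        if p.returncode != 0:
            # the actual cause (bad connection string, authentication, SQL error...) shows up here
            print("psql exited with %d: %s" % (p.returncode, err))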

    Read the article

  • Migrate AD DS Server 2003 to Server 2008 R2

    - by user2566483
    I would like to get a couple of opinions. I found this article online and wanted to know if it is good to follow: http://www.msserverpro.com/migrating-active-directory-domain-controller-from-windows-server-2003-sp2-to-windows-server-2008-r2/

    A couple of things need to be done:

    1. Move all Active Directory settings from the old Server 2003 server to the new Server 2008 R2 server.
    2. Set up all users on the new server using csvde:

        csvde -f output.csv        (on the old server)
        csvde -i -f output.csv     (on the new server)

    Any advice would be greatly appreciated.

    Read the article

  • Windows Handling Piped Comands Error Redirection

    - by jpmartins
    Warning: I am no expert on building scripts, and sorry for the lousy English. In one case, generating a CSV from a database query, I'm using the following commands:

        ...
        CALL java.exe -classpath ... com.xigole.util.sql.Jisql -user dmfodbc -pf pwd.file -driver com.sybase.jdbc3.jdbc.SybDriver -cstring %constr% -c ; -input 42.sql -formatter csv -delimiter ; 2>>%LOGFILE% | CALL grep -v -e "SELECT right" -e "executing: " -e " rows affect" > %FicheiroR% 2>>%LOGFILE%
        ...

    I'm using a Windows implementation of grep. The 2>>%LOGFILE% in both the java and grep commands is causing an error message indicating the file is being used by another process. The ugly workaround I have come up with is to redirect grep's errors to a temporary %LOGFILE%.aux:

        java ... | grep ... 2>>%LOGFILE%.aux
        type %LOGFILE%.aux >> %LOGFILE%
        del %LOGFILE%.aux

    What is a better solution?
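    A sketch of one way to avoid two processes opening the log at the same time: group the whole pipeline in parentheses and apply a single stderr redirection to the group, so cmd.exe opens the file once for both sides (assuming appending everything to the one log is the desired behaviour):

        REM both java's and grep's stderr funnel through the one redirection
        (
          CALL java.exe -classpath ... com.xigole.util.sql.Jisql ... -input 42.sql -formatter csv -delimiter ; | CALL grep -v -e "SELECT right" -e "executing: " -e " rows affect" > %FicheiroR%
        ) 2>> %LOGFILE%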

    Read the article
