Search Results

Search found 4705 results on 189 pages for 'export to csv'.


  • The system cannot find the path specified with FileWriter

    - by Nazgulled
    Hi, I have this code:

        private static void saveMetricsToCSV(String fileName, double[] metrics) {
            try {
                FileWriter fWriter = new FileWriter(
                    System.getProperty("user.dir") + "\\output\\" +
                    fileTimestamp + "_" + fileDBSize + "-" + fileName + ".csv"
                );
                BufferedWriter csvFile = new BufferedWriter(fWriter);

                for (int i = 0; i < 4; i++) {
                    for (int j = 0; j < 5; j++) {
                        csvFile.write(String.format("%,10f;", metrics[i + j]));
                    }
                    csvFile.write(System.getProperty("line.separator"));
                }

                csvFile.close();
            } catch (IOException e) {
                System.out.println(e.getMessage());
            }
        }

    But I get this error:

        C:\Users\Nazgulled\Documents\Workspace\Só Amigos\output\1274715228419_5000-List-ImportDatabase.csv (The system cannot find the path specified)

    Any idea why? I'm using NetBeans on Windows 7 if it matters...
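    A likely cause, for what it's worth: FileWriter does not create missing directories, so if the output folder does not exist yet, opening the file fails with exactly this message. A minimal sketch of one possible fix, keeping the question's field names:

        // create the output directory (and any missing parents) before writing
        File outputDir = new File(System.getProperty("user.dir"), "output");
        if (!outputDir.exists()) {
            outputDir.mkdirs();
        }
        FileWriter fWriter = new FileWriter(
            new File(outputDir, fileTimestamp + "_" + fileDBSize + "-" + fileName + ".csv")
        );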

  • Generating an XLS sheet from MySQL, but the sheet contains HTML code that needs to be removed

    - by pmms
    The following is the code for generating the XLS sheet from MySQL:

        <?php
        if ($_POST['Submit'] == 'Generatexml') {
            $tblname = $_GET['genratexml'];

            //mysql_connect("localhost", "root", "");
            //mysql_select_db("hitnrunf_db");
            global $obj_mysql;

            $result = mysql_query("SELECT * FROM tbl_js_login");
            while ($row = mysql_fetch_array($result)) {
                $csv_output .= "$row[fld_id],$row[fld_fname],$row[fld_lname]";
                $csv_output .= "\015\012";
            }

            header("Content-type: application/vnd.ms-excel");
            header("Content-disposition: csv; filename=Student_Data_" . date("Y-m-d") . ".csv");
            print $csv_output;
            exit;
        }
        include_once $path . "includes/jobseeker_form.php";
        ?>

    The following screenshot shows the error we are getting:

    http://www.eminosoft.com/screenshot/xlsheet.JPG
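    If the page emits any HTML before this block runs (a previous include, a template header), that markup ends up inside the download. A sketch of one way to guard against it, assuming output buffering may be active:

        // discard anything already buffered so the download contains only CSV
        while (ob_get_level() > 0) {
            ob_end_clean();
        }
        header("Content-type: application/vnd.ms-excel");
        header("Content-Disposition: attachment; filename=Student_Data_" . date("Y-m-d") . ".csv");
        print $csv_output;
        exit;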

  • How to format a date when loading data from Google App Engine

    - by zjm1126
    I use remote_api to download data from Google App Engine:

        appcfg.py download_data --config_file=helloworld/GreetingLoad.py --filename=a.csv --kind=Greeting helloworld

    The setting is:

        class AlbumExporter(bulkloader.Exporter):
            def __init__(self):
                bulkloader.Exporter.__init__(self, 'Greeting', [
                    ('author', str, None),
                    ('content', str, None),
                    ('date', str, None),
                ])

        exporters = [AlbumExporter]

    In the a.csv I download, the date is not readable, while the date shown in the appspot.com admin is complete. So how do I get the full date? Thanks.

    I changed this to:

        class AlbumExporter(bulkloader.Exporter):
            def __init__(self):
                bulkloader.Exporter.__init__(self, 'Greeting', [
                    ('author', str, None),
                    ('content', str, None),
                    ('date', lambda x: datetime.datetime.strptime(x, '%m/%d/%Y').date(), None),
                ])

        exporters = [AlbumExporter]

    but the error is:
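    For what it's worth, an Exporter converts entity values into strings for the CSV, so strptime (which parses a string into a date) points the wrong way here; formatting the value is more likely what is needed. A sketch, assuming the 'date' property holds a datetime:

        # format the datetime as a string on export instead of parsing it
        ('date', lambda d: d.strftime('%Y-%m-%d %H:%M:%S'), None),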

  • Huge file in Clojure and Java heap space error

    - by trzewiczek
    I posted before about a huge XML file - it's a 287 GB XML Wikipedia dump that I want to put into a CSV file (revision authors and timestamps). I managed to do that up to some point. Before, I got a StackOverflow error, but now, after solving the first problem, I get a java.lang.OutOfMemoryError: Java heap space error. My code (partly taken from Justin Kramer's answer) looks like this:

        (defn process-pages [page]
          (let [title     (article-title page)
                revisions (filter #(= :revision (:tag %)) (:content page))]
            (for [revision revisions]
              (let [user (revision-user revision)
                    time (revision-timestamp revision)]
                (spit "files/data.csv"
                      (str "\"" time "\";\"" user "\";\"" title "\"\n")
                      :append true)))))

        (defn open-file [file-name]
          (let [rdr (BufferedReader. (FileReader. file-name))]
            (->> (:content (data.xml/parse rdr :coalescing false))
                 (filter #(= :page (:tag %)))
                 (map process-pages))))

    I don't show the article-title, revision-user and revision-timestamp functions, because they simply take data from a specific place in the page or revision hash. Could anyone help me with this - I'm really new to Clojure and don't get the problem.
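    Two things worth checking (the sketch below assumes the same helper functions): for is lazy, so the writes it wraps only happen when something fully realizes the sequence, and holding the parse tree's root while mapping over it keeps every processed page reachable. Processing eagerly with doseq avoids both:

        (defn process-file [file-name]
          (with-open [rdr (BufferedReader. (FileReader. file-name))]
            ;; doseq realizes the work as it goes and retains no sequence heads,
            ;; so already-written pages can be garbage collected
            (doseq [page     (filter #(= :page (:tag %))
                                     (:content (data.xml/parse rdr :coalescing false)))
                    revision (filter #(= :revision (:tag %)) (:content page))]
              (spit "files/data.csv"
                    (str "\"" (revision-timestamp revision) "\";\""
                         (revision-user revision) "\";\""
                         (article-title page) "\"\n")
                    :append true))))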

  • Neglect empty cells while refreshing

    - by Ashok Vardhan
    I have an Excel macro which refreshes the worksheet. However, if the file (in .csv format) with which the worksheet is being refreshed has empty cells, it shifts the data over from other columns and places it in the wrong columns. If I manually refresh the sheet, it works fine. I don't know how I can fix this. I just want my whole .csv file, including empty cells, to appear as-is in the worksheet. Any suggestions would be greatly helpful. The following is the macro code (we can assume that CurrPath and NewPath are set properly):

        With Worksheets("RawData1").QueryTables(1)
            .TextFilePromptOnRefresh = False
            .RefreshStyle = xlInsertDeleteCells
            .Connection = Application.Substitute(.Connection, CurrPath, NewPath)
            .Refresh
        End With
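    One knob worth checking (a sketch; this assumes a comma-delimited text query table): when consecutive delimiters are treated as one, empty cells collapse and everything shifts left. Setting the parse options explicitly before the refresh keeps empty fields as empty columns:

        With Worksheets("RawData1").QueryTables(1)
            .TextFileParseType = xlDelimited
            .TextFileCommaDelimiter = True
            .TextFileConsecutiveDelimiter = False   ' keep ",," as an empty cell
            .Refresh
        End With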

  • Magento: Syncing product/category translations from "Store 1 > German Store View" to "Store 2 > German Store View"

    - by mattalexx
    For two different store views using the same locale, it is easy to manage translations for the miscellaneous text that is stored in CSV files: it is just a question of configuring the locale correctly so that the right CSV files are used. But my client has entered a bunch of translations for products and categories in the admin by changing the scope to "Store 1 > German Store View" and setting the translations there. Now he has Store 2, also with a German store view. How does he keep "Store 1 > German Store View" and "Store 2 > German Store View" in sync?

  • Ruby: wait for system command to end

    - by Ignace
    Hey all, I'm converting an XLS to CSV with a system(command) in Ruby. After the conversion I process the CSV files, but the conversion is still running when the program wants to process the files, so at that moment they don't exist yet. Can someone tell me if it's possible to have Ruby wait the right amount of time for the system command to finish? Right now I'm using sleep 20, but if it ever takes longer, that won't be right, of course... Thanks!
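    Note that system() itself already blocks until the command it launched exits, so if the CSV is missing afterwards, the converter has probably detached from the process it starts. A sketch of a polling workaround (the command and file name are placeholders):

        require 'timeout'

        system("converter in.xls out.csv")          # blocks until the child exits
        Timeout.timeout(60) do                      # give up after 60 seconds
          sleep 0.5 until File.exist?("out.csv")    # wait for the detached writer
        end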

  • Writing to an already existing file using FileWriter in Java

    - by delo
    Is there any way I can write to an already existing file using FileWriter? For example, when the user clicks a submit button:

        FileWriter writer = new FileWriter("myfile.csv");
        writer.append("LastName");
        writer.append(',');
        writer.append("FirstName");
        writer.append('\n');

        writer.append(LastNameTextField.getText());
        writer.append(',');
        writer.append(FirstNameTextField.getText());

    I want to be able to write new data into the already existing myfile.csv without having to recreate a brand new one every time.
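    The two-argument FileWriter constructor opens the file in append mode, which sounds like what is wanted here; a minimal sketch:

        // true = append to the existing file instead of truncating it
        FileWriter writer = new FileWriter("myfile.csv", true);
        writer.append(LastNameTextField.getText());
        writer.append(',');
        writer.append(FirstNameTextField.getText());
        writer.append('\n');
        writer.close();   // flush and release the file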

  • Best way to gather, then import data into Drupal?

    - by Frank
    I am building my first database-driven website with Drupal, and I have a few questions. I am currently populating a Google Docs spreadsheet with all of the data I want to eventually be able to query from the website (after it's imported). Is this the best way to start? If not, what would you recommend? My plan is to populate the spreadsheet, then import it as a CSV into the MySQL database via CCK nodes. I've seen two ways to do this:

    http://drupal.org/node/133705 (importing data into CCK nodes)
    http://drupal.org/node/237574 (inserting data using a spreadsheet/CSV instead of SQL insert statements)

    Basically my question(s) is: what is the best way to gather, then import data into Drupal? Thanks in advance for any help and suggestions.

  • Converting old Mailer to Rails 3 (multipart/mixed)

    - by Oscar Del Ben
    I'm having some difficulties converting this old mailer API to Rails 3:

        content_type "multipart/mixed"

        part :content_type => "multipart/alternative" do |alt|
          alt.part "text/plain" do |p|
            p.body = render_message("summary_report.text.plain.erb",
              :message        => message.gsub(/<.br./, "\n"),
              :campaign       => campaign,
              :aggregate      => aggregate,
              :promo_messages => campaign.participating_promo_msgs)
          end
          alt.part "text/html" do |p|
            p.body = render_message("summary_report.text.html.erb",
              :message        => message,
              :campaign       => campaign,
              :aggregate      => aggregate,
              :promo_messages => campaign.participating_promo_msgs)
          end
        end

        if bounce_path
          attachment :content_type => "text/csv",
                     :body         => File.read(bounce_path),
                     :filename     => "rmo_bounced_emails.csv"
        end

        attachment :content_type => "application/pdf",
                   :body         => File.read(report_path),
                   :filename     => "rmo_report.pdf"

    In particular, I don't understand how to differentiate the different multipart options. Any ideas?
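    A sketch of how this usually maps onto the Rails 3 mailer API (method and variable names assumed): attachments are declared before mail(), which makes the message multipart/mixed, and the format block renders the .text.erb and .html.erb templates as the multipart/alternative section:

        def summary_report
          attachments["rmo_bounced_emails.csv"] = File.read(bounce_path) if bounce_path
          attachments["rmo_report.pdf"]         = File.read(report_path)

          mail(:to => recipient, :subject => "Summary report") do |format|
            format.text   # renders summary_report.text.erb
            format.html   # renders summary_report.html.erb
          end
        end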

  • How to query data from a password-protected HTTPS website

    - by Addie
    I'd like my application to query a CSV file from a secure website. I have no experience with web programming, so I'd appreciate detailed instructions. Currently I have the user log in to the site, query the CSV manually, and have my application load the file locally. I'd like to automate this by having the user enter his login information, authenticating him on the website, and querying the data. The application is written in C# .NET.

    The URL of the site is: https://www2.emidas.com/default.asp. I've tested the following code already and am able to access the file once the user has already authenticated himself and created a manual query:

        System.Net.WebClient Client = new WebClient();
        Stream strm = Client.OpenRead("https://www3.emidas.com/users/<username>/file.csv");

    Here is the request sent to the site for authentication. I've angle-bracketed the real userid and password:

        POST /pwdVal.asp HTTP/1.1
        Accept: image/jpeg, application/x-ms-application, image/gif, application/xaml+xml, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, application/x-shockwave-flash, */*
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; Tablet PC 2.0; OfficeLiveConnector.1.4; OfficeLivePatch.1.3; .NET4.0C; .NET4.0E)
        Content-Type: application/x-www-form-urlencoded
        Accept-Encoding: gzip, deflate
        Cookie: ASPSESSIONID<unsure if this data contained password info so removed>; ClientId=<username>
        Host: www3.emidas.com
        Content-Length: 36
        Connection: Keep-Alive
        Cache-Control: no-cache
        Accept-Language: en-US

        client_id=<username>&password=<password>
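    A sketch of the usual approach (placeholder credentials): perform the login POST yourself, keep the session cookie in a CookieContainer, and reuse it for the CSV request. WebClient has no cookie handling, so HttpWebRequest is used instead:

        var cookies = new CookieContainer();

        // 1) post the login form; the session cookie lands in 'cookies'
        var login = (HttpWebRequest)WebRequest.Create("https://www2.emidas.com/pwdVal.asp");
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = cookies;
        byte[] body = Encoding.UTF8.GetBytes("client_id=USER&password=PASS");
        using (var s = login.GetRequestStream()) s.Write(body, 0, body.Length);
        login.GetResponse().Close();

        // 2) request the CSV with the same cookie container
        var fileReq = (HttpWebRequest)WebRequest.Create("https://www3.emidas.com/users/USER/file.csv");
        fileReq.CookieContainer = cookies;
        using (var resp = fileReq.GetResponse())
        using (var reader = new StreamReader(resp.GetResponseStream()))
        {
            string csv = reader.ReadToEnd();
        }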

  • Gaining application/module context from a symfony task

    - by Martin Chatterton
    I have written a reporting suite, and I have a specific report that builds a CSV file. Serving this file via a browser on demand isn't an issue, but I need to be able to build this CSV file nightly, and email round a link to be able to download it. Essentially, I need to be able to replace a specific action with a symfony task, run via cron. So how do I gain application/module context from a symfony task? And secondly, how would I invoke the SwiftMailer library from a symfony task? I'm using symfony v1.4.4 and PHP v.5.2.13. Thanks in advance for your help.
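    A sketch of a symfony 1.4 task (class and option defaults assumed): the --application and --env options give the task a real application configuration, sfContext::createInstance() provides module-level services such as routing, and the task's getMailer() returns the Swift Mailer wrapper:

        class buildReportTask extends sfBaseTask
        {
          protected function configure()
          {
            $this->namespace = 'report';
            $this->name      = 'build-csv';
            $this->addOptions(array(
              new sfCommandOption('application', null, sfCommandOption::PARAMETER_REQUIRED, 'The application', 'frontend'),
              new sfCommandOption('env', null, sfCommandOption::PARAMETER_REQUIRED, 'The environment', 'prod'),
            ));
          }

          protected function execute($arguments = array(), $options = array())
          {
            // application context, e.g. for url_for()-style routing in the mail body
            $context = sfContext::createInstance($this->configuration);

            $message = $this->getMailer()->compose(
              'reports@example.com', 'team@example.com',
              'Nightly CSV report', 'Download link: ...'
            );
            $this->getMailer()->send($message);
          }
        }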

  • Ruby string encoding problem

    - by John Prideaux
    I've looked at the other Ruby/encoding-related posts but haven't been able to figure out why the following is not working. Likely just because I'm dense, but here's the situation.

    Using Ruby 1.9 on Windows. I have a set of CSV files that need some data appended to the end of each line. Whenever I run my script, the appended characters are gibberish. The input text appears to be IBM437 encoded, whereas the string I'm appending starts as US-ASCII. Nothing I've tried with respect to forcing encoding on the input strings or the append string seems to change the resulting output. I'm stumped. The current encoding version is simply the last one I tried:

        def append_salesperson(txt, salesperson)
          if txt.length > 2
            return txt.chomp.force_encoding('US-ASCII') + %(, "", "", "#{salesperson}")
          end
        end

        salespeople = Hash["fname", "Record Manager"]
        outfile = File.open("ActData.csv", "w:US-ASCII")
        salespeople.each do |filename, recordManager|
          infile = File.open("#{filename}.txt")
          infile.each do |line|
            outfile.puts append_salesperson(line, recordManager)
          end
          infile.close
        end
        outfile.close
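    One way out (a sketch): force_encoding only relabels the bytes, it never converts them. Declaring the input file's actual encoding plus a target encoding makes Ruby transcode each line on read, after which the appended text matches:

        # read IBM437 bytes, transcode each line to UTF-8 on the way in
        infile  = File.open("#{filename}.txt", "r:IBM437:UTF-8")
        outfile = File.open("ActData.csv", "w:UTF-8")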

  • Database schema for simple stats project

    - by Bubnoff
    Backdrop: I have a file hierarchy of CSV files for multiple locations, in folders named by the dates they cover - by month, specifically. Each CSV file in the folder is named after the location. E.g., the folder 2010-feb contains location1.csv and location2.csv. Each CSV file holds records like this:

        2010-06-28, 20:30:00 , 0
        2010-06-29, 08:30:00 , 0
        2010-06-29, 09:30:00 , 0
        2010-06-29, 10:30:00 , 0
        2010-06-29, 11:30:00 , 0

    Meaning of the record columns (column names): date, time, # of sessions. I have a Perl script that pulls the data from this mess, and originally I was going to store it as JSON files, but am thinking a database might be more appropriate long term - comparing year-to-year trends, fun stuff like that.

    Pt 2 - My question/problem: So I now have a REST service that coughs up JSON backed by a test database. My question is [I suck at db design]: how best to design a database backend for this? I am thinking the following tables would suffice and keep it simple:

        location: (PK) location_code, name
        session:  (PK) id, (FK) location_code, month, hour, num_sessions

    I need to be able to average sessions (plus min and max) for each hour across days of the week, in addition to days of the week in a given month or months. I've been using Perl hashes to do this and am trying to decide how best to implement it with a database. Do you think stored procedures should be used? As to the database, depending on info gathered here, it will be PostgreSQL or SQLite; if there is no compelling reason for PostgreSQL, I'll stick with SQLite.

    How and where should I compare the data to hours of operation? I am storing the hours of operation in a YAML file, and I currently 'match' the hour in the data to a hash from the YAML to do this. Would a database open up simpler methods? I am thinking I would do this comparison as I do now, then insert the data. It can be recalled with:

        SELECT hour, num_sessions FROM session WHERE location_code=LOC1

    Since only hours of operation are present, I do not need to worry about it. Should I calculate all results as I do now, then store them in a stats table for the different 'reports', rather than processing on demand? How would that look?

    Anyway... I ramble. Thanks for reading!
    Bubnoff
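    One possible shape for this (a sketch in SQLite syntax; note it stores one full timestamp per reading instead of separate month/hour columns, which keeps month and day-of-week grouping in plain SQL):

        CREATE TABLE location (
            location_code TEXT PRIMARY KEY,
            name          TEXT NOT NULL
        );

        CREATE TABLE session (
            id            INTEGER PRIMARY KEY,
            location_code TEXT NOT NULL REFERENCES location(location_code),
            taken_at      TEXT NOT NULL,      -- e.g. '2010-06-29 10:30:00'
            num_sessions  INTEGER NOT NULL
        );

        -- avg/min/max sessions per hour across days of the week
        SELECT strftime('%w', taken_at) AS day_of_week,
               strftime('%H', taken_at) AS hour,
               AVG(num_sessions), MIN(num_sessions), MAX(num_sessions)
        FROM session
        WHERE location_code = 'LOC1'
        GROUP BY day_of_week, hour;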

  • MySQL does not utilize my CPU and RAM enough?

    - by vick
    Hello everyone! I am importing a 2.5 GB CSV file into a MySQL table. My storage engine is InnoDB. Here is the script:

        use xxx;

        DROP TABLE IF EXISTS `xxx`.`xxx`;
        CREATE TABLE `xxx`.`xxx` (
            `xxx_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
            `name` varchar(128) NOT NULL,
            `yy` varchar(128) NOT NULL,
            `yyy` varchar(64) NOT NULL,
            `yyyy` varchar(2) NOT NULL,
            `yyyyy` varchar(10) NOT NULL,
            `url` varchar(64) NOT NULL,
            `p` varchar(10) NOT NULL,
            `pp` varchar(10) NOT NULL,
            `category` varchar(256) NOT NULL,
            `flag` varchar(4) NOT NULL,
            PRIMARY KEY (`xxx_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        set autocommit = 0;

        load data local infile '/home/xxx/raw.csv' into table company
        fields terminated by ',' optionally enclosed by '"'
        lines terminated by '\r\n'
        (name, yy, yyy, yyyy, yyyyy, url, p, pp, category, flag);

        commit;

    Why does my PC (Core i7 920 with 6 GB RAM) only consume 9% CPU power and 60% RAM when running these queries?
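    For what it's worth, a bulk load like this is normally disk-bound rather than CPU-bound, and MySQL executes one statement on one thread, so a single busy core on an i7 920 (eight logical cores) shows up as roughly 9-12% total CPU. A sketch of settings that commonly speed up InnoDB imports (standard MySQL options; sql_log_bin applies only if binary logging is enabled):

        SET unique_checks = 0;
        SET foreign_key_checks = 0;
        SET sql_log_bin = 0;        -- skip binlogging for the load, if acceptable

        load data local infile '/home/xxx/raw.csv' into table xxx.xxx
        fields terminated by ',' optionally enclosed by '"'
        lines terminated by '\r\n'
        (name, yy, yyy, yyyy, yyyyy, url, p, pp, category, flag);

        SET unique_checks = 1;
        SET foreign_key_checks = 1;
        commit;

        -- a larger innodb_buffer_pool_size (server option) also helps InnoDB loads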

  • No download dialog with FileResult

    - by majkinetor
    I am returning a FileResult from an action triggered by a form post event, but I can't get a download dialog. Instead, if I use:

        return File(Encoding.UTF8.GetBytes(reportPath), "text/plain", "Report.csv");

    I get the path to the file in the target div upon Ajax execution. When I use:

        return File(reportPath, "text/plain", "Report.csv");

    I get the content of the file in the target div. Any thoughts? The action is declared as:

        [HttpPost]
        public virtual ActionResult ExportFilter(Model model)
        {
            string outputFile = CreateReport(model);
            return File(....)
        }
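    The usual explanation: a response to an Ajax post is handed back to the page's script (hence the file contents landing in the target div), never to the browser's download manager. Letting the browser itself issue the request makes the Content-Disposition header trigger the dialog; a sketch with illustrative names:

        // submit the form without the Ajax wrapper...
        document.getElementById("exportForm").submit();
        // ...or navigate to a GET variant of the action
        window.location = "/Report/ExportFilter?filter=abc";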

  • write.table in R screws up the header when row names are present

    - by Yannick Wurm
    Hello, check this example:

        > a = matrix(1:9, nrow = 3, ncol = 3, dimnames = list(LETTERS[1:3], LETTERS[1:3]))
        > a
          A B C
        A 1 4 7
        B 2 5 8
        C 3 6 9

    The table displays correctly. There are two different ways of writing it to file...

        write.csv(a, 'a.csv')

    which gives, as expected:

        "","A","B","C"
        "A",1,4,7
        "B",2,5,8
        "C",3,6,9

    and

        write.table(a, 'a.txt')

    which screws up:

        "A" "B" "C"
        "A" 1 4 7
        "B" 2 5 8
        "C" 3 6 9

    Indeed, an empty field is missing from the header, which is a pain in the butt for downstream things. Is this a bug or a feature? Is there a workaround, other than write.table(cbind(rownames(a), a), 'a.txt', row.names=FALSE)?

    Cheers,
    yannick
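    The documented workaround is col.names = NA, which writes an empty leading header field above the row names; a one-line sketch:

        # header becomes: "" "A" "B" "C", aligned with the row names
        write.table(a, 'a.txt', col.names = NA)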

  • Is there a work-around that allows missing data to equal NULL for LOAD DATA INFILE in MySQL?

    - by richardh
    I have a lot of large CSV files with NULL values stored as ,, (i.e., no entry). After a lot of searching I found that this is a known "bug", although it may be a feature for some users. Is there a way I can fix this on the fly without pre-processing? These data are all numeric, so a zero value is very different from NULL. Or, if I have to pre-process, is there one approach that is most promising for dealing with tens of CSV files of 100 MB to 1 GB? Thanks!
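    LOAD DATA INFILE can do this on the fly by reading the column into a user variable and turning empty strings into NULL; a sketch with placeholder table and column names:

        LOAD DATA INFILE 'data.csv' INTO TABLE t
          FIELDS TERMINATED BY ','
          (col1, @raw2, col3)
          SET col2 = NULLIF(@raw2, '');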

  • I need to rename a file in VBA, but getting "File Not Found" when it's clearly there!

    - by Karl
    Help! I'm getting a "File Not Found" error when trying to rename a file with a variable. The variable is a String. I can look at the variable and it's the exact filename that is there, but when I run the code, it says not found!

        Dim filePath, fileName, absPath, newPath As String

        filePath = "P:\Automated\"
        fileName = MySite.GetResult
        absPath = filePath & fileName
        newPath = "P:\Automated\NEW.csv"

        ' The following is a rename from the CuteFTP Pro COM object
        ' (getting the same result from this and the Name statement below):
        ' MySite.LocalRename "P:\Automated\" & fileName, "P:\Automated\NEW.csv"

        Name absPath As newPath
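    Two things worth ruling out (a sketch below): in that Dim statement only newPath is actually typed as String (the others are Variants), and file names returned from COM calls often carry trailing spaces or CR/LF characters that make an otherwise-correct path "not found":

        Dim absPath As String, newPath As String

        ' strip stray CR/LF and spaces that a COM call may append
        absPath = "P:\Automated\" & Trim(Replace(Replace(MySite.GetResult, vbCr, ""), vbLf, ""))
        newPath = "P:\Automated\NEW.csv"

        If Dir(absPath) <> "" Then          ' verify the exact path VBA sees
            Name absPath As newPath
        Else
            MsgBox "Not found: [" & absPath & "]"
        End If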

  • Streaming data to the browser as a file of unknown size

    - by Sir Psycho
    I have some data which is queried from the database, and I'd like to send it to the client as a CSV file. The file size varies each time, because the DB data returned can be of any size. Instead of saving this file to the hard disk, I'd like to send it to the browser while it's being processed into CSV by my algorithm.

    Response.Write seems useless. For some reason, the file download dialog is only displayed once my processing is finished. This seems odd, as I'm writing all my output to the Response.Output stream. I have downloaded files on the web before where the file size is not known and the browser just keeps on downloading. Is there any way to achieve this? The following Stack Overflow thread did not offer any good advice:

    http://stackoverflow.com/questions/873995/asp-net-downloading-large-files-of-unknown-size

    Thanks
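    A sketch of the usual pattern (QueryRows and ToCsvLine are placeholder helpers): turn off response buffering, send the attachment headers, and flush after each chunk, so the download dialog appears as soon as the headers go out and no Content-Length is ever promised:

        Response.Clear();
        Response.BufferOutput = false;                 // stop ASP.NET buffering the whole response
        Response.ContentType = "text/csv";
        Response.AddHeader("Content-Disposition", "attachment; filename=export.csv");

        foreach (var row in QueryRows())               // placeholder: your DB reader
        {
            Response.Write(ToCsvLine(row));            // placeholder: your CSV formatter
            Response.Flush();                          // push this chunk to the client now
        }
        Response.End();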

  • Extract dates from filename

    - by Newbie
    I have a situation where I need to extract dates from file names whose general pattern is [filename_]YYYYMMDD[.fileExtension], e.g. "xxx_20100326.xls" or "x2v_20100326.csv". The below program does the work:

        // The number of characters in the substring is set to 8,
        // since the length of YYYYMMDD is 8
        public static string ExtractDatesFromFileNames(string fileName)
        {
            return fileName.Substring(fileName.IndexOf("_") + 1, 8);
        }

    Is there any better option for achieving the same? I am basically looking for standard practice. I am using C# 3.0 and .NET Framework 3.5.

    Edit: I liked LC's solution and way of answering. I have used his program like this:

        string regExPattern = "^(?:.*_)?([0-9]{4})([0-9]{2})([0-9]{2})(?:\\..*)?$";
        string result = Regex.Match(fileName, @regExPattern).Groups[1].Value;

    The input to the function is "x2v_20100326.csv", but the output is 2010 instead of 20100326 (the expected one). Can anyone please help?
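    Regarding the edit: that pattern captures the date as three separate groups, so Groups[1] holds only the year. Capturing all eight digits as a single group returns the full value; a sketch:

        string pattern = @"^(?:.*_)?(\d{8})(?:\..*)?$";
        string result = Regex.Match(fileName, pattern).Groups[1].Value;   // "20100326"

        // parsing also validates it as a real date (needs System.Globalization)
        DateTime date = DateTime.ParseExact(result, "yyyyMMdd", CultureInfo.InvariantCulture);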

  • R problems using rpart with 4000 records and 13 attributes

    - by josh
    I have attempted to email the author of this package without success; just wondering if anybody else has experienced this. I am having an issue using rpart on 4000 rows of data with 13 attributes. I can run the same test on 300 rows of the same data with no issue. When I run on 4000 rows, Rgui.exe runs consistently at 50% CPU and the UI hangs... it will stay like this for at least 4-5 hours if I let it run, and never exit or become responsive. Here is the code I am using on both the 300- and 4000-row subsets:

        train <- read.csv("input.csv", header = TRUE)
        y <- train[, 18]
        x <- train[, 3:17]
        library(rpart)
        fit <- rpart(y ~ ., x)

    Is this a known limitation of rpart? Am I doing something wrong? Potential workarounds? Any assistance appreciated.
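    One common culprit worth ruling out before blaming rpart (a sketch below): a response or predictor that is a character column, or a factor with very many levels, makes the split search explode combinatorially. Checking the column types and being explicit about the method is a cheap first step:

        str(train[, c(18, 3:17)])   # look for unexpected character/factor columns

        d <- data.frame(y = train[, 18], train[, 3:17])
        fit <- rpart(y ~ ., data = d, method = "class")   # or "anova" for numeric y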

  • Can I use LINQ to project a new typed DataRow?

    - by itchi
    I currently have a CSV file that I'm parsing with an example from here: http://alexreg.wordpress.com/2009/05/03/strongly-typed-csv-reader-in-c/

    I then want to loop over the records and insert them, using a typed dataset (XSD), into an Oracle database. It's not that difficult, something like:

        foreach (var csvItem in csvfile)
        {
            DataSet.MYTABLEDataTable DT = new DataSet.MYTABLEDataTable();
            DataSet.MYTABLERow row = DT.NewMYTABLERow();
            row.FIELD1 = csvItem.FIELD1;
            row.FIELD2 = csvItem.FIELD2;
        }

    I was wondering how I would do something with LINQ projection:

        var test = from csvItem in csvfile
                   select new MYTABLERow
                   {
                       FIELD1 = csvItem.FIELD1,
                       FIELD2 = csvItem.FIELD2
                   };

    But I don't think I can create DataRows like this without the use of a row builder, or maybe a better constructor for the DataRow?
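    Typed DataRows have no public constructor, so an object initializer with new will not compile; the projection can instead go through the table's generated factory method inside a statement lambda. A sketch reusing the question's generated names:

        var DT = new DataSet.MYTABLEDataTable();

        var rows = csvfile.Select(csvItem =>
        {
            var row = DT.NewMYTABLERow();   // generated factory on the typed table
            row.FIELD1 = csvItem.FIELD1;
            row.FIELD2 = csvItem.FIELD2;
            return row;
        });

        foreach (var row in rows)
            DT.AddMYTABLERow(row);          // generated adder attaches the row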

  • Design function: underlying structure to store a list of results for printing to file

    - by forest.peterson
    Is this a good approach for printing a list of items to a CSV file, with a sublist attached to each item? The gist of the function: when an item is found that does not exactly match, a list of close matches is generated. This works now, writing out one list at a time to a command window. For export to a CSV file, I think all the lists must be generated, stored, and then written at once. Right now I use a struct to store the attributes printed; each struct is an item on the list, and these structs are added to a sorted stack. When printed, they pop off into the write-out. Is a stack of stacks of structs a good design?
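    For comparison, a minimal sketch (C#, names invented) of the flat alternative: one list of result objects, each owning its close matches, collected first and written to CSV in a single pass:

        class MatchResult
        {
            public string Item;
            public List<string> CloseMatches = new List<string>();
        }

        static void WriteResults(IEnumerable<MatchResult> results, string path)
        {
            using (var w = new StreamWriter(path))
            {
                foreach (var r in results)
                {
                    w.WriteLine(r.Item);
                    foreach (var m in r.CloseMatches)
                        w.WriteLine("," + m);   // sublist rows shifted one column right
                }
            }
        }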

  • ob_start() -> ob_flush() doesn't work

    - by MB34
    I am using ob_start()/ob_flush() to, hopefully, give me some progress during a long import operation. Here is a simple outline of what I'm doing:

        <?php
        ob_start();

        echo "Connecting to download Inventory file.<br>";
        $conn = ftp_connect($ftp_site) or die("Could not connect");

        echo "Logging into site download Inventory file.<br>";
        ftp_login($conn, $ftp_username, $ftp_password) or die("Bad login credentials for " . $ftp_site);

        echo "Changing directory on download Inventory file.<br>";
        ftp_chdir($conn, "INV") or die("could not change directory to INV");

        // connection, local, remote, type, resume
        $localname = "INV" . "_" . date("m") . "_" . date('d') . ".csv";
        echo "Downloading Inventory file to: " . $localname . "<br>";
        ob_flush();
        flush();
        sleep(5);

        if (ftp_get($conn, $localname, "INV.csv", FTP_ASCII)) {
            echo "New Inventory File Downloaded<br>";
            $datapath = $localname;
            ftp_close($conn);
        } else {
            ftp_close($conn);
            die("There was a problem downloading the Inventory file.");
        }

        ob_flush();
        flush();
        sleep(5);

        $csvfile = fopen($datapath, "r"); // open csv file
        $x = 1; // skip the header line
        $line = fgetcsv($csvfile);
        $y = (feof($csvfile) ? 2 : 5);

        while ((!$debug) ? (!feof($csvfile)) : $x <= $y) {
            $x++;
            $line = fgetcsv($csvfile);

            // do a lot of import stuff here with $line

            ob_flush();
            flush();
            sleep(1);
        }

        fclose($csvfile); // important: close the file
        ob_end_clean();

    However, nothing is being output to the screen at all. I know the data file is getting downloaded, because I watch the directory where it is being placed. I also know the import is happening (meaning the script is in the while loop), because I can monitor the DB and see records being inserted. Any ideas as to why I am not getting output to the screen?
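    Beyond the script itself, output often sits in a server-side or browser-side buffer that ob_flush()/flush() cannot reach. A sketch of the usual knobs to check (output_buffering itself can only be changed in php.ini or .htaccess, not at runtime):

        ini_set('zlib.output_compression', 'Off');  // compressed responses are buffered whole
        ini_set('implicit_flush', '1');
        while (ob_get_level() > 0) {
            ob_end_flush();                         // unwind any pre-existing buffers
        }

        // some browsers wait for a minimum number of bytes before rendering,
        // so padding the first flush can help
        echo str_pad('', 1024, ' ');
        flush();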
