Search Results

Search found 1639 results on 66 pages for 'csv'.


  • SQLPlus - spooling to multiple files from PL/SQL blocks

    - by FrustratedWithFormsDesigner
    I have a query that returns a lot of data into a CSV file. So much, in fact, that Excel can't open it - there are too many rows. Is there a way to control spool to spool to a new file every time 65000 rows have been processed? Ideally, I'd like to have my output in files named in sequence, such as large_data_1.csv, large_data_2.csv, large_data_3.csv, etc. I could use dbms_output in a PL/SQL block to control how many rows are output, but then how would I switch files, as spool does not seem to be accessible from PL/SQL blocks? (Oracle 10g)
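
    One workaround, independent of SQL*Plus, is to spool everything into a single file and split it afterwards. A minimal Python sketch of the splitting step (file names and the chunk size are illustrative, not from the question):

        import csv

        CHUNK = 65000  # rows per output file, small enough for Excel

        with open("large_data.csv") as src:
            out, writer = None, None
            for i, row in enumerate(csv.reader(src)):
                if i % CHUNK == 0:            # time to start the next numbered file
                    if out:
                        out.close()
                    out = open("large_data_%d.csv" % (i // CHUNK + 1), "w", newline="")
                    writer = csv.writer(out)
                writer.writerow(row)
            if out:
                out.close()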

  • Beginner to RUBY - require_relative problem

    - by WANNABE
    Hi, I'm learning Ruby (using version 1.8.6) on Windows 7. When I try to run the stock_stats.rb program below, I get the following error:

        C:\Users\Will\Desktop\ruby>ruby stock_stats.rb
        stock_stats.rb:1: undefined method `require_relative' for main:Object (NoMethodError)

    I have three v.small code files:

    stock_stats.rb

        require_relative 'csv_reader'

        reader = CsvReader.new

        ARGV.each do |csv_file_name|
          STDERR.puts "Processing #{csv_file_name}"
          reader.read_in_csv_data(csv_file_name)
        end

        puts "Total value = #{reader.total_value_in_stock}"

    csv_reader.rb

        require 'csv'
        require_relative 'book_in_stock'

        class CsvReader
          def initialize
            @books_in_stock = []
          end

          def read_in_csv_data(csv_file_name)
            CSV.foreach(csv_file_name, headers: true) do |row|
              @books_in_stock << BookInStock.new(row["ISBN"], row["Amount"])
            end
          end

          # later we'll see how to use inject to sum a collection
          def total_value_in_stock
            sum = 0.0
            @books_in_stock.each {|book| sum += book.price}
            sum
          end

          def number_of_each_isbn
            # ...
          end
        end

    book_in_stock.rb

        require 'csv'
        require_relative 'book_in_stock'

        class CsvReader
          def initialize
            @books_in_stock = []
          end

          def read_in_csv_data(csv_file_name)
            CSV.foreach(csv_file_name, headers: true) do |row|
              @books_in_stock << BookInStock.new(row["ISBN"], row["Amount"])
            end
          end

          # later we'll see how to use inject to sum a collection
          def total_value_in_stock
            sum = 0.0
            @books_in_stock.each {|book| sum += book.price}
            sum
          end

          def number_of_each_isbn
            # ...
          end
        end

    Thanks in advance for any help.

  • Python: How to read huge text file into memory

    - by asmaier
    I'm using Python 2.6 on a Mac Mini with 1GB RAM. I want to read in a huge text file:

        $ ls -l links.csv; file links.csv; tail links.csv
        -rw-r--r--  1 user  user  469904280 30 Nov 22:42 links.csv
        links.csv: ASCII text, with CRLF line terminators
        4757187,59883
        4757187,99822
        4757187,66546
        4757187,638452
        4757187,4627959
        4757187,312826
        4757187,6143
        4757187,6141
        4757187,3081726
        4757187,58197

    So each line in the file consists of a tuple of two comma-separated integer values. I want to read in the whole file and sort it according to the second column. I know that I could do the sorting without reading the whole file into memory. But I thought for a file of 500MB I should still be able to do it in memory, since I have 1GB available.

    However, when I try to read in the file, Python seems to allocate a lot more memory than is needed by the file on disk. So even with 1GB of RAM I'm not able to read the 500MB file into memory. My Python code for reading the file and printing some information about the memory consumption is:

        #!/usr/bin/python
        # -*- coding: utf-8 -*-

        import sys

        infile = open("links.csv", "r")
        edges = []
        count = 0

        # count the total number of lines in the file
        for line in infile:
            count = count + 1
        total = count
        print "Total number of lines: ", total

        infile.seek(0)
        count = 0
        for line in infile:
            edge = tuple(map(int, line.strip().split(",")))
            edges.append(edge)
            count = count + 1
            # for every million lines print memory consumption
            if count % 1000000 == 0:
                print "Position: ", edge
                print "Read ", float(count)/float(total)*100, "%."
                mem = sys.getsizeof(edges)
                for edge in edges:
                    mem = mem + sys.getsizeof(edge)
                    for node in edge:
                        mem = mem + sys.getsizeof(node)
                print "Memory (Bytes): ", mem

    The output I got was:

        Total number of lines:  30609720
        Position:  (9745, 2994)
        Read  3.26693612356 %.
        Memory (Bytes):  64348736
        Position:  (38857, 103574)
        Read  6.53387224712 %.
        Memory (Bytes):  128816320
        Position:  (83609, 63498)
        Read  9.80080837067 %.
        Memory (Bytes):  192553000
        Position:  (139692, 1078610)
        Read  13.0677444942 %.
        Memory (Bytes):  257873392
        Position:  (205067, 153705)
        Read  16.3346806178 %.
        Memory (Bytes):  320107588
        Position:  (283371, 253064)
        Read  19.6016167413 %.
        Memory (Bytes):  385448716
        Position:  (354601, 377328)
        Read  22.8685528649 %.
        Memory (Bytes):  448629828
        Position:  (441109, 3024112)
        Read  26.1354889885 %.
        Memory (Bytes):  512208580

    Already after reading only 25% of the 500MB file, Python consumes 500MB. So it seems that storing the content of the file as a list of tuples of ints is not very memory efficient. Is there a better way to do it, so that I can read my 500MB file into my 1GB of memory?
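
    Each tuple of two Python ints costs well over 100 bytes of object overhead, which is where the memory goes. One memory-frugal alternative, sketched here assuming NumPy is available on that machine, keeps the numbers in a single contiguous typed array (16 bytes per row) and sorts there:

        import numpy as np

        # two int64 columns stored contiguously, no per-row Python objects
        edges = np.loadtxt("links.csv", dtype=np.int64, delimiter=",")
        edges = edges[edges[:, 1].argsort()]   # sort rows by the second column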

  • VBScript Out of String space

    - by MalsiaPro
    I got the following code to capture information for files on a specified drive. I ran the script against a 600 GB hard drive on one of our servers, and after a while I get the error:

        Out of String space; "Join"
        Line 34, Char 2

    For this code, file script.vbs:

        Option Explicit

        Dim objFS, objFld
        Dim objArgs
        Dim strFolder, strDestFile, blnRecursiveSearch
        ''Dim strLines
        Dim strCsv
        ''Dim i
        '' i = 0

        ' 'Get the commandline parameters
        ' Set objArgs = WScript.Arguments
        ' strFolder = objArgs(0)
        ' strDestFile = objArgs(1)
        ' blnRecursiveSearch = objArgs(2)

        '########################################
        'SPECIFY THE DRIVE YOU WANT TO SCAN BELOW
        '########################################
        strFolder = "C:\"
        strDestFile = "C:\InformationOutput.csv"
        blnRecursiveSearch = True

        'Create the FileSystemObject
        Set objFS = CreateObject("Scripting.FileSystemObject")
        'Get the directory you are working in
        Set objFld = objFS.GetFolder(strFolder)

        'Open the csv file
        Set strCsv = objFS.CreateTextFile(strDestFile, True)
        '' 'Write the csv file
        '' Set strCsv = objFS.CreateTextFile(strDestFile, True)
        strCsv.WriteLine "File Path,File Size,Date Created,Date Last Modified,Date Last Accessed"
        '' strCsv.Write Join(strLines, vbCrLf)

        'Now get the file details
        GetFileDetails objFld, blnRecursiveSearch

        '' 'Close and cleanup objects
        '' strCsv.Close
        '' 'Write the csv file
        '' Set strCsv = objFS.CreateTextFile(strDestFile, True)
        '' For i = 0 to UBound(strLines)
        ''   strCsv.WriteLine strLines(i)
        '' Next

        'Close and cleanup objects
        strCsv.Close
        Set strCsv = Nothing
        Set objFld = Nothing
        Set strFolder = Nothing
        Set objArgs = Nothing

        '---------------------------SCAN SPECIFIED LOCATION-------------------------------
        Private Sub GetFileDetails(fold, blnRecursive)
            Dim fld, fil
            Dim strLine(4)
            On Error Resume Next
            If InStr(fold.Path, "System Volume Information") < 1 Then
                If blnRecursive Then
                    'Work through all the folders and subfolders
                    For Each fld In fold.SubFolders
                        GetFileDetails fld, True
                        If err.number <> 0 Then
                            LogError err.Description & vbCrLf & "Folder - " & fold.Path
                            err.Clear
                        End If
                    Next
                End If
                'Now work on the files
                For Each fil In fold.Files
                    strLine(0) = fil.Path
                    strLine(1) = fil.Size
                    strLine(2) = fil.DateCreated
                    strLine(3) = fil.DateLastModified
                    strLine(4) = fil.DateLastAccessed
                    strCsv.WriteLine Join(strLine, ",")
                    If err.number <> 0 Then
                        LogError err.Description & vbCrLf & "Folder - " & fold.Path & vbCrLf & "File - " & fil.Name
                        err.Clear
                    End If
                Next
            End If
        End Sub

        Private Sub LogError(strError)
            Dim strErr
            'Write the csv file
            Set strErr = objFS.CreateTextFile("C:\test\err.log", False)
            strErr.WriteLine strError
            strErr.Close
            Set strErr = Nothing
        End Sub

    RunMe.cmd:

        wscript.exe "C:\temp\script\script.vbs"

    How can I avoid getting this error? The server drives are quite a bit <???? and I would imagine that the CSV file would be at least 40 MB. Edit by Guffa: I commented out some lines in the code, using double ticks ('') so you can see where.
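
    The "Out of String space" came from accumulating every row and Join-ing them into one giant string (the lines Guffa commented out); writing each row as it is discovered, as the active WriteLine code now does, keeps memory flat. For comparison, the same streaming pattern sketched in Python (paths are illustrative):

        import csv
        import os

        with open("InformationOutput.csv", "w", newline="") as out:
            w = csv.writer(out)
            w.writerow(["File Path", "File Size", "Date Created",
                        "Date Last Modified", "Date Last Accessed"])
            for root, dirs, files in os.walk("C:\\"):
                if "System Volume Information" in root:
                    dirs[:] = []      # don't descend into it
                    continue
                for name in files:
                    path = os.path.join(root, name)
                    try:
                        st = os.stat(path)
                        # one row written per file; nothing accumulates in memory
                        w.writerow([path, st.st_size, st.st_ctime, st.st_mtime, st.st_atime])
                    except OSError:
                        pass          # unreadable file: skip it, like the On Error handler above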

  • SQLiteDataAdapter Fill exception C# ADO.NET

    - by Lirik
    I'm trying to use the OleDb CSV parser to load some data from a CSV file and insert it into a SQLite database, but I get an exception from the OleDbDataAdapter.Fill method and it's frustrating:

        An unhandled exception of type 'System.Data.ConstraintException' occurred in System.Data.dll
        Additional information: Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.

    Here is the source code:

        public void InsertData(String csvFileName, String tableName)
        {
            String dir = Path.GetDirectoryName(csvFileName);
            String name = Path.GetFileName(csvFileName);

            using (OleDbConnection conn = new OleDbConnection(
                "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + dir +
                @";Extended Properties=""Text;HDR=No;FMT=Delimited"""))
            {
                conn.Open();
                using (OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * FROM " + name, conn))
                {
                    QuoteDataSet ds = new QuoteDataSet();
                    adapter.Fill(ds, tableName);  // <-- Exception here
                    InsertData(ds, tableName);    // <-- Inserts the data into my SQLite db
                }
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                SQLiteDatabase target = new SQLiteDatabase();
                string csvFileName = "D:\\Innovations\\Finch\\dev\\DataFeed\\YahooTagsInfo.csv";
                string tableName = "Tags";
                target.InsertData(csvFileName, tableName);
                Console.ReadKey();
            }
        }

    The "YahooTagsInfo.csv" file looks like this:

        tagId,tagName,description,colName,dataType,realTime
        1,s,Symbol,symbol,VARCHAR,FALSE
        2,c8,After Hours Change,afterhours,DOUBLE,TRUE
        3,g3,Annualized Gain,annualizedGain,DOUBLE,FALSE
        4,a,Ask,ask,DOUBLE,FALSE
        5,a5,Ask Size,askSize,DOUBLE,FALSE
        6,a2,Average Daily Volume,avgDailyVolume,DOUBLE,FALSE
        7,b,Bid,bid,DOUBLE,FALSE
        8,b6,Bid Size,bidSize,DOUBLE,FALSE
        9,b4,Book Value,bookValue,DOUBLE,FALSE

    I've tried the following:

    1. Removing the first line in the CSV file so it doesn't get confused for real data.
    2. Changing the TRUE/FALSE realTime flag to 1/0.
    3. Both 1 and 2 together (i.e. removed the first line and changed the flag).

    None of these things helped. One constraint is that the tagId is supposed to be unique. Here is what the table looks like in design view: [screenshot not included]. Can anybody help me figure out what the problem is here?
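
    One way to sidestep the Jet text driver and its type guessing entirely is to load the CSV straight into SQLite. A sketch of that load in Python (the table schema is an assumption based on the CSV columns above):

        import csv
        import sqlite3

        conn = sqlite3.connect("quotes.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS Tags (
                            tagId INTEGER PRIMARY KEY, tagName TEXT, description TEXT,
                            colName TEXT, dataType TEXT, realTime TEXT)""")
        with open("YahooTagsInfo.csv", newline="") as f:
            conn.executemany(
                "INSERT INTO Tags VALUES (:tagId, :tagName, :description, :colName, :dataType, :realTime)",
                csv.DictReader(f))  # the header line supplies the keys, so it is never inserted as data
        conn.commit()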

  • ggplot2 heatmap: how to preserve the label order?

    - by Tg
    I'm trying to plot a heatmap in ggplot2 using csv data, following casbon's solution in http://biostar.stackexchange.com/questions/921/how-to-draw-a-csv-data-file-as-a-heatmap-using-numpy-and-matplotlib. The problem is that the x-labels try to re-sort themselves. For example, if I swap labels COG0002 and COG0001 in that example data, the x-labels still come out in sorted order (cog0001, cog0002, cog0003, ..., cog0008). Is there any way to prevent this? I want the labels ordered as in the csv file. Thanks, pp

  • How to print a Perl 2-dimensional array?

    - by Matt Pascoe
    I am trying to write a simple Perl script that reads a *.csv, places the rows of the *.csv file in a two-dimensional array, and then prints one item out of the array and then prints a row of the array.

        #!/usr/bin/perl
        use strict;
        use warnings;

        open(CSV, $ARGV[0]) || die("Cannot open the $ARGV[0] file: $!");

        my @row;
        my @table;
        while(<CSV>) {
            @row = split(/\s*,\s*/, $_);
            push(@table, @row);
        }
        close CSV || die $!;

        foreach my $element ( @{ $table[0] } ) {
            print $element, "\n";
        }
        print "$table[0][1]\n";

    When I run this script I receive the following error and nothing prints:

        Can't use string ("1") as an ARRAY ref while "strict refs" in use at ./scripts.pl line 16.

    I have looked in a number of other forums and am still not sure how to fix this issue. Can anyone help me fix this script?
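
    The error itself comes from push(@table, @row), which flattens the row's scalars into @table instead of storing a row reference (push(@table, [@row]) would nest it). The same distinction exists between Python's extend and append, which may make it easier to see; a tiny sketch:

        table = []
        row = ["1", "2", "3"]

        table.extend(row)     # flattens: table == ["1", "2", "3"], like push(@table, @row)
        table = []
        table.append(row)     # nests: table == [["1", "2", "3"]], like push(@table, [@row])
        print(table[0][1])    # -> 2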

  • Showing renames in hg status?

    - by Ryan Thompson
    I know that Mercurial can track renames of files, but how do I get it to show me renames instead of adds/removes when I do hg status? For instance, instead of:

        A bin/extract-csv-column.pl
        A bin/find-mirna-binding.pl
        A bin/xls2csv-separate-sheets.pl
        A lib/Text/CSV/Euclid.pm
        R src/extract-csv-column.pl
        R src/find-mirna-binding.pl
        R src/modules/Text/CSV/Euclid.pm
        R src/xls2csv-separate-sheets.pl

    I want some indication that four files have been moved. I think I read somewhere that the output is like this to preserve backward-compatibility with something-or-other, but I'm not worried about that.
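
    For reference, hg status does have a flag for exactly this: -C (long form --copies) prints the rename/copy source on an extra line beneath each added file, so the four moves show up as four A/source pairs:

        hg status -C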

  • loading data into rails application

    - by ash34
    Hi, I have a list of rows in an Excel sheet which I need to load as historical transactions into a table in my Rails application. I saved the Excel file as a csv file. I tried using CSV from the standard library but keep getting the following exception: CSV::IllegalFormatError. I am not sure how to even figure out where the problem lies. Any suggestions on how to do this? thanks, ash

  • Groovy Grails, How do you stream or buffer a large file in a Controller's response?

    - by Julian Noye
    Hi Guys, I have a controller that makes a connection to a url to retrieve a csv file. I am able to send the file in the response using the following code, and this works fine:

        def fileURL = "www.mysite.com/input.csv"
        def thisUrl = new URL(fileURL);
        def connection = thisUrl.openConnection();
        def output = connection.content.text;
        response.setHeader "Content-disposition", "attachment; filename=${'output.csv'}"
        response.contentType = 'text/csv'
        response.outputStream << output
        response.outputStream.flush()

    However, I don't think this method is appropriate for a large file, as the whole file is loaded into the controller's memory. I want to be able to read the file chunk by chunk and write the file to the response chunk by chunk. Any ideas?
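
    The chunk-by-chunk pattern is the same in most languages: copy from the URL's input stream to the output through a small fixed-size buffer, so only one buffer's worth of data is ever in memory. A sketch of that loop in Python (a local file stands in here for the servlet response stream; the URL and buffer size are illustrative):

        import shutil
        from urllib.request import urlopen

        with urlopen("http://www.mysite.com/input.csv") as src, open("output.csv", "wb") as dst:
            shutil.copyfileobj(src, dst, length=64 * 1024)  # 64 KB per read; memory use stays flat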

  • possible bug in geom_ribbon

    - by tomw
    I was hoping to plot two time series and shade the space between the series according to which series is larger at that time. Here are the two series; first, in a data frame with an indicator for whichever series is larger at that time:

        d1 <- read.csv("https://dl.dropbox.com/s/0txm3f70msd3nm6/ribbon%20data.csv?dl=1")

    And this is the melted series:

        d2 <- read.csv("https://dl.dropbox.com/s/6ohwmtkhpsutpig/melted%20ribbon%20data.csv?dl=1")

    which I plot:

        ggplot() +
          geom_line(data = d2, aes(x = time, y = value, group = variable, color = variable)) +
          geom_hline(yintercept = 0, linetype = 2) +
          geom_ribbon(data = d1[d1$big == "B",], aes(x = time, ymin = csa, ymax = csb),
                      alpha = .25, fill = "#9999CC") +
          geom_ribbon(data = d1[d1$big == "A",], aes(x = time, ymin = csb, ymax = csa),
                      alpha = .25, fill = "#CC6666") +
          scale_color_manual(values = c("#CC6666", "#9999CC"))

    which results in the plot below (image not included). Why is there a superfluous blue band in the middle of the plot?

  • Python creating a dictionary and swapping these into another file

    - by satsurae
    Hi all, I have two tab delimited .csv files. From one.csv I have created a dictionary which looks like:

        'EB2430': ' "\t"idnD "\t"yjgV "\t"b4267 "\n',
        'EB3128': ' "\t"yagE "\t\t"b0268 "\n',
        'EB3945': ' "\t"maeB "\t"ypfF "\t"b2463 "\n',
        'EB3944': ' "\t"eutS "\t"ypfE "\t"b2462 "\n',

    I would like to insert the value of the dictionary into the second.csv file, which looks like:

        "EB2430" 36.81 364 222 4 72 430 101 461 1.00E-063 237
        "EB3128" 26.04 169 108 6 42 206 17 172 6.00E-006 45.8
        "EB3945" 20.6 233 162 6 106 333 33 247 6.00E-005 42.4
        "EB3944" 19.07 367 284 6 1 355 1 366 2.00E-023 103

    with a resultant output, tab delimited:

        'EB2430' idnD yjgV b4267 36.81 364 222 4 72 430 101 461 1.00E-063 237
        'EB3128' yagE b0268 26.04 169 108 6 42 206 17 172 6.00E-006 45.8
        'EB3945' maeB ypfF b2463 20.6 233 162 6 106 333 33 247 6.00E-005 42.4
        'EB3944' eutS ypfE b2462 19.07 367 284 6 1 355 1 366 2.00E-023 103

    Here is my code for creating the dictionary:

        f = open("one.csv", "r")
        g = open("second.csv", "r")

        eb = []
        desc = []
        di = {}

        for line in f:
            for row in f:
                eb.append(row[1:7])
                desc.append(row[7:])
        di = dict(zip(eb, desc))

    Sorry for it being so long-winded!! I've not been programming for long. Cheers! Sat
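
    For the merge itself, one workable shape is: build the id-to-fields dictionary from one.csv, then stream second.csv and splice the fields in right after the id. A sketch with the csv module (assuming both files really are tab-delimited, as described; the output name is illustrative):

        import csv

        ann = {}   # EB id -> annotation fields from one.csv
        with open("one.csv") as f:
            for row in csv.reader(f, delimiter="\t"):
                ann[row[0]] = row[1:]

        with open("second.csv") as g, open("merged.csv", "w", newline="") as out:
            writer = csv.writer(out, delimiter="\t")
            for row in csv.reader(g, delimiter="\t"):
                key = row[0].strip('"')   # ids are quoted in second.csv
                writer.writerow([row[0]] + ann.get(key, []) + row[1:])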

  • How do you tell if two wildcards overlap?

    - by Tom Ritter
    Given two strings with * wildcards, I would like to know if a string could be created that would match both. For example, these two are a simple case of overlap:

        Hello*World
        Hel*

    But so are all of these:

        *.csv
        reports*.csv
        reportsdump.csv

    Is there an algorithm published for doing this? Or perhaps a utility function in Windows or a library I might be able to call or copy?
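
    One way to decide this is to view each pattern as a tiny automaton (a literal consumes exactly itself; * loops on any character) and breadth-first search the product of the two automata for a state where both patterns are simultaneously exhausted. A Python sketch of that search:

        from collections import deque

        def patterns_intersect(p1, p2):
            """True if some string matches both '*' wildcard patterns."""
            start, goal = (0, 0), (len(p1), len(p2))
            seen, queue = {start}, deque([start])
            while queue:
                i, j = queue.popleft()
                if (i, j) == goal:
                    return True
                moves = []
                # '*' may match the empty string: step past it in either pattern
                if i < len(p1) and p1[i] == '*':
                    moves.append((i + 1, j))
                if j < len(p2) and p2[j] == '*':
                    moves.append((i, j + 1))
                # consume one character accepted by both patterns at once
                if i < len(p1) and j < len(p2):
                    a, b = p1[i], p2[j]
                    if a != '*' and b != '*' and a == b:
                        moves.append((i + 1, j + 1))
                    elif a == '*' and b != '*':
                        moves.append((i, j + 1))   # p1's star absorbs b
                    elif a != '*' and b == '*':
                        moves.append((i + 1, j))   # p2's star absorbs a
                for m in moves:
                    if m not in seen:
                        seen.add(m)
                        queue.append(m)
            return False

        # e.g. patterns_intersect("Hello*World", "Hel*")       -> True
        #      patterns_intersect("*.csv", "reportsdump.csv")  -> True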

  • Faking user interaction on CMS

    - by Leocer
    I'm working with a CMS and need to import data to it using typical html forms. The data itself is in csv files, with one page per row. Such is the CMS that importing directly to the db isn't possible due to the complexity of the design. It's pretty important that I "fake" usual user interaction, because the CMS does a lot of background work that's crucial for the import. Basically, for each row in the csv file, I need to copy a csv column to an html textfield, or select a checkbox, or click a certain button. One major issue is mapping the data in the csv to actions in the CMS. So if one column contains the string 'foobar', it really means "set the firstName dropdown widget to 'foobar'". Is there a tool to automate this? I've been looking at AutoHotKey, Selenium, Web-Harvester and many other tools, but I'm not convinced they are the correct tools. The main problem is being able to interact with the html pages in an easy way.
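
    Of the tools named, Selenium is probably the closest fit for the "fake a real user" requirement, since it drives an actual browser and fires the same events a person would. A rough sketch of replaying one csv row through a form with the Python bindings (the URL, field names, and csv columns are invented for illustration):

        import csv
        from selenium import webdriver
        from selenium.webdriver.common.by import By
        from selenium.webdriver.support.ui import Select

        driver = webdriver.Firefox()
        with open("pages.csv", newline="") as f:
            for row in csv.DictReader(f):
                driver.get("http://cms.example.com/admin/new-page")  # hypothetical form URL
                # the column -> widget mapping lives here, e.g. 'foobar' -> firstName dropdown
                Select(driver.find_element(By.NAME, "firstName")).select_by_visible_text(row["name"])
                driver.find_element(By.NAME, "title").send_keys(row["title"])
                driver.find_element(By.NAME, "save").click()
        driver.quit()

    Because the browser session is real, the CMS's JavaScript and server-side hooks run exactly as they would for a human operator.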

  • Bad performance function in PHP. With large files memory blows up! How can I refactor?

    - by André
    Hi, I have a function that strips out lines from files. I'm handling large files (more than 100MB). I have the PHP memory limit at 256MB, but the function that handles the stripping out of lines blows up with a 100MB CSV file. What the function must do is this: originally I have the CSV like:

        Copyright (c) 2007 MaxMind LLC.  All Rights Reserved.
        locId,country,region,city,postalCode,latitude,longitude,metroCode,areaCode
        1,"O1","","","",0.0000,0.0000,,
        2,"AP","","","",35.0000,105.0000,,
        3,"EU","","","",47.0000,8.0000,,
        4,"AD","","","",42.5000,1.5000,,
        5,"AE","","","",24.0000,54.0000,,
        6,"AF","","","",33.0000,65.0000,,
        7,"AG","","","",17.0500,-61.8000,,
        8,"AI","","","",18.2500,-63.1667,,
        9,"AL","","","",41.0000,20.0000,,

    When I pass the CSV file to this function I get:

        locId,country,region,city,postalCode,latitude,longitude,metroCode,areaCode
        1,"O1","","","",0.0000,0.0000,,
        2,"AP","","","",35.0000,105.0000,,
        3,"EU","","","",47.0000,8.0000,,
        4,"AD","","","",42.5000,1.5000,,
        5,"AE","","","",24.0000,54.0000,,
        6,"AF","","","",33.0000,65.0000,,
        7,"AG","","","",17.0500,-61.8000,,
        8,"AI","","","",18.2500,-63.1667,,
        9,"AL","","","",41.0000,20.0000,,

    It only strips out the first line, nothing more. The problem is the performance of this function with large files; it blows up the memory. The function is:

        public function deleteLine($line_no, $csvFileName)
        {
            // this function strips a specific line from a file
            // if a line is stripped, function returns TRUE, else FALSE
            //
            // e.g.
            // deleteLine(-1, xyz.csv); // strip last line
            // deleteLine(1, xyz.csv);  // strip first line

            // Assign the file name
            $filename = $csvFileName;
            $strip_return = FALSE;

            $data = file($filename);
            $pipe = fopen($filename, 'w');
            $size = count($data);

            if ($line_no == -1)
                $skip = $size - 1;
            else
                $skip = $line_no - 1;

            for ($line = 0; $line < $size; $line++)
                if ($line != $skip)
                    fputs($pipe, $data[$line]);
                else
                    $strip_return = TRUE;

            return $strip_return;
        }

    Is it possible to refactor this function so it does not blow up with the 256MB PHP memory limit? Give me some clues. Best Regards,
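
    The blow-up is file($filename), which slurps every line of the file into an array before anything is written. The streaming alternative reads one line at a time into a second file and swaps it into place at the end; sketched here in Python (positive 1-based line numbers only, to keep the sketch short):

        import os

        def delete_line(line_no, path):
            # strip one line; reads and writes a single line at a time
            tmp = path + ".tmp"
            stripped = False
            with open(path) as src, open(tmp, "w") as dst:
                for i, line in enumerate(src, start=1):
                    if i == line_no:
                        stripped = True   # drop this line
                    else:
                        dst.write(line)   # constant memory, whatever the file size
            os.replace(tmp, path)
            return stripped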

  • PowerShell: Read text, regex sort, write output to file and formatting

    - by Bill Hunter
    I am a Powershell novice and have run into a challenge in reading, sorting, and outputting a csv file. The input csv has no headers; the data is as follows:

        05/25/2010,18:48:33,Stop,a1usak,10.128.212.212
        05/25/2010,18:48:36,Start,q2uhal,10.136.198.231
        05/25/2010,18:48:09,Stop,s0upxb,10.136.198.231

    I use the following piping construct to read the file, sort, and output to a file:

        (Get-Content d:\vpnData\u62gvpn2.csv) | %{,[regex]::Split($_, ",")} | sort @{Expression={$_[3]}},@{Expression={$_[1]}} | out-file d:\vpnData\u62gvpn3.csv

    The new file is written with the following format:

        05/25/2010 07:41:57 Stop a0uaar 10.128.196.160
        05/25/2010 12:24:24 Start a0uaar 10.136.199.51
        05/25/2010 20:00:56 Stop a0uaar 10.136.199.51

    What I would like to see in the output file is a similar format to the original input file, with comma delimiters:

        05/25/2010,07:41:57,Stop,a0uaar,10.128.196.160
        05/25/2010,12:24:24,Start,a0uaar,10.136.199.51
        05/25/2010,20:00:56,Stop,a0uaar,10.136.199.51

    But I can't quite seem to get there. I'm almost of the mind that I'll have to write another segment to read the newly produced file and reset its contents to the preferred format for further processing. Thoughts?
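
    For comparison, the same read/sort/write is a few lines of Python, where the rejoin-with-commas step the pipeline is missing becomes explicit (paths taken from the question):

        import csv

        with open(r"d:\vpnData\u62gvpn2.csv", newline="") as f:
            rows = list(csv.reader(f))
        rows.sort(key=lambda r: (r[3], r[1]))       # by user id, then by time
        with open(r"d:\vpnData\u62gvpn3.csv", "w", newline="") as f:
            csv.writer(f).writerows(rows)           # each row is re-joined with commas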

  • Flex 3 - Send an HTTP GET request from Flash and want Firefox to show the Open With box.

    - by Kash
    Hi all, I am a newb developer as far as Flex and Flash are concerned. This is what I'm trying to do:

    1) Send an HTTP request to our server (with a custom made URL). The URL basically tells the server to send data in a CSV format.
    2) The server sends a 200 OK response, which has Content-Type: application/csv, and the payload is pure CSV data.

    What I wish to do is, when Firefox gets this 200 OK response, I want it to show the standard Open With box (the one that shows up when you download a file). I tried doing this with HTTPService. I have an "Export to CSV" button on the Flash component, and upon clicking that, the Flash component is able to successfully send the HTTP request. I however don't want the Flash component to handle the response, so I don't have the HTTPService's "result" attribute defined. But nothing happens. Any suggestions on how to do this?

  • How to implement a download for dynamic files in asp.net with masterpages

    - by Tim
    Hello, the title says it all. I have seen some similar questions on SO like this or this, but either I have overlooked something or my requirement is different; neither works. My situation is the following:

    - I have a Masterpage; one of its content pages is called MasterData.aspx
    - MasterData has an asp.net ajax tabcontainer control with one usercontrol in every tabpanel
    - these usercontrols (f.e. MD_Customer.ascx) hold the main content (like a normal page)
    - they all have GridViews in them, and I want to provide an Excel-Export-Button

    What I've tried is to use an iframe like here. But the function that adds the iframe to the document never gets called, and therefore I never see the save-as dialog. Maybe this is caused by using a MasterPage. Does somebody have an idea on how to provide a button in an UpdatePanel that causes an async postback, so that I can generate a CSV dynamically in codebehind and write it to the response? Thank you in advance.

    aspx-markup:

        <asp:UpdatePanel ID="UpdGridInfo" runat="server" >
            <ContentTemplate>
                <asp:Label ID="LblInfo" Font-Underline="false" runat="server" CssClass="content" ></asp:Label>&nbsp;&nbsp;
                <asp:ImageButton ToolTip="export to Excel" style="vertical-align:bottom" ID="BtnExcelExport" ImageUrl="~/images/excel2007logo.png" runat="server" />
            </ContentTemplate>
        </asp:UpdatePanel>

    and the BtnExcelExport codebehind handler (of course it cannot work to write the csv to the response of this page):

        Private Sub BtnExcelExport_Click(ByVal sender As Object, ByVal e As System.Web.UI.ImageClickEventArgs) Handles BtnExcelExport.Click
            Dim csv As String = tableToCsv(DirectCast(Me.GridSource, DataTable))
            Response.AddHeader("Content-disposition", "attachment; filename=RuleConfigurationFile.csv")
            Response.ContentType = "application/octet-stream"
            Response.Write(csv)
            Response.End()
        End Sub

  • How do you pipe output from a Ruby script to 'head' without getting a broken pipe error

    - by dan
    I have a simple Ruby script that looks like this:

        require 'csv'

        while line = STDIN.gets
          array = CSV.parse_line(line)
          puts array[2]
        end

    But when I try using this script in a Unix pipeline like this, I get 10 lines of output, followed by an error:

        ruby lib/myscript.rb < data.csv | head
        12080450
        12080451
        12080517
        12081046
        12081048
        12081050
        12081051
        12081052
        12081054
        lib/myscript.rb:4:in `write': Broken pipe - <STDOUT> (Errno::EPIPE)

    Is there a way to write the Ruby script in a way that prevents the broken pipe exception from being raised?
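
    head exits after ten lines and closes the pipe, so the very next write upstream raises EPIPE; the usual cure is to rescue that error (Errno::EPIPE in Ruby) and exit quietly. The same situation and handling, sketched in Python:

        import os
        import sys

        try:
            for line in sys.stdin:
                print(line.split(",")[2])
        except BrokenPipeError:
            # head closed the pipe; point stdout at /dev/null so the exit-time flush stays quiet
            devnull = os.open(os.devnull, os.O_WRONLY)
            os.dup2(devnull, sys.stdout.fileno())
            sys.exit(0)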

  • shell script filter du and find by a string inside a file in a subfolder

    - by Jason
    I have the following command that I run on cygwin:

        find /cygdrive/d/tmp/* -maxdepth 0 -mtime -150 -type d | xargs du --max-depth=0 > foldersizesreport.csv

    I intended this command to do the following: for each folder under /d/tmp/ that was modified in the last 150 days, check its total size including the files within it, and report it to the file foldersizesreport.csv. However, that is now not good enough for me, as it turns out that inside each subfolder there is a file named somefile.properties:

        /d/tmp/subfolder1/somefile.properties
        /d/tmp/subfolder2/somefile.properties
        /d/tmp/subfolder3/somefile.properties
        /d/tmp/subfolder4/somefile.properties

    Inside each such file there is a property SOMEPROPKEY=3808612800100 (among other properties); this is a time in milliseconds. I need to change the command so that instead of -mtime -150 it includes in the whole calculation only the subfolders whose somefile.properties has a SOMEPROPKEY value that is a time in the future; if the value, say SOMEPROPKEY=23948948, is in the past, then don't include the folder in foldersizesreport.csv at all, because it's not relevant to me.

    So the result report should look like:

        /d/tmp/,subfolder1,<its size in KB>
        /d/tmp/,subfolder2,<its size in KB>

    and if subfolder3 had a SOMEPROPKEY=34243234 (a time in ms in the past), then it would not be in that csv file. So basically I'm looking for:

        find /cygdrive/d/tmp/* -maxdepth 0 -type d | <only subfolders whose somefile.properties has a SOMEPROPKEY=28374874827 with a time in ms in the future, not in the past> | xargs du --max-depth=0 > foldersizesreport.csv
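
    Since the test involves opening and parsing a file per folder, a small script may be easier to keep correct than a find one-liner. A Python sketch of the whole report under the layout described above (the java-style properties format and the KB rounding are assumptions):

        import os
        import time

        BASE = "/cygdrive/d/tmp"
        now_ms = time.time() * 1000

        def prop_value(path, key="SOMEPROPKEY"):
            # return the key's value from a properties file, or None if absent/unreadable
            try:
                with open(path) as f:
                    for line in f:
                        if line.startswith(key + "="):
                            return float(line.split("=", 1)[1])
            except OSError:
                pass
            return None

        def tree_size_kb(folder):
            total = 0
            for root, dirs, files in os.walk(folder):
                for name in files:
                    total += os.path.getsize(os.path.join(root, name))
            return total // 1024

        with open("foldersizesreport.csv", "w") as out:
            for sub in os.listdir(BASE):
                folder = os.path.join(BASE, sub)
                stamp = prop_value(os.path.join(folder, "somefile.properties"))
                # only folders whose timestamp lies in the future are reported
                if os.path.isdir(folder) and stamp is not None and stamp > now_ms:
                    out.write("%s/,%s,%d\n" % (BASE, sub, tree_size_kb(folder)))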

  • Drawing and filling different polygons at the same time in MATLAB

    - by Hossein
    Hi, I have the code below. It loads a CSV file into memory. This file contains the coordinates for different polygons. Each row of this file has X,Y coordinates and a string which tells to which polygon this data point belongs. For example, a polygon named "Poly1" with 100 data points has 100 rows in this file, like:

        Poly1,X1,Y1
        Poly1,X2,Y2
        ...
        Poly1,X100,Y100
        Poly2,X1,Y1
        .....

    The Index.csv file has the number of data points (number of rows) for each polygon in the file Polygons.csv. These details are not important. The thing is: I can successfully extract the data points for each polygon using the code below. However, when I plot, the lines of different polygons are connected to each other and the plot looks crappy. I need the polygons to be separated (they are connected and overlapping in some areas though). I thought that by using "fill" I could actually see them better, but "fill" just fills every polygon that it can find, and that is not desirable. I only want to fill inside the polygons. Can someone help me? I can also send you my data points if necessary; they are less than 200Kb. Thanks

        [coordinates,routeNames,polygonData] = xlsread('Polygons.csv');
        index = dlmread('Index.csv');

        firstPointer = 0
        lastPointer = index(1)
        for Counter=2:size(index)
            firstPointer = firstPointer + index(Counter) + 1
            hold on
            plot(coordinates(firstPointer:lastPointer,2),coordinates(firstPointer:lastPointer,1),'r-')
            lastPointer = lastPointer + index(Counter)
        end
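
    Whatever the language, the key is one fill call per polygon, each given only that polygon's vertices, rather than one call over all points. That grouping logic, sketched with Python and matplotlib (the column layout is assumed from the CSV sample above):

        import csv
        from collections import defaultdict
        import matplotlib.pyplot as plt

        polys = defaultdict(list)   # polygon name -> list of (x, y) vertices
        with open("Polygons.csv") as f:
            for name, x, y in csv.reader(f):
                polys[name].append((float(x), float(y)))

        for name, pts in polys.items():
            xs, ys = zip(*pts)
            plt.fill(xs, ys, alpha=0.5, label=name)  # one closed, filled patch per polygon
        plt.legend()
        plt.show()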

  • SSIS package from SQL Agent failed

    - by Pramodtech
    I have a simple package which reads data from a csv file and loads it into a SQL table. The file is located on another server and is shared; I use a UNC path in the package. The package is scheduled using a SQL Agent job. The job worked fine for 1 week and suddenly started giving this error:

        The file name "\\124.0.48.173\basel2\Commercial\Input\ACBS_GSU.csv" specified in the connection was not valid.
        End Error
        Error: 2010-04-20 16:15:07.19
        Code: 0xC0202070
        Source: ACBS_GSU Connection manager "CSV file conection"
        Description: Connection "CSV file conection" failed validation.

    Any help will be appreciated.

  • How do I use a comma-separated-value file received from a URL query in Objective-C?

    - by chiemekailo
    How do I use a comma-separated-value file received from a URL query in Objective-C? When I query the URL I get CSV such as:

        "OMRUAH=X",20.741,"3/16/2010","1:52pm",20.7226,20.7594

    How do I capture and use this for my application? My problem is creating/initializing that NSString object in the first place. An example is this link, which returns csv:

        http://download.finance.yahoo.com/d/quotes.csv?s=GBPEUR=X&f=sl1d1t1ba&e=.csv

    I don't know how to capture this as an object, since I cannot use an NSXMLParser object.
