unix tool to remove duplicate lines from a file

Posted by Nathan Fellman on Stack Overflow. Published 2009-04-14.

I have a tool that generates tests and predicts the output. The idea is that if I have a failure, I can compare the prediction to the actual output and see where they diverged. The problem is that the actual output contains some lines twice, which confuses diff. I want to remove the duplicates so that I can compare the two easily. Basically, I want something like sort -u, but without the sorting.

Is there any unix command-line tool that can do this?
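One order-preserving approach is the classic awk one-liner, which prints each line only the first time it is seen (the file names below are just placeholders):

    # print a line only the first time it appears; input order is preserved
    awk '!seen[$0]++' actual-output.txt > actual-output.dedup

    # then compare against the prediction as usual
    diff predicted-output.txt actual-output.dedup

This works because seen[$0]++ evaluates to 0 (false) the first time a given line occurs and to a non-zero value afterwards, so the negation makes awk print only first occurrences, without sorting the file.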

