Search Results

Search found 65497 results on 2620 pages for 'list files'.


  • Other people's files showing up in rhythmbox

    - by Avery Boyer
    I have my computer connected to a college network, and right now files that belong to other individuals on campus are showing up under Shared in Rhythmbox. This is driving me up the wall: I absolutely despise the idea that files are being thrown around on the network, that other people's s*** is showing up on my computer, and that they may be able to see my files as well. This is a very, very serious problem as far as I am concerned. I want to know how I can ensure that I am sharing no files from my computer with the network, and that no one else's files show up on my machine.


  • React to a modified directory

    - by Ghanshyam Rathod
    In Linux everything is considered a file. Now if I want to find only folders/directories, not files, how can I do that? I am getting all the modified files with the following command:

        find /Users/ghanshyam -type f -mmin -5 -print

    My goal is to generate a log file of all the modified/accessed folders. Two options are available:

      1. Create a module that is called every time a folder is modified (this one is a bit difficult because I need to watch for the particular event).
      2. Create a cron task that runs every 5 minutes, executes a shell script, and generates log entries for the modified folders.

    Do you have any other option for this task?
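
    For directories, find's -type d does the matching directly (find /Users/ghanshyam -type d -mmin -5). As a cron-friendly alternative, here is a minimal Python sketch that walks the tree and prints directories modified in the last five minutes; the root path comes from the question, everything else is an assumption:

        import os
        import time

        def recently_modified_dirs(root, minutes=5):
            # Yield directories under root whose mtime falls inside the window.
            cutoff = time.time() - minutes * 60
            for dirpath, dirnames, filenames in os.walk(root):
                if os.path.getmtime(dirpath) >= cutoff:
                    yield dirpath

        if __name__ == "__main__":
            for d in recently_modified_dirs("/Users/ghanshyam"):
                print(d)  # redirect to a log file from the cron entry

    Dropped into a five-minute cron job with its output appended to a file, this gives exactly the kind of log described above.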


  • How do I know which file a program is trying to access?

    - by user9069
    I have a program which I am trying to run; however, when I run it, it just complains that it can't find a particular file. I have no idea which folder it is looking for this file in. I have a copy of the required file; I just need to know which folder to copy it to. Is there any way to show in real time which files are being accessed, or which file accesses are being attempted and failing? I am using the ext4 filesystem, if that helps. Thanks
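
    On Linux the usual tool for this is strace, which traces a process's system calls; its -e trace=file option limits the output to calls that take a filename. A minimal Python sketch that runs the program under strace and prints the failed lookups (the program name is a placeholder):

        import subprocess

        # Run the target under strace and keep only the file-related syscalls
        # that failed with ENOENT ("No such file or directory").
        proc = subprocess.run(
            ["strace", "-f", "-e", "trace=file", "./myprogram"],  # placeholder
            stderr=subprocess.PIPE,
            text=True,
        )
        for line in proc.stderr.splitlines():
            if "ENOENT" in line:
                print(line)

    Running strace -f -e trace=file ./myprogram directly in a terminal gives the same information without the wrapper.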


  • Make Apache encode or replace quotes instead of escaping them?

    - by mplungjan
    In the documentation I read:

        Format Notes
        For security reasons, starting with version 2.0.46, non-printable and other special characters in %r, %i and %o are escaped using \xhh sequences, where hh stands for the hexadecimal representation of the raw byte. Exceptions from this rule are " and \, which are escaped by prepending a backslash, and all whitespace characters, which are written in their C-style notation (\n, \t, etc). In versions prior to 2.0.46, no escaping was performed on these strings so you had to be quite careful when dealing with raw log files.

    This is a problem for Analog, which is still the handiest analyser I use. I get

        .... "GET /somerequest?q=\"quoted string\"&someparm=bla"

    in the logfile, and it is of course flagged as corrupt, since Analog expects

        .... "GET /somerequest?q=%22quoted string%22&someparm=bla"

    or similar. I realise I can pre-process using something like

        perl -p -i.bak -e 's/\\"/%22/g' logfile

    but I'd rather not have to add this step for these files, which are 50-90MB zipped per day. Thanks for any pointers
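
    If the pre-processing step turns out to be unavoidable, a slightly more complete filter than the one-liner would also rewrite the \xhh and \\ escapes. A minimal Python sketch, assuming the log is piped through stdin (whitespace escapes like \n and \t would need the same treatment):

        import re
        import sys

        def unescape_for_analog(line):
            # Apache >= 2.0.46 escapes " and \ with a backslash and other
            # special bytes as \xhh; rewrite both as %hh so Analog accepts them.
            line = line.replace(r'\"', '%22')
            line = re.sub(r'\\x([0-9a-fA-F]{2})',
                          lambda m: '%' + m.group(1).upper(), line)
            return line.replace('\\\\', '%5C')

        for line in sys.stdin:
            sys.stdout.write(unescape_for_analog(line))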


  • Where should I store and verify files manipulated by an app

    - by Alan W. Smith
    I'm working on a little Ruby script to move screenshots while renaming them based on a specific convention. I'll be writing tests to confirm the behavior. Ruby has lots of conventions for where to store files (e.g. the "spec" and "features" directories for RSpec and Cucumber, respectively), but I'm not finding best practices for storing files that will be acted upon by the tests. The same goes for a destination for the final copies of the files. So the question, in two parts, is: Where should I store the files that the test cases will use as source input? And where should tests that need to write output files send them?


  • How are file permissions stored in the inode?

    - by Debadyuti Maiti
    Suppose there are two PCs, "A" and "B". If A downloads a file from B, what will the permissions of the downloaded file be? Is it possible that the downloaded file on A will have an inode entry with all of its permissions from B, and store B's user account as the owner? If that's the case, is it impossible to change that file's permissions on A if "others" [as in user-group-others] doesn't have the right to write to the file? E.g. if the file is

        __x __x __x file.txt [on B]

    then what would the file permissions be on A for that same file downloaded from B [e.g. through vsftpd]?

        __x __x __x file.txt [on A]

    or

        rw_x rw_x rw_x file.txt [on A]

    [i.e. defined by A's default umask value]
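
    In general, a download creates a brand-new local file: the remote mode bits and owner are not transferred into the local inode, and the new file's permissions come from the downloading process's umask (vsftpd additionally has its own local_umask setting). A minimal Python sketch of that rule; the filename is a placeholder:

        import os
        import stat

        # New files are created with mode 0666 & ~umask (0777 for directories),
        # regardless of how the remote copy was protected.
        os.umask(0o022)
        with open("downloaded.txt", "w") as f:  # placeholder filename
            f.write("payload")
        mode = stat.S_IMODE(os.stat("downloaded.txt").st_mode)
        print(oct(mode))  # 0o644 here, i.e. rw-r--r--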


  • How to add a daemon to a Quickly project

    - by darkrex1986
    Currently I'm developing an application with Quickly which is divided into two parts: a graphical UI where the user can configure some things, and a daemon which does most of the work in the background. I started with the UI, creating some windows for settings and so on. Now I want to start coding the daemon, but I have no clue how to implement it in my Quickly project. Can I simply paste the daemon's files into the project folder, or is there a proper way of adding new files to a Quickly project? Or do I have to create a new project and merge them together?


  • How to build a list from Postfix maillog

    - by dstonek
    I want to build a list from maillog and maillog.x containing something like the date, sender's email, recipient's email, and subject of each message, filtering on outgoing emails and our own domain. I've read about importing a CSV file into a spreadsheet program; the issue is that I would first have to add field separators to the log file, and I couldn't find out how to customize that. How can I build the list and add the separators? This is an example of a sending-mail log:

        Jun 11 15:24:58 host postfix/cleanup[19060]: F41C660D98A0: warning: header Subject: TESTING SUBJECT from unknown[XXX.XXX.XXX.XXX]; [email protected] [email protected] proto=ESMTP helo=<[192.168.1.91]
        Jun 11 15:25:01 host postfix/smtp[19062]: F41C660D98A0: to=, relay=mx-rl.com[xxx.xxx.xxx.xxx]:25, delay=3.4, delays=0.66/0.01/0.86/1.9, dsn=2.0.0, status=sent (250 <538E30D9000A1DD8 Mail accepted)

    The list would contain the three highlighted fields (date, Subject header, and recipient), filtering by to = [email protected]
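
    A minimal Python sketch of one way to get there: join the cleanup line (which carries the Subject header) with the smtp line (which carries to=) on the postfix queue ID, then emit tab-separated records that a spreadsheet can import directly. The regexes assume the stock log format shown above and will need adjusting to your config; the log path and domain filter are placeholders, and a from=<...> capture can be added the same way:

        import re

        subject_re = re.compile(
            r'^(?P<date>\w+ +\d+ [\d:]+) \S+ postfix/cleanup\[\d+\]: '
            r'(?P<qid>[0-9A-F]+): warning: header Subject: (?P<subject>.*?) from ')
        to_re = re.compile(
            r'postfix/smtp\[\d+\]: (?P<qid>[0-9A-F]+): to=<(?P<to>[^>]*)>')

        records = {}  # queue ID -> [date, subject, recipient]
        with open("/var/log/maillog") as log:  # placeholder path
            for line in log:
                m = subject_re.search(line)
                if m:
                    records[m.group("qid")] = [m.group("date"),
                                               m.group("subject"), ""]
                    continue
                m = to_re.search(line)
                if m and m.group("qid") in records:
                    records[m.group("qid")][2] = m.group("to")

        for date, subject, to in records.values():
            if to.endswith("@example.com"):  # placeholder recipient filter
                print("\t".join([date, to, subject]))

    Tab-separated output also sidesteps the field-separator problem: every spreadsheet import dialog accepts tabs.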


  • Alternating links and slide down content on a list of sub sections

    - by user27291
    I have a page for a doctor's practice. On the summary page for the practice there is a list of subsections such as Women's, Men's, Children, Sport, etc. Some of these subsections are very large; others may be a paragraph or more with a short unordered list. In terms of content volume, the large subsections warrant their own separate pages; the smaller ones, not so much. I created a little plugin which enables me to use the list of subsections in two ways: when clicking on the title of a larger section, you'll be sent through to its own page; for the smaller sections, a slide-down box will open with the information. Is this a good way to handle my information architecture? Should I be giving the smaller subsections their own pages for SEO purposes?


  • Java List to Excel Columns

    - by Nitin
    Correct me where I'm going wrong. I have written a program in Java which gets the list of files from two different directories and makes two Java lists with the file names. I want to transfer both lists (the downloaded-files list and the uploaded-files list) to an Excel sheet. The result I'm getting is that the lists are transferred row-wise; I want them column-wise. Given below is the code:

        public class F {

            static List<String> downloadList = new ArrayList<>();
            static List<String> dispatchList = new ArrayList<>();

            public static class FileVisitor extends SimpleFileVisitor<Path> {
                @Override
                public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                        throws IOException {
                    String name = file.toRealPath().getFileName().toString();
                    if (name.endsWith(".pdf") || name.endsWith(".zip")) {
                        downloadList.add(name);
                    }
                    if (name.endsWith(".xml")) {
                        dispatchList.add(name);
                    }
                    return FileVisitResult.CONTINUE;
                }
            }

            public static void main(String[] args) throws IOException {
                try {
                    Path downloadPath = Paths.get("E:\\report\\02_Download\\10252013");
                    Path dispatchPath = Paths.get("E:\\report\\01_Dispatch\\10252013");
                    FileVisitor visitor = new FileVisitor();
                    Files.walkFileTree(downloadPath, visitor);
                    Files.walkFileTree(downloadPath,
                            EnumSet.of(FileVisitOption.FOLLOW_LINKS), 1, visitor);
                    Files.walkFileTree(dispatchPath, visitor);
                    Files.walkFileTree(dispatchPath,
                            EnumSet.of(FileVisitOption.FOLLOW_LINKS), 1, visitor);
                    System.out.println("Download File List" + downloadList);
                    System.out.println("Dispatch File List" + dispatchList);
                    F f = new F();
                    f.UpDown(downloadList, dispatchList);
                } catch (Exception ex) {
                    Logger.getLogger(F.class.getName()).log(Level.SEVERE, null, ex);
                }
            }

            int rownum = 0;
            int colnum = 0;
            HSSFSheet firstSheet;
            Collection<File> files;
            HSSFWorkbook workbook;
            File exactFile;

            {
                workbook = new HSSFWorkbook();
                firstSheet = workbook.createSheet("10252013");
                Row headerRow = firstSheet.createRow(rownum);
                headerRow.setHeightInPoints(40);
            }

            public void UpDown(List<String> download, List<String> upload) throws Exception {
                List<String> headerRow = new ArrayList<>();
                headerRow.add("Downloaded");
                headerRow.add("Uploaded");
                List<List> recordToAdd = new ArrayList<>();
                recordToAdd.add(headerRow);
                recordToAdd.add(download);
                recordToAdd.add(upload);
                F f = new F();
                f.CreateExcelFile(recordToAdd);
                f.createExcelFile();
            }

            void createExcelFile() {
                FileOutputStream fos = null;
                try {
                    fos = new FileOutputStream(new File("E:\\report\\Download&Upload.xls"));
                    HSSFCellStyle hsfstyle = workbook.createCellStyle();
                    hsfstyle.setBorderBottom((short) 1);
                    hsfstyle.setFillBackgroundColor((short) 245);
                    workbook.write(fos);
                } catch (Exception e) {
                }
            }

            public void CreateExcelFile(List<List> l1) throws Exception {
                try {
                    for (int j = 0; j < l1.size(); j++) {
                        Row row = firstSheet.createRow(rownum);
                        List<String> l2 = l1.get(j);
                        for (int k = 0; k < l2.size(); k++) {
                            Cell cell = row.createCell(k);
                            cell.setCellValue(l2.get(k));
                        }
                        rownum++;
                    }
                } catch (Exception e) {
                } finally {
                }
            }
        }

    (The purpose is to verify the files downloaded and uploaded for the given date.) Thanks.
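
    The row/column swap itself is just a transposition: instead of one row per list, write one row per index with one cell per list. The question uses Java and Apache POI, where the same loop restructuring applies; purely as an illustration of the transposed loop, here is a Python sketch using openpyxl (a different Excel library; data and filename are placeholders):

        from openpyxl import Workbook  # assumes openpyxl is installed

        downloads = ["a.pdf", "b.zip"]         # placeholder data
        uploads = ["a.xml", "b.xml", "c.xml"]  # placeholder data

        wb = Workbook()
        ws = wb.active
        ws.append(["Downloaded", "Uploaded"])  # header row
        # Transpose: index i becomes the row, each list becomes a column.
        for i in range(max(len(downloads), len(uploads))):
            if i < len(downloads):
                ws.cell(row=i + 2, column=1, value=downloads[i])
            if i < len(uploads):
                ws.cell(row=i + 2, column=2, value=uploads[i])
        wb.save("Download&Upload.xlsx")        # placeholder filename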


  • How best to use XPath with very large XML files in .NET?

    - by glenatron
    I need to do some processing on fairly large XML files (large here being potentially upwards of a gigabyte) in C#, including performing some complex XPath queries. The problem I have is that the standard way I would normally do this, through the System.Xml libraries, likes to load the whole file into memory before it does anything with it, which can cause memory problems with files of this size. I don't need to be updating the files at all, just reading them and querying the data contained in them. Some of the XPath queries are quite involved and go across several levels of parent-child type relationships - I'm not sure whether this will affect the ability to use a stream reader rather than loading the data into memory as a block. One way I can see of making it work is to perform the simple analysis using a stream-based approach, and perhaps wrap the XPath statements into XSLT transformations that I could run across the files afterward, although it seems a little convoluted. Alternately, I know that there are some elements the XPath queries will not run across, so I guess I could break the document up into a series of smaller fragments based on its original tree structure, which could perhaps be small enough to process in memory without causing too much havoc. I've tried to explain my objective here, so if I'm barking up totally the wrong tree in terms of general approach, I'm sure you folks can set me right...
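
    For reference, System.Xml.XmlReader is .NET's streaming counterpart to loading a whole XmlDocument, and the hybrid approach hinted at above is common: stream the document, materialize one candidate subtree at a time, and query just that fragment. A language-neutral sketch of the pattern using Python's stdlib ElementTree (element and field names are hypothetical):

        import xml.etree.ElementTree as ET

        # Stream the file, handle one <record> at a time, then discard it,
        # so memory use stays flat regardless of document size.
        for event, elem in ET.iterparse("huge.xml", events=("end",)):
            if elem.tag == "record":
                if elem.findtext("status") == "active":  # simple XPath-ish test
                    print(elem.findtext("id"))
                elem.clear()  # free the subtree we just processed

    Queries that cross subtree boundaries are exactly the ones this pattern cannot serve, which is where the XSLT or split-the-document fallbacks come in.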


  • Read from one large file and write to many files (tens, hundreds, or thousands) in Java?

    - by Rudiger
    I have a large-ish file (4-5 GB compressed) of small messages that I wish to parse into approximately 6,000 files by message type. Messages are small; anywhere from 5 to 50 bytes depending on the type. Each message starts with a fixed-size type field (a 6-byte key). If I read a message of type '000001', I want to append its payload to 000001.dat, etc. The input file contains a mixture of messages; I want N homogeneous output files, where each output file contains only the messages of a given type. What's an efficient, fast way of writing these messages to so many individual files? I'd like to use as much memory and processing power as it takes to get it done as fast as possible. I can write compressed or uncompressed files to the disk. I'm thinking of using a hashmap with a message type key and an outputstream value, but I'm sure there's a better way to do it. Thanks!
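
    A sketch of the "map from message type to open output stream" idea, with one wrinkle worth planning for: 6,000 simultaneously open files can exceed the OS's default file-descriptor limit, so the map can be capped and treated as an LRU cache. Minimal Python, with the message-framing helper left hypothetical:

        from collections import OrderedDict

        MAX_OPEN = 512           # assumption: tune to your ulimit
        handles = OrderedDict()  # message type -> open file, in LRU order

        def out_for(msg_type):
            if msg_type in handles:
                handles.move_to_end(msg_type)  # mark as recently used
                return handles[msg_type]
            if len(handles) >= MAX_OPEN:
                _, oldest = handles.popitem(last=False)
                oldest.close()
            f = open(msg_type + ".dat", "ab")  # append mode: reopening is safe
            handles[msg_type] = f
            return f

        def dispatch(stream):
            while True:
                key = stream.read(6)  # the fixed-size 6-byte type field
                if not key:
                    break
                # read_payload is hypothetical: it must consume exactly one
                # message body according to the (type-specific) framing.
                out_for(key.decode("ascii")).write(read_payload(stream, key))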


  • How can I use Perl to determine whether the contents of two files are identical?

    - by Zaid
    This question comes from a need to ensure that changes I've made to code don't affect the values it outputs to a text file. Ideally, I'd roll a sub that takes in two filenames and returns 1 or returns 0 depending on whether the contents are identical or not, whitespace and all. Given that text processing is Perl's forte, it should be quite easy to compare two files and determine whether they are identical or not (code below untested):

        use strict;
        use warnings;

        sub files_match {
            my ( $fileA, $fileB ) = @_;
            open my $file1, '<', $fileA or die "Can't open $fileA: $!";
            open my $file2, '<', $fileB or die "Can't open $fileB: $!";
            while ( my $lineA = <$file1> ) {
                my $lineB = <$file2>;
                return 0 if !defined $lineB or $lineA ne $lineB;
            }
            return 1;
        }

    The only way I can think of (sans CPAN modules) is to open the two files in question and read them in line by line until a difference is found. If no difference is found, the files must be identical. But this approach is limited and clumsy. What if the total lines differ in the two files? Should I open and close to determine line count, then re-open to scan the texts? Yuck. I don't see anything in perlfaq5 relating to this. I want to stay away from modules unless they come with the core Perl 5.6.1 distribution.
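
    For what it's worth, Perl's File::Compare has shipped with the core distribution since 5.004, so a stock 5.6.1 has it: compare($fileA, $fileB) returns 0 when the contents match, and it handles differing lengths without any line counting. Python's standard library offers the same facility, shown here as a sketch with placeholder filenames:

        import filecmp

        # shallow=False forces a byte-for-byte content comparison rather than
        # trusting os.stat() metadata alone.
        identical = filecmp.cmp("old_output.txt", "new_output.txt", shallow=False)
        print("files match" if identical else "files differ")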


  • Algorithm for querying linearly through a non-linear list of questions

    - by JoshLeaves
    For a multiplayer trivia game, I need to supply my users with a new quiz in a desired subject (Science, Maths, Litt. and such) at the start of every game. I've generated about 5K quizzes for each subject and filled my database with them. So my 'Quizzes' database looks like this:

        |ID   |Subject     |Question
        +-----+------------+----------------------------------
        |   23|Science     | What's water?
        |   42|Maths       | What's 2+2?
        |   99|Litt.       | Who wrote "Pride and Prejudice"?
        |  123|Litt.       | Who wrote "On The Road"?
        |  146|Maths       | What's 2*2?
        |  599|Science     | You know what's cool?
        | 1042|Maths       | What's the Fibonacci Sequence?
        | 1056|Maths       | What's 42?

    And so on... (much more detailed/complex, but I'll keep the example simple). As you can see, due to technical constraints (MongoDB), my IDs are not linear, but I can use them as an increasing sequence. So far, my algorithm to ensure two users get a new quiz when they play together is the following:

        // Take the last played quizzes by P1 and P2
        var q_one = player_one.getLastPlayedQuizz('Maths');
        var q_two = player_two.getLastPlayedQuizz('Maths');

        // If both of them never played in the subject, return first quizz in the list
        if ((q_one == NULL) && (q_two == NULL))
            return QuizzDB.findOne({subject: 'Maths'});

        // If one of them never played, play the next quizz for the other player.
        // This quizz is found by asking for the first quizz in the desired subject
        // where the ID is greater than the last played quizz's ID (if the last
        // played quizz ID is 42, this will return 146 following the above example
        // database)
        if (q_one == NULL)
            return QuizzDB.findOne({subject: 'Maths', ID > q_two});
        if (q_two == NULL)
            return QuizzDB.findOne({subject: 'Maths', ID > q_one});

        // And if both of them have a lastPlayedQuizz, we return the next quizz
        // for the player whose lastPlayedQuizz got the higher ID
        if (q_one > q_two)
            return QuizzDB.findOne({subject: 'Maths', ID > q_one});
        else
            return QuizzDB.findOne({subject: 'Maths', ID > q_two});

    Now here comes the real problem: once I get to the end of my database (let's say P1's last played quiz in 'Maths' is 1056, P2's is 146 and P3's is 1042), following my algorithm, P1's ID is the highest, so I ask for the next question in 'Maths' where the ID is greater than 1056. There is nothing, so I roll back to the beginning of my quiz list (with a random skipper to avoid having the first question always show up). P1 and P2's last played will then be 42, and they will start fresh from the beginning of the list. However, if P1 (42) plays against P3 (1042), the resulting ID will be 1056... which P1 already played two games ago. Basically, players who just "rolled back" to the beginning of the list will be brought back to the end of the list by players who still haven't rolled back. The rollback WILL happen in the end, but it'll take time and there'll be a "bottleneck" at the beginning and at the end. Thus my question: what would be the best algorithm to avoid this bottleneck and ensure players don't get stuck endlessly on the same quizzes? Also bear in mind that I've got some technical constraints:

      • I can't get a random question in a subject (i.e. no "QuizzDB.findOne({subject: 'Maths'}).skip(random());"). It's fine to skip one to twenty records, but the MongoDB documentation warns against skipping too many documents.
      • I would like to avoid building an array of every quiz played by each player and finding the next non-played one in the database with a $nin.

    Thanks for your help


  • Linq filtering an IQueryable<T> (System.Data.Linq.DataQuery) object by a List<T> (System.Collection.

    - by Klaptrap
    My IQueryable line is:

        // find all timesheets for this period - from db, so System.Data.Linq.DataQuery
        var timesheets = _timesheetRepository.FindByPeriod(dte1, dte2);

    My List line is:

        // get my team from AD - from Active Directory, so System.Collections.Generic.List
        var adUsers = _adUserRepository.GetMyTeam(User.Identity.Name);

    I wish to only show timesheets for those users in the timesheet collection that are present in the user collection. If I use a standard C# expression such as:

        var teamsheets = from t in timesheets
                         join user in adUsers on t.User1.username equals user.fullname
                         select t;

    I get the error "An IQueryable that returns a self-referencing Constant expression is not supported". Any recommendations?


  • How can I copy files with names containing spaces and UNICODE, when using a shell script?

    - by LOlliffe
    I have a list of files that I'm trying to copy and move (using cp and mv) in a bash shell script. The problem I'm running into is that I can't get either command to recognize a huge number of the files, seemingly because the filenames contain spaces and/or Unicode characters. I couldn't find any switches to decode/re-encode these characters. Instead, for example, if I copy "file name.xml", I get "*.xml" and a script error that the file wasn't found. Does anyone know settings or commands that will deal with these files?
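
    Inside bash the usual culprit is an unquoted variable: cp "$file" "$dest" keeps a name containing spaces as a single argument, and iterating with for file in *.xml (or while IFS= read -r file) avoids re-splitting the names. If the copy logic is getting hairy anyway, another option is to hand the loop to Python, which treats each argument as an opaque name; a minimal sketch:

        import shutil
        import sys

        # Usage: python copy_files.py DEST_DIR FILE [FILE ...]
        # Each filename arrives as one argv entry, so spaces and Unicode in
        # names need no special handling here.
        dest = sys.argv[1]
        for name in sys.argv[2:]:
            shutil.copy(name, dest)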


  • Setup.exe files downloading without cab files over poor connections

    - by Colin
    We have customers who are trying to download a setup.exe file over mobile connections that appear to be very slow. They have reported that when they click on the downloaded setup.exe, the install wizard starts up, but part way through the wizard they get an error message indicating that a cab file is corrupt or missing. They couriered a problem tablet to us, and we downloaded the file without a problem, but I could replicate the problem by using https to download the file (https is normally used to access the rest of the site, although it is not necessary for the download). When I did this, the downloaded file was 2.8MB; it should be 8MB. I don't think https is the root cause of the problem, because I can see the download link in the browser history using http, so I know the customer tried to download using http. I think the issue is that the poor connection is preventing a complete download, but the browser is acting as if it is complete. Is there a way to ensure the file is downloaded fully, or not at all? Why does the browser not indicate that the download is incomplete?
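
    One way to get "fully or not at all" behaviour after the fact is to publish a checksum next to the download and verify it before running the installer; a truncated 2.8MB transfer then fails the check immediately instead of dying partway through the wizard. A minimal Python sketch of the verification side (the expected digest is a placeholder):

        import hashlib

        EXPECTED = "0000...placeholder..."  # the SHA-256 published with the link

        h = hashlib.sha256()
        with open("setup.exe", "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        print("OK" if h.hexdigest() == EXPECTED else "corrupt or incomplete")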


  • large amount of data in many text files - how to process?

    - by Stephen
    Hi, I have large amounts of data (a few terabytes) and accumulating... They are contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing/averaging + additional transformations) over observations/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks, but I fear this may be a bit large. Some candidate solutions are to:

      1. write the whole thing in C (or Fortran);
      2. import the files (tables) into a relational database directly and then pull off chunks in R or Python (some of the transformations are not amenable to pure SQL solutions);
      3. write the whole thing in Python.

    Would (3) be a bad idea? I know you can wrap C routines in Python, but in this case, since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks
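
    As a point of reference for option 3, the I/O-bound shape of the task fits a one-pass streaming aggregation: read each 30MB file once and keep only the running aggregates in memory. A minimal Python sketch; the column indices, predicate, and file list are placeholders:

        import csv
        from collections import defaultdict

        sums = defaultdict(float)
        counts = defaultdict(int)

        def aggregate(path):
            with open(path, newline="") as f:
                for row in csv.reader(f, delimiter="\t"):
                    if row[2] == "some_condition":  # placeholder predicate
                        key = row[0]                # placeholder group key
                        sums[key] += float(row[1])  # placeholder value column
                        counts[key] += 1

        for path in ["chunk1.tsv", "chunk2.tsv"]:   # placeholder file list
            aggregate(path)
        for key in sums:
            print(key, sums[key] / counts[key])     # per-group mean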


  • remuxing m4v files from h264 AVI files

    - by crankharder
    I have a bunch of, I think, x264-encoded AVIs that I'd like to convert to m4v so that I can play them with QuickTime. Here's how I created them. First I dump the vob from the DVD with this:

        $ mplayer -dumpstream -dumpfile new.vob dvd://1

    Then I compress it:

        $ mencoder -oac copy -o new.avi -ovc x264 -x264encopts crf=18 new.vob

    I tried converting them to m4v, but it's blowing up... I tried dumping the h264/aac streams:

        $ mplayer new.avi -dumpvideo -dumpfile new.h264
        $ mplayer new.avi -dumpaudio -dumpfile new.acc

    and remuxing(?) with MP4Box, but I'm getting an error:

        $ MP4Box -add new.h264#video -add new.aac#audio new.m4v
        Cannot find H264 start code
        Error importing new.h264#video: BitStream Not Compliant

    So not sure what to do now...


  • VBScript - Compare and copy files from folder if newer than destination files

    - by Kenny Bones
    Hi, I'm trying to design this script that's supposed to be used as part of a logon script for a lot of users. This script is basically supposed to take a source folder and a destination folder and make sure that the destination folder has the exact same content as the source folder, but only copy a file if the date-modified stamp of the source file is newer than that of the destination file. I have been thinking out this basic pseudo-code and am just trying to make sure it is valid and solid:

        Dim fso, strSourceFolder, strDestFolder
        Set fso = CreateObject("Scripting.FileSystemObject")
        strSourceFolder = "C:\Users\User\SourceFolder"
        strDestFolder = "C:\Users\User\DestFolder"

        For Each file In fso.GetFolder(strSourceFolder).Files
            ReplaceIfNewer file, strDestFolder
        Next

        Sub ReplaceIfNewer(SourceFile, DestFolder)
            Dim destPath
            destPath = DestFolder & "\" & SourceFile.Name
            If Not fso.FileExists(destPath) Then
                SourceFile.Copy destPath
            ElseIf SourceFile.DateLastModified > fso.GetFile(destPath).DateLastModified Then
                SourceFile.Copy destPath, True   ' True = overwrite
            End If
        End Sub

    Would this work? I'm not quite sure how it can be done, but I could probably spend all day figuring it out. The people here are generally so amazingly smart that with your help it would take a lot less time :)


  • How can I read pcap files in a friendly format?

    - by Tony
    A simple cat on the pcap file looks terrible:

        $ cat tcp_dump.pcap
        ?ò????YVJ? JJ ?@@.?E<??@@ ?CA??qe?U?????h? .Ceh?YVJ?? JJ ?@@.?E<??@@ CA??qe?U?????z? .ChV?YVJ$?JJ ?@@.?E<-/@@A?CA??9????F???A&? .Ck??YVJgeJJ@@.??#3E<@3{n??9CA??P???F???<K? ??`.Ck??YVJgeBB ?@@.?E4-0@@AFCA??9????F?P????? .Ck???`?YVJ?""@@.??#3E?L@3?I??9CA??P???F????? ???.Ck?220-rly-da03.mx etc.

    I tried to make it prettier with:

        $ sudo tcpdump -ttttnnr tcp_dump.pcap
        reading from file tcp_dump.pcap, link-type EN10MB (Ethernet)
        2009-07-09 20:57:40.819734 IP 67.23.28.65.49237 > 216.239.113.101.25: S 2535121895:2535121895(0) win 5840 <mss 1460,sackOK,timestamp 776168808 0,nop,wscale 5>
        2009-07-09 20:57:43.819905 IP 67.23.28.65.49237 > 216.239.113.101.25: S 2535121895:2535121895(0) win 5840 <mss 1460,sackOK,timestamp 776169558 0,nop,wscale 5>
        2009-07-09 20:57:47.248100 IP 67.23.28.65.42385 > 205.188.159.57.25: S 2644526720:2644526720(0) win 5840 <mss 1460,sackOK,timestamp 776170415 0,nop,wscale 5>
        2009-07-09 20:57:47.288103 IP 205.188.159.57.25 > 67.23.28.65.42385: S 1358829769:1358829769(0) ack 2644526721 win 5792 <mss 1460,sackOK,timestamp 4292123488 776170415,nop,wscale 2>
        2009-07-09 20:57:47.288103 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 1 win 183 <nop,nop,timestamp 776170425 4292123488>
        2009-07-09 20:57:47.368107 IP 205.188.159.57.25 > 67.23.28.65.42385: P 1:481(480) ack 1 win 1448 <nop,nop,timestamp 4292123568 776170425>
        2009-07-09 20:57:47.368107 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 481 win 216 <nop,nop,timestamp 776170445 4292123568>
        2009-07-09 20:57:47.368107 IP 67.23.28.65.42385 > 205.188.159.57.25: P 1:18(17) ack 481 win 216 <nop,nop,timestamp 776170445 4292123568>
        2009-07-09 20:57:47.404109 IP 205.188.159.57.25 > 67.23.28.65.42385: . ack 18 win 1448 <nop,nop,timestamp 4292123606 776170445>
        2009-07-09 20:57:47.404109 IP 205.188.159.57.25 > 67.23.28.65.42385: P 481:536(55) ack 18 win 1448 <nop,nop,timestamp 4292123606 776170445>
        2009-07-09 20:57:47.404109 IP 67.23.28.65.42385 > 205.188.159.57.25: P 18:44(26) ack 536 win 216 <nop,nop,timestamp 776170454 4292123606>
        2009-07-09 20:57:47.444112 IP 205.188.159.57.25 > 67.23.28.65.42385: P 536:581(45) ack 44 win 1448 <nop,nop,timestamp 4292123644 776170454>
        2009-07-09 20:57:47.484114 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 581 win 216 <nop,nop,timestamp 776170474 4292123644>
        2009-07-09 20:57:47.616121 IP 67.23.28.65.42385 > 205.188.159.57.25: P 44:50(6) ack 581 win 216 <nop,nop,timestamp 776170507 4292123644>
        2009-07-09 20:57:47.652123 IP 205.188.159.57.25 > 67.23.28.65.42385: P 581:589(8) ack 50 win 1448 <nop,nop,timestamp 4292123855 776170507>
        2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: . ack 589 win 216 <nop,nop,timestamp 776170516 4292123855>
        2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: P 50:56(6) ack 589 win 216 <nop,nop,timestamp 776170516 4292123855>
        2009-07-09 20:57:47.652123 IP 67.23.28.65.42385 > 205.188.159.57.25: F 56:56(0) ack 589 win 216 <nop,nop,timestamp 776170516 4292123855>
        2009-07-09 20:57:47.668124 IP 67.23.28.65.49239 > 216.239.113.101.25: S 2642380481:2642380481(0) win 5840 <mss 1460,sackOK,timestamp 776170520 0,nop,wscale 5>
        2009-07-09 20:57:47.692126 IP 205.188.159.57.25 > 67.23.28.65.42385: P 589:618(29) ack 57 win 1448 <nop,nop,timestamp 4292123893 776170516>
        2009-07-09 20:57:47.692126 IP 67.23.28.65.42385 > 205.188.159.57.25: R 2644526777:2644526777(0) win 0
        2009-07-09 20:57:47.692126 IP 205.188.159.57.25 > 67.23.28.65.42385: F 618:618(0) ack 57 win 1448 <nop,nop,timestamp 4292123893 776170516>
        2009-07-09 20:57:47.692126 IP 67.23.28.65.42385 > 205.188.159.57.25: R 2644526777:2644526777(0) win 0

    Well... that is much prettier, but it doesn't show the actual messages. I can actually extract more information just viewing the RAW file. What is the best (and preferably easiest) way to just view all the contents of the pcap file?

    UPDATE

    Thanks to the responses below, I made some progress. Here is what it looks like now:

        $ tcpdump -qns 0 -A -r blah.pcap
        20:57:47.368107 IP 205.188.159.57.25 > 67.23.28.65.42385: tcp 480
        0x0000:  4500 0214 834c 4000 3306 f649 cdbc 9f39  E....L@.3..I...9
        0x0010:  4317 1c41 0019 a591 50fe 18ca 9da0 4681  C..A....P.....F.
        0x0020:  8018 05a8 848f 0000 0101 080a ffd4 9bb0  ................
        0x0030:  2e43 6bb9 3232 302d 726c 792d 6461 3033  .Ck.220-rly-da03
        0x0040:  2e6d 782e 616f 6c2e 636f 6d20 4553 4d54  .mx.aol.com.ESMT
        0x0050:  5020 6d61 696c 5f72 656c 6179 5f69 6e2d  P.mail_relay_in-
        0x0060:  6461 3033 2e34 3b20 5468 752c 2030 3920  da03.4;.Thu,.09.
        0x0070:  4a75 6c20 3230 3039 2031 363a 3537 3a34  Jul.2009.16:57:4
        0x0080:  3720 2d30 3430 300d 0a32 3230 2d41 6d65  7.-0400..220-Ame
        0x0090:  7269 6361 204f 6e6c 696e 6520 2841 4f4c  rica.Online.(AOL
        0x00a0:  2920 616e 6420 6974 7320 6166 6669 6c69  ).and.its.affili
        0x00b0:  6174 6564 2063 6f6d 7061 6e69 6573 2064  ated.companies.d
        etc.

    This looks good, but it still makes the actual messages on the right difficult to read. Is there a way to view those messages in a more friendly way?

    UPDATE

    This made it pretty:

        $ tcpick -C -yP -r tcp_dump.pcap

    Thanks!

