Search Results

Search found 65497 results on 2620 pages for 'list files'.


  • Is there a smarter Find files utility for Windows 8 than Windows key + F?

    - by Clay Shannon
    Is there any utility for Windows 8 that will basically do the same thing the old "Find" dialog in Explorer did? Oftentimes (many times a day) I need to find a particular file, and I don't know its name or where it is, but I can remember a phrase in it and approximately when it was written; e.g., it has the phrase "Duckbilled Platypus" in it and was written sometime in the last week. The Find Files functionality in Windows 8 is lame by comparison; I know there are probably geeky ways to jump through hoops and do it, but I don't want to have to write grep expressions, I want something easy like the old functionality...
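
    For reference, the built-in "geeky way" alluded to above would look roughly like this from a command prompt (a sketch using Windows' findstr; the phrase and file mask are placeholders, and filtering by modification date would take extra work):

        findstr /s /i /m /c:"Duckbilled Platypus" *.txt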

    Read the article

  • How to put several files in one archive?

    - by Roman
    I have 10 files which I need to send by e-mail. It is inconvenient for me to attach all 10 files, and it will be inconvenient for the receiver to download all 10 files (it can be annoying to do the same operation 10 times). I would like to put all 10 files into one file (I think this is called an archive). How can I do it? Important details: I am working in Windows 7 and prefer to do the mentioned operation from the command line. In the directory where I have my 10 files, I have many other files which I would not like to include in the archive. The files are small, so compression rate and size do not play any role. I just want an easy way to put 10 files into one, and then an easy way to extract these 10 files.
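
    One way to do this from the Windows command line is with the free 7-Zip tool (a sketch, assuming 7z.exe is installed and on the PATH; the file names are placeholders):

        7z a bundle.zip file1.txt file2.txt file3.txt
        rem or keep the 10 names in a list file and pass it with @
        7z a bundle.zip @files-to-send.txt

    The receiver can extract the archive with 7z x bundle.zip, or with Windows' built-in zip support.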

    Read the article

  • Search text in a list of files. Double search. Search files within files

    - by wormhit
    I'm trying to execute a double search within files and return file names. I'm using

        find ./ -iname '*txt' | xargs grep "searchtext" -sl

    to find file names with 'searchtext' in them. The command returns a list of files. How can I find "othersearchtext" in those already-found files and show them in the same fashion? EDIT - answer:

        grep -l "othersearchtext" $(find ./ -iname '*txt' | xargs grep "searchtext" -sl)
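
    The command-substitution answer above works but breaks on file names containing whitespace. A more robust variant of the same double search, assuming GNU find, grep, and xargs (the -print0/-Z/-0 flags pass NUL-delimited names between the stages):

        find . -iname '*txt' -print0 \
          | xargs -0 grep -slZ "searchtext" \
          | xargs -0 grep -l "othersearchtext"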

    Read the article

  • Is there a way to convert a list of cc's to a distribution list in Outlook?

    - by Will M
    In Outlook 2003, I find myself frequently sending emails to the same group of people all working on a project. I'd like to find a simple way to convert the list of addresses in an email that I would "reply all" to into an Outlook distribution list for future use. I know how to create Outlook distribution lists; that's not my question. But the normal process that I see in Outlook requires adding the names to the list one by one. If I have an email with a large "reply all" list, is there any way to take that whole list all at once (including many email addresses not otherwise already in my Outlook contacts) and convert it to a distribution list in one step? Trying to google for this answer got me many, many pages of instructions on how to create a distribution list, but I can't seem to find the search term to locate an answer on how to convert a list of addresses.

    Read the article

  • Quickly Add Watermark To Multiple PDF Files Using “Batch PDF Watermark”

    - by Kavitha
    Want to add a watermark to your PDF files with a single click? You can use the freeware Batch PDF Watermark, a super cool application that lets you add image or text watermarks to multiple files at a time. The Office 2010 style ribbon user interface of the application is very easy to use and provides many options to configure watermark properties like font styles, positioning, transparency levels, rotation of the watermark image, scaling of the watermark image, etc. Before running the watermark process, you can even preview it. To select multiple PDF files to watermark you can use the “Add Files” option to hand-pick the required files or the “Add Folder” option to choose all the PDF files available in a folder. Download Batch PDF Watermark [via liferocks]

    Read the article

  • How to Upload Really Large Files to SkyDrive, Dropbox, or Email

    - by Matthew Guay
    Do you need to upload a very large file to store online or email to a friend? Unfortunately, whether you’re emailing a file or using online storage sites like SkyDrive, there’s a limit on the size of files you can use. Here’s how to get around the limits. SkyDrive only lets you add files up to 50 MB, and while the Dropbox desktop client lets you add really large files, the web interface has a 300 MB limit, so if you were on another PC and wanted to add giant files to your Dropbox, you’d need to split them. This same technique also works for any file sharing service—even if you were sending files through email. There are two ways that you can get around the limits—first, by just compressing the files if you’re close to the limit; but the second and more interesting way is to split up the files into smaller chunks. Keep reading for how to do both.
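
    As a concrete illustration of the splitting approach, 7-Zip can cut a big file into fixed-size volumes from the command line (a sketch, assuming 7-Zip is installed; the file name is a placeholder, and 45 MB volumes stay under SkyDrive's 50 MB cap):

        7z a -v45m bigfile.7z bigfile.iso

    Upload the resulting bigfile.7z.001, .002, ... pieces, and run 7z x bigfile.7z.001 on the other end to reassemble the original file.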

    Read the article

  • Tools for modelling data and workflows using structured text files

    - by Alexey
    Consider a case when I want to try some idea for an application, but I want to avoid investing a lot of effort in coding UI/workflows/database schema etc. before I see that it's going to be useful to me (as an example of a potential user). My idea is to stay lightweight and put all the data in text files. So the components could be the following: domain objects are represented by text files or their fragments; domain objects are grouped by their type using directories; the files are structured using some format that is both human- and machine-friendly, e.g. YAML; a smart text editor (e.g. vim, emacs, RubyMine) is used to edit and navigate those files; color schemes and macros/custom commands of the text editor are used to manipulate those files effectively; and scripts (or a lightweight web framework like Sinatra) are used to try some business logic ideas on top of the data model. The question is: are there tools or toolkits that support this approach or can be adapted to it? Also, any ideas or links to articles/other knowledge sources are very welcome. And a more specific question: what is the simplest way to build and keep up to date an index of a set of YAML files?
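
    For the indexing question at the end, a very crude sketch in the same lightweight spirit (assuming one YAML document per file with a top-level name: field; the field name, paths, and search term are placeholders):

        # rebuild a line-oriented index of every top-level "name:" field
        grep -rn --include='*.yaml' '^name:' . > .yaml-index
        # look an object up by name
        grep 'Acme' .yaml-index

    Rerunning the first command whenever files change keeps the index up to date.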

    Read the article

  • Access Log Files

    - by Matt Watson
    Some of the simplest things in life make all the difference. For a software developer who is trying to solve an application problem, being able to access log files, the Windows event viewer, and other details is priceless. But ironically enough, most developers aren't even given access to them. Developers have to escalate the issue to their manager or a system admin to retrieve the needed information. Some companies create workarounds to solve the problem or use third party solutions.

    Home grown solutions to access log files: Some companies roll their own solution to try and solve the problem. These solutions can be great, but they are not always real time, and they don't account for the Windows event viewer, config files, server health, and other information that is needed to fix bugs. Typical approaches are VPN or FTP access to log file folders, programs that collect log files and move them to a centralized server, or modifying code to write log files to a centralized place.

    Expensive solutions to access log files: Some companies buy expensive solutions like Splunk or other log management tools. But in a lot of cases that is overkill when all the developers need is the ability to just look at log files, not do analytics on them.

    There has to be a better solution to access log files: Stackify recently came up with a perfect solution to the problem. Their software gives developers remote visibility to all the production servers without allowing them to remote desktop in to the machines. They can get real time access to log files, the Windows event viewer, config files, and other things that developers need. This allows the entire development team to be more involved in the process of solving application defects. Check out their product to learn more: http://www.Stackify.com

    Read the article

  • In-House Generated Certificates Supported for Signing E-Business Suite JAR Files

    - by Elke Phelps (Oracle Development)
    The E-Business Suite uses Java Archive (JAR) files to deliver certain types of E-Business Suite content to desktop clients. Previously we announced the support of securing JAR files with 3072-bit certificates signed by a third-party Certificate Authority (CA). We now support securing JAR files with in-house generated certificates. The new steps to use an in-house Certificate Authority for securing JAR files are provided in: Enhanced Signing of Oracle E-Business Suite JAR Files (Note 1207184.1). This enhancement is great news for those of you familiar with the warning that is triggered when using a self-signed certificate. As a result of supporting self-signed certificates, the following warning can be avoided: [screenshot of the Java security warning dialog]

    Oracle E-Business Suite Release 12 Certified Platforms: Linux x86 (Oracle Linux 4, 5); Linux x86 (RHEL 3, 4, 5); Linux x86 (SLES 9, 10); Linux x86-64 (Oracle Linux 4, 5); Linux x86-64 (RHEL 4, 5); Linux x86-64 (SLES 9, 10); Oracle Solaris on SPARC (64-bit) (8, 9, 10); IBM AIX on Power Systems (64-bit) (5.3, 6.1); IBM Linux on System z (RHEL 5, SLES 9, SLES 10); HP-UX Itanium (11.23, 11.31); HP-UX PA-RISC (64-bit) (11.11, 11.23, 11.31); Microsoft Windows Server (32-bit) (2003, 2008 for EBS 12.1 only)

    Oracle E-Business Suite Release 11i Certified Platforms: Linux x86 (Oracle Enterprise Linux 4, 5); Linux x86 (RHEL 3, 4, 5); Linux x86 (SLES 8, 9, 10); Linux x86 (Asianux 1.0); Oracle Solaris on SPARC (64-bit) (8, 9, 10); IBM AIX on Power Systems (64-bit) (5.3, 6.1); HP-UX PA-RISC (64-bit) (11.11, 11.23, 11.31); HP Tru64 (5.1b); Microsoft Windows Server (32-bit) (2000, 2003)

    References: Enhanced Signing of Oracle E-Business Suite JAR Files (Note 1207184.1). Related Articles: Two New Options for Signing E-Business Suite JAR Files Now Available; What Are the Minimum Desktop Requirements for EBS?; Internet Explorer 9 Certified with Oracle E-Business Suite
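
    For readers who have not worked with JAR signing before, the general shape of signing with in-house key material looks like this (a generic illustration only, not the EBS-specific procedure documented in Note 1207184.1; the alias, keystore, and file names are placeholders):

        keytool -genkeypair -alias jarsign -keyalg RSA -keysize 3072 \
                -keystore signing.jks -dname "CN=EBS JAR Signing, O=Example Corp"
        jarsigner -keystore signing.jks myapp.jar jarsign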

    Read the article

  • nautilus crash when merging/overwriting files

    - by sBlatt
    On my Ubuntu 10.10, whenever I want to copy some files/folders over some other files/folders, or when I try to empty the trash, nautilus crashes! Example: I have a folder with some files. Now I want to overwrite this folder with a folder with the same name, same files, but some additional files. The merge window comes up, I choose merge, and nautilus crashes (it does not respond; when I press the close button I can force close it). Sometimes it even does the copying/emptying (trash), but it always crashes! This happens when copying to the same partition/an NTFS partition/netshares, but not when I make a new folder and copy the files/folders into that (without overwriting anything). On a netshare, it's even possible to merge these files afterwards with another computer! dmesg/syslog/messages do not show any entries related to the problem. Does anyone have a solution for this very annoying problem? EDIT: dpkg -l nautilus* (see output in pastebin) EDIT2: I found out nautilus already crashes before clicking replace/merge (as soon as the question appears). In the video it's not entirely clear that I click the cross before the force-close dialog appears. Video of problem nautilus-debug-log.txt EDIT3: Filed bug report: https://bugs.launchpad.net/ubuntu/+source/nautilus/+bug/678233

    Read the article

  • Packing up files on my machine, sending it to a server, and unpacking it

    - by MxyL
    I am implementing a feature in my application that sends all files in a specified folder to a server. I have the basic FTP transaction set up using Apache Commons FTPClient: it sets up a connection and transfers a file from one place to another. So I can simply loop over the directory and use this connection to transfer all the files. However, this could be better. Rather than transferring each file one by one, it makes more sense to pack them up in a compressed archive and then send the whole thing at once. It saves time and bandwidth, since these are just text files and they compress nicely. So I would like to add automatic archive packing and unpacking. This is the workflow I have planned out, using zip compression: (1) zip all files in the folder, (2) send the file over, (3) unzip the files at the destination. Steps 1 and 2 are easy, since the files are on the local machine, but I'm not sure how to accomplish the last step, when the files are now on a remote server. What are my options? I have control over what I can put and run on the server. Perhaps it is not necessary to do the packing/unpacking myself?
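
    Since the poster has control over the server, one low-tech sketch of the whole workflow from a shell (hypothetical paths and host; assumes SSH access and an unzip binary on the server; with plain FTP, step 3 would instead need to be triggered by something running server-side):

        zip -r bundle.zip reports/
        scp bundle.zip user@server:/data/incoming/
        ssh user@server 'cd /data/incoming && unzip -o bundle.zip && rm bundle.zip'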

    Read the article

  • How to undelete files in TFS

    - by Tarun Arora
    Have you accidentally deleted files from TFS and are looking for a way to undelete them? You don't have to undo your previous check-in to get the files back; there is a simpler way.

    01 – View deleted items in Team Explorer: Have you been wondering how you can view deleted items in Team Explorer? Well, go to Tools, Options, Source Control. Under Visual Studio Team Foundation Server, check 'Show deleted items in the Source Control Explorer'.

    02 – Undelete files from TFS: Simply right click the deleted file or folder and from the context menu select 'Undelete'. This will roll back the files to the version before the delete operation was committed on them. The undeleted changes now show up as pending changes in your workspace. You need to right click the folder and select Check In Pending Changes from the context menu to restore the files. Add a comment and check in the files back to TFS to undelete them. Right click the folder and view history; you'll see both the check-in that deleted the file/folder and the check-in that restored it. So, that's how you can restore deleted files in TFS... Nice and simple... Right?
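
    The same restore can also be scripted from a Visual Studio command prompt with the tf client (a sketch; the server path and comment are placeholders):

        tf dir /deleted $/MyProject/src
        tf undelete /recursive $/MyProject/src/LostFolder
        tf checkin /comment:"Restore accidentally deleted folder"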

    Read the article

  • A question on Common Lisp

    - by kostas
    Hello people, I'm going crazy with a small problem here. I keep getting an error and I can't seem to figure out why. The code is supposed to change the range of a list, so if we give it a list with values (1 2 3 4) and we want to change the range to 11 through 14, the result would be (11 12 13 14). The problem is that the last function called, scale-list, will give back an error saying:

        Debugger entered--Lisp error: (wrong-type-argument number-or-marker-p nil)

    Anybody have a clue why? I use Aquamacs as an editor. Thanks in advance.

        ;; finds minimum in a list
        (defun minimum (list)
          (car (sort list #'<)))

        ;; finds maximum in a list
        (defun maximum (list)
          (car (sort list #'>)))

        ;; calculates the range of a list
        (defun range (list)
          (- (maximum list) (minimum list)))

        ;; this code scales a value from a list
        (defun scale-value (list low high n)
          (+ (/ (* (- (nth (- n 1) list) (minimum list))
                   (- high low))
                (range list))
             low))

        ;; and this code is supposed to scale the whole list
        (defun scale-list (list low high n)
          (unless (= n 0)
            (cons (scale-value list low high n)
                  (scale-list list low high (- n 1)))))

        (scale-list '(0.1 0.3 0.5 0.9) 20 30 4)

    Read the article

  • Problems with opening CHM Help files from Network or Internet

    - by Rick Strahl
    As a publisher of a Help Creation tool called Html Help Builder, I’ve seen a lot of problems with help files that won't properly display actual topic content and display an error message for topics instead. Here’s the scenario: You go ahead and happily build your fancy, schmanzy Help File for your application and deploy it to your customer. Or alternately you've created a help file and you let your customers download it off the Internet directly or in a zip file. The customer downloads the file, opens the zip file and copies the help file contained in the zip file to disk. She then opens the help file and finds the following unfortunate result:

    The help file comes up with all topics in the tree on the left, but a "Navigation to the WebPage was cancelled" or "Operation Aborted" error in the Help Viewer's content window whenever you try to open a topic. The CHM file obviously opened since the topic list is there, but the Help Viewer refuses to display the content. Looks like a broken help file, right? But it's not - it's merely a Windows security 'feature' that tries to be overly helpful in protecting you. The reason this happens is because files downloaded off the Internet - including ZIP files and CHM files contained in those zip files - are marked as coming from the Internet and so can potentially be malicious, so they do not get browsing rights on the local machine – they can’t access local Web content, which is exactly what help topics are. If you look at the URL of a help topic you see something like this:

        mk:@MSITStore:C:\wwapps\wwIPStuff\wwipstuff.chm::/indexpage.htm

    which points at a special Microsoft Url Moniker that in turn points at the CHM file and a relative path within that HTML help file. Try pasting a URL like this into Internet Explorer and you'll see the help topic pop up in your browser (along with a warning most likely). Although the URL looks weird, this still equates to a call to the local computer zone, the same as if you had navigated to a local file in IE, which by default is not allowed. Unfortunately, unlike Internet Explorer where you have the option of clicking a security toolbar, the CHM viewer simply refuses to load the page and you get an error page as shown above.

    How to Fix This - Unblock the Help File

    There's a workaround that lets you explicitly 'unblock' a CHM help file. To do this: open Windows Explorer, find your CHM file, right click and select Properties, then click the Unblock button on the General tab. Here's what the dialog looks like:

    Clicking the Unblock button basically tells Windows that you approve this Help File and allows topics to be viewed. Is this insecure? Not unless you're running a really old version of Windows (XP pre-SP1). In recent versions of Windows, Internet Explorer pops up various security dialogs or fires script errors when potentially malicious operations are accessed (like loading ActiveX controls), so it's relatively safe to run local content in the CHM viewer. Since most help files contain no script, or only pure JavaScript that doesn't access web resources, this works fine without issues.

    How to avoid this Problem

    As an application developer there's a simple solution around this problem: Always install your Help Files with an Installer. The above security warnings pop up because Windows can't validate the source of the CHM file. However, if the help file is installed as part of an installation, the installation and all files associated with that installation, including the help file, are trusted.
    A fully installed Help File of an application works just fine because it is trusted by Windows.

    Summary

    It's annoying as all hell that this sort of obtrusive marking is necessary, but it's admittedly a necessary evil because of Microsoft's use of the insecure Internet Explorer engine that drives the CHM Html Engine's topic viewer. Because help files are viewing local content and script is allowed to execute in CHM files there's potential for malicious code hiding in CHM files and the above precautions are supposed to avoid any issues. © Rick Strahl, West Wind Technologies, 2005-2012
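
    On machines with PowerShell 3.0 or later (newer than some of the systems this article discusses), the same unblock can be scripted instead of clicking through the Properties dialog (the file name is a placeholder):

        powershell -command "Unblock-File -Path .\help.chm"

    Under the hood this removes the Zone.Identifier alternate data stream that marks the file as downloaded; the Sysinternals streams.exe tool (streams -d help.chm) accomplishes the same thing.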

    Read the article

  • What's the standard algorithm for syncing two lists of objects?

    - by Oliver Giesen
    I'm pretty sure this must be in some kind of text book (or more likely in all of them) but I seem to be using the wrong keywords to search for it... :( A common task I'm facing while programming is that I am dealing with lists of objects from different sources which I need to keep in sync somehow. Typically there's some sort of "master list" e.g. returned by some external API, and then a list of objects I create myself, each of which corresponds to an object in the master list. Sometimes the nature of the external API will not allow me to do a live sync: For instance the external list might not implement notifications about items being added or removed, or it might notify me but not give me a reference to the actual item that was added or removed. Furthermore, refreshing the external list might return a completely new set of instances even though they still represent the same information, so simply storing references to the external objects might also not always be feasible. Another characteristic of the problem is that both lists cannot be sorted in any meaningful way. You should also assume that initializing new objects in the "slave list" is expensive, i.e. simply clearing and rebuilding it from scratch is not an option. So how would I typically tackle this? What's the name of the algorithm I should google for? In the past I have implemented this in various ways (see below for an example) but it always felt like there should be a cleaner and more efficient way. Here's an example approach:

    1. Iterate over the master list
    2. Look up each item in the "slave list"
    3. Add items that do not yet exist
    4. Somehow keep track of items that already exist in both lists (e.g. by tagging them or keeping yet another list)
    5. When done, iterate once more over the slave list
    6. Remove all objects that have not been tagged (see 4.)

    Update: Thanks for all your responses so far! I will need some time to look at the links. Maybe one more thing worthy of note: In many of the situations where I needed this, the implementation of the "master list" is completely hidden from me. In the most extreme cases the only access I might have to the master list might be a COM interface that exposes nothing but GetFirst-, GetNext-style methods. I'm mentioning this because of the suggestions to either sort the list or to subclass it, both of which are unfortunately not practical in these cases unless I copy the elements into a list of my own, and I don't think that would be very efficient. I also might not have made it clear enough that the elements in the two lists are of different types, i.e. not assignment-compatible: Especially, the elements in the master list might be available as interface references only.

    Read the article

  • Search and replace hundreds of strings in tens of thousands of files?

    - by C Johnson
    I am looking into changing the file names of hundreds of files in a (C/C++) project that I work on. The problem is our software has tens of thousands of files that include (i.e. #include) these hundreds of files that will get changed. This looks like a maintenance nightmare. If I do this I will be stuck in UltraEdit for weeks, rolling hundreds of regexes by hand like so:

        ^\#include.*["<\\/]stupid_name.*$

    with

        #include <dir/new_name.h>

    Such drudgery would be worse than peeling hundreds of potatoes in a sunken submarine in the antarctic with a spoon. I think it would be ideal to put the inputs and outputs into a table like so:

        stupid_name.h   <->   <dir/new_name.h>
        stupid_nameb.h  <->   <dir/new_nameb.h>
        stupid_namec.h  <->   <dir/new_namec.h>

    and feed this into a regular expression engine / tool / app / etc... My Ultimate Question: Is there a tool that will do that? Bonus Question: Is it multi-threaded? I looked at quite a few search and replace topics here on this website, and found lots of standard queries that asked a variant of the following question: standard question: Replace one term in N files, as opposed to my question: Replace N terms in N files. Thanks in advance for any replies.
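
    In the absence of a dedicated tool, a small shell loop over exactly such a two-column table gets the job done (a rough sketch, assuming GNU find, xargs, and sed; map.txt holds "oldname newname" pairs, all names here are placeholders, and note that unescaped dots in the old names are treated as regex wildcards). It is not multi-threaded per se, but xargs -P fans the sed calls out over several processes:

        while read old new; do
          find src -name '*.[ch]*' -print0 | xargs -0 -P4 sed -i "s|$old|$new|g"
        done < map.txt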

    Read the article

  • .Remove(object) on a List<T> returned from a LINQ to SQL compiled query won't delete the object, right?

    - by soldieraman
    I am returning two lists from the database using a LINQ to SQL compiled query. While looping over the first list I remove duplicates from the second list, as I don't want to process already existing objects again. E.g.:

        // oldCustomers is a List returned by my compiled LINQ to SQL statement that I have added a .ToList() at the end to.
        // The same goes for newCustomers.
        foreach (Customer oC in oldCustomers)
        {
            // Do some processing
            newCustomers.Remove(newCustomers.Find(nC => nC.CustomerID == oC.CustomerID));
        }
        foreach (Customer nC in newCustomers)
        {
            // Do some processing
        }
        DataContext.SubmitChanges();

    I expect this to only save the changes that have been made to the customers in my processing, and not Remove or Delete any of my customers from the database. Correct? I have tried it and it works fine - but I am trying to know if there is any rare case where they might actually get removed.

    Read the article

  • Using R to Analyze G1GC Log Files

    - by user12620111
    Using R to Analyze G1GC Log Files

    Introduction

    Working in Oracle Platform Integration gives an engineer opportunities to work on a wide array of technologies. My team’s goal is to make Oracle applications run best on the Solaris/SPARC platform. When looking for bottlenecks in a modern application, one needs to be aware of not only how the CPUs and operating system are executing, but also network, storage, and in some cases, the Java Virtual Machine. I was recently presented with about 1.5 GB of Java Garbage First Garbage Collector log file data. If you’re not familiar with the subject, you might want to review Garbage First Garbage Collector Tuning by Monica Beckwith. The customer had been running Java HotSpot 1.6.0_31 to host a web application server. I was told that the Solaris/SPARC server was running a Java process launched using a command line that included the following flags: -d64 -Xms9g -Xmx9g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:InitiatingHeapOccupancyPercent=80 -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+PrintGCDateStamps -XX:+PrintFlagsFinal -XX:+DisableExplicitGC -XX:+UnlockExperimentalVMOptions -XX:ParallelGCThreads=8

    Several sources on the internet indicate that if I were to print out the 1.5 GB of log files, it would require enough paper to fill the bed of a pickup truck. Of course, it would be fruitless to try to scan the log files by hand. Tools will be required to summarize the contents of the log files. Others have encountered large Java garbage collection log files. There are existing tools to analyze the log files: IBM’s GC toolkit, the chewiebug GCViewer, gchisto, HPjmeter. Instead of using one of the other tools listed, I decided to parse the log files with standard Unix tools, and analyze the data with R.

    Data Cleansing

    The log files arrived in two different formats. I guess that the difference is that one set of log files was generated using a more verbose option, maybe -XX:+PrintHeapAtGC, and the other set of log files was generated without that option.

    Format 1

    In some of the log files, the log files with the less verbose format, a single trace, i.e. the report of a single garbage collection event, looks like this:

        {Heap before GC invocations=12280 (full 61):
         garbage-first heap  total 9437184K, used 7499918K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000)
          region size 4096K, 1 young (4096K), 0 survivors (0K)
         compacting perm gen  total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000)
           the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000)
        No shared spaces configured.
        2014-05-14T07:24:00.988-0700: 60586.353: [GC pause (young) 7324M->7320M(9216M), 0.1567265 secs]
        Heap after GC invocations=12281 (full 61):
         garbage-first heap  total 9437184K, used 7496533K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000)
          region size 4096K, 0 young (0K), 0 survivors (0K)
         compacting perm gen  total 262144K, used 144077K [0xffffffff40000000, 0xffffffff50000000, 0xffffffff50000000)
           the space 262144K, 54% used [0xffffffff40000000, 0xffffffff48cb3758, 0xffffffff48cb3800, 0xffffffff50000000)
        No shared spaces configured.
        }

    A simple grep can be used to extract a summary:

        $ grep "\[GC pause (young" g1gc.log
        2014-05-13T13:24:35.091-0700: 3.109: [GC pause (young) 20M->5029K(9216M), 0.0146328 secs]
        2014-05-13T13:24:35.440-0700: 3.459: [GC pause (young) 9125K->6077K(9216M), 0.0086723 secs]
        2014-05-13T13:24:37.581-0700: 5.599: [GC pause (young) 25M->8470K(9216M), 0.0203820 secs]
        2014-05-13T13:24:42.686-0700: 10.704: [GC pause (young) 44M->15M(9216M), 0.0288848 secs]
        2014-05-13T13:24:48.941-0700: 16.958: [GC pause (young) 51M->20M(9216M), 0.0491244 secs]
        2014-05-13T13:24:56.049-0700: 24.066: [GC pause (young) 92M->26M(9216M), 0.0525368 secs]
        2014-05-13T13:25:34.368-0700: 62.383: [GC pause (young) 602M->68M(9216M), 0.1721173 secs]

    But that format wasn't easily read into R, so I needed to be a bit more tricky. I used the following Unix command to create a summary file that was easy for R to read.

        $ echo "SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime"
        $ grep "\[GC pause (young" g1gc.log | grep -v mark | sed -e 's/[A-SU-z\(\),]/ /g' -e 's/->/ /' -e 's/: / /g' | more
        SecondsSinceLaunch BeforeSize AfterSize TotalSize RealTime
        2014-05-13T13:24:35.091-0700 3.109 20 5029 9216 0.0146328
        2014-05-13T13:24:35.440-0700 3.459 9125 6077 9216 0.0086723
        2014-05-13T13:24:37.581-0700 5.599 25 8470 9216 0.0203820
        2014-05-13T13:24:42.686-0700 10.704 44 15 9216 0.0288848
        2014-05-13T13:24:48.941-0700 16.958 51 20 9216 0.0491244
        2014-05-13T13:24:56.049-0700 24.066 92 26 9216 0.0525368
        2014-05-13T13:25:34.368-0700 62.383 602 68 9216 0.1721173

    Format 2

    In some of the log files, the log files with the more verbose format, a single trace, i.e. the report of a single garbage collection event, was more complicated than Format 1. Here is a text file with an example of a single G1GC trace in the second format. As you can see, it is quite complicated. It is nice that there is so much information available, but the level of detail can be overwhelming. I wrote this awk script (download) to summarize each trace on a single line.

        #!/usr/bin/env awk -f
        BEGIN {
            printf("SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize\n")
        }
        ######################
        # Save count data from lines that are at the start of each G1GC trace.
        # Each trace starts out like this:
        #   {Heap before GC invocations=14 (full 0):
        #    garbage-first heap   total 9437184K, used 325496K [0xfffffffd00000000, 0xffffffff40000000, 0xffffffff40000000)
        ######################
        /{Heap.*full/{
            gsub ( "\\)" , "" );
            nf=split($0,a,"=");
            split(a[2],b," ");
            getline;
            if ( match($0, "first") ) {
                G1GC=1;
                IncrementalCount=b[1];
                FullCount=substr( b[3], 1, length(b[3])-1 );
            } else {
                G1GC=0;
            }
        }
        ######################
        # Pull out time stamps that are in lines with this format:
        #   2014-05-12T14:02:06.025-0700: 94.312: [GC pause (young), 0.08870154 secs]
        ######################
        /GC pause/ {
            DateTime=$1;
            SecondsSinceLaunch=substr($2, 1, length($2)-1);
        }
        ######################
        # Heap sizes are in lines that look like this:
        #   [ 4842M->4838M(9216M)]
        ######################
        /\[ .*]$/ {
            gsub ( "\\[" , "" );
            gsub ( "\\]" , "" );
            gsub ( "->" , " " );
            gsub ( "\\( " , " " );
            gsub ( "\\)" , " " );
            split($0,a," ");
            if ( split(a[1],b,"M") > 1 ) {BeforeSize=b[1]*1024;}
            if ( split(a[1],b,"K") > 1 ) {BeforeSize=b[1];}
            if ( split(a[2],b,"M") > 1 ) {AfterSize=b[1]*1024;}
            if ( split(a[2],b,"K") > 1 ) {AfterSize=b[1];}
            if ( split(a[3],b,"M") > 1 ) {TotalSize=b[1]*1024;}
            if ( split(a[3],b,"K") > 1 ) {TotalSize=b[1];}
        }
        ######################
        # Emit an output line when you find input that looks like this:
        #   [Times: user=1.41 sys=0.08, real=0.24 secs]
        ######################
        /\[Times/ {
            if (G1GC==1) {
                gsub ( "," , "" );
                split($2,a,"="); UserTime=a[2];
                split($3,a,"="); SysTime=a[2];
                split($4,a,"="); RealTime=a[2];
                print DateTime,SecondsSinceLaunch,IncrementalCount,FullCount,UserTime,SysTime,RealTime,BeforeSize,AfterSize,TotalSize;
                G1GC=0;
            }
        }

    The resulting summary is about 25X smaller than the original file, but still difficult for a human to digest.

        SecondsSinceLaunch IncrementalCount FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize
        ...
        2014-05-12T18:36:34.669-0700: 3985.744 561 0 0.57 0.06 0.16 1724416 1720320 9437184
        2014-05-12T18:36:34.839-0700: 3985.914 562 0 0.51 0.06 0.19 1724416 1720320 9437184
        2014-05-12T18:36:35.069-0700: 3986.144 563 0 0.60 0.04 0.27 1724416 1721344 9437184
        2014-05-12T18:36:35.354-0700: 3986.429 564 0 0.33 0.04 0.09 1725440 1722368 9437184
        2014-05-12T18:36:35.545-0700: 3986.620 565 0 0.58 0.04 0.17 1726464 1722368 9437184
        2014-05-12T18:36:35.726-0700: 3986.801 566 0 0.43 0.05 0.12 1726464 1722368 9437184
        2014-05-12T18:36:35.856-0700: 3986.930 567 0 0.30 0.04 0.07 1726464 1723392 9437184
        2014-05-12T18:36:35.947-0700: 3987.023 568 0 0.61 0.04 0.26 1727488 1723392 9437184
        2014-05-12T18:36:36.228-0700: 3987.302 569 0 0.46 0.04 0.16 1731584 1724416 9437184

    Reading the Data into R

    Once the GC log data had been cleansed, either by processing the first format with the shell script, or by processing the second format with the awk script, it was easy to read the data into R.

        g1gc.df = read.csv("summary.txt", row.names = NULL, stringsAsFactors=FALSE,sep="")
        str(g1gc.df)
        ## 'data.frame': 8307 obs. of 10 variables:
        ## $ row.names : chr "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:" "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:" ...
        ## $ SecondsSinceLaunch: num 1.16 1.47 1.97 3.83 6.1 ...
        ## $ IncrementalCount : int 0 1 2 3 4 5 6 7 8 9 ...
        ## $ FullCount : int 0 0 0 0 0 0 0 0 0 0 ...
        ## $ UserTime : num 0.11 0.05 0.04 0.21 0.08 0.26 0.31 0.33 0.34 0.56 ...
        ## $ SysTime : num 0.04 0.01 0.01 0.05 0.01 0.06 0.07 0.06 0.07 0.09 ...
        ## $ RealTime : num 0.02 0.02 0.01 0.04 0.02 0.04 0.05 0.04 0.04 0.06 ...
        ## $ BeforeSize : int 8192 5496 5768 22528 24576 43008 34816 53248 55296 93184 ...
        ## $ AfterSize : int 1400 1672 2557 4907 7072 14336 16384 18432 19456 21504 ...
        ## $ TotalSize : int 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 9437184 ...
        head(g1gc.df)
        ## row.names SecondsSinceLaunch IncrementalCount
        ## 1 2014-05-12T14:00:32.868-0700: 1.161 0
        ## 2 2014-05-12T14:00:33.179-0700: 1.472 1
        ## 3 2014-05-12T14:00:33.677-0700: 1.969 2
        ## 4 2014-05-12T14:00:35.538-0700: 3.830 3
        ## 5 2014-05-12T14:00:37.811-0700: 6.103 4
        ## 6 2014-05-12T14:00:41.428-0700: 9.720 5
        ## FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize
        ## 1 0 0.11 0.04 0.02 8192 1400 9437184
        ## 2 0 0.05 0.01 0.02 5496 1672 9437184
        ## 3 0 0.04 0.01 0.01 5768 2557 9437184
        ## 4 0 0.21 0.05 0.04 22528 4907 9437184
        ## 5 0 0.08 0.01 0.02 24576 7072 9437184
        ## 6 0 0.26 0.06 0.04 43008 14336 9437184

    Basic Statistics

    Once the data has been read into R, simple statistics are very easy to generate. All of the numbers from high school statistics are available via simple commands. For example, generate a summary of every column:

        summary(g1gc.df)
        ## row.names SecondsSinceLaunch IncrementalCount FullCount
        ## Length:8307 Min. : 1 Min. : 0 Min. : 0.0
        ## Class :character 1st Qu.: 9977 1st Qu.:2048 1st Qu.: 0.0
        ## Mode :character Median :12855 Median :4136 Median : 12.0
        ## Mean :12527 Mean :4156 Mean : 31.6
        ## 3rd Qu.:15758 3rd Qu.:6262 3rd Qu.: 61.0
        ## Max. :55484 Max. :8391 Max. :113.0
        ## UserTime SysTime RealTime BeforeSize
        ## Min. :0.040 Min. :0.0000 Min. : 0.0 Min. : 5476
        ## 1st Qu.:0.470 1st Qu.:0.0300 1st Qu.: 0.1 1st Qu.:5137920
        ## Median :0.620 Median :0.0300 Median : 0.1 Median :6574080
        ## Mean :0.751 Mean :0.0355 Mean : 0.3 Mean :5841855
        ## 3rd Qu.:0.920 3rd Qu.:0.0400 3rd Qu.: 0.2 3rd Qu.:7084032
        ## Max. :3.370 Max. :1.5600 Max. :488.1 Max. :8696832
        ## AfterSize TotalSize
        ## Min. : 1380 Min. :9437184
        ## 1st Qu.:5002752 1st Qu.:9437184
        ## Median :6559744 Median :9437184
        ## Mean :5785454 Mean :9437184
        ## 3rd Qu.:7054336 3rd Qu.:9437184
        ## Max. :8482816 Max. :9437184

    Q: What is the total amount of User CPU time spent in garbage collection?

        sum(g1gc.df$UserTime)
        ## [1] 6236

    As you can see, less than two hours of CPU time was spent in garbage collection. Is that too much? To find the percentage of time spent in garbage collection, divide the number above by total_elapsed_time*CPU_count. In this case, there are a lot of CPUs and it turns out that the overall amount of CPU time spent in garbage collection isn’t a problem when viewed in isolation. When calculating rates, i.e. events per unit time, you need to ask yourself if the rate is homogeneous across the time period in the log file. Does the log file include spikes of high activity that should be separately analyzed? Averaging in data from nights and weekends with data from business hours may alias problems. If you have a reason to suspect that the garbage collection rates include peaks and valleys that need independent analysis, see the “Time Series” section, below.

    Q: How much garbage is collected on each pass?

    The amount of heap space that is recovered per GC pass is surprisingly low:

    - At least one collection didn’t recover any data (“Min.=0”).
    - 25% of the passes recovered 3MB or less (“1st Qu.=3072”).
    - Half of the GC passes recovered 4MB or less (“Median=4096”).
    - The average amount recovered was 56MB (“Mean=56390”).
    - 75% of the passes recovered 36MB or less (“3rd Qu.=36860”).
    - At least one pass recovered 2GB (“Max.=2121000”).
        g1gc.df$Delta = g1gc.df$BeforeSize - g1gc.df$AfterSize
        summary(g1gc.df$Delta)
        ## Min. 1st Qu. Median Mean 3rd Qu. Max.
        ## 0 3070 4100 56400 36900 2120000

    Q: What is the maximum User CPU time for a single collection?

    The worst garbage collection (“Max.”) is many standard deviations away from the mean. The data appears to be right skewed.

        summary(g1gc.df$UserTime)
        ## Min. 1st Qu. Median Mean 3rd Qu. Max.
        ## 0.040 0.470 0.620 0.751 0.920 3.370
        sd(g1gc.df$UserTime)
        ## [1] 0.3966

    Basic Graphics

    Once the data is in R, it is trivial to plot the data with formats including dot plots, line charts, bar charts (simple, stacked, grouped), pie charts, boxplots, scatter plots, histograms, and kernel density plots.

    Histogram of User CPU Time per Collection

    I don't think that this graph requires any explanation.

        hist(g1gc.df$UserTime, main="User CPU Time per Collection", xlab="Seconds", ylab="Frequency")

    Box plot to identify outliers

    When the initial data is viewed with a box plot, you can see the one crazy outlier in the real time per GC. Save this data point for future analysis and drop the outlier so that it’s not throwing off our statistics. Now the box plot shows many outliers, which will be examined later, using time series analysis. Notice that the scale of the x-axis changes drastically once the crazy outlier is removed.

        par(mfrow=c(2,1))
        boxplot(g1gc.df$UserTime,g1gc.df$SysTime,g1gc.df$RealTime, main="Box Plot of Time per GC\n(dominated by a crazy outlier)", names=c("usr","sys","elapsed"), xlab="Seconds per GC", ylab="Time (Seconds)", horizontal = TRUE, outcol="red")
        crazy.outlier.df=g1gc.df[g1gc.df$RealTime > 400,]
        g1gc.df=g1gc.df[g1gc.df$RealTime < 400,]
        boxplot(g1gc.df$UserTime,g1gc.df$SysTime,g1gc.df$RealTime, main="Box Plot of Time per GC\n(crazy outlier excluded)", names=c("usr","sys","elapsed"), xlab="Seconds per GC", ylab="Time (Seconds)", horizontal = TRUE, outcol="red")
        box(which = "outer", lty = "solid")

    Here is the crazy outlier for future analysis:

        crazy.outlier.df
        ## row.names SecondsSinceLaunch IncrementalCount
        ## 8233 2014-05-12T23:15:43.903-0700: 20741 8316
        ## FullCount UserTime SysTime RealTime BeforeSize AfterSize TotalSize
        ## 8233 112 0.55 0.42 488.1 8381440 8235008 9437184
        ## Delta
        ## 8233 146432

    R Time Series Data

    To analyze the garbage collection as a time series, I’ll use Z’s Ordered Observations (zoo).
    “zoo is the creator for an S3 class of indexed totally ordered observations which includes irregular time series.”

        require(zoo)
        ## Loading required package: zoo
        ##
        ## Attaching package: 'zoo'
        ##
        ## The following objects are masked from 'package:base':
        ##
        ## as.Date, as.Date.numeric
        head(g1gc.df[,1])
        ## [1] "2014-05-12T14:00:32.868-0700:" "2014-05-12T14:00:33.179-0700:"
        ## [3] "2014-05-12T14:00:33.677-0700:" "2014-05-12T14:00:35.538-0700:"
        ## [5] "2014-05-12T14:00:37.811-0700:" "2014-05-12T14:00:41.428-0700:"
        options("digits.secs"=3)
        times=as.POSIXct( g1gc.df[,1], format="%Y-%m-%dT%H:%M:%OS%z:")
        g1gc.z = zoo(g1gc.df[,-c(1)], order.by=times)
        head(g1gc.z)
        ## SecondsSinceLaunch IncrementalCount FullCount
        ## 2014-05-12 17:00:32.868 1.161 0 0
        ## 2014-05-12 17:00:33.178 1.472 1 0
        ## 2014-05-12 17:00:33.677 1.969 2 0
        ## 2014-05-12 17:00:35.538 3.830 3 0
        ## 2014-05-12 17:00:37.811 6.103 4 0
        ## 2014-05-12 17:00:41.427 9.720 5 0
        ## UserTime SysTime RealTime BeforeSize AfterSize
        ## 2014-05-12 17:00:32.868 0.11 0.04 0.02 8192 1400
        ## 2014-05-12 17:00:33.178 0.05 0.01 0.02 5496 1672
        ## 2014-05-12 17:00:33.677 0.04 0.01 0.01 5768 2557
        ## 2014-05-12 17:00:35.538 0.21 0.05 0.04 22528 4907
        ## 2014-05-12 17:00:37.811 0.08 0.01 0.02 24576 7072
        ## 2014-05-12 17:00:41.427 0.26 0.06 0.04 43008 14336
        ## TotalSize Delta
        ## 2014-05-12 17:00:32.868 9437184 6792
        ## 2014-05-12 17:00:33.178 9437184 3824
        ## 2014-05-12 17:00:33.677 9437184 3211
        ## 2014-05-12 17:00:35.538 9437184 17621
        ## 2014-05-12 17:00:37.811 9437184 17504
        ## 2014-05-12 17:00:41.427 9437184 28672

    Example of Two Benchmark Runs in One Log File

    The data in the following graph is from a different log file, not the one of primary interest to this article. I’m including this image because it is an example of idle periods followed by busy periods. It would be uninteresting to average the rate of garbage collection over the entire log file period. More interesting would be the rate of garbage collection in the two busy periods. Are they the same or different? Your production data may be similar, for example, bursts when employees return from lunch and idle times on weekend evenings, etc. Once the data is in an R Time Series, you can analyze isolated time windows.

    Clipping the Time Series data

    Flashing back to our test case… Viewing the data as a time series is interesting. You can see that the work intensive time period is between 9:00 PM and 3:00 AM. Let's clip the data to the interesting period:

        par(mfrow=c(2,1))
        plot(g1gc.z$UserTime, type="h", main="User Time per GC\nTime: Complete Log File", xlab="Time of Day", ylab="CPU Seconds per GC", col="#1b9e77")
        clipped.g1gc.z=window(g1gc.z, start=as.POSIXct("2014-05-12 21:00:00"), end=as.POSIXct("2014-05-13 03:00:00"))
        plot(clipped.g1gc.z$UserTime, type="h", main="User Time per GC\nTime: Limited to Benchmark Execution", xlab="Time of Day", ylab="CPU Seconds per GC", col="#1b9e77")
        box(which = "outer", lty = "solid")

    Cumulative Incremental and Full GC count

    Here is the cumulative incremental and full GC count. When the line is very steep, it indicates that the GCs are repeating very quickly. Notice that the scale on the Y axis is different for full vs. incremental.

        plot(clipped.g1gc.z[,c(2:3)], main="Cumulative Incremental and Full GC count", xlab="Time of Day", col="#1b9e77")

    GC Analysis of Benchmark Execution using Time Series data

    In the following series of 3 graphs: The “After Size” shows the amount of heap space in use after each garbage collection. Many Java objects are still referenced, i.e.
    alive, during each garbage collection. This may indicate that the application has a memory leak, or may indicate that the application has a very large memory footprint. Typically, an application's memory footprint plateaus in the early stage of execution. One would expect this graph to have a flat top. The steep decline in the heap space may indicate that the application crashed after 2:00. The second graph shows that the outliers in real execution time, discussed above, occur near 2:00, when the Java heap seems to be quite full. The third graph shows that Full GCs are infrequent during the first few hours of execution. The rate of Full GCs (the slope of the cumulative Full GC line) changes near midnight.

        plot(clipped.g1gc.z[,c("AfterSize","RealTime","FullCount")], xlab="Time of Day", col=c("#1b9e77","red","#1b9e77"))

    GC Analysis of heap recovered

    Each GC trace includes the amount of heap space in use before and after the individual GC event. During garbage collection, unreferenced objects are identified, the space holding the unreferenced objects is freed, and thus, the difference in before and after usage indicates how much space has been freed. The following box plot and bar chart both demonstrate the same point - the amount of heap space freed per garbage collection is surprisingly low.

        par(mfrow=c(2,1))
        boxplot(as.vector(clipped.g1gc.z$Delta), main="Amount of Heap Recovered per GC Pass", xlab="Size in KB", horizontal = TRUE, col="red")
        hist(as.vector(clipped.g1gc.z$Delta), main="Amount of Heap Recovered per GC Pass", xlab="Size in KB", breaks=100, col="red")
        box(which = "outer", lty = "solid")

    This graph is the most interesting. The dark blue area shows how much heap is occupied by referenced Java objects. This represents memory that holds live data. The red fringe at the top shows how much data was recovered after each garbage collection.

        barplot(clipped.g1gc.z[,c("AfterSize","Delta")], col=c("#7570b3","#e7298a"), xlab="Time of Day", border=NA)
        legend("topleft", c("Live Objects","Heap Recovered on GC"), fill=c("#7570b3","#e7298a"))
        box(which = "outer", lty = "solid")

    When I discuss the data in the log files with the customer, I will ask for an explanation for the large amount of referenced data resident in the Java heap. There are two possibilities: There is a memory leak, and the amount of space required to hold referenced objects will continue to grow, limited only by the maximum heap size. After the maximum heap size is reached, the JVM will throw an “Out of Memory” exception every time the application tries to allocate a new object. If this is the case, the application needs to be debugged to identify why old objects are referenced when they are no longer needed. Or, the application has a legitimate requirement to keep a large amount of data in memory. The customer may want to further increase the maximum heap size. Another possible solution would be to partition the application across multiple cluster nodes, where each node has responsibility for managing a unique subset of the data.

    Conclusion

    In conclusion, R is a very powerful tool for the analysis of Java garbage collection log files. The primary difficulty is data cleansing so that information can be read into an R data frame. Once the data has been read into R, a rich set of tools may be used for thorough evaluation.

    Read the article

  • Optimal way to generate list of PHP object properties with delimiter character, implode()?

    - by Kris
    I am trying to find out if there is a more optimal way of creating a list of an object's sub-objects' properties. (Apologies for the crude wording, I am not really much of an OO expert.) I have an object "event" that has a collection of "artists", each artist having an "artist_name". On my HTML output, I want a plain list of artist names delimited by a comma. PHP's implode() seems to be the best way to create a comma delimited list of values. I am currently iterating through the object and pushing values into a temporary array "artistlist" so I can use implode(). That is the shortest I could come up with. Is there a way to do this more elegantly?

        $artistlist = array();
        foreach ($event->artists as $artist) {
            $artistlist[] = $artist->artist_name;
        }
        echo implode(', ', $artistlist);

    Read the article

  • How to get a List of results from a list of ID values with LINQ to SQL?

    - by DaveDev
    I have a list of ID values:

        List<int> MyIDs { get; set; }

    I'd like to pass this list to an interface to my repository and have it return a list of the objects that match the ID values I pass in.

        List<MyType> myTypes = new List<MyType>();
        IMyRepository myRepos = new SqlMyRepository();
        myTypes = myRepos.GetMyTypes(this.MyIDs);

    Currently, GetMyTypes() behaves similarly to this:

        public MyType GetMyTypes(int id)
        {
            return (from myType in db.MyTypes
                    where myType.Id == id
                    select new MyType
                    {
                        MyValue = myType.MyValue
                    }).FirstOrDefault();
        }

    where I iterate through MyIDs, pass each id in, and add each result to a list. How do I need to change the LINQ so that I can pass in the full list of MyIDs and get a list of MyTypes out? GetMyTypes() would then have a signature similar to:

        public List<MyType> GetMyTypes(List<int> myIds)

    Read the article

  • Algorithm to generate all possible permutations of a list?

    - by LLer
    Say I have a list of n elements; I know there are n! possible ways to order these elements. What is an algorithm to generate all possible orderings of this list? Example: I have the list [a, b, c]. The algorithm would return [[a, b, c], [a, c, b], [b, a, c], [b, c, a], [c, a, b], [c, b, a]]. I'm reading this here http://en.wikipedia.org/wiki/Permutation#Algorithms_to_generate_permutations But Wikipedia has never been good at explaining. I don't understand much of it.

    Read the article

  • Possible to capture the returned value from a Python list comprehension for use as a condition?

    - by Joe
    I want to construct a value in a list comprehension, but also filter on that value. For example:

        [expensive_function(x) for x in generator if expensive_function(x) < 5]

    I want to avoid calling expensive_function twice per iteration. The generator may return an infinite series, and list comprehensions aren't lazily evaluated. So this wouldn't work:

        [y for y in [expensive_function(x) for x in generator] if y < 5]

    I could write this another way, but it feels right for a list comprehension, and I'm sure this is a common usage pattern (possible or not!).

    Read the article

  • What are these stray zero-byte files extracted from tarball? (OSX)

    - by Scott M
    I'm extracting a folder from a tarball, and I see these zero-byte files showing up in the result (where they are not in the source). Setup (all on OS X): On machine one, I have a directory /My/Stuff/Goes/Here/ containing several hundred files. I build the tarball like this:

        tar -cZf mystuff.tgz /My/Stuff/Goes/Here/

    On machine two, I scp the tgz file to my local directory, then unpack it:

        tar -xZf mystuff.tgz

    It creates ~scott/My/Stuff/Goes/, but then under Goes, I see two files: Here/ - a directory, and Here.bGd - a zero-byte file. The "Here.bGd" zero-byte file has a random 3-character suffix, mixed upper and lower-case characters. It has the same name as the lowest-level directory mentioned in the tar-creation command, and it only appears at the lowest-level directory named. Anybody know where these come from, and how I can adjust my tar creation to get rid of them? Update: I checked the table of contents on the files using tar tZvf: the TOC does not list the zero-byte files, so I'm leaning toward the suggestion that the uncompressing machine is at fault. OS X is version 10.5.5 on the unzip machine (not sure how to check the filesystem type). Tar is GNU tar 1.15.1, and it came with the machine.
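
    Two asides that may help narrow this down (suggestions, not a confirmed fix): the -Z flag selects the old compress(1) format rather than gzip despite the .tgz name, so -z would be more conventional; and on OS X 10.5, Apple's tar builds honor an environment variable that suppresses AppleDouble metadata entries, which are a common source of stray extra files when archives move between machines:

        COPYFILE_DISABLE=1 tar -czf mystuff.tgz /My/Stuff/Goes/Here/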

    Read the article

  • jquery: addClass 1,2,3 etc. auto to a list

    - by Svensson
    Hello, is it possible to add auto-numeric classes to a list by using jQuery? HTML:

        <ul id="list">
          <li>Element 1</li>
          <li>Element 2</li>
          <li>Element 3</li>
          <li>Element 4</li>
          <li>Element 5</li>
        </ul>

    I want to get something like this:

        <ul id="list">
          <li class="1">Element 1</li>
          <li class="2">Element 2</li>
          <li class="3">Element 3</li>
          <li class="4">Element 4</li>
          <li class="5">Element 5</li>
        </ul>

    Hope there is a solution available :-)

    Read the article
