Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 60/457 | < Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >

  • null reference exception in the code

    - by LifeH2O
    I am getting NullReferenceException error on "_attr.Append(xmlNode.Attributes["name"]);". namespace SMAS { class Profiles { private XmlTextReader _profReader; private XmlDocument _profDoc; private const string Url = "http://localhost/teamprofiles.xml"; private const string XPath = "/teams/team-profile"; public XmlNodeList Teams{ get; private set; } private XmlAttributeCollection _attr; public ArrayList Team { get; private set; } public void GetTeams() { _profReader = new XmlTextReader(Url); _profDoc = new XmlDocument(); _profDoc.Load(_profReader); Teams = _profDoc.SelectNodes(XPath); foreach (XmlNode xmlNode in Teams) { _attr.Append(xmlNode.Attributes["name"]); } } } } the teamprofiles.xml file looks like <teams> <team-profile name="Australia"> <stats type="Test"> <span>1877-2010</span> <matches>721</matches> <won>339</won> <lost>186</lost> <tied>2</tied> <draw>194</draw> <percentage>47.01</percentage> </stats> <stats type="Twenty20"> <span>2005-2010</span> <matches>32</matches> <won>18</won> <lost>12</lost> <tied>1</tied> <draw>1</draw> <percentage>59.67</percentage> </stats> </team-profile> <team-profile name="Bangladesh"> <stats type="Test"> <span>2000-2010</span> <matches>66</matches> <won>3</won> <lost>57</lost> <tied>0</tied> <draw>6</draw> <percentage>4.54</percentage> </stats> </team-profile> </teams> I am trying to extract names of all teams in an ArrayList. Then i'll extract all stats of all teams and display them in my application. Can you please help me about that null reference exception?
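
    A likely cause is visible in the code above: _attr is declared as an XmlAttributeCollection but never assigned before _attr.Append(...) is called, so the first iteration dereferences null. A minimal sketch (not taken from the article) of collecting the team names without that collection, reusing the question's Url and XPath constants:

    ```csharp
    using System.Collections;
    using System.Xml;

    class Profiles
    {
        private const string Url = "http://localhost/teamprofiles.xml";
        private const string XPath = "/teams/team-profile";

        public XmlNodeList Teams { get; private set; }
        public ArrayList Team { get; private set; }

        public void GetTeams()
        {
            var doc = new XmlDocument();
            doc.Load(Url);                        // XmlDocument.Load accepts a URL or file path
            Teams = doc.SelectNodes(XPath);

            Team = new ArrayList();               // initialize before adding to it
            foreach (XmlNode node in Teams)
            {
                XmlAttribute name = node.Attributes["name"];
                if (name != null)                 // guard against a missing attribute
                    Team.Add(name.Value);         // "Australia", "Bangladesh", ...
            }
        }
    }
    ```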

    Read the article

  • Problem arranging the contents of a class in Java

    - by LuckySlevin
    Hi, I have some classes and I'm trying to fill the objects of this class. Here is what i've tried. (Question is at the below) public class Team { private String clubName; private String preName; private ArrayList<String> branches; public Team(String clubName, String preName) { this.clubName = clubName; this.preName = preName; branches = new ArrayList<String>(); } public Team() { // TODO Auto-generated constructor stub } public String getClubName() { return clubName; } public String getPreName() { return preName; } public ArrayList<String> getBranches() { return branches; } public void setClubName(String clubName) { this.clubName = clubName; } public void setPreName(String preName) { this.preName = preName; } public void setBranches(ArrayList<String> branches) { this.branches = branches; } } public class Branch { private ArrayList<Player> players = new ArrayList<Player>(); String brName; public Branch() {} public void setBr(String brName){this.brName = brName;} public String getBr(){return brName;} public ArrayList<Player> getPlayers() { return players; } public void setPlayers(ArrayList<Player> players) { this.players = players; } } //TEST CLASS public class test { /** * @param args * @throws IOException */ public static void main(String[] args) throws IOException { String a,b,c; String q = "q"; int brCount = 0, tCount = 0; BufferedReader input = new BufferedReader(new InputStreamReader(System.in)); Team[] teams = new Team[30]; Branch[] myBranch = new Branch[30]; for(int z = 0 ; z <30 ;z++) { teams[z] = new Team(); myBranch[z] = new Branch(); } ArrayList<String> tmp = new ArrayList<String>(); int k = 0; int secim = Integer.parseInt(input.readLine()); while(secim != 0) { if(k!=0) secim = Integer.parseInt(input.readLine()); k++; switch(secim) { case 1 : brCount = 0; a = input.readLine(); teams[tCount].setClubName(a); b= input.readLine(); teams[tCount].setPreName(b); c = input.readLine(); while(c.equals(q) == false) { if(brCount != 0) {c = input.readLine();} if(c.equals(q)== false){ myBranch[brCount].brName = c; tmp.add(myBranch[brCount].brName); brCount++; } System.out.println(brCount); } teams[tCount].setBranches(tmp); for(int i=0;i<=tCount;i++ ){ System.out.print("a :" + teams[i].getClubName()+ " " + teams[i].getPreName()+ " "); System.out.println(teams[i].getBranches());} tCount++; break; case 2: String src = input.readLine();//LATERRRRRRRr } } } } The problem is one of my class elements. I have an arraylist as an element of a class. When i enter: AAA as preName BBB as clubName c d e as Branches Then as a second element www as preName GGG as clubName a b as branches The result is coming like: AAA BBB c,d,e,a,b GGG www c,d,e,a,b Which means ArrayList part of the class is putting it on and on. I tried to use clear() method but caused problems. Any ideas.

    Read the article

  • Are eCommerce platforms worth it for large scale systems?

    - by codeflunky
    My company and I are building a new system for a relatively large client. We are going to be replacing their entire system, which includes some eCommerce aspects among many other things. It is not a typical public shopping site, and there are many things about the system (both back end and front end) that are quite different. Some of the people I work for are convinced that we should be using a third-party product to implement the eCommerce pieces (shopping cart, catalog management). Their opinion is that it is a solved problem, and we shouldn't have to reinvent it. Given that direction, I have reviewed around ten different .NET-based eCommerce platforms, and I struggle to imagine how we will be able to smoothly integrate any of them without a lot of friction. They are so all-encompassing that I feel they are probably better suited to implementing simple shopping sites than to larger systems that happen to have some eCommerce aspects to them. We have a really nice architecture planned for everything else (Entity Framework, ASP.NET MVC, etc.), and my gut is telling me that trying to introduce a third-party platform will cause unnecessary fragmentation and difficulty. I would love to hear some opinions from people who have been there. Have you used a third-party platform for eCommerce? Was it a typical shopping site or something different? Did you feel it was a help or a hindrance? Thanks.

    Read the article

  • How to show a large JPG image on the right side of jqGrid's edit form?

    - by cLee
    Is it possible to show a large (i.e. bigger than a thumbnail) jpeg image in the right-hand side of jqGrid's edit form ? Users want to look at a photo while entering data into fields ... they are describing things in the photo. I'm sure all things are possible with jQuery, but I don't know where to begin. thanks ... html: function afterSubmit(r, data, action) { // if session timeout returned: if (r.responseText == "logout") { window.location = '../scripts/logout.php'; } // if an error message is returned: if (r.responseText != "") { $('#submit_errors').html('Alert:'+r.responseText+''); // show div with error message $('#submit_errors').slideDown(); // hide error div after 10 seconds window.setTimeout(function() { $('#submit_errors').slideUp(); }, 10000); return false; // don't remove this! } return true; // don't remove this! } var lastsel; jQuery(document).ready(function(){ var mygrid = jQuery("#mobile_incidents").jqGrid({ url:'list.php?q=e', editurl:'edit.php', datatype: "json", // note: all column names are required even though some columns are hidden colNames:['Rec#','Date','Line','Photo'], colModel:[{ name:'id', index:'id', editable:true, editoptions: {readonly:'readonly'} }, { name:'mobile_discoveryDate', index:'mobile_discoveryDate', sortable:false, editable:true, edittype:'text', formatter:'date', formatoptions:{ srcformat:'Y/m/d', newformat:'m/d/Y' }, editoptions:{ size:12, maxlength:10, dataInit: function(element) { $(element).blur(); $(element).datepicker({dateFormat:'mm/dd/yyyy'}) } } }, { name:'mobile_lineName', index:'mobile_lineName', editable:true, sortable:false}, { name:'mobile_photo_name', index:'mobile_photo_name', editable:false, sortable:false} ], pager: '#mobile_incidents_pager', altRows: false, rowNum:10, rowList:[10,20], imgpath: '../include/images/jqgrid', viewrecords: true, emptyrecords:'No submissions found!', height: 260, sortname: 'id', sortorder: 'desc', gridview: true, scrollrows: true, autowidth: true, rownumbers: false, multiselect: false, subGrid:false, caption: '' }) .navGrid('#mobile_incidents_pager', // params: {add:false, edit:true, del:false, search:false, view:false, refresh:true, alertcap:' to edit:', alerttext:' . . . click on a row to highlight' }, // edit params: {top:50, left:5, editCaption: 'Edit Submission', bSubmit: 'Approve/Save', closeAfterEdit:true, afterSubmit:function(r,data){return afterSubmit(r,data,'edit');} }, {}, // add params {}, // delete params // search params: {multipleSearch: false}, // view params: {top: 150, left: 5, caption: 'View Mobile Rail Submission'} ); });

    Read the article

  • How to read a LARGE SQLite file to be copied into an Android emulator or device from the assets folder?

    - by Peter SHINe
    I guess many people have already read this article: Using your own SQLite database in Android applications: http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/comment-page-2/#comment-12368 However, it keeps throwing an IOException at while ((length = myInput.read(buffer))>0){ myOutput.write(buffer, 0, length); } I'm trying to use a large DB file; it's as big as 8 MB. I built it using sqlite3 on Mac OS X, inserted UTF-8 encoded strings (since I am using Korean), and added the android_meta table with ko_KR as the locale, as instructed above. However, when I debug, it keeps showing an IOException at length=myInput.read(buffer). I suspect it's caused by trying to read a big file; if not, I have no clue why. I tested the same code using a much smaller text file, and it worked fine. Can anyone help me out with this? I've searched many places, but none gave me a clear answer or a good solution (good meaning efficient or easy). I will try using BufferedInput(Output)Stream, but if the simpler one doesn't work, I don't think this will work either. Can anyone explain the fundamental limits of file input/output in Android, and possibly the right way around them? I will really appreciate anyone's considerate answer. Thank you. WITH MORE DETAIL: private void copyDataBase() throws IOException{ //Open your local db as the input stream InputStream myInput = myContext.getAssets().open(DB_NAME); // Path to the just created empty db String outFileName = DB_PATH + DB_NAME; //Open the empty db as the output stream OutputStream myOutput = new FileOutputStream(outFileName); //transfer bytes from the inputfile to the outputfile byte[] buffer = new byte[1024]; int length; while ((length = myInput.read(buffer))>0){ myOutput.write(buffer, 0, length); } //Close the streams myOutput.flush(); myOutput.close(); myInput.close(); }

    Read the article

  • What is the best way to iterate over a large result set in JDBC/iBatis 3?

    - by paul_sns
    We're trying to iterate over a large number of rows from the database and convert those into objects. Behavior will be as follows: Result will be sorted by sequence id, a new object will be created when sequence id changes. The object created will be sent to an external service and will sometimes have to wait before sending another one (which means the next set of data will not be immediately used) We already have invested code in iBatis 3 so an iBatis solution will be the best approach for us (we've tried using RowBounds but haven't seen how it does the iteration under the hood). We'd like to balance minimizing memory usage and reducing number of DB trips. We're also open to pure JDBC approach but we'd like the solution to work on different databases. UPDATE: We need to make as few calls to DB as possible (1 call would be the ideal scenario) while also preventing the application to use too much memory. Are there any other solutions out there for this type of problem may it be pure JDBC or any other technology? Thanks and hope to hear your insights on this.

    Read the article

  • Does the Lucene search function work on large documents?

    - by shaon-fan
    Hi there, I have a problem when searching with Lucene. First, the Lucene indexing function works well even on huge documents, such as a .pst file (the Outlook mail store): it can build an index that includes all the information from the .pst. The only problem is that the document is sometimes too large and contains a great many words. When I search with Lucene, it only seems to process the front part of the document: if a word only occurs in the back part, it isn't found and there are no hits in the result. But when I split the document into several parts (in a crude way, while debugging) and search each part, it works fine. So I want to know how to split the index, and what the size limit for searching is. Cheers, and I'm waiting for a reply. UPDATE: Following what Coady said, I set the length to the maximum of 2^31-1, but the search results still don't include what I want. Put simply, I convert the document's words into a string array[] to analyze; one document has 79,680 words including spaces and symbols. When I search for a certain word, it returns a count of only 300 when it actually occurs more than 300 times. For the same reason, when I search for a word in the back part of the document, it isn't found either. //////////////set the length indexWriter.SetMaxFieldLength(2147483647); ////////////////////search IndexSearcher searcher = new IndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString()); Hits hits = searcher.Search(query); This is my code, the same as everyone else's. I found this problem when I needed to count every word's hits in a document, and that is also how I found that words in the back part of the document couldn't be found. Please help me figure out whether there is a searcher length setting somewhere. Have you run into this problem?
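
    One thing worth checking, hedged as a guess rather than a diagnosis: Lucene's IndexWriter only indexes the first 10,000 terms of each field by default (maxFieldLength), so words in the back part of a long document are simply never indexed. Raising the limit only helps if it is set on the writer before the documents are added, and the existing index is rebuilt afterwards. A sketch against the Lucene.NET 2.x-style API the question already uses (indexLocation and BuildDocuments() are placeholders):

    ```csharp
    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Index;

    // Rebuild the index with an unlimited field length so terms beyond the
    // default 10,000-term cutoff become searchable.
    var writer = new IndexWriter(indexLocation, new StandardAnalyzer(), true);
    writer.SetMaxFieldLength(int.MaxValue);   // must happen before AddDocument calls

    foreach (var doc in BuildDocuments())     // placeholder for your own document construction
        writer.AddDocument(doc);

    writer.Optimize();
    writer.Close();
    ```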

    Read the article

  • How to parse large XML files on Google App Engine?

    - by Alon Carmel
    Hey, I have a fairly large XML file, 1 MB in size, that I host on S3. I need to parse that XML file into my App Engine datastore in its entirety. I have written a simple DOM parser that works fine locally, but online it hits the 30-second limit and stops. I tried to lighten the XML parsing by first downloading the XML file into a blob and then parsing the XML from the blob, but blobs are limited to 1 MB, so that fails. I also have multiple inserts into the datastore, which cause it to fail at 30 seconds. I saw somewhere a recommendation to use the Mapper class and save some marker of where the process stopped, but as I am a Python n00b I can't figure out how to implement it with a DOM parser or a SAX one (please provide an example of how to use it?). Right now I'm pretty much doing a bad thing: I parse the XML with PHP outside App Engine and push the data via HTTP POST to App Engine using a proprietary API. That works fine but is stupid and forces me to maintain two code bases. Can you please help me out?

    Read the article

  • Suggested (simple) approach for drawing large numbers of visual elements in WPF?

    - by Ender
    I'm writing an interface that features a large (~50000px width) "canvas"-type area that is used to display a lot of data in a fairly novel way. This involves lots of lines, rectangles, and text. The user can scroll around to explore the entire canvas. At the moment I'm just using a standard Canvas panel with various Shapes placed on it. This is nice and easy to do: construct a shape, assign some coordinates, and attach it to the Canvas. Unfortunately, it's pretty slow (to construct the children, not to do the actual rendering). I've looked into some alternatives, it's a bit intimidating. I don't need anything fancy - just the ability to efficiently construct and place objects in a coordinate plane. If all I get are lines, colored rectangles, and text, I'll be happy. Do I need Geometry instances inside of Geometry Groups inside of GeometryDrawings inside of some Panel container? Note: I'd like to include text and graphics (i.e. colored rectangles) in the same space, if possible.
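
    One commonly suggested lower-overhead option (a sketch, not the only answer): instead of thousands of Shape children, draw everything in a single element's OnRender override via a DrawingContext, which handles lines, rectangles and text in one coordinate space. Sizes and content below are invented for illustration:

    ```csharp
    using System.Globalization;
    using System.Windows;
    using System.Windows.Media;

    // One element that renders many primitives itself instead of hosting
    // thousands of Shape children. Give it an explicit Width/Height (or a
    // MeasureOverride) and host it in a ScrollViewer for panning.
    public class ChartSurface : FrameworkElement
    {
        protected override void OnRender(DrawingContext dc)
        {
            var pen = new Pen(Brushes.Black, 1);
            pen.Freeze();                                  // frozen pens/brushes render faster

            for (int i = 0; i < 1000; i++)                 // illustrative data only
            {
                double x = i * 50;
                dc.DrawRectangle(Brushes.LightBlue, pen, new Rect(x, 10, 40, 30));
                dc.DrawLine(pen, new Point(x, 50), new Point(x + 40, 90));
                dc.DrawText(
                    new FormattedText(i.ToString(), CultureInfo.CurrentCulture,
                                      FlowDirection.LeftToRight,
                                      new Typeface("Segoe UI"), 12, Brushes.Black),
                    new Point(x, 95));
            }
        }
    }
    ```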

    Read the article

  • jQuery AJAX: How to pass large HTML tags as parameters?

    - by marknt15
    Hello, How can I pass a large HTML tag data to my PHP using jQuery AJAX? When I'm receiving the result it is wrong. Thanks in advance. Cheers, Mark jQuery AJAX code: $('#saveButton').click(function() { // do AJAX and store tree structure to a PHP array (to be saved later in database) var treeInnerHTML = $("#demo_1").html(); alert(treeInnerHTML); var ajax_url = 'ajax_process.php'; var params = 'tree_contents=' + treeInnerHTML; $.ajax({ type: 'POST', url: ajax_url, data: params, success: function(data) { $("#show_tree").html(data); }, error: function(req, status, error) { } }); }); treeInnerHTML actual value: <ul class="ltr"> <li id="phtml_1" class="open"><a href="#"><ins>&nbsp;</ins>Root node 1</a> <ul> <li class="leaf" id="phtml_2"><a href="#"><ins>&nbsp;</ins>Child node 1</a></li> <li class="last leaf" id="phtml_3"><a href="#"><ins>&nbsp;</ins>Child node 2</a></li> </ul> </li> <li id="phtml_5" class="file last leaf"><a href="#"><ins>&nbsp;</ins>Root node 2</a></li> </ul> Returned result from my show_tree div: <ul class="\&quot;ltr\&quot;"> <li id="\&quot;phtml_1\&quot;" class="\&quot;open\&quot;"><a href="%5C%22#%5C%22"><ins></ins></a></li></ul>

    Read the article

  • WCF code generation for large/complex schema (HR-XML/OAGIS) - is there an alternative?

    - by Sasha Borodin
    Hello, and thank you for reading. I am implementing a WCF service based on a predefined specification (HR-XML 3.0). As such, I am starting with the schema and working my way back to code. There are a number of large schema documents (which import yet more schema documents) related to my implementation, provided by this specification. I am able to generate code using xsd.exe by supplying the "main" and "supporting" xsd files as arguments. But there are several issues, and I am wondering if this is the right approach: there are literally hundreds of classes (the code file is half a meg in size); there are duplicate classes (e.g. Type and Type1, which both represent the same type); and there are classes declared as inheriting from a base class, but that base class is not generated/defined. I understand that there are limitations to the kinds of schema supported by svcutil.exe/xsd.exe when targeting the DataContractSerializer and even the XmlSerializer. My question is two-fold: Are code generation "issues" fairly common when dealing with larger, modular xsd files? Has anyone had success generating data contracts from OAGIS or HR-XML schema? Given the above issues, are there better approaches to this task that avoid generating code and working with concrete objects? Does it make better sense to read and compose the SOAP message directly, while still taking advantage of the rest of the WCF framework? I understand that I would be losing the convenience of working with .NET objects and the framework-provided (de)serialization; given those losses, would it still be advantageous to base my service on WCF? Is there some "middle ground" between working with .NET types and pure XML? Thank you very much! -Sasha Borodin DFWHC.org
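
    On the "middle ground" question: WCF can keep handling hosting, bindings and security while the body is read and composed as raw XML, via an untyped Message contract. A hedged sketch (the contract, action and element names are invented for illustration, not part of HR-XML):

    ```csharp
    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.Xml.Linq;

    [ServiceContract]
    public interface IHrXmlEndpoint
    {
        // Untyped operation: WCF passes the raw message through instead of
        // deserializing it into generated classes.
        [OperationContract(Action = "*", ReplyAction = "*")]
        Message Process(Message request);
    }

    public class HrXmlEndpoint : IHrXmlEndpoint
    {
        public Message Process(Message request)
        {
            // Work with the SOAP body directly as XML.
            XElement body = XElement.Load(request.GetReaderAtBodyContents());

            // ... inspect/validate/transform the HR-XML payload here ...
            var reply = new XElement("Acknowledgement", new XElement("Status", "Received"));

            return Message.CreateMessage(request.Version, "urn:example:reply", reply.CreateReader());
        }
    }
    ```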

    Read the article

  • Best practice for inserting large chunks of HTML into elements with JavaScript?

    - by hamstar
    Hey guys. I'm building a web application (using prototype) at the moment that requires the addition of large chunks of HTML into the DOM. Most of these are rows that contain elements with all manner of attributes. Currently I keep a blank row of html in a variable and var blankRow = '<tr><td>' +'<a href="{LINK}" onclick="someFunc(\'{STRING}\');">{WORD}</a>' +'</td></tr>'; function insertRow(o) { newRow = blankRow .sub('{LINK}',o.link) .sub('{STRING}',o.string) .sub('{WORD}',o.word); $('tbodyElem').insert( newRow ); } Now that works all well and dandy, but is it the best practice? I have to update the code in the blankRow when I update code on the page, so the new elements being inserted are the same. It gets sucky when I have like 40 lines of HTML to go in a blankRow and then I have to escape it too. Is there an easier way? I was thinking of urlencoding and then decoding it before insertion but that would still mean a blankRow and lots of escaping. What would be mean would be a eof function a la PHP et al. $blankRow = <<<EOF text text EOF; That would mean no escaping but it would still need a blankRow. What do you do in this situation?

    Read the article

  • Which is faster for large "for" loop: function call or inline coding?

    - by zaplec
    Hi, I have written some embedded software (in C, of course) and now I'm considering ways to improve the system's running time. The most important single module in my system is one very large nested for-loop module. That module consists of two nested for loops that iterate at most 122,500 times. That's not very much yet, but the problem is that inside that nested loop I call a function that lives in another source file. That function consists mostly of another two nested for loops which always iterate 22,500 times. So I end up making that function call 122,500 times. I have already made the called function a lot lighter and shorter (and it still works as it should), and now I've started to wonder: would it be faster to remove the function call and write that code directly inside the first two for loops? The processor in the system is an ARM7TDMI running at 55 MHz. The system itself isn't very time-critical, so it doesn't have to be real-time capable; however, the faster it can do its work the better. Also, would it be faster to use while loops instead of for loops? Any advice on improving the running time is appreciated. -zaplec

    Read the article

  • Is there a faster way to parse through a large file with regex?

    - by Ray Eatmon
    Problem: Very very, large file I need to parse line by line to get 3 values from each line. Everything works but it takes a long time to parse through the whole file. Is it possible to do this within seconds? Typical time its taking is between 1 minute and 2 minutes. Example file size is 148,208KB I am using regex to parse through every line: Here is my c# code: private static void ReadTheLines(int max, Responder rp, string inputFile) { List<int> rate = new List<int>(); double counter = 1; try { using (var sr = new StreamReader(inputFile, Encoding.UTF8, true, 1024)) { string line; Console.WriteLine("Reading...."); while ((line = sr.ReadLine()) != null) { if (counter <= max) { counter++; rate = rp.GetRateLine(line); } else if(max == 0) { counter++; rate = rp.GetRateLine(line); } } rp.GetRate(rate); Console.ReadLine(); } } catch (Exception e) { Console.WriteLine("The file could not be read:"); Console.WriteLine(e.Message); } } Here is my regex: public List<int> GetRateLine(string justALine) { const string reg = @"^\d{1,}.+\[(.*)\s[\-]\d{1,}].+GET.*HTTP.*\d{3}[\s](\d{1,})[\s](\d{1,})$"; Match match = Regex.Match(justALine, reg, RegexOptions.IgnoreCase); // Here we check the Match instance. if (match.Success) { // Finally, we get the Group value and display it. string theRate = match.Groups[3].Value; Ratestorage.Add(Convert.ToInt32(theRate)); } else { Ratestorage.Add(0); } return Ratestorage; } Here is an example line to parse, usually around 200,000 lines: 10.10.10.10 - - [27/Nov/2002:16:46:20 -0500] "GET /solr/ HTTP/1.1" 200 4926 789
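
    One change that usually helps here (a sketch, not a promise of sub-second parsing): construct the Regex once with RegexOptions.Compiled and reuse it for all ~200,000 lines, and return one value per line instead of growing a shared list inside the matcher. Roughly:

    ```csharp
    using System.Text.RegularExpressions;

    public class Responder
    {
        // Built once for the whole run instead of being looked up per line.
        private static readonly Regex RateRegex = new Regex(
            @"^\d{1,}.+\[(.*)\s[\-]\d{1,}].+GET.*HTTP.*\d{3}[\s](\d{1,})[\s](\d{1,})$",
            RegexOptions.IgnoreCase | RegexOptions.Compiled);

        // Returns the third captured group for one line, or 0 when it doesn't match.
        public int GetRateValue(string line)
        {
            Match m = RateRegex.Match(line);
            return m.Success ? int.Parse(m.Groups[3].Value) : 0;
        }
    }

    // Usage inside the existing StreamReader loop:
    //   rate.Add(rp.GetRateValue(line));
    ```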

    Read the article

  • Web page for iPhone - Large fonts instead of scrolling?

    - by chris_l
    I'm planning the layout of my web page, which should also be usable on the IPhone. I don't really have much experience with the IPhone yet - I just installed the IPhone Simulator on my Mac. The page's contents are flexible, so I think it would be better to use this flexibility to avoid that the user has to scroll around the entire page. Especially I have A header and a sidebar that will be used all the time to perform several actions. A main content area with a number of elements (e.g. images). The UI would stay usable pretty well, if the number of elements shown at one time is reduced for a small screen (e.g. by JavaScript). It would also be okay to make the main content area scrollable (as opposed to the entire page). The problem: If I simply display the page on the IPhone, it uses an extremely small font size, so that users must zoom in first, and then scroll around - so that they can't see the header and sidebar all the time. What's the best way to deal with this situation? Just leave it this way (very small fonts), because users expect that behaviour on the IPhone? Increase the font size (by specifying it in em or px or with xx-large, or what would be the best way?), if I detect - somehow - that it's being displayed on the IPhone. Or is there some way to restrict the viewport size to the screen size, and make it zoom in automatically? I think that would be the easiest solution in my case. Or ...?

    Read the article

  • What can cause a large discrepancy between minor GC time and total pause time?

    - by cxcg
    We have a latency-sensitive application, and are experiencing some GC-related pauses we don't fully understand. We occasionally have a minor GC that results in application pause times that are much longer than the reported GC time itself. Here is an example log snippet: 485377.257: [GC 485378.857: [ParNew: 105845K-621K(118016K), 0.0028070 secs] 136492K-31374K(1035520K), 0.0028720 secs] [Times: user=0.01 sys=0.00, real=1.61 secs] Total time for which application threads were stopped: 1.6032830 seconds The total pause time here is orders of magnitude longer than the reported GC time. These are isolated and occasional events: the immediately preceding and succeeding minor GC events do not show this large discrepancy. The process is running on a dedicated machine, with lots of free memory, 8 cores, running Red Hat Enterprise Linux ES Release 4 Update 8 with kernel 2.6.9-89.0.1EL-smp. We have observed this with (32 bit) JVM versions 1.6.0_13 and 1.6.0_18. We are running with these flags: -server -ea -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:NewSize=128m -XX:MaxNewSize=128m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:-TraceClassUnloading Can anybody offer some explanation as to what might be going on here, and/or some avenues for further investigation?

    Read the article

  • JavaScript (using jQuery) in a large project... organization, passing data, private methods, etc.

    - by gaoshan88
    I'm working on a large project that is organized like so: Multiple javascript files are included as needed and most code is wrapped in anonymous functions... // wutang.js //Included Files Go Here // Public stuff var MethodMan; // Private stuff (function() { var someVar1; MethodMan = function(){...}; var APrivateMethod = function(){...}; $(function(){ //jquery page load stuff here $('#meh').click(APrivateMethod); }); })(); I'm wondering about a few things here. Assuming that there are a number of these files included on a page, what has access to what and what is the best way to pass data between the files? For example, I assume that MethodMan() can be accessed by anything on any included page. It is public, yes? var someVar1 is only accessible to the methods inside that particular anonymous function and nowhere else, yes? Also true of var APrivateMethod(), yes? What if I want APrivateMethod() to make something available to some other method in a different anonymous wrapped method in a different included page. What's the best way to pass data between these private, anonymous functions on different included pages? Do I simply have to make whatever I want to use between them public? How about if I want to minimize global variables? What about the following: var PootyTang = function(){ someVar1 = $('#someid').text(); //some stuff }; and in another included file used by that same page I have: var TangyPoot = function(){ someVar1 = $('#someid').text(); //some completely different stuff }; What's the best way to share the value of someVar1 across these anonymous (they are wrapped as the first example) functions?

    Read the article

  • SQL database schema design for a large, 3-billion-relationship database

    - by K-Bell
    Get your geek on. Can you solve this? I am designing a products database for SQL Server 2008 R2 Ed. (not Enterprise Ed.) that will be used to store custom product configurations for over 30,000 distinct products. The database will have up to 500 users at a time. Here is the design problem… Each Product has a collection of Parts (up to 50 parts per product). So if I have 30,000 Products and each of them can have up to 50 Parts, that's 1.5 million distinct Product-to-Part relationships …or as an equation… 30,000 (Products) x 50 (Parts) = 1.5 million Product-to-Part records. …and if… Each Part can have up to 2,000 finish options (a finish is a paint color). NOTE: Only one finish will be selected by a user at run-time. The 2,000 finish options I need to store are the allowed options for a specific part on a specific product. So if I have 1.5 million distinct product-to-part relationships/records and each of those parts can have up to 2,000 finishes, that is 3 billion allowable product-to-part-to-finish relationships/records …or as an equation… 1.5 million (Parts) x 2,000 (Finishes) = 3 billion Product-to-Part-to-Finish records. How can I design this database so that I can execute fast and efficient queries for a specific product and return its list of Parts and all the allowable Finishes for each part, without 3 billion Product-to-Part-to-Finish records? Read time is more important than write time. Please post your thoughts/suggestions if you have experience with large databases. Thanks!

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files on multi-core CPUs

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
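
    For the "two hashes from one read" case described above, HashAlgorithm's incremental API makes the shared pass explicit; a single MD5 or SHA-256 over one stream remains sequential by design, which is why parallel speedups generally come from chunked or tree-style hashing schemes instead. A small sketch of the shared-read version:

    ```csharp
    using System;
    using System.IO;
    using System.Security.Cryptography;

    static class FileHashing
    {
        // One pass over the file feeds both algorithms; each hash is still computed
        // sequentially, but the second digest is "free" as far as I/O is concerned.
        public static void HashBoth(string path)
        {
            using (var md5 = MD5.Create())
            using (var sha256 = SHA256.Create())
            using (var stream = File.OpenRead(path))
            {
                var buffer = new byte[1 << 20];
                int read;
                while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    md5.TransformBlock(buffer, 0, read, null, 0);
                    sha256.TransformBlock(buffer, 0, read, null, 0);
                }
                md5.TransformFinalBlock(buffer, 0, 0);
                sha256.TransformFinalBlock(buffer, 0, 0);

                Console.WriteLine("MD5:    " + BitConverter.ToString(md5.Hash));
                Console.WriteLine("SHA256: " + BitConverter.ToString(sha256.Hash));
            }
        }
    }
    ```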

    Read the article

  • How to avoid geometric slowdown with large Linq transactions?

    - by Shaul
    I've written some really nice, funky libraries for use in LinqToSql. (Some day when I have time to think about it I might make it open source... :) ) Anyway, I'm not sure if this is related to my libraries or not, but I've discovered that when I have a large number of changed objects in one transaction, and then call DataContext.GetChangeSet(), things start getting reaalllly slooowwwww. When I break into the code, I find that my program is spinning its wheels doing an awful lot of Equals() comparisons between the objects in the change set. I can't guarantee this is true, but I suspect that if there are n objects in the change set, then the call to GetChangeSet() is causing every object to be compared to every other object for equivalence, i.e. at best (n^2-n)/2 calls to Equals()... Yes, of course I could commit each object separately, but that kinda defeats the purpose of transactions. And in the program I'm writing, I could have a batch job containing 100,000 separate items, that all need to be committed together. Around 5 billion comparisons there. So the question is: (1) is my assessment of the situation correct? Do you get this behavior in pure, textbook LinqToSql, or is this something my libraries are doing? And (2) is there a standard/reasonable workaround so that I can create my batch without making the program geometrically slower with every extra object in the change set?
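
    One workaround people reach for when a single change set grows this large (a sketch under the assumption that the work can be chunked; MyDataContext, BatchItem and the table name are placeholders): submit in small chunks, each with its own DataContext, inside one TransactionScope so the whole batch still commits or rolls back together. Whether the Equals() churn comes from stock LINQ to SQL or from the custom libraries would still need a profiler run against a plain DataContext.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Transactions;

    static class BatchSubmitter
    {
        // Hypothetical names throughout: the point is many small change sets
        // wrapped in a single ambient transaction.
        public static void SubmitInChunks(IEnumerable<BatchItem> items, int chunkSize = 500)
        {
            var options = new TransactionOptions { Timeout = TimeSpan.FromMinutes(30) };
            using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
            {
                foreach (var chunk in items.Select((item, i) => new { item, i })
                                           .GroupBy(x => x.i / chunkSize, x => x.item))
                {
                    using (var db = new MyDataContext())
                    {
                        foreach (var item in chunk)
                            db.BatchItems.InsertOnSubmit(item);
                        db.SubmitChanges();        // small change set => small GetChangeSet() cost
                    }
                }
                scope.Complete();                  // everything commits together, or not at all
            }
        }
    }
    ```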

    Read the article

  • Exemplars of large document-centric applications with COM/XPCOM/.NET interfaces.

    - by Warren P
    I am looking for exemplars (design examples) showing the use of interfaces (aka 'protocols' for you smalltalkers) to design a document management architecture in a large Word Processor, Spreadsheet, vector graphic or publishing package, or office-productivity (non-database) application with support for as many of the following as possible: any open source project, will be ideal, and language of implementation is unimportant since I am looking for design examples, however an object oriented language with support for "interfaces" is a must. I know at least a dozen languages, and I'm willing to study any application's source. use of "interface" could loosely be applied to either XPCOM or COM interfaces, or .NET interfaces, or even the use of pure-virtual (virtual+abstract) base-classes for OOP languages that lack the ability to declare an interface distinct from a class. I am mostly looking for a robust, thorough and flexible implementation for a document, IDocument, various document views (IDocumentView), and whatever operations make sense in that case. I am particular interested in cases where the product in question is a real-world product. For example, if anybody familiar with OpenOffice can tell me if the code contains a good sample design. I am looking for design documentation that outlines the design of the interfaces for such an application. So for example, if the openoffice spreadsheet has such an interface design, then that might be the best case, because it is a widely used real-world design, with millions of users, rather than a textbook example, which is minimal, and contrived. I know that the Mozilla platform uses XPCOM, and its design is heavily "interface" oriented, but I am looking more for a "word processor" or "spreadsheet" type of document design, rather than a web-browser. I am particularly interested in the interfaces used to access to data and meta-data such as markup (attributes like bold, and italics, and font size), and the ability to search and look up named entities within a document.
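
    For what it's worth, a bare-bones illustration of the kind of interface split being asked about (names invented here, not lifted from any of the products mentioned):

    ```csharp
    using System.Collections.Generic;
    using System.IO;

    public interface IDocument
    {
        string Title { get; set; }
        bool IsDirty { get; }
        void Load(Stream source);
        void Save(Stream target);
        IEnumerable<IDocumentView> Views { get; }
    }

    public interface IDocumentView
    {
        IDocument Document { get; }
        void Refresh();                 // re-render after the model changes
        void ScrollTo(int position);    // navigate within the rendered document
    }
    ```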

    Read the article

  • Large product catalog with statistics - alternatives to SQL Server?

    - by Eric P
    I am building the UI for a large product catalog (millions of products). I am using SQL Server, FreeText search and ASP.NET MVC. Tables are normalized and indexed. Most queries take less than a second to return. The issue is this: let's say the user searches by keyword. On the search results page I need to display/query for: the first 20 matching products (paged, sorted); the total count of matching products for paging; the list of stores for just the matching products; the list of brands for just the matching products; the list of colors for just the matching products. Each query takes about .5 to 1 second, so altogether it is something like 5 seconds. I would like to get the whole page to load in under 1 second. There are several approaches: Optimize the queries even more. I have already spent a lot of time on this one, so I'm not sure it can be pushed further. Load products first, then load the rest of the information using AJAX. More of a workaround, and it would need UI changes. Re-organize the data to be more report-friendly. I have already aggregated a lot of fields. I checked out several similar sites, for example zappos.com. Not only do they display the same information I would like in under 1 second, they also include statistics (number of results in each category). The following is the search for the keyword "white": http://www.zappos.com/white How do sites like Zappos and Amazon make their results, filters and stats appear almost instantly?

    Read the article

  • How can you exclude a large number of records in a cross db query using LINQ2SQL?

    - by tap
    So here is my situation: I have a vendor supplied DB we cannot modify and a custom db that imports data from the vendor app and acts on it. Once records are imported form the vendor app, they cannot appear on the list of records to be imported. Also we only want to display the 250 most recent records that have not been imported. What I originally started with was select the list of ids that have been imported from the custom db, and then query the vendor db, using the list of ids in a .Where(x = !idList.Contains(x.Id)) clause on the remote query. This worked up until we broke 2100 records imported into the custom db, as 2100 is the limit on the number of parameters that can be passed into SQL. After finding out this was the actual problem and not the 'invalid buffer'/'severe error' ADO.Net reported, my solution was to remove the first 2000 ids in the remote query, and then remove the remaining records in the local query. Having to pull back a large number of irrelevant records, just to exclude them, so I can get the correct 250 records seems very inelegant. Is there a better way to do this, short of doing a cross db stored procedure? Thanks in advance.
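
    One pattern that sidesteps both the 2,100-parameter limit and pulling the whole vendor table (a sketch assuming the imported IDs fit comfortably in memory; the context and table names are placeholders): page the vendor query by recency and filter locally against a HashSet until 250 un-imported records have been collected.

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    static class VendorImportQueries
    {
        // VendorRecord, VendorDataContext and CustomDataContext are placeholders
        // for the real LINQ to SQL types.
        public static List<VendorRecord> GetRecentUnimported(int wanted = 250, int pageSize = 500)
        {
            using (var custom = new CustomDataContext())
            using (var vendor = new VendorDataContext())
            {
                var imported = new HashSet<int>(custom.ImportedRecords.Select(r => r.VendorId));
                var results = new List<VendorRecord>();

                for (int page = 0; results.Count < wanted; page++)
                {
                    var batch = vendor.Records
                                      .OrderByDescending(r => r.CreatedOn)
                                      .Skip(page * pageSize)
                                      .Take(pageSize)
                                      .ToList();
                    if (batch.Count == 0) break;          // no more vendor rows to look at

                    results.AddRange(batch.Where(r => !imported.Contains(r.Id)));
                }

                return results.Take(wanted).ToList();
            }
        }
    }
    ```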

    Read the article
