Search Results

Search found 19055 results on 763 pages for 'high performance'.

Page 38/763 | < Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >

  • PPT Leveraging Azure for Performance Testing

    - by Tarun Arora
    I have recently presented a session on “How you can leverage Azure for Performance Testing” your application.  It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress but also gives you the ability to test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry I have realized that the biggest barrier in Load Testing & Performance Testing adoption is the high infrastructure and administration cost that comes with this phase of testing. In the session I tried to demonstrate how you can use the power of Windows Azure to effectively abstract the administration cost of infrastructure management and lower the total cost of Load & Performance Testing. You can view the session presentation here, http://www.slideshare.net/aroratarun/leveraging-azure-for-performance-testing  I’ll be adding a video on this subject shortly… If you have any feedback or further suggestions to add to the goodness of this solution please get in touch.

    Read the article

  • Research topics for starting and optimizing a high-traffic website

    - by user745434
    I bury a good deal of my ideas for fear that I don't know enough about scaling web applications and high-traffic websites. That said, I'd like to know of any general topics to research in order to ensure that your web app doesn't break / slow down when you start getting to Twitter-level traffic. I'm looking for research topics with, preferably, additional resources. For example: Make sure you optimize SQL queries (see High Performance MySQL Optimization)

    Read the article

  • 5 Easy Ways to Get High PR Links to Boost Your Site in Google

    High PR links are some of the most valuable aspects in any link-building & SEO campaign. Not only do these links make Google respect your site more, but they can also boost your site's ranking overnight. Here are 5 places to get high quality links that will do a lot of good to your site's ranking in Google.

    Read the article

  • Harping on Metadata Performance: New Benchmarks

    Linux Magazine: "Metadata performance is perhaps the most neglected facet of storage performance. In previous articles we've looked into how best to improve metadata performance without too much luck. Could that be a function of the benchmark? Hmmm..."

    Read the article

  • Nvidia Powermizer Performance Levels

    - by jeffrey
    Is there any way to configure Nvidia PowerMizer performance levels? My current setup has 3 power levels, with the lowest one being 50MHz. The problem with this is that it lags compiz when it goes to the lowest performance level 0. Minimizing, maximizing, dragging windows, etc. are all sluggish when it's at the lowest level. Once PowerMizer leaves level 0 everything is very smooth and runs great. Is there any way for me to remove level 0 and just run the two higher levels 1/2? I don't want to completely disable PowerMizer, but I can't stand the lagging once PowerMizer drops into performance level 0. Setting the option "prefer maximum performance" fixes the problem as it disables PowerMizer, but the GPU is overkill at stock speeds for most desktop use @ 850MHz. intel i5 2500k, asus gene-z z68, evga 560ti fpb (driver 295.40), ubuntu 12.04 LTS x64

    Read the article

  • Delphi: Fast(er) widestring concatenation

    - by Ian Boyd
    I have a function whose job is to convert an ADO Recordset into HTML:

        class function RecordsetToHtml(const rs: _Recordset): WideString;

    The guts of the function involve a lot of WideString concatenation:

        while not rs.EOF do
        begin
           Result := Result+CRLF+ '<TR>';
           for i := 0 to rs.Fields.Count-1 do
              Result := Result+'<TD>'+VarAsString(rs.Fields[i].Value)+'</TD>';
           Result := Result+'</TR>';
           rs.MoveNext;
        end;

    With a few thousand results, the function takes what any user would feel is too long to run. The Delphi Sampling Profiler shows that 99.3% of the time is spent in WideString concatenation (@WStrCatN and @WstrCat). Can anyone think of a way to improve WideString concatenation? I don't think Delphi 5 has any kind of string builder, and Format doesn't support Unicode. And to make sure nobody tries to weasel out, pretend you are implementing the interface:

        IRecordsetToHtml = interface(IUnknown)
           function RecordsetToHtml(const rs: _Recordset): WideString;
        end;

    Update One: I thought of using an IXMLDOMDocument to build up the HTML as XML. But then I realized that the final HTML would be XHTML and not HTML - a subtle, but important, difference.

    Update Two: Microsoft knowledge base article: How To Improve String Concatenation Performance
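
    The standard fix for this pattern, in any language, is to append into a buffer that grows geometrically while tracking the logical length, so the string is not reallocated and recopied on every concatenation. Since Delphi 5 has no built-in string builder, here is that idea as a minimal, language-neutral C# sketch (the class name and starting capacity are arbitrary, not taken from the question or the KB article):

        using System;

        // Grow-by-doubling buffer: appends go into spare capacity, so the cost of
        // copying is paid once per growth instead of once per concatenation.
        class WideStringBuilder
        {
            private char[] _buffer = new char[256];
            private int _length;

            public void Append(string s)
            {
                if (_length + s.Length > _buffer.Length)
                {
                    int newSize = _buffer.Length * 2;
                    while (newSize < _length + s.Length) newSize *= 2;
                    Array.Resize(ref _buffer, newSize);   // one copy per growth
                }
                s.CopyTo(0, _buffer, _length, s.Length);
                _length += s.Length;
            }

            public override string ToString()
            {
                return new string(_buffer, 0, _length);
            }
        }

    The same idea translates to Delphi by growing a WideString (or an array of WideChar) with SetLength and keeping a separate length counter.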

    Read the article

  • What are provenly scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.] have to be fast to query if they are to be used in customizing web sites, consumer communications, etc. If you have a large number of consumers and large profiles with a huge set of variables (as profiles describing human behaviour are likely to be), you are in trouble. If you really have a physical relational database to which you target a query, and then a physical disk starts to rotate someplace to give you an individual profile or a set of profiles, the profile user (a web site customizing a page, a recommendation engine making a recommendation...) has died of boredom before getting any observable results. There is the possibility of having the profiles in memory, which would of course increase performance hugely. What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these someplace?
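
    Whichever store ends up holding the profiles, the "keep them in memory" point from the question can be isolated behind a read-through cache so page rendering never waits on a disk. A minimal C# sketch of that layering (the type and member names are hypothetical, and the backing store is whatever product is chosen):

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;

        // Illustrative profile record: an id plus a bag of scores/variables.
        class ConsumerProfile
        {
            public string ConsumerId;
            public Dictionary<string, double> Scores = new Dictionary<string, double>();
        }

        // Read-through cache: lookups are served from memory; the backing store
        // (relational, key-value, document, whichever is chosen) is only hit on a miss.
        class ProfileCache
        {
            private readonly ConcurrentDictionary<string, ConsumerProfile> _cache =
                new ConcurrentDictionary<string, ConsumerProfile>();
            private readonly Func<string, ConsumerProfile> _loadFromStore;

            public ProfileCache(Func<string, ConsumerProfile> loadFromStore)
            {
                _loadFromStore = loadFromStore;
            }

            public ConsumerProfile Get(string consumerId)
            {
                return _cache.GetOrAdd(consumerId, _loadFromStore);
            }
        }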

    Read the article

  • Update table instantly or “Bulk” Update in database later? And is it advisable?

    - by Mestika
    Hi, I have a question regarding a semi-constant update in a database. In short, it concerns a checkout function on a web page; each time the checkout function is invoked it performs five steps. I want to try to optimize this function and have my eye on a step where I update a table each time a checkout is performed: I take the information retrieved from the shopping cart and then update the table in question. I do have some indexes on the table; the gain from them is greater than the cost of keeping them, so that is a cost I'm willing to take. Now, my question is: could it, performance-wise, be better not to update the table instantly, but to collect every checkout's items and save them somewhere (maybe in a file), and then at a specific time (or several times) a day take that file and update the table with the new information? That got me thinking about whether there is a possibility to use some sort of bulk update that takes a file, hashmap, array (or?) and then updates the table. I'm using IBM DB2 version 9.7. Mestika
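
    For the "collect now, update later" idea, one low-tech variant is to buffer the checkout rows and flush them in a single transaction with one prepared, parameterized statement. A sketch only, using generic ADO.NET: the table, column, and class names are made up, and the exact parameter-marker syntax depends on the DB2 provider in use.

        using System.Collections.Generic;
        using System.Data;
        using System.Data.Common;

        // Hypothetical buffered row: which item was checked out and how many.
        class CheckoutRow
        {
            public int ItemId;
            public int Quantity;
        }

        static class CheckoutFlusher
        {
            // Flushes all buffered rows in one transaction, reusing a single prepared
            // statement instead of issuing an ad hoc UPDATE per checkout.
            // Assumes conn is already open.
            public static void FlushCheckouts(DbConnection conn, IEnumerable<CheckoutRow> rows)
            {
                using (DbTransaction tx = conn.BeginTransaction())
                using (DbCommand cmd = conn.CreateCommand())
                {
                    cmd.Transaction = tx;
                    // Marker syntax ("@qty", "@id") may need to be "?" or ":name"
                    // depending on the DB2 ADO.NET provider.
                    cmd.CommandText =
                        "UPDATE checkout_stats SET sold = sold + @qty WHERE item_id = @id";

                    DbParameter qty = cmd.CreateParameter();
                    qty.ParameterName = "@qty";
                    qty.DbType = DbType.Int32;
                    cmd.Parameters.Add(qty);

                    DbParameter id = cmd.CreateParameter();
                    id.ParameterName = "@id";
                    id.DbType = DbType.Int32;
                    cmd.Parameters.Add(id);

                    cmd.Prepare();

                    foreach (CheckoutRow row in rows)
                    {
                        qty.Value = row.Quantity;
                        id.Value = row.ItemId;
                        cmd.ExecuteNonQuery();
                    }
                    tx.Commit();
                }
            }
        }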

    Read the article

  • SQL SERVER – What is Spatial Database? – Developing with SQL Server Spatial and Deep Dive into Spati

    - by pinaldave
    What is Spatial Database? A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. (Source: Wikipedia) Today I will be talking about the same subject at Microsoft TechEd India. If you want to learn about the spatial aspect of data and how to integrate it with SQL Server, this is the perfect session for you. Spatial is a very special concept in SQL Server and I really like how it is implemented. In general, Performance Tuning and Query Optimization is something I have always enjoyed in my professional life. Indexes are my best friends; many times by implementing them, and many times by removing them, I have improved the performance of a system. In this session, I will be talking about indexes along with spatial data. As the spatial database is a very interesting concept, I will cover it in 10 quick but very interesting slides. I will make sure that in the very first 20 minutes you understand the following topics: Introduction to Spatial Database, One-line definition, Understanding Spatial Indexing, Index Internals, Query/Performance Tuning, Query Hinting/Cost Analysis, Spatial Index Catalog Views, Performance Troubleshooting, Finding the Optimal Index using the Spatial Index SP, Common Errors, and Index Maintenance. This slide deck will be followed by an approximately 30-minute demo telling the story of geometry, geography, index internals and performance tuning. If you are interested in learning how GIS works and how SQL Server supports these wonderful tools out of the box, you will really like how the story is told. I am sure all people who attend the event know how Bangalore is positioned on the map of India. I will take the example of Bangalore and Hyderabad and demonstrate how an index can improve performance. Well, there are lots of stories to tell in the session, and I will open it with the beautiful script of Botticelli’s Birth of Venus created by Michael J. Swart. I will also demonstrate a few real-life scenarios where I will be talking about Spatial Database and its usage. Do not miss this session. At the end of the session a book will be awarded to the best participant. My session details: Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing. Date: April 14, 2010. Time: 5:00pm-6:00pm. Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines how to use the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: Spatial Database

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble: This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.

    Why Columnstore? As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows (DW, reporting, aggregations, grouping, scans, etc.), SQL Server has never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore and to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

    The Customer: DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries. DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

    SSRS, SSAS, & MDX: Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries: Even though the fact table is relatively small (only 22 million rows & 33GB), the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

    The Details: Classic vs. columnstore before-&-after metrics are impressive.

        Scenario        Conventional Structures              Columnstore      Improvement
        SSRS via SSAS   10 - 12 seconds                      1 second         >10x
        Ad Hoc          5 - 7 minutes (300 - 420 seconds)    1 - 2 seconds    >100x

    Here are two charts characterizing this data graphically. The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.

    The Wins

    Performance: Even prior to columnstore implementation, at 10 - 12 seconds, canned report performance against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data: no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators. As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.

    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.

    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014. DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: “What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways.”

    Summary: For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries; canned report times improved, and ad hoc query results dropped from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.

    Read the article

  • Ado.net performance:What does SNIReadSync do?

    - by Beatles1692
    We have a query that takes 2 seconds to run in SQL Server Management Studio but 13 seconds to show up on a client screen. I used dotTrace to profile my source code and noticed there is this SNIReadSync method (part of the ADO.NET assemblies) that takes a lot of time to do its job (9 seconds). I ran my code on the server itself so I could rule out network effects, and the result was the same. It doesn't matter if I'm using OleDbConnection or SqlConnection. It doesn't matter if I'm using a DataReader or a DataSet. Connection pooling does not solve this issue (as my results show). I googled this and couldn't find an answer to what this method is actually doing and how we can improve it. Here's what I found on Stack Overflow, which isn't helpful either: http://stackoverflow.com/questions/1610874/snireadsync-executing-between-120-500-ms-for-a-simple-query-what-do-i-look-for
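
    For what it's worth, SNIReadSync is generally ADO.NET's synchronous network read, i.e. time the client spends blocked waiting for SQL Server to produce and send rows, so the cost usually belongs to the query or to consuming its results rather than to the method itself. A small sketch (connection string and SQL are placeholders) that separates time-to-first-result from time spent reading rows can help narrow down where the 13 seconds go:

        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;

        static class QueryProfiler
        {
            // Splits "time to first result set" from "time to read all rows".
            public static void ProfileQuery(string connectionString, string sql)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();

                    var sw = Stopwatch.StartNew();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        Console.WriteLine("Time to first result set: " + sw.Elapsed);

                        int rows = 0;
                        while (reader.Read()) rows++;   // consuming rows is where SNIReadSync waits
                        Console.WriteLine("Rows: " + rows + ", total: " + sw.Elapsed);
                    }
                }
            }
        }

    If the first number is large, compare plans and SET options (e.g. ARITHABORT) between SSMS and the application; if the second dominates, the time is being spent transferring or consuming the rows.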

    Read the article

  • C# XDocument Attribute Performance Concerns

    - by Dested
    I have a loaded XDocument from which I need to efficiently grab all the elements of a certain name whose attribute equals a certain value. My current code:

        IEnumerable<XElement> vm;
        if (!cacher2.TryGetValue(name, out vm))
        {
            vm = project.Descendants(XName.Get(name));
            cacher2.Add(name, vm);
        }
        XElement[] abdl = (vm.Where(a => a.Attribute(attribute).Value == ab)).ToArray();

    cacher2 is a Dictionary<string, IEnumerable<XElement>>. The ToArray is so I can evaluate the expression now; I don't think this causes any real speed concerns. The problem is the Where itself. I am searching through anywhere from 1 to 10k items. Any help?
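
    One thing worth noting is that the cached IEnumerable<XElement> returned by Descendants() is a lazy query, so every Where re-walks the document. A sketch of an alternative (class and field names are made up) that materializes a lookup per element name, keyed by the attribute's value, so repeated queries become dictionary lookups:

        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        // Caches a materialized lookup per element name: attribute value -> elements.
        // Each element name is scanned once; later queries do not re-walk the tree.
        class ElementIndex
        {
            private readonly XDocument _project;
            private readonly string _attribute;
            private readonly Dictionary<string, ILookup<string, XElement>> _cache =
                new Dictionary<string, ILookup<string, XElement>>();

            public ElementIndex(XDocument project, string attribute)
            {
                _project = project;
                _attribute = attribute;
            }

            public XElement[] Find(string name, string value)
            {
                ILookup<string, XElement> byValue;
                if (!_cache.TryGetValue(name, out byValue))
                {
                    byValue = _project.Descendants(name)
                                      .ToLookup(e => (string)e.Attribute(_attribute));
                    _cache.Add(name, byValue);
                }
                return byValue[value].ToArray();
            }
        }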

    Read the article

  • Improving long-polling Ajax performance

    - by Bears will eat you
    I'm writing a webapp (Firefox-compatible only) which uses long polling (via jQuery's ajax abilities) to send more-or-less constant updates from the server to the client. I'm concerned about the effects of leaving this running for long periods of time, say, all day or overnight. The basic code skeleton is this:

        function processResults(xml) {
            // do stuff with the xml from the server
        }

        function fetch() {
            setTimeout(function () {
                $.ajax({
                    type: 'GET',
                    url: 'foo/bar/baz',
                    dataType: 'xml',
                    success: function (xml) {
                        processResults(xml);
                        fetch();
                    },
                    error: function (xhr, type, exception) {
                        if (xhr.status === 0) {
                            console.log('XMLHttpRequest cancelled');
                        } else {
                            console.debug(xhr);
                            fetch();
                        }
                    }
                });
            }, 500);
        }

    (The half-second "sleep" is so that the client doesn't hammer the server if the updates are coming back to the client quickly - which they usually are.) After leaving this running overnight, it tends to make Firefox crawl. I'd been thinking that this could be partially caused by a large stack depth since I've basically written an infinitely recursive function. However, if I use Firebug and throw a breakpoint into fetch, it looks like this is not the case. The stack that Firebug shows me is only about 4 or 5 frames deep, even after an hour. One of the solutions I'm considering is changing my recursive function to an iterative one, but I can't figure out how I would insert the delay in between Ajax requests without spinning. I've looked at the JS 1.7 "yield" keyword but I can't quite wrap my head around it, to figure out if it's what I need here. Is the best solution just to do a hard refresh on the page periodically, say, once every hour? Is there a better/leaner long-polling design pattern that won't put a hurt on the browser even after running for 8 or 12 hours? Or should I just skip the long polling altogether and use a different "constant update" pattern since I usually know how frequently the server will have a response for me?

    Read the article

  • Slow Performance -- ASP .NET ASPNET_WP.EXE and CSC.EXE Running After Clicking Redirect Link

    - by Dan7el
    I click on a link from one page that does a redirect to another page (Response.Redirect(page.aspx)). The browser churns for about 30 seconds and then the page displays. I'm trying to track down why it takes so long to load the page. The page hosts two other custom controls. I have commented out the lines of code for each control, and for both together, and the page still takes about 30 seconds to load. I've set breakpoints on the Page_Load event for each of the controls as well as page.aspx, and it also takes about 30 seconds from clicking the link with the Response.Redirect to hitting the first breakpoint. I loaded up Task Manager and clicked on the link; I notice aspnet_wp.exe and csc.exe run during this 30 second time frame. I'm wondering if there are some sort of code-behind shenanigans going on while I'm waiting for the page to load. This only occurs the first time I click on the link; afterwards, it's not as slow. I've googled but there's not a lot of useful information about this. Anyone have any ideas? Thanks, ---Dan---
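
    If csc.exe is only busy on that first click, the likely explanation is ASP.NET's on-demand compilation of pages, user controls, and App_Code on the first request after a deployment or app-domain restart. Precompiling the site at deployment time usually removes that first-hit delay, for example with the aspnet_compiler tool (the paths below are placeholders, not from the question):

        aspnet_compiler -v /MyApp -p C:\Sites\MyApp C:\Sites\MyApp_Precompiled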

    Read the article

  • Updating multiple Sprites - AS3 performance best practices

    - by dani
    Within the container "BubbleContainer" I have multiple "Bubble sprites". Each bubble's graphics object (a circle) is updated on a timer event. Let's say I have 50 Bubble sprites and each circle's radius should be updated with a mathematical formula. How do I organize this logic? How do I update all Bubble sprites within the BubbleContainer? (should I call a bubble.update() function or make a temporary reference to the graphics object?) Where do I put the Math logic? (as static functions?)

    Read the article

  • jQuery selector performance

    - by rahul
    I have the following two code blocks.

    Code block 1:

        var checkboxes = $("div.c1 > input:checkbox.c2", "#main");
        var totalCheckboxes = checkboxes.length;
        var checkedCheckboxes = checkboxes.filter(":checked").length;

    Code block 2:

        var totalCheckBoxes = $("div.c1 > input:checkbox.c2", "#main").length;
        var checkedCheckBoxes = $("div.c1 > input:checkbox.c2:checked", "#main").length;

    Which one of the above will be faster? Thanks, Rahul

    Read the article

  • Performance penalty of typecasting and boxing/unboxing types in C# when storing generic values

    - by kitsune
    I have a set-up similar to WPF's DependencyProperty and DependencyObject system. My properties, however, are generic. A BucketProperty has a static GlobalIndex (defined in BucketPropertyBase) which tracks all BucketProperties. A Bucket can have many BucketProperties of any type. A Bucket saves and gets the actual values of these BucketProperties... now my question is how to deal with the storage of these values, and what the penalty of using typecasting is when retrieving them. I currently use an array of BucketEntries that save the property values as simple objects. Is there any better way of saving and returning these values? Below is a simplified version:

        public class BucketProperty<T> : BucketPropertyBase { }

        public class Bucket
        {
            private BucketEntry[] _bucketEntries;

            public void SaveValue<T>(BucketProperty<T> property, T value)
            {
                SaveBucketEntry(property.GlobalIndex, value);
            }

            public T GetValue<T>(BucketProperty<T> property)
            {
                return (T)FindBucketEntry(property.GlobalIndex).Value;
            }
        }

        public class BucketEntry
        {
            private object _value;
            private uint _index;

            public BucketEntry(uint globalIndex, object value) { ... }
        }
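
    For what it's worth, the unboxing cast in GetValue is cheap by itself; the larger cost of object-based storage is usually the boxing allocation that happens on every SaveValue for value types. One option, sketched below (this is not the poster's code, the names only mirror it), is to keep each value inside a typed wrapper so retrieval is a reference cast rather than an unbox:

        using System.Collections.Generic;

        // Base type so entries of different value types can live in one collection.
        abstract class BucketEntryBase
        {
            public uint GlobalIndex;
        }

        // Typed entry: the value is stored unboxed inside the wrapper object.
        sealed class BucketEntry<T> : BucketEntryBase
        {
            public T Value;
        }

        class TypedBucket
        {
            private readonly Dictionary<uint, BucketEntryBase> _entries =
                new Dictionary<uint, BucketEntryBase>();

            public void SaveValue<T>(uint globalIndex, T value)
            {
                _entries[globalIndex] = new BucketEntry<T> { GlobalIndex = globalIndex, Value = value };
            }

            public T GetValue<T>(uint globalIndex)
            {
                // An InvalidCastException here means the stored and requested types differ.
                return ((BucketEntry<T>)_entries[globalIndex]).Value;
            }
        }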

    Read the article

  • Optimizing performance of large ASP.NET applications

    - by NLV
    Hello, I'm building an ASP.NET web application with lots and lots of controls and huge volumes of data. My application is very slow, and it takes a long time to load the data into the .NET controls like the grid, tree view, etc. I also have some Ajaxified pages and controls in my application. I want to reduce the page load time on each postback. What are the standards/best practices to be followed while developing large ASP.NET applications? Thank you. NLV

    Read the article

  • Elisp performance on Windows and Linux

    - by JasonFruit
    I have the following dead simple elisp functions; the first removes the fill breaks from the current paragraph, and the second loops through the current document applying the first to each paragraph in turn, in effect removing all single line-breaks from the document. It runs fast on my low-spec Puppy Linux box using Emacs 22.3 (10 seconds for 600 pages of Thomas Aquinas), but when I go to a powerful Windows XP machine with Emacs 21.3, it takes almost an hour to do the same document. What can I do to make it run as well on the Windows machine with Emacs 21.3?

        (defun remove-line-breaks ()
          "Remove line endings in a paragraph."
          (interactive)
          (let ((fill-column 90002000))
            (fill-paragraph nil)))

        (defun remove-all-line-breaks ()
          "Remove all single line-breaks in a document"
          (interactive)
          (while (not (= (point) (buffer-end 1)))
            (remove-line-breaks)
            (next-line 1)))

    Forgive my poor elisp; I'm having great fun learning Lisp and starting to use the power of Emacs, but I'm new to it yet.

    Read the article

  • WPF performance on scaling a large scene

    - by Mark
    I have a full screen app that I want to be able to zoom in on certain areas. I have the code working fine, but I notice that when I get closer in, the zoom-in animation (which animates the ScaleTransform.ScaleX and ScaleTransform.ScaleY properties on a parent canvas) starts to jerk a little and the frame rate suffers. I'm not using any BitmapEffects or anything, and ideally I would like my scene to get more complicated than it currently is. The scene is quite large, 1980x1024; this is a requirement and cannot be changed. The current layout is like this:

        <Canvas x:Name="LayoutRoot">
            <Canvas x:Name="ContainerCanvas">
                <local:MyControl x:Name="c1" />
                <!-- numerous other controls and elements that compose the scene -->
            </Canvas>
        </Canvas>

    The code that zooms in just animates the RenderTransform of the ContainerCanvas, which in turn scales its children, which gives the desired effect. However, I'm wondering if I need to swap out the ContainerCanvas for a ViewBox or something like that? I've never really worked with ViewBox/Viewport controls before in WPF - can they even help me out here? Smooth zooming is a huge requirement of the client and I must get this resolved. All ideas are welcome. Thanks a lot, Mark

    Read the article

  • Oracle performance problems with large batch of XSL operations

    - by FrustratedWithFormsDesigner
    I have a system that is performing many XSL transformations on XMLType objects. The problem is that the system gradually slows down over time, and sometimes crashes when it runs out of memory. It seems that the slowdown (and possibly the memory crash) is around the dbms_xslprocessor.processXSL function call, which gradually takes longer and longer to complete. The code looks like this:

        v_doc          dbms_xmldom.DOMDocument;
        v_transformer  dbms_xmldom.DOMDocument;
        v_XSLprocessor dbms_xslprocessor.Processor;
        v_stylesheet   dbms_xslprocessor.Stylesheet;
        v_clob         clob;
        ...
        transformer    := PKG_STUFF.getXSL();
        v_transformer  := dbms_xmldom.newDOMDocument(transformer);
        v_XSLprocessor := Dbms_Xslprocessor.newProcessor;
        v_stylesheet   := dbms_xslprocessor.newStylesheet(v_transformer, '');
        ...
        for source_data in (select id from source_tbl) loop
           begin
              v_doc := PKG_CONVERT.convert(in_id => source_data.id);
              --start time of operation
              v_begin_op_time := dbms_utility.get_time;
              --reset the CLOB
              v_clob := ' ';
              --Apply XSL Transform
              dbms_xslprocessor.processXSL(p      => v_XSLprocessor,
                                           ss     => v_stylesheet,
                                           xmldoc => v_Doc,
                                           cl     => v_clob);
              v_doc := dbms_xmldom.newDOMDocument(XMLType(v_clob));
              --end time
              v_end_op_time := dbms_utility.get_time;
              --calculate duration
              v_time_taken := (((v_end_op_time - v_begin_op_time)));
              --log the duration
              PKG_LOG.log_message('Time taken to transform XML: '||v_time_taken);
              ...
              ...
              DBMS_XMLDOM.freeDocument(v_Doc);
              DBMS_LOB.freetemporary(lob_loc => v_clob);
        end loop;

    The time taken to transform the XML is slowly creeping up (I suppose it might also be the call to dbms_xmldom.newDOMDocument, but I had thought that to be fairly straightforward). I have no idea why.... :( (Oracle 10g)

    Read the article

  • GXT Performance Issues

    - by pearl
    Hi All, we are working on a rather complex system using GXT. While everything works great on FF, IE (especially IE6) is a different story (looking at more than 10 seconds until the browser renders the page). I understand that one of the main reasons is DOM manipulation, which is a disaster under IE6 (see http://www.quirksmode.org/dom/innerhtml.html). This could be thought of as a generic problem of a front-end JavaScript framework (i.e. GWT), but a simple piece of code (see below) that executes the same functionality proves otherwise. In fact, under IE6, getSomeGWT() takes 400ms while getSomeGXT() takes 4 seconds. That's a x10 factor which makes a huge difference to the user experience!!!

        private HorizontalPanel getSomeGWT() {
            HorizontalPanel pointsLogoPanel = new HorizontalPanel();
            for (int i = 0; i < 350; i++) {
                HorizontalPanel innerContainer = new HorizontalPanel();
                innerContainer.add(new Label("some GWT text"));
                pointsLogoPanel.add(innerContainer);
            }
            return pointsLogoPanel;
        }

        private LayoutContainer getSomeGXT() {
            LayoutContainer pointsLogoPanel = new LayoutContainer();
            pointsLogoPanel.setLayoutOnChange(true);
            for (int i = 0; i < 350; i++) {
                LayoutContainer innerContainer = new LayoutContainer();
                innerContainer.add(new Text("just some text"));
                pointsLogoPanel.add(innerContainer);
            }
            return pointsLogoPanel;
        }

    So to solve/mitigate the issue one would need to either a. reduce the number of DOM manipulations, or b. replace them with innerHTML. AFAIK, (a) is simply a side effect of using GXT and (b) is only possible with UiBinder, which isn't supported yet by GXT. Any ideas? Thanks in advance!

    Read the article

  • Dictionary looping performance comparison

    - by Shimmy
    I have the following 3 options; I believe there are more:

        For Each entry In Me
        Next

        For i = 0 To Count
            Dim key = Keys(0)
            Dim value = Values(0)
        Next

        For Each Key In Keys
            Dim value = Me(Key)
        Next

    Personally, I think the For Each is best since the GetEnumerator is TKey, TValue based, but I don't know.
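
    For what it's worth, here is a quick C# sketch (the VB patterns map directly onto .NET's Dictionary(Of TKey, TValue)) that times the two enumeration styles against each other; the collection size and workload are illustrative only:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        // Compares enumerating KeyValuePairs directly against enumerating Keys
        // and doing a hash lookup per key. Timings vary by machine and contents.
        class DictionaryLoopBenchmark
        {
            static void Main()
            {
                var dict = new Dictionary<int, string>();
                for (int i = 0; i < 1000000; i++) dict[i] = "value" + i;

                var sw = Stopwatch.StartNew();
                long total = 0;
                foreach (KeyValuePair<int, string> entry in dict)
                    total += entry.Value.Length;            // single pass, no extra lookup
                Console.WriteLine("For Each pair: " + sw.ElapsedMilliseconds + " ms (" + total + ")");

                sw.Restart();
                total = 0;
                foreach (int key in dict.Keys)
                    total += dict[key].Length;              // hash lookup on every iteration
                Console.WriteLine("For Each key:  " + sw.ElapsedMilliseconds + " ms (" + total + ")");
            }
        }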

    Read the article
