Search Results

Search found 16126 results on 646 pages for 'wcf performance'.

Page 85/646 | < Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >

  • Update table instantly or “Bulk” Update in database later? And is it advisable?

    - by Mestika
    Hi, I have a question regarding a semi-constant update in a database. In short, it concerns a checkout function on a web page; each time the checkout function is invoked it performs five steps. I want to optimize this function and have my eye on a step where I update a table each time a checkout is performed: I take the information retrieved from the shopping cart and then update the table in question. I do have some indexes on the table; the gain from them outweighs the cost of maintaining them during updates, so this is a cost I'm willing to take. Now, my question is: could it, performance-wise, be better not to update the table instantly, but to collect the items from every checkout and save them somewhere (maybe in a file), and then at a specific time (or several times a day) take that file and update the table with the new information? That got me thinking about whether there is some sort of bulk update that could take a file, hashmap, array (or?) and apply it in one go. I'm using IBM DB2 version 9.7. Mestika
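
    A rough C# sketch (an illustration, not from the original question) of the "collect now, write later" idea: checkout rows are queued in memory and flushed in a single transaction by a scheduled job, which amortizes index maintenance over many rows. The row type, table and column names are made up, and the connection is assumed to be an already-open ADO.NET connection to DB2 (which uses '?' positional parameter markers).

        using System;
        using System.Collections.Concurrent;
        using System.Data;

        // Hypothetical row type standing in for one checkout's data.
        public record CheckoutRow(int OrderId, int ProductId, int Quantity);

        public class CheckoutBuffer
        {
            private readonly ConcurrentQueue<CheckoutRow> _pending = new();

            // Called from the checkout step instead of writing to the table immediately.
            public void Add(CheckoutRow row) => _pending.Enqueue(row);

            // Called by a scheduled job (for example a few times a day) to write
            // everything buffered so far in one transaction.
            public void Flush(IDbConnection connection) // connection assumed open
            {
                using var tx = connection.BeginTransaction();
                using var cmd = connection.CreateCommand();
                cmd.Transaction = tx;
                // Hypothetical table and columns; '?' is the positional marker DB2 uses.
                cmd.CommandText =
                    "INSERT INTO checkout_log (order_id, product_id, quantity) VALUES (?, ?, ?)";

                while (_pending.TryDequeue(out var row))
                {
                    cmd.Parameters.Clear();
                    AddParam(cmd, row.OrderId);
                    AddParam(cmd, row.ProductId);
                    AddParam(cmd, row.Quantity);
                    cmd.ExecuteNonQuery();
                }
                tx.Commit();
            }

            private static void AddParam(IDbCommand cmd, object value)
            {
                var p = cmd.CreateParameter();
                p.Value = value;
                cmd.Parameters.Add(p);
            }
        }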

    Read the article

  • SQL SERVER – What is Spatial Database? – Developing with SQL Server Spatial and Deep Dive into Spatial Indexing

    - by pinaldave
    What is a spatial database? A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. (Source: Wikipedia)

    Today I will be talking about the same subject at Microsoft TechEd India. If you want to learn about the spatial aspect of data and how to integrate it with SQL Server, this is the perfect session for you. Spatial is a very special concept in SQL Server and I really like how it is implemented. In general, performance tuning and query optimization is something I have always enjoyed in my professional life. Indexes are my best friends; many times by adding them, and many times by removing them, I have improved the performance of the system. In this session, I will be talking about indexes along with spatial data. As the spatial database is a very interesting concept, I will cover it in a super short but very interesting 10 quick slides. I will make sure that in the very first 20 minutes you understand the following topics:

    Introduction to Spatial Database
    One line definition
    Understanding Spatial Indexing
    Index Internals
    Query/Performance Tuning
    Query Hinting/Cost Analysis
    Spatial Index Catalog Views
    Performance Troubleshooting
    Finding Optimal Index using Spatial Index SP
    Common Errors
    Index Maintenance

    This slide deck will be followed by an approximately 30-minute demo telling the story of geometry, geography, index internals and performance tuning. If you are interested in learning how GIS works and how SQL Server supports these wonderful tools out of the box, you will really like how the story is told. I am sure all the people who attend the event will know how Bangalore is positioned on the map of India. I will take the example of Bangalore and Hyderabad and demonstrate how an index can improve performance. There are lots of stories to tell in the session, and I will open it with the beautiful script of Botticelli's Birth of Venus created by Michael J. Swart. I will also demonstrate a few real-life scenarios where spatial databases are used. Do not miss this session. At the end of the session a book will be awarded to the best participant.

    My session details:
    Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing
    Date: April 14, 2010
    Time: 5:00pm-6:00pm

    Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use spatial functionality in the next version of SQL Server to build and optimize spatial queries: use the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.
Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Speaking at Microsoft's Dutch DevDays

    - by gsusx
    Last week I had the pleasure of presenting two sessions at Microsoft's Dutch DevDays in The Hague. On Tuesday I presented a session about how to implement real-world RESTful service patterns using WCF, WCF Data Services and ASP.NET MVC2. During that session I showed a total of 15 small demos that highlighted how to implement key aspects of RESTful solutions such as security, LowREST clients, URI modeling, validation, error handling, etc. As part of those demos I used the OAuth implementation created...(read more)

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble: This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.

    Why Columnstore? As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows (DW, reporting, aggregations, grouping, scans, etc.), SQL Server has never had a good mechanism, until columnstore. Columnstore indexes were introduced in SQL Server 2012, yet they're still largely unknown. Some adoption blockers existed, but columnstore was nonetheless a game changer for many apps. In SQL Server 2014 the potential blockers have been largely removed, and columnstore is going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore and to document why it is a compelling reason to upgrade to SQL Server 2014.

    The Customer: DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries. DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

    SSRS, SSAS, & MDX: Conventional relational structures were unable to provide adequate performance for user interaction with the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries: Even though the fact table is relatively small (only 22 million rows & 33 GB), the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

    The Details: Classic vs. columnstore before-&-after metrics are impressive.

    Scenario       | Conventional Structures           | Columnstore   | Improvement
    SSRS via SSAS  | 10 - 12 seconds                   | 1 second      | >10x
    Ad Hoc         | 5 - 7 minutes (300 - 420 seconds) | 1 - 2 seconds | >100x

    Here are two charts characterizing this data graphically. The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.
    The Wins

    Performance: Even prior to the columnstore implementation, canned report performance of 10 - 12 seconds against the SSAS cube was tolerable. Yet the 1 second performance afterward is clearly better. As significant as that is, imagine the user experience for ad hoc interrogation. The difference between several minutes and one or two seconds is a game changer, literally changing the way users interact with their data: no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators. As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.

    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.

    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014.

    DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways."

    Summary: For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, and for which ad hoc query times dropped from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.

    Read the article

  • ADO.NET performance: What does SNIReadSync do?

    - by Beatles1692
    We have a query that takes 2 seconds to run in SQL Server Management Studio, but it takes 13 seconds before the results show up on the client's screen. I used dotTrace to profile my source code and noticed there is this SNIReadSync method (part of the ADO.NET assemblies) that takes a lot of time to do its job (9 seconds). I ran my code on the server itself so I could rule out network effects, and the result was the same. It doesn't matter if I'm using OleDbConnection or SqlConnection. It doesn't matter if I'm using a DataReader or a DataSet. Connection pooling does not solve this issue (as my results show). I googled this issue and couldn't find an answer to what this method is actually doing and how we can improve it. Here's what I found on Stack Overflow, which isn't helpful either: http://stackoverflow.com/questions/1610874/snireadsync-executing-between-120-500-ms-for-a-simple-query-what-do-i-look-for
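
    A minimal sketch (not from the original question) of one way to narrow down where the time goes, by timing query execution separately from pulling the result set over the connection; the connection string and query are placeholders:

        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;

        class QueryTiming
        {
            static void Main()
            {
                // Placeholder connection string and query.
                const string connectionString = "Server=.;Database=MyDb;Integrated Security=true";
                const string sql = "SELECT * FROM dbo.MyTable";

                using var connection = new SqlConnection(connectionString);
                connection.Open();

                using var command = new SqlCommand(sql, connection);

                var sw = Stopwatch.StartNew();
                using var reader = command.ExecuteReader();   // time until the first results are available
                Console.WriteLine($"ExecuteReader: {sw.ElapsedMilliseconds} ms");

                sw.Restart();
                var rows = 0;
                while (reader.Read())                         // time spent pulling rows from the server
                {
                    rows++;
                }
                Console.WriteLine($"Read {rows} rows in {sw.ElapsedMilliseconds} ms");
            }
        }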

    Read the article

  • C# XDocument Attribute Performance Concerns

    - by Dested
    I have a loaded XDocument from which I need to efficiently grab all the elements of a certain name whose attribute equals a certain value. My current code:

        IEnumerable<XElement> vm;
        if (!cacher2.TryGetValue(name, out vm))
        {
            vm = project.Descendants(XName.Get(name));
            cacher2.Add(name, vm);
        }
        XElement[] abdl = vm.Where(a => a.Attribute(attribute).Value == ab).ToArray();

    cacher2 is a Dictionary<string, IEnumerable<XElement>>. The ToArray is so I can evaluate the expression now; I don't think this causes any real speed concerns. The problem is the Where itself. I am searching through anywhere from 1 to 10k items. Any help?
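
    One commonly suggested tweak, sketched here as an illustration and not taken from the original post: because Descendants returns a lazy query, caching the un-materialized IEnumerable means the document is re-walked every time the cached entry is used. Materializing once per element name keeps the later Where calls working over an in-memory list. The names mirror the snippet above:

        using System.Collections.Generic;
        using System.Linq;
        using System.Xml.Linq;

        static class ElementCache
        {
            // One materialized list per element name, so Descendants() only walks
            // the document the first time a given name is requested.
            private static readonly Dictionary<string, List<XElement>> cache = new();

            public static XElement[] Find(XDocument project, string name,
                                          string attribute, string value)
            {
                if (!cache.TryGetValue(name, out var elements))
                {
                    elements = project.Descendants(XName.Get(name)).ToList();
                    cache.Add(name, elements);
                }

                // Attribute() returns null when the attribute is absent; the (string)
                // cast turns that into a null string instead of throwing.
                return elements
                    .Where(e => (string)e.Attribute(attribute) == value)
                    .ToArray();
            }
        }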

    Read the article

  • Improving long-polling Ajax performance

    - by Bears will eat you
    I'm writing a webapp (Firefox-compatible only) which uses long polling (via jQuery's ajax abilities) to send more-or-less constant updates from the server to the client. I'm concerned about the effects of leaving this running for long periods of time, say, all day or overnight. The basic code skeleton is this:

        function processResults(xml) {
            // do stuff with the xml from the server
        }

        function fetch() {
            setTimeout(function () {
                $.ajax({
                    type: 'GET',
                    url: 'foo/bar/baz',
                    dataType: 'xml',
                    success: function (xml) {
                        processResults(xml);
                        fetch();
                    },
                    error: function (xhr, type, exception) {
                        if (xhr.status === 0) {
                            console.log('XMLHttpRequest cancelled');
                        } else {
                            console.debug(xhr);
                            fetch();
                        }
                    }
                });
            }, 500);
        }

    (The half-second "sleep" is so that the client doesn't hammer the server if the updates are coming back to the client quickly - which they usually are.)

    After leaving this running overnight, it tends to make Firefox crawl. I'd been thinking that this could be partially caused by a large stack depth since I've basically written an infinitely recursive function. However, if I use Firebug and throw a breakpoint into fetch, it looks like this is not the case. The stack that Firebug shows me is only about 4 or 5 frames deep, even after an hour.

    One of the solutions I'm considering is changing my recursive function to an iterative one, but I can't figure out how I would insert the delay in between Ajax requests without spinning. I've looked at the JS 1.7 "yield" keyword but I can't quite wrap my head around it, to figure out if it's what I need here.

    Is the best solution just to do a hard refresh on the page periodically, say, once every hour? Is there a better/leaner long-polling design pattern that won't put a hurt on the browser even after running for 8 or 12 hours? Or should I just skip the long polling altogether and use a different "constant update" pattern since I usually know how frequently the server will have a response for me?

    Read the article

  • Slow Performance -- ASP.NET ASPNET_WP.EXE and CSC.EXE Running After Clicking Redirect Link

    - by Dan7el
    I click on a link from one page that does a redirect to another page (Response.Redirect(page.aspx)). The browser churns for about 30 seconds and the page displays. I'm trying to track down why it takes so long to load the page.

    The page hosts two other custom controls. I have commented out the code for each control individually and for both together, and the page still takes about 30 seconds to load. I've set breakpoints on the Page_Load event for each of the controls as well as page.aspx, and it also takes about 30 seconds from clicking the link with the Response.Redirect to hitting the first breakpoint.

    I loaded up Task Manager and clicked on the link. I notice aspnet_wp.exe and csc.exe run during this 30-second time frame. I'm wondering if there are some sort of code-behind shenanigans going on while I'm waiting for the page to load. This only occurs the first time I click on the link; afterwards, it's not as slow. I've googled but there's not a lot of useful information about this. Anyone have any ideas? Thanks, ---Dan---

    Read the article

  • MySQL query performance - 100Mb ethernet vs 1Gb ethernet

    - by Rob Penridge
    Hi all, I've just started a new job and noticed that the analysts' computers are connected to the network at 100 Mbps. The queries we run against the MySQL server can easily return 500 MB+ of data, and it seems that at times, when the servers are under high load, the DBAs kill low-priority jobs because they are taking too long to run. My question is this: how much of this time is spent by the server executing the request, and how much is spent returning the data to the client? Could the query speeds be improved by upgrading the network connections to 1 Gbps? Thanks, Rob
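
    As a rough back-of-the-envelope check (not from the original question), the wire-transfer time alone for a 500 MB result set, ignoring protocol overhead and assuming the link is otherwise idle, works out to:

        500 MB = 500 x 8 = 4,000 megabits
        at 100 Mbps:  4,000 / 100   = 40 seconds minimum on the wire
        at 1 Gbps:    4,000 / 1,000 =  4 seconds minimum on the wire

    Any time beyond that is spent by the server executing the query and serializing rows, which a faster link will not improve.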

    Read the article

  • Updating multiple Sprites - AS3 performance best practices

    - by dani
    Within the container "BubbleContainer" I have multiple "Bubble sprites". Each bubble's graphics object (a circle) is updated on a timer event. Let's say I have 50 Bubble sprites and each circle's radius should be updated with a mathematical formula. How do I organize this logic? How do I update all Bubble sprites within the BubbleContainer? (should I call a bubble.update() function or make a temporary reference to the graphics object?) Where do I put the Math logic? (as static functions?)

    Read the article

  • jQuery selector performance

    - by rahul
    I have the following two code blocks.

    Code block 1:

        var checkboxes = $("div.c1 > input:checkbox.c2", "#main");
        var totalCheckboxes = checkboxes.length;
        var checkedCheckboxes = checkboxes.filter(":checked").length;

    Code block 2:

        var totalCheckBoxes = $("div.c1 > input:checkbox.c2", "#main").length;
        var checkedCheckBoxes = $("div.c1 > input:checkbox.c2:checked", "#main").length;

    Which one of the above will be faster? Thanks, Rahul

    Read the article

  • Performance penalty of typecasting and boxing/unboxing types in C# when storing generic values

    - by kitsune
    I have a set-up similar to WPF's DependencyProperty and DependencyObject system. My properties, however, are generic. A BucketProperty has a static GlobalIndex (defined in BucketPropertyBase) which tracks all BucketProperties. A Bucket can have many BucketProperties of any type. A Bucket saves and gets the actual values of these BucketProperties. Now my question is: how should I deal with the storage of these values, and what is the penalty of typecasting when retrieving them? I currently use an array of BucketEntries that save the property values as plain objects. Is there any better way of saving and returning these values? Below is a simplified version:

        public class BucketProperty<T> : BucketPropertyBase { }

        public class Bucket
        {
            private BucketEntry[] _bucketEntries;

            public void SaveValue<T>(BucketProperty<T> property, T value)
            {
                SaveBucketEntry(property.GlobalIndex, value);
            }

            public T GetValue<T>(BucketProperty<T> property)
            {
                return (T)FindBucketEntry(property.GlobalIndex).Value;
            }
        }

        public class BucketEntry
        {
            private object _value;
            private uint _index;

            public BucketEntry(uint globalIndex, object value) { ... }
        }
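
    Since the concern is the cost of going through object, one commonly used alternative is worth sketching (an illustration, not from the original post): keep a non-generic base entry for storage, but make the concrete entry generic so value types live in a typed field. Saving then allocates no box, and retrieval is a reference cast of the entry rather than an unbox of the value. The names mirror the snippet above; the GlobalIndex tracking and the lookup are simplified:

        using System.Collections.Generic;

        public abstract class BucketPropertyBase
        {
            // Simplified stand-in for the GlobalIndex tracking described above.
            public uint GlobalIndex { get; set; }
        }

        public class BucketProperty<T> : BucketPropertyBase { }

        public abstract class BucketEntryBase { }

        // The concrete entry is generic, so a value type is stored in a typed
        // field and is never boxed on the way in.
        public sealed class BucketEntry<T> : BucketEntryBase
        {
            public T Value;
            public BucketEntry(T value) { Value = value; }
        }

        public class Bucket
        {
            private readonly Dictionary<uint, BucketEntryBase> _entries = new();

            public void SaveValue<T>(BucketProperty<T> property, T value)
            {
                _entries[property.GlobalIndex] = new BucketEntry<T>(value);
            }

            public T GetValue<T>(BucketProperty<T> property)
            {
                // Reference cast of the entry object; the stored value is not unboxed.
                return ((BucketEntry<T>)_entries[property.GlobalIndex]).Value;
            }
        }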

    Read the article

  • Optimizing performance of large ASP.NET applications

    - by NLV
    Hello, I'm building an ASP.NET web application with lots and lots of controls and huge volumes of data. My application is very slow, and it takes a large amount of time to load the data into .NET controls like grids, tree views, etc. I also have some Ajaxified pages and controls in my application. I want to reduce the page load time on each postback. What are the standards/best practices to be followed while developing large ASP.NET applications? Thank you. NLV

    Read the article

  • high performance hibernate insert

    - by luke
    I am working on a latency-sensitive part of an application; basically I will receive a network event, transform the data and then insert it all into the DB. After profiling I see that basically all my time is spent trying to save the data. Here is the code:

        private void insertAllData(Collection<Data> dataItems) {
            long start_time = System.currentTimeMillis();
            long save_time = 0;
            long commit_time = 0;
            Transaction tx = null;
            try {
                Session s = HibernateSessionFactory.getSession();
                s.setCacheMode(CacheMode.IGNORE);
                s.setFlushMode(FlushMode.NEVER);
                tx = s.beginTransaction();
                for (Data data : dataItems) {
                    s.saveOrUpdate(data);
                }
                save_time = System.currentTimeMillis();
                tx.commit();
                s.flush();
                s.clear();
            } catch (HibernateException ex) {
                if (tx != null) tx.rollback();
            }
            commit_time = System.currentTimeMillis();
            System.out.println("Save: " + (save_time - start_time));
            System.out.println("Commit: " + (commit_time - save_time));
            System.out.println();
        }

    The size of the collection is always less than 20. Here is the timing data that I see:

        Save: 27  Commit: 9
        Save: 27  Commit: 9
        Save: 26  Commit: 9
        Save: 36  Commit: 9
        Save: 44  Commit: 0

    This is confusing to me. I figured that the save should be quick and all the time should be spent on commit, but clearly I'm wrong. I have also tried removing the transaction (it's not really necessary) but I saw worse times. I have set hibernate.jdbc.batch_size=20. I need this operation to be as fast as possible; ideally there would only be one round trip to the database. How can I do this?

    Read the article

  • C# Confusing Results from Performance Test

    - by aip.cd.aish
    I am currently working on an image processing application. The application captures images from a webcam and then does some processing on them. The app needs to be responsive in real time (ideally < 50 ms to process each request). I have been doing some timing tests on the code I have, and I found something very interesting (see below).

        clearLog();
        log("Log cleared");
        camera.QueryFrame();
        camera.QueryFrame();
        log("Camera buffer cleared");
        Sensor s = t.val;
        log("Sx: " + S.X + " Sy: " + S.Y);
        Image<Bgr, Byte> cameraImage = camera.QueryFrame();
        log("Camera output acuired for processing");

    Each time log is called, the time since the beginning of the processing is displayed. Here is my log output:

        [3 ms]   Log cleared
        [41 ms]  Camera buffer cleared
        [41 ms]  Sx: 589 Sy: 414
        [112 ms] Camera output acuired for processing

    The timings are computed using a Stopwatch from System.Diagnostics.

    QUESTION 1: I find this slightly interesting, since when the same method is called twice it executes in ~40 ms, and the single call the next time took longer (~70 ms). Assigning the value can't really be taking that long, right?

    QUESTION 2: Also, the timing for each step recorded above varies from run to run. The values for some steps are sometimes as low as 0 ms and sometimes as high as 100 ms, though most of the numbers seem to be relatively consistent. I guess this may be because the CPU was used by some other process in the meantime? (If this is for some other reason, please let me know.) Is there some way to ensure that when this function runs, it gets the highest priority, so that the speed test results will be consistently low (in terms of time)?

    EDIT: I changed the code to remove the two blank query frames from above, so the code is now:

        clearLog();
        log("Log cleared");
        Sensor s = t.val;
        log("Sx: " + S.X + " Sy: " + S.Y);
        Image<Bgr, Byte> cameraImage = camera.QueryFrame();
        log("Camera output acuired for processing");

    The timing results are now:

        [2 ms] Log cleared
        [3 ms] Sx: 589 Sy: 414
        [5 ms] Camera output acuired for processing

    The next steps now take longer (sometimes a step jumps to 20-30 ms that was previously almost instantaneous). I am guessing this is due to CPU scheduling. Is there some way I can ensure the CPU does not get scheduled to do something else while it is running through this code?
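
    On the priority question, a minimal sketch (an assumption about the intent, not code from the original post) of raising the process and thread priority around the measured section. This reduces the chance of being pre-empted but cannot eliminate scheduling effects entirely, and high priorities should be used sparingly:

        using System.Diagnostics;
        using System.Threading;

        // Raise priority only around the timed section, then restore it.
        var process = Process.GetCurrentProcess();
        var oldProcessPriority = process.PriorityClass;
        var oldThreadPriority = Thread.CurrentThread.Priority;
        try
        {
            process.PriorityClass = ProcessPriorityClass.High;
            Thread.CurrentThread.Priority = ThreadPriority.Highest;

            // ... run the capture/processing code being measured ...
        }
        finally
        {
            process.PriorityClass = oldProcessPriority;
            Thread.CurrentThread.Priority = oldThreadPriority;
        }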

    Read the article

  • Elisp performance on Windows and Linux

    - by JasonFruit
    I have the following dead simple elisp functions; the first removes the fill breaks from the current paragraph, and the second loops through the current document applying the first to each paragraph in turn, in effect removing all single line-breaks from the document. It runs fast on my low-spec Puppy Linux box using emacs 22.3 (10 seconds for 600 pages of Thomas Aquinas), but when I go to a powerful Windows XP machine with emacs 21.3, it takes almost an hour to do the same document. What can I do to make it run as well on the Windows machine with emacs 21.3?

        (defun remove-line-breaks ()
          "Remove line endings in a paragraph."
          (interactive)
          (let ((fill-column 90002000))
            (fill-paragraph nil)))

        (defun remove-all-line-breaks ()
          "Remove all single line-breaks in a document"
          (interactive)
          (while (not (= (point) (buffer-end 1)))
            (remove-line-breaks)
            (next-line 1)))

    Forgive my poor elisp; I'm having great fun learning Lisp and starting to use the power of emacs, but I'm new to it yet.

    Read the article

  • WPF performance on scaling a large scene

    - by Mark
    I have a full-screen app that I want to be able to zoom in on certain areas. I have the code working fine, but I notice that when I get closer in, the zoom-in animation (which animates the ScaleTransform.ScaleX and ScaleTransform.ScaleY properties on a parent canvas) starts to jerk a little and the frame rate suffers. I'm not using any BitmapEffects or anything, and ideally I would like my scene to get more complicated than it currently is. The scene is quite large, 1980x1024; this is a requirement and cannot be changed. The current layout is like this:

        <Canvas x:Name="LayoutRoot">
            <Canvas x:Name="ContainerCanvas">
                <local:MyControl x:Name="c1" />
                <!-- numerous other controls and elements that compose the scene -->
            </Canvas>
        </Canvas>

    The code that zooms in just animates the RenderTransform of the ContainerCanvas, which in turn scales its children, which gives the desired effect. However, I'm wondering if I need to swap out the ContainerCanvas for a ViewBox or something like that? I've never really worked with ViewBox/Viewport controls in WPF before; can they even help me out here? Smooth zooming is a huge requirement of the client and I must get this resolved. All ideas are welcome. Thanks a lot, Mark

    Read the article

  • Oracle performance problems with large batch of XSL operations

    - by FrustratedWithFormsDesigner
    I have a system that is performing many XSL transformations on XMLType objects. The problem is that the system gradually slows down over time, and sometimes crashes when it runs out of memory. It seems that the slowdown (and possibly the memory crash) is around the dbms_xslprocessor.processXSL function call, which gradually takes longer and longer to complete. The code looks like this:

        v_doc          dbms_xmldom.DOMDocument;
        v_transformer  dbms_xmldom.DOMDocument;
        v_XSLprocessor dbms_xslprocessor.Processor;
        v_stylesheet   dbms_xslprocessor.Stylesheet;
        v_clob         clob;
        ...
        transformer    := PKG_STUFF.getXSL();
        v_transformer  := dbms_xmldom.newDOMDocument(transformer);
        v_XSLprocessor := Dbms_Xslprocessor.newProcessor;
        v_stylesheet   := dbms_xslprocessor.newStylesheet(v_transformer, '');
        ...
        for source_data in (select id from source_tbl) loop
          begin
            v_doc := PKG_CONVERT.convert(in_id => source_data.id);
            --start time of operation
            v_begin_op_time := dbms_utility.get_time;
            --reset the CLOB
            v_clob := ' ';
            --Apply XSL Transform
            dbms_xslprocessor.processXSL(p      => v_XSLprocessor,
                                         ss     => v_stylesheet,
                                         xmldoc => v_Doc,
                                         cl     => v_clob);
            v_doc := dbms_xmldom.newDOMDocument(XMLType(v_clob));
            --end time
            v_end_op_time := dbms_utility.get_time;
            --calculate duration
            v_time_taken := (((v_end_op_time - v_begin_op_time)));
            --log the duration
            PKG_LOG.log_message('Time taken to transform XML: '||v_time_taken);
            ...
            ...
            DBMS_XMLDOM.freeDocument(v_Doc);
            DBMS_LOB.freetemporary(lob_loc => v_clob);
        end loop;

    The time taken to transform the XML is slowly creeping up (I suppose it might also be the call to dbms_xmldom.newDOMDocument, but I had thought that to be fairly straightforward). I have no idea why... :( (Oracle 10g)

    Read the article

  • GXT Performance Issues

    - by pearl
    Hi all, we are working on a rather complex system using GXT. While everything works great on FF, IE (especially IE6) is a different story (we are looking at more than 10 seconds until the browser renders the page). I understand that one of the main reasons is DOM manipulation, which is a disaster under IE6 (see http://www.quirksmode.org/dom/innerhtml.html). This might be thought of as a generic problem of any front-end JavaScript framework (i.e. GWT), but simple code (see below) that executes the same functionality proves otherwise. In fact, under IE6, getSomeGWT() takes 400 ms while getSomeGXT() takes 4 seconds. That's a 10x factor which makes a huge difference for the user experience!

        private HorizontalPanel getSomeGWT() {
            HorizontalPanel pointsLogoPanel = new HorizontalPanel();
            for (int i = 0; i < 350; i++) {
                HorizontalPanel innerContainer = new HorizontalPanel();
                innerContainer.add(new Label("some GWT text"));
                pointsLogoPanel.add(innerContainer);
            }
            return pointsLogoPanel;
        }

        private LayoutContainer getSomeGXT() {
            LayoutContainer pointsLogoPanel = new LayoutContainer();
            pointsLogoPanel.setLayoutOnChange(true);
            for (int i = 0; i < 350; i++) {
                LayoutContainer innerContainer = new LayoutContainer();
                innerContainer.add(new Text("just some text"));
                pointsLogoPanel.add(innerContainer);
            }
            return pointsLogoPanel;
        }

    So to solve/mitigate the issue one would need to either:
    a. reduce the number of DOM manipulations; or
    b. replace them with innerHTML.

    AFAIK, (a) is simply a side effect of using GXT and (b) is only possible with UiBinder, which isn't supported yet by GXT. Any ideas? Thanks in advance!

    Read the article

  • Dictionary looping performance comparison

    - by Shimmy
    I have the following 3 options; I believe there are more:

        For Each entry In Me
        Next

        For i = 0 To Count
            Dim key = Keys(0)
            Dim value = Values(0)
        Next

        For Each Key In Keys
            Dim value = Me(Key)
        Next

    Personally, I think the For Each is best since the GetEnumerator is TKey, TValue based, but I dunno.

    Read the article

  • Javamail performance

    - by cbz
    Hi, I've been using JavaMail to retrieve mails from an IMAP server (currently GMail). JavaMail retrieves the list of messages (only IDs) in a particular folder from the server very fast, but when I actually fetch a message (only the envelope, not even the contents) it takes around 1 to 2 seconds per message. What techniques should be used for fast retrieval?

    Read the article

  • SQLite self-join performance

    - by Derk
    What I essentially want is to retrieve all features and values of products which have a particular feature and value. For example: I want to know all available hard drive sizes of products that have an Intel processor. I have three tables:

        product_to_value (product_id, feature_id, value_id)
        features (id, value)   // for example Processor family, Storage size, etc.
        values (id, value)     // for example Intel, 60GB, etc.

    The simplified query I have now:

        SELECT features.name, featurevalues.name, featurevalues.value
        FROM products, products as prod2,
             features, features as feat2,
             values, values as val2
        WHERE products.feature = features.id
          AND products.value = values.id
          AND products.product = prod2.product
          AND prod2.feature_id = feat2.id
          AND prod2.value_id = val2.id
          AND features.id = ?
          AND feat2.id = ?

    All columns have an index. I am using SQLite. The problem is that it's very slow (70 ms per query; without the self-join it's <1 ms). Is there a smarter way to fetch data like this? Or is this too much to ask of SQLite? I personally think I am simply overlooking something, as I am quite new to SQLite.

    Read the article

  • Performance problem with System.Net.Mail

    - by Saif Khan
    I have this unusual problem with mailing from my app. At first it wasn't working (getting "unable to relay" error crap); anyway, I added the proper authentication and it works. My problem now is: if I try to send around 300 emails (each with a 500 KB attachment), the app starts hanging around 95% of the way through the process. Here is some of my code, which is called for each mail to be sent:

        Using mail As New MailMessage()
            With mail
                .From = New MailAddress(My.Resources.EmailFrom)
                For Each contact As Contact In Contacts
                    .To.Add(contact.Email)
                Next
                .Subject = "Accounting"
                .Body = My.Resources.EmailBody
                'Back the stream up to the beginning or else the attachment
                'will be sent as a zero (0) byte file.
                attachment.Seek(0, SeekOrigin.Begin)
                .Attachments.Add(New Attachment(attachment, String.Concat(Item.Year, Item.AttachmentType.Extension)))
            End With

            Dim smtp As New SmtpClient("192.168.1.2")
            With smtp
                .DeliveryMethod = SmtpDeliveryMethod.Network
                .UseDefaultCredentials = False
                .Credentials = New NetworkCredential("username", "password")
                .Send(mail)
            End With
        End Using

        With item
            .SentStatus = True
            .DateSent = DateTime.Now.Date
            .Save()
        End With

        Return

    I was thinking: can I just prepare all the mails and add them to a collection, then open one SMTP connection and just iterate the collection, calling Send, like this:

        Using mail As New MailMessage()
            ...
            MailCollection.Add(mail)
        End Using
        ...
        Dim smtp As New SmtpClient("192.168.1.2")
        With smtp
            .DeliveryMethod = SmtpDeliveryMethod.Network
            .UseDefaultCredentials = False
            .Credentials = New NetworkCredential("username", "password")
            For Each mail In MailCollection
                .Send(mail)
            Next
        End With
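
    A C# sketch of the "one client, many messages" idea the question describes (an illustration, not from the original post; the server address and credentials are placeholders). One detail worth noting: MailMessage and its attachments hold streams, so disposing each message after sending keeps memory flat over a run of 300 large attachments.

        using System.Collections.Generic;
        using System.Net;
        using System.Net.Mail;

        static class Mailer
        {
            public static void SendAll(IEnumerable<MailMessage> messages)
            {
                using (var smtp = new SmtpClient("192.168.1.2"))
                {
                    smtp.DeliveryMethod = SmtpDeliveryMethod.Network;
                    smtp.UseDefaultCredentials = false;
                    smtp.Credentials = new NetworkCredential("username", "password");

                    foreach (var mail in messages)
                    {
                        // Dispose each message (and its attachment streams) once sent.
                        using (mail)
                        {
                            smtp.Send(mail);
                        }
                    }
                }
            }
        }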

    Read the article

  • Horrible WPF performance!

    - by Erik
    Why am I using over 80% CPU just from hovering over some links? As you can see in the video I uploaded (http://www.youtube.com/watch?v=3ALF9NquTRE), the CPU goes to 80% when I move my mouse over the links. My style for the items is as follows:

        <Style x:Key="LinkStyle" TargetType="{x:Type Hyperlink}">
            <Style.Triggers>
                <Trigger Property="IsMouseOver" Value="True">
                    <Setter Property="Foreground" Value="White" />
                </Trigger>
            </Style.Triggers>
            <Setter Property="TextBlock.TextDecorations" Value="{x:Null}" />
            <Setter Property="Foreground" Value="#FFDDDDDD"/>
            <Setter Property="Cursor" Value="Arrow" />
        </Style>

    Why?

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch on SAN scenarios, so my question is as follows.

    I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning.

    So, in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you

    Read the article
