Search Results

Search found 18148 results on 726 pages for 'performance monitor'.


  • What does the "Maximum Frequency" number mean in the Windows Resource Monitor?

    - by nhinkle
    In the Windows Resource Monitor's CPU tab, there is a status box and graph for "Maximum Frequency", right next to the "CPU Usage" values. What does this mean? The value sometimes exceeds 100% on my system... what could that imply? Judging by CPU-Z's real-time report of the processor's clock speed, it seems loosely related to the frequency the CPU is running at, which would suggest it means "percent of the maximum possible frequency the CPU is currently running at"; this would be relevant on systems with SpeedStep and/or Turbo Boost technology (or similar). Furthermore, setting the system to "power saving" mode lowers the "Maximum Frequency" value to around 60%, while setting it to "high performance" mode raises it to around 110%. However, the percentage does not seem to correlate exactly with the CPU speed being shown. What value is this actually representing, then?
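
    For what it's worth, the figure Resource Monitor displays appears to match the "% of Maximum Frequency" counter in the "Processor Information" performance-counter category (present on Windows 7 and later). A minimal C# sketch that samples it - treat the category and counter names as an assumption to verify on your own machine:

        // Samples the "% of Maximum Frequency" counter, which appears to be
        // what Resource Monitor's "Maximum Frequency" box shows. Values above
        // 100% are plausible with Turbo Boost; values below with SpeedStep.
        using System;
        using System.Diagnostics;
        using System.Threading;

        class MaxFrequencySample
        {
            static void Main()
            {
                using (var counter = new PerformanceCounter(
                    "Processor Information", "% of Maximum Frequency", "_Total"))
                {
                    counter.NextValue();    // first sample primes the counter
                    Thread.Sleep(1000);
                    Console.WriteLine("{0:F0}% of maximum frequency", counter.NextValue());
                }
            }
        }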

    Read the article

  • What is the largest flatscreen monitor available for PC use?

    - by Avery Payne
    I'll qualify this specifically (in order of preference):
    - It must have the highest diagonal measurement; widescreen or "normal" aspect ratio doesn't matter here, just the diagonal.
    - It must have the highest resolution available, which means 72 inches of 1280x1024 won't cut it.
    - It must not have a TV tuner built into it; I'm not looking for a TV set, this is a monitor!
    - It must be available at a retail outlet that caters to the general public, e.g. Best Buy, Sears, Costco (these examples are in the U.S., although you can suggest something from whatever chain is in your area/nation/geography). Non-retail or non-physical venues like eBay, or businesses that only cater to other businesses, do not qualify. I should be able to walk into the place and purchase it, not just whip up an order online. If you are unsure about this requirement, just ask yourself: can I physically see it before I open my wallet and purchase it?

    Read the article

  • Has anyone tried the "Secret LCD Monitor" hack? [closed]

    - by cornjuliox
    I'm genuinely curious: has anyone tried this hack? I can get LCD monitors cheap at a place near where I live, and I'd like to try it myself, but I'd like to get more info before I do, to increase my chances of success. I'm looking for more info on the entire process, especially about any solvents I can use should I run into glue problems. Questions for anyone that HAS tried it:
    - Does it actually work, or is this some gag?
    - If it works, is there any decrease in image quality or viewing angles?
    - Since the polarization filter is essentially stuck to the glasses, does that mean you have to sit directly in front of the monitor at all times, and any shift in your position means you won't be able to see the image?
    - Does it improve/worsen ghosting or other LCD artifacts?
    - Are there any problems with eye strain?

    Read the article

  • When computer turns on: keyboard/mouse/monitor don't work?

    - by dave
    The computer is fairly good, at least I think so. Specs: Intel Core 2, 4 GB RAM, Nvidia Radeon 9600 (512 MB), Gigabyte motherboard (the others I can't remember). I used this computer for 3 years with no problem. The trouble started one day after it froze completely and I shut it down by cutting the power. A few hours later I tried to boot the computer, but nothing moved: neither keyboard nor mouse nor monitor worked. I thought there might be a delay, so I left it for some time. After about a minute it turned itself off, stayed off for around 30 seconds, then turned on automatically, with the same problem as before, and this cycle repeats again and again. Any suggestions? If more info/specs are required, let me know.

    Read the article

  • How can I monitor network usage by process on Mac OS X?

    - by psmith
    Is there any way to find out which process is using how much internet bandwidth on Mac OS X Lion? I'm on mobile internet now, which is not very fast, so it would be nice if I could tell that, for example, Chrome is using 10 kB/s and Skype 2 kB/s. I can see the total amount of traffic in Activity Monitor, but that is not enough for me. I'd like to use an existing application; I'm not interested in writing an app like this myself. And I'm not interested in the actual traffic contents, only the number of bytes sent and received by each process.

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble
    This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck, along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.

    Why Columnstore?
    As stated previously, if we're looking for a subset of columns from one or a few rows, then given the right indexes, SQL Server can do a superlative job of providing an answer. But if we're asking a question which by design needs to hit lots of rows (DW, reporting, aggregations, grouping, scans, etc.), SQL Server never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012, yet they're still largely unknown. Some adoption blockers existed; columnstore was nonetheless a game changer for many apps. In SQL Server 2014 the potential blockers have been largely removed, & columnstore is going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore; documenting them is a compelling reason to upgrade to SQL Server 2014.

    The Customer
    DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries
    DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

    SSRS, SSAS, & MDX
    Conventional relational structures were unable to provide adequate performance for user interaction with the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries
    Even though the fact table is relatively small (only 22 million rows & 33 GB), the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

    The Details
    Classic vs. columnstore before-&-after metrics are impressive:

        Scenario        Conventional Structures             Columnstore     Improvement
        SSRS via SSAS   10 - 12 seconds                     1 second        >10x
        Ad Hoc          5 - 7 minutes (300 - 420 seconds)   1 - 2 seconds   >100x

    Two charts characterize this data graphically. The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics. To make it fair, the same data can be represented logarithmically; yet even there the values corresponding to 1 - 2 seconds aren't visible.

    The Wins
    Performance: Even prior to the columnstore implementation, canned report performance of 10 - 12 seconds against the SSAS cube was tolerable; the 1-second performance afterward is clearly better. As significant as that is, imagine the user experience for ad hoc interrogation. The difference between several minutes & one or two seconds is a game changer, literally changing the way users interact with their data: no mental context switching, no wondering when the results will appear, no preoccupation with the spinning, mind-numbing hurry-up-&-wait indicators. As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.
    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.
    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014.
    DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways."

    Summary
    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all canned report queries, while time-to-results for ad hoc queries dropped from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.
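
    For readers who want to try this at home: the nonclustered columnstore index at the heart of this story is a single DDL statement. The hedged C# sketch below issues it via SqlCommand; the table, columns, & connection string are illustrative stand-ins, not DevCon's actual schema:

        // Creates a SQL Server 2012-style nonclustered columnstore index on a
        // hypothetical DW fact table. Note: in 2012 the index makes the table
        // read-only until dropped or disabled, hence patterns like the nightly
        // partition switching described above.
        using System.Data.SqlClient;

        class CreateColumnstoreIndex
        {
            static void Main()
            {
                const string ddl =
                    @"CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_Fact
                      ON dbo.FactTable (DateKey, CustomerKey, Amount);";
                using (var conn = new SqlConnection(
                    "Server=.;Database=MyDW;Integrated Security=true"))
                using (var cmd = new SqlCommand(ddl, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }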

    Read the article

  • Ubuntu Gnome 14.04 - 100% CPU usage alternating between cores

    - by AwDeOh
    I've noticed my Ubuntu GNOME 14.04 has been getting a bit sluggish lately - things like the GNOME Shell overview animation are jerky where they were lightning fast, and Elder Scrolls Online is stuttering and dropping to low FPS where I previously had a solid 50-60 fps. Out of interest I looked at the CPU history, and when running nothing but the System Monitor I was seeing 100% load that seemed to alternate between the cores (that was 15 minutes ago). PC specs: i3 2130 processor, 8 GB DDR3 RAM, ASUS P8-Z77M motherboard, Samsung 128 GB SSD. I've been trying to reproduce the problem, and while I'm not getting the 100% any more at idle, the System Monitor shows an average load of about 20-30% with just Chrome and the System Monitor open. Oddly, if I touch nothing it averages out to about 20%; if I start moving the mouse around and do some typing, it's closer to 40%. Is this normal? Any help appreciated - I wouldn't even know where to start here.

    Read the article

  • Convincing Upper Management the need of larger monitors for Developers

    - by The Rubber Duck
    The company I work for has recently hired several developers, and there are a limited number of monitors to go around. There are two types in the office - a standard 15" (thankfully flatscreen) and a widescreen 23". No developer has a machine capable of a dual-monitor setup, and the largest monitors went to the people who got here first. Three or four new senior-level developers have only a 15" monitor to work on. To make matters worse, there are perhaps 25-30 DBAs/testers/admin types in the company who all have dual-screen 23" setups. We have brought the issue to management, and they refuse to take large monitors away from people who have been here for years for the sake of new employees, even senior-level ones. We have pitched the idea of testers sacrificing a large monitor for one of our small ones, but they won't go for that either. What can I say to management to illustrate developers' need for larger monitors?

    Read the article

  • Erratic display behaviour on 12.04 using an Asus Zenbook

    - by Azarias R
    When I first plugged my 23-inch monitor into the mini-VGA port, everything worked: both displays were detected and functional. On the second or third plug-in, the internal display would turn off and only the external display would work. When I tried "detect monitor", I got "could not set the configuration for crtc 63" a few times. Now, when I plug in the external display, I get a screen full of orange on the external display and the laptop display turns off. The only way to get things back is to force the laptop off and restart; unplugging/replugging the monitor doesn't help, and it seems like Ubuntu crashes. Can anyone help?

    Read the article

  • Does this use of Monitor.Wait/Pulse have a race condition?

    - by jw
    I have a simple producer/consumer scenario, where there is only ever a single item being produced/consumed. Also, the producer waits for the worker thread to finish before continuing. I realize that kind of obviates the whole point of multithreading, but please just assume it really needs to be this way (: The code is only a sketch, but I hope you get the idea:

        // m_data is initially null

        // This could be called by any number of producer threads simultaneously
        void SetData(object foo)
        {
            lock (x)                        // Line A
            {
                Debug.Assert(m_data == null);
                m_data = foo;
                Monitor.Pulse(x);           // Line B
                while (m_data != null)
                    Monitor.Wait(x);        // Line C
            }
        }

        // This is only ever called by a single worker thread
        void UseData()
        {
            lock (x)                        // Line D
            {
                while (m_data == null)
                    Monitor.Wait(x);        // Line E
                // here, do something with m_data
                m_data = null;
                Monitor.Pulse(x);           // Line F
            }
        }

    Here is the situation I am not sure about. Suppose many threads call SetData() with different inputs. Only one of them will get inside the lock, and the rest will block on Line A. Suppose the one that got inside the lock sets m_data and makes its way to Line C. Question: could the Wait() on Line C allow another thread at Line A to obtain the lock and overwrite m_data before the worker thread even gets to it? Supposing that doesn't happen, and the worker thread processes the original m_data and eventually makes its way to Line F: what happens when that Pulse() goes off? Will only the thread waiting on Line C be able to get the lock, or will it be competing with all the other threads waiting on Line A as well? Essentially, I want to know whether Pulse()/Wait() communicate with each other specially "under the hood" or whether they are on the same level as lock(). The solution to these problems, if they exist, is obvious of course - just surround the body of SetData() with another lock - say, lock(y). I'm just curious whether it's even an issue to begin with.
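
    (The documented Monitor semantics bear on both questions: Wait() on Line C releases the lock while it waits, so a producer blocked at Line A can indeed acquire it and overwrite m_data; and a pulsed waiter is simply moved to the ready queue, where it competes for the lock with the threads at Line A rather than getting priority. Below is a sketch of the outer-lock fix the question itself suggests, with y as a hypothetical second lock object:)

        // Sketch of the lock(y) idea from the question: serialize producers so
        // a second producer cannot enter until the worker has consumed the
        // previous item. Requires: using System.Diagnostics, System.Threading.
        private static readonly object x = new object();
        private static readonly object y = new object();
        private static object m_data;               // initially null

        void SetDataSerialized(object foo)
        {
            lock (y)                                // one producer at a time
            {
                lock (x)                            // Line A
                {
                    Debug.Assert(m_data == null);   // now guaranteed by lock(y)
                    m_data = foo;
                    Monitor.Pulse(x);               // Line B
                    while (m_data != null)
                        Monitor.Wait(x);            // Line C: releases x, not y
                }
            }
        }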

    Read the article

  • Ado.net performance:What does SNIReadSync do?

    - by Beatles1692
    We have a query that takes 2 seconds to run in SQL Server Management Studio but takes 13 seconds to show on a client screen. I used dotTrace to profile my source code and noticed a SNIReadSync method (part of the ADO.NET assemblies) that takes a lot of time to do its job (9 seconds). I ran my code on the server itself so I could rule out network effects, and the result was the same. It doesn't matter whether I use OleDbConnection or SqlConnection. It doesn't matter whether I use a DataReader or a DataSet. Connection pooling does not solve this issue (as my results show). I googled this issue and couldn't find an answer to what this method actually does and how to improve it. Here's what I found on Stack Overflow, which isn't helpful either: http://stackoverflow.com/questions/1610874/snireadsync-executing-between-120-500-ms-for-a-simple-query-what-do-i-look-for
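
    (One way to see where those 9 seconds go, sketched below with an assumed connection string and query: SNIReadSync time generally accrues while the client pulls result packets off the wire, so splitting "execute" timing from "fetch" timing can show whether the server or the transfer/materialization side is the bottleneck.)

        // Splits command execution time from row-fetch time. The network reads
        // that profilers attribute to SNIReadSync typically happen inside the
        // Read() loop, after ExecuteReader returns the first packet.
        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;

        class QueryTiming
        {
            static void Main()
            {
                var sw = Stopwatch.StartNew();
                using (var conn = new SqlConnection(
                    "Server=.;Database=MyDb;Integrated Security=true"))
                using (var cmd = new SqlCommand("SELECT * FROM dbo.BigTable", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        Console.WriteLine("Execute: {0} ms", sw.ElapsedMilliseconds);
                        long rows = 0;
                        while (reader.Read()) rows++;   // network reads happen here
                        Console.WriteLine("Total after fetching {0} rows: {1} ms",
                                          rows, sw.ElapsedMilliseconds);
                    }
                }
            }
        }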

    Read the article

  • C# XDocument Attribute Performance Concerns

    - by Dested
    I have a loaded XDocument from which I need to efficiently grab all the elements of a certain name whose attribute equals a certain value. My current code:

        IEnumerable<XElement> vm;
        if (!cacher2.TryGetValue(name, out vm))
        {
            vm = project.Descendants(XName.Get(name));
            cacher2.Add(name, vm);
        }
        XElement[] abdl = (vm.Where(a => a.Attribute(attribute).Value == ab)).ToArray();

    cacher2 is a Dictionary<string, IEnumerable<XElement>>. The ToArray is so I can evaluate the expression now; I don't think this causes any real speed concerns. The problem is the Where itself. I am searching through anywhere from 1 to 10k items. Any help?
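
    (A caching wrinkle worth noting, sketched with the same names as above: Descendants() is lazy, so the IEnumerable stored in cacher2 re-walks the document on every enumeration. Materializing it once, and comparing the attribute via the (string) cast to dodge missing-attribute nulls, is a plausible first step:)

        // Materialize the descendants once so the cached value is a snapshot,
        // not a query that re-executes per lookup; then filter the snapshot.
        // Assumes the question's project/cacher2/name/attribute/ab variables,
        // with cacher2 changed to a Dictionary<string, List<XElement>>.
        List<XElement> vm;
        if (!cacher2.TryGetValue(name, out vm))
        {
            vm = project.Descendants(XName.Get(name)).ToList();   // walk once
            cacher2.Add(name, vm);
        }
        XElement[] abdl = vm.Where(a => (string)a.Attribute(attribute) == ab)
                            .ToArray();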

    Read the article

  • Improving long-polling Ajax performance

    - by Bears will eat you
    I'm writing a webapp (Firefox-compatible only) which uses long polling (via jQuery's ajax abilities) to send more-or-less constant updates from the server to the client. I'm concerned about the effects of leaving this running for long periods of time, say, all day or overnight. The basic code skeleton is this:

        function processResults(xml) {
            // do stuff with the xml from the server
        }

        function fetch() {
            setTimeout(function () {
                $.ajax({
                    type: 'GET',
                    url: 'foo/bar/baz',
                    dataType: 'xml',
                    success: function (xml) {
                        processResults(xml);
                        fetch();
                    },
                    error: function (xhr, type, exception) {
                        if (xhr.status === 0) {
                            console.log('XMLHttpRequest cancelled');
                        } else {
                            console.debug(xhr);
                            fetch();
                        }
                    }
                });
            }, 500);
        }

    (The half-second "sleep" is so that the client doesn't hammer the server if the updates are coming back to the client quickly - which they usually are.) After leaving this running overnight, it tends to make Firefox crawl. I'd been thinking that this could be partially caused by a large stack depth, since I've basically written an infinitely recursive function. However, if I use Firebug and throw a breakpoint into fetch, it looks like this is not the case: the stack that Firebug shows me is only about 4 or 5 frames deep, even after an hour. One of the solutions I'm considering is changing my recursive function to an iterative one, but I can't figure out how I would insert the delay between Ajax requests without spinning. I've looked at the JS 1.7 "yield" keyword, but I can't quite wrap my head around it enough to figure out if it's what I need here. Is the best solution just to do a hard refresh on the page periodically, say, once every hour? Is there a better/leaner long-polling design pattern that won't put a hurt on the browser even after running for 8 or 12 hours? Or should I just skip the long polling altogether and use a different "constant update" pattern, since I usually know how frequently the server will have a response for me?

    Read the article

  • Slow Performance -- ASP .NET ASPNET_WP.EXE and CSC.EXE Running After Clicking Redirect Link

    - by Dan7el
    I click a link on one page that does a redirect to another page (Response.Redirect(page.aspx)). The browser churns for about 30 seconds and then the page displays. I'm trying to track down why it takes so long to load. The page hosts two other custom controls. I have commented out the code for each control individually, then for both, and the page still takes about 30 seconds to load. I've set breakpoints on the Page_Load event of each control as well as page.aspx, and it also takes about 30 seconds from clicking the link to the first breakpoint. I loaded up Task Manager and clicked the link, and noticed aspnet_wp.exe and csc.exe run during this 30-second window. I'm wondering if there are some code-behind shenanigans going on while I'm waiting for the page to load. This only occurs the first time I click the link; afterwards, it's not as slow. I've googled, but there's not a lot of useful information about this. Anyone have any ideas? Thanks, ---Dan---

    Read the article

  • MySQL query performance - 100Mb ethernet vs 1Gb ethernet

    - by Rob Penridge
    Hi all. I've just started a new job and noticed that the analysts' computers are connected to the network at 100 Mbps. The queries we run against the MySQL server can easily return 500 MB+, and it seems that at times when the servers are under high load the DBAs kill low-priority jobs because they are taking too long to run. My question is this: how much of this time is spent executing the request on the server, and how much is spent returning the data to the client? Could the query speeds be improved by upgrading the network connections to 1 Gbps? Thanks, Rob
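
    (Back-of-the-envelope arithmetic, assuming the full result set really crosses the wire: 500 MB is about 4,000 megabits. At 100 Mbps that is roughly 40 seconds of pure transfer time; at 1 Gbps, roughly 4 seconds. So for result sets this size, the upgrade could plausibly cut tens of seconds off wall-clock time, though it does nothing for the server-side execution portion.)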

    Read the article

  • Updating multiple Sprites - AS3 performance best practices

    - by dani
    Within the container "BubbleContainer" I have multiple "Bubble sprites". Each bubble's graphics object (a circle) is updated on a timer event. Let's say I have 50 Bubble sprites and each circle's radius should be updated with a mathematical formula. How do I organize this logic? How do I update all Bubble sprites within the BubbleContainer? (should I call a bubble.update() function or make a temporary reference to the graphics object?) Where do I put the Math logic? (as static functions?)

    Read the article

  • jQuery selector performance

    - by rahul
    I have the following two code blocks.

    Code block 1:

        var checkboxes = $("div.c1 > input:checkbox.c2", "#main");
        var totalCheckboxes = checkboxes.length;
        var checkedCheckboxes = checkboxes.filter(":checked").length;

    Code block 2:

        var totalCheckBoxes = $("div.c1 > input:checkbox.c2", "#main").length;
        var checkedCheckBoxes = $("div.c1 > input:checkbox.c2:checked", "#main").length;

    Which one of the above will be faster? Thanks, Rahul

    Read the article

  • Performance penalty of typecasting and boxing/unboxing types in C# when storing generic values

    - by kitsune
    I have a set-up similar to WPF's DependencyProperty and DependencyObject system; my properties, however, are generic. A BucketProperty has a static GlobalIndex (defined in BucketPropertyBase) which tracks all BucketProperties. A Bucket can have many BucketProperties of any type. A Bucket saves and gets the actual values of these BucketProperties. My question is how to deal with the storage of these values, and what the penalty is of using a typecast when retrieving them. I currently use an array of BucketEntries that save the property values as plain objects. Is there any better way of saving and returning these values? Below is a simplified version:

        public class BucketProperty<T> : BucketPropertyBase
        {
        }

        public class Bucket
        {
            private BucketEntry[] _bucketEntries;

            public void SaveValue<T>(BucketProperty<T> property, T value)
            {
                SaveBucketEntry(property.GlobalIndex, value);
            }

            public T GetValue<T>(BucketProperty<T> property)
            {
                return (T)FindBucketEntry(property.GlobalIndex).Value;
            }
        }

        public class BucketEntry
        {
            private object _value;
            private uint _index;

            public BucketEntry(uint globalIndex, object value) { ... }
        }
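
    (For a rough feel for the penalty in isolation, here is a standalone micro-benchmark sketch, not the BucketProperty code itself. Boxing a value type costs a heap allocation per store plus a type check per cast on retrieval; per operation it is small, but it adds GC pressure at scale:)

        // Compares storing/reading ints through object (box/unbox) against a
        // plain typed array. Absolute numbers vary by machine; the relative
        // gap and the allocation behaviour are the point.
        using System;
        using System.Diagnostics;

        class BoxingCost
        {
            const int N = 10000000;

            static void Main()
            {
                object[] boxed = new object[1];
                int[] typed = new int[1];
                long sink = 0;

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < N; i++) { boxed[0] = i; sink += (int)boxed[0]; }
                Console.WriteLine("boxed: {0} ms", sw.ElapsedMilliseconds);

                sw.Restart();
                for (int i = 0; i < N; i++) { typed[0] = i; sink += typed[0]; }
                Console.WriteLine("typed: {0} ms (checksum {1})", sw.ElapsedMilliseconds, sink);
            }
        }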

    Read the article

  • Optimizing performance of large ASP.NET applications

    - by NLV
    Hello, I'm building an ASP.NET web application with lots and lots of controls and huge volumes of data. My application is very slow, and it takes a large amount of time to load the data into .NET controls like the grid, tree view, etc. I also have some Ajaxified pages and controls in my application. I want to reduce the page load time on each postback. What are the standards/best practices to be followed while developing large ASP.NET applications? Thank you. NLV

    Read the article

  • high performance hibernate insert

    - by luke
    I am working on a latency-sensitive part of an application. Basically, I receive a network event, transform the data, and then insert it all into the DB. After profiling, I see that basically all my time is spent trying to save the data. Here is the code:

        private void insertAllData(Collection<Data> dataItems) {
            long start_time = System.currentTimeMillis();
            long save_time = 0;
            long commit_time = 0;
            Transaction tx = null;
            try {
                Session s = HibernateSessionFactory.getSession();
                s.setCacheMode(CacheMode.IGNORE);
                s.setFlushMode(FlushMode.NEVER);
                tx = s.beginTransaction();
                for (Data data : dataItems) {
                    s.saveOrUpdate(data);
                }
                save_time = System.currentTimeMillis();
                tx.commit();
                s.flush();
                s.clear();
            } catch (HibernateException ex) {
                if (tx != null) tx.rollback();
            }
            commit_time = System.currentTimeMillis();
            System.out.println("Save: " + (save_time - start_time));
            System.out.println("Commit: " + (commit_time - save_time));
            System.out.println();
        }

    The size of the collection is always less than 20. Here is the timing data that I see:

        Save: 27   Commit: 9
        Save: 27   Commit: 9
        Save: 26   Commit: 9
        Save: 36   Commit: 9
        Save: 44   Commit: 0

    This is confusing to me. I figured the save should be quick and all the time should be spent in the commit, but clearly I'm wrong. I have also tried removing the transaction (it's not really necessary), but I saw worse times... I have set hibernate.jdbc.batch_size=20. I need this operation to be as fast as possible; ideally there would be only one round trip to the database. How can I do this?

    Read the article

  • C# Confusing Results from Performance Test

    - by aip.cd.aish
    I am currently working on an image-processing application. The application captures images from a webcam and then does some processing on them. The app needs to be responsive in real time (ideally < 50 ms to process each request). I have been doing some timing tests on the code, and I found something very interesting (see below).

        clearLog();
        log("Log cleared");
        camera.QueryFrame();
        camera.QueryFrame();
        log("Camera buffer cleared");
        Sensor s = t.val;
        log("Sx: " + s.X + " Sy: " + s.Y);
        Image<Bgr, Byte> cameraImage = camera.QueryFrame();
        log("Camera output acquired for processing");

    Each time log is called, the time since the beginning of the processing is displayed. Here is my log output:

        [3 ms] Log cleared
        [41 ms] Camera buffer cleared
        [41 ms] Sx: 589 Sy: 414
        [112 ms] Camera output acquired for processing

    The timings are computed using a Stopwatch from System.Diagnostics.

    QUESTION 1: I find this slightly interesting, since when the same method is called twice it executes in ~40 ms, and when it is called once the next time it takes longer (~70 ms). Assigning the value can't really be taking that long, right?

    QUESTION 2: Also, the timing for each step recorded above varies from run to run. The values for some steps are sometimes as low as 0 ms and sometimes as high as 100 ms, though most of the numbers seem relatively consistent. I guess this may be because the CPU was used by some other process in the meantime? (If this is for some other reason, please let me know.) Is there some way to ensure that when this function runs, it gets the highest priority, so that the speed-test results will be consistently low (in terms of time)?

    EDIT: I changed the code to remove the two blank query frames from above, so the code is now:

        clearLog();
        log("Log cleared");
        Sensor s = t.val;
        log("Sx: " + s.X + " Sy: " + s.Y);
        Image<Bgr, Byte> cameraImage = camera.QueryFrame();
        log("Camera output acquired for processing");

    The timing results are now:

        [2 ms] Log cleared
        [3 ms] Sx: 589 Sy: 414
        [5 ms] Camera output acquired for processing

    The subsequent steps now take longer (sometimes a step that was previously almost instantaneous now lands 20-30 ms later). I am guessing this is due to CPU scheduling. Is there some way I can ensure the CPU does not get scheduled to do something else while it is running through this code?
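
    (Two things worth ruling out before trusting the numbers, sketched below: the first call into any code path pays one-time JIT and device-initialization costs, and a normal-priority process can be preempted mid-measurement. Raising priority is a real but blunt instrument; the Process/Thread APIs shown are standard, and the warm-up call refers to the question's own camera object:)

        // Boost scheduling priority for a timing-sensitive section and warm up
        // the measured path first. High/Highest can starve other processes,
        // so scope this narrowly.
        using System.Diagnostics;
        using System.Threading;

        static class TimingSetup
        {
            public static void Prepare()
            {
                Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
                Thread.CurrentThread.Priority = ThreadPriority.Highest;
                // Warm up once so JIT/driver setup isn't included in timings,
                // e.g. a throwaway camera.QueryFrame() before measuring.
            }
        }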

    Read the article

  • Elisp performance on Windows and Linux

    - by JasonFruit
    I have the following dead-simple elisp functions; the first removes the fill breaks from the current paragraph, and the second loops through the current document applying the first to each paragraph in turn, in effect removing all single line-breaks from the document. It runs fast on my low-spec Puppy Linux box using Emacs 22.3 (10 seconds for 600 pages of Thomas Aquinas), but when I go to a powerful Windows XP machine with Emacs 21.3, it takes almost an hour to do the same document. What can I do to make it run as well on the Windows machine with Emacs 21.3?

        (defun remove-line-breaks ()
          "Remove line endings in a paragraph."
          (interactive)
          (let ((fill-column 90002000))
            (fill-paragraph nil)))

        (defun remove-all-line-breaks ()
          "Remove all single line-breaks in a document"
          (interactive)
          (while (not (= (point) (buffer-end 1)))
            (remove-line-breaks)
            (next-line 1)))

    Forgive my poor elisp; I'm having great fun learning Lisp and starting to use the power of Emacs, but I'm new to it yet.

    Read the article

  • WPF performance on scaling a large scene

    - by Mark
    I have a full-screen app in which I want to be able to zoom in on certain areas. I have the code working fine, but I notice that as I get closer in, the zoom-in animation (which animates the ScaleTransform.ScaleX and ScaleTransform.ScaleY properties on a parent canvas) starts to jerk and the frame rate suffers. I'm not using any BitmapEffects or anything, and ideally I would like my scene to get more complicated than it currently is. The scene is quite large, 1980x1024; this is a requirement and cannot be changed. The current layout is like this:

        <Canvas x:Name="LayoutRoot">
            <Canvas x:Name="ContainerCanvas">
                <local:MyControl x:Name="c1" />
                <!-- numerous other controls and elements that compose the scene -->
            </Canvas>
        </Canvas>

    The code that zooms in just animates the RenderTransform of ContainerCanvas, which in turn scales its children, which gives the desired effect. However, I'm wondering if I need to swap out ContainerCanvas for a Viewbox or something like that. I've never really worked with Viewbox/Viewport controls in WPF before; can they even help me out here? Smooth zooming is a huge requirement of the client and I must get this resolved. All ideas are welcome. Thanks a lot, Mark
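
    (One option on .NET 4, sketched under the assumption that ContainerCanvas is reachable from code-behind: cache the container's rendered output as a bitmap so the scale animation transforms a GPU texture instead of re-rasterizing the whole 1980x1024 vector scene every frame.)

        // WPF 4's BitmapCache: the canvas renders once into a cached bitmap,
        // and the animated ScaleTransform then moves/scales that bitmap.
        // RenderAtScale trades memory for sharpness when zoomed in.
        using System.Windows.Controls;
        using System.Windows.Media;

        static class ZoomPerformance
        {
            public static void EnableBitmapCache(Canvas containerCanvas)
            {
                containerCanvas.CacheMode = new BitmapCache { RenderAtScale = 2.0 };
            }
        }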

    Read the article

  • Oracle performance problems with large batch of XSL operations

    - by FrustratedWithFormsDesigner
    I have a system that performs many XSL transformations on XMLType objects. The problem is that the system gradually slows down over time and sometimes crashes when it runs out of memory. The slowdown (and possibly the memory crash) seems to center on the dbms_xslprocessor.processXSL function call, which gradually takes longer and longer to complete. The code looks like this:

        v_doc          dbms_xmldom.DOMDocument;
        v_transformer  dbms_xmldom.DOMDocument;
        v_XSLprocessor dbms_xslprocessor.Processor;
        v_stylesheet   dbms_xslprocessor.Stylesheet;
        v_clob         clob;
        ...
        transformer := PKG_STUFF.getXSL();
        v_transformer := dbms_xmldom.newDOMDocument(transformer);
        v_XSLprocessor := Dbms_Xslprocessor.newProcessor;
        v_stylesheet := dbms_xslprocessor.newStylesheet(v_transformer, '');
        ...
        for source_data in (select id from source_tbl) loop
            begin
                v_doc := PKG_CONVERT.convert(in_id => source_data.id);

                --start time of operation
                v_begin_op_time := dbms_utility.get_time;

                --reset the CLOB
                v_clob := ' ';

                --apply XSL transform
                dbms_xslprocessor.processXSL(p      => v_XSLprocessor,
                                             ss     => v_stylesheet,
                                             xmldoc => v_doc,
                                             cl     => v_clob);
                v_doc := dbms_xmldom.newDOMDocument(XMLType(v_clob));

                --end time
                v_end_op_time := dbms_utility.get_time;

                --calculate and log the duration
                v_time_taken := v_end_op_time - v_begin_op_time;
                PKG_LOG.log_message('Time taken to transform XML: ' || v_time_taken);
                ...
                DBMS_XMLDOM.freeDocument(v_doc);
                DBMS_LOB.freetemporary(lob_loc => v_clob);
            end;
        end loop;

    The time taken to transform the XML slowly creeps up (I suppose it might also be the call to dbms_xmldom.newDOMDocument, but I had thought that to be fairly straightforward). I have no idea why.... :( (Oracle 10g)

    Read the article

  • GXT Performance Issues

    - by pearl
    Hi all, we are working on a rather complex system using GXT. While everything works great on Firefox, IE (especially IE6) is a different story (more than 10 seconds until the browser renders the page). I understand that one of the main reasons is DOM manipulation, which is a disaster under IE6 (see http://www.quirksmode.org/dom/innerhtml.html). This might be thought of as a generic problem of any front-end JavaScript framework (i.e. GWT), but a simple code snippet (see below) that executes the same functionality proves otherwise. In fact, under IE6, getSomeGWT() takes 400 ms while getSomeGXT() takes 4 seconds. That's a x10 factor, which makes a huge difference to the user experience!

        private HorizontalPanel getSomeGWT() {
            HorizontalPanel pointsLogoPanel = new HorizontalPanel();
            for (int i = 0; i < 350; i++) {
                HorizontalPanel innerContainer = new HorizontalPanel();
                innerContainer.add(new Label("some GWT text"));
                pointsLogoPanel.add(innerContainer);
            }
            return pointsLogoPanel;
        }

        private LayoutContainer getSomeGXT() {
            LayoutContainer pointsLogoPanel = new LayoutContainer();
            pointsLogoPanel.setLayoutOnChange(true);
            for (int i = 0; i < 350; i++) {
                LayoutContainer innerContainer = new LayoutContainer();
                innerContainer.add(new Text("just some text"));
                pointsLogoPanel.add(innerContainer);
            }
            return pointsLogoPanel;
        }

    So to solve/mitigate the issue one would need to either (a) reduce the number of DOM manipulations, or (b) replace them with innerHTML. AFAIK, (a) is simply a side effect of using GXT, and (b) is only possible with UiBinder, which isn't supported yet by GXT. Any ideas? Thanks in advance!

    Read the article
