Search Results

Search found 16126 results on 646 pages for 'wcf performance'.


  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options:

    - Shares, for proportional CPU allocation. If you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles.
    - Dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use. You can say that a zone or project "owns" 8 CPUs on a 32-CPU machine, for example.
    - Capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets. For example, you can throttle an application to 0.125 of a CPU.

    (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only one app per OS instance - and wastefully sizing every server for the peak workload it might experience), that's not really workload management. With resource management, one controls the resources and hopes that's enough to meet application service objectives. In effect, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if it didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much - we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to consume a target rate of service units per second. But this seems to be missing a key point: what is the relationship between artificial "service units" and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so I am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing it enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular, I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) lets me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
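    A minimal sketch, in Python, of the feedback loop described above: hold the service level objective constant and vary the resource allocation. The measure_response_time() probe and set_cpu_shares() hook are hypothetical stand-ins (the former could be fed by DTrace probes, the latter could wrap Solaris resource controls); this illustrates the control loop, not the patented implementation.

    ```python
    import time

    TARGET_RESPONSE_MS = 200   # the externally meaningful service level objective
    STEP = 10                  # CPU shares to add or remove per adjustment

    def workload_manager(measure_response_time, set_cpu_shares, shares=100):
        """Hold the service level constant; adjust the resource allocation."""
        while True:
            observed = measure_response_time()         # hypothetical probe, e.g. DTrace-fed
            if observed > TARGET_RESPONSE_MS * 1.1:    # missing the objective
                shares += STEP                         # give the workload more CPU
            elif observed < TARGET_RESPONSE_MS * 0.9:  # comfortably beating it
                shares = max(STEP, shares - STEP)      # hand the surplus back
            set_cpu_shares(shares)                     # hypothetical hook, e.g. a prctl wrapper
            time.sleep(5)                              # sampling interval
    ```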

    Read the article

  • Advanced Analytics Oracle Data Mining - NEW 2-Day Training Course

    - by Mike.Hallett(at)Oracle-BI&EPM
    A NEW 2-Day Oracle University (OU) instructor-led course on Oracle Data Mining has been developed for partners and customers to learn more about data mining, predictive analytics and knowledge discovery inside the Oracle Database. Oracle Data Mining provides data mining algorithms that run natively in the database for high-performance model building and model deployment. This OU course is a great way to learn the advantages and benefits of "big data analytics": mining data, building and deploying "predictive analytics" all inside the Oracle Database, and working with OBI. To register for a class, click here, then click on View Schedule to see the latest scheduled classes and/or submit your information expressing interest in attending a class.

    Read the article

  • Google I/O 2010 - Writing zippy Android apps

    Android 201 - Brad Fitzpatrick. Come hear tips & war stories on making fast, responsive (aka "non-janky") Android apps. No more ANRs! Eliminate event loop stalls! Fast start-ups! Optimized database queries with minimal I/O! Also, learn about the tools and techniques we use to find performance problems across the system and hear what's coming in the future. For all I/O 2010 sessions, please go to code.google.com (From: GoogleDevelopers, Time: 57:38)

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale. However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members). The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance. As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). 
The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors, but in many cases this may be challenging. Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level. In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
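    As a rough illustration of the empirical approach described above - measure resource usage at a known load, then extrapolate - here is a back-of-envelope sketch in Python. The figures are invented for illustration, and the headroom factor is a guessed safety margin standing in for the non-linear effects (shared objects, cluster management overhead) discussed in the article.

    ```python
    # Resource usage measured during a test run at a known load.
    measured = {
        "requests_per_sec": 5_000,   # load applied during the measurement run
        "cpu_cores_used": 12.0,
        "network_mbps": 800.0,
        "heap_gb": 64.0,
    }

    def extrapolate(target_rps, measured, headroom=1.3):
        """Scale each measured resource linearly to the target request rate,
        padded by a safety factor for non-linear effects at scale."""
        scale = target_rps / measured["requests_per_sec"]
        return {k: round(v * scale * headroom, 1)
                for k, v in measured.items() if k != "requests_per_sec"}

    print(extrapolate(20_000, measured))
    # {'cpu_cores_used': 62.4, 'network_mbps': 4160.0, 'heap_gb': 332.8}
    ```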

    Read the article

  • Google I/O 2010 - What's hot in Java for App Engine

    App Engine 201 - Toby Reyelts, Don Schwarz. Learn what's new with Java on App Engine. We'll take a whirlwind tour through the changes since last year, walk through a code sample for task queues and the new blobstore service, and demonstrate techniques for improving your application's performance. We'll top it off with a glimpse into some new features that we've planned for the year ahead. For all I/O 2010 sessions, please go to code.google.com (From: GoogleDevelopers, Time: 01:02:10)

    Read the article

  • Setting coding priorities

    - by dotnetdev
    Hi, In the dev shops I've worked in, nobody has ever mentioned "coding priorities". I read about this in a book or on a site somewhere; it sets the expectation of which priority should come first in the code. In places where this is not specified, what should the first priority be? It may sound simple to say "do what the business need requires", but that could be at the expense of performance/maintainability. Many people say maintainability first, regardless; some say fulfill the need, regardless. I am a young developer, so I am probably missing the point somewhere. Of course, programming is engineering, and it's tough because you can never have the perfect solution. Thanks

    Read the article

  • Database Insider Newsletter: February 2011 Edition Available

    - by jenny.gelhausen
    The February edition of the Database Insider Newsletter is now available. This edition covers:

    - The upcoming IOUG Day of Real World Performance Tour
    - What's coming for Collaborate 2011
    - How Oracle helps you steer clear of security pitfalls

    and much more... Enjoy!

    Read the article

  • Trim on encrypted SSD--Urandom first?

    - by cb474
    My understanding (I'm not sure I have this all right) is that if one uses Trim on an encrypted SSD, it defeats some of the security benefits, because the drive will write zeros to empty space (as files are deleted). See: http://www.askubuntu.com/questions/115823/trim-on-an-encrypted-ssd And: http://asalor.blogspot.com/2011/08/trim-dm-crypt-problems.html My question is: from the perspective of the performance of the SSD and the functioning of Trim, would it therefore be better to simply zero out the SSD before setting up an encrypted system, rather than writing random data to the drive with urandom, as one usually does? Would this basically leave one with the same level of security anyway? And more importantly, would it better enable the Trim functionality to work as intended with the encrypted SSD?

    Read the article

  • Throttle and overheating on Dell XPS Studio 1645

    - by Ross
    I realise there is an older thread on this very subject, but that seems to be pretty dead. I just got a Dell Studio XPS 1645 laptop and the fan noise and overheating are pretty ridiculous. This is actually a well-known problem with the laptop that is apparently solved with the combination of a BIOS update and the purchase of their 130W charger. I plan on buying this charger as soon as possible; however, I've noticed that since installing Ubuntu the fan noise has become more constant and the overheating is quite a bit worse too. I've had to turn it off twice to let it cool down for an hour or so because it starts seriously affecting the performance. It makes watching things, listening to music or leaving the laptop on while I sleep a real pain. If anyone has some new information on this issue or could help out in any way at all, I'd be very grateful. Thanks.

    Read the article

  • Are Clojure, Scala and others restrained by the JVM vs the CLR?

    - by jia93
    The Java implementors seem slow to adopt language improvements; for example, compare C# (with full closures, expression trees, LINQ, etc.) to Java, and even the push-back of some features to Java 8 will still leave it behind the current implementation of C#. However, since I don't intend to use either Java or C#, that particular language war isn't of much interest; I'm more concerned with the JVM vs the CLR. Is this lagging-behind also applicable to the JVM? Will Scala, Clojure, etc. be able to continue to innovate or achieve optimal performance in the face of a slowly progressing underlying VM such as the JVM? Are Clojure/Scala restrained at present by JVM limitations?

    Read the article

  • Lower your Applications Infrastructure Cost with Oracle Database 11g

    - by john.brust
    If you missed our live Oracle Database 11g Release 2 webcast last Friday, the replay is available. So, join us for the on-demand free webcast in which Mark Townsend, Vice President of Oracle Database Product Management, discusses how running your Oracle applications (Oracle E-Business Suite, Oracle's PeopleSoft, and Oracle's Siebel) on Oracle Database 11g can improve performance and scalability, eliminate downtime, and reduce IT infrastructure costs. In the Q&A segment, Mark answers questions about compression, virtual machines, Oracle Active Data Guard, online application upgrades, and much more. Note: Turn off pop-up blockers if the slides do not advance automatically.

    Read the article

  • How to improve quality of software

    - by hariharan
    Last week in our organization, we discussed different ways of improving the quality of software (covering both technical and functional topics). Since I am a technical person, I suggested the following ideas:

    - Use-case-based detailed design documents - Both technical and functional specifications should be organized according to the use case requirements.
    - Design patterns - Help developers adopt a common approach irrespective of technology.
    - Analyzing and implementing new technologies - Helps to improve the performance as well as the security of the application.

    As I am not a very experienced technical candidate, I am unable to provide other solutions. If you have any suggestions or topics related to this (including testing and functional requirements), please post your valuable comments.

    Read the article

  • How to crawl a webPage with dynamic content added by javascript

    - by blunderboy
    I hear that Google's bots now have the capability to understand our JavaScript code, which means it is possible to fully crawl a webpage that has lazy loading enabled. I am using Apache Nutch to crawl websites, but I don't think it can fetch the URLs injected into the HTML page by JavaScript when the page is scrolled down. I see a lot of websites doing lazy loading for performance reasons. So can somebody please explain how I can crawl the data that appears in the HTML page on lazy load (i.e., when scrolling the page down)?
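    One common workaround, since Nutch itself does not execute JavaScript, is to render the page in a headless browser, scroll to trigger the lazy loader, and feed the discovered URLs back into the crawl. Below is a minimal sketch using Selenium in Python (an assumption - any headless browser automation would do); the URL, scroll count, and wait time are placeholders to tune per site.

    ```python
    import time
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By

    options = Options()
    options.add_argument("--headless")              # no visible browser window
    driver = webdriver.Chrome(options=options)      # assumes chromedriver is installed

    driver.get("http://example.com/lazy-page")      # placeholder URL
    for _ in range(10):                             # scroll a few screens' worth
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(2)                               # give the JavaScript time to inject content

    links = {a.get_attribute("href")
             for a in driver.find_elements(By.CSS_SELECTOR, "a[href]")}
    driver.quit()
    # `links` can now be fed back into the Nutch seed list (or any crawl queue)
    ```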

    Read the article

  • Building a Fusion Applications Ready Foundation

    Designed from the ground-up using the latest technology advances and incorporating the best practices gathered from Oracle's thousands of customers, Fusion Applications are 100 percent open standards-based business applications that set a new standard for the way we innovate, work and adopt technology. Delivered as a complete suite of modular applications, Fusion Applications work with your existing portfolio to evolve your business to a new level of performance. In this AppCast, part of a special series on Fusion Applications, you will hear how components of Oracle Fusion Middleware, the very same platform that underpins Oracle Fusion Applications, can work with and enhance your Oracle E-Business Suite, Siebel, PeopleSoft, JD Edwards and other application investments. You will learn how you can build a Fusion-ready Applications Foundation and how you prepare your IT and operational skills to use and run Oracle Fusion Applications.

    Read the article

  • Choosing the Right Financial Consolidation and Reporting Solution

    Financial reporting requirements for publicly-held companies are changing and getting more complex. With the upcoming convergence of US GAAP and IFRS, demand for more detailed non-financial disclosures, and the SEC mandate for XBRL, financial executives are under pressure to ensure they have the right systems in place to support current and future reporting requirements. Tune into this conversation with Rich Clayton, VP of Enterprise Performance Management and BI products for Oracle, and Annette Melatti, Senior Director of Product Marketing for Financial Applications, to learn about the latest market requirements, what capabilities are provided by Oracle's General Ledgers, and how customers can extend their investment in Oracle General Ledger solutions with Oracle's market-leading financial close and reporting products.

    Read the article

  • Update drivers for TL-WN851ND

    - by Tony_GPR
    Today I bought a new PCI wireless card, a TP-Link WN851ND with the Atheros AR9227 chipset. It has 2 antennas and is compatible with Wifi N, so I thought it would improve the quality of the signal. But after installing it in my computer, the result is the opposite of what I expected. It doesn't connect to my network, while my old Wifi B/G card connects without problems. I created an access point from my smartphone to try the card, and it works, but it is very slow loading pages. In Windows 7 it works perfectly, so I think the problem is the driver. I have Ubuntu 12.04 LTS with kernel 3.2.0-31. Is there a way to update the driver, or can I apply a patch to improve the performance of the card? Otherwise, does anyone know if there is work in progress to improve compatibility with this chipset, or is it better to change the card and buy one with better driver compatibility? And finally, which wireless-N-compatible chipsets have good support under Linux/Ubuntu?

    Read the article

  • What are the advantages and disadvantages of git or bzr + rsync vs rdiff-backup?

    - by Azendale
    I used to use rsync to do backups, but then I switched to rdiff-backup for incremental backups. Recently, I discovered git and bzr while working on a coding project. So I was thinking: I could have my backup disk be a repository in either git or bzr. Then I could rsync to the repository and commit the changes (see the sketch below). Would there be any performance concerns with this? Any other issues that I'm not thinking of? The benefit I see in using rsync is that you can restart an interrupted transfer, while rdiff-backup reverts to the last version and then starts again. Any reason not to do it this way? Anything I'm not thinking of?
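    For reference, a minimal sketch of that workflow (rsync into a working tree that is also a git repository, then commit the snapshot), wrapped in Python. The paths are illustrative, and it assumes rsync and git are on the PATH and the repository was already created with git init:

    ```python
    import subprocess
    from datetime import datetime

    SRC = "/home/me/"        # trailing slash: rsync copies the contents, not the directory
    REPO = "/backups/home"   # an existing git working tree (git init /backups/home)

    # Mirror the source into the repository's working tree.
    subprocess.run(["rsync", "-a", "--delete", SRC, REPO], check=True)

    # Stage everything (including deletions) and commit the snapshot.
    # --allow-empty keeps the run from failing when nothing changed.
    subprocess.run(["git", "-C", REPO, "add", "-A"], check=True)
    subprocess.run(["git", "-C", REPO, "commit", "--allow-empty",
                    "-m", f"snapshot {datetime.now().isoformat()}"], check=True)
    ```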

    Read the article

  • Asus 1215n GPU driver/s don't give me a "full" OS experience

    - by AFD
    I'm used to not having manufacturer-specific drivers on my laptop when running a Linux OS, and that has always been fine - there have been adequate FOSS drivers for my needs and it hasn't ruined my OS experience. When I bought an Asus 1215n, one of the upsides to the hardware seemed to be the switchable GPU that could give lots of performance or lots more battery life and would switch on the fly... with Windows, of course. It seems that the Nvidia drivers are crap and people advise not installing them. I have some sort of workaround for vga_switcheroo, and the on-the-fly switching between the GPUs has turned into a manual one :( The worst bit, though (aside from shorter battery life), is the web experience with HTML5. If I visit Mozilla's Web O'Wonder site, I'm told I don't have WebGL working due to driver issues. This really blows - is it possible that proprietary drivers can now ruin my web experience too?!

    Read the article

  • Should your salary reflect how much work there is for you, or does that not matter? [closed]

    - by Kevin Simper
    I am working in a consulting company that mostly does IT support. The website is also only focused on IT support, and we therefore do not capture leads for the web department. We aim for small businesses, which need new computers and firewalls. We were having a performance conversation and talked about salary, and my employer said that he was not impressed by the revenue I was generating. I said that I did not have enough work and would like to get more tasks and projects so that I could reach the goal, but that I did not think it was my fault that there was not enough work. He said that it was not his fault either, but that he could not pay me more. Is he right that I should not get paid more just because my employer cannot get enough web projects, or should I be paid what I am worth, not based on the amount of work the sales team generates?

    Read the article

  • OTN APAC Tour 2012: Bangkok, Thailand - Oct 22, 2012

    - by Mike Dietrich
    Roy had done some of the South America OTN Tour 2012 dates earlier this year in Peru and Chile. And I'm looking forward to presenting next Monday, October 22nd, 2012, on the OTN Tour 2012 in Bangkok, Thailand. The event will be held at the Eastin Grand Hotel in Bangkok. Register today for the OTN APAC Tour 2012 in Bangkok, Thailand! Presentations will include:

    - 9:30am - 10:15am: Best Practices for Upgrading to Oracle Database 11.2
    - 1:00pm - 1:45pm: How to Improve Upgrade Performance - Real Speed, Real Customers, Real Secrets
    - 2:45pm - 3:30pm: Oracle Data Pump: Overview and Best Practices

    Plus presentations about Security, RMAN and other topics by Francisco Alvarez and others. Please find the complete agenda here. Looking forward to meeting you on Monday - CU there

    Read the article

  • SQL Server Express Profiler

    - by David Turner
    During a recent project, while waiting for our development database to be provisioned on the client's corporate SQL Server environment (these things can sometimes take weeks or months to be set up), we began our initial development against a local instance of SQL Server Express, just as an interim measure until the development database was live. This was going just fine, until we found that we needed to do some profiling to understand a problem we were having with the performance of our ORM-generated Data Access Layer. The full version of SQL Server Management Studio includes a profiler that we could use to help with this kind of problem; however, the Express version does not, so I was really pleased to find that there is a freely available profiler for SQL Server Express, imaginatively titled 'SQL Server Express Profiler', and it worked great for us. http://sites.google.com/site/sqlprofiler/

    Read the article

  • is there any elegant way to analyze an engineer's process?

    - by NewAlexandria
    Plenty of sentiment exists that measuring commits is inappropriate. Has any study been done that tries to draw on more sources than commits - such as:

    - browsing patterns
    - IDE work (pre-commit)
    - idle time
    - multitasking

    I can't think of an easy way to take these measures, but I wonder if any study has been done. On a personal note, I do believe that reflection on one's own 'metrics' could be valuable regardless of (or in the absence of) using these for performance evaluation - i.e., an unbiased way to reflect on your habits. But this is a discussion matter beyond Q&A.

    Read the article

  • How can I store all my level data in a single file instead of spread out over many files?

    - by Jon
    I am currently generating my level data and saving it to disk to ensure that any modifications done to the level are saved. I am storing "chunks" of 2048x2048 pixels in a file. Whenever the player moves over a section that doesn't have a file associated with the position, a new file is created. This works great and is very fast. My issue is that as you play, the file count gets larger and larger. I'm wondering what techniques can be used to reduce the file count without taking a performance hit. I am interested in how you would store/seek/update this data efficiently in a single file instead of multiple files.
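    One way to do it, sketched below under some simplifying assumptions (fixed-size uncompressed chunks, a sidecar JSON index mapping chunk coordinates to byte offsets): new chunks are appended to a single data file, and existing chunks are overwritten in place, so a modified chunk never grows the file. The class and file names are illustrative, not a standard format.

    ```python
    import json
    import os

    CHUNK_BYTES = 2048 * 2048 * 4          # one 2048x2048 chunk at 4 bytes per pixel

    class ChunkStore:
        def __init__(self, path):
            mode = "r+b" if os.path.exists(path) else "w+b"
            self.data = open(path, mode)   # single data file holding all chunks
            self.index_path = path + ".idx"
            self.index = {}                # (cx, cy) -> byte offset in the data file
            if os.path.exists(self.index_path):
                with open(self.index_path) as f:
                    self.index = {tuple(map(int, k.split(","))): v
                                  for k, v in json.load(f).items()}

        def save(self, cx, cy, blob):
            assert len(blob) == CHUNK_BYTES
            off = self.index.get((cx, cy))
            if off is None:                        # new chunk: append to the end
                self.data.seek(0, os.SEEK_END)
                off = self.data.tell()
                self.index[(cx, cy)] = off
            self.data.seek(off)                    # existing chunk: overwrite in place
            self.data.write(blob)
            self.data.flush()
            with open(self.index_path, "w") as f:  # persist the offset index
                json.dump({f"{x},{y}": v for (x, y), v in self.index.items()}, f)

        def load(self, cx, cy):
            off = self.index.get((cx, cy))
            if off is None:
                return None                        # chunk not generated yet
            self.data.seek(off)
            return self.data.read(CHUNK_BYTES)
    ```

    Because every chunk is the same size, a seek plus one read or write is all any operation costs, which should keep performance close to the one-file-per-chunk approach while capping the file count at two.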

    Read the article

  • Improving Shopfloor Data Collection with Oracle Manufacturing Operations Center

    Successful factories around the world leverage information to drive their production and supply chains. New tools are available today to further accelerate data collection, analysis, contextualization and collaboration for the various stakeholders involved in the manufacturing process. Oracle Manufacturing Operations Center (MOC) addresses the factory's need for accurate and timely information about product and process quality, insight into shop floor operations, and performance of production assets. It solves the complex problem of connecting fragmented, disconnected shop floor data to the business context of your ERP, and provides the solid foundation for running Continuous Improvement (CI) programs such as Lean and Six Sigma.

    Read the article

  • All Access Pass to Oracle Support

    - by Leslie-Oracle
    Looking for tips, recommendations and resources to help you keep your Oracle applications and systems running at peak performance? Want to find out how to get more out of your Oracle Premier Support coverage? More than 500 experts from across Services and Support will be on hand at Oracle OpenWorld to answer your questions and share best practices for adopting and optimizing Oracle technology. Find out what Oracle experts know about the best tools, tips and resources for supporting and upgrading Oracle technology.

    - Attend one of our "Best Practices" sessions.
    - Stop by the Oracle Support Stars Bar to talk with support experts. Open daily @ Moscone West, Exhibition Hall 3161.
    - See Oracle support tools in action at one of our demos.

    View the schedule of all of our Oracle Premier Support activities at Oracle OpenWorld for more information. See you there!

    Read the article
