Search Results

Search found 14282 results on 572 pages for 'performance counter'.


  • Workshop CUOA-Oracle Hyperion "Pianificazione economico-finanziaria, reporting e performance management" - Altavilla Vicentina, 25/10/2012

    - by Edilio Rossi
    More than 100 professionals - managers from the Administration, Finance and Control functions of companies, and consultants in the field - attended the workshop organized by CUOA and Oracle Hyperion, in collaboration with Adacta Studio Associato. It was a unique opportunity to explore "extended" planning and management control in Italian companies - small, medium and large - alternating different perspectives (academic, consulting, technology/application and end-user), all tied together by the common thread of the evolution of models, tools and the use of advanced systems for financial and balance-sheet planning and budgeting. Particular attention was paid to the bank-company relationship in light of the current crisis, and to how innovative performance management and business intelligence systems can help management redesign how companies are financed and negotiate with the various stakeholders. Thanks to the customer cases from GIV (Gruppo Italiano Vini) and Datalogic, attendees could see first-hand how planning and control models and tools are used in companies that are quite different, yet both face some of the challenges that today's markets pose to Italian businesses. The presentations are available on request by sending an email to: paolo.leveghi-AT-oracle.com

    Read the article

  • High Performance SQL Views Using WITH(NOLOCK)

    - by gt0084e1
    Every now and then you find a simple way to make everything much faster. We often find customers creating data warehouses or OLAP cubes even though they have a relatively small amount of data (a few gigs) compared to their server memory. If you have more server memory than the size of your database or working set, nearly any aggregate query should run in a second or less. In some situations there may be high traffic from the transactional application, and SQL Server may wait for several other queries to run before giving you your results. The purpose of this is to make sure you don't get two versions of the truth. In an ATM system, you want to give the bank balance after the withdrawal, not before, or you may get a very unhappy customer. So by default databases are rightly very conservative about this kind of thing. Unfortunately this split-second precision comes at a cost. The performance of the query may not be acceptable by today's standards because the database has to maintain locks on the server. Fortunately, SQL Server gives you a simple way to ask for the current version of the data without the pending transactions. To better facilitate reporting, you can create a view that includes these directives. CREATE VIEW CategoriesAndProducts AS SELECT * FROM dbo.Categories WITH(NOLOCK) INNER JOIN dbo.Products WITH(NOLOCK) ON dbo.Categories.CategoryID = dbo.Products.CategoryID In some cases queries that were taking minutes end up taking seconds. This is much easier than moving the data to a separate database, and it's still pretty much real time, give or take a few milliseconds. You've been warned not to use this for bank balances though. More from Data Stream
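
    A rough client-side alternative is to request the same dirty-read behaviour by lowering the connection's transaction isolation level instead of embedding NOLOCK hints in a view. A minimal Java/JDBC sketch, assuming the Microsoft SQL Server JDBC driver is on the classpath, that the CategoriesAndProducts view above exposes CategoryName and ProductName columns, and using a hypothetical connection string:

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class ReportingQuery {
          public static void main(String[] args) throws Exception {
              // Hypothetical connection string; adjust server, database and credentials.
              String url = "jdbc:sqlserver://localhost;databaseName=Sales;integratedSecurity=true";
              try (Connection con = DriverManager.getConnection(url)) {
                  // READ UNCOMMITTED is roughly what WITH(NOLOCK) asks for per table:
                  // no shared locks are taken, so reads do not wait but may be dirty.
                  con.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
                  try (Statement st = con.createStatement();
                       ResultSet rs = st.executeQuery(
                           "SELECT CategoryName, ProductName FROM CategoriesAndProducts")) {
                      while (rs.next()) {
                          System.out.println(rs.getString(1) + " - " + rs.getString(2));
                      }
                  }
              }
          }
      }

    As with the view itself, this trades consistency for speed, so it only belongs on reporting paths.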

    Read the article

  • Honing Performance Tuning Skills on MySQL

    - by Antoinette O'Sullivan
    Get hands-on experience with techniques for tuning a MySQL Server with the Authorized MySQL Performance Tuning course. This course is designed for database administrators, database developers and system administrators who are responsible for managing, optimizing, and tuning a MySQL Server. You can follow this live instructor-led training: From your desk - choose from among the 800+ events on the live-virtual training schedule. In a classroom - a selection of events/locations is listed below.

      Location                  Date               Delivery Language
      Prague, Czech Republic    1 October 2012     Czech
      Warsaw, Poland            9 July 2012        Polish
      London, UK                19 November 2012   English
      Rome, Italy               23 October 2012    Italian
      Lisbon, Portugal          17 September 2012  European Portuguese
      Aix-en-Provence, France   4 September 2012   French
      Strasbourg, France        16 October 2012    French
      Nieuwegein, Netherlands   3 September 2012   Dutch
      Madrid, Spain             6 August 2012      Spanish
      Mechelen, Belgium         1 October 2012     English
      Riga, Latvia              10 December 2012   Latvian
      Petaling Jaya, Malaysia   10 September 2012  English
      Edmonton, Canada          27 August 2012     English
      Vancouver, Canada         27 August 2012     English
      Ottawa, Canada            26 November 2012   English
      Toronto, Canada           26 November 2012   English
      Montreal, Canada          26 November 2012   English
      Mexico City, Mexico       9 July 2012        Spanish
      Sao Paulo, Brazil         2 July 2012        Brazilian Portuguese

    To find a virtual or in-class event that suits you, go to http://oracle.com/education and choose a course and delivery type in your location.

    Read the article

  • How to improve Minecraft-esque voxel world performance?

    - by SomeXnaChump
    After playing Minecraft I marveled a bit at its large worlds, but at the same time I found them extremely slow to navigate, even with a quad core and meaty graphics card. Now I assume Minecraft is fairly slow because: A) It's written in Java, and as most of the spatial partitioning and memory management activities happen in there, it would naturally be slower than a native C++ version. B) It doesn't partition its world very well. I could be wrong on both assumptions; however it got me thinking about the best way to manage large voxel worlds. As it is a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (i.e. BlockType.Empty = 0, BlockType.Dirt = 1, etc.) Now, I am assuming that to make this sort of world perform well you would need to: A) Use a tree of some variety (oct/kd/bsp) to split all the cubes out; it seems like an oct/kd tree would be the better option as you can partition on a per-cube level rather than a per-triangle level. B) Use some algorithm to work out which blocks can currently be seen, as blocks closer to the user can occlude the blocks behind them, making it pointless to render them. C) Keep the block objects themselves lightweight, so it is quick to add and remove them from the trees. I guess there is no right answer to this, but I would be interested to see people's opinions on the subject. How would you improve performance in a large voxel-based world? A rough sketch of point C follows below.
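
    A minimal sketch of the dense storage idea in point C, written in Java (the names Chunk, EMPTY and DIRT are illustrative, not from the question): one byte per block inside fixed-size chunks keeps per-block memory tiny, makes adds and removes cheap, and a simple neighbour test prunes most hidden faces before any tree or culling pass.

      // Minimal sketch of dense per-chunk block storage for a voxel world.
      public class Chunk {
          public static final int SIZE = 16;          // 16 x 16 x 16 blocks per chunk
          public static final byte EMPTY = 0;
          public static final byte DIRT = 1;

          // One byte per block keeps a chunk at 4 KB instead of thousands of objects.
          private final byte[] blocks = new byte[SIZE * SIZE * SIZE];

          private static int index(int x, int y, int z) {
              return (y * SIZE + z) * SIZE + x;       // flatten (x, y, z) into one index
          }

          public byte get(int x, int y, int z) {
              return blocks[index(x, y, z)];
          }

          public void set(int x, int y, int z, byte type) {
              blocks[index(x, y, z)] = type;
          }

          // A block face only needs rendering if it sits on the chunk boundary or a
          // neighbouring block is empty, which culls most hidden geometry up front.
          public boolean isExposed(int x, int y, int z) {
              return get(x, y, z) != EMPTY
                  && (x == 0 || get(x - 1, y, z) == EMPTY
                   || x == SIZE - 1 || get(x + 1, y, z) == EMPTY
                   || y == 0 || get(x, y - 1, z) == EMPTY
                   || y == SIZE - 1 || get(x, y + 1, z) == EMPTY
                   || z == 0 || get(x, y, z - 1) == EMPTY
                   || z == SIZE - 1 || get(x, y, z + 1) == EMPTY);
          }
      }

    Chunks rather than one world-sized array also make it easy to page regions in and out, or to hang them off an octree for culling.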

    Read the article

  • Minimum percentage of free physical memory that Linux requires for optimal performance

    - by csoto
    Recently, we have been getting questions about the percentage of free physical memory that the OS requires for optimal performance, mainly applicable to physical compute nodes. Under normal conditions you may see that on nodes without any application running the OS takes (for example) between 24 and 25 GB of memory. The Linux system reports free memory in a different way, and most of those 25 GB (in the example) are available for user processes. E.g.: Mem: 99191652k total, 23785732k used, 75405920k free, 173320k buffers The MOS Doc Id. 233753.1 - "Analyzing Data Provided by '/proc/meminfo'" - explains it (section 4 - "Final Remarks"): Free Memory and Used Memory. Estimating resource usage, especially the memory consumption of processes, is far more complicated than it looks at first glance. The philosophy is that an unused resource is a wasted resource. The kernel therefore will use as much RAM as it can to cache information from your local and remote filesystems/disks. This builds up over time as reads and writes are done on the system, trying to keep the data stored in RAM as relevant as possible to the processes that have been running on your system. If there is free RAM available, more caching will be performed and thus more memory 'consumed'. However this doesn't really count as resource usage, since this cached memory is available in case some other process needs it. The cache is reclaimed, not at the time of process exit (you might start up another process soon that needs the same data), but upon demand. That said, focusing more specifically on the percentage question: apart from the memory the OS takes, how much free memory must, at a minimum, be available on every node so that it operates normally? The answer is: as a rule of thumb, 80% memory utilization is a good threshold; anything bigger than that should be investigated and remedied.
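
    As a rough illustration of that distinction, a utilization figure that treats buffers and page cache as reclaimable can be computed straight from /proc/meminfo. A minimal Java sketch, assuming a Linux host; the class name and the use of the 80% rule of thumb above are illustrative:

      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Paths;
      import java.util.HashMap;
      import java.util.Map;

      // Rough memory-utilization estimate that ignores reclaimable page cache.
      public class MemPressure {
          public static void main(String[] args) throws IOException {
              Map<String, Long> kb = new HashMap<>();
              for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
                  String[] parts = line.split("\\s+");   // e.g. "MemTotal: 99191652 kB"
                  kb.put(parts[0].replace(":", ""), Long.parseLong(parts[1]));
              }
              long total = kb.get("MemTotal");
              long reclaimable = kb.get("MemFree") + kb.get("Buffers") + kb.get("Cached");
              double usedPct = 100.0 * (total - reclaimable) / total;
              System.out.printf("Utilization excluding cache: %.1f%%%n", usedPct);
              if (usedPct > 80.0) {
                  System.out.println("Above the 80% threshold - worth investigating.");
              }
          }
      }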

    Read the article

  • What's the best way to measure and track performance over various calls at runtime?

    - by bitcruncher
    Hello. I'm trying to optimize the performance of my code, but I'm not familiar with Xcode's debuggers or debuggers in general. Is it possible to track the execution time and frequency of calls being made at runtime? Imagine a chain of events with some recursive calls over a fraction of a second. What's the best way to track where the CPU spends most of its time? Many thanks. Edit: Maybe this is better asked by saying, how do I use the Xcode debug tools to do a stack trace?

    Read the article

  • What is the performance impact of tracing in C# and ASP.NET?

    - by SkippyFire
    I found this in some production login code I was looking at recently... HttpContext.Current.Trace.Write(query + ": " + username + ", " + password); ...where query is a short SQL query to grab matching users. Does this have any sort of performance impact? I assume it's very small. Also, what is the purpose of this exact type of trace, using the HTTP context? Where does this data get traced to? Thanks in advance!

    Read the article

  • Are there ways to improve NHibernate's performance regarding entity instantiation?

    - by denny_ch
    Hi folks, while profiling NHibernate with NHProf I noticed that a lot of time is spent on entity building, or at least spent outside the query duration (database roundtrip). The project I'm currently working on prefetches some static data (which goes into the 2nd level cache) at application start. There are about 3000 rows in the result set (and maybe 30 columns), which is queried in 75 ms. The overall duration observed by NHProf is about 13 SECONDS! Is this typical behaviour? I know that NHibernate shouldn't be used for bulk operations, but I didn't think that entity instantiation would be so expensive. Are there ways to improve performance in such situations, or do I have to live with it? Thx, denny_ch

    Read the article

  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time series data, 500M entries now, and 200K new records every day. The total size is around 15 GB for now. My clients are querying the table via a PHP script mostly, and the size of the result set is around 10K records (not very large). select * from T where timestamp > X and timestamp < Y and additionFilters And I want this operation to be cheap. Currently my table is hosted in Postgres 7, on a single 16 GB memory box, and I would love to see some good suggestions for hosting this at low cost while also letting me scale up for performance if needed. The table serves: 1. Query: 90% 2. Insert: 9.9% 3. Update: 0.1% <-- very rare.

    Read the article

  • Having all Views in the Shared folder - works but is throwing "caught exceptions". Performance concerns?

    - by Scott
    Hi everyone, I have a simple but heavily used app done in VS2010/MVC2. I didn't like having separate folders for each view/controller and so have all the views in the Shared folder. It's working fine, but while debugging in VS I noticed that it's throwing IO "caught exceptions", since it seems to be looking in the [FolderName]/[ViewName] folder before going down to the Shared folder. Again, the app runs fine, but I'm concerned that all these "caught exceptions" will have a minor performance impact since they do have a cost in the CLR. Is there any way I can configure the routing so that it will only look in the Shared folder? Thanks.

    Read the article

  • How to test the performance of a user's PC in/for Flash?

    - by Jan P.
    Hey, I'm a developer on a nice space MMO using Flash. On new PCs performance is quite good, but some features shouldn't be enabled on older PCs because the framerate drops to shit if we do. Flash wasn't made for this, but hey, pushing boundaries is fun. An example is fullscreen mode. Of course every user can manually enable it, but "advertising" it to a user with an older PC would be a bad idea - but for the Alienware crowd it would be dumb not to. So I want to find out how "capable" a user's PC is, to decide if I should enable or disable some features for them. Any ideas? Thanks, Sujan

    Read the article

  • Does normalization really hurt performance in high traffic sites?

    - by Luke101
    I am designing a database and I would like to normalize it. In one query I will be joining about 30-40 tables. Will this hurt the website's performance if it ever becomes extremely popular? This will be the main query and it will be getting called 50% of the time. In the other queries I will be joining about 2 tables. I have a choice right now to normalize or not to normalize, but if the normalization becomes a problem in the future I may have to rewrite 40% of the software, and that may take me a long time. Does normalization really hurt in this case? Should I denormalize now while I have the time?

    Read the article

  • When to trash hashmap contents to avoid performance degradation?

    - by Jack
    Hello, I'm working in Java with a large (millions of entries) hashmap that is built with a capacity of 10,000,000 and a load factor of .75. It is used to cache some values; cached values become useless with time (not accessed anymore), but I can't remove the useless ones along the way, so I would like to entirely empty the cache when its performance starts to degrade. How can I decide when it's good to do it? For example, with a capacity of 10 million and a load factor of .75, should I empty it when it reaches 7.5 million elements? I tried various threshold values, but I would like to have an analytical one. I've already verified that emptying it when it's quite full is a boost for performance (the first 2-3 algorithm iterations after the wipe just fill it back, then it starts running faster than before the wipe). Thanks
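
    Two sketches of how such a policy could look in Java (the names and the 75% threshold are illustrative, not from the question): wipe the whole map once it crosses a chosen fraction of its capacity, or let a LinkedHashMap in access order evict the least-recently-used entries so the cache never needs a full wipe at all.

      import java.util.LinkedHashMap;
      import java.util.Map;

      // Two illustrative strategies for keeping a large cache from degrading.
      public class CacheSketch {

          // Strategy 1: wipe the whole map once it crosses a fraction of capacity.
          static final int CAPACITY = 10_000_000;
          static final int WIPE_THRESHOLD = (int) (CAPACITY * 0.75);

          static <K, V> void putWithWipe(Map<K, V> cache, K key, V value) {
              if (cache.size() >= WIPE_THRESHOLD) {
                  cache.clear();           // cheap compared to scanning for stale entries
              }
              cache.put(key, value);
          }

          // Strategy 2: let LinkedHashMap evict the least-recently-used entry on insert.
          static <K, V> Map<K, V> lruCache(final int maxEntries) {
              return new LinkedHashMap<K, V>(maxEntries, 0.75f, true) {
                  @Override
                  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                      return size() > maxEntries;
                  }
              };
          }
      }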

    Read the article

  • Is there a linear-time performance guarantee when using an Iterator?

    - by polygenelubricants
    If all that you're doing is a simple one-pass iteration (i.e. only hasNext() and next(), no remove()), are you guaranteed linear time performance and/or amortized constant cost per operation? Is this specified in the Iterator contract anywhere? Are there data structures/Java Collection which cannot be iterated in linear time? java.util.Scanner implements Iterator<String>. A Scanner is hardly a data structure (e.g. remove() makes absolutely no sense). Is this considered a design blunder? Is something like PrimeGenerator implements Iterator<Integer> considered bad design, or is this exactly what Iterator is for? (hasNext() always returns true, next() computes the next number on demand, remove() makes no sense). Similarly, would it have made sense for java.util.Random implements Iterator<Double>?
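
    A hypothetical sketch of the kind of non-collection Iterator being described (not taken from any library): hasNext() always returns true, next() computes the next value on demand, and remove() is meaningless, yet nothing in the Iterator contract forbids this or promises any per-operation cost.

      import java.util.Iterator;

      // Hypothetical infinite generator: hasNext() is always true,
      // next() does an unbounded amount of work per call.
      public class PrimeGenerator implements Iterator<Integer> {
          private int current = 1;

          @Override
          public boolean hasNext() {
              return true;                 // there is always another prime
          }

          @Override
          public Integer next() {
              do {
                  current++;
              } while (!isPrime(current));
              return current;
          }

          @Override
          public void remove() {
              throw new UnsupportedOperationException("remove() makes no sense here");
          }

          private static boolean isPrime(int n) {
              for (int i = 2; (long) i * i <= n; i++) {
                  if (n % i == 0) return false;
              }
              return n >= 2;
          }
      }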

    Read the article

  • What are the performance characteristics of SignalR at scale?

    - by Joel Martinez
    I'm interested in the performance characteristics of SignalR at scale ... particularly, how it behaves at the fringes of capability. When a server is at capacity, what happens? Does it drop messages? Do some clients not get notified? Are messages queued until all are delivered? And if so, will the queue eventually overflow and crash the server? I ask because conducting such a test myself would be impractical, and I'm hoping someone could point me to documentation speaking to this ... or perhaps someone who has seen how SignalR behaves at scale could comment. Thanks! Note: I'm familiar with this other Stack Overflow question on the stability and scalability of SignalR, but I believe I'm asking a slightly different question: I'm not concerned with the theoretical scaling limits; I want to know how it behaves when it reaches those limits, so I know what to be on the lookout for.

    Read the article

  • Performance Impact of Generating 100's of Dynamic Methods in Ruby?

    - by viatropos
    What are the performance issues associated with generating 100's of dynamic methods in Ruby? I've been interested in using the Ruby Preferences Gem and noticed that it generates a bunch of helper methods for each preference you set. For instance: class User < ActiveRecord::Base preference :hot_salsa end ...generates something like: user.prefers_hot_salsa? # => false user.prefers_hot_salsa # => false If there are 100's of preferences like this, how does this impact the application? I assume it's not really a big deal but I'm just wondering, theoretically.

    Read the article

  • Why do debug symbols so adversely affect the performance of threaded applications on Linux?

    - by fluffels
    Hi. I'm writing a ray tracer. Recently, I added threading to the program to exploit the additional cores on my i5 quad core. In a weird turn of events, the debug version of the application is now running slower, but the optimized build is running faster than before I added threading. I'm passing the "-g -pg" flags to gcc for the debug build and the "-O3" flag for the optimized build. Host system: Ubuntu Linux 10.4 AMD64. I know that debug symbols add significant overhead to the program, but the relative performance has always been maintained. I.e. a faster algorithm will always run faster in both debug and optimized builds. Any idea why I'm seeing this behavior?

    Read the article

  • If I take a larger datatype than needed, will it affect performance in SQL Server?

    - by Shantanu Gupta
    If I use a larger datatype where I know a smaller one would be sufficient for the possible values I will insert into a table, will it affect performance in SQL Server in terms of speed or in any other way? E.g. IsActive is (0,1,2,3), never more than 3 in any case. I know I should use tinyint, but due to some reasons (consider it a compulsion), I am making every numeric field bigint and every character field nvarchar(max). Please give statistics if possible, to help me overcome that compulsion. I need some solid analysis that can really make someone rethink before choosing a datatype.

    Read the article

  • Does INNER JOIN performance depend on the order of tables?

    - by Kartic
    A question suddenly came to my mind while I was tuning a stored procedure. Let me ask it - I have two tables, table1 and table2. table1 contains huge data and table2 contains less data. Is there any performance difference between these two queries (I am only changing the order of the tables)? Query1: SELECT t1.col1, t2.col2 FROM table1 t1 INNER JOIN table2 t2 ON t1.col1=t2.col2 Query2: SELECT t1.col1, t2.col2 FROM table2 t2 INNER JOIN table1 t1 ON t1.col1=t2.col2 We are using Microsoft SQL Server 2005.

    Read the article

  • Can using non-primitive Integer/Long datatypes too frequently in the application hurt performance?

    - by Marcos
    I am using Long/Integer data types very frequently in my application, to build generic datatypes. I fear that using these wrapper objects instead of primitive data types may be harmful for performance, since each use may need to create an object, which is an expensive operation. But it also seems that I have no other choice (when I have to use primitives with generics) than to just use them. However, it would still be great if you could suggest anything I could do to make it better, or any way I could avoid it. Also, what may be the downsides of this? Suggestions welcome!
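
    As a rough illustration of where the cost actually bites (the class name and loop size below are made up for the sketch), boxed arithmetic in a hot loop allocates and unboxes on every iteration, while the same loop over a primitive does not; outside hot paths the difference is usually negligible.

      // Illustrative comparison of boxed vs. primitive accumulation in a hot loop.
      public class BoxingCost {
          public static void main(String[] args) {
              final int n = 50_000_000;

              long t0 = System.nanoTime();
              Long boxedSum = 0L;                 // every += allocates/unboxes a Long
              for (int i = 0; i < n; i++) {
                  boxedSum += i;
              }
              long t1 = System.nanoTime();

              long primitiveSum = 0L;             // plain long arithmetic, no allocation
              for (int i = 0; i < n; i++) {
                  primitiveSum += i;
              }
              long t2 = System.nanoTime();

              System.out.printf("boxed: %d ms, primitive: %d ms (sums %d / %d)%n",
                      (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, boxedSum, primitiveSum);
          }
      }

    A throwaway timing loop like this is only indicative; a proper harness such as JMH gives more trustworthy numbers.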

    Read the article

  • Improving the performance of an NHibernate Data Access Layer

    - by Amitabh
    I am working on improving the performance of the data access layer of an existing ASP.NET web application. The scenario is as follows: It's a web-based application in ASP.NET. The data access layer is built using NHibernate 1.2 and exposed as a WCF service. The entity classes are marked with DataContract. Lazy loading is not used, and because of the eager fetching of relations a huge number of database objects is loaded into memory. The number of hits to the database is also high. For example, I profiled the application using NHProf and there were 50+ SQL calls to load one entity object by its primary key. I also cannot change the code much, as it's an existing live application with no NUnit test cases at all. Can I please get some suggestions here?

    Read the article

  • Performance optimization for MSSQL: decrease stored procedure execution time or offload the server?

    - by tim
    Hello everybody! We have a web service which provides hotel search. There is a problem with performance: a single request to the service takes around 5000 ms. Almost all of the time is spent in the database executing stored procedures. During the request our server (MSSQL 2008) consumes ~90% of the processor time. When 2 requests are made in parallel the average time grows to around 7000 ms. As the number of requests increases, the average response time increases as well. We have 20-30 requests per minute. Which kind of optimization is best in this case, keeping in mind that the goal is to provide a stable response time for the service: 1) try to decrease the stored procedures' execution time, or 2) try to find a way to offload the server? It would be interesting to hear from people who deal with booking sites. Thanks!

    Read the article

  • What's the most performance-effective way to have a WebBrowser inside a class library?

    - by Xaqron
    I'm developing a class library. I need some data from the internet, and this cannot be done with HttpWebRequest in my case, so I want to use the WebBrowser component. The WebBrowser is used to open a single page and fetch some data from it, so its lifetime is very short. The running thread is MTA and no message pump or STA thread is available by default (the class library is used by an ASP.NET application). How do I create a WebBrowser object, run it on an STA thread, fetch data from a web page and finally dispose of it with the least performance impact on the application? I just need the idea/concept and will find the details myself. Thanks guys

    Read the article

  • JavaScript: String Concatenation slow performance? Array.join('')?

    - by NickNick
    I've read that if I have a for loop, I should not use string concatenation because it's slow. Such as: for (i=0;i<10000000;i++) { str += 'a'; } And instead, I should use Array.join(), since it's much faster: var tmp = []; for (i=0;i<10000000;i++) { tmp.push('a'); } var str = tmp.join(''); However, I have also read that string concatenation is ONLY a problem for Internet Explorer, and that browsers such as Safari/Chrome, which use WebKit, actually perform FASTER using string concatenation than Array.join(). I've attempted to find a performance comparison across all major browsers of string concatenation vs Array.join() and haven't been able to find one. As such, which is the faster and more efficient JavaScript code: string concatenation or Array.join()?

    Read the article

  • SQL SERVER – Finding Memory Pressure – External and Internal

    - by pinaldave
    The following query will provide details of external and internal memory pressure. It returns how much of the existing memory is assigned to each kind of memory type. SELECT TYPE, SUM(single_pages_kb) InternalPressure, SUM(multi_pages_kb) ExternalPressure FROM sys.dm_os_memory_clerks GROUP BY TYPE ORDER BY SUM(single_pages_kb) DESC, SUM(multi_pages_kb) DESC GO What is your method to find memory pressure? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
