Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.

Page 63 of 537

  • Issue in understanding how to compare the performance of a classifier using ROC

    - by user1214586
    I am trying to demystify pattern recognition techniques and have understood a few of them. I am trying to design a classifier M. A gesture is classified based on the Hamming distance between the sample time series y and the training time series x. The results of the classifier are probabilistic values. There are 3 classes/categories with labels A, B, C which classify hand gestures, with 100 samples per class to be classified (single feature, data length = 100). The data are different time series (x coordinate vs. time). The training set is used to assign probabilities indicating which gesture has occurred how many times. So, out of 10 training samples, if gesture A appeared 6 times then the probability that a gesture falls under category A is P(A)=0.6, similarly P(B)=0.3 and P(C)=0.1. Now, I am trying to compare the performance of this classifier with a Bayes classifier, k-NN, principal component analysis (PCA) and a neural network. On what basis, with which parameters, and by which method should I do this if I use ROC or cross-validation? Since the features for my classifier are the probabilistic values used for the ROC plot, what should the features be for k-NN, Bayes classification and PCA? Is there code for this that would be useful? What should the value of k be if there are 3 classes of gestures? Please help. I am in a fix.
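
    One hedged way to put the classifiers on a common footing is to have each of them produce a per-class score on held-out (or cross-validated) data and trace a one-vs-rest ROC curve per class from those scores. The sketch below is a minimal, self-contained Java illustration of computing ROC points and the AUC from scores and binary labels; the class names and sample data are made up for illustration, and in practice a library routine would normally be used instead.

      import java.util.Arrays;

      public class RocSketch {
          // ROC points (FPR, TPR) obtained by sweeping a threshold over the scores.
          // scores[i] is the classifier's probability that sample i is the positive
          // class (e.g. "gesture A" vs rest); labels[i] is true if it really is.
          static double[][] rocPoints(double[] scores, boolean[] labels) {
              Integer[] order = new Integer[scores.length];
              for (int i = 0; i < order.length; i++) order[i] = i;
              Arrays.sort(order, (a, b) -> Double.compare(scores[b], scores[a]));

              int positives = 0, negatives = 0;
              for (boolean l : labels) { if (l) positives++; else negatives++; }

              double[][] points = new double[scores.length + 1][2];   // starts at (0,0)
              int tp = 0, fp = 0;
              for (int k = 0; k < order.length; k++) {
                  if (labels[order[k]]) tp++; else fp++;
                  points[k + 1][0] = (double) fp / negatives;   // false positive rate
                  points[k + 1][1] = (double) tp / positives;   // true positive rate
              }
              return points;
          }

          // Area under the curve via the trapezoid rule.
          static double auc(double[][] points) {
              double area = 0;
              for (int k = 1; k < points.length; k++) {
                  area += (points[k][0] - points[k - 1][0])
                          * (points[k][1] + points[k - 1][1]) / 2;
              }
              return area;
          }

          public static void main(String[] args) {
              // Hypothetical "gesture A vs rest" scores on a small test set.
              double[] scores = {0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2};
              boolean[] labels = {true, true, false, true, false, false, true, false};
              System.out.println("AUC = " + auc(rocPoints(scores, labels)));   // 0.75
          }
      }

    Reporting the per-class AUC for each classifier under the same cross-validation folds is one common basis for comparison; for k-NN with 3 classes, a small odd k (e.g. 3 or 5) chosen by cross-validation is a typical starting point.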

    Read the article

  • How to evaluate a user against optimal performance?

    - by Alex K
    I have trouble coming up with a system for assigning a rating to a player's performance. Well, technically there is a trivial rating system, but I don't like it because it would mean assigning negative scores, which I think most players will be discouraged by. The problem is that I only know the ideal number of actions to get the desired result. The worst case is an infinite number of actions, so there is no obvious scale. The trivial way I referred to above is to take score = (#optimal-moves - #players-moves), with the ideal score being zero. However, psychologically people like big numbers. No one wants to win by getting a mark of 0. I wonder if there is a system that someone else has come up with before to solve this problem? Essentially I wish to score the players based on how close they've come to the ideal solution. Different challenges will have different optimal numbers of actions, so the scoring system needs to take that into account, e.g. Challenge 1 - max 10 points, Challenge 2 - max 20 points. I don't mind giving the players negative scores if they've performed exceptionally badly, I just don't want all scores to be <= 0.
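
    For illustration only (this is an assumption about one workable scheme, not an established standard): scaling the ratio of optimal moves to actual moves into each challenge's point budget keeps every score positive, awards the full budget for perfect play, and decays smoothly as the player wastes moves.

      public class MoveScore {

          // Map a player's move count onto a positive 0..maxPoints scale:
          // a perfect run earns the full per-challenge budget, and the score
          // decays toward (but never reaches) zero as more moves are wasted.
          static int score(int optimalMoves, int playerMoves, int maxPoints) {
              if (playerMoves <= optimalMoves) {
                  return maxPoints;                               // matched or beat the known optimum
              }
              double ratio = (double) optimalMoves / playerMoves; // in (0, 1)
              return (int) Math.round(maxPoints * ratio);
          }

          public static void main(String[] args) {
              System.out.println(score(10, 10, 20));   // 20 : optimal play
              System.out.println(score(10, 15, 20));   // 13 : a bit wasteful
              System.out.println(score(10, 100, 20));  //  2 : very inefficient, still positive
          }
      }

    A steeper decay (e.g. squaring the ratio) or a floor below which negative points kick in can be layered on top without changing the basic shape.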

    Read the article

  • Optimizing MySQL, Improving Performance of Database Servers

    - by Antoinette O'Sullivan
    Optimization involves improving the performance of a database server and the queries that run against it. Optimization reduces query execution time, and optimized queries benefit everyone who uses the server. When the server runs more smoothly and processes more queries with fewer resources, it performs better as a whole. To learn more about how a MySQL developer can make a difference with optimization, take the MySQL for Developers training course. This 5-day instructor-led course is available as: Live-Virtual Event: Attend a live class from your own desk - no travel required. Choose from a selection of events on the schedule to suit different timezones. In-Class Event: Travel to an education center to attend an event. Below is a selection of the events on the schedule (location, date, delivery language):
      Vienna, Austria: 17 November 2014 (German)
      Brussels, Belgium: 8 December 2014 (English)
      Sao Paulo, Brazil: 14 July 2014 (Brazilian Portuguese)
      London, England: 29 September 2014 (English)
      Belfast, Ireland: 6 October 2014 (English)
      Dublin, Ireland: 27 October 2014 (English)
      Milan, Italy: 10 November 2014 (Italian)
      Rome, Italy: 21 July 2014 (Italian)
      Nairobi, Kenya: 14 July 2014 (English)
      Petaling Jaya, Malaysia: 25 August 2014 (English)
      Utrecht, Netherlands: 21 July 2014 (English)
      Makati City, Philippines: 29 September 2014 (English)
      Warsaw, Poland: 25 August 2014 (Polish)
      Lisbon, Portugal: 13 October 2014 (European Portuguese)
      Porto, Portugal: 13 October 2014 (European Portuguese)
      Barcelona, Spain: 7 July 2014 (Spanish)
      Madrid, Spain: 3 November 2014 (Spanish)
      Valencia, Spain: 24 November 2014 (Spanish)
      Basel, Switzerland: 4 August 2014 (German)
      Bern, Switzerland: 4 August 2014 (German)
      Zurich, Switzerland: 4 August 2014 (German)
    The MySQL for Developers course helps prepare you for the MySQL 5.6 Developer OCP certification exam. To register for an event, request an additional event or learn more about the authentic MySQL curriculum, go to http://education.oracle.com/mysql.

    Read the article

  • High Performance SQL Views Using WITH(NOLOCK)

    - by gt0084e1
    Every now and then you find a simple way to make everything much faster. We often find customers creating data warehouses or OLAP cubes even though they have a relatively small amount of data (a few gigs) compared to their server memory. If you have more server memory than the size of your database or working set, nearly any aggregate query should run in a second or less. In some situations there may be high traffic from the transactional application, and SQL Server may wait for several other queries to run before giving you your results. The purpose of this is to make sure you don’t get two versions of the truth. In an ATM system, you want to give the bank balance after the withdrawal, not before, or you may get a very unhappy customer. So by default databases are rightly very conservative about this kind of thing. Unfortunately this split-second precision comes at a cost. The performance of the query may not be acceptable by today’s standards because the database has to maintain locks on the server. Fortunately, SQL Server gives you a simple way to ask for the current version of the data without the pending transactions. To better facilitate reporting, you can create a view that includes these directives.
      CREATE VIEW CategoriesAndProducts AS
      SELECT *
      FROM dbo.Categories WITH(NOLOCK)
      INNER JOIN dbo.Products WITH(NOLOCK)
          ON dbo.Categories.CategoryID = dbo.Products.CategoryID
    In some cases queries that were taking minutes end up taking seconds. This is much easier than moving the data to a separate database, and it’s still pretty much real time, give or take a few milliseconds. You’ve been warned not to use this for bank balances though.

    Read the article

  • How to improve Minecraft-esque voxel world performance?

    - by SomeXnaChump
    After playing Minecraft I marveled a bit at its large worlds, but at the same time I found them extremely slow to navigate, even with a quad core and meaty graphics card. Now I assume Minecraft is fairly slow because: A) It's written in Java, and as most of the spatial partitioning and memory management activities happen in there, it would naturally be slower than a native C++ version. B) It doesn't partition its world very well. I could be wrong on both assumptions; however, it got me thinking about the best way to manage large voxel worlds. As it is a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (e.g. BlockType.Empty = 0, BlockType.Dirt = 1, etc.). Now, I am assuming that to make this sort of world perform well you would need to: A) Use a tree of some variety (oct/kd/bsp) to split all the cubes out; it seems like an oct/kd tree would be the better option as you can partition on a per-cube level rather than a per-triangle level. B) Use some algorithm to work out which blocks can currently be seen, as blocks closer to the user can occlude the blocks behind them, making it pointless to render them. C) Keep the block objects themselves lightweight, so it is quick to add and remove them from the trees. I guess there is no right answer to this, but I would be interested to see people's opinions on the subject. How would you improve performance in a large voxel-based world?
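
    As a rough illustration of point C, many voxel engines avoid allocating one object per block altogether and instead store each fixed-size chunk as a flat array of type IDs, keyed by chunk coordinates. The sketch below is a minimal, assumed layout (the chunk size, key packing and type codes are illustrative choices, not how Minecraft actually does it):

      import java.util.HashMap;
      import java.util.Map;

      public class VoxelWorld {
          static final int CHUNK = 16;   // chunk edge length in blocks

          // One byte per block: 0 = empty, 1 = dirt, 2 = stone, ...
          static final class Chunk {
              final byte[] blocks = new byte[CHUNK * CHUNK * CHUNK];
              byte get(int x, int y, int z)            { return blocks[(x * CHUNK + y) * CHUNK + z]; }
              void set(int x, int y, int z, byte type) { blocks[(x * CHUNK + y) * CHUNK + z] = type; }
          }

          // Sparse world: only chunks that contain something are ever allocated.
          private final Map<Long, Chunk> chunks = new HashMap<>();

          // Pack three 21-bit chunk coordinates into one long key.
          private static long key(int cx, int cy, int cz) {
              return ((cx & 0x1FFFFFL) << 42) | ((cy & 0x1FFFFFL) << 21) | (cz & 0x1FFFFFL);
          }

          public byte getBlock(int x, int y, int z) {
              Chunk c = chunks.get(key(Math.floorDiv(x, CHUNK), Math.floorDiv(y, CHUNK), Math.floorDiv(z, CHUNK)));
              return c == null ? 0
                      : c.get(Math.floorMod(x, CHUNK), Math.floorMod(y, CHUNK), Math.floorMod(z, CHUNK));
          }

          public void setBlock(int x, int y, int z, byte type) {
              long k = key(Math.floorDiv(x, CHUNK), Math.floorDiv(y, CHUNK), Math.floorDiv(z, CHUNK));
              chunks.computeIfAbsent(k, unused -> new Chunk())
                    .set(Math.floorMod(x, CHUNK), Math.floorMod(y, CHUNK), Math.floorMod(z, CHUNK), type);
          }
      }

    Visibility (point B) can then be handled per chunk rather than per block: frustum-cull whole chunks, and when meshing a chunk only emit faces that border an empty block.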

    Read the article

  • Minimum percentage of free physical memory that Linux requires for optimal performance

    - by csoto
    Recently, we have been getting questions about the percentage of free physical memory that the OS requires for optimal performance, mainly applicable to physical compute nodes. Under normal conditions you may see that, on nodes without any application running, the OS takes (for example) between 24 and 25 GB of memory. The Linux system reports free memory in a different way, and most of those 25 GB (in the example) are available for user processes. For example:
      Mem: 99191652k total, 23785732k used, 75405920k free, 173320k buffers
    MOS Doc ID 233753.1 - "Analyzing Data Provided by '/proc/meminfo'" - explains it (section 4 - "Final Remarks"): Free Memory and Used Memory. Estimating resource usage, especially the memory consumption of processes, is far more complicated than it looks at first glance. The philosophy is that an unused resource is a wasted resource. The kernel therefore will use as much RAM as it can to cache information from your local and remote filesystems/disks. This builds up over time as reads and writes are done on the system, trying to keep the data stored in RAM as relevant as possible to the processes that have been running on your system. If there is free RAM available, more caching will be performed and thus more memory 'consumed'. However this doesn't really count as resource usage, since this cached memory is available in case some other process needs it. The cache is reclaimed not at the time of process exit (you might start up another process soon that needs the same data), but upon demand. That said, focusing more specifically on the percentage question: apart from the memory that the OS takes, how much free memory, at a minimum, must be available on every node so that it operates normally? The answer is: as a rule of thumb, 80% memory utilization is a good threshold; anything bigger than that should be investigated and remedied.
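
    To apply that rule of thumb in practice, the reclaimable page cache and buffers have to be subtracted before computing utilization, since (as explained above) that memory is not really "used". Below is a minimal Java sketch of the calculation, reading /proc/meminfo directly; the 80% threshold is just the rule of thumb quoted above, and on newer kernels the MemAvailable field is an even better basis when present.

      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Paths;
      import java.util.HashMap;
      import java.util.Map;

      public class MemCheck {
          public static void main(String[] args) throws IOException {
              // Parse /proc/meminfo into a map such as "MemTotal" -> value in kB.
              Map<String, Long> info = new HashMap<>();
              for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
                  String[] parts = line.split("\\s+");
                  info.put(parts[0].replace(":", ""), Long.parseLong(parts[1]));
              }

              long total   = info.get("MemTotal");
              long free    = info.get("MemFree");
              long buffers = info.getOrDefault("Buffers", 0L);
              long cached  = info.getOrDefault("Cached", 0L);

              // Page cache and buffers are reclaimable, so they do not count as
              // "used" for the purpose of the 80% rule of thumb.
              long effectivelyUsed = total - free - buffers - cached;
              double utilization = 100.0 * effectivelyUsed / total;

              System.out.printf("Memory utilization (excluding cache): %.1f%%%n", utilization);
              if (utilization > 80.0) {
                  System.out.println("Above the 80% rule-of-thumb threshold; worth investigating.");
              }
          }
      }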

    Read the article

  • Will client-side performance improve if images/scripts/styles are on different subdomains?

    - by Andrey
    Hi, I have a domain specifically for static content, so cookies don't travel along with requests to images/scripts/css. Now, I think I've read somewhere that most browsers only open a limited number of download connections to each domain/subdomain, so different static content can't be downloaded in parallel if it is on the same domain. Will it make a difference to browsers if I place scripts on script.mycdn.com, styles on css.mycdn.com and images on images.mycdn.com? Will it let the browser download images at the same time as scripts and styles? mycdn.com is of course a made-up name :) Thanks! Andrey

    Read the article

  • What is the performance impact of tracing in C# and ASP.NET?

    - by SkippyFire
    I found this in some production login code I was looking at recently...
      HttpContext.Current.Trace.Write(query + ": " + username + ", " + password);
    ...where query is a short SQL query to grab matching users. Does this have any sort of performance impact? I assume it's very small. Also, what is the purpose of this exact type of trace, using the HTTP context? Where does this data get traced to? Thanks in advance!

    Read the article

  • Are there ways to improve NHibernate's performance regarding entity instantiation?

    - by denny_ch
    Hi folks, while profiling NHibernate with NHProf I noticed that a lot of time is spent on entity building, or at least spent outside the query duration (the database round trip). The project I'm currently working on prefetches some static data (which goes into the 2nd level cache) at application start. There are about 3000 rows (and maybe 30 columns) in the result set, which is queried in 75 ms. The overall duration observed by NHProf is about 13 SECONDS! Is this typical behaviour? I know that NHibernate shouldn't be used for bulk operations, but I didn't think that entity instantiation would be so expensive. Are there ways to improve performance in such situations or do I have to live with it? Thx, denny_ch

    Read the article

  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time series data, 500M entries now and 200K new records every day. The total size is around 15 GB for now. My clients are querying the table mostly via a PHP script, and the size of the result set is around 10K records (not very large).
      select * from T where timestamp > X and timestamp < Y and additionalFilters
    And I want this operation to be cheap. Currently my table is hosted in Postgres 7, on a single 16 GB memory box, and I would love to see some good suggestions for hosting this at low cost while also allowing me to scale up for performance if needed. The table serves: 1. Query: 90% 2. Insert: 9.9% 3. Update: 0.1% <-- very rare.

    Read the article

  • How to test the performance of a user's PC in/for Flash?

    - by Jan P.
    Hey, I'm a developer on a nice space MMO using Flash. On new PCs performance is quite good, but some features shouldn't be enabled on older PCs because the framerate drops to shit if we do. Flash wasn't made for this, but hey, pushing boundaries is fun. An example is fullscreen mode. Of course every user can manually enable it, but "advertising" it to a user with an older PC would be a bad idea - but for the Alienware crowd it would be dumb not to. So I want to find out how "capable" a user's PC is, to decide whether I should enable or disable some features for him. Any ideas? Thanks, Sujan

    Read the article

  • When to trash hashmap contents to avoid performance degradation?

    - by Jack
    Hello, I'm working in Java with a large (millions of entries) HashMap that is built with a capacity of 10,000,000 and a load factor of .75. It is used to cache some values, since cached values become useless with time (not accessed anymore), but I can't remove stale ones along the way; instead I would like to entirely empty the cache when its performance starts to degrade. How can I decide when it's good to do it? For example, with a capacity of 10,000,000 and a load factor of .75, should I empty it when it reaches 7,500,000 elements? I have tried various threshold values, but I would like to have an analytical one. I've already tested the fact that emptying it when it's quite full is a boost for performance (the first 2-3 algorithm iterations after the wipe just fill it back, then it starts running faster than before the wipe). Thanks
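
    For reference, a minimal sketch of the two usual options: wiping the map once it crosses a size threshold, or letting a bounded LinkedHashMap evict least-recently-used entries so there is never a big wipe pause. The threshold here is just an assumed parameter, not an analytically derived value.

      import java.util.LinkedHashMap;
      import java.util.Map;

      public class CacheSketch {

          // Option 1: wipe the whole cache once it crosses a size threshold.
          static <K, V> void maybeWipe(Map<K, V> cache, int threshold) {
              if (cache.size() >= threshold) {
                  cache.clear();   // the next iterations re-populate it
              }
          }

          // Option 2: a bounded LRU cache - stale entries fall out on their own,
          // so there is never a "wipe everything" pause.
          static <K, V> Map<K, V> lruCache(int maxEntries) {
              return new LinkedHashMap<K, V>(maxEntries, 0.75f, true) {
                  @Override
                  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                      return size() > maxEntries;
                  }
              };
          }
      }

    The LRU variant trades the periodic wipe-and-refill cost for a small constant overhead on every access, which is often the smoother option when entries go stale by access pattern rather than by age.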

    Read the article

  • Does normalization really hurt performance in high traffic sites?

    - by Luke101
    I am designing a database and I would like to normalize it. In one query I will be joining about 30-40 tables. Will this hurt the website's performance if it ever becomes extremely popular? This will be the main query and it will be called 50% of the time. In the other queries I will be joining about 2 tables. I have a choice right now to normalize or not to normalize, but if the normalization becomes a problem in the future I may have to rewrite 40% of the software, and it may take me a long time. Does normalization really hurt in this case? Should I denormalize now while I have the time?

    Read the article

  • What are the performance characteristics of SignalR at scale?

    - by Joel Martinez
    I'm interested in the performance characteristics of SignalR at scale ... particularly, how it behaves at the fringes of its capability. When a server is at capacity, what happens? Does it drop messages? Do some clients not get notified? Are messages queued until all are delivered? And if so, will the queue eventually overflow and crash the server? I ask because conducting such a test myself would be impractical, and I'm hoping someone could point me to documentation speaking to this ... or perhaps someone who has seen how SignalR behaves at scale could comment. Thanks! Note: I'm familiar with this other Stack Overflow question on the stability and scalability of SignalR, but I believe I'm asking a slightly different question in that I'm not concerned with the theoretical scaling limits; I want to know how it behaves when it reaches those limits ... so I know what to be on the lookout for.

    Read the article

  • Is there a linear-time performance guarantee with using an Iterator?

    - by polygenelubricants
    If all that you're doing is a simple one-pass iteration (i.e. only hasNext() and next(), no remove()), are you guaranteed linear-time performance and/or amortized constant cost per operation? Is this specified in the Iterator contract anywhere? Are there data structures/Java Collections which cannot be iterated in linear time? java.util.Scanner implements Iterator<String>. A Scanner is hardly a data structure (e.g. remove() makes absolutely no sense). Is this considered a design blunder? Is something like PrimeGenerator implements Iterator<Integer> considered bad design, or is this exactly what Iterator is for? (hasNext() always returns true, next() computes the next number on demand, remove() makes no sense). Similarly, would it have made sense for java.util.Random to implement Iterator<Double>?
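
    For concreteness, here is a minimal sketch of the kind of generator-style Iterator the question describes: hasNext() is always true and next() computes each value on demand, so the cost per call depends on the computation rather than on any collection's iteration guarantees. This is an illustration of the question's own example, not a statement about what the Iterator contract promises.

      import java.util.Iterator;

      // An "infinite" Iterator: values are computed on demand rather than read
      // from a backing data structure.
      public class PrimeGenerator implements Iterator<Integer> {
          private int current = 1;

          @Override
          public boolean hasNext() {
              return true;                     // there is always a next prime
          }

          @Override
          public Integer next() {
              current++;
              while (!isPrime(current)) {
                  current++;
              }
              return current;
          }

          @Override
          public void remove() {
              throw new UnsupportedOperationException("remove() makes no sense here");
          }

          private static boolean isPrime(int n) {
              if (n < 2) return false;
              for (int d = 2; (long) d * d <= n; d++) {
                  if (n % d == 0) return false;
              }
              return true;
          }

          public static void main(String[] args) {
              PrimeGenerator primes = new PrimeGenerator();
              for (int i = 0; i < 5; i++) {
                  System.out.println(primes.next());   // 2, 3, 5, 7, 11
              }
          }
      }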

    Read the article

  • Performance Impact of Generating Hundreds of Dynamic Methods in Ruby?

    - by viatropos
    What are the performance issues associated with generating hundreds of dynamic methods in Ruby? I've been interested in using the Ruby Preferences gem and noticed that it generates a bunch of helper methods for each preference you set. For instance:
      class User < ActiveRecord::Base
        preference :hot_salsa
      end
    ...generates something like:
      user.prefers_hot_salsa? # => false
      user.prefers_hot_salsa  # => false
    If there are hundreds of preferences like this, how does this impact the application? I assume it's not really a big deal, but I'm just wondering, theoretically.

    Read the article

  • Why do debug symbols so adversely affect the performance of threaded applications on Linux?

    - by fluffels
    Hi. I'm writing a ray tracer. Recently, I added threading to the program to exploit the additional cores on my i5 quad core. In a weird turn of events, the debug version of the application is now running slower, but the optimized build is running faster than before I added threading. I'm passing the "-g -pg" flags to gcc for the debug build and the "-O3" flag for the optimized build. Host system: Ubuntu Linux 10.04 AMD64. I know that debug symbols add significant overhead to the program, but the relative performance has always been maintained, i.e. a faster algorithm will always run faster in both debug and optimized builds. Any idea why I'm seeing this behavior?

    Read the article

  • If I take a larger datatype than needed, will it affect performance in SQL Server?

    - by Shantanu Gupta
    If I take a larger datatype than I know is sufficient for the possible values that I will insert into a table, will it affect performance in SQL Server in terms of speed or in any other way? E.g. IsActive (0,1,2,3), never more than 3 in any case. I know I should use tinyint, but for reasons I have to treat as a compulsion, I am making every numeric field bigint and every character field nvarchar(max). Please give statistics if possible, to let me try to overcome that compulsion. I need some solid analysis that can really make someone rethink before choosing a datatype.

    Read the article

  • Can using non-primitive Integer/Long datatypes too frequently in the application hurt performance?

    - by Marcos
    I am using the Long/Integer data types very frequently in my application, to build generic datatypes. I fear that using these wrapper objects instead of primitive data types may hurt performance, since each use may need to create an object, which is an expensive operation. But it also seems that I have no other choice (when I have to use primitives with generics) than to use them. However, it would still be great if you could suggest whether there is anything I could do to make it better, or any way I could just avoid it. Also, what may be the downsides of this? Suggestions welcomed!
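
    As a rough, hedged illustration of the cost in question, the sketch below sums the same values into a primitive long and into a boxed Long accumulator. The boxed version has to unbox, add and re-box on every iteration, allocating a new object whenever the value falls outside the small cached range; the exact timings vary by JVM and this is not a rigorous benchmark.

      public class BoxingCost {
          public static void main(String[] args) {
              final int n = 50_000_000;

              long t0 = System.nanoTime();
              long primitiveSum = 0;             // plain long, no allocation
              for (int i = 0; i < n; i++) {
                  primitiveSum += i;
              }
              long t1 = System.nanoTime();

              Long boxedSum = 0L;                // every += unboxes, adds, re-boxes
              for (int i = 0; i < n; i++) {
                  boxedSum += i;
              }
              long t2 = System.nanoTime();

              System.out.printf("primitive: %d ms, boxed: %d ms (sums equal: %b)%n",
                      (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000,
                      primitiveSum == boxedSum.longValue());
          }
      }

    In practice the usual mitigations are keeping hot loops on primitives and boxing only at the generic boundary, or using a primitive-specialized collection library; whether the overhead matters depends on how hot the code path is.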

    Read the article

  • Improving the performance of an NHibernate Data Access Layer

    - by Amitabh
    I am working on improving the performance of the data access layer of an existing ASP.NET web application. The scenario is: It's a web-based application in ASP.NET. The data access layer is built using NHibernate 1.2 and exposed as a WCF service. The entity class is marked with DataContract. Lazy loading is not used, and because of the eager fetching of relations a huge number of database objects are loaded into memory. The number of hits to the database is also high. For example, I profiled the application using NHProf and there were about 50+ SQL calls to load one of the entity objects using the primary key. I also cannot change the code much, as it's an existing live application with no NUnit test cases at all. Can I please get some suggestions here?

    Read the article

  • Performance optimization for MSSQL: decrease stored procedure execution time or reduce the load on the server?

    - by tim
    Hello everybody! We have a web service which provides search over hotels. There is a problem with performance: a single request to the service takes around 5000 ms. Almost all of that time is spent in the database executing stored procedures. During the request our server (MSSQL 2008) consumes ~90% of the processor time. When 2 requests are made in parallel the average time grows to around 7000 ms. As the number of requests increases, the average response time increases as well. We have 20-30 requests per minute. Which kind of optimization is best in this case, keeping in mind that the goal is to provide a stable response time for the service: 1) try to decrease the stored procedures' execution time, or 2) try to find a way to reduce the load on the server? It would be interesting to hear from people who deal with booking sites. Thanks!

    Read the article

  • What's the most performance-effective way to have a WebBrowser inside a class library?

    - by Xaqron
    I'm developing a class library. I need some data from the internet, and this cannot be done with HttpWebRequest in my case, so I want to use the WebBrowser component. The WebBrowser is used to open a single page and fetch some data from it, so the WebBrowser's lifetime is very short. The running thread is MTA, and no message pump or STA thread is available by default (the class library is used by an ASP.NET application). How do I create a WebBrowser object, run it on an STA thread, fetch data from a web page and finally dispose of it with the least performance impact on the application? I just need the idea/concept and will find the details myself. Thanks guys

    Read the article

  • JavaScript: String Concatenation slow performance? Array.join('')?

    - by NickNick
    I've read that if I have a for loop, I should not use string concatenation because it's slow. Such as:
      for (i = 0; i < 10000000; i++) { str += 'a'; }
    And instead, I should use Array.join(), since it's much faster:
      var tmp = [];
      for (i = 0; i < 10000000; i++) { tmp.push('a'); }
      var str = tmp.join('');
    However, I have also read that string concatenation is ONLY a problem for Internet Explorer, and that browsers such as Safari/Chrome, which use WebKit, actually perform FASTER using string concatenation than Array.join(). I've attempted to find a performance comparison across all major browsers of string concatenation vs. Array.join() and haven't been able to find one. As such, which is faster and more efficient JavaScript code: string concatenation or Array.join()?

    Read the article

  • SQL SERVER – Finding Memory Pressure – External and Internal

    - by pinaldave
    The following query will provide details of external and internal memory pressure. It returns how much of the existing memory is assigned to each kind of memory type.
      SELECT TYPE,
             SUM(single_pages_kb) InternalPressure,
             SUM(multi_pages_kb) ExternalPressure
      FROM sys.dm_os_memory_clerks
      GROUP BY TYPE
      ORDER BY SUM(single_pages_kb) DESC, SUM(multi_pages_kb) DESC
      GO
    What is your method to find memory pressure? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article
