Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.

Page 115/537

  • Lots of pointer casts in QGraphicsView framework and performance

    - by kleimola
    Since most of the convenience functions of QGraphicsScene and QGraphicsItem (such as items(), collidingItems(), childItems() etc.) return a QList, you're forced to do lots of qgraphicsitem_cast or static_cast and QGraphicsItem::type() checks to get hold of the actual items when you have lots of different types of items in the scene. I thought doing lots of subclass casts was not a desirable coding style, but I guess in this case there is no other viable way, or is there?

        QList<QGraphicsItem *> itemsHit = someItem->collidingItems(Qt::IntersectsItemShape);
        foreach (QGraphicsItem *item, itemsHit) {
            if (item->type() == QGraphicsEllipseItem::Type) {
                QGraphicsEllipseItem *ellipse = qgraphicsitem_cast<QGraphicsEllipseItem *>(item);
                // do something
            } else if (item->type() == MyItemSubclass::Type) {
                MyItemSubclass *myItem = qgraphicsitem_cast<MyItemSubclass *>(item);
                // do something
            }
            // etc
        }

    The qgraphicsitem_cast above could be replaced by static_cast since the correct type is already verified. When doing lots of these all the time (very dynamic scene), will the numerous casts affect performance beyond the normal if-else evaluation?

    Read the article

  • Disk IO Performance Limitations based on numbers of folders/files

    - by Josh
    I have an application where users are allowed to upload images to the server. Our web server is a Windows Server 2008 machine, and we have a site (images.mysite.com) that points to a shared drive on a Unix box. The code used to do the uploading is C# 3.5. The system currently supports a workflow where, after a threshold is met, a new subfolder can be generated. The question we have is: how many files and/or subfolders can you have in a single folder before there is a degradation in performance, both in serving the images up through IIS 7 and in reading/writing through code?
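
    As an illustration of the threshold-based subfolder workflow mentioned above, here is a minimal C# sketch (the 1,000-file threshold, the GUID folder naming, and the helper names are assumptions for illustration, not the poster's code):

        // Sketch: spread uploaded files across subfolders so no single folder
        // grows past a chosen threshold. Names and threshold are hypothetical.
        using System;
        using System.IO;

        public static class UploadStore
        {
            private const int MaxFilesPerFolder = 1000; // assumed threshold

            public static string GetTargetFolder(string rootPath)
            {
                Directory.CreateDirectory(rootPath);
                foreach (var dir in Directory.GetDirectories(rootPath))
                {
                    if (Directory.GetFiles(dir).Length < MaxFilesPerFolder)
                        return dir; // reuse a folder that still has room
                }
                // all existing folders are full (or none exist yet): create a new one
                string newDir = Path.Combine(rootPath, Guid.NewGuid().ToString("N"));
                Directory.CreateDirectory(newDir);
                return newDir;
            }

            public static string Save(string rootPath, string fileName, byte[] content)
            {
                string target = Path.Combine(GetTargetFolder(rootPath), fileName);
                File.WriteAllBytes(target, content);
                return target;
            }
        }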

    Read the article

  • Performance improvement to a big if clause in SQL Server function

    - by Miles D
    I am maintaining a function in SQL Server 2005 that, based on an integer input parameter, needs to call different functions, e.g.

        IF @rule_id = 1
            -- execute function 1
        ELSE IF @rule_id = 2
            -- execute function 2
        ELSE IF @rule_id = 3
            -- ... etc

    The problem is that there are a fair few rules (about 100), and although the above is fairly readable, its performance isn't great. At the moment it's implemented as a series of IFs that do a binary chop, which is much faster, but becomes fairly unpleasant to read and maintain. Any alternative ideas for something that performs well and is fairly maintainable?

    Read the article

  • Performance side effect with static internal Util classes?

    - by Fostah
    For a util class that contains a bunch of static functionality that's related to the same component, but has different purposes, I like to use static internal classes to organize the functionality, like so:

        class ComponentUtil {
            static class Layout {
                static int calculateX(/* ... */) {
                    // ...
                }
                static int calculateY(/* ... */) {
                    // ...
                }
            }
            static class Process {
                static int doThis(/* ... */) {
                    // ...
                }
                static int doThat(/* ... */) {
                    // ...
                }
            }
        }

    Is there any performance degradation using these internal classes vs. just having all the functionality in the Util class?

    Read the article

  • Simple performance testing tool in C#?

    - by Tomas
    Hi, first of all: I need to do this as a university project, so I am not interested in using existing tools. I would like to know whether it is even possible to write a very simple tool that I could use for performance testing of web applications. It would only record actions (I don't know, maybe just packet sniffing?) and then replay them. While I have a basic idea (record packets on port 80 and send them again), I do not know how to measure the time for each transaction, as they are not differentiated. Any help is greatly appreciated, thank you!
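
    For the replay-and-measure half of the problem, a minimal C# sketch of one possible approach: replay a list of recorded URLs and time each request with Stopwatch. The URL list and overall structure are assumptions; the recording side (packet capture or a proxy) is not shown.

        // Sketch: replay recorded HTTP GET requests and time each one.
        using System;
        using System.Diagnostics;
        using System.Net;

        class ReplayTimer
        {
            static void Main()
            {
                // hypothetical list of previously recorded request URLs
                string[] recorded = { "http://localhost/page1", "http://localhost/page2" };

                foreach (string url in recorded)
                {
                    var sw = Stopwatch.StartNew();
                    var request = (HttpWebRequest)WebRequest.Create(url);
                    using (var response = (HttpWebResponse)request.GetResponse())
                    using (var reader = new System.IO.StreamReader(response.GetResponseStream()))
                    {
                        reader.ReadToEnd(); // drain the body so the full transaction is timed
                    }
                    sw.Stop();
                    Console.WriteLine("{0} took {1} ms", url, sw.ElapsedMilliseconds);
                }
            }
        }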

    Read the article

  • Performance: Subquery or Joining

    - by Auro
    Hello, I have a little question about the performance of a subquery vs. joining another table:

        INSERT INTO Original.Person ( PID, Name, Surname, SID )
        (
            SELECT ma.PID_new, TBL.Name, ma.Surname, TBL.SID
            FROM Copy.Person TBL, original.MATabelle MA
            WHERE TBL.PID = p_PID_old
              AND TBL.PID = MA.PID_old
        );

    This is my SQL, and this thing runs around 1 million times or more. Now my question is: what would be faster? If I change TBL.SID to (SELECT new FROM helptable WHERE old = TBL.SID), or if I add helptable to the FROM clause and do the joining in the WHERE? Greets, Auro

    Read the article

  • C++ performance when accessing class members

    - by Dr. Acula
    I'm writing something performance-critical and wanted to know if it could make a difference if I use:

        int test( int a, int b, int c ) {
            // Do millions of calculations with a, b, c
        }

    or

        class myStorage {
        public:
            int a, b, c;
        };

        int test( myStorage values ) {
            // Do millions of calculations with values.a, values.b, values.c
        }

    Does this basically result in similar code? Is there an extra overhead of accessing the class members? I'm sure that this is clear to an expert in C++, so I won't try and write an unrealistic benchmark for it right now.

    Read the article

  • SqlCeCommand ExecuteNonQuery performance issue

    - by Michael
    I've been asked to resolve an issue with a .NET/SQL Server CE application. Specifically, after repeated inserts against the db, performance becomes increasingly degraded: in one instance at ~200 rows, in another at ~1000 rows. In the latter case the code being used looks like this:

        Dim cm1 As System.Data.SqlServerCe.SqlCeCommand = cn1.CreateCommand
        cm1.CommandText = "INSERT INTO Table1 Values(?,?,?,?,?,?,?,?,?,?,?,?,?)"
        For j = 0 To ds.Tables(0).Rows.Count - 1 'this is 3110
            For i = 0 To 12
                cm1.Parameters(tbl(i, 0)).Value = Vals(j, i) 'values taken from a different db
            Next
            cm1.ExecuteNonQuery()
        Next

    The specifics aren't super important (like what 'tbl' is, etc.), but rather whether or not this code should be expected to handle this number of inserts, or if the crawl I'm witnessing is to be expected.
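
    One commonly suggested way to speed up repeated SqlCe inserts is to reuse a single prepared command inside one transaction. Below is a hedged C# sketch of that pattern; the table layout, column types, and parameter names are assumptions for illustration, not the application's actual schema.

        // Sketch: batched SqlCe inserts with a prepared, reused command.
        using System.Data;
        using System.Data.SqlServerCe;

        static class SqlCeBulkInsert
        {
            public static void Run(SqlCeConnection cn, DataTable source)
            {
                using (SqlCeTransaction tx = cn.BeginTransaction())
                using (SqlCeCommand cmd = cn.CreateCommand())
                {
                    cmd.Transaction = tx;
                    cmd.CommandText = "INSERT INTO Table1 VALUES (@p0, @p1, @p2)"; // assumed 3 columns
                    cmd.Parameters.Add("@p0", SqlDbType.Int);
                    cmd.Parameters.Add("@p1", SqlDbType.NVarChar, 100);
                    cmd.Parameters.Add("@p2", SqlDbType.DateTime);
                    cmd.Prepare(); // compile the plan once, reuse it for every row

                    foreach (DataRow row in source.Rows)
                    {
                        cmd.Parameters["@p0"].Value = row[0];
                        cmd.Parameters["@p1"].Value = row[1];
                        cmd.Parameters["@p2"].Value = row[2];
                        cmd.ExecuteNonQuery();
                    }
                    tx.Commit(); // one commit instead of one per row
                }
            }
        }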

    Read the article

  • Performance when accessing class members

    - by Dr. Acula
    I'm writing something performance-critical and wanted to know if it could make a difference if I use:

        int test( int a, int b, int c ) {
            // Do millions of calculations with a, b, c
        }

    or

        class myStorage {
        public:
            int a, b, c;
        };

        int test( myStorage values ) {
            // Do millions of calculations with values.a, values.b, values.c
        }

    Does this basically result in similar code? Is there an extra overhead of accessing the class members? I'm sure that this is clear to an expert in C++, so I won't try and write an unrealistic benchmark for it right now.

    Read the article

  • Performance effect of using print statements in Python script

    - by Sudar
    I have a Python script that processes a huge text file (around 4 million lines) and writes the data into two separate files. I have added a print statement which outputs a string for every line, for debugging. I want to know how bad this could be from a performance perspective. If it is going to be very bad, I can remove the debugging line. Edit: It turns out that having a print statement for every line in a file with 4 million lines increases the run time far too much.

    Read the article

  • C++ STL: Array vs Vector: Raw element accessing performance

    - by oh boy
    I'm building an interpreter, and as I'm aiming for raw speed this time, every clock cycle matters for me in this (raw) case. Do you have any experience or information on which of the two is faster: vector or array? All that matters is the speed at which I can access an element (opcode fetching); I don't care about inserting, allocation, sorting, etc. I'm going to lean myself out of the window now and say: arrays are at least a bit faster than vectors in terms of accessing an element i. It seems really logical to me: with vectors you have all that safety and bounds-checking overhead which doesn't exist for arrays. (Why) am I wrong? No, I can't ignore the performance difference - even if it is so small - I have already optimized and minimized every other part of the VM which executes the opcodes :)

    Read the article

  • Performance: Subquery or Joining

    - by Auro
    Hello, I have a little question about the performance of a subquery vs. joining another table:

        INSERT INTO Original.Person ( PID, Name, Surname, SID )
        (
            SELECT ma.PID_new, TBL.Name, ma.Surname, TBL.SID
            FROM Copy.Person TBL, original.MATabelle MA
            WHERE TBL.PID = p_PID_old
              AND TBL.PID = MA.PID_old
        );

    This is my SQL, and this thing runs around 1 million times or more. Now my question is: what would be faster? If I change TBL.SID to (SELECT new FROM helptable WHERE old = TBL.SID), or if I add helptable to the FROM clause and do the joining in the WHERE? Greets, Auro

    Read the article

  • Does async and await increase performance of an ASP.NET application

    - by Kerezo
    I recently read an article about C# 5 and its new and nice asynchronous programming support. I see it works great in Windows applications. The question that came to me is whether this feature can increase ASP.NET performance. Consider this code:

        public T GetData()
        {
            var d = GetSomeData();
            return d;
        }

    and

        public async Task<T> GetData2()
        {
            var d = await GetSomeData();
            return d;
        }

    Is there any difference between these two in an ASP.NET application? Thanks
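
    For context, a minimal sketch of what a genuinely awaitable version typically looks like (the URL and the HttpClient usage are illustrative assumptions): an await does not make the work itself faster, but it releases the request thread back to the pool while the I/O is in flight, which can improve throughput under load.

        // Sketch: a truly asynchronous data fetch; the request thread is freed
        // while the HTTP call is outstanding instead of blocking on it.
        using System.Net.Http;
        using System.Threading.Tasks;

        public class DataClient
        {
            private static readonly HttpClient client = new HttpClient();

            public async Task<string> GetData2Async()
            {
                // hypothetical backend endpoint
                string d = await client.GetStringAsync("http://backend.example.com/data");
                return d;
            }
        }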

    Read the article

  • Does 'throw' or 'try...catch' hinder performance?

    - by Richard
    I've been reading all over the place (including here) about when exceptions should / shouldn't be used. I now want to change my code that would throw so that the method returns false instead and handles it like that, but my question is: is it the throwing or the try...catch-ing that can hinder performance? What I mean is, would this be acceptable:

        bool SomeMethod()
        {
            try
            {
                // ...Do something
            }
            catch (Exception ex) // Don't care too much what, at the moment...
            {
                // Output error
                return false;
            }
            return true; // No errors
        }

    Or would there be a better way to do it? (I'm bloody sick of seeing "Unhandled exception..." LOL!)
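
    One way to answer this empirically is a quick Stopwatch comparison. The sketch below uses int.Parse/int.TryParse as stand-ins for "an operation that can fail"; it is an illustration, not a definitive benchmark.

        // Sketch: rough comparison of exception cost vs. a boolean return path.
        using System;
        using System.Diagnostics;

        class ExceptionCostDemo
        {
            static void Main()
            {
                const int iterations = 100000;
                int value;

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                {
                    int.TryParse("not a number", out value); // failure reported via return value
                }
                sw.Stop();
                Console.WriteLine("return-code path: {0} ms", sw.ElapsedMilliseconds);

                sw.Restart();
                for (int i = 0; i < iterations; i++)
                {
                    try { int.Parse("not a number"); }           // failure reported via exception
                    catch (FormatException) { /* swallowed for the benchmark */ }
                }
                sw.Stop();
                Console.WriteLine("throw/catch path: {0} ms", sw.ElapsedMilliseconds);
            }
        }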

    Read the article

  • Performance problem on a query.

    - by yapiskan
    Hi, I have a performance problem with a query. The first table is a Customer table which has millions of records in it. The Customer table has a column for email address and some other information about the customer. The second table is a CommunicationInfo table which contains just email addresses. What I want here is: how many times does each email address in the CommunicationInfo table appear in the Customer table? What would be the best-performing query? The basic query that explains the situation is:

        SELECT ci.Email, COUNT(*)
        FROM Customer c
        LEFT JOIN CommunicationInfo ci
            ON c.Email1 = ci.Email OR c.Email2 = ci.Email
        GROUP BY ci.Email

    But sure enough, it takes about 5 or 6 minutes to execute. Thanks in advance.

    Read the article

  • Slow performance of query

    - by user642378
    Hi, I asked about the performance of this query before and I have tried to simplify it, but it still doesn't perform well. I am adding my query below; could you please help me simplify it more effectively?

        select r.parent_itemid f_id, parent_item.name f_name, parent_item.typeid f_typeid,
               parent_item.ownerid f_ownerid, parent_item.created f_created,
               parent_item.modifiedby f_modifiedby, parent_item.modified f_modified,
               pt.name f_tname, child_item.id i_id, t.name i_tname, child_item.typeid i_typeid,
               child_item.name i_name, child_item.ownerid i_ownerid, child_item.created i_created,
               child_item.modifiedby i_modifiedby, child_item.modified i_modified,
               r.ordinal i_ordinal
        from item child_item, type t, relation r, item parent_item, type pt
        where r.child_itemid = child_item.id
          and t.id = child_item.typeid
          and parent_item.id = r.parent_itemid
          and pt.id = parent_item.typeid
          and parent_item.id in (
              select itemid from permission
              where itemid = parent_item.id
                and (holder_itemid in (10, 100) and level > 0)
          )
        order by r.parent_itemid, r.relation_typeid, r.ordinal

    Thank you, regards, Jennie

    Read the article

  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction

    The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising; it's an incredibly complicated beast and we're abstracted away from that complexity via some boxes that go yellow, red or green and that have some lines drawn between them. [Screenshot: example dataflow.]

    In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input, all produce the same output, but which all operate slightly differently by way of having different transformation components.

    I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum: that people assume adding components to a dataflow will be detrimental to overall performance. It's not surprising that people think this (it is intuitive to think that more components means more work), however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that, I have never proven that assertion and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I'll happily call that one out as a myth even without any investigation!

    The Setup

    I have a 2GB datafile which is a list of 4731904 (~4.7 million) customer records with various attributes against them, and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] and [BirthDate]. The data file is an SSIS raw format file, which I chose to use because it is the quickest way of getting data into a dataflow; given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible.

    In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories, and in order to do it I shall be using different combinations of SSIS' Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical procs) Intel Core i7 at 1.73GHz and a Samsung SSD hard drive. It's running SQL Server 2008 R2 on Windows 7.

    The Variables

    Here are the three combinations of components that I am going to test:

    1. One Conditional Split - A single Conditional Split component, CSPL Split by Month of Birth and Income Category, that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. [Screenshot: the expression logic in use.]

    2. Derived Column & Conditional Split - A Derived Column component, DER Income Category, that adds a new column [IncomeCategory] which will contain one of two possible text values {"LessThan50000","GreaterThan50000"} and uses [YearlyIncome] to determine which value each row should get.
    A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. [Screenshots: the expression logic in use for DER Income Category and for CSPL Split by Month of Birth and Income Category.]

    3. Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each Income Category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case, then, I am separating the single Conditional Split of #1 into three Conditional Split components. [Screenshots: the expression logic in use for CSPL Split by Income Category and for CSPL Split by Month of Birth 1 & 2.]

    Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. For illustration, here is a screenshot of the dataflow containing three Conditional Split components. As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes:

    - The dataflow containing just one Conditional Split (i.e. #1) will be quicker
    - There is no significant difference between any of them
    - One of the two dataflows containing multiple transformation components will be quicker

    Regardless of which of those outcomes comes to pass, we will have learnt something, and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS.

    The Results and Analysis

    The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I'm pasting a screenshot because I frankly can't be bothered with the faffing about needed to make a presentable HTML table. It is plain to see from the average that the dataflow containing three Conditional Splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it, and I'll explain that below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller, indicating that performance for this dataflow can be predicted with much greater confidence too.

    The Explanation

    I refer you to the screenshot above that shows how CSPL Split by Month of Birth and Income Category in the first dataflow is set up. Observe that there is a case for each combination of Month of Birth and Income Category, 24 in total. These expressions get evaluated in the order that they appear, and hence if we assume that Month of Birth and Income Category are uniformly distributed in the dataset, we can deduce that the expected number of expression evaluations for each row is 12.5, i.e. (1 (the minimum) + 24 (the maximum)) divided by 2 = 12.5. Now take a look at the screenshots for the second dataflow.
    We are doing one expression evaluation in DER Income Category, and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before; only the expression differs slightly. In this case, then, we have 1 + 12.5 = 13.5 expected evaluations for each row, and that would account for the slightly longer average execution time for this dataflow.

    Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 and CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5. 14.5 is still more than 12.5 and 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate, one for Income Category and one for Month of Birth; the expressions in the Conditional Split in the third dataflow, however, only have one predicate, thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between:

        MONTH(BirthDate) == 1 && YearlyIncome <= 50000

    and

        MONTH(BirthDate) == 1

    In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant amount of extra CPU cycles; that's where our duration difference comes from.

    The Wrap-up

    The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean using fewer components; indeed, sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results; let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS.

    If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out:

    - Inequality joins, Asynchronous transformations and Lookups
    - Destination Adapter Comparison
    - Don't turn the dataflow into a cursor
    - SSIS Dataflow - Designing for performance (webinar)

    Any comments? Let me know! @Jamiet

    Read the article

  • DataView.RowFilter Vs DataTable.Select() vs DataTable.Rows.Find()

    - by Aseem Gautam
    Considering the code below:

        DataView someView = new DataView(sometable);
        someView.RowFilter = someFilter;
        if (someView.Count > 0)
        {
            // ...
        }

    Quite a number of articles say DataTable.Select() is better than using DataViews, but these are prior to VS2008: "Solved: The Mystery of DataView's Poor Performance with Large Recordsets" and "Array of DataRecord vs. DataView: A Dramatic Difference in Performance". Googling on this topic, I found some articles/forum topics which mention that DataTable.Select() itself is quite buggy (not sure on this) and underperforms in various scenarios. In this topic (Best Practices ADO.NET) on MSDN it is suggested that if there is a primary key defined on a DataTable, the FindRows() or Find() methods should be used instead of DataTable.Select(). This article here (.NET 1.1) benchmarks all three approaches plus a couple more, but as it targets version 1.1 I am not sure whether its results are still valid. According to it, DataRowCollection.Find() outperforms all approaches and DataTable.Select() outperforms DataView.RowFilter. So I am quite confused about what might be the best approach for finding rows in a DataTable. Or is there no single good way to do this, and multiple solutions exist depending upon the scenario?
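
    For reference, a small sketch of the three lookup styles being compared; the table name, columns, and filter values are made up for illustration.

        // Sketch: DataView.RowFilter vs. DataTable.Select vs. DataRowCollection.Find.
        using System;
        using System.Data;

        class RowLookupDemo
        {
            static void Main()
            {
                var table = new DataTable("Customers");
                table.Columns.Add("Id", typeof(int));
                table.Columns.Add("Name", typeof(string));
                table.PrimaryKey = new[] { table.Columns["Id"] }; // required for Rows.Find
                table.Rows.Add(1, "Alice");
                table.Rows.Add(2, "Bob");

                // 1) DataView.RowFilter
                var view = new DataView(table) { RowFilter = "Name = 'Bob'" };
                Console.WriteLine("RowFilter matches: " + view.Count);

                // 2) DataTable.Select
                DataRow[] selected = table.Select("Name = 'Bob'");
                Console.WriteLine("Select matches: " + selected.Length);

                // 3) DataRowCollection.Find (primary-key lookup)
                DataRow found = table.Rows.Find(2);
                Console.WriteLine("Find by key: " + (found != null ? found["Name"] : "none"));
            }
        }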

    Read the article

  • How do I use PerformanceCounterType AverageTimer32?

    - by Patrick J Collins
    I'm trying to measure the time it takes to execute a piece of code on my production server. I'd like to monitor this information in real time, so I decided to give Performance Analyser a whizz. I understand from MSDN that I need to create both an AverageTimer32 and an AverageBase performance counter, which I duly have. I increment the counters in my program, and I can see the CallCount go up and down, but the AverageTime is always zero. What am I doing wrong? Thanks! Here's a snippet of the code:

        long init_call_time = Environment.TickCount;

        // ***
        // Lots and lots of code...
        // ***

        // Count number of calls
        PerformanceCounter perf = new PerformanceCounter("Cat", "CallCount", "Instance", false);
        perf.Increment();
        perf.Close();

        // Count execution time
        PerformanceCounter perf2 = new PerformanceCounter("Cat", "CallTime", "Instance", false);
        perf2.NextValue();
        perf2.IncrementBy(Environment.TickCount - init_call_time);
        perf2.Close();

        // Average base for execution time
        PerformanceCounter perf3 = new PerformanceCounter("Cat", "CallTimeBase", "Instance", false);
        perf3.Increment();
        perf3.Close();

        perf2.NextValue();
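
    For comparison, a sketch of the pattern MSDN describes for AverageTimer32: feed the timer counter deltas measured in high-resolution ticks (e.g. from Stopwatch.GetTimestamp), increment the base counter once per operation, and keep the counter instances alive rather than recreating and closing them on every call. The category, counter, and instance names below mirror the question and are assumptions.

        // Sketch: typical AverageTimer32 usage - timer gets tick deltas,
        // base gets one increment per timed operation, counters are reused.
        using System;
        using System.Diagnostics;

        class CallTimer
        {
            // create once and keep; recreating/closing per call is expensive
            private readonly PerformanceCounter callTime =
                new PerformanceCounter("Cat", "CallTime", "Instance", false);     // AverageTimer32
            private readonly PerformanceCounter callTimeBase =
                new PerformanceCounter("Cat", "CallTimeBase", "Instance", false); // AverageBase

            public void TimeCall(Action work)
            {
                long start = Stopwatch.GetTimestamp();   // high-resolution ticks
                work();
                long elapsedTicks = Stopwatch.GetTimestamp() - start;

                callTime.IncrementBy(elapsedTicks);      // numerator: total ticks
                callTimeBase.Increment();                // denominator: number of operations
            }
        }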

    Read the article

  • How to get decent MySQL driver performance in Ruby

    - by Zombies
    I notice that I am getting very poor performance for inserts, queries, or both. The queries themselves are basic and execute with no delay directly from MySQL. The Ruby script that I wrote uses only one thread, so only one connection is being used, and it is never closed unless the script is terminated. Pretty basic: I am just trying to insert a lot of rows. There is a look-up or two to get a surrogate key, or to check for duplicates, but the complexity is just O(n). Also, it isn't like there are millions of records, so again the queries themselves take no time to run. I am using: Ruby 1.9.1, gem/driver ruby-mysql 2.9.2, MySQL 5.1.37-1ubuntu5.1 (all 32-bit versions on a 32-bit Ubuntu distro). I am getting about 1-2 inserts per second, which is pretty slow. I know a lot of people will suggest changing drivers, but that means I have some refactoring and retesting to do. So I would really appreciate any help, but if you do recommend that, please at least say why (e.g. if you have used ruby-mysql x.x.x before and found another MySQL driver to be better). What I would like to know: how can I improve performance with ruby-mysql 2.9.2, and if and only if I cannot do this with ruby-mysql 2.9.2, what should I do?

    Read the article

  • Will fixed-point arithmetic be worth my trouble?

    - by Thomas
    I'm working on a fluid dynamics Navier-Stokes solver that should run in real time. Hence, performance is important. Right now, I'm looking at a number of tight loops that each account for a significant fraction of the execution time: there is no single bottleneck. Most of these loops do some floating-point arithmetic, but there's a lot of branching in between. The floating-point operations are mostly limited to additions, subtractions, multiplications, divisions and comparisons. All this is done using 32-bit floats. My target platform is x86 with at least SSE1 instructions. (I've verified in the assembler output that the compiler indeed generates SSE instructions.) Most of the floating-point values that I'm working with have a reasonably small upper bound, and precision for near-zero values isn't very important. So the thought occurred to me: maybe switching to fixed-point arithmetic could speed things up? I know the only way to be really sure is to measure it, but that might take days, so I'd like to know the odds of success beforehand. Fixed-point was all the rage back in the days of Doom, but I'm not sure where it stands anno 2010. Considering how much silicon is nowadays pumped into floating-point performance, is there a chance that fixed-point arithmetic will still give me a significant speed boost? Does anyone have any real-world experience that may apply to my situation?
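
    Purely for illustration, a minimal Q16.16 fixed-point sketch, written in C# to match the other examples on this page; the same integer arithmetic translates directly to C++. Whether this actually beats SSE floating point on a modern x86 is exactly the open question here, so treat it as a representation sketch rather than a recommendation.

        // Sketch: Q16.16 fixed-point (16 integer bits, 16 fractional bits).
        public static class Fix16
        {
            public const int One = 1 << 16;

            public static int FromFloat(float f) => (int)(f * One);
            public static float ToFloat(int x)   => (float)x / One;

            // widen to 64 bits so the intermediate product doesn't overflow
            public static int Mul(int a, int b) => (int)(((long)a * b) >> 16);
            public static int Div(int a, int b) => (int)(((long)a << 16) / b);

            public static int Add(int a, int b) => a + b;   // plain integer add
            public static int Sub(int a, int b) => a - b;   // plain integer subtract
        }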

    Read the article

  • Using Oracle hint "FIRST_ROWS" to improve Oracle database performance

    - by bobetko
    I have a statement that runs on an Oracle database server. The statement has about 5 joins and there is nothing unusual there. It looks pretty much like the one below:

        SELECT field1, field2, field3, ...
        FROM table1, table2, table3, table4, table5
        WHERE table1.id = table2.id
          AND table2.id = table3.id
          AND ...
              table5.userid = 1

    The problem (and what is interesting) is that the statement for userid = 1 takes 1 second to return 590 records, while the statement for userid = 2 takes around 30 seconds to return 70 records. I don't understand why the difference is so big. It seems that a different execution plan is chosen for the statement with userid = 1 than for userid = 2. After I added the Oracle hint FIRST_ROWS, performance became significantly better: both statements (for both ids 1 and 2) now return in under 1 second.

        SELECT /*+ FIRST_ROWS */ field1, field2, field3, ...
        FROM table1, table2, table3, table4, table5
        WHERE table1.id = table2.id
          AND table2.id = table3.id
          AND ...
              table5.userid = 1

    Questions:
    1) What are possible reasons for bad performance when userid = 2 (when the hint is not used)?
    2) Why would the execution plan be different for one statement vs. the other (when the hint is not used)?
    3) Is there anything that I should be careful about when deciding to add this hint to my queries?
    Thanks

    Read the article

  • Optimizing code using PIL

    - by freakazo
    Firstly, sorry for the long piece of code pasted below. This is my first time actually having to worry about the performance of an application, so I haven't really ever worried about performance before. This piece of code pretty much searches for an image inside another image. It takes 30 seconds to run on my computer; converting the images to greyscale and other changes shaved off 15 seconds, and I need another 15 shaved off. I did read a bunch of pages and looked at examples, but I couldn't find the same problems in my code, so any help would be greatly appreciated. From the looks of it (cProfile), 25 seconds are spent within the Image module, and only 5 seconds in my code.

        from PIL import Image
        import os, ImageGrab, pdb, time, win32api, win32con
        import cProfile

        def GetImage(name):
            name = name + '.bmp'
            try:
                print(os.path.join(os.getcwd(), "Images", name))
                image = Image.open(os.path.join(os.getcwd(), "Images", name))
            except:
                print('error opening image;', name)
            return image

        def Find(name):
            image = GetImage(name)
            imagebbox = image.getbbox()
            screen = ImageGrab.grab()
            #screen = Image.open(os.path.join(os.getcwd(),"Images","Untitled.bmp"))
            YLimit = screen.getbbox()[3] - imagebbox[3]
            XLimit = screen.getbbox()[2] - imagebbox[2]
            image = image.convert("L")
            Screen = screen.convert("L")
            Screen.load()
            image.load()
            #print(XLimit, YLimit)
            Found = False
            image = image.getdata()
            for y in range(0, YLimit):
                for x in range(0, XLimit):
                    BoxCoordinates = x, y, x + imagebbox[2], y + imagebbox[3]
                    ScreenGrab = screen.crop(BoxCoordinates)
                    ScreenGrab = ScreenGrab.getdata()
                    if image == ScreenGrab:
                        Found = True
                        #print("woop")
                        return x, y
            if Found == False:
                return "Not Found"

        cProfile.run('print(Find("Login"))')

    Read the article

  • Java repaint is slow under certain conditions.

    - by Gabriel A. Zorrilla
    I'm doing a simple grid in which each square is highlighted by the cursor. There are a couple of JPanels, mapgrid and overlay, inside a JLayeredPane, with mapgrid on the bottom. Mapgrid just draws the grid on initialization; its paint method is:

        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2d = (Graphics2D) g;
            g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
            for (int i = 0; i < h; i++) {
                for (int j = 0; j < w; j++) {
                    g2d.setColor(new Color(128, 128, 128, 255));
                    g2d.drawRect(tileSize * j, i * tileSize, tileSize, tileSize);
                }
            }
        }

    The overlay JPanel is where the highlighting occurs; this is what is repainted when the mouse is moved:

        public void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2d = (Graphics2D) g;
            g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
            g2d.setColor(new Color(255, 255, 128, 255));
            g2d.drawRect((pointerX / tileSize) * tileSize, (pointerY / tileSize) * tileSize, tileSize, tileSize);
        }

    I noticed that even though the base layer (mapgrid) is NOT repainted when the mouse moves, just the transparent overlay layer, the performance is lacking. If I give the overlay JPanel a background, it's way faster. If I remove the mapgrid antialiasing, it's a bit faster too. I don't know why giving a background to the overlay layer (and thus hiding the mapgrid) or disabling antialiasing in the mapgrid leads to much better performance. Is there a better way to do this? Why does this happen?

    Read the article

  • Java Collections and Garbage Collector

    - by Anth0
    A little question regarding performance in a Java web app. Let's assume I have a List<Rubrique> listRubriques with ten Rubrique objects. A Rubrique contains one list of products (List<Product> listProducts) and one list of clients (List<Client> listClients). What exactly happens in memory if I do this:

        listRubriques.clear();
        listRubriques = null;

    My point of view would be that, since listRubriques is empty, all the objects previously referenced by this list (including listProducts and listClients) will be garbage collected pretty soon. But since collections in Java are a little bit tricky, and since I have some performance issues with my app, I'm asking the question :) Edit: let's assume now that my Client object contains a List<Client>. Therefore, I have a kind of circular reference between my objects. What would happen then if my listRubriques is set to null? This time, my point of view would be that my Client objects will become "unreachable" and might create a memory leak?

    Read the article
