Search Results

Search found 31421 results on 1257 pages for 'software performance'.

Page 233/1257 | < Previous Page | 229 230 231 232 233 234 235 236 237 238 239 240  | Next Page >

  • Performance: Subquery or Joining

    - by Auro
    Hello, I have a little question about the performance of a subquery versus joining another table: INSERT INTO Original.Person ( PID, Name, Surname, SID ) ( SELECT ma.PID_new, TBL.Name, ma.Surname, TBL.SID FROM Copy.Person TBL, original.MATabelle MA WHERE TBL.PID = p_PID_old AND TBL.PID = MA.PID_old ); This is my SQL, and this statement runs around 1 million times or more. My question is: what would be faster, changing TBL.SID to (SELECT new FROM helptable WHERE old = TBL.SID), or adding helptable to the FROM clause and doing the join in the WHERE clause? Greets, Auro

    Read the article

  • C++ performance when accessing class members

    - by Dr. Acula
    I'm writing something performance-critical and wanted to know if it could make a difference if I use: int test( int a, int b, int c ) { // Do millions of calculations with a, b, c } or class myStorage { public: int a, b, c; }; int test( myStorage values ) { // Do millions of calculations with values.a, values.b, values.c } Does this basically result in similar code? Is there an extra overhead of accessing the class members? I'm sure that this is clear to an expert in C++ so I won't try and write an unrealistic benchmark for it right now

    Read the article

  • Simple performance testing tool in C#?

    - by Tomas
    Hi, first of all: I need to do this as a university project, so I am not interested in using existing tools. I would like to know whether it is even possible to write a very simple tool that I could use for performance testing of web applications. It would only record actions (I do not know, maybe just packet sniffing?) and then replay them. Although I have a basic idea (record packets on port 80 and send them again), I do not know how to measure the time for each transaction, as they are not differentiated. Any help is greatly appreciated, thank you!
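
    For context, a minimal C# sketch of the kind of measurement such a tool could be built around: timing a single HTTP round-trip with a Stopwatch. The URL and the idea of replaying one recorded request at a time are illustrative assumptions, not details from the question.

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Net;

        class RequestTimer
        {
            // Replays one recorded request and returns the elapsed wall-clock time.
            static TimeSpan TimeRequest(string url)
            {
                var watch = Stopwatch.StartNew();
                var request = (HttpWebRequest)WebRequest.Create(url);
                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    reader.ReadToEnd();   // drain the body so the whole transaction is measured
                }
                watch.Stop();
                return watch.Elapsed;
            }

            static void Main()
            {
                // Hypothetical target; a real recorder would take this from the captured traffic.
                Console.WriteLine("Took: " + TimeRequest("http://localhost/app/page.aspx"));
            }
        }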

    Read the article

  • SqlCeCommand ExecuteNonQuery performance issue

    - by Michael
    I've been asked to resolve an issue with a .NET/SqlServerCe application. Specifically, after repeated inserts against the db, performance becomes increasingly degraded: in one instance at ~200 rows, in another at ~1000 rows. In the latter case the code being used looks like this: Dim cm1 As System.Data.SqlServerCe.SqlCeCommand = cn1.CreateCommand cm1.CommandText = "INSERT INTO Table1 Values(?,?,?,?,?,?,?,?,?,?,?,?,?)" For j = 0 To ds.Tables(0).Rows.Count - 1 'this is 3110 For i = 0 To 12 cm1.Parameters(tbl(i, 0)).Value = Vals(j,i) 'values taken from a different db Next cm1.ExecuteNonQuery() Next The specifics (like what 'tbl' is, etc.) aren't super important; what I want to know is whether this code should be expected to handle this number of inserts, or whether the crawl I'm witnessing is to be expected.
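
    Not the asker's code, but for comparison, a hedged C# sketch of one commonly suggested shape for repeated inserts against SQL Server Compact: a single parameterised command reused inside an explicit transaction, so the engine is not asked to commit after every row. The table name, column count and parameter types below are placeholders.

        using System.Data;
        using System.Data.SqlServerCe;

        static class BulkLoader
        {
            public static void InsertAll(string connectionString, object[][] rows)
            {
                using (var cn = new SqlCeConnection(connectionString))
                {
                    cn.Open();
                    using (SqlCeTransaction tx = cn.BeginTransaction())
                    using (SqlCeCommand cmd = cn.CreateCommand())
                    {
                        cmd.Transaction = tx;
                        // Placeholder table and columns; match these to the real schema.
                        cmd.CommandText = "INSERT INTO Table1 VALUES (@p0, @p1, @p2)";
                        cmd.Parameters.Add("@p0", SqlDbType.NVarChar);
                        cmd.Parameters.Add("@p1", SqlDbType.NVarChar);
                        cmd.Parameters.Add("@p2", SqlDbType.NVarChar);

                        foreach (object[] row in rows)
                        {
                            for (int i = 0; i < row.Length; i++)
                                cmd.Parameters[i].Value = row[i];
                            cmd.ExecuteNonQuery();   // all rows commit together below
                        }
                        tx.Commit();
                    }
                }
            }
        }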

    Read the article

  • Performance effect of using print statements in Python script

    - by Sudar
    I have a Python script that processes a huge text file (around 4 million lines) and writes the data into two separate files. I have added a print statement, which outputs a string for every line, for debugging. I want to know how bad this could be from a performance perspective. If it is going to be very bad, I can remove the debugging line. Edit: It turns out that having a print statement for every line in a file with 4 million lines increases the running time way too much.

    Read the article

  • C++ STL: Array vs Vector: Raw element accessing performance

    - by oh boy
    I'm building an interpreter, and as I'm aiming for raw speed this time, every clock cycle matters for me in this (raw) case. Do you have any experience or information about which of the two is faster: vector or array? All that matters is the speed at which I can access an element (opcode fetching); I don't care about inserting, allocation, sorting, etc. I'm going to go out on a limb now and say: arrays are at least a bit faster than vectors in terms of accessing an element i. It seems really logical to me: with vectors you have all that safety and bookkeeping overhead which doesn't exist for arrays. (Why) am I wrong? No, I can't ignore the performance difference - even if it is small - I have already optimized and minimized every other part of the VM which executes the opcodes :)

    Read the article

  • Does async and await increase performance of an ASP.NET application

    - by Kerezo
    I recently read an article about C# 5 and the new and nice asynchronous programming. I see it works great in Windows applications. The question that came to me is whether this feature can increase ASP.NET performance. Consider this code: public T GetData() { var d = GetSomeData(); return d; } and public async T GetData2() { var d = await GetSomeData(); return d; } Is there any difference between these two pieces of code in an ASP.NET application? Thanks
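
    As a side note, the snippets above will not compile as written: an async method has to return void, Task or Task<T>, and there must be something genuinely awaitable inside it. A minimal sketch of a shape that does compile, using HttpClient purely as an example of real asynchronous I/O (the URL is a placeholder):

        using System.Net.Http;
        using System.Threading.Tasks;

        public class DataService
        {
            // While the HTTP call is in flight, the request thread goes back to the
            // ASP.NET thread pool instead of blocking; that is where the scalability
            // benefit (rather than raw speed) of async/await comes from.
            public async Task<string> GetDataAsync()
            {
                using (var client = new HttpClient())
                {
                    return await client.GetStringAsync("http://example.com/data");
                }
            }
        }

    Awaiting a purely CPU-bound call buys nothing; the gain only appears when the awaited operation releases the thread while it waits.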

    Read the article

  • Does 'throw' or 'try...catch' hinder performance?

    - by Richard
    I've been reading all over the place (including here) about when exceptions should / shouldn't be used. I now want to change my code that would throw so that the method returns false and handles it like that, but my question is: is it the throwing or the try...catch-ing that can hinder performance? What I mean is, would this be acceptable: bool SomeMethod() { try { // ...do something } catch (Exception ex) // don't care too much what, at the moment... { // output error // return false } return true; // no errors } Or would there be a better way to do it? (I'm bloody sick of seeing "Unhandled exception..." LOL!)
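
    For reference, a small C# sketch of the two styles being weighed up. Entering a try block is essentially free in .NET; the measurable cost appears only when an exception is actually thrown and the stack is unwound. The method and type names here are illustrative, not taken from the question.

        using System;

        static class OrderLoader
        {
            // Style 1: let the exception propagate to the caller (cheap until it actually throws).
            public static int LoadOrderCount(string raw)
            {
                return int.Parse(raw);              // throws FormatException on bad input
            }

            // Style 2: swallow the failure and report success/failure through the return value.
            public static bool TryLoadOrderCount(string raw, out int count)
            {
                try
                {
                    count = int.Parse(raw);
                    return true;
                }
                catch (FormatException ex)
                {
                    Console.Error.WriteLine("Bad input: " + ex.Message);
                    count = 0;
                    return false;                   // the throw/catch is what costs, not the try
                }
            }
        }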

    Read the article

  • Performance problem on a query.

    - by yapiskan
    Hi, I have a performance problem with a query. The first table is a Customer table which has millions of records in it. The Customer table has an email address column and some other information about the customer. The second table is a CommunicationInfo table which contains just email addresses. What I want to know is how many times each email address in the CommunicationInfo table repeats in the Customer table. What would be the best-performing query? The basic query that illustrates this situation is: Select ci.Email, count(*) from Customer c left join CommunicationInfo ci on c.Email1 = ci.Email or c.Email2 = ci.Email Group by ci.Email But it takes about 5 or 6 minutes to execute. Thanks in advance.

    Read the article

  • Slow performance of query

    - by user642378
    Hi, I have already asked about the performance of this query and I tried to simplify it, but it still does not perform well. I am adding my query below. Can you please simplify it more effectively? select r.parent_itemid f_id, parent_item.name f_name, parent_item.typeid f_typeid, parent_item.ownerid f_ownerid, parent_item.created f_created, parent_item.modifiedby f_modifiedby, parent_item.modified f_modified, pt.name f_tname, child_item.id i_id, t.name i_tname, child_item.typeid i_typeid, child_item.name i_name, child_item.ownerid i_ownerid, child_item.created i_created, child_item.modifiedby i_modifiedby, child_item.modified i_modified, r.ordinal i_ordinal from item child_item, type t, relation r, item parent_item, type pt where r.child_itemid = child_item.id and t.id=child_item.typeid and parent_item.id = r.parent_itemid and pt.id = parent_item.typeid and parent_item.id in ( select itemid from permission where itemid=parent_item.id and (holder_itemid in (10,100) and level > 0) ) order by r.parent_itemid, r.relation_typeid, r.ordinal Thank you, regards, Jennie

    Read the article

  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising, its an incredibly complicated beast and we’re abstracted away from that complexity via some boxes that go yellow red or green and that have some lines drawn between them. Example dataflow In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input, all produce the same output, but which all operate slightly differently by way of having different transformation components. I also want to use this blog post to challenge a common held opinion that I see perpetuated over and over again on the SSIS forum. That is, that people assume adding components to a dataflow will be detrimental to overall performance. Its not surprising that people think this –it is intuitive to think that more components means more work- however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that I have never proven that assertion and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I’ll happily call that one out as a myth even without any investigation!  The Setup I have a 2GB datafile which is a list of 4731904 (~4.7million) customer records with various attributes against them and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] [BirthDate] The data file is a SSIS raw format file which I chose to use because it is the quickest way of getting data into a dataflow and given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories and in order to do it I shall be using different combinations of SSIS’ Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical Procs) Intel Core i7 at 1.73GHz and Samsung SSD hard drive. Its running SQL Server 2008 R2 on Windows 7. The Variables Here are the three combinations of components that I am going to test:     One Conditional Split - A single Conditional Split component CSPL Split by Month of Birth and income category that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. This next screenshot displays the expression logic in use: Derived Column & Conditional Split - A Derived Column component DER Income Category that adds a new column [IncomeCategory] which will contain one of two possible text values {“LessThan50000”,”GreaterThan50000”} and uses [YearlyIncome] to determine which value each row should get. 
A Conditional Split component CSPL Split by Month of Birth and Income Category then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. The next screenshots display the expression logic in use: DER Income Category; CSPL Split by Month of Birth and Income Category. Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each Income Category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case then I am separating the single Conditional Split of #1 into three Conditional Split components. The next screenshots display the expression logic in use: CSPL Split by Income Category; CSPL Split by Month of Birth 1 & 2. Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. For illustration here is a screenshot of the dataflow containing three Conditional Split components. As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes: (1) the dataflow containing just one Conditional Split (i.e. #1) will be quicker; (2) there is no significant difference between any of them; (3) one of the two dataflows containing multiple transformation components will be quicker. Regardless of which of those outcomes comes to pass we will have learnt something and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS. The Results and Analysis The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I'm pasting a screenshot because I frankly can't be bothered with the faffing about needed to make a presentable HTML table. It is plain to see from the average that the dataflow containing three Conditional Splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it and I'll explain that below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller, indicating that performance for this dataflow can be predicted with much greater confidence too. The Explanation I refer you to the screenshot above that shows how CSPL Split by Month of Birth and Income Category in the first dataflow is set up. Observe that there is a case for each combination of Month of Birth and Income Category - 24 in total. These expressions get evaluated in the order that they appear, and hence if we assume that Month of Birth and Income Category are uniformly distributed in the dataset, we can deduce that the expected number of expression evaluations for each row is 12.5, i.e. (1 (the minimum) + 24 (the maximum)) divided by 2 = 12.5. Now take a look at the screenshots for the second dataflow.
We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before, only the expression differs slightly. In this case then we have 1 + 12.5 = 13.5 expected evaluations for each row - that would account for the slightly longer average execution time for this dataflow. Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5. 14.5 is still more than 12.5 & 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate - one for Income Category and one for Month of Birth; the expressions in the Conditional Split in the third dataflow, however, only have one predicate, thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between: MONTH(BirthDate) == 1 && YearlyIncome <= 50000 and MONTH(BirthDate) == 1. In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant amount of extra CPU cycles - that's where our duration difference comes from. The Wrap-up The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean using fewer components; indeed, sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results - let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out: Inequality joins, Asynchronous transformations and Lookups; Destination Adapter Comparison; Don't turn the dataflow into a cursor; SSIS Dataflow - Designing for performance (webinar). Any comments? Let me know! @Jamiet

    Read the article

  • The challenge of communicating externally with IRM secured content

    - by Simon Thorpe
    I am often asked by customers about how they handle sending IRM secured documents to external parties. Their concern is that using IRM to secure sensitive information they need to share outside their business, is troubled with the inability for third parties to install the software which enables them to gain access to the information. It is a very legitimate question and one i've had to answer many times in the past 10 years whilst helping customers plan successful IRM deployments. The operating system does not provide the required level of content security The problem arises from what IRM delivers, persistent security to your sensitive information where ever it resides and whenever it is in use. Oracle IRM gives customers an array of features that help ensure sensitive information in an IRM document or email is always protected and only accessed by authorized users using legitimate applications. Examples of such functionality are; Control of the clipboard, either by disabling completely in the opened document or by allowing the cut and pasting of information between secured IRM documents but not into insecure applications. Protection against programmatic access to the document. Office documents and PDF documents have the ability to be accessed by other applications and scripts. With Oracle IRM we have to protect against this to ensure content cannot be leaked by someone writing a simple program. Securing of decrypted content in memory. At some point during the process of opening and presenting a sealed document to an end user, we must decrypt it and give it to the application (Adobe Reader, Microsoft Word, Excel etc). This process must be secure so that someone cannot simply get access to the decrypted information. The operating system alone just doesn't have the functionality to deliver these types of features. This is why for every IRM technology there must be some extra software installed and typically this software requires administrative rights to do so. The fact is that if you want to have very strong security and access control over a document you are going to send to someone who is beyond your network infrastructure, there must be some software to provide that functionality. Simple installation with Oracle IRM The software used to control access to Oracle IRM sealed content is called the Oracle IRM Desktop. It is a small, free piece of software roughly about 12mb in size. This software delivers functionality for everything a user needs to work with an Oracle IRM solution. It provides the functionality for all formats we support, the storage and transparent synchronization of user rights and unique to Oracle, the ability to search inside sealed files stored on the local computer. In Oracle we've made every technical effort to ensure that installing this software is a simple as possible. In situations where the user's computer is part of the enterprise, this software is typically deployed using existing technologies such as Systems Management Server from Microsoft or by using Active Directory Group Policies. However when sending sealed content externally, you cannot automatically install software on the end users machine. You need to rely on them to download and install themselves. Again we've made every effort for this manual install process to be as simple as we can. Starting with the small download size of the software itself to the simple installation process, most end users are able to install and access sealed content very quickly. 
You can see for yourself how easily this is done by walking through our free and easy self service demonstration of using sealed content. How to handle objections and ensure there is value However the fact still remains that end users may object to installing, or may simply be unable to install the software themselves due to lack of permissions. This is often a problem with any technology that requires specialized software to access a new type of document. In Oracle, over the past 10 years, we've learned many ways to get over this barrier of getting software deployed by external users. First and I would say of most importance, is the content MUST have some value to the person you are asking to install software. Without some type of value proposition you are going to find it very difficult to get past objections to installing the IRM Desktop. Imagine if you were going to secure the weekly campus restaurant menu and send this to contractors. Their initial response will be, "why on earth are you asking me to download some software just to access your menu!?". A valid objection... there is no value to the user in doing this. Now consider the scenario where you are sending one of your contractors their employment contract which contains their address, social security number and bank account details. Are they likely to take 5 minutes to install the IRM Desktop? You bet they are, because there is real value in doing so and they understand why you are doing it. They want their personal information to be securely handled and a quick download and install of some software is a small task in comparison to dealing with the loss of this information. Be clear in communicating this value So when sending sealed content to people externally, you must be clear in communicating why you are using an IRM technology and why they need to install some software to access the content. Do not try and avoid the issue, you must be clear and upfront about it. In doing so you will significantly reduce the "I didn't know I needed to do this..." responses and also gain respect for being straight forward. One customer I worked with, 6 months after the initial deployment of Oracle IRM, called me panicking that the partner they had started to share their engineering documents with refused to install any software to access this highly confidential intellectual property. I explained they had to communicate to the partner why they were doing this. I told them to go back with the statement that "the company takes protecting its intellectual property seriously and had decided to use IRM to control access to engineering documents." and if the partner didn't respect this decision, they would find another company that would. The result? A few days later the partner had made the Oracle IRM Desktop part of their approved list of software in the company. Companies are successful when sending sealed content to third parties We have many, many customers who send sensitive content to third parties. Some customers actually sell access to Oracle IRM protected content and therefore 99% of their users are external to their business, one in particular has sold content to hundreds of thousands of external users. Oracle themselves use the technology to secure M&A documents, payroll data and security assessments which go beyond the traditional enterprise security perimeter. 
Pretty much every company that deploys Oracle IRM will at some point be sending those documents to people outside of the company; these customers must be successful, otherwise Oracle IRM wouldn't be successful. Because our software is used by a wide variety of companies, some of whom use it to sell content, I've often run into people I'm sharing a sealed document with who already have the IRM Desktop installed due to accessing content from another company. The future In summary I would say that yes, this is a hurdle that many customers are concerned about, but we see much evidence that in practice, people leap that hurdle with relative ease as long as they are good at communicating the value of using IRM and also take measures to ensure end users can easily go through the process of installation. We are constantly developing new ideas for reducing this hurdle and maybe one day the operating systems will give us enough rich security functionality to have no software installation. Until then, Oracle IRM is by far the easiest solution to balance security and usability for your business. If you would like to evaluate it for yourselves, please contact us.

    Read the article

  • DataView.RowFilter Vs DataTable.Select() vs DataTable.Rows.Find()

    - by Aseem Gautam
    Consider the code below: DataView someView = new DataView(sometable); someView.RowFilter = someFilter; if (someView.Count > 0) { …. } Quite a number of articles say DataTable.Select() is better than using DataViews, but these are prior to VS2008 (e.g. "Solved: The Mystery of DataView's Poor Performance with Large Recordsets" and "Array of DataRecord vs. DataView: A Dramatic Difference in Performance"). Googling on this topic I found some articles/forum topics which mention that DataTable.Select() itself is quite buggy (not sure about this) and underperforms in various scenarios. On this (Best Practices ADO.NET) topic on MSDN it is suggested that if there is a primary key defined on a DataTable, the FindRows() or Find() methods should be used instead of DataTable.Select(). This article here (.NET 1.1) benchmarks all three approaches plus a couple more, but it is for version 1.1, so I am not sure if the results are still valid now. According to this, DataRowCollection.Find() outperforms all other approaches and DataTable.Select() outperforms DataView.RowFilter. So I am quite confused about what might be the best approach for finding rows in a DataTable. Or is there no single good way to do this, and do multiple solutions exist depending upon the scenario?
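
    As an illustration of the Rows.Find() option mentioned above, a minimal C# sketch assuming a table with a single-column primary key (the schema here is made up for the example):

        using System;
        using System.Data;

        class FindExample
        {
            static void Main()
            {
                var orders = new DataTable("Orders");
                orders.Columns.Add("Id", typeof(int));
                orders.Columns.Add("Total", typeof(decimal));
                orders.PrimaryKey = new[] { orders.Columns["Id"] };   // Find() requires a primary key

                orders.Rows.Add(42, 99.50m);

                // Keyed lookup against the table's primary-key index, as opposed to
                // re-evaluating a filter expression the way Select()/RowFilter do.
                DataRow row = orders.Rows.Find(42);
                Console.WriteLine(row == null ? "not found" : row["Total"].ToString());
            }
        }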

    Read the article

  • How do I use PerformanceCounterType AverageTimer32?

    - by Patrick J Collins
    I'm trying to measure the time it takes to execute a piece of code on my production server. I'd like to monitor this information in real time, so I decided to give Performance Analyser a whizz. I understand from MSDN that I need to create both an AverageTimer32 and an AverageBase performance counter, which I duly have. I increment the counter in my program, and I can see the CallCount go up and down, but the AverageTime is always zero. What am I doing wrong? Thanks! Here's a snippet of code: long init_call_time = Environment.TickCount; // *** // Lots and lots of code... // *** // Count number of calls PerformanceCounter perf = new PerformanceCounter("Cat", "CallCount", "Instance", false); perf.Increment(); perf.Close(); // Count execution time PerformanceCounter perf2 = new PerformanceCounter("Cat", "CallTime", "Instance", false); perf2.NextValue(); perf2.IncrementBy(Environment.TickCount - init_call_time); perf2.Close(); // Average base for execution time PerformanceCounter perf3 = new PerformanceCounter("Cat", "CallTimeBase", "Instance", false); perf3.Increment(); perf3.Close(); perf2.NextValue();
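
    One detail worth checking, offered as a suggestion rather than a verified diagnosis: AverageTimer32 expects its increments in high-resolution performance-counter ticks (which Stopwatch.ElapsedTicks reports on a high-resolution system), not in Environment.TickCount milliseconds, and the counter instances are usually kept alive rather than recreated and closed on every call. A C# sketch of that pattern, reusing the category, counter and instance names from the question:

        using System.Diagnostics;

        static class CallTimer
        {
            static readonly PerformanceCounter CallTime =
                new PerformanceCounter("Cat", "CallTime", "Instance", false);
            static readonly PerformanceCounter CallTimeBase =
                new PerformanceCounter("Cat", "CallTimeBase", "Instance", false);

            public static void Record(Stopwatch watch)
            {
                // AverageTimer32 divides the accumulated ticks by the base count
                // (and the counter frequency) to produce the displayed average.
                CallTime.IncrementBy(watch.ElapsedTicks);
                CallTimeBase.Increment();
            }
        }

        // usage:
        //   var watch = Stopwatch.StartNew();
        //   ... the work being measured ...
        //   watch.Stop();
        //   CallTimer.Record(watch);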

    Read the article

  • Diagramming Software for a Developer/Designer

    - by Craig Walker
    For a long time I've been looking for a good diagramming/vector-based drawing program that meets my needs as a developer. I'd like to: Draw database diagrams Draw flow charts Draw object-modeling diagrams (UML being the standard) Draw other free-form diagrams (basically boxes & arrows with the occasional clipart) Draw mockups of user interfaces and web pages EDIT: I want good-looking electronic-format diagrams that I can show to 3rd parties, not just something for my own internal use. EDIT 2: I'm also looking for Windows software, although I'm toying with the idea of switching to Mac, so a really good Mac-only product might get me to switch. Basically I need a good vector graphic program (with decent grouping, connecting lines, and ideally auto-routing). I'd prefer a diagramming tool that can also be used for drawing (for the UI mockups) rather than a drawing tool that can also be used for diagrams. I've tried Visio on several occasions, and every time I've been disappointed. The interface always seems to get in my way at some point. It's pretty close to what I want, and the latest version (I got the trail from MS) seems to be better than previous ones in terms of usability, but I really don't want to plunk down that sort of cash for a mediocre product. I've tried Dia and Inkscape, and while initially promising and with the right price tag, I found both of them to be lacking in several ways (including some recurring bugs). I've toyed with getting Adobe Illustrator, but I've never used it before, and I have a feeling that it wouldn't handle the diagramming aspect very well, and I don't want to buy a copy just to find out it doesn't meet my needs. So far, the product that I've had the most success with is, sadly, OpenOffice Draw. It's free of course (which lowers my expectations and thus improves my view of it) and its usability is pretty good, but in the end I'd like something more suited to diagramming. I'm willing to spend real money (in the $500-$1K range) for a really good piece of software if it does everything I want it to. The front runner is of course Visio but I'm hoping for more. Does anybody have any recommendations? CONCLUSION: @dlamblin had the most informative post, but the part I gained the most from was his/her (and others) mention of OmniGraffle, not Gliffy. I gave Gliffy a try, and it seemed neet for occational use, but since it's a Flash app (note: not AJAX as dlamblin mentioned) it's still a bit of a pain to use (no keyboard shortcuts for copy/paste was pretty much a deal breaker for me). I also tried SmartDraw, but it had 3-strikes-you're-out against it: The trial period was only 7 days long. It used some nonstandard (and visually jarring) GUI widget toolkit for its UI. At the very least it makes me suspicious (how do I know it will actually work & support the standard Windows features?) It crashed on me early into my trial. OmniGraffle looks like exactly what I want... except that it's Mac-only (so I couldn't give it a try). However, it got good reviews from my Mac-owning coworker, and I hope to try it on a friend's Mac soon. If it's good enough then I might spring for a new MacBook.

    Read the article

  • How to get decent MySQL driver performance in Ruby

    - by Zombies
    I notice that I am getting very poor performance for either or both inserts and queries. The queries themselves are basic and can execute with no delay directly from mysql. The Ruby script that I wrote uses only 1 thread, so only 1 connection is being used, and it is never closed unless the script is terminated. Pretty basic: I am just trying to insert a lot of rows. There is a look-up or two to get a surrogate key, or to check for duplicates, but the complexity is just O(n). Also, it isn't like there are millions of records, so again the queries themselves take no time to run. I am using: Ruby 1.9.1, gem/driver ruby-mysql 2.9.2, MySQL 5.1.37-1ubuntu5.1 (all 32-bit versions on a 32-bit Ubuntu distro). I am getting about 1-2 inserts per second, which is pretty slow. I know a lot of people will suggest changing drivers, but that means I have some refactoring and testing to do, so I would really appreciate any help; but please, if you do recommend that, at least say why you do (e.g. if you have used ruby-mysql x.x.x before and found another MySQL driver to be better). What I would like to know: how can I improve performance with ruby-mysql 2.9.2? If and only if I cannot do this with ruby-mysql 2.9.2, what should I do?

    Read the article

  • Using Oracle hint "FIRST_ROWS" to improve Oracle database performance

    - by bobetko
    I have a statement that runs on an Oracle database server. The statement has about 5 joins and there is nothing unusual there. It looks pretty much like below: SELECT field1, field2, field3, ... FROM table1, table2, table3, table4, table5 WHERE table1.id = table2.id AND table2.id = table3.id AND ... table5.userid = 1 The problem (and what is interesting) is that the statement for userid = 1 takes 1 second to return 590 records, while the statement for userid = 2 takes around 30 seconds to return 70 records. I don't understand why the difference is so big. It seems that a different execution plan is chosen for the statement with userid = 1 than for the one with userid = 2. After I implemented the Oracle hint FIRST_ROWS, performance became significantly better: both statements (for both ids 1 and 2) return in under 1 second. SELECT /*+ FIRST_ROWS */ field1, field2, field3, ... FROM table1, table2, table3, table4, table5 WHERE table1.id = table2.id AND table2.id = table3.id AND ... table5.userid = 1 Questions: 1) What are possible reasons for the bad performance when userid = 2 (when the hint is not used)? 2) Why would the execution plan be different for one statement vs. the other (when the hint is not used)? 3) Is there anything that I should be careful about when deciding to add this hint to my queries? Thanks

    Read the article

  • Will fixed-point arithmetic be worth my trouble?

    - by Thomas
    I'm working on a fluid dynamics Navier-Stokes solver that should run in real time. Hence, performance is important. Right now, I'm looking at a number of tight loops that each account for a significant fraction of the execution time: there is no single bottleneck. Most of these loops do some floating-point arithmetic, but there's a lot of branching in between. The floating-point operations are mostly limited to additions, subtractions, multiplications, divisions and comparisons. All this is done using 32-bit floats. My target platform is x86 with at least SSE1 instructions. (I've verified in the assembler output that the compiler indeed generates SSE instructions.) Most of the floating-point values that I'm working with have a reasonably small upper bound, and precision for near-zero values isn't very important. So the thought occurred to me: maybe switching to fixed-point arithmetic could speed things up? I know the only way to be really sure is to measure it, that might take days, so I'd like to know the odds of success beforehand. Fixed-point was all the rage back in the days of Doom, but I'm not sure where it stands anno 2010. Considering how much silicon is nowadays pumped into floating-point performance, is there a chance that fixed-point arithmetic will still give me a significant speed boost? Does anyone have any real-world experience that may apply to my situation?

    Read the article

  • Optimizing code using PIL

    - by freakazo
    Firstly sorry for the long piece of code pasted below. This is my first time actually having to worry about performance of an application so I haven't really ever worried about performance. This piece of code pretty much searches for an image inside another image, it takes 30 seconds to run on my computer, converting the images to greyscale and other changes shaved of 15 seconds, I need another 15 shaved off. I did read a bunch of pages and looked at examples but I couldn't find the same problems in my code. So any help would be greatly appreciated. From the looks of it (cProfile) 25 seconds is spent within the Image module, and only 5 seconds in my code. from PIL import Image import os, ImageGrab, pdb, time, win32api, win32con import cProfile def GetImage(name): name = name + '.bmp' try: print(os.path.join(os.getcwd(),"Images",name)) image = Image.open(os.path.join(os.getcwd(),"Images",name)) except: print('error opening image;', name) return image def Find(name): image = GetImage(name) imagebbox = image.getbbox() screen = ImageGrab.grab() #screen = Image.open(os.path.join(os.getcwd(),"Images","Untitled.bmp")) YLimit = screen.getbbox()[3] - imagebbox[3] XLimit = screen.getbbox()[2] - imagebbox[2] image = image.convert("L") Screen = screen.convert("L") Screen.load() image.load() #print(XLimit, YLimit) Found = False image = image.getdata() for y in range(0,YLimit): for x in range(0,XLimit): BoxCoordinates = x, y, x+imagebbox[2], y+imagebbox[3] ScreenGrab = screen.crop(BoxCoordinates) ScreenGrab = ScreenGrab.getdata() if image == ScreenGrab: Found = True #print("woop") return x,y if Found == False: return "Not Found" cProfile.run('print(Find("Login"))')

    Read the article

  • Java repaint is slow under certain conditions.

    - by Gabriel A. Zorrilla
    I'm doing a simple grid in which each square is highlighted by the cursor. There are a couple of JPanels, mapgrid and overlay, inside a JLayeredPane, with mapgrid on the bottom. Mapgrid just draws the grid on initialization; its paint method is: public void paintComponent(Graphics g) { super.paintComponent(g); Graphics2D g2d = (Graphics2D) g; g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON); for (int i = 0; i < h; i++) { for (int j = 0; j < w; j++) { g2d.setColor(new Color(128, 128, 128, 255)); g2d.drawRect(tileSize * j, i * tileSize, tileSize, tileSize); } } The overlay JPanel is where the highlighting occurs; this is what is repainted when the mouse is moved: public void paintComponent(Graphics g) { super.paintComponent(g); Graphics2D g2d = (Graphics2D) g; g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON); g2d.setColor(new Color(255, 255, 128, 255)); g2d.drawRect((pointerX/tileSize)*tileSize,(pointerY/ tileSize)*tileSize, tileSize, tileSize); } I noticed that even though the base layer (mapgrid) is NOT repainted when the mouse moves, just the transparent overlay layer, the performance is lacking. If I give the overlay JPanel a background, it's way faster. If I remove the mapgrid antialiasing, it's a bit faster too. I don't know why giving a background to the overlay layer (and thus hiding the mapgrid) or disabling antialiasing in the mapgrid leads to much better performance. Is there a better way to do this? Why does this happen?

    Read the article

  • Java Collections and Garbage Collector

    - by Anth0
    A little question regarding performance in a Java web app. Let's assume I have a List<Rubrique> listRubriques with ten Rubrique objects. A Rubrique contains one list of products (List<product> listProducts) and one list of clients (List<Client> listClients). What exactly happens in memory if I do this: listRubriques.clear(); listRubriques = null; My point of view would be that, since listRubriques is empty, all my objects previously referenced by this list (including listProducts and listClients) will be garbage collected pretty soon. But since Collections in Java are a little bit tricky, and since I have quite a few performance issues with my app, I'm asking the question :) Edit: let's assume now that my Client object contains a List<Client>. Therefore, I have kind of a circular reference between my objects. What would happen then if my listRubriques is set to null? This time, my point of view would be that my Client objects will become "unreachable" and might create a memory leak?

    Read the article

  • SQL Server Full Text Search with Containstable is very slow when used in a JOIN!

    - by Bob
    Hello, I am using SQL 2008 full text search and I am having serious issues with performance depending on how I use Contains or ContainsTable. Here are samples (table1 has about 5000 records, and there is a covering index on table1 which has all the fields in the where clause; I tried to simplify the statements, so forgive me if there are syntax issues): Scenario 1: select * from table1 as t1 where t1.field1=90 and t1.field2='something' and Exists(select top 1 * from containstable(table1,*, 'something') as t2 where t2.[key]=t1.id) results: 10 seconds (very slow) Scenario 2: select * from table1 as t1 join containstable(table1,*, 'something') as t2 on t2.[key] = t1.id where t1.field1=90 and t1.field2='something' results: 10 seconds (very slow) Scenario 3: Declare @tbl Table(id uniqueidentifier primary key) insert into @tbl select [key] from containstable(table1,*, 'something') select * from table1 as t1 where t1.field1=90 and t1.field2='something' and Exists(select id from @tbl as tbl where id=t1.id) results: fraction of a second (super fast) Bottom line, it seems that if I use Containstable in any kind of join or where clause condition of a select statement that also has other conditions, the performance is really bad. In addition, if you look at the profiler, the number of reads from the database goes through the roof. But if I first do the full text search and put the results in a table variable, everything goes super fast; the number of reads is also much lower. It seems that in the "bad" scenarios it somehow gets stuck in a loop which causes it to read many times from the database, but of course I don't understand why. Now the first question is why that is happening, and the second question is how scalable table variables are: what if the search results in tens of thousands of records, is it still going to be fast? Any ideas? Thanks

    Read the article

  • Why is the Clojure Hello World program so slow compared to Java and Python?

    - by viksit
    Hi all, I'm reading "Programming Clojure" and I was comparing some languages I use for some simple code. I noticed that the clojure implementations were the slowest in each case. For instance, Python - hello.py def hello_world(name): print "Hello, %s" % name hello_world("world") and result, $ time python hello.py Hello, world real 0m0.027s user 0m0.013s sys 0m0.014s Java - hello.java import java.io.*; public class hello { public static void hello_world(String name) { System.out.println("Hello, " + name); } public static void main(String[] args) { hello_world("world"); } } and result, $ time java hello Hello, world real 0m0.324s user 0m0.296s sys 0m0.065s and finally, Clojure - hellofun.clj (defn hello-world [username] (println (format "Hello, %s" username))) (hello-world "world") and results, $ time clj hellofun.clj Hello, world real 0m1.418s user 0m1.649s sys 0m0.154s Thats a whole, garangutan 1.4 seconds! Does anyone have pointers on what the cause of this could be? Is Clojure really that slow, or are there JVM tricks et al that need to be used in order to speed up execution? More importantly - isn't this huge difference in performance going to be an issue at some point? (I mean, lets say I was using Clojure for a production system - the gain I get in using lisp seems completely offset by the performance issues I can see here). The machine used here is a 2007 Macbook Pro running Snow Leopard, a 2.16Ghz Intel C2D and 2G DDR2 SDRAM. BTW, the clj script I'm using is from here and looks like, #!/bin/bash JAVA=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home/bin/java CLJ_DIR=/opt/jars CLOJURE=$CLJ_DIR/clojure.jar CONTRIB=$CLJ_DIR/clojure-contrib.jar JLINE=$CLJ_DIR/jline-0.9.94.jar CP=$PWD:$CLOJURE:$JLINE:$CONTRIB # Add extra jars as specified by `.clojure` file if [ -f .clojure ] then CP=$CP:`cat .clojure` fi if [ -z "$1" ]; then $JAVA -server -cp $CP \ jline.ConsoleRunner clojure.lang.Repl else scriptname=$1 $JAVA -server -cp $CP clojure.main $scriptname -- $* fi

    Read the article
