Search Results

Search found 97855 results on 3915 pages for 'code performance'.


  • Performance of managed C++ vs. unmanaged/native C++

    - by bsobaid
    I am writing a very high-performance application that handles and processes hundreds of events every millisecond. Is unmanaged C++ faster than managed C++, and why? Managed C++ targets the CLR rather than the OS, and the CLR takes care of memory management, which simplifies the code and is arguably more efficient than memory management written by "a programmer" in unmanaged C++. Or is there some other reason? And when using managed code, how can one avoid dynamic memory allocation, which causes a performance hit, if allocation is all transparent to the programmer and handled by the CLR? So, coming back to my question: is managed C++ more efficient in terms of speed than unmanaged C++, and why?
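
    One common way to sidestep per-event allocation in managed code is to preallocate everything up front and keep hot-path data in value types, so nothing new is created (and nothing needs collecting) while events are flowing. The sketch below is a minimal C# illustration of that idea; the type, field names and buffer size are made up, and the same pattern applies to value classes in C++/CLI.

        using System;

        struct PriceEvent            // value type: stored inline in the array, no per-event heap object
        {
            public long Timestamp;
            public double Price;
        }

        class EventRing
        {
            private readonly PriceEvent[] buffer;   // allocated once, up front
            private int next;

            public EventRing(int capacity)
            {
                buffer = new PriceEvent[capacity];
            }

            public void Publish(long timestamp, double price)
            {
                // Overwrite slots in place: no "new" on the hot path, so the GC stays idle.
                buffer[next].Timestamp = timestamp;
                buffer[next].Price = price;
                next = (next + 1) % buffer.Length;
            }
        }

    Whether this kind of design beats hand-written native allocation is exactly the question above; the sketch only shows that managed code does not force an allocation per event.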

    Read the article

  • C vs. C++ for performance in memory allocation

    - by Andrei
    Hi, I am planning to participate in the development of code written in C for Monte Carlo analysis of complex problems. This code allocates huge data arrays in memory to speed up its performance, and for that reason the author chose C over C++, claiming that C makes for faster and more reliable (with respect to memory leaks) code. Do you agree with that? What would your choice be if you needed to store 4-16 GB of data arrays in memory during a calculation?

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think) I want. A lot of tutorials, and even SO questions, have similar tips, generally covering:
    - Use GL face culling (the OpenGL function, not the scene logic)
    - Only send one matrix to the GPU (the projection-modelview combination), thereby reducing the MVP calculations from per vertex to once per model (as it should be)
    - Use interleaved vertices
    - Minimise GL calls, batching where appropriate
    and possibly a few/many others. I am (for curiosity's sake) rendering 28 million triangles in my application using several vertex buffers. I have tried all of the above techniques (to the best of my knowledge) and received almost no performance change. While I am getting around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling at around 20-50% during rendering, so I assume I am GPU-bound. Note: I am looking into gDEBugger at the moment. Cross-posted at Game Development.

    Read the article

  • C# debug vs release performance

    - by sagie
    Hi. I've come across the following paragraph: “Debug vs Release setting in the IDE when you compile your code in Visual Studio makes almost no difference to performance… the generated code is almost the same. The C# compiler doesn’t really do any optimisation. The C# compiler just spits out IL… and at the runtime it’s the JITer that does all the optimisation. The JITer does have a Debug/Release mode and that makes a huge difference to performance. But that doesn’t key off whether you run the Debug or Release configuration of your project, that keys off whether a debugger is attached.” The source is here and the podcast is here. Can someone point me to a Microsoft article that actually confirms this?
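
    One way to see for yourself which factors the JIT reacts to is to inspect the debugger state and the assembly's DebuggableAttribute at run time, then compare Debug and Release builds with and without a debugger attached. A minimal C# sketch (the class name is just for illustration):

        using System;
        using System.Diagnostics;
        using System.Reflection;

        class JitSettingsProbe
        {
            static void Main()
            {
                // Attaching a debugger is what suppresses many JIT optimisations at run time.
                Console.WriteLine("Debugger attached: " + Debugger.IsAttached);

                // A Debug build typically emits DebuggableAttribute with the JIT optimizer disabled;
                // a Release build does not. Comparing this across builds shows the compiler's part.
                var attr = (DebuggableAttribute)Attribute.GetCustomAttribute(
                    Assembly.GetExecutingAssembly(), typeof(DebuggableAttribute));
                Console.WriteLine("JIT optimizer disabled by attribute: " +
                    (attr != null && attr.IsJITOptimizerDisabled));
            }
        }

    Running the Release build from inside and outside the debugger, and then the Debug build the same way, gives four data points against which to test the quoted claim.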

    Read the article

  • Which toString() method should be used, performance-wise?

    - by Mrityunjay
    Hi, I am working on a project for performance enhancement. I have a doubt: during a process we tend to trace the current state of the DTOs and entities used, so we have included a toString() method in all the POJOs. I have now implemented toString() in three different ways:

        public String toString() {
            return "POJO :" + this.getClass().getName() + " RollNo :" + this.rollNo + " Name :" + this.name;
        }

        public String toString() {
            StringBuffer buff = new StringBuffer("POJO :").append(this.getClass().getName())
                    .append(" RollNo :").append(this.rollNo).append(" Name :").append(this.name);
            return buff.toString();
        }

        public String toString() {
            StringBuilder builder = new StringBuilder("POJO :").append(this.getClass().getName())
                    .append(" RollNo :").append(this.rollNo).append(" Name :").append(this.name);
            return builder.toString();
        }

    Can anyone please help me find out which one is best and should be used to enhance performance?

    Read the article

  • Server performance: multiple external connections

    - by websiteguru
    I am creating a PHP script that requires the server to make several cURL requests per run. I'll be running this script through cron every 3 minutes, and I'm looking to maximize the number of cURL requests I can make in a 24-hour period. What I am wondering is whether it would be better, from a performance standpoint, to get a dedicated server or several small shared hosting accounts. Since the constraint is the number of external connections rather than system resources, which is the better approach?

    Read the article

  • JavaScript object access performance

    - by youdontmeanmuch
    In JavaScript, when you're reading a property of an object, is there a performance penalty to grabbing a reference to the whole object versus only reading the property you need? Also, keep in mind I'm not talking about DOM access; these are pure, simple JavaScript objects. For example, is there some kind of performance difference between the following two versions?

    Assumed to be faster, but not sure:

        var length = some.object[key].length;
        if (length === condition) {
            // Do something that doesn't need anything inside of some.object[key]
        } else {
            var object = some.object[key];
            // Do something that requires stuff inside of some.object[key]
        }

    I think this would be slower, but I'm not sure if it matters:

        var object = some.object[key];
        if (object.length === condition) {
            // Do something that doesn't need anything inside of some.object[key]
        } else {
            // Do something that requires stuff inside of some.object[key]
        }

    Read the article

  • Android -- Object Creation/Memory Allocation vs. Performance

    - by borg17of20
    Hello all, this is probably an easy one. I have about 20 TextViews/ImageViews in my current project that I access like this:

        ((TextView) multiLayout.findViewById(R.id.GameBoard_Multi_Answer1_Text)).setText("");
        // or
        ((ImageView) multiLayout.findViewById(R.id.GameBoard_Multi_Answer1_Right)).setVisibility(View.INVISIBLE);

    My question is this: am I better off, from a performance standpoint, just assigning these views to variables once? Further, am I losing some performance to the constant "search" process that goes on as part of the findViewById(...) method? (i.e. does findViewById(...) use some sort of hashtable/hashmap for look-ups, or does it implement an iterative search over the view hierarchy?) At present, my program never uses more than 2.5 MB of RAM, so will holding 20 or so more object references drastically affect this? I don't think so, but I figured I'd ask. Thanks.

    Read the article

  • Best way to handle MySQL date for performance with thousands of users

    - by bitLost
    I am currently part of a team designing a site that will potentially have thousands of users doing a number of date-related searches. During the design phase we have been trying to determine which makes more sense for performance optimization: should we store the value in a MySQL DATETIME field, or should we break it up into a number of fields (year, month, day, hour, minute, ...)? The question is, with a large data set and a potentially large set of users, would we gain performance by breaking the datetime into multiple fields and not relying on MySQL's date functions, or is MySQL already optimized for this?

    Read the article

  • Java statistics collection for performance evaluation

    - by user384706
    What is the most efficient way to collect and report performance statistics from an application? I have an application that uses a series of network APIs, and I want to report statistics at runtime, e.g.:

        Method doA() was called 3 times and consumed on avg 500ms
        Method doB() was called 5 times and consumed on avg 1200ms
        ...

    I thought of using a well-defined data structure (a collection) that each thread updates per remote call, which could then be used for the report. But I think that will make performance worse because of the time spent on statistics collection. Am I correct? How would I proceed if I used a background thread for this, so that the threads making the remote calls were unaware of the collection gathering? Thanks
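
    For what it's worth, the bookkeeping can be kept very cheap if each calling thread only does a map lookup plus a couple of atomic adds, while a separate reporting thread reads the totals. Below is a rough sketch of that shape, written in C# (a Java version would look the same with ConcurrentHashMap, AtomicLong and System.nanoTime()); the class and method names are made up for illustration.

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        static class CallStats
        {
            private class Entry { public long Calls; public long TotalMillis; }

            private static readonly ConcurrentDictionary<string, Entry> stats =
                new ConcurrentDictionary<string, Entry>();

            // Called by the worker threads: one lookup and two atomic adds per remote call.
            public static void Record(string method, long elapsedMillis)
            {
                var e = stats.GetOrAdd(method, _ => new Entry());
                Interlocked.Increment(ref e.Calls);
                Interlocked.Add(ref e.TotalMillis, elapsedMillis);
            }

            // Called from a background/reporting thread; it never blocks the callers.
            public static void Report()
            {
                foreach (var kv in stats)
                {
                    var e = kv.Value;
                    long calls = Interlocked.Read(ref e.Calls);
                    long total = Interlocked.Read(ref e.TotalMillis);
                    if (calls > 0)
                        Console.WriteLine("Method {0} was called {1} times and consumed on avg {2}ms",
                            kv.Key, calls, total / calls);
                }
            }
        }

    Each caller would wrap its remote call in a stopwatch and pass the elapsed milliseconds to Record(); the per-call overhead is then tiny compared to a network round trip.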

    Read the article

  • Performance considerations of a large hard-coded array in the .cs file

    - by terence
    I'm writing some code where performance is important. In one part of it, I have to compare a large set of pre-computed data against dynamic values. Currently, I'm storing that pre-computed data in a giant array in the .cs file:

        Data[] data = { /* my data set */ };

    The data set is about 90 KB, or roughly 13k elements. I was wondering if there's any downside to doing this, as opposed to loading it from an external file? I'm not entirely sure how C# works internally, so I just wanted to be aware of any performance issues I might encounter with this method.
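
    For comparison, the usual alternative is to ship the data as an external (or embedded) binary file and load it once on first use. A minimal sketch of that approach, assuming a simple two-field record and a hypothetical data.bin file produced by a build step:

        using System;
        using System.IO;

        struct Data                    // stand-in for the real record type
        {
            public int Key;
            public int Value;
        }

        static class PrecomputedData
        {
            private static Data[] cache;

            public static Data[] Items
            {
                get
                {
                    if (cache == null)
                        cache = Load("data.bin");   // hypothetical file name
                    return cache;
                }
            }

            private static Data[] Load(string path)
            {
                using (var reader = new BinaryReader(File.OpenRead(path)))
                {
                    int count = reader.ReadInt32();
                    var items = new Data[count];
                    for (int i = 0; i < count; i++)
                    {
                        items[i].Key = reader.ReadInt32();
                        items[i].Value = reader.ReadInt32();
                    }
                    return items;
                }
            }
        }

    At 90 KB the difference is likely to show up mainly in assembly size and first-access cost rather than steady-state performance, so timing both variants with Stopwatch before committing to one would be a reasonable way to decide.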

    Read the article

  • PHP include(): File size & performance

    - by Tom
    An inexperienced PHP question: I've got a PHP script file that I need to include on different pages, lots of times, in lots of places. I have the option of either breaking the included file down into several smaller files and including those on an as-needed basis... OR... I could just keep it all together in a single PHP file. I'm wondering if there's any performance impact of using a larger vs. a smaller file for include() in this context? For example, is there any performance difference between a 200 KB file and a 20 KB file? Thank you.

    Read the article

  • Python performance profiling (file close)

    - by user1853986
    First of all, thanks for your attention. My question is how to reduce the execution time of my code. Here is the relevant code; the function below is called repeatedly from main:

        def call_prism(prism_input_file, random_length):
            prism_output_file = "path.txt"
            cmd = "prism %s -simpath %d %s" % (prism_input_file, random_length, prism_output_file)
            p = os.popen(cmd)
            p.close()
            return prism_output_file

        def main(prism_input_file, number_of_strings):
            ...
            for n in range(number_of_strings):
                prism_output_file = call_prism(prism_input_file, z[n])
            ...
            return

    I used statistics from the "profile statistics browser" when I profiled my code. The "file close" system call took the most time (14.546 seconds). The call_prism routine is called 10 times, but number_of_strings is usually in the thousands, so my program takes a lot of time to complete. Let me know if you need more information. By the way, I tried subprocess too. Thanks.

    Read the article

  • “It’s only test code…”

    - by Chris George
    “Let me hack this in, it’s only test code”, “Don’t worry about getting it reviewed, it’s only test code”, “It doesn’t have to be elegant or efficient, it’s only test code”… do these phrases sound familiar? Chances are, if you’ve been working with test automation, at one point or other you will have heard these phrases; you have probably even used them yourself! What is certain is that code written under this “it’s only test code” mantra will come back and bite you in the arse! I’ve recently encountered a case where a test was giving a false positive, thereby hiding a real product bug, because that test code was very badly written. Firstly, it was very difficult to understand what the test was actually trying to achieve, let alone how it was doing it, and this complexity masked a simple logic error. These issues are real and they do happen.

    Let’s take a step back from this and look at what we are trying to do. We are writing test code that tests product code, and we do this to create a suite of tests that will help protect our software against regressions. This test code is making sure that the product behaves as it should by employing some sort of expected result verification. The simple cases of these are generally not a problem. However, automation allows us to explore more complex scenarios in many more permutations. As this complexity increases, so does the complexity of the test code. It is at this point that code which has not been architected properly will cause problems.

    Keep your friends close…
    So, how do we make sure we are doing it right? The development teams I have worked on have always had Test Engineers working very closely with their Software Engineers. This is something that I have always tried to take full advantage of. They are coding experts! So run your ideas past them, ask for advice on how to structure your code, and get their help designing your data structures. This may require a shift in your team's viewpoint, as, contrary to this section title and folklore, Software Engineers are not actually the mortal enemy of Test Engineers. As time progresses, and test automation becomes more and more ingrained in what we do, the two roles are converging more than ever. Over the 16 years I have spent as a Test Engineer, I have seen the grey area between the two roles grow significantly larger. This serves to strengthen the relationship and common bond between the two roles, which helps to make test code activities so much easier!

    Pair for the win
    Possibly the best thing you could do to write good test code is to pair program on the task. This will serve a few purposes:
    - you will get the benefit of the Software Engineer's knowledge and experience
    - the Software Engineer will gain knowledge of the testing process; sharing the love is a wonderful thing!
    - two pairs of eyes are always better than one… and so are two brains.
    Between the two of you, I will guarantee you will derive more useful test cases than if it was just one of you.

    Code reviews
    Another policy which certainly pays dividends is the practice of code reviews. Having one of your peers review your code before you commit it serves two purposes. Firstly, it forces you to explain your code; just the act of doing this will often pick up errors in your code. Secondly, it gets yet another pair of eyes on your code! I cannot stress enough how important code reviews are. The benefits they offer apply as much to product code as to test code. In short, Software and Test Engineers should all be doing them! It can be extended even further by getting test code reviewed by both a Software Engineer and a Test Engineer, and likewise product code. This serves to keep both functions in the loop with changes going on within your code base.

    Learn from your devs
    I briefly touched on this earlier, but I’d like to go into more detail here. Pairing with your Software Engineers when writing your test code is an amazing opportunity to improve your coding skills. As I sit here writing this article waiting to be called into court for jury service, it reminds me that it takes a lot of patience to be a Test Engineer, almost as much as it takes to be a juror! However tempting it is to go rushing in and start writing your automated tests, resist that urge. Discuss what you want to achieve, then talk through the approach you’re going to take. Then code it up together. I find it really enlightening to ask questions like ‘is there a better way to do this?’ or ‘is this how you would code it?’ The latter question, especially, is where I learn the most. I’ve found that most Software Engineers will be reluctant to show you the ‘right way’ to code something when writing tests because they perceive the ‘right way’ to be too complicated for the Test Engineer (e.g. not mentioning LINQ and instead doing something verbose). So by asking how THEY would code it, you unleash their true dev-ness and advanced code usually ensues! I would like to point out, however, that you don’t have to accept their method as the final answer. On numerous occasions I have opted for the simpler, more verbose solution because I found the code written by the Software Engineer too advanced, and therefore I would find it unreadable when I return to the code in a month's time! Always keep the target audience in mind when writing clever code, and in my case that is mostly Test Engineers.

    Read the article

  • What's the format of Real World Performance Day?

    - by william.hardie
    A question that has cropped up a lot of late is "what's the format of Real World Performance Day?" Not an unreasonable question, you might think. Sure enough, a quick check of the Independent Oracle User Group's website tells us a bit about the Real World Performance Day event, but there's no formal agenda. This was one of the questions I posed to Tom Kyte (one of the main presenters) in our recent podcast. Tom tells us that this isn't your traditional event where one speaker follows another with loads of slides. In fact, the Real World Performance Day features Tom and fellow Oracle performance experts - Andrew Holdsworth and Graham Wood - continuously on stage throughout the day. All three will be discussing database performance challenges and solutions from development, architectural design and management perspectives. There are going to be multi-terabyte demos on show, less of the traditional slides, and more interactive debate and discussion. Tune in and hear what else Tom has to say about this fairly unique event!

    Read the article

  • Analysing Group & Individual Member Performance - RUP

    - by user23871
    I am writing a report which requires an analysis of the performance of each individual team member. This is for a software development project developed using the Unified Process (UP). I was just wondering if there are any existing group and individual appraisal metrics in use, so I don't have to reinvent the wheel...

    EDIT: This is by no means correct, but something like:

        Individual Contribution (IC) = time spent (individual) / time spent (total)
        Performance = ? (should combine individual contribution (IC) with something else to give a measure of overall performance)

    Maybe I am talking complete hash, and I know it's generally really difficult to analyse performance with numbers, but are there any mathematicians out there who can lend a hand, or who know a somewhat more accurate method of analysing performance than arbitrary marking (e.g. 8 out of 10)?

    Read the article

  • Performance surprise with "as" and nullable types

    - by Jon Skeet
    I'm just revising chapter 4 of C# in Depth, which deals with nullable types, and I'm adding a section about using the "as" operator, which allows you to write:

        object o = ...;
        int? x = o as int?;
        if (x.HasValue)
        {
            ... // Use x.Value in here
        }

    I thought this was really neat, and that it could improve performance over the C# 1 equivalent, using "is" followed by a cast - after all, this way we only need to ask for dynamic type checking once, and then a simple value check. This appears not to be the case, however. I've included a sample test app below, which basically sums all the integers within an object array - but the array contains a lot of null references and string references as well as boxed integers. The benchmark measures the code you'd have to use in C# 1, the code using the "as" operator, and just for kicks a LINQ solution. To my astonishment, the C# 1 code is 20 times faster in this case - and even the LINQ code (which I'd have expected to be slower, given the iterators involved) beats the "as" code. Is the .NET implementation of isinst for nullable types just really slow? Is it the additional unbox.any that causes the problem? Is there another explanation for this? At the moment it feels like I'm going to have to include a warning against using this in performance sensitive situations...

    Results:

        Cast: 10000000 : 121
        As: 10000000 : 2211
        LINQ: 10000000 : 2143

    Code:

        using System;
        using System.Diagnostics;
        using System.Linq;

        class Test
        {
            const int Size = 30000000;

            static void Main()
            {
                object[] values = new object[Size];
                for (int i = 0; i < Size - 2; i += 3)
                {
                    values[i] = null;
                    values[i + 1] = "";
                    values[i + 2] = 1;
                }
                FindSumWithCast(values);
                FindSumWithAs(values);
                FindSumWithLinq(values);
            }

            static void FindSumWithCast(object[] values)
            {
                Stopwatch sw = Stopwatch.StartNew();
                int sum = 0;
                foreach (object o in values)
                {
                    if (o is int)
                    {
                        int x = (int) o;
                        sum += x;
                    }
                }
                sw.Stop();
                Console.WriteLine("Cast: {0} : {1}", sum, (long) sw.ElapsedMilliseconds);
            }

            static void FindSumWithAs(object[] values)
            {
                Stopwatch sw = Stopwatch.StartNew();
                int sum = 0;
                foreach (object o in values)
                {
                    int? x = o as int?;
                    if (x.HasValue)
                    {
                        sum += x.Value;
                    }
                }
                sw.Stop();
                Console.WriteLine("As: {0} : {1}", sum, (long) sw.ElapsedMilliseconds);
            }

            static void FindSumWithLinq(object[] values)
            {
                Stopwatch sw = Stopwatch.StartNew();
                int sum = values.OfType<int>().Sum();
                sw.Stop();
                Console.WriteLine("LINQ: {0} : {1}", sum, (long) sw.ElapsedMilliseconds);
            }
        }

    Read the article

  • When is someone else's code I use from the internet "mine"?

    - by robault
    I'm building a library from methods that I've found on the internet. Some are free to use or modify with no requirements, others say that if I leave a comment in the code it's okay to use, others say when I use the code I have to attribute the use of someone's code in my application (in the credits for my app I guess). What I've been doing is reorganizing classes, renaming methods, adding descriptions (code comments), renaming the parameters and names inside the methods to something meaningful, optimizing loops if applicable, changing return types, adding try/catch/throw blocks, adding parameter checks and cleaning up resources in the methods. For example; I didn't come up with the algorithm for blurring a Bitmap but I've taken the basic example of iterating through the pixels and turned it into a decent library method (applying the aforementioned modifications). I understand how to go about building it now myself but I didn't actually hit the keystrokes to make it and I couldn't have come up with it before learning from their example. What about code people get in answers on Stackoverflow or examples from Codeproject? At what point can I drop their requirements because at n% their code became mine? FWIW I intend on using the libraries to create products that I will sell.

    Read the article

  • Strange performance behaviour for 64 bit modulo operation

    - by codymanix
    The last three of these method calls take approximately double the time of the first four. The only difference is that their arguments no longer fit in an int. But should this matter? The parameter is declared as long, so it should use long for the calculation anyway. Does the modulo operation use another algorithm for numbers larger than int.MaxValue? I am using an AMD Athlon64 3200+, WinXP SP3 and VS2008.

        Stopwatch sw = new Stopwatch();
        TestLong(sw, int.MaxValue - 3l);
        TestLong(sw, int.MaxValue - 2l);
        TestLong(sw, int.MaxValue - 1l);
        TestLong(sw, int.MaxValue);
        TestLong(sw, int.MaxValue + 1l);
        TestLong(sw, int.MaxValue + 2l);
        TestLong(sw, int.MaxValue + 3l);
        Console.ReadLine();

        static void TestLong(Stopwatch sw, long num)
        {
            long n = 0;
            sw.Reset();
            sw.Start();
            for (long i = 3; i < 20000000; i++)
            {
                n += num % i;
            }
            sw.Stop();
            Console.WriteLine(sw.Elapsed);
        }

    EDIT: I now tried the same with C and the issue does not occur here; all modulo operations take the same time, in release and in debug mode, with and without optimizations turned on:

        #include "stdafx.h"
        #include "time.h"
        #include "limits.h"

        static void TestLong(long long num)
        {
            long long n = 0;
            clock_t t = clock();
            for (long long i = 3; i < 20000000LL * 100; i++)
            {
                n += num % i;
            }
            printf("%d - %lld\n", clock() - t, n);
        }

        int main()
        {
            printf("%i %i %i %i\n\n", sizeof(int), sizeof(long), sizeof(long long), sizeof(void*));
            TestLong(3);
            TestLong(10);
            TestLong(131);
            TestLong(INT_MAX - 1L);
            TestLong(UINT_MAX + 1LL);
            TestLong(INT_MAX + 1LL);
            TestLong(LLONG_MAX - 1LL);
            getchar();
            return 0;
        }

    EDIT2: Thanks for the great suggestions. I found that neither .NET nor C (in debug as well as in release mode) uses a single CPU instruction to calculate the remainder; instead they call a function that does it. In the C program I could get the name of it, which is "_allrem". It also displayed full source comments for this file, so I found the information that this algorithm special-cases 32-bit divisors rather than dividends, which was the case in the .NET application. I also found out that the performance of the C program really is only affected by the value of the divisor, not the dividend. Another test showed that the performance of the remainder function in the .NET program depends on both the dividend and the divisor. BTW: even simple additions of long long values are calculated by consecutive add and adc instructions. So even if my processor calls itself 64-bit, it really isn't :(

    EDIT3: I have now run the C app on a Windows 7 x64 edition, compiled with Visual Studio 2010. The funny thing is, the performance behaviour stays the same, although now (I checked the assembly source) true 64-bit instructions are used.

    Read the article

  • C# performance varying due to memory

    - by user1107474
    Hope this is a valid post here; it's a combination of C# issues and hardware. I am benchmarking our server because we have found problems with the performance of our quant library (written in C#). I have simulated the same performance issues with some simple C# code performing very heavy memory usage. The code below is in a function which is spawned from a threadpool, up to a maximum of 32 threads (because our server has 4 CPUs with 8 cores each). This is all on .NET 3.5. The problem is that we are getting wildly differing performance. I run the below function 1000 times. The average time taken for the code to run could be, say, 3.5s, but the fastest will only be 1.2s and the slowest will be 7s, for the exact same function! I have graphed the memory usage against the timings and there doesn't appear to be any correlation with the GC kicking in. One thing I did notice is that when running in a single thread the timings are identical and there is no wild deviation. I have also tested CPU-bound algorithms and the timings are identical too. This has made us wonder if the memory bus just cannot cope. I was wondering, could this be another .NET or C# problem, or is it something related to our hardware? Would this be the same experience if I had used C++ or Java? We are using 4x Intel X7550 with 32 GB RAM. Is there any way around this problem in general?

        Stopwatch watch = new Stopwatch();
        watch.Start();
        List<byte> list1 = new List<byte>();
        List<byte> list2 = new List<byte>();
        List<byte> list3 = new List<byte>();
        int Size1 = 10000000;
        int Size2 = 2 * Size1;
        int Size3 = Size1;
        for (int i = 0; i < Size1; i++)
        {
            list1.Add(57);
        }
        for (int i = 0; i < Size2; i = i + 2)
        {
            list2.Add(56);
        }
        for (int i = 0; i < Size3; i++)
        {
            byte temp = list1.ElementAt(i);
            byte temp2 = list2.ElementAt(i);
            list3.Add(temp);
            list2[i] = temp;
            list1[i] = temp2;
        }
        watch.Stop();

    (The code is just meant to stress the memory.) I would include the threadpool code, but we used a non-standard threadpool library. EDIT: I have reduced "Size1" to 100000, which basically doesn't use much memory, and I still get a lot of jitter. This suggests it's not the amount of memory being transferred, but the frequency of memory grabs?

    Read the article

  • Improving performance when pasting 2000 rows with validations

    - by Lohit
    I have N rows (which could be no fewer than 1000) on an Excel spreadsheet, and in this sheet our project has about 150 columns. Our application needs data to be copied (using normal Ctrl+C) and pasted (using Ctrl+V) from the Excel sheet onto our GUI sheet. Copy-pasting 1000 records takes around 5-6 seconds, which is okay for our requirement, but the problem is that we also need to make sure the data entered is valid. So we have to validate the data in each row, generate appropriate error messages and format the data as per the requirement, which means we need to parse and evaluate the data in each row at runtime. All the formatting rules and validations come from the back-end database, and we hold them in a data table (dtValidateAndFormatConditions). There are around 50 conditions. So you can see how slow this whole process becomes, since N x 150 x 50 operations are required to complete it. Initially it took approximately 2-3 minutes, but I have now reduced it to 20-30 seconds. However, I achieved that speed-up by writing an expression parser of my own, not by changing the algorithm. Is there any other way I can improve performance, for example by using divide and conquer or some other mechanism? Currently I am not really sure how to go about this. Here is what part of my code looks like:

        public virtual void ValidateAndFormatOnCopyPaste(DataTable DtCopied, int CurRow)
        {
            foreach (DataRow dRow in dtValidateAndFormatConditions.Rows)
            {
                string Condition = dRow["Condition"].ToString();
                string FormatValue = dRow["Value"].ToString();
                GetValidatedFormattedData(DtCopied, ref Condition, ref FormatValue, CurRow);
                Condition = Parse(Condition);
                dRow["Condition"] = Condition;
                FormatValue = Parse(FormatValue);
                dRow["Value"] = FormatValue;
            }
        }

    The above code gets called row-wise like this:

        public override void ValidateAndFormat(DataTable dtChangedRecords, CellRange cr)
        {
            int iRowStart = cr.Row, iRowEnd = cr.Row + cr.RowCount;
            for (int iRow = iRowStart; iRow < iRowEnd; iRow++)
            {
                ValidateAndFormatOnCopyPaste(dtChangedRecords, iRow);
            }
        }

    Please know my question needs a more algorithmic solution than code optimization; however, any answers containing code-related optimizations will be appreciated as well. (Tagged LINQ because, although not shown, I have been using LINQ in some parts of my code.)

    Read the article

  • AS 400 Performance from .Net iSeries Provider

    - by Nathan
    Hey all, first off, I am not an AS/400 guy - at all. So please forgive me for asking any noobish questions here. Basically, I am working on a .Net application that needs to access the AS400 for some real-time data. Although I have the system working, I am getting very different performance results between queries. Typically, when I make the first request against a SPROC on the AS400, I see ~14 seconds to get the full data set. After that initial call, any subsequent calls usually take only ~1 second to return. This performance improvement remains for ~20 minutes or so before it takes 14 seconds again. The interesting part is that if the stored procedure is executed directly in iSeries Navigator, it always returns within milliseconds (no change in response time). I wonder if it is a caching / execution plan issue, but I can only apply my SQL Server logic to the AS400, which is not always a match. Any suggestions on what I can do to receive a more consistent response time, or simply insight as to why the AS400 is acting in this manner when using the iSeries Data Provider for .Net? Is there a better access method that I should use? Just in case, here's the code I am using to connect to the AS400:

        Dim Conn As New IBM.Data.DB2.iSeries.iDB2Connection(ConnectionString)
        Dim Cmd As New IBM.Data.DB2.iSeries.iDB2Command("SPROC_NAME_HERE", Conn)
        Cmd.CommandType = CommandType.StoredProcedure

        Using Conn
            Conn.Open()
            Dim Reader = Cmd.ExecuteReader()
            Using Reader
                While Reader.Read()
                    'Do Something
                End While
                Reader.Close()
            End Using
            Conn.Close()
        End Using

    Read the article

  • MySQL performance - 100Mb ethernet vs 1Gb ethernet

    - by Rob Penridge
    Hi all, I've just started a new job and noticed that the analysts' computers are connected to the network at 100 Mbps. The ODBC queries we run against the MySQL server can easily return 500 MB+, and it seems that at times when the servers are under high load the DBAs kill low-priority jobs because they are taking too long to run. My question is this: how much of this server time is spent executing the request, and how much is spent returning the data to the client? Could the query speeds be improved by upgrading the network connections to 1 Gbps? (Updated for the why): The database in question was built to accommodate reporting needs and contains massive amounts of data. We usually work with subsets of this data at a granular level in external applications such as SAS or Excel, hence the large amounts of data being transmitted. The queries are not poorly structured; they are very simple and the appropriate joins/indexes etc. are being used. I've removed 'query' from the title of the post as I realised this question is more to do with general MySQL performance than query-related performance. I was kind of hoping that someone with a gigabit connection might be able to quantify some results for me here by running a query that returns a decent amount of data, then limiting their connection speed to 100 Mb and rerunning the same query, ideally in an environment where loads are reasonably stable so as not to skew the results. If Ethernet speed can improve the situation, I wanted some quantifiable results to help argue my case for upgrading the network connections. Thanks, Rob
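
    One rough way to separate server execution time from time spent shipping rows to the client is to time the first row separately from draining the full result set. Below is a sketch of that idea in C# with MySQL Connector/NET; the connection string, table name and query are made up, and because MySQL streams rows the first-row time only approximates execution time, but comparing the two numbers over 100 Mbps and 1 Gbps links would show how much of the wait is network-bound.

        using System;
        using System.Diagnostics;
        using MySql.Data.MySqlClient;   // MySQL Connector/NET

        class QueryTiming
        {
            static void Main()
            {
                // Hypothetical connection string and query, purely to illustrate the measurement.
                using (var conn = new MySqlConnection("server=dbhost;database=reports;uid=analyst;pwd=secret"))
                {
                    conn.Open();
                    var cmd = new MySqlCommand("SELECT SQL_NO_CACHE * FROM big_report_table", conn);

                    var sw = Stopwatch.StartNew();
                    using (var reader = cmd.ExecuteReader())
                    {
                        reader.Read();                            // first row has arrived
                        long firstRowMs = sw.ElapsedMilliseconds;

                        while (reader.Read()) { }                 // drain the rest over the wire
                        long totalMs = sw.ElapsedMilliseconds;

                        Console.WriteLine("Time to first row:      {0} ms", firstRowMs);
                        Console.WriteLine("Time to drain the rest: {0} ms", totalMs - firstRowMs);
                    }
                }
            }
        }

    If the drain time dominates and scales with result size, the 1 Gbps upgrade has a case; if the first-row time dominates, the bottleneck is on the server side.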

    Read the article
