Search Results

Search found 31421 results on 1257 pages for 'software performance'.

Page 885/1257 | < Previous Page | 881 882 883 884 885 886 887 888 889 890 891 892  | Next Page >

  • Why hasn't anybody started a hosted continuous integration service?

    - by Teflon Ted
    There are a dozen services that provide hosted version control, hosted ticket tracking, hosted project management, and combinations of all of the above; there are even hosted web-based IDEs. But nobody has yet offered a hosted continuous integration service, at least none that I can find. The concept seems simple enough: I register and provide the URL to my source code repository, it grabs my code and builds it via ant/rake/whatever, then runs the suite of tests and some metrics (code coverage, performance, etc.). Is there some prohibitive barrier to entry I'm not considering?

    Read the article

  • Storing a huge number of records in the classic ASP cache object is SLOW

    - by aspm
    We have some nasty legacy ASP that is performing like a dog, and I narrowed it down to the fact that we are trying to store 15K+ records in the application cache object. But that's not the killer: before storing it, the code converts the ADO stream to XML and then caches that. This conversion of the huge recordset to XML spikes the CPU and causes all kinds of havoc for users while it's happening. Unfortunately we also do this XML conversion a lot when reading the cache, causing site-wide performance problems. I don't have the resources to convert everything to .NET, so that's out. I obviously still need caching, but in this case the caching is hurting instead of helping. Is there a more efficient way to store this data instead of doing this XML conversion to/from the cache every time we read or update it?

    Read the article

  • [Symfony] Admin generator and i18n

    - by David
    I have read lots of questions about i18n, but I haven't found any about performance. I have a simple backend app listing the contents of an ads table. These ads have a category, which is translated into several languages (it's defined as i18n in the Doctrine schema). When I add a "table_method" in my generator.yml to include the Category table, it reduces the number of queries, but some of them still reference the i18n translation tables. If I also add the category Translation table to the query, it reduces the queries even further BUT it increases the processing time considerably. Why this time penalty? Just because of the translation table? And why isn't the filter using this method to avoid so many translation queries as well? I mean, if I want to filter by category, it is making one query per category to include the translation table. Why??

    Read the article

  • CakePHP repeats same queries

    - by Rytis
    I have a model structure: Category hasMany Product hasMany Stockitem belongsTo Warehouse, Manufacturer. I fetch data with this code, using containable to be able to filter deeper in the associated models: $this->Category->find('all', array( 'conditions' => array('Category.id' => $category_id), 'contain' => array( 'Product' => array( 'Stockitem' => array( 'conditions' => array('Stockitem.warehouse_id' => $warehouse_id), 'Warehouse', 'Manufacturer', ) ) ), ) ); The data structure is returned just fine; however, I get multiple repeated queries like the following, sometimes hundreds in a row, depending on the dataset: SELECT `Warehouse`.`id`, `Warehouse`.`title` FROM `beta_warehouses` AS `Warehouse` WHERE `Warehouse`.`id` = 2 Basically, when building the data structure Cake fetches data from MySQL over and over again, for each row. We have datasets of several thousand rows, and I have a feeling this is going to impact performance. Is it possible to make it cache results and not repeat the same queries?

    Read the article

  • What FIX implementation do you recommend for use with .NET

    - by Ajaxx
    I am reviewing implementation choices for FIX when using .NET. A few obvious choices come to mind, but I want to know if there are other options, better choices, or whether we've made the same decision as a lot of you. QuickFIX - stable C++ implementation, so you've got unmanaged code to interop with. FIX4NET - C# implementation; seems to have some gaps in its implementation. DIY - chime in here if you've made your own FIX engine. Let me throw in some caveats here. I'm not looking for sub-100-microsecond processing. Performance is a requirement, but not so much that it's driving my decisions. A solid product that is stable, performs well, and is flexible enough to deal with vendor-specific dialects is the sweet spot. The more we can do in .NET the better.

    Read the article

  • Is select() OK for implementing a single-socket read/write timeout?

    - by chmike
    I have an application processing network communication with blocking calls. Each thread manages a single connection. I've added a timeout on the read and write operations by calling select prior to the read or write on the socket. select is known to be inefficient when dealing with a large number of sockets, but is it OK, in terms of performance, to use it with a single socket, or are there more efficient ways to add timeout support to single-socket calls? The benefit of select is that it is portable.
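
    For comparison, a per-socket timeout can also be set directly on the socket (SO_RCVTIMEO/SO_SNDTIMEO), avoiding the extra select call entirely. The sketch below is a hedged illustration in C# (the question is about native sockets, where setsockopt with the same options plays this role); the class name and timeout value are illustrative:

        // Sketch: rely on the socket's own receive/send timeout instead of select().
        // In C/C++ the equivalent is setsockopt() with SO_RCVTIMEO / SO_SNDTIMEO.
        using System.Net.Sockets;

        static class BlockingSocketWithTimeout
        {
            // Returns the number of bytes read, or -1 if the timeout expired.
            public static int ReceiveWithTimeout(Socket socket, byte[] buffer, int timeoutMs)
            {
                socket.ReceiveTimeout = timeoutMs; // maps to SO_RCVTIMEO
                socket.SendTimeout = timeoutMs;    // maps to SO_SNDTIMEO
                try
                {
                    return socket.Receive(buffer); // blocks for at most timeoutMs
                }
                catch (SocketException ex)
                {
                    if (ex.SocketErrorCode == SocketError.TimedOut)
                        return -1;
                    throw;
                }
            }
        }

    Either approach is fine for one socket per thread; select only starts to matter when a single thread multiplexes many sockets.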

    Read the article

  • How do I calculate a good hash code for a list of strings?

    - by Ian Ringrose
    Background: I have a short list of strings. The number of strings is not always the same, but it is nearly always on the order of a "handful". Our database will store these strings in a second, normalised table. These strings are never changed once they are written to the database. We wish to be able to match on these strings quickly in a query without the performance hit of doing lots of joins. So I am thinking of storing a hash code of all these strings in the main table and including it in our index, so the joins are only processed by the database when the hash code matches. So how do I get a good hash code? I could: XOR the hash codes of all the strings together; XOR and then multiply the result after each string (say by 31); concatenate all the strings and then take the hash code of the result; or something else. So what do people think? (If you care, we are using .NET and SQL Server.)
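
    Since the question mentions .NET, here is a minimal sketch of an order-sensitive combiner. It deliberately avoids string.GetHashCode(), whose value is not guaranteed to be stable across framework versions or processes, which matters for a hash persisted in a table; the FNV-1a constants and the separator byte are standard choices, and the class name is illustrative:

        using System.Collections.Generic;

        static class StringListHash
        {
            // 32-bit FNV-1a over each string's characters, combined in order.
            public static int Combine(IEnumerable<string> strings)
            {
                unchecked
                {
                    const uint fnvPrime = 16777619;
                    uint hash = 2166136261;
                    foreach (string s in strings)
                    {
                        foreach (char c in s ?? string.Empty)
                        {
                            hash = (hash ^ c) * fnvPrime;
                        }
                        hash = (hash ^ 0x1F) * fnvPrime; // separator so {"ab","c"} != {"a","bc"}
                    }
                    return (int)hash;
                }
            }
        }

    A plain XOR of individual hash codes is order-insensitive and cancels out duplicate strings; if order genuinely should not matter, sort the strings before combining instead.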

    Read the article

  • Detect numbers and process them?

    - by Madhup
    Hi, I am trying to detect the numbers written on a grid and then process them using the iPhone camera. What I have found so far are some good examples like: http://blog.damiles.com/?p=93 http://cmgresearch.blogspot.com/2010/01/augmented-reality-on-iphone-how-to_01.html I am able to draw the numbers on the overlay view reasonably well, but I am still not able to detect what those numbers are. What I don't want to do is go through the whole AI process: training the system, providing it with a whole set of values and then processing them, because that is troublesome for me as well as for the performance of my application. If anyone has an idea or workaround for this, please help. Thanks, Madhup

    Read the article

  • Which is the best API/Library to use when accessing a WebCam in .Net?

    - by Doctor Jones
    Which is the best API to use when accessing a webcam in .NET? (I know they can be webcam-specific; I am willing to buy a new webcam if it means better results.) I want to write a desktop application that will take video from a webcam and store it in MPEG-4 formats (DivX, Xvid, etc...). I would also like to access bitmap stills from the device so I can do image comparison between frames. I have tried various libraries, and none have really been a great fit (some have performance issues such as very inconsistent framerates, some have image quality limitations, some just crash out for seemingly no reason). I want to get high quality video (as high as I can get) and a decent framerate. My webcam is more than up to the job and I was hoping that there would be a nice managed .NET library around that would help my cause. Are webcam APIs all just incredibly bad?

    Read the article

  • (NOT) NULL for NVARCHAR columns

    - by Anders Abel
    Allowing NULL values on a column is normally done to allow the absence of a value to be represented. When using NVARCHAR there is already a way to have an empty string without setting the column to NULL. In most cases I cannot see a semantic difference between an NVARCHAR with an empty string and a NULL value for such a column. Setting the column as NOT NULL saves me from having to deal with the possibility of NULL values in the code, and it feels better not to have two different representations of "no value" (NULL or an empty string). Will I run into any other problems by setting my NVARCHAR columns to NOT NULL? Performance? Storage size? Anything I've overlooked about the usage of the values in the client code?
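
    The "dealing with NULL in the code" part is mostly an extra branch on every read, since a NULL NVARCHAR comes back from ADO.NET as DBNull.Value. A hedged C# sketch of the difference (the reader and column ordinal are illustrative):

        using System.Data.SqlClient;

        static class NameColumn
        {
            // Nullable NVARCHAR: every read needs a DBNull check.
            public static string ReadNullable(SqlDataReader reader, int ordinal)
            {
                return reader.IsDBNull(ordinal) ? string.Empty : reader.GetString(ordinal);
            }

            // NOT NULL with empty string as "no value": the branch disappears.
            public static string ReadNotNull(SqlDataReader reader, int ordinal)
            {
                return reader.GetString(ordinal);
            }
        }

    Storage and performance differences are minor either way; the main thing to double-check is that no existing query relies on distinguishing NULL from '' (for example IS NULL filters).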

    Read the article

  • When to use Hibernate?

    - by Ramo
    Hi all, I was asked this question in an interview, and I answered with the following: better performance (efficient queries; first- and second-level caching; good caching gives better scalability), good database portability (changing the DB is as easy as changing the dialect configuration), and increased developer productivity (think only in object terms, not in query-language terms). But I also feel that systems fall into one of the categories below, and Hibernate may not be suited to all of them. I'm interested in your thoughts about this: do you agree with me? Please let me know when you would use Hibernate in each of the following cases and why: write-only systems, read-only systems, write-mostly systems, read-mostly systems. Regards, Ramo

    Read the article

  • Measuring the time to create and destroy a simple object

    - by portoalet
    From Effective Java 2nd Edition Item 7: Avoid Finalizers "Oh, and one more thing: there is a severe performance penalty for using finalizers. On my machine, the time to create and destroy a simple object is about 5.6 ns. Adding a finalizer increases the time to 2,400 ns. In other words, it is about 430 times slower to create and destroy objects with finalizers." How can one measure the time to create and destroy an object? Do you just do: long start = System.nanoTime(); SimpleObject simpleObj = new SimpleObject(); simpleObj.finalize(); long end = System.nanoTime(); long time = end - start;
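
    Calling finalize() by hand only times an ordinary method call, not collection of the object; a realistic measurement amortizes over many allocations and lets the collector and finalizer thread run. As a hedged illustration of that methodology (written in C# to keep one language across the sketches on this page; in Java the same shape works with System.nanoTime() and System.gc()):

        using System;
        using System.Diagnostics;

        class Plain { }

        class WithFinalizer
        {
            ~WithFinalizer() { } // empty finalizer, present only to measure its overhead
        }

        static class FinalizerCost
        {
            // Amortize over many allocations and force collection/finalization,
            // rather than timing a single 'new' or invoking the finalizer directly.
            static double AverageNanoseconds(Func<object> factory, int count)
            {
                GC.Collect();
                GC.WaitForPendingFinalizers();
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < count; i++) factory();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                sw.Stop();
                return sw.Elapsed.TotalMilliseconds * 1000000.0 / count;
            }

            static void Main()
            {
                const int n = 1000000;
                Console.WriteLine("plain:     {0:F1} ns", AverageNanoseconds(() => new Plain(), n));
                Console.WriteLine("finalizer: {0:F1} ns", AverageNanoseconds(() => new WithFinalizer(), n));
            }
        }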

    Read the article

  • Stored Procedure or calculations via IQueryable?

    - by Shawn Mclean
    This is a question about choosing performance over design practices. I have a method that will be executed many times a second: public static IQueryable<IPerson> InRadius(this IQueryable<IPerson> query, Coordinate center, double radius) { return (from u in query where CallHeavyMathFormula(u, center, radius) select u); } This extension method for IQueryable generates SQL that does some heavy math (cosine, sine, etc.), which means the application sends 1-2KB of SQL to the server per call. I've heard the advice to keep all application logic in your application. I would also like to move to a database such as Azure or another scalable database in the future. How do I handle something like this? Should I leave it as it is now or write stored procedures? How do applications like Twitter or Facebook do it?
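
    If the stored-procedure route wins, the client-side call is small and only the parameters cross the wire instead of 1-2KB of generated SQL. A minimal sketch, assuming a hypothetical dbo.PeopleInRadius procedure that returns person ids (the procedure name, parameters and connection handling are illustrative):

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        static class RadiusSearch
        {
            // The heavy trig stays on the server; only three parameters are sent per call.
            public static List<int> PersonIdsInRadius(string connectionString,
                                                      double lat, double lng, double radius)
            {
                var ids = new List<int>();
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("dbo.PeopleInRadius", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@lat", lat);
                    cmd.Parameters.AddWithValue("@lng", lng);
                    cmd.Parameters.AddWithValue("@radius", radius);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            ids.Add(reader.GetInt32(0));
                        }
                    }
                }
                return ids;
            }
        }

    The trade-off described in the question still stands: the math moves into the database, against the "all logic in the application" advice, but far less text is sent per call.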

    Read the article

  • iPhone. How to intercept system dialogs?

    - by Sjakelien
    My app offers the user the opportunity to put an event in his native calendar. For that, I refer to an online webcal:// URL. Since the underlying .ics file is quite big (containing quite a few events), it sometimes takes a while (also depending on network performance) before the "Do you want to subscribe" dialog sequence kicks in. I would like to give the user some feedback in the meantime, like a spinner or a changing graphic, so he knows that something is going to happen. Question: how does my app know that the "Do you want to subscribe" dialog has been shown, and that the user has chosen either the Cancel or OK button in that dialog, so I can stop the spinner?

    Read the article

  • asynchronous calls in asp.net

    - by lockedscope
    In this sample, two threads are created: a worker thread created by BeginInvoke and an I/O completion thread created by the SendAsync method. But another author, in his UnsafeQueueNativeOverlapped example, doesn't recommend this. I want to use SendAsync (or some other ...Async method) in an ASP.NET page, and I want to use PageAsyncTask. However, its BeginEventHandler requires an IAsyncResult to be returned, which SendAsync does not provide. As far as I know, the event-based asynchronous pattern is the recommended way, so how can we call SendAsync or any ...Async method without creating two threads and hurting performance?
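
    One way to bridge the two patterns without spinning up a worker thread is to complete a Task from the ...Completed event and hand that Task to PageAsyncTask as its IAsyncResult (Task implements IAsyncResult). A hedged sketch, assuming .NET 4's TaskCompletionSource, SmtpClient.SendAsync as the event-based method (the question doesn't say which one is involved), a page marked Async="true", and illustrative addresses:

        using System;
        using System.Net.Mail;
        using System.Threading.Tasks;
        using System.Web.UI;

        public partial class SendPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                RegisterAsyncTask(new PageAsyncTask(BeginSend, EndSend, null, null));
            }

            // The completion event fulfils a TaskCompletionSource; the resulting Task is
            // the IAsyncResult that PageAsyncTask's Begin/End handlers expect.
            private IAsyncResult BeginSend(object sender, EventArgs e, AsyncCallback cb, object state)
            {
                var tcs = new TaskCompletionSource<bool>(state);
                var client = new SmtpClient("smtp.example.com"); // illustrative host
                client.SendCompleted += (s, args) =>
                {
                    if (args.Error != null) tcs.TrySetException(args.Error);
                    else if (args.Cancelled) tcs.TrySetCanceled();
                    else tcs.TrySetResult(true);
                    if (cb != null) cb(tcs.Task);
                };
                client.SendAsync(new MailMessage("from@example.com", "to@example.com",
                                                 "subject", "body"), null);
                return tcs.Task;
            }

            private void EndSend(IAsyncResult ar)
            {
                ((Task<bool>)ar).Wait(); // surfaces any send failure to the page
            }
        }

    No extra thread is created here: the I/O completion callback raises SendCompleted, which completes the page's async task directly. On earlier frameworks a hand-rolled IAsyncResult plays the same bridging role.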

    Read the article

  • "Single NSMutableArray" vs. "Multiple C-arrays" --Which is more Efficient/Practical?

    - by RexOnRoids
    Situation: I have a DAY structure. The DAY structure has three variables or attributes: a Date (NSString*), a Temperature (float), and a Rainfall (float). Problem: I will be iterating through an array of about 5000 DAY structures and graphing a portion of these onto the screen using OpenGL. Question: As far as drawing performance goes, which is better? I could simply create an NSMutableArray of DAY structures (NSObjects) and iterate over the array on each draw call -- which I think would be hard on the CPU. Or, I could instead manually manage three different C arrays -- one for the date strings (2-dimensional), one for the temperatures (1-dimensional) and one for the rainfall (1-dimensional). I could keep track of the current day by referencing the current index of the iterated C arrays.

    Read the article

  • Are both approaches the same?

    - by Harikrishna
    I have a DataTable that is not data-bound; records are parsed from a file into the DataTable dynamically every time. There are three columns in the DataTable: Marks1, Marks2 and FinalMarks, all of type decimal. To add the Marks1 and Marks2 values and store the result in the FinalMarks column, what I do is: datatableResult.Columns["FinalMarks"].Expression="Marks1+Marks2"; It works properly. It can also be done another way: foreach (DataRow r in datatableResult.Rows) { r["FinalMarks"]=Convert.ToDecimal(r["Marks1"])+Convert.ToDecimal(r["Marks2"]); } Now I don't know which is the best way to do this performance-wise. Does the first approach do the same thing as the second behind the scenes -- are both approaches the same, or not?
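
    They are not quite equivalent: setting Expression turns FinalMarks into a computed, read-only column that is re-evaluated automatically whenever Marks1 or Marks2 changes, while the loop writes plain values once. The performance side can be measured directly; a hedged sketch (row count and sample values are made up):

        using System;
        using System.Data;
        using System.Diagnostics;

        static class MarksBenchmark
        {
            static DataTable BuildTable(int rows)
            {
                var t = new DataTable();
                t.Columns.Add("Marks1", typeof(decimal));
                t.Columns.Add("Marks2", typeof(decimal));
                t.Columns.Add("FinalMarks", typeof(decimal));
                for (int i = 0; i < rows; i++)
                    t.Rows.Add((decimal)(i % 100), (decimal)(i % 50), 0m);
                return t;
            }

            static void Main()
            {
                var byExpression = BuildTable(100000);
                var sw = Stopwatch.StartNew();
                byExpression.Columns["FinalMarks"].Expression = "Marks1+Marks2";
                Console.WriteLine("Expression: {0} ms", sw.ElapsedMilliseconds);

                var byLoop = BuildTable(100000);
                sw.Restart();
                foreach (DataRow r in byLoop.Rows)
                    r["FinalMarks"] = (decimal)r["Marks1"] + (decimal)r["Marks2"];
                Console.WriteLine("Loop:       {0} ms", sw.ElapsedMilliseconds);
            }
        }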

    Read the article

  • How to synchronize a static method in Java?

    - by Summer_More_More_Tea
    Hi there: I came up with this question when implementing the singleton pattern in Java. The example listed below is not my real code, but it is very similar to the original. public class ConnectionFactory{ private static ConnectionFactory instance; public static synchronized ConnectionFactory getInstance(){ if( instance == null ){ instance = new ConnectionFactory(); } return instance; } private ConnectionFactory(){ // private constructor implementation } } Because I'm not quite sure about the behavior of a static synchronized method, I got some advice from Google -- have as few static synchronized methods in the same class as possible. I guess that when a static synchronized method runs, a lock belonging to the Class object is used, so multiple static synchronized methods may degrade the performance of the system. Am I right? Or does the JVM use another mechanism to implement static synchronized methods? What's the best practice if I have to implement multiple static synchronized methods in a class? Thank you all! Kind regards!

    Read the article

  • Can I tell Borland C++ Builder to copy a file somewhere else after it is built?

    - by MrVimes
    I have two computers. One is intended to be left 'free' for high-performance activities (such as playing games). The other is my 'all purpose' computer where I install all the apps I use for creating things, and so on. On the second computer I use CodeGear C++ Builder to work on an app that I use on the first computer. If I have BCB build directly onto computer 1 it is hopeless: it becomes unresponsive. It compiles locally very quickly. So what I do is compile locally and then copy the exe to the other machine. Well, I'm all for streamlining processes, so I want a way to compile on PC2 and use the result on PC1 without any intermediate steps. So is it possible to have BCB do the compiling on PC2, create a local exe file, and then copy the file to PC1?

    Read the article

  • Large static arrays are slowing down class load, need a better/faster lookup method

    - by Visualize
    I have a class with a couple static arrays: an int[] with 17,720 elements a string[] with 17,720 elements I noticed when I first access this class it takes almost 2 seconds to initialize, which causes a pause in the GUI that's accessing it. Specifically, it's a lookup for Unicode character names. The first array is an index into the second array. static readonly int[] NAME_INDEX = { 0x0000, 0x0001, 0x0005, 0x002C, 0x003B, ... static readonly string[] NAMES = { "Exclamation Mark", "Digit Three", "Semicolon", "Question Mark", ... The following code is how the arrays are used (given a character code). [Note: This code isn't a performance problem] int nameIndex = Array.BinarySearch<int>(NAME_INDEX, code); if (nameIndex > 0) { return NAMES[nameIndex]; } I guess I'm looking at other options on how to structure the data so that 1) The class is quickly loaded, and 2) I can quickly get the "name" for a given character code. Should I not be storing all these thousands of elements in static arrays?
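
    One option that keeps the arrays but moves their cost off the type initializer is lazy initialization from a compact embedded resource, so the first GUI access no longer pays for 17,720 array literals. A minimal sketch, assuming .NET 4's Lazy<T> and a hypothetical embedded resource named "UnicodeNames.txt" with one "hex-code<TAB>name" pair per line:

        using System;
        using System.Collections.Generic;
        using System.IO;

        static class UnicodeNames
        {
            // Built on first use instead of in the type initializer, so merely touching
            // the class no longer triggers the ~2 second initialization up front.
            private static readonly Lazy<Tuple<int[], string[]>> tables =
                new Lazy<Tuple<int[], string[]>>(LoadTables, true);

            private static Tuple<int[], string[]> LoadTables()
            {
                // Hypothetical resource: parse compact data at first use rather than
                // compile thousands of array-literal elements into the class.
                var index = new List<int>();
                var names = new List<string>();
                using (var stream = typeof(UnicodeNames).Assembly
                           .GetManifestResourceStream("UnicodeNames.txt"))
                using (var reader = new StreamReader(stream))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        var parts = line.Split('\t');
                        index.Add(Convert.ToInt32(parts[0], 16));
                        names.Add(parts[1]);
                    }
                }
                return Tuple.Create(index.ToArray(), names.ToArray());
            }

            public static string NameOf(int code)
            {
                int i = Array.BinarySearch(tables.Value.Item1, code);
                return i >= 0 ? tables.Value.Item2[i] : null;
            }
        }

    Even if the work is not deferred, loading the data from a resource in a static constructor avoids initializing the huge literal arrays; Lazy<T> additionally postpones that work until a name is actually requested, which keeps the GUI responsive on first access.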

    Read the article

  • iPhone, Core Data: does NSManagedObject use lazy loading when it is created?

    - by Robin
    Hi all, I use Core Data in my app. I have defined a class much like the following: @interface Master : NSManagedObject { } @property (nonatomic, retain) NSSet* Details; .... The entity Master contains a property 'Details' that relates to another table; this is a typical master-details relationship. I traced the app, and I found that the 'Details' property value was constructed even though it was never accessed. I thought Core Data 'should' use some lazy-loading mechanism to improve performance, or maybe I missed some configuration step? Because the Master entity contains at least five 'child' table properties, I have to consider this problem before using Core Data. Any help? Thanks for your time!

    Read the article

  • Is there any advantage to having more than 16GB of RAM on a Windows dev machine?

    - by Robert Kozak
    Assuming a machine (dual quad-core Xeon (2.26GHz) with 24GB RAM) running Windows Server 2008 and Hyper-V, how many VMs can I expect to run at the same time with good performance? Is this overkill? Can you really have too much RAM? Assuming 2GB per VM, that's around 16GB for the VMs with 8GB left over for the main OS and Hyper-V. Sound about right? Edit: Tried to make the question sound less like bragging. Was never my intention. It's a hard question to write.

    Read the article

  • Reducing template bloat with inheritance

    - by benoitj
    Does anyone have experience reducing template code bloat using inheritance? I am hesitant to rewrite our containers this way: class vectorBase { public: int size(); void clear(); int m_size; void *m_rawData; //.... }; template< typename T > class vector : public vectorBase { void push_back( const T& ); //... }; I need to keep maximum performance while reducing compile time. I'm also wondering why STL implementations do not use this approach. Thanks for your feedback

    Read the article

  • dotTrace can't create deploy folder on Windows Azure Web Sites (WAWS)

    - by Orhan Maden
    I get the error message 'Can't create deploy folder.' when I try to profile a remote website on WAWS. Actions taken: Downloaded and installed the dotTrace Profiler 5.5 from the JetBrains website. Downloaded dotTrace.Performance.Remote version 5.5.0 from NuGet. Published the website to WAWS via Visual Studio 2013. Started the dotTrace application as Administrator. Connected to the remote _https://subdomain.azurwebsites.net/AgentService.asmx. See image: http://1drv.ms/1nF5Cyh Selected the w3wp process and pressed Run. Got the error message 'Can't create deploy folder'. See image: http://1drv.ms/U5h35A I'm running dotTrace in trial mode at the moment. Swift help is much appreciated. Orhan

    Read the article

  • Is there a library / tool to query MySQL data files (MyISAM / InnoDB) without the server? (the SQLit

    - by MGW
    Oftentimes I want to query my MySQL data directly without a server running or without having access to the server (but having read / write rights to the files). Is there a tool or maybe even a library around to query MySQL data files like it is possible with SQLite? I'm specifically looking for InnoDB and MyISAM support. Performance is not a factor. I don't have any knowledge about MySQL internals, but I presume it should be possible to do and not too hard to get the specific code out? Thank you for any suggestions!

    Read the article
