Search Results

Search found 13249 results on 530 pages for 'performance tuning'.


  • Technology and language for stable Digital Audio Workstation development

    - by Kill KRT
    Hi, I'm designing a cross-platform (Windows/Linux/OS X) application, something like a digital audio workstation. I'd like to create software where users have a fully featured sequencer (multiple tracks with automation) and where it is possible to create instruments using a visual language (as in Pure Data/Max MSP).

    Ehm... I know that I've already posted a question about a related issue, but in order to decide which technology I should use, I think I'd better do more investigation. I'm quite an experienced user of audio trackers (Renoise, Protracker, ...) and sequencers (FL Studio, Cubase 5), but I have never tried to develop even a basic audio tracker. I know just the basic theory of mixing sound and how a DSP basically works.

    My questions are:

    - Where can I find a good tutorial/guide/book about this issue?
    - Do you think using C# (with NAudio) could dramatically reduce performance?

    I know C++ would be the best choice, but I find C# so elegant and easy to build and port, while C++ is so powerful and fast, but there are too many #defines and other things not to my taste! ;-) Thank you.


  • SQL Server full text query across multiple tables - why so slow?

    - by Mikey Cee
    Hi. I'm trying to understand the performance of a SQL Server 2008 full-text query I am constructing. The following query, using a full-text index, returns the correct results immediately:

        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        WHERE FREETEXT(O.Name, 'query')

    i.e. all EventOccurrences with the word 'query' in their name. And the following query, using a full-text index on a different table, also returns straight away:

        SELECT V.ID, V.Name
        FROM dbo.Venue V
        WHERE FREETEXT(V.Name, 'query')

    i.e. all Venues with the word 'query' in their name. But if I try to join the tables and run both full-text queries at once, it takes 12 seconds to return:

        SELECT O.ID, O.Name
        FROM dbo.EventOccurrence O
        INNER JOIN dbo.Event E ON O.EventID = E.ID
        INNER JOIN dbo.Venue V ON E.VenueID = V.ID
        WHERE FREETEXT(E.Name, 'search') OR FREETEXT(V.Name, 'search')

    Here is the execution plan: http://uploadpad.com/files/query.PNG

    From my reading, I didn't think it was even possible to make a full-text query across multiple tables in this way, so I'm not sure I am understanding it correctly. Note that if I remove the WHERE clause from this last query, it returns all results within a second, so it's definitely the full-text part that is causing the issue here. Can someone explain (i) why this is so slow and (ii) whether this is even supported / whether I am understanding this correctly? Thanks in advance for your help.


  • Events and references pattern

    - by serhio
    In a project I have the following relation between BO (business objects) and GUI objects. For example, G could represent a graphic with time lines, C a time-line curve, P the points of that curve and T the time that each point represents. Each GUI object is associated with its corresponding BO. When T changes, the GUI P captures the Changed event and updates its location. So when G is modified, it modifies its objects internally; as a result T changes, P moves and the GUI G visually changes. Everything is OK.

    But there is an inconvenience in this architecture: a BO must not be recreated, because that would break the link between the BO and the GUI object. In particular, GUI P should always hold the same reference to T. If in the business logic I do, e.g.,

        P1.T = new T(this.T + 10)

    GUI_P1 will not move any more, because it waits for an event from the former P1.T object, which no longer belongs to P1. So the solution was to always modify the existing objects, never recreate them.

    But that brings another inconvenience: performance. Say I have a new C object ready that should replace the old one. Instead of doing G1.C = newC, I have to iterate: for each P in C, for each T in P, replace it with the T from the corresponding P of newC. Is there a more optimal way to do this?
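
    A common way out of this dilemma is to let the setter itself rewire the event subscription: the GUI object unsubscribes from the old model and subscribes to the replacement, so business code is free to assign fresh instances. A minimal sketch in Java (all class and method names here are hypothetical, modeled on the question):

        import java.beans.PropertyChangeListener;
        import java.beans.PropertyChangeSupport;

        // Sketch: T fires change events; GuiP rewires itself when its T is swapped.
        class T {
            private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
            private double time;

            void addListener(PropertyChangeListener l)    { pcs.addPropertyChangeListener(l); }
            void removeListener(PropertyChangeListener l) { pcs.removePropertyChangeListener(l); }

            void setTime(double newTime) {
                double old = this.time;
                this.time = newTime;
                pcs.firePropertyChange("time", old, newTime); // the GUI reacts to this
            }
        }

        class GuiP {
            private T model;
            private final PropertyChangeListener onChange = evt -> relocate();

            void bind(T newModel) {
                if (model != null) model.removeListener(onChange); // drop the old reference
                model = newModel;
                model.addListener(onChange);                       // follow the new one
                relocate();
            }

            private void relocate() { /* move the point on screen */ }
        }

    With this pattern, replacing G1.C wholesale costs one rebind per GUI object instead of a deep element-by-element copy.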


  • How to make std::vector's operator[] compile doing bounds checking in DEBUG but not in RELEASE

    - by Edison Gustavo Muenz
    I'm using Visual Studio 2008. I'm aware that std::vector has bounds checking via the at() function, and undefined behaviour if you access something out of range through operator[]. I'm curious whether it's possible to compile my program with bounds checking enabled, so that operator[] behaves like at() and throws std::out_of_range whenever something is out of bounds, while the release build is compiled without bounds checking for operator[], so performance doesn't degrade.

    I came to think about this because I'm migrating an app written in Borland C++ to Visual Studio, and in a small part of the code I have this (with i=0, j=1):

        v[i][j]; // v is a std::vector<std::vector<int> >

    The size of the vector v is [0][1] (element 0 of the vector has only one element). This is undefined behaviour, I know, but Borland returns 0 here while VS crashes. I like the crash better than silently returning 0, so if I can get more 'crashes' via a thrown std::out_of_range, the migration will be completed faster (it will expose more of the bugs that Borland was hiding).


  • Field Members vs Method Variables?

    - by Braveyard
    Recently I've been thinking about the performance difference between class field members and method variables. What exactly I mean is shown in the example below. Let's say we have a DataContext object for LINQ to SQL:

        class DataLayer
        {
            ProductDataContext context = new ProductDataContext();

            public IQueryable<Product> GetData()
            {
                return context.Where(t => t.ProductId == 2);
            }
        }

    In the example above, context will be stored on the heap, and the GetData method's variables will be removed from the stack after the method is executed. Now examine the following example to make a distinction:

        class DataLayer
        {
            public IQueryable<Product> GetData()
            {
                ProductDataContext context = new ProductDataContext();
                return context.Where(t => t.ProductId == 2);
            }
        }

    (*1) So okay, the first thing we know is that if we define the ProductDataContext instance as a field, we can reach it everywhere in the class, which means we don't have to create the same object instance every time.

    But let's say we are talking about ASP.NET: once the user presses the submit button, the posted data is sent to the server, the events are executed, and the data is stored in a database via the method above, so it is probable that the same user sends different data one request after another. If I understand correctly, after the page is executed the finalizers come into play and clear things from memory (from the heap), which means we lose our instance variables from memory as well; after another post, the DataContext has to be created once again for the new page cycle.

    So it seems the only benefit of declaring it as a field visible to the whole class is point number one above. Or is there something else? Thanks in advance... (If I said something incorrect, please correct me.)


  • How to scale JPEG images with a non-standard sampling factor in Java?

    - by HRJ
    I am using Java AWT for scaling a JPEG image to create thumbnails. The code works fine when the image has a normal sampling factor (2x2, 1x1, 1x1). However, an image with a sampling factor of (1x1, 1x1, 1x1) causes problems when scaled: the colors get corrupted, though the features are still recognizable. (The original and the corrupted thumbnail were attached as images.) The code I am using is roughly equivalent to:

        static BufferedImage awtScaleImage(BufferedImage image, int maxSize, int hint) {
            // We use AWT image scaling because it has far superior quality
            // compared to JAI scaling. It also performs better (speed)!
            System.out.println("AWT Scaling image to: " + maxSize);
            int w = image.getWidth();
            int h = image.getHeight();
            float scaleFactor = 1.0f;
            if (w > h)
                scaleFactor = ((float) maxSize / (float) w);
            else
                scaleFactor = ((float) maxSize / (float) h);
            w = (int) (w * scaleFactor);
            h = (int) (h * scaleFactor);
            // Since this code can run both headless and in a graphics context,
            // we just create a standard RGB image here and take the performance
            // hit of a non-compatible image format, if any.
            Image i = image.getScaledInstance(w, h, hint);
            image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = image.createGraphics();
            g.drawImage(i, null, null);
            g.dispose();
            i.flush();
            return image;
        }

    (Code courtesy of this page.) Is there a better way to do this? Here's a test image with a sampling factor of (1x1, 1x1, 1x1).
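
    As an aside on the "is there a better way" part: a commonly recommended alternative (a hedged sketch, not a guaranteed fix for the sampling-factor color issue) is to skip getScaledInstance() and scale directly with Graphics2D using interpolation hints, drawing straight into a plain TYPE_INT_RGB target:

        import java.awt.Graphics2D;
        import java.awt.RenderingHints;
        import java.awt.image.BufferedImage;

        // Sketch: scale via Graphics2D with bilinear interpolation instead of
        // Image.getScaledInstance(). Drawing into a TYPE_INT_RGB target forces
        // the pixels through a plain RGB color model in one step.
        class ScaleSketch {
            static BufferedImage scaleWithHints(BufferedImage src, int w, int h) {
                BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = dst.createGraphics();
                g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                                   RenderingHints.VALUE_INTERPOLATION_BILINEAR);
                g.setRenderingHint(RenderingHints.KEY_RENDERING,
                                   RenderingHints.VALUE_RENDER_QUALITY);
                g.drawImage(src, 0, 0, w, h, null); // scales and draws in one call
                g.dispose();
                return dst;
            }
        }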


  • Magic function `bash` not found

    - by inspectorG4dget
    I have a bunch of simulations that I want to run on a high-performance cluster, on which I have to make reservations to get computing time. Since the reservations are bounded by time, I am developing an automation script that I can scp to the cluster and run. This script downloads the relevant simulation files, runs them, and uploads the results. Part of this automation script is in bash (cp, scp, etc.) and the rest is in Python. To develop the automation, I am using an IPython notebook.

    So far I've coded all the Python automation in my IPython notebook and am now trying to write the bash part. However, it seems that the %%bash magic doesn't work in my notebook. I get the following error with this code in my cell:

    Cell:

        %%bash
        echo hi

    Error:

        File "<ipython-input-22-62ec98e35224>", line 3
            echo hi
                  ^
        SyntaxError: invalid syntax

    On a whim, I tried this:

    Cell:

        %%bash
        print "hi"

    Error:

        hi
        ERROR: Magic function `bash` not found.

    So I tried the same with %%system, %%! and %%shell, but none of those work; they all give me the same error. Why is this happening? How can I fix this?

    Metadata: IPython 0.13.dev, Python 2.7.1, Mac OS X Lion


  • Architecture for database analytics

    - by David Cournapeau
    Hi. We have an architecture where we provide Business Intelligence-like services for each customer's website (internet merchants). Now I need to analyze those data internally (for algorithmic improvement, performance tracking, etc.), and they are potentially quite heavy: we have up to millions of rows per customer per day, and I may want to know how many queries we had in the last month, compared week by week, etc.; that is on the order of billions of entries, if not more.

    The way it is currently done is quite standard: daily scripts scan the databases and generate big CSV files. I don't like this solution for several reasons:

    - as is typical with such scripts, they fall into the write-once-and-never-touched-again category
    - tracking things in "real time" is necessary (we have a separate toolset to query the last few hours at the moment)
    - it is slow and non-"agile"

    Although I have some experience dealing with huge datasets for scientific use, I am a complete beginner as far as traditional RDBMSs go. It seems that using a column-oriented database for analytics could be a solution (the analytics don't need most of the data we have in the app database), but I would like to know what other options are available for this kind of problem.


  • What effects has working in rotating shifts on programming teams?

    - by eKek0
    I work in a bank, and the boss now wants the programming team to work rotating shifts: sometimes from 7am to 3pm, and sometimes from 11.30am to 7.30pm. He says we will be more productive working this way, because he has worked with teams like that before and he just knows it. Nobody on the team wants this change, but we don't know how to effectively push back on the new rule. I tried to find empirical (or nearly empirical) evidence about how rotating shifts affect the performance of programming teams, and I couldn't. I have read something about rotating shifts in general, but nothing specifically about their effect on programming teams. Do you know of any research about rotating shifts and programming teams? Have you had any experience with this kind of work?

    EDIT: Other teams in the company (the database administrators, the help desk, the communications team and the network administrators) already work rotating shifts; they don't like it, but they do it anyway. I think the boss wants us to work rotating shifts too because of them, but since only we do programming, I think the effects of rotating shifts could be, at least, different for us.


  • Generating a field setter and getter with ASM

    - by ng
    I would like to avoid reflection in an open source project I am developing. Here I have classes like the following:

        public class PurchaseOrder {
            @Property
            private Customer customer;

            @Property
            private String name;
        }

    I scan for the @Property annotation to determine what I can set and get on a PurchaseOrder reflectively. There are many such classes, all using java.lang.reflect.Field.get() and java.lang.reflect.Field.set(). Ideally I would like to generate, for each property, an invoker like the following:

        public interface PropertyAccessor<S, V> {
            public void set(S source, V value);
            public V get(S source);
        }

    Now when I scan the class I can create a static inner class of PurchaseOrder like so:

        static class customer_Field implements PropertyAccessor<PurchaseOrder, Customer> {
            public void set(PurchaseOrder order, Customer customer) {
                order.customer = customer;
            }
            public Customer get(PurchaseOrder order) {
                return order.customer;
            }
        }

    With these I completely avoid the cost of reflection: I can set and get on my instances with native performance. Can anyone tell me how I would do this? A code example would be great. I have searched the net for a good example but can find nothing like this, and the ASM examples are pretty poor too. The key here is that I have an interface I can pass around, so I can have various implementations: perhaps one with Java reflection as a default, one with ASM, and maybe one with Javassist? Any help would be greatly appreciated.
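
    Not the ASM bytecode generation the question asks for, but a much shorter related route that also avoids per-call Field.get() overhead is java.lang.invoke.MethodHandles. A hedged sketch, assuming the fields stay private and flipping the accessible flag is acceptable:

        import java.lang.invoke.MethodHandle;
        import java.lang.invoke.MethodHandles;
        import java.lang.reflect.Field;

        // Sketch: a PropertyAccessor (the interface from the question) backed by
        // MethodHandles instead of generated bytecode.
        final class HandleAccessor<S, V> implements PropertyAccessor<S, V> {
            private final MethodHandle getter;
            private final MethodHandle setter;

            HandleAccessor(Class<S> owner, String fieldName) throws ReflectiveOperationException {
                Field f = owner.getDeclaredField(fieldName);
                f.setAccessible(true); // required because the fields are private
                MethodHandles.Lookup lookup = MethodHandles.lookup();
                this.getter = lookup.unreflectGetter(f);
                this.setter = lookup.unreflectSetter(f);
            }

            @SuppressWarnings("unchecked")
            public V get(S source) {
                try { return (V) getter.invoke(source); }
                catch (Throwable t) { throw new RuntimeException(t); }
            }

            public void set(S source, V value) {
                try { setter.invoke(source, value); }
                catch (Throwable t) { throw new RuntimeException(t); }
            }
        }

    Whether this reaches "native" speed depends on the JVM (handles held in static final fields tend to inline best), so it is worth benchmarking against the ASM route before committing.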


  • Simply doing modelType.ToString() isn't sufficient: how can I use it via Activator.CreateInstance?

    - by programmerist
        public class MyController {
            public object CreateByEnum(DataModelType modeltype) {
                string enumText = modeltype.ToString();         // will return for example "Company"
                Type classType = Type.GetType(enumText);        // the Type for the Company class
                object t = Activator.CreateInstance(classType); // create an instance of the Company class
                return t;
            }
        }

        public class CompanyView {
            public static List<Personel> GetPersonel() {
                MyController controller = new MyController();
                _Company company = controller.CreateByEnum(DataModelType.Company) as _Company;
                return company.GetPersonel();
            }
        }

        public enum DataModelType {
            xyz,
            klm,
            tucyz,
            Company
        }

    Yes, I agree Activator.CreateInstance() is very useful. Unfortunately, I need to pass in the correct type, which means building the correct string to pass to Type.GetType(). If I trace through the call to Controller.CreateByEnum() in the code posted above, simply doing modelType.ToString() isn't sufficient, even for the case of DataModelType.Company. My current solution would be a maintenance bottleneck. What would be better is something that takes the result of modelType.ToString() and then searches through all the types found in all the assemblies loaded in the current AppDomain. According to MSDN, Type.GetType() only searches the current calling assembly and mscorlib.dll. How can I do that, with the best performance?


  • Best XML format for log events in terms of tool support for data mining and visualization?

    - by Thorbjørn Ravn Andersen
    We want to be able to create log files from our Java application suited for later processing by tools, to help investigate bugs and gather performance statistics. Currently we use the traditional "log stuff which may or may not be flattened into text form and appended to a log file" approach, but that works best for small amounts of information read by a human.

    After careful consideration, the best bet seems to be to store the log events as XML snippets in text files (which are then treated like any other log file), and then download them to a machine with the appropriate tool for post-processing. I'd like to use as widely supported an XML format as possible, and right now I am in the "research, then decide" phase. I'd appreciate help both on the XML format and on tools, and I'd be happy to write glue code to get what I need.

    What I've found so far:

    - log4j XML format: supported by Chainsaw and Vigilog.
    - Lilith XML format: supported by Lilith.

    Uninvestigated tools:

    - Microsoft Log Parser: apparently supports XML.
    - OS X log viewer.

    Plus there are a lot of tools on http://www.loganalysis.org/sections/parsing/generic-log-parsers/. Any suggestions?
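
    For reference, emitting the first option on the list above takes very little glue code: a minimal sketch, assuming the log4j 1.x API (FileAppender plus XMLLayout, whose output is the format Chainsaw reads); the file name is made up for illustration:

        import org.apache.log4j.FileAppender;
        import org.apache.log4j.Logger;
        import org.apache.log4j.xml.XMLLayout;

        // Sketch: write log4j events as XML snippets that Chainsaw/Vigilog can load.
        public class XmlLogDemo {
            public static void main(String[] args) throws Exception {
                Logger root = Logger.getRootLogger();
                root.addAppender(new FileAppender(new XMLLayout(), "events.xml"));
                root.info("payment processed");          // appended as an XML event element
                root.warn("slow response from backend");
            }
        }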


  • Flex web application: prevent framerate drop when window is invisible

    - by JayPea
    So there's been a new "feature" in the Flash player since version 10.1 that reduces the player's frame rate to 2 fps when the application window is out of view. This is good news for performance, but it can break some functionality, such as the Timer class. I have an application which uses a Timer to display a countdown. Given the nature of the application, the Timer is required to complete its countdown even if the user is not there to see it.

    Imagine that you need to give the user only 10 seconds to perform a task. If the user minimizes the window halfway through the countdown, they can take as much time as they want and still have 5 seconds left when they return to the window. Apparently this cannot be avoided with the newer Flash players. In AIR applications there is the backgroundFrameRate property, which can be set to prevent this behavior, but it is part of the WindowedApplication class, so it seems to be unavailable in a web application. Does anyone know a way to keep a constant frame rate even when the window is not visible? Thanks
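
    Even without controlling the frame rate, a usual workaround is to stop counting timer ticks and instead derive the remaining time from the wall clock on every tick, so a throttled timer merely updates the display less often without stretching the deadline. A minimal sketch of the idea, written in Java for neutrality (the question's environment is ActionScript, where Date/getTimer would play the same role):

        // Sketch: a deadline-based countdown. However rarely tick handlers run
        // (e.g. when a throttled timer fires at 2 fps), the remaining time is
        // always computed from the absolute deadline, so it cannot drift.
        class Countdown {
            private final long deadlineMillis;

            Countdown(long durationMillis) {
                this.deadlineMillis = System.currentTimeMillis() + durationMillis;
            }

            long remainingMillis() {
                return Math.max(0, deadlineMillis - System.currentTimeMillis());
            }

            boolean expired() {
                return remainingMillis() == 0;
            }
        }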


  • How to efficiently convert String, integer, double, datetime to binary and vica versa?

    - by Ben
    Hi, I'm quite new to C# (I'm using .NET 4.0), so please bear with me. I need to save some object properties (of types int, double, String, boolean, DateTime) to a file, but I want to encrypt the files using my own encryption, so I can't just use FileStream to write the values. I also don't want to use object serialization, because of performance issues.

    The idea is simple: first I somehow convert the objects (their properties) to a binary array, then encrypt (a sort of XOR) the array and append it to the end of the file. When reading, I first decrypt the array and then somehow convert the binary array back into object properties (from which I'll construct the objects). I know roughly how to convert these things by hand and could code it, but it would be useless (too slow).

    I think the best way would be to just take the properties' in-memory representation and save that, but I don't know how to do that in C# (maybe using pointers?). I also thought about using MemoryStream, but again I think it would be inefficient. I am considering the Convert class, but it does not support ToByte(DateTime) (the documentation says it always throws an exception). For converting back, I think the only option is the Convert class.

    Note: I know the structure of the objects and they will not change; the maximum String length is also known. Thank you for all your ideas and time.

    EDIT: I will be storing only parts of objects, in some cases parts of different objects (a couple of properties from one object type and a couple from another), so I think serialization is not an option for me.
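
    The fixed-layout scheme described above (write each primitive at a known offset, length-prefix the string, store the date as a number) is the standard answer to this kind of problem. A sketch of the layout in Java with ByteBuffer, since the examples in this listing stick to one language; field names are made up for illustration:

        import java.nio.ByteBuffer;
        import java.nio.charset.StandardCharsets;

        // Sketch: pack an int, a double, a boolean, a timestamp and a
        // length-prefixed UTF-8 string into one fixed-layout binary record.
        public class RecordCodec {
            static byte[] encode(int id, double price, boolean active, long epochMillis, String name) {
                byte[] nameBytes = name.getBytes(StandardCharsets.UTF_8);
                ByteBuffer buf = ByteBuffer.allocate(4 + 8 + 1 + 8 + 4 + nameBytes.length);
                buf.putInt(id);
                buf.putDouble(price);
                buf.put((byte) (active ? 1 : 0));
                buf.putLong(epochMillis);     // dates travel as epoch milliseconds
                buf.putInt(nameBytes.length); // length prefix for the string
                buf.put(nameBytes);
                return buf.array();           // XOR-encrypt this array, then append to file
            }

            static void decode(byte[] record) {
                ByteBuffer buf = ByteBuffer.wrap(record);
                int id = buf.getInt();
                double price = buf.getDouble();
                boolean active = buf.get() == 1;
                long epochMillis = buf.getLong();
                byte[] nameBytes = new byte[buf.getInt()];
                buf.get(nameBytes);
                String name = new String(nameBytes, StandardCharsets.UTF_8);
                System.out.printf("%d %.2f %b %d %s%n", id, price, active, epochMillis, name);
            }
        }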


  • Why would this Lua optimization hack help?

    - by Ian Boyd
    I'm looking over a document that describes various techniques to improve the performance of Lua script code, and I'm shocked that such tricks would be required. (Although I'm quoting Lua, I've seen similar hacks in JavaScript.) Why would this optimization be required? For instance, the code

        for i = 1, 1000000 do
          local x = math.sin(i)
        end

    runs 30% slower than this one:

        local sin = math.sin
        for i = 1, 1000000 do
          local x = sin(i)
        end

    They're re-declaring the sin function locally. Why would this be helpful? It's the job of the compiler to do that anyway. Why does the programmer have to do the compiler's job? I've seen similar things in JavaScript, so obviously there must be a very good reason why the interpreting compiler isn't doing its job. What is it?

    I see it repeatedly in the Lua environment I'm fiddling in; people re-declaring variables as local:

        local strfind = strfind
        local strlen = strlen
        local gsub = gsub
        local pairs = pairs
        local ipairs = ipairs
        local type = type
        local tinsert = tinsert
        local tremove = tremove
        local unpack = unpack
        local max = max
        local min = min
        local floor = floor
        local ceil = ceil
        local loadstring = loadstring
        local tostring = tostring
        local setmetatable = setmetatable
        local getmetatable = getmetatable
        local format = format
        local sin = math.sin

    What is going on here that people have to do the work of the compiler? Is the compiler confused by how to find format? Why is this an issue a programmer has to deal with? Why would this not have been taken care of in 1993? I also seem to have hit a logical paradox:

    - Optimization should not be done without profiling.
    - Lua has no ability to be profiled.
    - Lua should not be optimized.


  • Google Adwords API response parse

    - by Yun Ling
    I am trying to figure out how to parse the AdWords API query response without exceptions. One issue I came across is that the data itself sometimes contains commas besides the commas between columns. Say I query ad group, campaign and impressions using:

        <reportDefinition xmlns="https://adwords.google.com/api/adwords/cm/v201209">
          <selector>
            <fields>CampaignName</fields>
            <fields>AdgroupName</fields>
            <fields>Impressions</fields>
            <predicates>
              <field>Status</field>
              <operator>IN</operator>
              <values>ENABLED</values>
              <values>PAUSED</values>
            </predicates>
          </selector>
          <reportName>Custom Adgroup Performance Report</reportName>
          <reportType>ADGROUP_PERFORMANCE_REPORT</reportType>
          <dateRangeType>LAST_7_DAYS</dateRangeType>
          <downloadFormat>CSV</downloadFormat>
        </reportDefinition>

    Since my campaign name contains a comma, the CSV comes back like:

        Adroup,Campaign,Impressions
        Premiun Beer, Beer, Chicago, 1000

    where the ad group is "Premiun Beer" and the campaign is "Beer, Chicago". That causes a problem if we split each line on commas. Does anyone know how to solve this?
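
    Assuming the report download quotes fields that contain commas, as standard CSV does (if it truly doesn't, no parser can recover the column boundaries), the fix is a quote-aware splitter rather than a plain split. A minimal sketch in Java, handling only the usual doubled-quote escape:

        import java.util.ArrayList;
        import java.util.List;

        // Minimal quote-aware CSV line splitter: commas inside double-quoted
        // fields do not split, and "" inside a quoted field is a literal quote.
        public class CsvLine {
            static List<String> split(String line) {
                List<String> fields = new ArrayList<>();
                StringBuilder cur = new StringBuilder();
                boolean inQuotes = false;
                for (int i = 0; i < line.length(); i++) {
                    char c = line.charAt(i);
                    if (inQuotes) {
                        if (c == '"' && i + 1 < line.length() && line.charAt(i + 1) == '"') {
                            cur.append('"'); i++;      // doubled quote -> literal quote
                        } else if (c == '"') {
                            inQuotes = false;          // closing quote
                        } else {
                            cur.append(c);
                        }
                    } else if (c == '"') {
                        inQuotes = true;               // opening quote
                    } else if (c == ',') {
                        fields.add(cur.toString()); cur.setLength(0);
                    } else {
                        cur.append(c);
                    }
                }
                fields.add(cur.toString());
                return fields;
            }

            public static void main(String[] args) {
                // three fields: Premiun Beer | Beer, Chicago | 1000
                System.out.println(split("Premiun Beer,\"Beer, Chicago\",1000"));
            }
        }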


  • How does someone without a CS degree get an interview in a sluggish economy?

    - by Anon
    I've been programming off and on since 4th grade (for about 20 years). Technology is one of my passions, but after working in the field for a couple of years out of high school, I spent nine months and $15,000 getting an accredited certificate in music performance instead of CS. I've been doing lots of self-study, but I think a CS degree is overkill for most line-of-business applications. Even so, HR departments can't be expected to know that...

    How does one get a foot in the proverbial door without a degree, especially in a smaller "fly-over country" market? Or: where can I get the cheapest/easiest degree that will pass muster (including testing out of as much as possible)? Don't get me wrong, I'm down with learning new things, but I don't necessarily need the expense or the coaching to motivate me.

    EDIT: Consolidating good answers:

    - Networking/user groups
    - Portfolio/open source contributions
    - Look for hybrid jobs (how I got my start :) )
    - Seek un-elitist companies/hiring managers (play the numbers game)
    - Start my own business. (This is a bit challenging for a family man, but a very good answer. My reason for searching is to reduce my commute, thereby allowing more time to cultivate income on the side.)
    - Avail myself of political subsidies to constituents in the teachers' unions ;)


  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to build an HTTP web chat using long-held HTTP requests (Comet), Ajax and WebSockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in JavaScript (node.js).

    I don't want to start a PHP process per user, as there is no good way to send chat messages between those PHP children. So I thought about writing my own socket server, in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a pure web developer (PHP), I'm not very familiar with sockets, since I usually let the web server take care of connections. The chat messages won't be saved on disk or in MySQL but in RAM, as an array or object, for best speed.

    As far as I know, there is no way to handle multiple connections at the same time in a single PHP process (socket server); you can, however, accept a great number of socket connections and process them successively in a loop (read and write; on an incoming message, write to all socket connections). The problem is that there will most likely be lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users.

    My question is: can node.js handle a socket server with better performance? Node.js is event-based, but I'm not sure whether it can process multiple events at the same time (wouldn't that need multi-threading?) or whether there is just an event queue. With an event queue it would be just like PHP: process user after user.

    I could also spawn a PHP process per chat room (far fewer users), but as far as I know there are single-threaded IRC servers (written in C++ or whatever) that are capable of handling thousands of users, so maybe it's also possible in PHP. I would prefer PHP over node.js, because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.
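
    For what it's worth, the trick those single-threaded IRC servers (and node.js, and PHP's stream-select approach) rely on is OS-level readiness notification: one thread asks the kernel which of thousands of sockets are ready and only touches those, so nothing blocks on an idle connection. A hedged sketch of that pattern using Java NIO, reduced to accept-and-echo (a chat server would broadcast instead of echoing):

        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.SelectionKey;
        import java.nio.channels.Selector;
        import java.nio.channels.ServerSocketChannel;
        import java.nio.channels.SocketChannel;
        import java.util.Iterator;

        // One thread, many connections: the selector blocks until some sockets
        // are ready, so idle connections cost (almost) nothing.
        public class EchoEventLoop {
            public static void main(String[] args) throws Exception {
                Selector selector = Selector.open();
                ServerSocketChannel server = ServerSocketChannel.open();
                server.bind(new InetSocketAddress(9000));
                server.configureBlocking(false);
                server.register(selector, SelectionKey.OP_ACCEPT);

                ByteBuffer buf = ByteBuffer.allocate(4096);
                while (true) {
                    selector.select(); // wait for readiness events
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (key.isAcceptable()) {
                            SocketChannel client = server.accept();
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            SocketChannel client = (SocketChannel) key.channel();
                            buf.clear();
                            int n = client.read(buf);
                            if (n < 0) { client.close(); continue; }
                            buf.flip();
                            client.write(buf); // echo back
                        }
                    }
                }
            }
        }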


  • Drawbacks with using Class Methods in Objective C.

    - by RickiG
    Hi, I was wondering whether there are any memory/performance drawbacks, or just drawbacks in general, to using class methods like:

        + (void)myClassMethod:(NSString *)param {
            // much to be done...
        }

    or

        + (NSArray *)myClassMethod:(NSString *)param {
            // much to be done...
            return [NSArray autorelease];
        }

    It is convenient to place a lot of functionality in class methods, especially in an environment where I have to deal with memory management (iPhone), but there is usually a catch when something is convenient.

    An example could be an imagined web service that consists of a lot of classes with very simple functionality, i.e. TomorrowsXMLResults, TodaysXMLResults, YesterdaysXMLResults, MondaysXMLResults, TuesdaysXMLResults, ... I collect a ton of these in my web service class, just instantiate the web service class, and let methods on it call class methods on the 'Results' classes. The classes are simple but they handle large amounts of XML, instantiate lots of objects, etc.

    I guess I am asking whether class methods live on the stack and in memory differently from messages to instantiated objects? Or are they just instantiated and torn down again behind the scenes, and thus just a way of saving a few lines of code?


  • Is there any appreciable difference between if and if-else?

    - by Drew
    Given the following code snippets, is there any appreciable difference?

        public boolean foo(int input) {
            if (input > 10) {
                doStuff();
                return true;
            }
            if (input == 0) {
                doOtherStuff();
                return true;
            }
            return false;
        }

    vs.

        public boolean foo(int input) {
            if (input > 10) {
                doStuff();
                return true;
            } else if (input == 0) {
                doOtherStuff();
                return true;
            } else {
                return false;
            }
        }

    Or would the single-exit principle be better here with this piece of code?

        public boolean foo(int input) {
            boolean toBeReturned = false;
            if (input > 10) {
                doStuff();
                toBeReturned = true;
            } else if (input == 0) {
                doOtherStuff();
                toBeReturned = true;
            }
            return toBeReturned;
        }

    Is there any perceptible performance difference? Do you feel one is more or less maintainable/readable than the others?


  • How can I handle dynamic calculated attributes in a model in Django?

    - by bullfish
    In Django I calculate the breadcrumb (a list of parents) for a geographical object. Since it is not going to change very often, I am thinking of pre-calculating it when the object is saved or initialized.

    1.) What would be better? Which solution has better performance: calculating it in __init__, or when the object is saved (the object takes about 500-2000 characters in the DB)?

    2.) I tried to override the __init__ and save() methods, but I don't know how to use attributes of the object being saved. Accessing *args and **kwargs did not work. How can I access them? Do I have to save, access the parent, and then save again?

    3.) If I decide to save the breadcrumb, what's the best way to do it? I used http://www.djangosnippets.org/snippets/1694/ and have crumb = PickledObjectField().

    This is the method that calculates the attribute crumb:

        def _breadcrumb(self):
            breadcrumb = []
            x = self
            while True:
                x = x.father
                try:
                    if hasattr(x, 'country'):
                        breadcrumb.append(x.country)
                    elif hasattr(x, 'region'):
                        breadcrumb.append(x.region)
                    elif hasattr(x, 'city'):
                        breadcrumb.append(x.city)
                    else:
                        break
                except:
                    break
            breadcrumb.reverse()
            return breadcrumb

    This is my save method:

        def save(self, *args, **kwargs):
            # how can I access the father of the object?
            father = self.father        # does obviously not work
            father = kwargs['father']   # does not work either
            # the breadcrumb gets calculated here
            self.crumb = self._breadcrumb(father)
            super(GeoObject, self).save(*args, **kwargs)

    Please help me out; I have been working on this for days now. Thank you.


  • High Runtime for Dictionary.Add for a large amount of items

    - by aaginor
    Hi folks, I have a C# application that stores data from a text file in a Dictionary object. The amount of data to be stored can be rather large, so it takes a lot of time to insert the entries. With many items in the Dictionary it gets even worse, because of the resizing of the internal array that stores the data for the Dictionary. So I initialized the Dictionary with the number of items that will be added, but this had no impact on speed. Here is my function:

        private Dictionary<IdPair, Edge> AddEdgesToExistingNodes(HashSet<NodeConnection> connections)
        {
            Dictionary<IdPair, Edge> resultSet = new Dictionary<IdPair, Edge>(connections.Count);

            foreach (NodeConnection con in connections)
            {
                ...
                resultSet.Add(nodeIdPair, newEdge);
            }

            return resultSet;
        }

    In my tests I insert ~300k items. I checked the running time with the ANTS Performance Profiler and found that the average time for resultSet.Add(...) doesn't change when I initialize the Dictionary with the needed size; it is the same as when I initialize it with new Dictionary() (about 0.256 ms on average per Add). This is definitely caused by the amount of data in the Dictionary (although I initialized it with the desired size). For the first 20k items, the average time per Add is 0.03 ms. Any idea how to make the Add operation faster? Thanks in advance, Frank
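
    Since presizing rules out rehashing as the cause, a usual suspect in this situation is the key's hash function: if many IdPair values share a hash code, each Add degenerates into a scan of a long collision chain. That hypothesis is easy to test; here is a hedged analogue in Java with HashMap (not the asker's C# code, and the "bad hash" below is an assumption for illustration, since the real IdPair.GetHashCode is not shown):

        import java.util.HashMap;
        import java.util.Map;

        // Measure puts into a default-sized map, a presized map, and a
        // presized map whose keys nearly all collide.
        public class HashMapAddTiming {
            static final class IdPair {
                final int a, b;
                final boolean badHash;
                IdPair(int a, int b, boolean badHash) { this.a = a; this.b = b; this.badHash = badHash; }
                @Override public boolean equals(Object o) {
                    if (!(o instanceof IdPair)) return false;
                    IdPair p = (IdPair) o;
                    return p.a == a && p.b == b;
                }
                @Override public int hashCode() {
                    return badHash ? a % 100 : 31 * a + b; // bad hash: only 100 distinct codes
                }
            }

            static long timeMillis(Map<IdPair, Integer> map, boolean badHash) {
                long t0 = System.nanoTime();
                for (int i = 0; i < 300_000; i++) {
                    map.put(new IdPair(i, i, badHash), i);
                }
                return (System.nanoTime() - t0) / 1_000_000;
            }

            public static void main(String[] args) {
                System.out.println("default capacity  : " + timeMillis(new HashMap<>(), false) + " ms");
                System.out.println("presized          : " + timeMillis(new HashMap<>(600_000), false) + " ms");
                System.out.println("presized, bad hash: " + timeMillis(new HashMap<>(600_000), true) + " ms");
            }
        }

    If the C# profile looks like the third line rather than the second, the fix is a better GetHashCode (mixing both ids), not a bigger initial capacity.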


  • How can I optimize the SELECT statement running on an Oracle database?

    - by Elvis Lou
    I have a SELECT statement in Oracle:

        SELECT COUNT(DISTINCT ds1.endpoint_msisdn) multiple30,
               dss1.service,
               dss1.endpoint_provisioning_id,
               dss1.company_scope,
               Nvl(x.subscription_status, dss1.subscription_status) subscription_status
        FROM   daily_summary ds1
               JOIN daily_summary ds2 ON ds1.endpoint_msisdn = ds2.endpoint_msisdn,
               daily_summary_static dss1,
               daily_summary_static dss2,
               (SELECT NULL subscription_status FROM dual
                UNION ALL
                SELECT -2 subscription_status FROM dual) x
        WHERE  ds1.summary_ts >= To_date('10-04-2012', 'dd-mm-yyyy') - 30
               AND ds1.summary_ts <= To_date('10-04-2012', 'dd-mm-yyyy')
               AND dss1.last_active >= To_date('10-04-2012', 'dd-mm-yyyy') - 30
               AND dss1.last_active <= To_date('10-04-2012', 'dd-mm-yyyy')
               AND dss2.last_active >= To_date('10-04-2012', 'dd-mm-yyyy') - 30
               AND dss2.last_active <= To_date('10-04-2012', 'dd-mm-yyyy')
               AND dss1.service <> dss2.service
               AND ( dss1.company_scope = 2 OR dss1.company_scope = 5 )
               AND ( dss2.company_scope = 2 OR dss2.company_scope = 5 )
               AND dss1.company_scope = dss2.company_scope
               AND ds1.endpoint_noc_id = dss1.endpoint_noc_id
               AND ds1.endpoint_host_id = dss1.endpoint_host_id
               AND ds1.endpoint_instance_id = dss1.endpoint_instance_id
               AND ds2.endpoint_noc_id = dss2.endpoint_noc_id
               AND ds2.endpoint_host_id = dss2.endpoint_host_id
               AND ds2.endpoint_instance_id = dss2.endpoint_instance_id
               AND dss1.endpoint_provisioning_id = dss2.endpoint_provisioning_id
               AND Least(1, ds1.total_actions) = 1
               AND Least(1, ds2.total_actions) = 1
        GROUP  BY dss1.service,
                  dss1.endpoint_provisioning_id,
                  dss1.company_scope,
                  Nvl(x.subscription_status, dss1.subscription_status);

    This query took about 26 minutes to return in my environment, but if I remove this section:

        dss1.last_active >= to_date('10-04-2012','dd-mm-yyyy') - 30
        AND dss1.last_active <= to_date('10-04-2012','dd-mm-yyyy')
        AND dss2.last_active >= to_date('10-04-2012','dd-mm-yyyy') - 30
        AND dss2.last_active <= to_date('10-04-2012','dd-mm-yyyy') AND

    it only took 20 seconds to run. We have an index on the column last_active; I don't know why this section slows the performance down so much. Any ideas?


  • Doctrine - get the offset of an object in a collection (implementing an infinite scroll)

    - by dan
    I am using Doctrine and trying to implement an infinite scroll on a collection of notes displayed in the user's browser. The application is very dynamic: when the user submits a new note, the note is added to the top of the collection straight away, besides being sent to (and stored on) the server. This is why I can't use a traditional pagination method, where you just send the page number to the server and the server figures out the offset and the number of results from that.

    To give you an example of what I mean: imagine there are 20 notes displayed, then the user adds 2 more notes, so 22 notes are displayed. If I simply request "page 2", the first 2 items of that page will be the last two items of the page currently displayed to the user. Which is why I am after a more sophisticated method, the one I am about to explain. Please consider the following code, which is part of the server code serving an AJAX request for more notes:

        // $lastNoteDisplayedId is coming from the AJAX request
        $lastNoteDisplayed = $repository->findBy($lastNoteDisplayedId);
        $allNotes = $repository->findBy($filter, array('createdAt' => 'desc'));
        $offset = getLastNoteDisplayedOffset($allNotes, $lastNoteDisplayedId);

        // retrieve the page to send back so that it can be appended to the listing
        $notesPerPage = 30;
        $notes = $repository->findBy(
            array(),
            array('createdAt' => 'desc'),
            $notesPerPage,
            $offset
        );

        $response = json_encode($notes);
        return $response;

    Basically I would need to write the method getLastNoteDisplayedOffset, which, given the whole set of notes and one particular note, gives me its offset so that I can use it for the pagination in the previous Doctrine statement. I know a possible implementation would be:

        getLastNoteDisplayedOffset($allNotes, $lastNoteDisplayedId) {
            $i = 0;
            foreach ($allNotes as $note) {
                if ($note->getId() === $lastNoteDisplayedId->getId()) {
                    break;
                }
                $i++;
            }
            return $i;
        }

    I would prefer not to loop through all the notes, because performance is an important factor. I was wondering whether Doctrine has such a method itself, or whether you can suggest a different approach.
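
    One approach that sidesteps the offset computation entirely is keyset (cursor) pagination: the client sends the creation timestamp and id of the last note it shows, and the server returns the next batch strictly older than that key, so freshly inserted notes at the top can never shift the page boundary. The idea in a hedged sketch, written in Java over an in-memory list to stay language-neutral (the Note type and method names are hypothetical; in Doctrine the filter would become a WHERE clause on createdAt and id):

        import java.time.Instant;
        import java.util.List;
        import java.util.stream.Collectors;

        // Keyset pagination sketch: fetch the next page after a (createdAt, id)
        // cursor, assuming notes are kept sorted by createdAt descending.
        record Note(long id, Instant createdAt) {}

        class NotePager {
            static List<Note> nextPage(List<Note> sortedDesc, Instant lastTs, long lastId, int pageSize) {
                return sortedDesc.stream()
                        .filter(n -> n.createdAt().isBefore(lastTs)
                                  || (n.createdAt().equals(lastTs) && n.id() < lastId))
                        .limit(pageSize)
                        .collect(Collectors.toList());
            }
        }

    The id acts as a tie-breaker so two notes created in the same millisecond cannot be skipped or duplicated across pages.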


  • Search and replace in Word RTF

    - by Perello
    I'm working on an application which has a workflow for postal mail. These letters are generated according to my application's business rules. The templates are in HTML or RTF, and everything works perfectly as long as the user does not create the RTF with Word. Word support is not within the specs, but my management would welcome Word compatibility if it doesn't involve too much work, and it would please our customer and ease their life.

    The RTF templates contain tags which are replaced by application values. In most RTF files, tags are not split, so the search and replace works perfectly. I wish to handle Word files with few modifications.

    Example data: [[FooBuzz]]. In most RTF it is not split. In Word 2003 it becomes:

        {\rtlch\fcs1 \af0 \ltrch\fcs0 \insrsid5517131 [[}{\rtlch\fcs1 \af0 \ltrch\fcs0 \insrsid2708730 FooBuzz}{\rtlch\fcs1 \af0 \ltrch\fcs0 \insrsid5517131 ]]}

    And Word 2007 also splits the tag, with garbage inside between Foo and Buzz. So I wish to handle common RTF perfectly and detect tags even if they are split. I have two constraints: first, no regression; second, it has to stay simple. Performance is not an issue here. I'm using symfony 1.4. The current search code is:

        $regExpression = '/\[\[([^\[\]]*)\]\]/';
        preg_match_all($regExpression, $sTemplate, $outKeys);
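
    One way to at least detect split tags (a hedged, detection-only sketch; an in-place replacement would still have to map matches back to offsets in the original RTF) is to normalize the RTF first, stripping control words and group braces so the fragments of [[FooBuzz]] rejoin before the tag regex runs. Sketched in Java to keep the examples in this listing in one language:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        // Normalize RTF enough to see through Word's run splitting:
        // 1. drop control words like \rtlch or \insrsid5517131 (with their delimiter space)
        // 2. drop group braces
        // Then the original [[Tag]] regex can find tags again.
        public class RtfTagFinder {
            private static final Pattern CONTROL_WORD = Pattern.compile("\\\\[a-zA-Z]+-?\\d* ?");
            private static final Pattern TAG = Pattern.compile("\\[\\[([^\\[\\]]*)]]");

            static void findTags(String rtf) {
                String plain = CONTROL_WORD.matcher(rtf).replaceAll("")
                                           .replace("{", "")
                                           .replace("}", "");
                Matcher m = TAG.matcher(plain);
                while (m.find()) {
                    System.out.println("found tag: " + m.group(1));
                }
            }

            public static void main(String[] args) {
                findTags("{\\rtlch\\fcs1 \\af0 \\ltrch\\fcs0 \\insrsid5517131 [[}"
                       + "{\\rtlch\\fcs1 \\af0 \\ltrch\\fcs0 \\insrsid2708730 FooBuzz}"
                       + "{\\rtlch\\fcs1 \\af0 \\ltrch\\fcs0 \\insrsid5517131 ]]}");
                // prints: found tag: FooBuzz
            }
        }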

