Search Results

Search found 1282 results on 52 pages for 'overhead'.

Page 12 of 52

  • Is there a version control system that can show changes to a specific method or function?

    - by chesles
    Sometimes it would be nice to be able to say something like (git|svn|hg|etc) diff Foo.c:main or (git|svn|hg|etc) log Foo.c:main to see the changes made to a specific function within a source file since the last commit, or the complete history of changes to it. My question is two-fold: Does something exist that does this? And would such a tool be practical? It would have to do some simple parsing of the code at each revision in order to compare different versions of the function; would the overhead be too much for it to be efficient?

    Read the article

  • Are there studies about the disadvantages of using issue tracking systems? [closed]

    - by user1062120
    I don't like issue tracking systems because: It takes too much time to describe issues in them, which discourages their use. You create a place to keep your bugs, and if there is a place for them, people usually don't care too much about fixing a bug because they can put it there so that someday someone can fix it (or not). With time, the bug list gets so long that nobody can deal with it anymore, and it eats up a lot of our time. I prefer handling issues using post-its on a whiteboard, face-to-face conversations, and killing important bugs as soon as they appear. I don't care much about keeping track of bug history because I don't think it is worth the overhead. Am I alone here? Are there studies (book/article/whatever) about the disadvantages (or great advantages) of using issue tracking systems?

    Read the article

  • What's a standard productive vs total office hours ratio? [migrated]

    - by marianov
    So it goes like this: we keep track of tasks using Redmine. We log the time spent on tasks, but at the end of the week, if we add up all the time spent on those tasks, there is no way a person has spent 40 hours working. I think that's correct, because offices have overhead (reading emails, politics, coffee, distractions). What would be a normal ratio of productive time to total time spent? Other areas in the organization just measure time spent in the office (with the RFID badges that open the door), but we don't like that approach and we are trying to convince Auditing to measure us using Redmine instead.

    Read the article

  • Child to Parent linking - bad idea?

    - by Thraka
    I have a situation where my parent knows about its child (duh), but I want the child to be able to reference the parent. The reason for this is that I want the child to have the ability to designate itself as most important or least important whenever it feels like it. When the child does this, it moves itself to the top or bottom of the parent's children. In the past I've used a WeakReference property on the child to refer back to the parent, but I feel that adds annoying overhead; then again, maybe it's just the best way to do it. Is this just a bad idea? How would you implement this ability differently?
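
    A minimal sketch of the plain back-reference approach, assuming nothing beyond what the question describes; the type and method names (Parent, Child, PromoteToFirst, DemoteToLast) are illustrative, not from the question:

      using System.Collections.Generic;

      public class Parent
      {
          public List<Child> Children { get; } = new List<Child>();

          public Child AddChild()
          {
              var child = new Child(this);
              Children.Add(child);
              return child;
          }

          // Called by a child that wants to be most / least important.
          internal void MoveToFront(Child child) { Children.Remove(child); Children.Insert(0, child); }
          internal void MoveToBack(Child child)  { Children.Remove(child); Children.Add(child); }
      }

      public class Child
      {
          private readonly Parent parent;   // plain back-reference, no WeakReference indirection

          internal Child(Parent parent) { this.parent = parent; }

          public void PromoteToFirst() { parent.MoveToFront(this); }
          public void DemoteToLast()  { parent.MoveToBack(this); }
      }

    As long as parent and child always live and die together, a plain reference like this avoids the WeakReference overhead mentioned above; a weak reference mainly earns its keep when children can outlive their parent and you want to avoid keeping the parent alive.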

    Read the article

  • Does low latency code sometimes have to be "ugly"?

    - by user997112
    (This is mainly aimed at those who have specific knowledge of low latency systems, to avoid people just answering with unsubstantiated opinions.) Do you feel there is a trade-off between writing "nice" object-oriented code and writing very fast, low latency code? For instance, avoiding virtual functions in C++ and the overhead of polymorphism, or rewriting code so that it looks nasty but is very fast? It stands to reason: who cares if it looks ugly (so long as it's maintainable) if you need speed? I would be interested to hear from people who have worked in such areas.

    Read the article

  • Which countries have suitable laws for game development companies? [on hold]

    - by yoni0505
    Which countries are most suitable for game companies? By suitable I mean: their laws let the business be more profitable (for example, low taxes); there is less bureaucracy (for example, around creating a company or employment laws); living there isn't expensive (for example, rent and food prices); and so on. In short: maximum revenue with minimum overhead. What other things do I have to consider when choosing the place to be in? Are there any articles about this subject? (I couldn't find any.)

    Read the article

  • What are the difficulties of operating system multithreading?

    - by ghedas
    I am reading a book that compares two ways of implementing threads, Middleware Threads and OS Threads. I have a question about these sentences: "A difficulty of operating system multithreading, however, is performance overhead. Since it is the operating system that is involved in switching threads, this involves system calls. These are generally more expensive than thread operations executed at the user level, which is where the transactional middleware is operating."

    Read the article

  • WCF chunking/streaming - make it transparent for client

    - by bybor
    While developing a WCF service I've faced the problem of transferring large data as method parameters (4 MB of raw size, not counting transfer/message overhead). The solution for this problem is to use chunking or streaming, but all the samples I've seen assume the client is aware of the method used and uses the available block size for sending/receiving portions of the data. The problem (for me) is that it's then not possible to call just one method, like SaveData(DataInformation info); instead you have to write a wrapper method which iterates over something like SaveDataChunk(byte[] buffer). Could it somehow be made transparent for the client, so it just calls SaveData?
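
    One way to get that transparency is to keep SaveData as the only method the calling code ever sees and hide the chunk loop inside it on the client side. A minimal sketch, assuming the SaveDataChunk(byte[] buffer) operation from the question and an arbitrary 64 KB block size (the interface shape, the class name ChunkedClient, and the block size are illustrative assumptions):

      using System;

      // The chunked contract from the question; ServiceContract/OperationContract plumbing omitted here.
      public interface IMyChunkedService
      {
          void SaveDataChunk(byte[] buffer);
      }

      public static class ChunkedClient
      {
          private const int ChunkSize = 64 * 1024;       // assumed block size

          // Callers see one logical "SaveData"; the chunking stays hidden in here.
          public static void SaveData(IMyChunkedService proxy, byte[] payload)
          {
              for (int offset = 0; offset < payload.Length; offset += ChunkSize)
              {
                  int length = Math.Min(ChunkSize, payload.Length - offset);
                  var chunk = new byte[length];
                  Array.Copy(payload, offset, chunk, 0, length);
                  proxy.SaveDataChunk(chunk);            // proxy created via ChannelFactory as usual
              }
          }
      }

    The other direction is WCF's streaming transfer mode (TransferMode.Streamed on the binding), where the operation takes a Stream parameter and the client never deals with chunks at all.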

    Read the article

  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi-real-time basis (someone is bound to ask what semi-real-time means; the answer is "as frequently as I reasonably can", but I will be pragmatic: as a benchmark, let's say we are hoping for every 15 minutes) and feed it into a data warehouse.

    How much data? At peak times we are talking approx 80-100k rows per minute hitting the OLTP side; off-peak this drops significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc., so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5.

    Best solution? From what I can piece together, the most practical solution is as follows: create a TRIGGER to write all DML activity to a rotating CSV log file, perform whatever transformations are required, then use the native DW data pump tool to efficiently pump the transformed CSV into the DW.

    Why this approach? TRIGGERs allow selective tables to be targeted rather than being system-wide, their output is configurable (i.e. into a CSV), and they are relatively easy to write and deploy; SLONY uses a similar approach and the overhead is acceptable. CSV is easy and fast to transform, and easy to pump into the DW.

    Alternatives considered: (1) Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). The problem with this is that it looked very verbose relative to what I needed and was a little trickier to parse and transform. However, it could be faster, as I presume there is less overhead compared to a TRIGGER. It would certainly make the admin easier, as it is system-wide, but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log). (2) Querying the data directly via an ETL tool such as Talend and pumping it into the DW; the problem is that the OLTP schema would need to be tweaked to support this, and that has many negative side-effects. (3) Using a tweaked/hacked SLONY; SLONY does a good job of logging and migrating changes to a slave, so the conceptual framework is there, but the proposed solution just seems easier and cleaner. (4) Using the WAL. Has anyone done this before? Want to share your thoughts?

    Read the article

  • Clever ways of implementing different data structures in C & data structures that should be used more often

    - by Yktula
    What are some clever (not ordinary) ways of implementing data structures in C, and what are some data structures that should be used more often? For example, what is the most effective way (generating minimal overhead) to implement a directed, cyclic graph with weighted edges in C? I know that we can store the distances in an array as is done here, but what other ways are there to implement this kind of graph?

    Read the article

  • DynamicMethod for ConstructorInfo.Invoke, what do I need to consider?

    - by Lasse V. Karlsen
    My question is this: if I'm going to build a DynamicMethod object corresponding to a ConstructorInfo.Invoke call, what IL do I need to emit in order to cope with all (or most) types of arguments, given that I can guarantee the right type and number of arguments will be passed in before I make the call?

    Background: I am on the 3rd iteration of my IoC container, and currently doing some profiling to figure out if there are any areas where I can easily shave off large amounts of time being used. One thing I noticed is that when resolving to a concrete type, I ultimately end up with a constructor being called, using ConstructorInfo.Invoke, passing in an array of arguments that I've worked out. What I noticed is that the Invoke method has quite a bit of overhead, and I'm wondering if most of this is just different implementations of the same checks I already do. For instance, due to the constructor matching code I have, which finds a matching constructor for the predefined parameter names, types, and values that I have passed in, there's no way this particular Invoke call will end up with something it can't cope with: it always gets the correct number of arguments, in the right order, of the right type, and with appropriate values.

    When doing a profiling session containing a million calls to my resolve method, and then replacing it with a DynamicMethod implementation that mimics the Invoke call, the profiling timings were like this: ConstructorInfo.Invoke: 1973 ms; DynamicMethod: 93 ms. This accounts for around 20% of the total runtime of this profiling application. In other words, by replacing the ConstructorInfo.Invoke call with a DynamicMethod that does the same, I am able to shave off 20% of the runtime when dealing with basic factory-scoped services (i.e. all resolution calls end up with a constructor call). I think this is fairly substantial, and it warrants a closer look at how much work it would be to build a stable DynamicMethod generator for constructors in this context.

    So, the dynamic method would take in an object array and return the constructed object, and I already know the ConstructorInfo object in question. Therefore, it looks like the dynamic method would be made up of the following IL:

      l001: ldarg.0      ; the object array containing the arguments
      l002: ldc.i4.0     ; the index of the first argument
      l003: ldelem.ref   ; get the value of the first argument
      l004: castclass T  ; cast to the right type of argument (only if not "Object")
      (repeat l001-l004 for all parameters, l004 only for non-Object types, varying the l002 constant from 0 and up for each index)
      l005: newobj ci    ; call the constructor
      l006: ret

    Is there anything else I need to consider? Note that I'm aware that creating dynamic methods will probably not be available when running the application in "reduced access mode" (sometimes the brain just won't give up those terms), but in that case I can easily detect that and just call the original constructor as before, with the overhead and all.
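
    For reference, a minimal sketch of how that IL could be emitted with ILGenerator; the one addition to the listing above is handling value-type parameters (and value-type constructors) with unbox.any/box, which a castclass-only version would miss. The class name ConstructorCompiler and the Func<object[], object> delegate shape are illustrative assumptions, not from the question:

      using System;
      using System.Reflection;
      using System.Reflection.Emit;

      public static class ConstructorCompiler
      {
          public static Func<object[], object> Compile(ConstructorInfo ci)
          {
              var dm = new DynamicMethod(
                  "ctor_" + ci.DeclaringType.Name,
                  typeof(object),
                  new[] { typeof(object[]) },
                  ci.DeclaringType.Module,
                  true);                                   // skipVisibility, for non-public constructors

              ILGenerator il = dm.GetILGenerator();
              ParameterInfo[] parameters = ci.GetParameters();

              for (int i = 0; i < parameters.Length; i++)
              {
                  il.Emit(OpCodes.Ldarg_0);                // the object[] of arguments
                  il.Emit(OpCodes.Ldc_I4, i);              // index of this argument
                  il.Emit(OpCodes.Ldelem_Ref);             // arguments[i]

                  Type t = parameters[i].ParameterType;
                  if (t.IsValueType)
                      il.Emit(OpCodes.Unbox_Any, t);       // value types must be unboxed
                  else if (t != typeof(object))
                      il.Emit(OpCodes.Castclass, t);       // reference types just need a cast
              }

              il.Emit(OpCodes.Newobj, ci);                 // call the constructor
              if (ci.DeclaringType.IsValueType)
                  il.Emit(OpCodes.Box, ci.DeclaringType);  // a struct result must be boxed to return as object
              il.Emit(OpCodes.Ret);

              return (Func<object[], object>)dm.CreateDelegate(typeof(Func<object[], object>));
          }
      }

    Usage would be along the lines of: var create = ConstructorCompiler.Compile(ci); object instance = create(args);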

    Read the article

  • WCF Timeout issue - should there even be a socket connection?

    - by stiank81
    I have a .NET application which is split into a client and a server side. The communication between them is handled using WCF. I'm not using the automagic service references; instead I've built the connection manually as described in the screencast by Miguel Castro. Summarized, this means that I create a console application on the server side that holds ServiceHost objects for the different services:

      var myServiceHost = new System.ServiceModel.ServiceHost(typeof(MyService), new Uri("net.tcp://localhost:8002"));
      myServiceHost.Open();

    And on the client side I have service proxies creating channels using the ChannelFactory:

      IMyService proxy = new ChannelFactory<IMyService>("MyServiceEndpoint").CreateChannel();

    The client and server side share the service contract defined in the interface IMyService. Another advantage is that I get minimal App.config files, without all the autogenerated stuff created through the Service References. Example from the client side:

      <?xml version="1.0"?>
      <configuration>
        <system.serviceModel>
          <client>
            <endpoint address="net.tcp://localhost:8002/MyEndpoint" binding="netTcpBinding" contract="IMyService" name="MyServiceEndpoint"/>
          </client>
        </system.serviceModel>
      </configuration>

    So, to my problem. I create the proxy once, and it holds a channel all the way through the application. However, if I leave the application unused for a few minutes the channel times out, and I get the following exception:

      The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:00:59.9979998'.

    How do I prevent this? I'm assuming I need to specify a higher timeout in my configuration, but I don't want it to ever time out. On the other hand, I don't want a socket connection! Do I need one? I thought I could go connectionless with WCF... What's the permanent solution and best practice for solving this? Set the timeout to "never"? Create a new channel for each request (I'm assuming there is some overhead in creating the channel)? Increase the timeout to e.g. 5 minutes and create a new channel if the connection did time out? Make it connectionless somehow (without the overhead of creating channels)? Something else?
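
    One common pattern, sketched here as an assumption rather than as the one right answer: cache the ChannelFactory (building the factory is the expensive part) and open a short-lived channel per call, so no idle net.tcp socket sits around long enough to hit the receive timeout. The Ping operation and the class name MyServiceClient are illustrative stand-ins:

      using System;
      using System.ServiceModel;

      [ServiceContract]
      public interface IMyService
      {
          [OperationContract]
          void Ping();    // hypothetical operation standing in for the real contract
      }

      public class MyServiceClient
      {
          // Created once; ChannelFactory construction is the costly part.
          private readonly ChannelFactory<IMyService> factory =
              new ChannelFactory<IMyService>("MyServiceEndpoint");

          public void Call(Action<IMyService> action)
          {
              IMyService proxy = factory.CreateChannel();   // cheap compared to building the factory
              var channel = (IClientChannel)proxy;
              try
              {
                  action(proxy);
                  channel.Close();
              }
              catch
              {
                  channel.Abort();    // never Close a faulted channel
                  throw;
              }
          }
      }

    Usage would look like new MyServiceClient().Call(svc => svc.Ping()). Alternatively, keeping the single long-lived channel and checking ((IClientChannel)proxy).State == CommunicationState.Faulted before each call, recreating the channel when needed, corresponds to the "increase the timeout and recreate" option from the question.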

    Read the article

  • Optimizing operating systems to provide maximum Informix performance

    - by Frank Developer
    Are there any Informix-specific guides for optimizing any operating system where an ifx engine is running? For example, in Linux: strip down to a bare minimum all unnecessary binaries, daemons and utilities, tune kernel parameters, and optimize raw and cooked devices (hdparm). Someday, maybe, Informix can create its own proprietary PICK-like OS. The general idea is for the OS that ifx sits on to have the smallest footprint and the lowest overhead impact on ifx, and to provide optimized ifx performance.

    Read the article

  • How do I/O operations block?

    - by someguy
    I am specifically referring to InputStream (Java SE) and its implementations. How is blocking performed? I'm a little worried that they use a "busy-waiting" mechanism, as it would produce a lot of overhead. I believe they do it another way, but I'm just asking to be certain.

    Read the article

  • Is there a way to reuse an XmlReader?

    - by uriDium
    I have a process that uses an XmlReader. I have already done a lot to squeeze maximum performance out of it. So far we have had huge gains from using the reader as opposed to XmlDoc or DataSet.GetXml(). We expect to receive XML many times a second and I would like to avoid the overhead of recreating the reader every time. I have already cached the XmlReaderSettings, but is there any way to reuse the XmlReader, or do I need to recreate it every time?
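
    A sketch of the usual division of labour, on the assumption that the settings object rather than the reader is the reusable piece: XmlReaderSettings can be cached and shared across calls, while XmlReader.Create is called once per document, since a reader has no supported way to be reset onto new input. The CountElements helper is purely illustrative:

      using System.IO;
      using System.Xml;

      public static class XmlParsing
      {
          // Reused across every parse; building this is the part worth caching.
          private static readonly XmlReaderSettings Settings = new XmlReaderSettings
          {
              IgnoreWhitespace = true,
              IgnoreComments = true
          };

          // Hypothetical per-document operation: count the element nodes in the input.
          public static int CountElements(Stream xml)
          {
              int count = 0;
              using (XmlReader reader = XmlReader.Create(xml, Settings))
              {
                  while (reader.Read())
                      if (reader.NodeType == XmlNodeType.Element)
                          count++;
              }
              return count;
          }
      }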

    Read the article

  • Getting a Jabber status via Python

    - by Teegijee
    I'm developing a website using the Django framework, and I need to retrieve Jabber (okay, Google Talk) statuses for a user. Most of the Jabber python libraries seem like an incredible amount of overkill (and overhead) for a simple task. Is there any simple way to do this? I know very little about XMPP/Jabber, though of course I'm willing to learn. Do you need to be an authenticated and "friended" user to retrieve another user's status?

    Read the article

  • Regex on memcached key?

    - by Mickey Cheong
    Hi, is it possible to grab a list of memcached keys based on some regex? I understand that one solution is to store the keys in the database and grab the list when I need to delete those keys, but that incurs additional cost on the DB. I was wondering if there is another way to do it without the DB overhead. Cheers, Mickey

    Read the article

  • Is there an analog for the VI '.' command to repeat-last-typed-text

    - by Don
    I've used emacs for decades and always wondered, but kept on coding, whether there was a way to type in something, then move the cursor and insert the same text, like the VI . command. Instead what I do is type the text, set the mark, back up, copy the region, go to the next spot (often just C-n, down one line) and then pre-arg yank, C-u C-y. It's the overhead of setting the mark, backing up and copying the region that makes me just go ahead and retype the thing.

    Read the article

  • Would it be possible to speed up Android Emulator by removing unnecessary apps?

    - by Stan
    I am using Android SDK 1.6 and developing some simple apps. It seems that every time the Android Emulator boots, it loads all of the default apps, like Messages, Music, Browser, etc. I guess this makes the boot process slow (it adds about a minute of overhead every time I test my code). Would it be possible to take those apps out and just have my apps on the emulator? My purpose is to get a faster boot-up time on the emulator. Thanks.

    Read the article

  • Timing the Linux Kernel boot-time optimisation

    - by CVS-2600Hertz-wordpress-com
    I am trying to optimise the boot-up time of Linux on an embedded device (not a PC). Currently, to profile the boot-up sequence, I have enabled timing info on the printk logs. Is this the optimal way? If not, how do I profile the boot-up sequence (with timing) with minimum overhead? PS: I have a terminal (on the device) over a serial connection, and I use TeraTerm on Windows XP to access it.

    Read the article

  • Lightweight PHP5 based template class/system

    - by Wizzard
    Hi there. I'm looking at using a template system for a new project; it's only a small site and I don't want the overhead and 'complexity' of Smarty. I don't really like template systems that force you to make use of another language just to (apparently) make it easier for designers. Something like http://www.namepros.com/code/517342-php5-template-class.html is what I'm looking at, but something which is a bit more robust and proven.

    Read the article
