Search Results

Search found 1282 results on 52 pages for 'overhead'.


  • Working with the Objective-C/Cocoa flat namespace

    - by Stephen Blinkhorn
    I've not found anything that addresses my specific namespace question as yet. I am working on some AudioUnit plug-ins featuring Cocoa-based GUIs. The plug-ins use a common library of user interface classes (sliders, buttons, etc.) which are simply added to each Xcode project. When I recompile and distribute updates, it is pretty much guaranteed that at least one user interface class will have been updated since the last release. If the user launches an older plug-in before an updated plug-in, the old Cocoa classes are already loaded into the runtime and the plug-in attempts to use the older implementations - often resulting in a failure one way or another. I know frameworks are the intended solution, but the overhead and backwards-compatibility issues are not ideal. I prefix all class names where possible, but what options do I have to ensure that each plug-in contains unique class names for the shared user interface classes?

    Read the article

  • Why should I reuse XmlHttpRequest objects?

    - by Xavi
    From what I understand, it's a best practice to reuse XmlHttpRequest objects whenever possible. Unfortunately, I'm having a hard time understanding why. It seems like trying to reuse XHR objects can increase code complexity, introduce possible browser incompatibilities, and lead to other subtle bugs. After researching this question for a while, I did come up with a list of possible explanations:
    - Fewer objects created means less garbage collecting
    - Reusing XHR objects reduces the chance of memory leaks
    - The overhead of creating a new XHR object is high
    - The browser is able to perform some sort of network optimization under the hood
    But I'm not sure if any of these reasons are actually valid. Any light you can shed on this question would be much appreciated.

    Read the article

  • Python / Django : emulating a multidimensional layer on a MySQL database

    - by Sébastien Piquemal
    Hi, I'm working on a Django project where I need to provide a lot of different visualizations of the same data (for example, the average of a value for each month, for each year / for a location, etc.). I used an OLAP database once in college, and I thought it would fit my needs, but it appears to be much too heavy for what I need. Actually the volume of data is not very big, so I don't need any optimization, just a way to present different visualizations of the same data without having to write the same code 1000 times. So, to recap, I need a Python library that is:
    - able to emulate a multidimensional database (OLAP style would be nice, because I think it is quite convenient: star schema and everything)
    - non-intrusive, because I can't modify anything on the existing MySQL database
    - easy to use, because otherwise there's no point in replacing one kind of overhead with another.
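
    Where the data really is small, plain dict-based pivoting covers a lot of this ground. A minimal sketch (my illustration, not from the question) assuming rows arrive as (location, year, month, value) tuples read from the existing database:

        from collections import defaultdict

        # Hypothetical rows, e.g. fetched read-only from the MySQL database.
        rows = [
            ("paris", 2009, 1, 10.0),
            ("paris", 2009, 2, 14.0),
            ("lyon",  2009, 1,  8.0),
        ]

        def average_by(rows, dims):
            """Group rows by the given dimension indices and average the value."""
            buckets = defaultdict(list)
            for r in rows:
                buckets[tuple(r[i] for i in dims)].append(r[3])
            return {k: sum(v) / len(v) for k, v in buckets.items()}

        print(average_by(rows, dims=(0, 1)))  # average per (location, year)
        print(average_by(rows, dims=(2,)))    # average per month, all locations

    Each "visualization" then becomes one call with a different dims tuple rather than another hand-written query.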

    Read the article

  • Symfony 1.4: Is it possible to prevent escaping of a redirect URL?

    - by Tom
    Hi, If I do a redirect in an action as normal:

        $this->redirect('@mypage?apple=1&banana=2&orange=3');

    ...Symfony produces the correct URL:

        /something/something?apple=1&banana=2&orange=3

    However, the following gets escaped for some bizarre reason:

        $string = 'apple=1&banana=2&orange=3';
        $this->redirect('@mypage?'.$string);

    ...and the following URL is produced:

        /something/something?apple=1&amp;banana=2&amp;orange=3

    Is there a way to avoid this escaping and have the ampersands appear correctly in the URL? I've tried everything I can think of and it's driving me mad. I need this for a situation where I'm pulling a saved query as a string from the database and would just like to latch it onto the URL. I'm aware that I could generate an array from the string and then generate a brand new URL from the array, but it just seems like a lot of overhead because of this silly escaping. Thanks.

    Read the article

  • QuestionOrAnswer model?

    - by Mark
    My site has Listings. Users can ask Questions about listings, and the author of the listing can respond with an Answer. However, the Answer might need clarification, so I've made them recursive (you can "answer" an answer). So how do I set up the database? The way I have it now looks like this (in Django-style models):

        class QuestionOrAnswer(Model):
            user = ForeignKey(User, related_name='questions')
            listing = ForeignKey(Listing, related_name='questions')
            parent = models.ForeignKey('self', null=True, blank=True, related_name='children')
            message = TextField()

    But what bugs me is that listing is now an attribute of the answers as well (it doesn't need to be). What happens if the database gets mangled and an answer belongs to a different listing than its parent question? That just doesn't make any sense. We can separate it with polymorphism:

        QuestionOrAnswer
            user
            message
            created
            updated

        Question(QuestionOrAnswer)
            shipment

        Answer(QuestionOrAnswer)
            parent = ForeignKey(QuestionOrAnswer)

    And that ought to work, but now every question and answer is split into 2 tables. Is it worth this overhead for clearly defined models?
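
    For reference, the polymorphic split sketched above maps directly onto Django's multi-table inheritance. A hedged rendering (my sketch; Listing is a stand-in model, and the field names follow the sketch above):

        from django.db import models
        from django.contrib.auth.models import User

        class Listing(models.Model):  # hypothetical stand-in for the site's Listing
            title = models.CharField(max_length=100)

        class QuestionOrAnswer(models.Model):
            user = models.ForeignKey(User)
            message = models.TextField()
            created = models.DateTimeField(auto_now_add=True)
            updated = models.DateTimeField(auto_now=True)

        class Question(QuestionOrAnswer):
            shipment = models.ForeignKey(Listing)  # field name as in the sketch above

        class Answer(QuestionOrAnswer):
            parent = models.ForeignKey(QuestionOrAnswer, related_name='children')

    Multi-table inheritance is exactly the "split into 2 tables" trade-off the question describes: each subclass row is a base-table row plus a one-to-one joined row.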

    Read the article

  • crunching multiple js files during development

    - by Yaron Naveh
    I'm writing a backbone.js app. I have multiple js, css and html template files. I also have a script to crunch them into a single file so it is faster to download. How should I work during development?
    - Add a listener to the file system and recompile the files after every change so I can see the result in a browser. This implies a 1-2 second overhead before I can see what I did, which is annoying for html fine-tuning. (A sketch of such a watcher follows below.)
    - Somehow browse using the multiple files during development and only crunch before going to production. This means I need to have a separate index.html for dev and prod.
    What's your take?
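
    For the first option, the watcher needs very little machinery. A minimal stdlib-only sketch (my assumption of the setup, shown in Python; the folder names and build command are hypothetical) that polls file mtimes and re-runs the crunch script on any change:

        import os
        import subprocess
        import time

        WATCHED_DIRS = ["js", "css", "templates"]  # hypothetical source folders
        CRUNCH_CMD = ["python", "crunch.py"]       # hypothetical bundling script

        def snapshot():
            """Map every watched file to its last-modified time."""
            stamps = {}
            for d in WATCHED_DIRS:
                for root, _, files in os.walk(d):
                    for name in files:
                        path = os.path.join(root, name)
                        stamps[path] = os.path.getmtime(path)
            return stamps

        last = snapshot()
        while True:
            time.sleep(0.5)
            current = snapshot()
            if current != last:           # something changed: rebuild the bundle
                subprocess.call(CRUNCH_CMD)
                last = current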

    Read the article

  • What is the fastest way to display an image in Qt on X11 without OpenGL?

    - by msh
    I need to display a raw image in a Qt widget. I'm running X11 on a framebuffer, so OpenGL is not available. Both the image and the framebuffer are in the same format - RGB565 - but I can change it to any other format if needed. I don't need blending or scaling; I just need to display the pixels as is. I'm using QPainter::drawImage, but it converts QImage to QPixmap and this conversion seems to be very slow. Also, it is backed by XRender, and I think there is unnecessary overhead required to support blending in XRender which I don't really need. Is there any better way? If it is not available in Qt, I can use Xlib or any other library or protocol. I can modify the driver, the X server or anything else.

    Read the article

  • Can I support multiple database transactions on a single connection?

    - by draezal
    I have created a HyperSQL database. I was just wondering whether I can run multiple transactions on a single connection. I didn't want to spawn a new connection for each transaction due to the overhead associated with this. Looking at some similar questions, the suggestion appeared to be to create a pool of database connections and then block waiting for one to become available. This is a workable, but not desirable, solution. Background info (if this is relevant to the answer): my application will create a new thread when some request comes in. This request will require a database transaction, which will be committed some not-insignificant time later. Any advice appreciated :)

    Read the article

  • Corba sequence<octet> a lot slower than using a socket

    - by Totonga
    I have a CORBA-related question. In my Java app I use:

        typedef sequence<octet> Data;

    Now I played around with this Data vector. If I read the CORBA specification correctly, sequence<octet> will either be converted to xs:base64Binary or xs:hexBinary. It should be an opaque type, and so it should not use any marshalling. I tried different IDL styles:

        void Get(out Data d);
        Data Get();

    but what I see is that moving the data using CORBA is a lot slower than using a socket directly. I am fine with a little overhead, but it looks to me like the data is still marshalled. Do I need to somehow configure my ORB to suppress the marshalling, or did I miss something?

    Read the article

  • Faster way to clone.

    - by AngryHacker
    I am trying to optimize a piece of code that clones an object:

        #region ICloneable
        public object Clone()
        {
            MemoryStream buffer = new MemoryStream();
            BinaryFormatter formatter = new BinaryFormatter();
            formatter.Serialize(buffer, this);    // takes 3.2 seconds
            buffer.Position = 0;
            return formatter.Deserialize(buffer); // takes 2.1 seconds
        }
        #endregion

    Pretty standard stuff. The problem is that the object is pretty beefy and it takes 5.4 seconds (according to ANTS Profiler - I am sure some of that is profiler overhead, but still). Is there a better and faster way to clone?

    Read the article

  • Finding matches between multiple JavaScript Arrays

    - by Chris Barr
    I have multiple arrays with string values and I want to compare them and only keep the matching results that are identical between ALL of them. Given this example code:

        var arr1 = ['apple', 'orange', 'banana', 'pear', 'fish', 'pancake', 'taco', 'pizza'];
        var arr2 = ['taco', 'fish', 'apple', 'pizza'];
        var arr3 = ['banana', 'pizza', 'fish', 'apple'];

    I would like to produce the following array that contains matches from all given arrays:

        ['apple', 'fish', 'pizza']

    I know I can combine all the arrays with var newArr = arr1.concat(arr2, arr3); but that just gives me an array with everything, plus the duplicates. Can this be done easily without needing the overhead of libraries such as underscore.js? (Great, and now I'm hungry too!) EDIT: I suppose I should mention that there could be an unknown number of arrays; I was just using 3 as an example.
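
    The shape of the answer is an N-way set intersection folded over the list of arrays. A language-agnostic sketch of that fold (shown in Python rather than JavaScript, purely as illustration of the logic):

        from functools import reduce

        arrs = [
            ['apple', 'orange', 'banana', 'pear', 'fish', 'pancake', 'taco', 'pizza'],
            ['taco', 'fish', 'apple', 'pizza'],
            ['banana', 'pizza', 'fish', 'apple'],
        ]

        # Start from the first array and intersect with each of the rest;
        # works for any number of arrays.
        common = reduce(lambda acc, arr: acc & set(arr), arrs[1:], set(arrs[0]))
        print(sorted(common))  # ['apple', 'fish', 'pizza']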

    Read the article

  • Any ideas for developing a RISC-processor-friendly string allocator?

    - by Richard Fabian
    I'm working on some tools to enable high-throughput, data-oriented development, and one thing I've not got an immediate answer for is how you go about allocating strings quickly. On RISC processors you've got the additional implementation problem that the CPU doesn't like branching, which is what I'm trying to minimise or avoid. Also, cache coherence is important on most CPUs, so that's gotta be influential in the design too. So, how would you go about reducing the overhead of a generic string allocator? Sometimes it's easier to solve a more explicit problem, so any ideas for string sizes of 5-30?

    Read the article

  • Does urllib2.urlopen() actually fetch the page?

    - by beagleguy
    Hi all, I was wondering: when I use urllib2.urlopen(), does it just read the headers, or does it actually bring back the entire webpage? I.e., does the HTML page actually get fetched on the urlopen() call or on the read() call?

        handle = urllib2.urlopen(url)
        html = handle.read()

    The reason I ask is for this workflow:
    - I have a list of URLs (some of them using short-URL services)
    - I only want to read the webpage if I haven't seen that URL before
    - I need to call urlopen() and use geturl() to get the final page that link goes to (after the 302 redirects), so I know whether I've crawled it yet or not
    I don't want to incur the overhead of having to grab the HTML if I've already parsed that page. Thanks!
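
    For what it's worth, urlopen() returns once the status line and headers have arrived; the body is pulled down by read(). A common Python 2 trick (my sketch, not from the question) for resolving the redirect chain without transferring the body is to issue a HEAD request:

        import urllib2

        class HeadRequest(urllib2.Request):
            # Ask the server for headers only; 302 redirects are still followed.
            def get_method(self):
                return "HEAD"

        response = urllib2.urlopen(HeadRequest("http://example.com/some-short-link"))
        print(response.geturl())  # final URL after redirects, no body downloaded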

    Read the article

  • Remove items from SWT tables

    - by Dima
    This is more of an answer I'd like to share for a problem I was chasing for some time in an RCP application using large SWT tables. The problem is the performance of the SWT Table.remove(int start, int end) method. It gives really bad performance - about 50 msec per 100 items on my Windows XP. But the real show-stopper was on Vista and Windows 7, where deleting 100 items would take up to 5 seconds! Looking into the source code of the Table shows that there is a huge number of windowing events flying around in this call, which brings the windowing system to its knees. The solution was to hide the damn thing during the call:

        table.setVisible(false);
        table.remove(from, to);
        table.setVisible(true);

    That does wonders - deleting 500 items on both XP and Windows 7 takes ~15 msec, which is just the overhead of the timestamp printing I used. Nice :)

    Read the article

  • resizing arrays when close to memory capacity

    - by user548928
    So I am implementing my own hashtable in Java, since the built-in Hashtable has ridiculous memory overhead per entry. I'm making an open-addressed table with a variant of quadratic hashing, which is backed internally by two arrays, one for keys and one for values. I don't have the ability to resize, though. The obvious way to do it is to create larger arrays and then rehash all of the (key, value) pairs into the new arrays from the old ones. This falls apart, though, when my old arrays take up over 50% of my current memory, since I can't fit both the old and new arrays in memory at the same time. Is there any way to resize my hashtable in this situation? Edit: the info I got on current Hashtable memory overheads is from here: How much memory does a Hashtable use? Also, for my current application, my values are ints, so rather than store references to Integers, I have an array of ints as my values.
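
    One standard way out (my sketch of the idea, not from the question) is to split the table into independent shards selected by a few bits of the hash and resize one shard at a time, so the transient old-plus-new copy is only 1/N of the data rather than all of it. A toy Python illustration using linear probing:

        NUM_SHARDS = 16

        class Shard:
            def __init__(self, capacity=8):
                self.keys = [None] * capacity
                self.vals = [0] * capacity
                self.count = 0

            def _slot(self, key):
                i = hash(key) % len(self.keys)
                while self.keys[i] is not None and self.keys[i] != key:
                    i = (i + 1) % len(self.keys)  # linear probe
                return i

            def put(self, key, val):
                if (self.count + 1) * 2 > len(self.keys):  # keep load factor <= 0.5
                    self._grow()
                i = self._slot(key)
                if self.keys[i] is None:
                    self.count += 1
                self.keys[i], self.vals[i] = key, val

            def get(self, key):
                i = self._slot(key)
                return self.vals[i] if self.keys[i] == key else None

            def _grow(self):
                # The rehash only ever duplicates this one shard's storage.
                old = [(k, v) for k, v in zip(self.keys, self.vals) if k is not None]
                self.keys = [None] * (len(self.keys) * 2)
                self.vals = [0] * (len(self.vals) * 2)
                self.count = 0
                for k, v in old:
                    self.put(k, v)

        class ShardedMap:
            def __init__(self):
                self.shards = [Shard() for _ in range(NUM_SHARDS)]

            def put(self, key, val):
                self.shards[hash(key) % NUM_SHARDS].put(key, val)

            def get(self, key):
                return self.shards[hash(key) % NUM_SHARDS].get(key)

    The same trick works with flat key/value int arrays in Java; the peak extra memory during a resize drops from the whole table to one shard.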

    Read the article

  • slow php command line performance - is this normal or do I have an install problem?

    - by Frank Schwieterman
    I have a simple PHP app that prints 'hello world'. When I run it from the command line it takes 6 seconds. Is this normal? It seems to take 1 second before "hello world" prints, then 5 seconds after. I assume this is interpreter overhead. I am running PHP version 5.2.12 on Windows Server 2008 R2. Could this be an install issue, or is it typical? I did a manual install of PHP, then added whatever components were needed to run Drupal. The only PHP add-on I remember adding was MDB2; CGI support is there too. For comparison, I have a Lua project I run from the command line - hundreds of lines of code - that runs in under a second. I also have some unit tests I run from the command line, and already, with just a few, they are very slow. I run them from NetBeans and the tests are still very slow.

    Read the article

  • Store the return value of a function in a reference (C++)

    - by Ruud v A
    Is it valid to store the return value of a function in a reference?

        class A { ... };

        A myFunction()
        {
            A myObject;
            return myObject;
        } // myObject goes out of scope here

        void mySecondFunction()
        {
            A& mySecondObject = myFunction();
        }

    Is it possible to do this in order to avoid copying myObject to mySecondObject? myObject is not needed anymore and should be exactly the same as mySecondObject, so in theory it would be faster to just pass ownership of the object from one to the other. (This is also possible using a boost shared pointer, but that has the overhead of the shared pointer.) Thanks in advance.

    Read the article

  • Keep local MS SQL 2008 DB table and remote SQL Azure DB table in sync

    - by Boomerangertanger
    Hi there, I have a dedicated server which hosts a Windows Service that does a lot of very heavy-load work and populates a number of SQL Server database tables. However, of all the database tables it populates and works with, I want only one to be synchronised with a remote SQL Azure DB table. This is because this table holds what I call Resolved data, which is the end result of the Windows Service's work. I would like to keep a SQL Azure database table in sync with this table. As far as I understand, my options are:
    - Move everything onto Azure (but that involves a massive development overhead and risk)
    - Have another Windows Service on the dedicated server which essentially looks at changed records since the last update and then manually updates the SQL Azure table

    Read the article

  • Is it OK to re-create many SQL connections (SQL 2008)

    - by Mr. Flibble
    When performing many inserts into a database I would usually have code like this:

        using (var connection = new SqlConnection(connStr))
        {
            connection.Open();
            foreach (var item in items)
            {
                var cmd = new SqlCommand("INSERT ...");
                cmd.ExecuteNonQuery();
            }
        }

    I now want to shard the database and therefore need to choose the connection string based on the item being inserted. This makes my code run more like this:

        foreach (var item in items)
        {
            connStr = GetConnectionString(item);
            using (var connection = new SqlConnection(connStr))
            {
                connection.Open();
                var cmd = new SqlCommand("INSERT ...");
                cmd.ExecuteNonQuery();
            }
        }

    which basically means it's creating a new connection to the database for each item. Will this work, or will recreating connections for each insert cause terrible overhead?
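
    Two things worth noting. ADO.NET pools physical connections per connection string, so the open/close in the loop is usually cheaper than it looks. Even so, the churn can be bounded by grouping the items by shard first and opening one connection per group - a toy sketch of that grouping (shown in Python with stand-in plumbing, since the real SqlConnection details are elided in the question):

        from itertools import groupby

        # Hypothetical stand-ins for the real sharding and DB plumbing.
        def connection_string_for(item):
            return "shard-%d" % (hash(item) % 4)

        class Connection:
            def __init__(self, conn_str):
                self.conn_str = conn_str
            def __enter__(self):
                print("open", self.conn_str)
                return self
            def __exit__(self, *exc):
                print("close", self.conn_str)
            def insert(self, item):
                print("insert", item, "via", self.conn_str)

        items = ["a", "b", "c", "d", "e", "f"]
        for conn_str, group in groupby(sorted(items, key=connection_string_for),
                                       key=connection_string_for):
            with Connection(conn_str) as conn:  # one connection per shard, not per item
                for item in group:
                    conn.insert(item)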

    Read the article

  • How can I assign a two-dimensional array to a temporary two-dimensional array in C?

    - by AGeek
    Hi, I am trying to store the contents of a two-dimensional array in a temporary array. How is this possible? I don't want looping here, as it would add extra overhead; any pointer notation would be good.

        struct bucket {
            int nStrings;
            char strings[MAXSTRINGS][MAXWORDLENGTH];
        };

        void func()
        {
            char **tArray;
            int tLenArray = 0;
            for (i = 0; i < TOTBUCKETS - 1; i++) {
                if (buck[i].nStrings != 0) {
                    tArray = buck[i].strings;  // char[MAXSTRINGS][MAXWORDLENGTH] is not a char**
                    tLenArray = buck[i].nStrings;
                }
            }
        }

    The error I am getting is:

        [others@centos htdocs]$ gcc lexorder.c
        lexorder.c: In function 'lexSorting':
        lexorder.c:40: warning: assignment from incompatible pointer type

    Please let me know if this needs more explanation.

    Read the article

  • Is there performance to be gained by moving storage allocation local to a member function to its class?

    - by neuviemeporte
    Suppose I have the following C++ class:

        class Foo {
            double bar(double sth);
        };

        double Foo::bar(double sth)
        {
            double a, b, c, d, e, f;
            a = b = c = d = e = f = 0;
            /* do stuff with a..f and sth */
        }

    The function bar() will be called millions of times in a loop. Obviously, each time it's called, the variables a..f have to be allocated. Will I gain any performance by making the variables a..f members of the Foo class and just initializing them at the function's point of entry? On the other hand, the values of a..f would then be dereferenced through this->, so I'm wondering if it isn't actually a possible performance degradation. Is there any overhead to accessing a value through a pointer? Thanks!

    Read the article

  • PHP efficiency question: database call vs. file write vs. calling a C++ executable

    - by JP19
    Hi, What I wish to achieve is to log all information about each and every visit to every page of my website (IP address, browser, referring page, etc.). Now this is easy to do. What I am interested in is doing it in a way that causes minimum runtime overhead in the PHP scripts. What is the best approach efficiency-wise:
    1) Log all information to a database table
    2) Write to a file (from PHP directly)
    3) Call a C++ executable that will write this info to a file in parallel, so the script can continue execution without waiting for the file write to occur ... is this even possible?
    I may be trying to optimize unnecessarily/prematurely, but still - any thoughts / ideas on this would be appreciated. (I think the efficiency of file writes/logging can really be a concern if I have, say, 100 visits per minute...) Thanks & Regards, JP

    Read the article

  • Reason for monolithic data files

    - by Ali Lown
    Primarily this seems to be a technique used by games, where they have all the sounds in one file, textures in another, etc., with these files commonly reaching GB sizes. What is the reason for doing this rather than keeping everything in subdirectories as small files, one per texture? Many small games take the small-file approach, with the monolithic system being favoured by larger companies. Is there some filesystem overhead with lots of small files? Are they trying to protect their property - although most just seem to be a compressed file with a new extension?

    Read the article

  • Is re-using a Command and Connection object in ado.net a legitimate way of reducing new object creation?

    - by Neil Trodden
    The way our application is currently written involves creating a new connection and command object in every method that accesses our SQLite db. Given that we need it to run on a WM5 device, that is leading to hideous performance. Our plan is to use just one connection object per thread, but it has also occurred to us to use one global command object per thread too. The benefit of this is that it reduces the overhead on the garbage collector created by instantiating objects all over the place. I can't find any advice against doing this, but wondered if anyone can answer definitively whether this is a good or bad thing to do, and why?

    Read the article
