Search Results

Search found 1282 results on 52 pages for 'overhead'.

Page 30/52 | < Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >

  • Fast, cross-platform timer?

    - by dsimcha
    I'm looking to improve the D garbage collector by adding some heuristics to avoid garbage collection runs that are unlikely to result in significant freeing. One heuristic I'd like to add is that GC should not run more than once per X amount of time (maybe once per second or so). To do this I need a timer with the following properties: It must be able to grab the current time with minimal overhead. Calling core.stdc.time takes roughly as long as a small memory allocation, so it's not a good option. Ideally, it should be cross-platform (both OS and CPU) for maintenance simplicity. Super-high resolution isn't terribly important; if the times are accurate to maybe 1/4 of a second, that's good enough. It must work in a multithreaded/multi-CPU context, so the x86 rdtsc instruction won't work (it isn't synchronized across cores).
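    The question is about D, but the throttling pattern itself is simple; below is a minimal Python sketch of it (the one-second threshold is the question's own example). On the native side, the usual cross-platform pair for a cheap monotonic clock is QueryPerformanceCounter on Windows and clock_gettime(CLOCK_MONOTONIC) on POSIX, both of which are safe across cores and threads.

        import time

        MIN_GC_INTERVAL = 1.0  # seconds between permitted collections (question's example)
        _last_run = 0.0

        def maybe_collect(collect):
            """Run `collect` only if enough wall time has passed since the last run."""
            global _last_run
            now = time.monotonic()  # monotonic and cheap; unaffected by clock changes
            if now - _last_run >= MIN_GC_INTERVAL:
                _last_run = now
                collect()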

    Read the article

  • Why is numpy C extension slow?

    - by Bitwise
    I am working with large numpy arrays, and some native numpy operations are too slow for my needs (for example, simple bitwise operations such as A&B). I started looking into writing C extensions to try to improve performance. As a test case, I tried the example given here, implementing a simple trace calculation. I was able to get it to work, but was surprised by the performance: for a (1000,1000) numpy array, numpy.trace() was about 1000 times faster than the C extension! This happens whether I run it once or many times. Is this expected? Is the C-extension overhead that bad? Any ideas how to speed things up?
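    A 1000x gap usually points at the extension's inner loop rather than call overhead; a common mistake in tutorial-style extension code is touching elements through the Python object protocol instead of the array's raw data pointer. The first step is measuring both paths identically, e.g. with a sketch like this (the extension module name is hypothetical):

        import numpy as np
        import timeit

        a = np.random.randint(0, 2, (1000, 1000)).astype(np.int32)

        # per-call cost of the built-in, which walks the diagonal over raw memory
        t_np = timeit.timeit(lambda: np.trace(a), number=1000) / 1000
        print("np.trace: %.1f us/call" % (t_np * 1e6))

        # t_c = timeit.timeit(lambda: mytrace.trace(a), number=1000) / 1000  # hypothetical extension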

    Read the article

  • .NET web service call slower when performed asynchronously

    - by joelt
    I have an ASP.NET site, and some pages need to call a web service. I used Visual Studio's "Add Web Reference" to auto-generate classes and methods for the web service. When I call the service synchronously, i.e. objService.MethodName("param1"), a call might take a second or so. When I call it asynchronously, i.e., objService.BeginMethodName("param1", AddressOf MyCallback, Nothing), it typically takes about 6 seconds. When debugging, it appears that the bulk of the time is spent waiting between the completion of the BeginMethodName call and the beginning of MyCallback. Does the thread switching really incur that much overhead? Is there another reason for this?

    Read the article

  • Optimal way to store and pass a date to Javascript

    - by user1493115
    I need to store a date-time value in MySQL and subsequently display it on a webpage. Due to its flexibility, I usually choose to store a Unix timestamp in the database and convert it with PHP's date() to the desired format. This time, however, I would like to use MySQL's datetime field (mostly due to 2038) and apply the browser's timezone (hence I cannot simply format it on the server and pass the string to the client). I thought of storing the date as a UTC datetime in the database and sending it to the client in a well-defined format, where it will be further processed. I would like to avoid a Unix timestamp here, but anything else might add processing overhead. Is there any best practice for date processing in a MySQL, PHP, jQuery environment? Thanks.
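    The well-defined format here is usually ISO 8601 with an explicit UTC offset, which JavaScript's Date constructor parses directly and then renders in the browser's local zone. The stack in the question is PHP, so this short Python sketch only shows the shape of the server-side step (the sample value is made up):

        from datetime import datetime, timezone

        # value as read from a MySQL DATETIME column that stores UTC
        row_value = datetime(2013, 6, 1, 14, 30, 0)

        iso = row_value.replace(tzinfo=timezone.utc).isoformat()
        print(iso)  # 2013-06-01T14:30:00+00:00 -> new Date(...) in the browser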

    Read the article

  • Efficient job progress update in web application

    - by Endru6
    Hi, Creating a web application (Django in my case, but I think the question is more general) that administers a cluster of workers doing queued jobs, there is a need to track each job's progress. When I did it using database UPDATEs (PostgreSQL in this case), it severely hit database performance, because under MVCC each UPDATE creates a new row version, and in my case only vacuuming the DB removes the obsolete ones. With 30 jobs running and reporting progress every minute, the DB may require vacuuming (which means huge slowdowns on the front-end side for all the employees working with the system) every 10 days. Because the progress information isn't critical, i.e. it doesn't have to be persistent, how would you do the progress updates from jobs without the overhead a database implies? There are 30 worker servers, each doing 1 or 2 jobs simultaneously, 1 front-end server which serves the web application to users, and 1 database server.
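    Since the data is ephemeral, one common answer is to keep it in a shared cache (memcached or Redis) that both the workers and the front end can reach, with a TTL so stale entries clean themselves up. A minimal sketch, assuming Django's cache framework is configured with such a backend:

        # worker side: report progress to the shared cache instead of the database
        from django.core.cache import cache

        def report_progress(job_id, percent):
            cache.set("job_progress:%s" % job_id, percent, timeout=120)  # 120 s TTL

        # front-end side: read it back for display
        def get_progress(job_id):
            return cache.get("job_progress:%s" % job_id)  # None if unknown or expired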

    Read the article

  • Should I convert data arrays to objects in PHP in a CouchDB/Ajax app?

    - by karlthorwald
    I find myself converting between array and object all the time in a PHP application that uses CouchDB and Ajax. Of course I am also converting objects to JSON and back (sometimes for CouchDB but mostly for Ajax), but this is not disturbing my workflow as much. At present I have PHP objects that are returned by the CouchDB modules I use, and on the other hand I have the old habit of returning arrays like array("error" => "not found", "data" => $dataObj) from my functions. This leads to a mixed occurrence of real PHP objects and nested arrays, and I cast with (object) or (array) where necessary. The worst thing is that I know more or less by heart what a function returns, but not what type (array or object), so I often run into type errors. My plan is now to always cast arrays to objects before returning from a function. Of course this implies a lot of refactoring. Is this the right way to go? What about the conversion overhead? Other ideas or tips?

    Read the article

  • Securing database keys for client-side processing

    - by danp
    I have a tree of information which is sent to the client in a JSON object. In that object, I don't want to expose raw IDs coming from the database. I thought of making a hash of the id and a field in the object (title, for example) or a salt, but I'm worried that this might have a serious effect on processing overhead:

        SELECT * FROM `things` WHERE md5(concat(id, 'some salt')) = md5('1some salt');

    Is there a standard practice for obscuring IDs in this kind of situation?
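    The standard tool for this is a keyed MAC rather than a bare salted hash: the server mints an unforgeable public token from each id and verifies it in application code, so the real id still hits an indexed WHERE id = ? lookup instead of hashing every row as in the query above. A sketch of the idea in Python (the question's stack is SQL/JSON, so this only shows the shape):

        import hmac, hashlib

        SECRET = b"server-side-secret"  # never sent to the client

        def public_id(raw_id):
            tag = hmac.new(SECRET, str(raw_id).encode(), hashlib.sha256).hexdigest()[:16]
            return "%d-%s" % (raw_id, tag)

        def verify(token):
            raw, _ = token.rsplit("-", 1)
            if not hmac.compare_digest(public_id(int(raw)), token):
                raise ValueError("tampered id")
            return int(raw)  # safe to use in an indexed query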

    Read the article

  • Namespacing technique in JavaScript, recommended? performant? issues to be aware of?

    - by Bjartr
    In a project I am working on I am structuring my code as follows:

        MyLib = {
            AField: 0,
            ASubNamespace: {
                AnotherField: "value",
                AClass: function(param) {
                    this.classField = param;
                    this.classFunction = function() {
                        // stuff
                    }
                }
            },
            AnotherClass: function(param) {
                this.classField = param;
                this.classFunction = function() {
                    // stuff
                }
            }
        }

    and so on like that to do stuff like:

        var anInstance = new MyLib.ASubNamespace.AClass("A parameter.");

    Is this the right way to go about achieving namespacing? Are there performance hits, and if so, how drastic? Do performance degradations stack as I nest deeper? Are there any other issues I should be aware of when using this structure? I care about every little bit of performance because it's a library for realtime graphics, so I'm taking any overhead very seriously.

    Read the article

  • Deployment process

    - by Balaji
    We have a massive system of around 15 servers hosting .NET WCF services, an MVC application, etc. When we do a deployment (out of office hours) we have to uninstall and install everything on the live servers. This takes a lot of time, and if something goes wrong we have to roll back everything. Can you please suggest something different? For example, deploying to another environment (whenever you like) and switching the URL to point to the new servers (this comes with the overhead of maintaining two copies of production, active and passive). Any other ideas, please.

    Read the article

  • Should I put a try-finally block after every Object.Create?

    - by max
    I have a general question about best practice in OO Delphi. Currently, I put try-finally blocks anywhere I create an object, to free that object after usage (to avoid memory leaks). E.g.:

        aObject := TObject.Create;
        try
          aObject.AProcedure();
          ...
        finally
          aObject.Free;
        end;

    instead of:

        aObject := TObject.Create;
        aObject.AProcedure();
        ...
        aObject.Free;

    Do you think it is good practice, or too much overhead? And what about the performance?

    Read the article

  • C++ STL: Array vs Vector: Raw element accessing performance

    - by oh boy
    I'm building an interpreter, and as I'm aiming for raw speed this time, every clock cycle matters in this (raw) case. Do you have any experience or information on which of the two is faster: vector or array? All that matters is the speed at which I can access an element (opcode fetching); I don't care about inserting, allocation, sorting, etc. I'm going to lean out of the window now and say: arrays are at least a bit faster than vectors in terms of accessing an element i. It seems really logical to me: with vectors you have all that safety and bookkeeping overhead which doesn't exist for arrays. (Why) Am I wrong? No, I can't ignore the performance difference, even if it is so small; I have already optimized and minimized every other part of the VM which executes the opcodes :)

    Read the article

  • In-memory data structure that supports boolean querying

    - by sanity
    I need to store data in memory where I map one or more key strings to an object, as follows:

        "green", "blue"  -> object1
        "red", "yellow" -> object2

    I need to be able to efficiently retrieve a list of objects whose strings match some boolean criteria, such as:

        ("red" OR "green") AND NOT "blue"

    I'm working in Java, so the ideal solution would be an off-the-shelf Java library. I am, however, willing to implement something from scratch if necessary. Anyone have any ideas? I'd rather avoid the overhead of an in-memory database if possible; I'm hoping for something comparable in speed to a HashMap (or at least the same order of magnitude).
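    The usual structure for this is an inverted index (tag -> set of objects), after which the boolean operators fall out as set algebra. A Python sketch of the idea (in Java the same shape works with a HashMap<String, Set<T>> and set union/intersection/difference):

        from collections import defaultdict

        index = defaultdict(set)  # tag -> set of objects

        def add(obj, *tags):
            for t in tags:
                index[t].add(obj)

        add("object1", "green", "blue")
        add("object2", "red", "yellow")

        # ("red" OR "green") AND NOT "blue"
        result = (index["red"] | index["green"]) - index["blue"]
        print(result)  # {'object2'}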

    Read the article

  • Difference between the Document classes

    - by takoi
    I've been reading the javadocs trying to get a grasp of the Swing Document API, but I can't get anything sensible out of it because there are so many classes: Document, StyledDocument, AbstractDocument, DefaultStyledDocument, PlainDocument, HTMLDocument, and someone mentioned DocumentFilter. This question is more general: can someone give an overview of the differences between the implementations, and what the different interfaces and abstract classes are for? For my specific case, what I want to achieve is a data structure that will hold only three lines of text, and attributes must not be per line or per document. I will have a couple of thousand of these in some other structure, so overhead is important. Is there anything I can use for this, or is it better to extend something? If so, what?

    Read the article

  • What can we do to make XML processing faster?

    - by adpd
    We work on an internal corporate system that has a web front-end as one of its interfaces. The front-end (Java + Tomcat + Apache) communicates with the back-end (a proprietary system written in a COBOL-like language) through SOAP web services. As a result, we pass large XML files back and forth. We believe that this architecture has a significant impact on performance due to the large overhead of XML transportation and parsing. Unfortunately, we are stuck with this architecture. How can we make this XML set-up more efficient? Any tips or techniques are greatly appreciated.
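    When the payload itself can't change, the usual levers are streaming parsers (StAX or SAX on the Java side, rather than DOM) and HTTP compression, since SOAP XML compresses very well. A Python sketch of the streaming idea, which never holds the whole document in memory (the <record> element name is hypothetical):

        import xml.etree.ElementTree as ET

        def stream_records(path):
            # process each element as its end tag arrives, then discard it
            for event, elem in ET.iterparse(path, events=("end",)):
                if elem.tag.endswith("record"):
                    yield elem.findtext("id")
                    elem.clear()  # free the subtree immediately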

    Read the article

  • Multiple row insertion in a faster way

    - by Cyborgo
    Hi, I am using Rails and I am trying to insert new rows into a table. The data comes from a text file, and I created a hash with one of the attributes as the key. Then I find all the keys that are currently not in the table. Then I need to insert all of them into the table with all the attributes that I saved in the hash:

        newvals.each { |x|
          author = Author.create(:first => x, :second => hash[x][0], :third => hash[x][1])
          author.save
        }

    This has a lot of overhead, as it makes one SQL insert at a time. The newvals array is usually quite big, around 20-30k entries. I know create can insert multiple objects at once, but I can't seem to figure out how. Is there any way to get this done? Thanks.
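    The general fix is to batch everything into one multi-row INSERT inside a single transaction (in Rails, gems such as activerecord-import wrap this up). The shape of the technique, sketched in Python with the DB-API since it is the same on any stack (table and data are stand-ins for the question's):

        import sqlite3  # any DB-API driver works the same way

        newvals = ["a", "b"]
        hash_ = {"a": ["x", "y"], "b": ["u", "v"]}

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE authors (first TEXT, second TEXT, third TEXT)")

        rows = [(x, hash_[x][0], hash_[x][1]) for x in newvals]
        with conn:  # one transaction instead of 20-30k
            conn.executemany("INSERT INTO authors VALUES (?, ?, ?)", rows)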

    Read the article

  • Gradient boosting predictions in low-latency production environments?

    - by lockedoff
    Can anyone recommend a strategy for making predictions using a gradient boosting model in the <10-15 ms range (the faster the better)? I have been using R's gbm package, but the first prediction takes ~50 ms (subsequent vectorized predictions average 1 ms, so there appears to be overhead, perhaps in the call to the C++ library). As a guideline, there will be ~10-50 inputs and ~50-500 trees. The task is classification and I need access to predicted probabilities. I know there are a lot of libraries out there, but I've had little luck finding information even on rough prediction times for them. Training will happen offline, so only predictions need to be fast; also, predictions may come from a piece of code / library that is completely separate from whatever does the training (as long as there is a common format for representing the trees).
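    Since training happens offline and the questioner is open to a separate prediction component, one option is to export the trees and evaluate them in-process with no library call at all; boosted-tree scoring is only a few comparisons per tree. A toy Python sketch (the node format is made up for illustration):

        import math

        # each node: (feature_index, threshold, left, right); a bare float is a leaf score
        def score_tree(node, x):
            while isinstance(node, tuple):
                feat, thresh, left, right = node
                node = left if x[feat] <= thresh else right
            return node

        def predict_proba(trees, x, base=0.0):
            margin = base + sum(score_tree(t, x) for t in trees)
            return 1.0 / (1.0 + math.exp(-margin))  # logistic link for classification

        tree = (0, 0.5, -1.2, (1, 2.0, 0.3, 0.8))  # toy tree over x[0], x[1]
        print(predict_proba([tree], [0.7, 1.0]))   # ~0.574

    At ~500 trees and ~10 comparisons each, this is microseconds of work, comfortably inside a 10-15 ms budget.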

    Read the article

  • Android: Scrollable (bitmap) screen

    - by somin
    I am currently implementing a view in Android that uses a bitmap larger than the screen as a background and then has drawables drawn on top of it, so as to simulate a "map" that can be scrolled horizontally as well as vertically. This is done by using a canvas, drawing the full "map" bitmap to it, putting the other images on top as an overlay, and then drawing only the viewable part of this to the screen, overriding the touch events to redraw the screen on a scroll/fling. I'm sure this probably has a huge amount of overhead (by creating a canvas of the full image while using (drawing) only a fifth of it) and could be done differently, but I was just wondering what people would do in this situation, and perhaps examples? If you need more info just let me know, Thanks, Simon

    Read the article

  • I want to make a wrapped access type for certain internals of one of my classes and I have some performa

    - by Alex
    I am writing an abstract matrix class (and some concrete subclasses) for use on very different hardware and architectures, and I want to write row and column types that provide a transparent reference to the rows and columns of the matrix. However, I want to tune for performance, so I'd like these classes to be essentially a compiler construct. In other words, I'm willing to sacrifice some dev time to make the overhead of these classes as small as possible. I assume all (small) methods would want to be virtual? Keep the structure small? Any other suggestions?

    Read the article

  • try finally block around every Object.Create?

    - by max
    Hi, I have a general question about best practice in OO Delphi. Currently, I put a try-finally block everywhere I create an object, to free that object after usage (to avoid memory leaks). E.g.:

        aObject := TObject.Create;
        try
          aObject.AProcedure();
          ...
        finally
          aObject.Free;
        end;

    instead of:

        aObject := TObject.Create;
        aObject.AProcedure();
        ...
        aObject.Free;

    Do you think it is good practice, or too much overhead? And what about the performance?

    Read the article

  • Implementing prompts in text-input

    - by AntonAL
    Hi, I have a form with some text inputs: login, password. If the user sees this form for the first time, the text inputs should "contain" prompts, like "enter login", "enter password". If the user clicks a text input, its prompt should disappear to allow typing. I have seen various examples that use a background image with the text pre-rendered on it. Those images are toggled with the following jQuery:

        $("form > :text").focus(function() {
            // hide image
        }).blur(function() {
            // show image, if text-input is still empty
            if ($(this).val() == "")
                // show image with prompt
        });

    This approach has the following problems: localization is impossible; images need to be pre-rendered for every textual prompt; loading the images adds overhead. How do you overcome such problems?

    Read the article

  • How does Array.ForEach() compare to standard for loop in C#?

    - by DaveN59
    I pine for the days when, as a C programmer, I could type:

        memset(byte_array, 0xFF, sizeof(byte_array));

    and get a byte array filled with 'FF' characters. So, I have been looking for a replacement for this:

        for (int i = 0; i < byteArray.Length; i++)
        {
            byteArray[i] = 0xFF;
        }

    Lately, I have been using some of the new C# features and have been using this approach instead:

        Array.ForEach<byte>(byteArray, b => b = 0xFF);

    Granted, the second approach seems cleaner and is easier on the eye, but how does the performance compare to using the first approach? Am I introducing needless overhead by using Linq and generics? Thanks, Dave

    Read the article

  • Why do dicts of defaultdict(int)'s use so much memory? (and other simple python performance question

    - by dukhat
    import numpy as num
        from collections import defaultdict

        topKeys = range(16384)
        keys = range(8192)
        table = dict((k, defaultdict(int)) for k in topKeys)
        dat = num.zeros((16384, 8192), dtype="int32")

        print "looping begins"
        # how much memory should this use? I think it shouldn't use more than a few
        # times the memory required to hold (16384*8192) int32's (512 MB), but it uses 11 GB!
        for k in topKeys:
            for j in keys:
                dat[k, j] = table[k][j]
        print "done"

    What is going on here? Furthermore, this similar script takes eons to run compared to the first one, and also uses an absurd quantity of memory:

        topKeys = range(16384)
        keys = range(8192)
        table = [(j, 0) for k in topKeys for j in keys]

    I guess Python ints might be 64-bit ints, which would account for some of this, but do these relatively natural and simple constructions really produce such a massive overhead? I guess these scripts show that they do, so my question is: what exactly is causing the high memory usage in the first script, and the long runtime and high memory usage of the second script, and is there any way to avoid these costs?
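    The memory goes into CPython object overhead rather than the payload: every key and value in those 16384 defaultdicts is a full boxed object plus a hash-table slot, and the second script materializes 16384 * 8192 = ~134 million tuples. At tens of bytes per object, 11 GB is roughly what you'd expect, whereas the numpy array stores the same values in 16384 * 8192 * 4 bytes = 512 MB. A small sketch to see the per-object costs (numbers are for 64-bit CPython and vary by version):

        import sys
        from collections import defaultdict

        print(sys.getsizeof(1))       # ~28 bytes for one boxed int
        print(sys.getsizeof((1, 0)))  # ~56 bytes for one 2-tuple, plus its elements

        d = defaultdict(int)
        for j in range(8192):
            d[j] = 0
        print(sys.getsizeof(d))       # the hash table alone runs to hundreds of KB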

    Read the article

  • String manipulation appears to be inefficient

    - by user2964780
    I think my code is too inefficient. I'm guessing it has something to do with using strings, though I'm unsure. Here is the code:

        genome = FASTAdata[1]
        genomeLength = len(genome)

        # Hash table holding all the k-mers we will come across
        kmers = dict()

        # We go through all the possible k-mers by index
        for outer in range(0, genomeLength-1):
            for inner in range(outer+2, outer+22):
                substring = genome[outer:inner]
                if substring in kmers:
                    # we already have this substring on record, so increase its
                    # count (number of appearances) by 1
                    kmers[substring] += 1
                else:
                    kmers[substring] = 1  # otherwise record that it's here once

    This is to search through all substrings of length at most 20. Now this code seems to take forever and never terminate, so something has to be wrong here. Is using [:] on strings causing the huge overhead? And if so, what can I replace it with? And for clarity, the file in question is nearly 200 MB, so pretty big.
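    Slicing is only a per-slice O(k) copy; the real cost is the count: ~200 million starting positions times 20 lengths is roughly 4 billion slices and dict operations, and the dict itself ends up holding hundreds of millions of keys. Counting one k at a time, for only the lengths actually needed, keeps both in check; a sketch:

        from collections import Counter

        def count_kmers(genome, k):
            # one pass; the generator avoids building a list of slices
            return Counter(genome[i:i+k] for i in range(len(genome) - k + 1))

        counts = count_kmers("ACGTACGTAC", 3)
        print(counts.most_common(2))  # [('ACG', 2), ('CGT', 2)]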

    Read the article

  • IIS 7's Sneaky Secret to Get COM-InterOp to Run

    - by David Hoerster
    Originally posted on: http://geekswithblogs.net/DavidHoerster/archive/2013/06/17/iis-7rsquos-sneaky-secret-to-get-com-interop-to-run.aspx

    If you're like me, you don't really do a lot with COM components these days. For me, I've been 'lucky' to stay in the managed world for the past 6 or 7 years. Until last week.

    I'm running a project to upgrade a web interface to an older COM-based application. The old web interface is all classic ASP and lots of tables, in-line styles and a bunch of other late 90's and early 2000's goodies. So in addition to updating the UI to be more modern looking and responsive, I decided to give the server side an update, too. So I built some COM-InterOp DLLs (easily, through VS2012's Add Reference feature... nothing new here) and built a test console app to make sure the COM DLLs were actually built according to the COM spec. There's a document management system that I'm thinking of whose COM DLLs were not proper COM DLLs and crashed and burned every time .NET tried to call them through a COM-InterOp layer.

    Anyway, my test app worked like a champ and I felt confident that I could build a nice facade around the COM DLLs, wrap some functionality internally, and only expose to my users/clients what they really needed. So I did this, built some tests and also built a test web app to make sure everything worked great. It did. It ran fine in IIS Express via Visual Studio 2012, and the timings were very close to the pure classic ASP calls, so there wasn't much overhead involved in going through the COM-InterOp layer.

    You know where this is going, don't you?

    So I deployed my test app to a DEV server running IIS 7.5. When I went to my first test page that called the COM-InterOp layer, I got this pretty message:

        Retrieving the COM class factory for component with CLSID {81C08CAE-1453-11D4-BEBC-00500457076D} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).

    It worked as a console app and while running under IIS Express, so it must be permissions, right? I gave every account I could think of all sorts of COM+ rights and nothing, nada, zilch!

    Then I came across this question on Experts Exchange, and at the bottom of the page, someone mentioned that the app pool should be set to allow 32-bit apps to run. Oh yeah, my machine is 64-bit; these COM DLLs I'm using are old and are definitely 32-bit. I didn't check for that and didn't even think about that. But I went ahead and looked at the app pool that my web site was running under, and what did I see? Yep: select your app pool in IIS 7.x, click on Advanced Settings and check for "Enable 32-bit Applications". I went ahead and set it to True and my test application suddenly worked.

    Hope this helps somebody out there from pulling out your hair.
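    The same setting can also be flipped from the command line, which is handy when there is more than one server to fix; a one-liner using IIS's appcmd tool (the app pool name here is a placeholder):

        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true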

    Read the article

  • Print SSRS Report / PDF automatically from SQL Server agent or Windows Service

    - by Jeremy Ramos
    Originally posted on: http://geekswithblogs.net/JeremyRamos/archive/2013/10/22/print-ssrs-report--pdf-from-sql-server-agent-or.aspx

    I have turned the Web upside-down to find a solution to this, considering the fewest components and the least maintenance possible to achieve automated printing of an SSRS report. The reason is that we do not have a full software development team to maintain an app, and we have to minimize the support overhead for the support team.

    Here is my setup:

        • SQL Server 2008 R2 on Windows Server 2008 R2
        • PDF-format reports generated by SSRS report subscriptions to a Windows file share
        • Network printer
        • Coloured reports with logo and branding

    I found and tested the following solutions, to no avail:

        • Calling the Adobe Acrobat Reader exe: "C:\Program Files (x86)\Adobe\Reader 11.0\Reader\acroRd32.exe" /n /s /o /h /t "C:\temp\print.pdf" "\\printserver\printername". Pros: very simple option. Cons: Adobe Acrobat Reader requires launching the GUI to send a job to a printer, so this option cannot be used when printing from a service.
        • Calling the Adobe Acrobat Reader exe as a process from a .NET console app. Pros: a bit harder than the above, but still a simple solution. Cons: same as above.
        • PowerShell script (Start-Process -FilePath "C:\temp\print.pdf" -Verb Print). Pros: very simple option. Cons: uses the default PDF client in quiet mode to print, but also requires an active session.
        • Foxit Reader. Pros: very simple option. Cons: requires a GUI, same as Adobe Acrobat Reader.
        • Using the Reporting Services Web service to run and stream the report to an image object and then pass it to the printer. Cons: quite complex; this is what we're trying to avoid.

    After pulling my hair out for two days, testing and evaluating the above solutions, I ended up learning more about printers (more than ever in my entire life) and how printer drivers work with PostScript. I then bumped into a PostScript interpreter called GhostScript (http://www.ghostscript.com/), and the solution started to get clearer and clearer.

    I managed to achieve a solution (maybe not the simplest, but efficient enough to achieve the least-maintenance, fewest-components goal) in 3 simple steps:

        1. Install GhostScript (http://www.ghostscript.com/download/) - an open-source PostScript and PDF interpreter. Printing directly using GhostScript only produces grayscale prints using the generic LaserJet driver, unless you save as a BMP image and then interpret the colours using the image.
        2. Install GSView (http://pages.cs.wisc.edu/~ghost/gsview/) - a GhostScript add-on that makes it easier to print directly to a Windows printer. GSPrint automates the above PDF -> BMP -> printer driver chain.
        3. Run the GSPrint command from SQL Server Agent or a Windows service:

               "C:\Program Files\Ghostgum\gsview\gsprint.exe" -color -landscape -all -printer "printername" "C:\temp\print.pdf"

           Command-line options are documented here: http://pages.cs.wisc.edu/~ghost/gsview/gsprint.htm

    Another lesson learned: since you are calling the script from the service account, it will not necessarily have the printer mapped in its Windows profile (if it even has one). The workaround is to add a local printer as you normally would and then map this printer to the network printer. Note that you may need to install the printer driver locally on the server.

    So, that's it! There are many ways to achieve a solution. The key thing is how you provide the smartest solution!
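    If the caller is a script rather than SQL Server Agent running the command directly, the same invocation is easy to wrap with a timeout and an error check. A minimal Python sketch (paths and printer name are the post's own placeholders):

        import subprocess

        GSPRINT = r"C:\Program Files\Ghostgum\gsview\gsprint.exe"

        def print_pdf(pdf_path, printer):
            # mirrors the post's command line; raises if gsprint exits non-zero
            subprocess.run(
                [GSPRINT, "-color", "-landscape", "-all", "-printer", printer, pdf_path],
                check=True, timeout=120,
            )

        print_pdf(r"C:\temp\print.pdf", "printername")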

    Read the article

< Previous Page | 26 27 28 29 30 31 32 33 34 35 36 37  | Next Page >