Search Results

Search found 10536 results on 422 pages for 'cpu usage'.

Page 286/422 | < Previous Page | 282 283 284 285 286 287 288 289 290 291 292 293  | Next Page >

  • Slow insert speed in Postgresql memory tablespace

    - by Prashant
    Hi, I have a requirement to store records at a rate of 10,000 records/sec into a database (with indexing on a few fields). Each record has 25 columns. I am doing a batch insert of 100,000 records in one transaction block. To improve the insertion rate, I changed the tablespace from disk to RAM; with that I am able to achieve only 5,000 inserts per second. I have also done the following tuning in the Postgres config:

        Indexes : no
        fsync : false
        logging : disabled

    Other information:

        Tablespace : RAM
        Number of columns in one row : 25 (mostly integers)
        CPU : 4 cores, 2.5 GHz
        RAM : 48 GB

    I am wondering why a single insert query takes around 0.2 ms on average when the database is not writing anything to disk (as I am using a RAM-based tablespace). Is there something I am doing wrong? Help appreciated. Prashant
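
    One thing worth checking with numbers like these is whether rows go in as individual INSERT statements or through the COPY path, which avoids per-statement parse/plan overhead. A minimal sketch with psycopg2 (the table name, columns, and connection string are placeholders, not taken from the question):

        import io
        import psycopg2  # assumed client library; any driver exposing COPY works

        # Connection settings and table layout here are made up for illustration.
        conn = psycopg2.connect("dbname=mydb user=postgres")
        cur = conn.cursor()

        # Build one batch of rows as a tab-separated in-memory buffer.
        rows = [(i, i * 2, i * 3) for i in range(100000)]
        buf = io.StringIO("".join("%d\t%d\t%d\n" % r for r in rows))

        # COPY streams the whole batch in one statement, so the server is not
        # parsing and planning 100,000 separate INSERTs.
        cur.copy_from(buf, "records", columns=("a", "b", "c"))
        conn.commit()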

    Read the article

  • Getting the battery current values for the Android Phone

    - by themangoman
    I am trying to collect power usage statistics for the Android G1 phone. I am interested in the values of voltage and current, so that I can collect statistics as reported in this PDF. I am able to get the battery voltage by registering an intent receiver for the ACTION_BATTERY_CHANGED broadcast. The problem is that Android does not expose the current value through this SDK interface. One way I tried is the sysfs interface, where I can read the battery current from an adb shell with the following command:

        $ cat /sys/class/power_supply/battery/batt_current
        449

    But that too works only while the phone is connected via the USB interface. If I disconnect the phone, I see the value of batt_current as '0'. I am not sure why the reported current is zero; it should be more than zero, right? Any suggestions / pointers for getting the battery current value? Also, please correct me if I am wrong.

    Read the article

  • Memory Efficient file append

    - by lboregard
    I have several files whose contents need to be merged into a single file. I have the following code that does this, but it seems rather inefficient in terms of memory usage. Would you suggest a better way to do it? (The Util.MoveFile function simply accounts for moving files across volumes.)

        private void Compose(string[] files)
        {
            string outFile = @"c:\final.txt";
            using (FileStream fsOut = new FileStream(outFile + ".tmp", FileMode.Create))
            {
                foreach (string inFile in files)
                {
                    if (!File.Exists(inFile))
                    {
                        continue;
                    }
                    byte[] bytes;
                    using (FileStream fsIn = new FileStream(inFile, FileMode.Open))
                    {
                        bytes = new byte[fsIn.Length];
                        fsIn.Read(bytes, 0, bytes.Length);
                    }
                    //using (StreamReader sr = new StreamReader(inFile))
                    //{
                    //    text = sr.ReadToEnd();
                    //}
                    // write the segment to the final file
                    fsOut.Write(bytes, 0, bytes.Length);
                    File.Delete(inFile);
                }
            }
            Util.MoveFile(outFile + ".tmp", outFile);
        }
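
    For comparison, the streaming pattern the question is after (copy each input in fixed-size chunks instead of reading whole files into memory) looks roughly like this as a Python sketch; it is not the poster's C#, and the file names are placeholders:

        import os
        import shutil

        def compose(files, out_file="final.txt"):
            tmp = out_file + ".tmp"
            with open(tmp, "wb") as out:
                for name in files:
                    if not os.path.exists(name):
                        continue
                    with open(name, "rb") as src:
                        # copyfileobj streams in fixed-size chunks, so memory
                        # use stays flat no matter how large the inputs are.
                        shutil.copyfileobj(src, out)
                    os.remove(name)
            os.replace(tmp, out_file)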

    Read the article

  • Small web-framework like Sinatra, Ramaze etc in .NET

    - by Christian W
    Are there any frameworks similar to Sinatra, Ramaze, etc. in .NET? In theory I'm after a framework that lets me create an entire webapp with just one class file (conceptually), like Sinatra. I'm going to use it for something work-internal, where ASP.NET MVC is too "big" (and I get confused by its usage) and I have WebForms up to my ears right now (doing a big WebForms-based project, currently hating it ;) ). Any suggestions? Oh, and I need to be able to host it in IIS. I would go for IronRuby with Sinatra, but I can't find a step-by-step tutorial for setting it up in IIS ;)

    Read the article

  • What are the real-world benefits of declarative-UI languages such as XAML and QML?

    - by Stu Mackellar
    I'm currently evaluating QtQuick (Qt User Interface Creation Kit), which will be released as part of Qt 4.7. QML is the JavaScript-based declarative language behind QtQuick. It seems to be a very powerful concept, but I'm wondering if anybody that's made extensive use of other, more mature declarative-UI languages like XAML in WPF or Silverlight can give any insight into the real-world benefits that can be gained from this style of programming. Various advantages are often cited:

        - Speed of development
        - Forces separation between presentation and logic
        - Better integration between coders and designers
        - UI changes don't require re-compilation

    Also, are there any downsides? A few potential areas of concern spring to mind:

        - Execution speed
        - Memory usage
        - Added complexity

    Are there any other considerations that should be taken into account?

    Read the article

  • What happens after a packet is captured?

    - by Rayne
    Hi all, I've been reading about what happens after packets are captured by NICs, and the more I read, the more confused I am. Firstly, I've read that traditionally, after a packet is captured by the NIC, it gets copied to a block of memory in kernel space, and then to user space for whatever application then works on the packet data. Then I read about DMA, where the NIC copies the packet directly into memory, bypassing the CPU. So is the NIC -> kernel memory -> user-space memory flow still valid? Also, do most NICs (e.g. Myricom) use DMA to improve packet capture rates? Secondly, does RSS (Receive Side Scaling) work similarly on both Windows and Linux systems? I can only find detailed explanations of how RSS works in MSDN articles, where they talk about how RSS (and MSI-X) works on Windows Server 2008. But the same concepts of RSS and MSI-X should still apply to Linux systems, right? Thank you. Regards, Rayne
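
    To make the "copied to user space for the application" step concrete, here is a minimal Linux-only Python sketch of the receiving end using a raw AF_PACKET socket (requires root; the interface name is an assumption, and this shows the traditional copy path rather than any DMA-based capture framework):

        import socket

        ETH_P_ALL = 0x0003  # pseudo-protocol meaning "all Ethernet protocols"

        # However the frame reached kernel memory (programmed I/O or DMA), a raw
        # socket is one way a user-space program receives its own copy of it.
        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
        s.bind(("eth0", 0))  # adjust the interface name for your machine

        while True:
            frame, addr = s.recvfrom(65535)
            # Each recvfrom returns one frame copied from kernel space into this buffer.
            print(len(frame), "bytes on", addr[0])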

    Read the article

  • LISP: Keyword parameters, supplied-p

    - by echox
    At the moment I'm working through "Practical Common Lisp" by Peter Seibel. In the chapter "Practical: A Simple Database" (http://www.gigamonkeys.com/book/practical-a-simple-database.html), Seibel explains keyword parameters and the usage of a supplied-p parameter with the following example:

        (defun foo (&key a (b 20) (c 30 c-p)) (list a b c c-p))

    Results:

        (foo :a 1 :b 2 :c 3) ==> (1 2 3 T)
        (foo :c 3 :b 2 :a 1) ==> (1 2 3 T)
        (foo :a 1 :c 3)      ==> (1 20 3 T)
        (foo)                ==> (NIL 20 30 NIL)

    So if I use &key at the beginning of my parameter list, I can give each parameter a list of three elements: the name, a default value, and a third variable that tells whether the parameter has been supplied or not. OK. But looking at the code in the above example, (list a b c c-p), how does the Lisp interpreter know that c-p is my "supplied" parameter?

    Read the article

  • Is this an F# quotations bug?

    - by ControlFlow
        [<ReflectedDefinition>]
        let rec x = (fun() -> x + "abc") ()

    The sample code with the recursive value above produces the following F# compiler error:

        error FS0432: [<ReflectedDefinition>] terms cannot contain uses of the prefix splice operator '%'

    I can't see any splice operator usage in the code above, so it looks like a bug... :) It seems this is a problem with quotation via ReflectedDefinitionAttribute only; a normal quotation works well:

        let quotation = <@ let rec x = (fun() -> x + "abc") () in x @>

    This produces the expected result, with the hidden Lazy.create and Lazy.force usages:

        val quotation : Quotations.Expr<string> =
          LetRecursive
            ([(x, Lambda (unitVar, Application (Lambda (unitVar0,
                  Call (None, String op_Addition[String,String,String](String, String),
                    [Call (None, String Force[String](Lazy`1[System.String]), [x]), Value ("abc")])),
                  Value (<null>)))),
              (x, Call (None, Lazy`1[String] Create[String](FSharpFunc`2[Unit,String]), [x])),
              (x, Call (None, String Force[String](Lazy`1[String]), [x]))],
             x)

    So the question is: is this an F# compiler bug or not?

    Read the article

  • Using numeric_limits::max() in constant expressions

    - by FireAphis
    Hello, I would like to define, inside a class, a constant whose value is the maximum possible int. Something like this:

        class A
        {
            ...
            static const int ERROR_VALUE = std::numeric_limits<int>::max();
            ...
        }

    This declaration fails to compile with the following message:

        numeric.cpp:8: error: 'std::numeric_limits::max()' cannot appear in a constant-expression
        numeric.cpp:8: error: a function call cannot appear in a constant-expression

    I understand why this doesn't work, but two things look weird to me:

        1. It seems to me a natural decision to use the value in constant expressions. Why did the language designers decide to make max() a function, thus not allowing this usage?
        2. The spec claims in 18.2.1 that "For all members declared static const in the numeric_limits template, specializations shall define these values in such a way that they are usable as integral constant expressions." Doesn't that mean I should be able to use it in my scenario, and doesn't it contradict the error message?

    Thank you.

    Read the article

  • What is the performance impact of enabling WebSphere PMI?

    - by Andrew Whitehouse
    I am currently looking at some JProfiler traces from our WebSphere-based application, and am noticing that a significant amount of CPU time is being spent in the class com.ibm.io.async.AsyncLibrary.getCompletionData2. I am guessing, but I am wondering whether this is PMI-related (and we do have this enabled). My knowledge of PMI is limited, as this is managed by another team. Is it expected that PMI can have this sort of impact? (If so) Is the only option to turn it off completely? Or are there some types of data capture that have a particularly high overhead?

    Read the article

  • Output Path property issue on build server

    - by Jay Clarke
    I am working in the .NET 3.5 framework. I have a project that builds fine locally, and I can build it on our build server when the source files are posted there. However, when I run the build process through Visual Studio 2010 I get a warning that says:

        C:\WINDOWS\Microsoft.NET\Framework64\v3.5\Microsoft.Common.targets: The OutputPath property is not set for this project. Please check to make sure that you have specified a valid Configuration/Platform combination. Configuration='DEV' Platform='Any CPU'

    If you have any suggestions, or if you need additional information, please let me know. I have been struggling with this for a couple of days now. Thanks in advance for your help. Jay Clarke

    Read the article

  • Problem with a callback in the Python optparse module

    - by PierrOz
    Hi guys, I'm playing with Python 2.6 and its optparse module. I would like to convert one of my arguments to a datetime through a callback, but it fails. Here is the code:

        def parsedate(option, opt_str, value, parser):
            option.date = datetime.strptime(value, "%Y/%m/%d")

        def parse_options(args):
            parser = OptionParser(usage="%prog -l LOGFOLDER [-e]", version="%prog 1.0")
            parser.add_option("-d", "--date", action="callback", callback="parsedate", dest="date")
            global options
            (options, args) = parser.parse_args(args)
            print option.date.strftime()

        if __name__ == "__main__":
            parse_options(sys.argv[1:])

    I get an error from optparse.py in _check_callback: "callback not callable". I guess I'm doing something wrong in the way I define my callback, but what? And why? Can anyone help?
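
    For reference, optparse expects the callback keyword argument to be a callable rather than the string name of one, needs a type so that a value is actually passed to the callback, and callbacks conventionally store their result on parser.values. A hedged sketch of how that registration might look (my guess at a fix, not something confirmed in the thread):

        import sys
        from datetime import datetime
        from optparse import OptionParser

        def parsedate(option, opt_str, value, parser):
            # Store the parsed date on parser.values so it comes back as options.date.
            setattr(parser.values, option.dest, datetime.strptime(value, "%Y/%m/%d"))

        def parse_options(args):
            parser = OptionParser(usage="%prog -l LOGFOLDER [-e]", version="%prog 1.0")
            parser.add_option("-d", "--date", type="string", action="callback",
                              callback=parsedate, dest="date")
            options, _ = parser.parse_args(args)
            print(options.date)

        if __name__ == "__main__":
            parse_options(sys.argv[1:])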

    Read the article

  • Mac OS X, Can't start/stop MySQL via System Preferences

    - by Steve Kuo
    I downloaded and installed MySQL 5.1.47 for OS X 10.6 using the DMG archive mysql-5.1.47-osx10.6-x86_64.dmg. I also installed MySQL.prefPane and MySQLStartupItem.pkg; MySQL.prefPane is a preference pane. The problem is that whenever I attempt to start/stop MySQL from the preference pane, System Preferences just hangs. It runs at about 50% CPU forever, and eventually I have to force quit System Preferences. The same thing happens if I toggle "Automatically Start MySQL Server on Startup". Basically, the MySQL preference pane is not functional. Note that I have no problem starting MySQL from the command line:

        sudo /usr/local/mysql/bin/mysqld_safe

    I have tried reinstalling MySQL and the preference pane. I'm using the standard installation location, nothing out of the ordinary. Every time, the MySQL preference pane just hangs. I'm doing this on a MacBook Pro (Intel) running OS X 10.6.3. There are no old versions of MySQL on this machine.

    Read the article

  • Adding a scope variable to a constructor

    - by Lupus
    I'm trying to create a class-like architecture in JavaScript but I'm stuck on a point. Here is the code:

        var make = function(args) {
            var priv = args.priv,
                cons = args.cons,
                pub = args.pub;
            return function(consParams) {
                var priv = priv,
                    cons = args.cons;
                cons.prototype.constructor = cons;
                cons.prototype = $.extend({}, pub);
                if (!$.isFunction(cons)) {
                    throw new Hata(100001);
                }
                return new cons(consParams);
            }
        };

    I'm trying to add the priv variable to the scope of the returned function object and to the object scope of cons.prototype, but I could not make it work. Here is the usage of the make object:

        var myClass = make({
            cons: function() {
                alert(this.acik);
            },
            pub: { acik: 'acik' },
            priv: { gizli: 'gizli' }
        })
        myObj = myClass();

    PS: Please forgive my English...

    Read the article

  • Functional Programming - Lots of emphasis on recursion, why?

    - by peakit
    I am getting introduced to functional programming [FP] (using Scala). One thing coming out of my initial learning is that FP relies heavily on recursion, and it also seems that, in pure FP, the only way to do iterative stuff is by writing recursive functions. Because of the heavy usage of recursion, it seems the next thing FP had to worry about was stack overflows, typically due to long, winding recursive calls. This was tackled by introducing some optimizations (tail-recursion-related optimizations in the maintenance of stack frames, and the @tailrec annotation from Scala 2.8 onwards). Can someone please enlighten me why recursion is so important to the functional programming paradigm? Is there something in the specifications of functional programming languages which gets "violated" if we do stuff iteratively? If yes, then I am keen to know that as well. PS: Note that I am a newbie to functional programming, so feel free to point me to existing resources if they explain/answer my question. Also, I do understand that Scala in particular provides support for doing iterative stuff as well.
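
    To make the tail-recursion point concrete, here is a small sketch written in Python purely for illustration (Python itself does not eliminate tail calls the way a @tailrec-checked Scala method does):

        # Plain recursion: each call adds a stack frame, so a long list can overflow.
        def total(xs):
            if not xs:
                return 0
            return xs[0] + total(xs[1:])

        # Tail-recursive shape: the recursive call is the last thing done and it
        # carries an accumulator. A compiler that eliminates tail calls can turn
        # this into a loop that uses constant stack space.
        def total_acc(xs, acc=0):
            if not xs:
                return acc
            return total_acc(xs[1:], acc + xs[0])

        # The equivalent loop such an optimizer effectively produces.
        def total_loop(xs):
            acc = 0
            for x in xs:
                acc += x
            return acc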

    Read the article

  • using addListener with WordPress audio player

    - by Jacob
    Hi, I'm trying to add a listener for the stop event in the WordPress audio player, but its usage seems to be undocumented. I'm hoping someone who knows a little Flash can look at the code and tell me how it works. In the code at http://tools.assembla.com/1pixelout/browser/audio-player/trunk/source/classes/Application.as I see a snippet with this:

        ExternalInterface.call("AudioPlayer.onStop", _options.playerID);

    I was hoping that would let me capture the event in JavaScript ("player" is the ID of my player) with:

        AudioPlayer.addListener("player", "AudioPlayer.onStop", function() {
            alert('stopped');
        });

    But my JavaScript function never seems to get called.

    Read the article

  • High PageIOLatch_SH Waits with High Drive Idle times

    - by Marty Trenouth
    We are experiencing a high volume of PageIOLatch_SH waits on our database (row counts in the billions). However, it seems that our drive idle time percentage hovers around 50-60 percent, and CPU usage is nil. The Database Tuning Advisor gives no suggestions for optimization. The (actual) query plan from the single stored procedure used on the database puts the majority of the expense on index seek operations (yeah, I know these should be optimal). Does anyone have suggestions for how to increase throughput?

    Read the article

  • Using Subversion in Xcode

    - by Kevin L.
    It seems that all of the initial Google results for "using subversion with xcode" are actually just tutorials for installing and configuring svn and Xcode, as opposed to actually using the two (i.e. interacting with svn via Xcode's GUI). Is anyone aware of a good guide that teaches the tricks and pitfalls of working with svn via Xcode's GUI? Something that bridges the gap between the most excellent Version Control with Subversion book and the Xcode IDE (as in pure Xcode GUI without any terminal command use)? Edit: We all love our terminal commands, and we all love Eclipse but (and I mean this in the nicest possible way) neither is really the point of the question. I’d prefer to use svn via Xcode’s IDE instead of via terminal just as I prefer (well, for this case) to code in Xcode’s IDE instead of using vim and gcc. Apple engineers spent a good bit of time implementing that SCM menu in Xcode; someone has to have seen a usage guide somewhere.

    Read the article

  • UIImages on UITableView?

    - by babu Kong
    What is the best method to display about 300 PNG images in a UITableView? I don't want to display them all at the same time. I have 3 table view controllers that will each display about 100 images (it's for a catalog, so the images are important to display). I used [UIImage imageNamed:], but that method caches the images and they don't get released, so the memory usage is big. Is there any way to release the cache when the nav controller pushes a different view controller? I also tried [[UIImage alloc] initWithContentsOfFile:], but the images won't display. Any help?

    Read the article

  • Stack memory in Android

    - by Matt
    I'm writing an app that has a foreground service, a content provider, and an Activity front end that binds to the service and gets back a List of objects using AIDL. The service does work and updates a database. If I leave the activity open for 4-8+ hours and go to the "Running Services" section under Settings on the phone (Nexus One), an unusually large amount of memory usage is shown (~42MB). I figure there is a leak. When I check the heap memory I get: heap size ~18MB, ~2MB allocated, ~16MB free. Analyzing the hprof in Eclipse MAT seems fine, which leads me to theorize that memory is leaking on the stack. Is this even possible? If it is, what can I do to stop or investigate the leak? Is the reported memory usage in the "Running Services" section of Android even correct (I assume it is)? Another note: I have been unable to reproduce this issue when the UI is not up (with only the service running).

    Read the article

  • Raspberry Pi cluster, neuron networks and brain simulation

    - by jokoon
    Since the RBPI (Raspberry Pi) has very low power consumption and a very low production price, one could build a very big cluster with them. I'm not sure, but a cluster of 100,000 RBPIs would take little power and little room. Now, it might not be as powerful as existing supercomputers in terms of FLOPS or other sorts of computing measurements, but could it allow better neural network simulation? I'm not sure if saying "1 CPU = 1 neuron" is a reasonable statement, but it seems valid enough. So would such a cluster be more efficient for neural network simulation, since it's far more parallel than other classical clusters?

    Read the article

  • Resources for Programmatic Rendering of Topology Maps

    - by bn
    Servus. Do you know of any frameworks, APIs, languages, or other resources that are well suited for drawing topology maps and that allow a user to interact with objects on the map? I am not constrained by language choice, and the program can be web-based or stand-alone. I thought I would check before rolling my own. My goal is not to draw cartographic maps, but something more like this picture: http://www.fineconnection.com/files/images/GraphicalNM.PNG, or, if you are familiar with Edward Tufte's books, the data-visualization mechanisms he describes, such as a map of a metro or subway. Also, if you have had any experience rendering these types of user interfaces or using the underlying data structures, I would be grateful to hear any thoughts you have on the subject, any advice, and any "gotchas." Thank you very much for your time, -bn
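
    As a small illustration of the node-and-link style in that picture, here is a Python sketch using networkx and matplotlib; the library choice is mine, not something named in the question, and real tools for interactive maps would layer picking and clicking on top of a drawing like this:

        import matplotlib.pyplot as plt
        import networkx as nx

        # A toy network topology: a core switch with a few attached devices.
        g = nx.Graph()
        g.add_edges_from([
            ("core", "switch-1"), ("core", "switch-2"),
            ("switch-1", "host-a"), ("switch-1", "host-b"),
            ("switch-2", "host-c"),
        ])

        # spring_layout spaces nodes by simulated forces, a common choice for
        # topology-style diagrams that have no geographic coordinates.
        pos = nx.spring_layout(g, seed=42)
        nx.draw_networkx(g, pos, node_color="lightsteelblue", node_size=1200)
        plt.axis("off")
        plt.show()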

    Read the article

  • Lua API for TokyoTyrant

    - by jideel
    Hi SO folks, I didn't manage to find a Lua client/API for Tokyo Tyrant. Such an API exists for Tokyo Cabinet, but not for TT, while Perl and Ruby APIs exist for TT. TT provides a native binary protocol, a memcached-compatible protocol, and an HTTP-oriented protocol. So my questions are:

        1. Do you think using the memcached protocol (using luamemcached) or the HTTP protocol (using luaSocket) is "enough" for most simple usage, so that a native Lua API is not necessary? (The app is a simple UUID storage/distributor.)
        2. Does it make sense to not use Tokyo Tyrant at all, but only Tokyo Cabinet, and use Lua at the application level to provide network and concurrent access to TC, using, say, Copas? (Copas is, from their website, "a dispatcher based on coroutines that can be used by TCP/IP servers.")

    Thanks.
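
    As an aside on question 1, the memcached-compatible protocol really does reduce to ordinary get/set calls from any memcached client. A sketch in Python rather than Lua, purely to show the shape (the host, port, and library choice are assumptions, and only the basic get/set commands are assumed to be supported):

        import memcache  # python-memcached; any memcached-protocol client should do

        # Tokyo Tyrant speaks a memcached-compatible protocol, so for plain
        # get/set (enough for a simple UUID store) a generic client can talk to it.
        tt = memcache.Client(["127.0.0.1:1978"])  # 1978 is only the conventional TT port

        tt.set("uuid:42", "d9b2d63d-a233-4123-847a-1c1f0e6a60ba")
        print(tt.get("uuid:42"))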

    Read the article

  • Porting from GAE to TomCat or another servlet server

    - by bach
    Hi guys, I'm unhappy with GAE because one can't have a global variable and the 'synchronized' keyword; instead one has to catch what is basically a DB transactional exception and retry in a while loop, which will eat all my free CPU time and will start costing me money as I reach Google's quota. Is it safe to use synchronized inside a doPost() in Tomcat? (I guess it's OK as long as all the servlets are running on one VM?) If not in all Tomcat configurations, how do I configure Tomcat to make it safe? How can I convert a GAE app to my own Tomcat server? How do I install DataNucleus Access Platform on Tomcat? Best regards

    Read the article

  • How to temporarily disable read-only 2nd level cache hibernate strategy in Grails ?

    - by fabien7474
    In my Grails application, some of my domain classes will never be changed by users. However, some maintenance work is sometimes necessary, and an administrator should be able to create/edit a few instances from time to time (let's say twice a year). I would like to set a read-only second-level cache strategy for these domain classes (static mapping = { cache usage: 'read-only' }), AND I would like to be able to 'disable' (in very particular situations) the read-only strategy in order to update some instances via the Grails scaffolding edit view. Is this possible? What do you advise me to do? EDIT: The solution I am implementing is a mix of Pascal's and Burt's answers (see comments). Both answers are great and helpful, so I had a dilemma in choosing the accepted answer! Anyway, thank you.

    Read the article
