Search Results

Search found 10536 results on 422 pages for 'cpu usage'.


  • Run web.py as daemon.

    - by mamcx
    I have a simple web.py program to load data. On the server I don't want to install Apache or any other web server, so I tried to run it as a background service following http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/ and subclassing the Daemon class (from http://www.jejik.com/files/examples/daemon.py):

        class Daemon:
            def start(self):
                """ Start the daemon """
                ... PID CHECKS ...
                # Start the daemon
                self.daemonize()
                self.run()

        # My code
        class WebService(Daemon):
            def run(self):
                app.run()

        if __name__ == "__main__":
            if DEBUG:
                app.run()
            else:
                service = WebService(os.path.join(DIR_ACTUAL, 'ElAdministrador.pid'))
                if len(sys.argv) == 2:
                    if 'start' == sys.argv[1]:
                        service.start()
                    elif 'stop' == sys.argv[1]:
                        service.stop()
                    elif 'restart' == sys.argv[1]:
                        service.restart()
                    else:
                        print "Unknown command"
                        sys.exit(2)
                    sys.exit(0)
                else:
                    print "usage: %s start|stop|restart" % sys.argv[0]
                    sys.exit(2)

    However, the web.py application does not load (i.e. the service is not listening). If I call it directly (i.e. without the daemon code) it works fine.
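
    One thing that might be worth ruling out (this is an assumption about web.py's behaviour, not a confirmed diagnosis): web.py's app.run() reads sys.argv to pick the address/port to listen on, so the leftover 'start' argument from the service command line may confuse it once the daemon calls run(). Also note that the daemon recipe above changes the working directory to / and redirects stderr, which can hide startup errors. A sketch of a run() that pins the arguments first:

        import sys

        class WebService(Daemon):      # Daemon and app are the ones from the question above
            def run(self):
                # pin the listen address so app.run() does not try to parse 'start'
                sys.argv = [sys.argv[0], '0.0.0.0:8080']   # address/port here are example values
                app.run()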


  • SCons: How to use the same builders for multiple variants (release/debug) of a program

    - by OK
    The SCons User Guide describes using multiple construction environments to build multiple versions of a single program and gives the following example:

        opt = Environment(CCFLAGS = '-O2')
        dbg = Environment(CCFLAGS = '-g')

        o = opt.Object('foo-opt', 'foo.c')
        opt.Program(o)

        d = dbg.Object('foo-dbg', 'foo.c')
        dbg.Program(d)

    Instead of manually assigning different names to the objects compiled with different environments, VariantDir() / variant_dir sounds like a better solution... But if I place the Program() builder inside the SConscript:

        Import('env')
        env.Program('foo.c')

    how can I export different environments to the same SConscript file?

        opt = Environment(CCFLAGS = '-O2')
        dbg = Environment(CCFLAGS = '-g')
        SConscript('SConscript', 'opt', variant_dir='release')  # 'opt' --> 'env'???
        SConscript('SConscript', 'dbg', variant_dir='debug')    # 'dbg' --> 'env'???

    Unfortunately the discussion in the SCons Wiki does not bring more insight into this topic. Thanks for your input!
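
    For the concrete question at the end, one pattern that may work is passing each environment through an exports dictionary (a sketch only, not tested against this project; the SConscript stays exactly as shown above):

        # SConstruct (sketch)
        opt = Environment(CCFLAGS = '-O2')
        dbg = Environment(CCFLAGS = '-g')

        # each call sees its own 'env' inside the shared SConscript
        SConscript('SConscript', exports={'env': opt}, variant_dir='release', duplicate=0)
        SConscript('SConscript', exports={'env': dbg}, variant_dir='debug', duplicate=0)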


  • Compiling a C program with a specific architecture

    - by Marplesoft
    I was recently fighting some problems trying to compile an open source library on my Mac that depended on another library and got some errors about incompatible library architectures. Can somebody explain the concept behind compiling a C program for a specific architecture? I have seen the -arch compiler flag before and have seen values passed to it such as ppc, i386 and x86_64 which I assume maps to the CPU "language", but my understanding stops there. If one program uses a particular architecture, do all libraries that it loads need to be on the same architecture as well? How can I tell what architecture a given program/process is running under?
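
    For the last question, a few stock OS X toolchain commands may help (the file names here are placeholders, not the poster's actual files):

        file ./myprog                   # prints the architecture(s) a binary was built for
        lipo -info libfoo.a             # lists the slices inside a (possibly fat) library
        gcc -arch i386 -arch x86_64 -o myprog myprog.c   # builds a fat binary containing both slices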


  • How to Access Data in ZODB

    - by Eric
    I have a Plone site that has a lot of data in it, and I would like to query the database for usage statistics; i.e. how many cals have more than one entry, how many blogs per group have entries after a given date, etc. I want to run the script from the command line... something like so:

        bin/instance [script name]

    I've been googling for a while now but can't find out how to do this. Also, can anybody provide some help on how to get user-specific information? Information like last logged in, items created. Thanks! Eric
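
    As a rough sketch of the kind of script this could be (the 'Plone' site id, the portal_type, and the date are all assumptions; bin/instance run binds app for you):

        # stats.py -- run with:  bin/instance run stats.py
        from DateTime import DateTime

        portal = app.Plone                    # 'Plone' is an assumed site id
        catalog = portal.portal_catalog

        cutoff = DateTime('2010/01/01')       # example date
        recent = catalog(portal_type='Blog Entry',                  # placeholder content type
                         created={'query': cutoff, 'range': 'min'})
        print 'blog entries created after %s: %d' % (cutoff, len(recent))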


  • Django admin causes high load for one model...

    - by Joe
    In my Django admin, when I try to view/edit objects from one particular model class, memory and CPU usage rocket up and I have to restart the server. I can view the list of objects fine; the problem comes when I click on one of the objects. Other models are fine. Working with the object in code (i.e. creating and displaying) is OK, the problem only arises when I try to view an object with the admin interface. The class isn't even particularly exotic:

        class Comment(models.Model):
            user = models.ForeignKey(User)
            thing = models.ForeignKey(Thing)
            date = models.DateTimeField(auto_now_add=True)
            content = models.TextField(blank=True, null=True)
            approved = models.BooleanField(default=True)

            class Meta:
                ordering = ['-date']

    Any ideas? I'm stumped. The only reason I could think of might be that the thing is quite a large object (a few kb), but as I understand it, it wouldn't get loaded until it was needed (correct?).
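
    If the slow page turns out to be the admin building a select widget with every User and every Thing as an option (a common cause with ForeignKey fields pointing at large tables), raw_id_fields is the usual workaround; a sketch, with the app label assumed:

        from django.contrib import admin
        from myapp.models import Comment    # 'myapp' is a placeholder for the real app

        class CommentAdmin(admin.ModelAdmin):
            # render the FK fields as plain ID boxes instead of fully populated <select> lists
            raw_id_fields = ('user', 'thing')

        admin.site.register(Comment, CommentAdmin)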


  • Google Toolbox For Mac with Core Data on iPhone results in error

    - by JaanusSiim
    I have set up my project to use Google Toolbox for Mac as described on the official wiki, and everything is working as expected. For Core Data usage I have created a 'database' class that, in the final application, uses SQLite storage (based on the Xcode template code). For unit tests I have created a separate init method for the 'database' that uses in-memory storage (the storage URL is [NSURL URLWithString:@"memory://store"] and the type is NSInMemoryStoreType). Without adding my model file (*.xcdatamodel) to the unit test target, the test fails in the expected place with the message:

        executeFetchRequest:error: A fetch request must have an entity.

    If I add the model file to the test target, then the test executes as expected (the Core Data part looks OK), but after the tests run I get:

        RunIPhoneUnitTest.sh: line 123:  9487 Segmentation fault      "$TARGET_BUILD_DIR/$EXECUTABLE_PATH" -RegisterForSystemEvents
        Command /bin/sh failed with exit code 139

    This problem does not look directly related to Core Data, but it only happens if the model file is added to the target. Any pointers on resolving this issue would be appreciated!


  • Get Hardware Information for HWs that are not installed

    - by Isaac
    I am pretty sure how to retrieve hardware information with WMI classes, but WMI has a big limitation: it can only get information for installed hardware. I need to retrieve information about the CPU (model, speed, etc.), video card, sound card, USB ports, etc. I found a really good piece of software (HWiNFO) that can do this even when the drivers for the hardware parts are not installed. It seems that HWiNFO uses an internal database to give a name to each hardware part. So is there any free library/DLL/component that can do this on Windows XP or higher? Note: Although the HWiNFO SDK seems good, it's not free. So it doesn't exist! ;) I need a free library.


  • haystack's RealTimeSearchIndex causes django to hang on data entry

    - by lsc
    I'm using django-haystack and a Xapian backend with real-time indexing (haystack.indexes.RealTimeSearchIndex) of model data, and it works fine on my Ubuntu server. However, it causes Django to hang upon data entry when I deployed the app on a RHEL5 server. Everything is hunky dory if I switch to a standard SearchIndex. Running ./manage.py rebuild_index manually works fine too. The major differences between the two setups are the versions of Python (2.4.3 vs 2.6.4) and of Xapian (1.0.4-1 vs 1.0.15). Any suggestions on what may be the problem? Nothing interesting appears in the logs, and I've tried different databases (mysql, sqlite3) and deployment methods (mod_python, wsgi) with no luck yet. I have noted the warning in the haystack docs stating that RealTimeSearchIndex is only handled gracefully with a Solr backend; however, I'm running a very low-traffic site with only occasional writes, so I'm fine with some CPU overhead on writes.


  • Why would you want a case sensitive database?

    - by Khorkrak
    What are some reasons for choosing a case-sensitive collation over a case-insensitive one? I can see perhaps a modest performance gain for the DB engine in doing string comparisons. Is that it? If your data is all lower or all upper case then case sensitivity could be reasonable, but it's a disaster if you store mixed-case data and then try to query it. You then have to apply a lower() function to the column so that it'll match the corresponding lower-case string literal, which prevents plain index usage in most DBMSs. So I'm wondering why anyone would use such an option.
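
    To illustrate the lower() point, a small sketch with hypothetical table and column names; the expression-index part applies only to engines that support it (e.g. PostgreSQL, Oracle):

        -- case-sensitive collation, mixed-case data: fold the column at query time
        SELECT * FROM customers WHERE LOWER(last_name) = 'smith';

        -- some engines can index the folded expression instead of the raw column
        CREATE INDEX idx_customers_lastname_lower ON customers (LOWER(last_name));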


  • Spring/Hibernate/Junit example of testing DAO against HSQLDB

    - by Ryan P.
    Hi guys, I'm working on implementing a JUnit test to check the functionality of a DAO. (The DAO will create/read a basic object/table relationship in HSQLDB.) The trouble I'm having is that the persistence of the DAO (for the non-test code) is done through an in-house solution using Spring/Hibernate, which eliminates the usual *.hbm.xml templates that most examples I have found contain. Because of this, I'm having some trouble understanding how to set up a JUnit test that uses the DAO to create/read (just very basic functionality) against an in-memory HSQLDB. I have found a few examples, but the use of the in-house persistence means I can't extend some of the classes the examples show (I can't seem to get the application-context.xml set up properly). Can anyone suggest any projects/examples I could take a look at (or any documentation) to further my understanding of the best way to implement this test functionality? I feel like this should be really simple, but I keep running into problems implementing the examples I have found. Thanks in advance!


  • Generate HTML To PDF Control for the .NET application

    - by Karan
    Has anyone used any open source or paid .NET control that does the conversion from HTML to a PDF file? At the moment I am using the Winnovative converter control, but it has a performance limitation when generating bulk pages (more than 1000) in the PDF. The limitation shows up when we use bigger images in the HTML content. Over the last 4 months I've been working with the Winnovative control and found plenty of major bugs in it. For a small application and light usage, Winnovative is good, but not at the level where the application will be used by thousands of clients. Please suggest.


  • Illegal instruction in Assembly

    - by Natasha
    I really do not understand why this simple code works fine in the first attempt, but when putting it in a procedure an error shows:

        NTVDM CPU has encountered an illegal instruction
        CS:db22 IP:4de4 OP:f0 ff ff ff ff

    The first code segment works just fine:

        .model small
        .stack 100h
        .code
        start:
        mov ax,@data
        mov ds,ax
        mov es,ax

        MOV AH,02H   ;sets cursor up
        MOV BH,00H
        MOV DH,02
        MOV DL,00
        INT 10H

        EXIT:
        MOV AH,4CH
        INT 21H
        END

    However this generates an error:

        .model small
        .stack 100h
        .code
        start:
        mov ax,@data
        mov ds,ax
        mov es,ax

        call set_cursor

        PROC set_cursor near
        MOV AH,02H   ;sets cursor up
        MOV BH,00H
        MOV DH,02
        MOV DL,00
        INT 10H
        RET
        set_cursor ENDP

        EXIT:
        MOV AH,4CH
        INT 21H
        END

    Note: Nothing is wrong with the Windows config. I have tried many sample codes that work fine. Thanks
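
    A guess at what might be happening (not verified against this exact setup): after call set_cursor returns, execution falls straight through into the procedure body a second time, and that second RET then jumps to whatever happens to be on the stack. A rearranged layout that avoids the fall-through could look like this (standard MASM/TASM ordering; adjust to your assembler's syntax):

        .model small
        .stack 100h
        .code
        start:
            mov ax,@data
            mov ds,ax
            mov es,ax
            call set_cursor
            jmp EXIT              ; do not fall through into the procedure body

        set_cursor PROC near      ; name-before-PROC is the usual MASM/TASM form
            MOV AH,02H            ; sets cursor up
            MOV BH,00H
            MOV DH,02
            MOV DL,00
            INT 10H
            RET
        set_cursor ENDP

        EXIT:
            MOV AH,4CH
            INT 21H
        END start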


  • Default value list for pipeline param in Powershell

    - by fatcat1111
    I have a PowerShell script that reads values off of the pipeline:

        PARAM (
            [Parameter(ValueFromPipeline = $true)]
            $s
        )
        PROCESS {
            echo "* $s"
        }

    Works just fine:

        PS my.ps1 foo
        * foo

    I would like the script to have a list of default values, as the most common usage will always use the same values, and storing them in the default will be most convenient. I did the usual assignment:

        PARAM (
            [Parameter(ValueFromPipeline = $true)]
            $s = 'bar'
        )
        PROCESS {
            echo "* $s"
        }

    Again, works just fine:

        PS my.ps1
        * bar
        PS my.ps1 foo
        * foo

    However, when setting the default to be a list, I get back something entirely reasonable but not at all what I want:

        PARAM (
            [Parameter(ValueFromPipeline = $true)]
            $s = @('bar', 'bat', 'boy')
        )
        PROCESS {
            echo "* $s"
        }

    Result:

        PS my.ps1
        * bar bat boy

    I expected:

        PS my.ps1
        * bar
        * bat
        * boy

    How can I get one call into the Process loop for each default value? (This is somewhat different from getting one call into Process and wrapping the current body in a big foreach loop over $s.)


  • Is it possible to cache JSP bytecode to avoid recompiles w/ Tomcat?

    - by Computer Guru
    Hi, is there any way of caching the bytecode for JSP webapps, in particular when using Tomcat as the Java servlet container? I'm getting really fed up with Tomcat taking up all the CPU for 10 minutes while it compiles 4 different webapps every time I restart it. I'm already using Jikes to "speed up" the compiles, but it's really killing me. The code does not change unless the webapp is upgraded (very rarely), and I cannot believe that there is no way to cache the compiled Java bytecode instead of recompiling it each and every time. I'd appreciate any advice on the matter!


  • writing a fast parser in python

    - by panzi
    I've written a hands-on, recursive, pure-Python parser for a file format (ARFF) we use in one lecture. Now running my exercise submission is awfully slow, and it turns out that by far the most time is spent in my parser. It's consuming a lot of CPU time; the HD is not the bottleneck. I wonder what performant ways there are to write a parser in Python? I'd rather not rewrite it in C. I tried to use Jython, but that decreased performance a lot! The files I parse are partially huge (over 150 MB) with very long lines. My current parser only needs a look-ahead of one character. I'd post the source here, but I don't know if that's such a good idea; after all, the submission deadline has not yet ended. But then, the focus in this exercise is not the parser. You can choose whatever language you want to use, and there already is a parser for Java.
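
    For comparison, a minimal sketch of a line-by-line approach using str.split (this assumes a plain comma-separated @data section with no quoted or sparse values, so it is not a drop-in replacement for a full ARFF parser):

        def parse_data_lines(path):
            rows = []
            with open(path) as f:
                in_data = False
                for line in f:
                    line = line.strip()
                    if not line or line.startswith('%'):
                        continue                      # skip blank lines and comments
                    if not in_data:
                        if line.lower() == '@data':
                            in_data = True
                        continue                      # @relation/@attribute header lines
                    rows.append(line.split(','))      # one data row per line
            return rows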


  • Current state of client-side XSLT

    - by Casey
    Last I heard, Blizzard was one of the few companies to put client-side XSLT into practice (2008). Is this still the case in 2011, or are more people now exploring this technique in production? It seems that modern browsers (IE9, FF4, Chrome) and client processing power are primed to exploit this standard for tangible savings in server CPU power and bandwidth on large-scale properties. Am I missing something?

    The negative aspects I'm aware of include:

    * additional rendering time
    * additional assets required on uncached page load
    * additional layer of complexity
    * noticeably less developer experience than server-side template techniques

    The benefits I perceive include:

    * distributed template composition (offloaded on the client)
    * caching of common template fragments offloaded on the client
    * logical separation of document structure and data
    * well-documented web standard supported by all modern browsers

    Finally, although I know it's impossible to predict the future, I am curious to know opinions on whether or not client-side XSLT's day will come. With interest in HTML5 driving users to upgrade their browsers and developers to explore new techniques, I would say yes. How about you? Thanks in advance, Casey


  • mysql query performance help

    - by Stefano
    Hi, I have a quite large table storing words contained in email messages:

        mysql> explain t_message_words;
        +----------------+---------+------+-----+---------+----------------+
        | Field          | Type    | Null | Key | Default | Extra          |
        +----------------+---------+------+-----+---------+----------------+
        | mwr_key        | int(11) | NO   | PRI | NULL    | auto_increment |
        | mwr_message_id | int(11) | NO   | MUL | NULL    |                |
        | mwr_word_id    | int(11) | NO   | MUL | NULL    |                |
        | mwr_count      | int(11) | NO   |     | 0       |                |
        +----------------+---------+------+-----+---------+----------------+

    The table contains about 100M rows. mwr_message_id is a FK to the messages table, mwr_word_id is a FK to the words table, and mwr_count is the number of occurrences of word mwr_word_id in message mwr_message_id. To calculate the most used words, I use the following query:

        SELECT SUM(mwr_count) AS word_count, mwr_word_id
        FROM t_message_words
        GROUP BY mwr_word_id
        ORDER BY word_count DESC
        LIMIT 100;

    That runs almost forever (more than half an hour on the test server):

        mysql> show processlist;
        +----+------+----------------+--------+---------+------+----------------------+-----------------------------------------------------
        | Id | User | Host           | db     | Command | Time | State                | Info
        +----+------+----------------+--------+---------+------+----------------------+-----------------------------------------------------
        | 41 | root | localhost:3148 | tst_db | Query   | 1955 | Copying to tmp table | SELECT SUM(mwr_count) AS word_count, mwr_word_id FROM t_message_words GROUP BY mwr_word_id
        +----+------+----------------+--------+---------+------+----------------------+-----------------------------------------------------
        3 rows in set (0.00 sec)

    Is there anything I can do to "speed up" the query (apart from adding more RAM, more CPU, faster disks)? Thank you in advance, stefano
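
    One thing that may be worth trying (assuming such an index does not already exist and the extra write cost on inserts is acceptable): a composite index covering the grouped and summed columns, so the aggregation can be done from the index alone instead of copying the whole table into a temporary table.

        ALTER TABLE t_message_words ADD INDEX idx_word_count (mwr_word_id, mwr_count);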


  • Does pdksh (public domain korn shell) support associative arrays?

    - by George Jempty
    I recently ran up against a wall doing some bash shell programming where an associative array would have solved my problems. I googled about features of the Korn shell and learned that it supports associative arrays, so I installed Cygwin's pdksh (public domain Korn shell). However, when trying to create an associative array in the prescribed manner (typeset -A varName), I received the following errors, so I'm beginning to suspect pdksh does not support associative arrays:

        ./find_actions.ksh: line 2: typeset: -A: invalid option
        typeset: usage: typeset [-afFirtx] [-p] name[=value] ...

    Guess I'll be considering Perl instead, but I really wanted a good excuse to learn a dialect/language new to me.


  • Solving iPhone/iPad out of memory issues

    - by Joonas Trussmann
    I have a strange issue where I'm scrolling through a paged UIScrollView which displays the pages of a PDF document (using Quartz 2D and CATiledLayer). When I page through, memory allocation looks fine: it goes up over the first few pages and then holds steady as it obviously releases the memory kept for earlier pages. Upon hitting page x (not a particular PDF page or a particular number per se), memory usage goes from a couple of megs to 308 megs and the app crashes. So my question is: how best to find what's causing this? The Object Allocations tool in Instruments shows the memory as simply going to malloc (in huge chunks).


  • Memory footprint of a parsed XML file in Classic ASP?

    - by Pete Duncanson
    Does anyone know of a way to find out the amount of memory/size of an XMLDocument once it has parsed an XML file? I've been doing "beer mat" calculations so far, but have been asked to come up with some more legit numbers through monitoring somehow. I need to create about 1500 XML files (via the FreeThreadedXML DOM object), which range between 3 and 9 KB in size, and store them in Application vars, but our SysAdmin is worried about us gobbling up too much memory. Other than the crude method of booting up a fresh IIS instance, loading everything in, and monitoring before and after memory usage in Task Manager, I can't think of a way of doing it with a bit more accuracy.


  • Windows disassembler: looking for a tool...

    - by SigTerm
    Hello. I'm looking for a (preferably free) tool that can produce a "proper" disassembly listing from a (non-.NET) Windows PE file (*.exe or *.dll). Important requirement: it should be possible to run the listing through a Windows assembler (nasm, masm or whatever) and get a working exe again (not necessarily identical to the original one, but it should behave in the same way). The intended usage is adding new subroutines into existing code when the source is not available. Ideally, the tool should be able to detect function/segment boundaries and API calls, generate proper labels for jumps (I can live without labels for loops/jumps, though, but function boundary detection would be nice), and keep program resources/segments in place. I'm already aware of IDA Pro (not free), OllyDbg (useful for in-place hacking; doesn't generate a disassembly listing, AFAIK), ndisasm (output isn't suitable for an assembler), dumpbin (useful, but AFAIK the output isn't suitable for an assembler), and the "proxy dll" technique. Ideas? Or maybe there is a book/tutorial that explains some kind of alternative approach?


  • Prevent CSS Validation for just 1 line

    - by Jaxidian
    I have a problem practically identical to this question, but I'm looking for a different solution. Instead of turning it off globally, I'd like to just disable it for a single line. I know I have seen many examples where various techniques are used to suppress different warnings, and I am looking for one that I can put in my CSS to suppress this one. Examples of ways to suppress warnings elsewhere include #pragma warning disable 659 or [SuppressMessage("Microsoft.Usage", "CA2214:DoNotCallOverridableMethodsInConstructors", Justification = "I have a good reason.")]. The CSS I want it to be quiet about has some CSS3 stuff in it, which is why it's understandably complaining:

        .round {
            border-radius: 5px;
            -webkit-border-radius: 5px;
            -moz-border-radius: 5px;
        }

    So any idea how to make my "Error 1: Validation (CSS 2.1): 'border-radius' is not a known CSS property name" error go away? I'd rather not lose all of my CSS validation, but I do want it to ignore this one "problem".


  • x86 assembler question

    - by b-gen-jack-o-neill
    Hi, I have two simple, but maybe tricky, questions. Let's say I have the assembler instruction:

        MOV EAX,[ebx+6*7]

    What I am curious about is: does this instruction really translate into an opcode as it stands, so that the computation in the brackets is encoded into the opcode? Or is it just a pseudo-instruction for the assembler, not the CPU, so that the assembler first computes the value in the brackets using add, mul and so on, stores the outcome in some register, and then uses MOV EAX,reg with the computed value? Just to be clear, I know the output will be the same; I am interested in the execution. The second question is about the LEA instruction. I know what it does, but I am more interested in whether it is a real instruction, so the assembler does not further change it and just encodes it into an opcode as it stands, or whether it is again pseudo-code that makes the assembler first compute the address and then store it.
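
    For illustration, a small NASM-style sketch of the distinction being asked about (general x86 behaviour, not specific to any particular assembler's pseudo-instructions):

        mov  eax, [ebx + 6*7]         ; 6*7 is folded by the assembler into the constant
                                      ; displacement 42; the CPU only ever sees [ebx + 42]
        mov  eax, [ebx + ecx*4 + 8]   ; base + index*scale + displacement IS encoded directly
                                      ; (SIB byte); no separate add/mul instructions are emitted
        lea  eax, [ebx + ecx*4 + 8]   ; LEA is a real instruction: the CPU computes the
                                      ; effective address itself, without any memory access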


  • out of memory issue in asp .net mvc application

    - by SARAVAN
    Hi, I have got one weird issue. I am working on an ASP.NET MVC application. I have a refresh button that builds some data and view models in the controller code and returns the partial view back. This refresh works fine the very first time, but when I click my refresh button again, a JavaScript alert comes up saying "out of memory at line 56". I checked my Task Manager to see what's happening. I have 3 GB of memory, and when this error alert shows up the used memory is 1.41 GB. That looks like normal usage, so I don't know why it shows the JavaScript error alert. This happens on my local workstation where I am doing development of this application. Any thoughts or comments to troubleshoot or solve this issue are appreciated. I am using IE7.


  • Recreation of MySQL DB using "mysql mydb < mydb.sql" is really slow when the table has tens of millions of records

    - by Jian Lin
    It seems that a MySQL database that has a table with tens of millions of records will get a big INSERT INTO statement when the following

        mysqldump some_db > some_db.sql

    is done to back up the database. (Is it one INSERT statement that handles all the records?) So when reconstructing the DB using

        mysql some_db < some_db.sql

    the CPU is hardly busy (about 1.8% usage by the mysql process... I don't see a mysqld either?) and the hard disk doesn't seem to be too busy either... Last time, the whole restore process took 5 hours. Is there a way to make it faster? For example, when doing mysqldump, can it break the INSERT statement into shorter ones, so that mysql doesn't need to parse the lines so hard when restoring the DB?
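
    A couple of stock mysqldump switches that may be worth experimenting with (these options exist in standard mysqldump; whether they actually help in this particular case is untested):

        # the default dump already uses multi-row "extended" INSERTs, split at the
        # configured buffer size; this makes those options explicit
        mysqldump --opt --quick --extended-insert some_db > some_db.sql

        # one row per INSERT statement is easier to inspect, but usually *slower* to restore
        mysqldump --skip-extended-insert some_db > some_db_rowwise.sql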

