Search Results

Search found 5701 results on 229 pages for 'cpu cooler'.


  • Disable update on battery percentage

    - by Kris B
    I have a service that performs background updates. I want to give the user the option to disable the updates when their battery percentage reaches a certain level. From my research, I'm going to register a receiver in the onCreate method of my Service class, e.g.:

        public class MainService extends Service {
            @Override
            public void onCreate() {
                this.registerReceiver(this.BatInfoReceiver, new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
            }

            private BroadcastReceiver BatInfoReceiver = new BroadcastReceiver() {
                @Override
                public void onReceive(Context arg0, Intent intent) {
                    int level = intent.getIntExtra("level", 0);
                }
            };
        }

    I'm assuming the best practice is to leave the service running, check the battery level in the service, and simply not perform the CPU-intensive code based on the percentage? That is, I shouldn't actually stop the service itself and start it up again based on the battery percentage?

    Read the article

  • Creating a sub site in SharePoint takes a very long time

    - by denni
    Hi, I am working on a MOSS 2007 project and have customized many parts of it. There is a problem on the production server, where it takes a very long time (more than 15 minutes, sometimes failing due to timeouts) to create a sub site, even with the built-in site templates. On the development server it only takes 1 to 2 minutes. Both servers have the same configuration, with an 8-core CPU and 8 GB of RAM, and both use separate database servers with the same configuration. The content DB size is around 100 GB, and there are more than a hundred subsites. What could be the reason why one server takes so much longer? Is there any configuration or something else I need to take care of? Thanks a lot, any help is appreciated.

    Read the article

  • MS DOS function like ipconfig to get system performance specs?

    - by JustADude
    I am aware of MSINFO32, but I'm wondering if there is an MS-DOS command, similar to ipconfig, for getting system specifications, with the output displayed right in the MS-DOS prompt. I would like to see at least: CPU, RAM, and bus speed. Thanks for any insights. Edit: I am unable to install any other software, so I have to use existing commands to extract this information. Thank you again. 2nd Edit: Whoops. I'm using Windows XP and Windows Vista.

    Read the article

  • MPI_Bsend and MPI_Isend: how do they work?

    - by GBBL
    Hi, using buffered send and non-blocking send, I was wondering how, and whether, they introduce a new level of parallelism into my application, possibly by spawning a thread. Imagine that a slave process generates a large amount of data and wants to send it to the master. My idea was to start a buffered or non-blocking send and then immediately begin to compute the next result. Only when I had to send the new data would I check whether I can reuse the buffer. This would introduce a new level of parallelism into my application, between CPU and communication. Does anybody know how this is done in MPI? Does MPI spawn a new thread to handle the Bsend or Isend? Thanks.
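
    Whether the communication actually progresses in the background is implementation-dependent: the MPI standard does not require a progress thread, and many implementations only advance pending sends inside later MPI calls. The overlap pattern described above can be sketched with the mpi4py bindings (an assumption; the C API has the same shape), where the key constraint is that the buffer of an Isend may only be reused after the request completes:

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD

        def compute_next_chunk(step):
            # Hypothetical stand-in for the slave's expensive computation.
            return np.full(1000, float(step))

        if comm.Get_rank() == 1:                     # slave: overlap compute with send
            buf = compute_next_chunk(0)
            req = comm.Isend([buf, MPI.DOUBLE], dest=0, tag=0)
            for step in range(1, 5):
                nxt = compute_next_chunk(step)       # runs while the send is in flight
                req.Wait()                           # buf may only be reused after this
                buf[:] = nxt
                req = comm.Isend([buf, MPI.DOUBLE], dest=0, tag=step)
            req.Wait()
        elif comm.Get_rank() == 0:                   # master: plain blocking receives
            out = np.empty(1000)
            for step in range(5):
                comm.Recv([out, MPI.DOUBLE], source=1, tag=step)

    With MPI_Bsend the trade-off is different: the data is copied into the user-attached buffer immediately, so the application buffer is reusable right away, at the cost of an extra copy.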

    Read the article

  • SqlAlchemy hangs after adding record in MS SQL

    - by Patrick
    I'm running SQLAlchemy on Jython and trying to connect to a MS SQL database using jTDS with Windows authentication. I can query and delete just fine, but when I try to insert new values it hangs when I commit:

        print 'before add'
        session.add(newVal)
        print 'after add'
        session.commit()
        print 'after commit'

    I see the first two print statements but not the last. My CPU maxes out and I can't even query the table directly using MS SQL Management Studio. When I kill the Jython java process I can query again, but the new values haven't been added. Strangely enough, I can insert values directly using an SQL command:

        insert_sql = "INSERT INTO my_table (my_value) VALUES ('test_value')"
        session.execute(insert_sql)
        session.commit()

    Any ideas what I'm doing wrong?

    Read the article

  • JBoss 4.0.5 startup takes 15 minutes deploying a single war file

    - by dkblinux98
    This instance of JBoss deploys several war files. The rest of the JBoss startup takes about 5 minutes or less, but when it gets to one particular war file, startup just hangs with no further output to the JBoss log. It waits there for about 15 minutes and then suddenly the war starts deploying, and the rest of the startup is fine. What I want to know is: what steps do you recommend I take to diagnose the cause of this condition? It is not possible to upgrade this site to a newer version of JBoss or Java (currently 1.5.0.7). It is running on 32-bit CentOS 5.3 Linux on 3 Xen-based virtual servers in a load-balanced configuration. The code is common to all three servers via an NFS share. This same issue was seen, however, when the 3 servers were physical and the code was local to each server. The servers each have 2 CPUs and 4 GB of RAM.

    Read the article

  • How do I declare a C# Web User Control but stop it from initializing?

    - by Scott Stafford
    I have a C#/ASP.NET .aspx page that declares two controls, each of which represents the content of one tab. I want a query string argument (e.g., ?tab=1) to determine which of the two controls is activated. My problem is that they both go through the initialization events and populate their child controls, wasting CPU resources and slowing the response time. Is it possible to deactivate them somehow so they don't go through any initialization? My .aspx page looks like this:

        <% if (TabId == 0) { %>
            <my:usercontroltabone id="ctrl1" runat="server" />
        <% } else if (TabId == 1) { %>
            <my:usercontroltabtwo id="ctrl2" runat="server" />
        <% } %>

    And that part works fine. I assumed that the <%'s would have meant the control wouldn't actually be declared and so wouldn't initialize, but that isn't so...

    Read the article

  • Take advantage of multiple cores executing SQL statements

    - by willvv
    I have a small application that reads XML files and inserts the information into a SQL DB. There are ~300,000 files to import, each one with ~1,000 records. I started the application on 20% of the files and it has been running for 18 hours now; I hope I can improve this time for the rest of the files. I'm not using a multi-threaded approach, but since the computer I'm running the process on has 4 cores, I was thinking of doing so to get some performance improvement (although I guess the main problem is the I/O and not only the processing). I was thinking of using the BeginExecuteNonQuery() method on the SqlCommand object I create for each insertion, but I don't know if I should limit the maximum number of simultaneous threads (nor do I know how to do that). What's your advice for getting the best CPU utilization? Thanks
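
    The question is about .NET, but the core idea, a fixed-size pool that caps the number of simultaneous workers, can be sketched in a few lines of Python (the helper body and input path are hypothetical):

        import concurrent.futures
        import glob

        MAX_WORKERS = 4  # cap on simultaneous imports; tune to core count and DB capacity

        def import_file(path):
            # Hypothetical stand-in: parse one XML file and bulk-insert its records.
            with open(path) as f:
                data = f.read()
            # ... parse `data` and insert its ~1000 records in one batch here ...
            return path

        file_paths = glob.glob('xml_input/*.xml')  # hypothetical input directory

        # The pool never runs more than MAX_WORKERS imports at once, which bounds
        # both the thread count and the number of concurrent DB connections.
        with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
            for _ in pool.map(import_file, file_paths):
                pass

    When I/O dominates, batching many rows per INSERT (or using a bulk-load API) usually matters more than the thread count.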

    Read the article

  • C/C++ function definitions without assembly

    - by Jack
    Hi, I always thought that functions like printf() are, at the last step, defined using inline assembly; that buried deep in stdio.h is some asm code that actually tells the CPU what to do. Something like in DOS, where you first mov the beginning of the string to some memory location or register and then call some int. But since the x64 version of Visual Studio doesn't support inline assembler at all, it made me think that there are really no assembler-defined functions in C/C++. So, please, how is, for example, printf() defined in C/C++ without using assembler code? What actually executes the right software interrupt? Thanks.
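
    What reaches the kernel is a system call, and the machine-specific instruction lives in the C library's syscall stubs rather than in application code. The layering can be illustrated from Python, where the same descent ends in the same place (a sketch of the idea, not what printf literally compiles to):

        import os

        # printf-style layering: formatting happens in user space; only the final
        # step hands bytes to the kernel. os.write(1, ...) calls the platform's
        # write() wrapper, which contains the syscall/interrupt instruction
        # (or calls into kernel32/ntdll on Windows).
        os.write(1, b"hello, world\n")   # fd 1 is stdout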

    Read the article

  • optimizing oracle query

    - by deming
    I'm having a hard time wrapping my head around this query. It is taking almost 200+ seconds to execute. I've pasted the execution plan as well.

        SELECT user_id, ROLE_ID, effective_from_date, effective_to_date,
               participant_code, ACTIVE
          FROM CMP_USER_ROLE E
         WHERE ACTIVE = 0
           AND (SYSDATE BETWEEN effective_from_date AND effective_to_date
                OR TO_CHAR(effective_to_date,'YYYY-Q') = '2010-2')
           AND participant_code = 'NY005'
           AND NOT EXISTS (
                 SELECT 1
                   FROM CMP_USER_ROLE r
                  WHERE r.USER_ID = E.USER_ID
                    AND r.role_id = E.role_id
                    AND r.ACTIVE = 4
                    AND E.effective_to_date <= (SELECT MAX(last_update_date)
                                                  FROM CMP_USER_ROLE S
                                                 WHERE S.role_id = r.role_id
                                                   AND S.role_id = r.role_id
                                                   AND S.ACTIVE = 4))

    Explain plan:

        ------------------------------------------------------------------------------------------------------
        | Id  | Operation                         | Name             | Rows | Bytes | Cost (%CPU)| Time     |
        ------------------------------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT                  |                  |    1 |    37 |   154   (2)| 00:00:02 |
        |*  1 |  FILTER                           |                  |      |       |            |          |
        |*  2 |   TABLE ACCESS BY INDEX ROWID     | USER_ROLE        |    1 |    37 |    30   (0)| 00:00:01 |
        |*  3 |    INDEX RANGE SCAN               | N_USER_ROLE_IDX6 |   27 |       |     3   (0)| 00:00:01 |
        |*  4 |   FILTER                          |                  |      |       |            |          |
        |   5 |    HASH GROUP BY                  |                  |    1 |    47 |   124   (2)| 00:00:02 |
        |*  6 |     TABLE ACCESS BY INDEX ROWID   | USER_ROLE        |  159 |  3339 |   119   (1)| 00:00:02 |
        |   7 |      NESTED LOOPS                 |                  |   11 |   517 |   123   (1)| 00:00:02 |
        |*  8 |       TABLE ACCESS BY INDEX ROWID | USER_ROLE        |    1 |    26 |     4   (0)| 00:00:01 |
        |*  9 |        INDEX RANGE SCAN           | N_USER_ROLE_IDX5 |    1 |       |     3   (0)| 00:00:01 |
        |* 10 |      INDEX RANGE SCAN             | N_USER_ROLE_IDX2 |  957 |       |    74   (2)| 00:00:01 |
        ------------------------------------------------------------------------------------------------------

    Read the article

  • Can JPA do batch update | put | write | insert as pm.makePersistentAll() does in GAE/J

    - by Kenyth
    I searched through multiple discussions here. Can someone just give me a quick and direct answer? And if with JPA you can't do a batch update, what if I don't use a transaction and just use the following flow:

        em = emf.getEntityManager
        // do some query
        // make some data modification
        em.persist(..)
        // do some query
        // make some data modification
        em.persist(..)
        // do some query
        // make some data modification
        em.persist(..)
        ...
        em.close()

    How does this compare to a batch update with regard to performance, and to a single transaction commit, measured in RPC calls to the datastore server, CPU cycles per request, or so? Does every call to em.persist(..) before em.close() trigger an RPC call to the datastore server? Thanks very much for any response!

    Read the article

  • Why does a data race occur with threading, but not with gevent?

    - by onlytiancai
    My test code is as follows. Using threading, count is not 5,000,000, so there has been a data race; but using gevent, count is 5,000,000, so there was no data race. Does gevent's coroutine execution make "count += 1" atomic, rather than splitting it into more than one CPU instruction to execute?

        # -*- coding: utf-8 -*-
        import threading

        use_gevent = True
        use_debug = False
        cycles_count = 100 * 10000

        if use_gevent:
            from gevent import monkey
            monkey.patch_thread()

        count = 0

        class Counter(threading.Thread):
            def __init__(self, name):
                self.thread_name = name
                super(Counter, self).__init__(name=name)

            def run(self):
                global count
                for i in xrange(cycles_count):
                    if use_debug:
                        print '%s:%s' % (self.thread_name, count)
                    count = count + 1

        counters = [Counter('thread:%s' % i) for i in range(5)]
        for counter in counters:
            counter.start()
        for counter in counters:
            counter.join()
        print 'count=%s' % count
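
    What the test is showing: under monkey.patch_thread() the "threads" become gevent greenlets, which are cooperatively scheduled and only switch at I/O or explicit yields, so the pure-CPU increment loop runs to completion without interleaving. Real OS threads can be preempted between the load, add, and store that make up count = count + 1 (CPython's GIL makes individual bytecodes atomic, but that statement is several bytecodes), so updates get lost. A minimal sketch of making the threaded version safe, in Python 2 to match the code above:

        import threading

        count = 0
        count_lock = threading.Lock()

        def run(cycles):
            global count
            for _ in xrange(cycles):
                # The lock makes the read-modify-write sequence atomic, so no
                # update can be lost to preemption between the load and the store.
                with count_lock:
                    count += 1

        threads = [threading.Thread(target=run, args=(1000000,)) for _ in range(5)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print 'count=%s' % count   # 5000000 every time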

    Read the article

  • "Inlining" (kind of) functions at runtime in C

    - by fortran
    Hi, I was thinking about a typical problem that is very JIT-able, but hard to approach with raw C. The scenario is setting up a series of function pointers that are going to be "composed" (as in mathematical function composition) once at runtime and then called lots and lots of times. Doing it the obvious way involves many virtual calls, which are expensive, and if there are enough nested functions to fill the CPU branch prediction table completely, then the performance will drop considerably. In a language like Lisp, I could probably process the code, substitute the "virtual" calls with the actual contents of the functions, and then call compile to get an optimized version, but that seems very hacky and error-prone to do in C, and using C is a requirement for this problem ;-) So, do you know if there's a standard, portable and safe way to achieve this in C? Cheers
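
    For contrast, here is what the "compose once, then compile" idea looks like in a language with runtime code generation; this is a Python sketch of the Lisp trick mentioned above, not the portable-C answer the question asks for:

        def compile_pipeline(funcs):
            # Build "lambda x: f1(f0(x))" as source and compile it once, so the
            # hot path is a single compiled expression rather than a loop over
            # function pointers.
            names = ['f%d' % i for i in range(len(funcs))]
            expr = 'x'
            for name in names:
                expr = '%s(%s)' % (name, expr)
            return eval('lambda x: ' + expr, dict(zip(names, funcs)))

        inc = lambda x: x + 1
        dbl = lambda x: x * 2
        pipeline = compile_pipeline([inc, dbl])   # computes dbl(inc(x))
        assert pipeline(3) == 8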

    Read the article

  • strategy to allocate/free lots of small objects

    - by aaa
    Hello, I am toying with a certain caching algorithm, which is somewhat challenging. Basically, it needs to allocate lots of small objects (double arrays, < 256 elements), with objects accessible through a mapped value, map[key] = array. The time to initialize an array may be quite large, generally more than 10 thousand CPU cycles. By lots I mean around a gigabyte in total. Objects may need to be popped/pushed as needed, generally in random places, one object at a time. The lifetime of an object is generally long, minutes or more; however, an object may be subject to allocation/deallocation several times during the run of the program. What would be a good strategy to avoid memory fragmentation, while still maintaining reasonable allocation/deallocation speed? I am using C++, so I can use new and malloc. Thanks. I know there are similar questions on the website, such as http://stackoverflow.com/questions/2156745/efficiently-allocating-many-short-lived-small-objects, but mine is somewhat different: thread safety is not an immediate issue for me.

    Read the article

  • Does SetThreadPriority cause thread rescheduling?

    - by Suma
    Consider the following situation, assuming a single-CPU system:

    - Thread A is running with priority THREAD_PRIORITY_NORMAL and signals event E.
    - Thread B, with priority THREAD_PRIORITY_LOWEST, is waiting for event E. (Note: at this point B is runnable but not running, because A is higher priority and runnable as well.)
    - Thread A calls SetThreadPriority(B, THREAD_PRIORITY_ABOVE_NORMAL).

    Is thread B rescheduled to run immediately, or is thread A allowed to continue until the current time-slice is over, with B scheduled only once a new time-slice has begun? I would be interested to know the answer for WinXP, Vista and Win7, if possible. Note: the scenario above is simplified from my real-world code, where multiple threads are running on multiple cores, but the main point of the question stays: does SetThreadPriority cause thread scheduling to happen?

    Read the article

  • CUDA Global Memory: Where is it?

    - by gamerx
    I understand that in CUDA's memory hierarchy, we have things like shared memory, texture memory, constant memory, registers and of course the global memory which we allocate using cudaMalloc(). I've been searching through whatever documentation I can find, but I have yet to come across any that explicitly explains what the global memory is. I believe that the global memory allocated is on the GDDR of the graphics card itself, and not the RAM that is shared with the CPU, since one of the documents did state that the pointer cannot be dereferenced by the host side. Am I right?

    Read the article

  • Java concurrency - Should I block or yield?

    - by teto
    Hi, I have multiple threads, each one with its own private concurrent queue, and all they do is run an infinite loop retrieving messages from it. It could happen that one of the queues doesn't receive messages for a period of time (maybe a couple of seconds), and they could also come in big bursts where fast processing is necessary. I would like to know what would be most appropriate in the first case: use a blocking queue and block the thread until I have more input, or do a Thread.yield()? I want to have as many CPU resources available as possible at a given time, as the number of concurrent threads may increase with time, but I also don't want the message processing to fall behind, as there is no guarantee of when the thread will be rescheduled for execution after a yield(). I know that hardware, operating system and other factors play an important role here, but setting that aside and looking at it from a Java (JVM?) point of view, what would be optimal?
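
    The question is Java-specific, but the blocking-consumer pattern it describes looks the same in any language. A small Python sketch of the first option: a blocked get() parks the thread in the scheduler, consuming no CPU, and wakes it as soon as an item arrives, unlike a yield-and-retry spin:

        import queue
        import threading

        def process(msg):
            # Hypothetical handler standing in for the real message processing.
            print('processed', msg)

        def consumer(q):
            while True:
                msg = q.get()      # blocks while idle; no CPU burned, no busy-wait
                if msg is None:    # sentinel: shut down cleanly
                    break
                process(msg)

        q = queue.Queue()
        t = threading.Thread(target=consumer, args=(q,))
        t.start()
        for i in range(3):
            q.put(i)
        q.put(None)
        t.join()

    In Java the equivalent is BlockingQueue.take(), which has the same no-spin property.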

    Read the article

  • How can I optimize this code?

    - by loop0
    Hi, I'm developing a logger daemon for squid to grab the logs into a MongoDB database, but I'm experiencing too much CPU utilization. How can I optimize this code?

        from sys import stdin
        from pymongo import Connection

        connection = Connection()
        db = connection.squid
        logs = db.logs

        buffer = []
        a = 'timestamp'
        b = 'resp_time'
        c = 'src_ip'
        d = 'cache_status'
        e = 'reply_size'
        f = 'req_method'
        g = 'req_url'
        h = 'username'
        i = 'dst_ip'
        j = 'mime_type'
        L = 'L'

        while True:
            l = stdin.readline()
            if l[0] == L:
                l = l[1:].split()
                buffer.append({
                    a: float(l[0]),
                    b: int(l[1]),
                    c: l[2],
                    d: l[3],
                    e: int(l[4]),
                    f: l[5],
                    g: l[6],
                    h: l[7],
                    i: l[8],
                    j: l[9]
                })
            if len(buffer) == 1000:
                logs.insert(buffer)
                buffer = []
            if not l:
                break

        connection.disconnect()
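
    Only a profiler (e.g., cProfile) can confirm where the time actually goes, but two structural changes are usually safe wins, and one fixes a latent bug: iterating over stdin uses internal buffering instead of a readline() call per line, and the original never inserts the final partial buffer (it would also raise IndexError on the empty string readline() returns at EOF). A hedged sketch, keeping the old pymongo Connection API from the code above:

        from sys import stdin
        from pymongo import Connection

        connection = Connection()
        logs = connection.squid.logs

        buffer = []
        for line in stdin:               # buffered iteration; never yields ''
            if line[0] != 'L':
                continue
            v = line[1:].split()
            buffer.append({'timestamp': float(v[0]), 'resp_time': int(v[1]),
                           'src_ip': v[2], 'cache_status': v[3],
                           'reply_size': int(v[4]), 'req_method': v[5],
                           'req_url': v[6], 'username': v[7],
                           'dst_ip': v[8], 'mime_type': v[9]})
            if len(buffer) >= 1000:
                logs.insert(buffer)
                del buffer[:]            # reuse the list instead of reallocating
        if buffer:                       # flush the final partial batch
            logs.insert(buffer)
        connection.disconnect()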

    Read the article

  • When is a program limited by the memory bandwidth?

    - by hanno
    I want to know if a program that I am using, which requires a lot of memory, is limited by the memory bandwidth. When do you expect this to happen? Did it ever happen to you in a real-life scenario? I found several articles discussing this issue, including:

        http://www.cs.virginia.edu/~mccalpin/papers/bandwidth/node12.html
        http://www.cs.virginia.edu/~mccalpin/papers/bandwidth/node13.html
        http://ispass.org/ucas5/session2_3_ibm.pdf

    The first link is a bit old, but suggests that you need to perform fewer than about 1-40 floating-point operations per floating-point variable in order to see this effect (correct me if I'm wrong). How can I measure the memory bandwidth that a given program is using, and how do I measure the (peak) bandwidth that my system can offer? I don't want to discuss any complicated cache issues here; I'm only interested in the communication between the CPU and the memory.
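
    For the peak side of the question, a rough estimate in the spirit of McCalpin's STREAM benchmark can be had in a few lines of numpy (a sketch; the real benchmark is C, multi-threaded, and more careful):

        import time
        import numpy as np

        N = 64 * 1024 * 1024                 # 512 MB per array, far beyond any cache
        a = np.zeros(N)
        b = np.ones(N)

        start = time.perf_counter()
        a[:] = b                             # STREAM "copy": read b once, write a once
        elapsed = time.perf_counter() - start

        bytes_moved = 2 * N * 8              # 8-byte doubles, one read + one write
        # Write-allocate traffic can add a third stream, so this undercounts somewhat.
        print('approx copy bandwidth: %.1f GB/s' % (bytes_moved / elapsed / 1e9))

    For what a specific program is using, hardware performance counters (e.g., perf on Linux or Intel VTune) are the usual route.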

    Read the article

  • VS2008 is very slow on a specific large C++ solution

    - by VioletRose
    I have a solution with 21 C++ projects and 1 VB.NET project. The IDE responds very slowly when I simply move the caret in a file or try to open a menu; the process seems to spike to 50% CPU for each movement. It only happens with this solution and only on my machine. The solution has a total of 2,380 source and header files, of which 1,280 are header files. I tried removing all connections to source control (Perforce), but it didn't help. Also, I have Visual Assist installed, but even after removing it (a full uninstall), the same behavior continued. Any idea?

    Read the article

  • How to determine bandwidth used by cron job?

    - by Lost_in_code
    I'm not a Unix guy. cPanel does a good job of managing cron jobs, and that is what I use to run dozens of them. All of them combined run more than 5,000 times every day, and every cron job makes a call to an external API. How can I check how much bandwidth all the cron jobs are eating? For my website I use AWStats, which shows bandwidth usage among other things. Another concern is that I don't want the admins to ban the cron jobs because they are using too much bandwidth (and CPU), more than what is allocated in my web hosting package.
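
    If the cron scripts themselves can be modified, one low-tech option is to have each run log how many bytes its API call moved, then sum the log over a day. A minimal Python sketch (URL and log path are hypothetical, and the response body is only a lower bound, since request/response headers and TLS overhead add more):

        import time
        import urllib.request

        LOG = '/tmp/cron_bandwidth.log'         # hypothetical log location
        URL = 'https://api.example.com/poll'    # hypothetical API endpoint

        with urllib.request.urlopen(URL) as resp:
            body = resp.read()

        # One line per run: unix timestamp and bytes received.
        with open(LOG, 'a') as f:
            f.write('%d %d\n' % (time.time(), len(body)))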

    Read the article

  • Import external dll based on 64bit or 32bit OS

    - by Mike_G
    I have a DLL that comes in both 32-bit and 64-bit versions. My .NET WinForm is configured for "Any CPU", and my boss will not let us have separate installs for the different OS versions. So I am wondering: if I package both DLLs in the install, is there a way to have the WinForm determine whether it's running as 64-bit or 32-bit and load the proper DLL? I found this article for determining the version, but I am not sure how to define the DllImport attribute on the methods I wish to use so that the proper DLL gets loaded. Any ideas?

    Read the article

  • java virtual machine - how does it allocate resources?

    - by Will
    I am testing the performance of a data streaming system that supports continuous queries. This is how it works:

    - There is a polling service which sends data to my system.
    - As data passes into the system, each query evaluates based on a window of the stream at the current time.
    - The window slides as data passes in.

    My problem is this: when I add more queries to the system, I should expect the throughput to decrease because it can't cope with the data rate. However, I actually observe an increase in throughput. I can't understand why this is the case, and I am guessing it's something to do with the way the JVM allocates CPU, memory, etc. Can anyone shed any light on my problem?

    Read the article

  • Loop function works first time, not second time

    - by user1483101
    I'm creating a parsing program to look for certain strings in a text file and count them. However, I'm having some trouble with one spot.

        def callbrowse():
            filename = tkFileDialog.askopenfilename(filetypes = (("Text files", "*.txt"),("HTML files", ".html;*.htm"),("All files", "*.*")))
            print filename
            try:
                global filex
                global writefile
                filex = open(filename, 'r')
                print "Success!!"
                print filename
            except:
                print "Failed to open file"

        # This returns the correct count only the first time it is run. The next time it
        # returns 0. If the browse button is clicked again, then this function returns
        # the correct count again.
        def count_errors(error_name):
            count = 0
            for line in filex:
                if error_name == "CPU > 79%":
                    stringparse = "Utilization is above"
                elif error_name == "Stuck touchscreen":
                    stringparse = "Stuck touchscreen"
                if re.match("(.*)" + "Utilization is above" + "(.*)",line):
                    count = count + 1
            return count

    Thanks for any help. I can't seem to get this to work right.
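
    The likely culprit in count_errors: filex is a file object, and iterating over it consumes it, so the second call starts at end-of-file and the loop body never runs; clicking Browse reopens the file, which is why it works again. A hedged sketch of one fix, which also uses the selected stringparse (the original always matched "Utilization is above" regardless of error_name):

        def count_errors(error_name):
            patterns = {
                "CPU > 79%": "Utilization is above",
                "Stuck touchscreen": "Stuck touchscreen",
            }
            stringparse = patterns[error_name]
            filex.seek(0)   # rewind: a file iterator is exhausted after one pass
            return sum(1 for line in filex if stringparse in line)

    Alternatively, read the lines into a list once in callbrowse() and scan that list on every call.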

    Read the article

  • Mesos slave not 'Running' multiple executors simultaneously

    - by user3084164
    I am using Mesos to distribute a bunch of tasks to different machines (mesos-slaves). Here is what happens:

    1. My scheduler gets resource offers and accepts them.
    2. Mesos stages multiple executors on the same mesos-slaves (each slave has 4 CPUs).
    3. Only ONE executor enters the 'Running' state on each of the slaves, while the others are shown in the 'Staging' state.
    4. Only after the current executor finishes execution does the next executor start running.

    Given that I have 4 CPUs on each machine, shouldn't each slave be running 4 executors simultaneously? Each executor requires 1 CPU.

    Read the article
