Search Results

Search found 5572 results on 223 pages for 'cpu'.

  • What's your development setup? (Talking right now to my boss)

    - by Flinkman
    How do I tell my boss that I need endless CPU power to automate my daily job? By the way, what's your setup now, in September 2008? How fast are your disks? How much memory? How many cores? How big a screen? (OK, what the hell am I doing, you may ask. I'm working in multiple environments under VMware and have a couple of build systems running for compatibility tests. These build systems are automated, and so is their setup. Is there another way?) Thanks!

    Read the article

  • Which would be better? Storing/accessing data in a local text file, or in a database?

    - by TerranRich
    Basically, I'm still working on a puzzle-related website (micro-site really), and I'm making a tool that lets you input a word pattern (e.g. "r??n") and get all the matching words (in this case: rain, rein, ruin, etc.). Should I store the words in local text files (such as words5.txt, which would hold a return-delimited list of 5-letter words), or in a database (such as a table Words5, which would again store 5-letter words)? I'm looking at the problem in terms of data retrieval speed and server CPU load. I could definitely try it both ways and record the times taken for several runs with both methods, but I'd rather hear it from people who have had experience with this. Which method is generally better overall? (A short sketch of the matching step follows this entry.)

    Read the article
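
    Whichever storage wins, the lookup itself boils down to turning the pattern into a regular expression and filtering the word list; a database can do much the same with LIKE 'r__n'. A minimal sketch of the matching step, written here in C++ purely for illustration (words5.txt is the file named in the question; everything else is made up):

        #include <fstream>
        #include <iostream>
        #include <regex>
        #include <string>
        #include <vector>

        // Turn a pattern like "r??n" into the regex "^r..n$" ('?' stands for any letter).
        std::string pattern_to_regex(const std::string& pattern)
        {
            std::string rx = "^";
            for (char c : pattern)
                rx += (c == '?') ? '.' : c;
            return rx + "$";
        }

        int main()
        {
            std::ifstream words("words5.txt");          // one 5-letter word per line
            std::regex  rx(pattern_to_regex("r??n"));

            std::string word;
            std::vector<std::string> matches;
            while (std::getline(words, word))
                if (std::regex_match(word, rx))         // keep only whole-word matches
                    matches.push_back(word);

            for (const auto& m : matches)
                std::cout << m << '\n';                 // rain, rein, ruin, ...
        }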

  • Amazon EC2 prices for a Windows instance?

    - by Abhishek Gupta
    Hello guys, I want to ask some Amazon cloud technology experts: is it profitable to deploy our web application on the Amazon cloud compared to a normal server? Currently there are micro, small, large and other instance types available. If we start with a micro instance and then realize that our app needs more CPU cycles and RAM, how can we move dynamically to the next, more powerful instance at runtime? What is the approximate minimum yearly cost for a single EC2 Windows small instance? I want to deploy a simple online quiz application (ASP.NET based) on the Amazon cloud, which will have at most 500 users at a time. Please advise me, as I'm very new to the cloud. Should I go for Azure or Amazon?

    Read the article

  • OpenMP vs OpenCL for computer vision

    - by user1235711
    I am creating a computer vision application that detects objects via a web camera, and I am currently focusing on its performance. My problem is in the part of the application that generates the XML cascade file from the Haar training data: it is very slow and takes about six days. To get around this I decided to use multiprocessing to minimize the total time to generate the Haar training XML file. I found two options: OpenCL, and OpenMP plus Open MPI. Now I'm confused about which one to use. I read that OpenCL can use multiple CPUs and the GPU, but only on the same machine. Is that so? OpenMP, on the other hand, is for multi-processing on one machine, and with Open MPI we can use multiple CPUs over the network, but OpenMP has no GPU support. Can you please suggest the pros and cons of using either of these libraries? (A short OpenMP sketch follows this entry.)

    Read the article
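
    For a feel of what the shared-memory option costs in code, here is a minimal OpenMP sketch (compile with -fopenmp; the loop body is only a stand-in for the per-sample Haar training work, not OpenCV's actual code):

        #include <omp.h>
        #include <cstdio>
        #include <vector>

        int main()
        {
            const int n = 1000000;
            std::vector<double> result(n);

            // Each iteration is independent, so OpenMP can spread them across
            // all cores of the local machine with a single pragma.
            #pragma omp parallel for
            for (int i = 0; i < n; ++i)
                result[i] = i * 0.5;          // placeholder for real per-sample work

            std::printf("max OpenMP threads: %d\n", omp_get_max_threads());
            return 0;
        }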

  • How to track IIS server performance

    - by Chris Brandsma
    I have a recurring issue where a customer calls up and complains that the web site is too slow. Specifically, if they are inactive for a short period of time and then go back to the site, there will be a one-to-two-minute delay before the user sees a response (the standard browser is Firefox in this case). I have Perfmon up and running, CPU utilization is usually below 20% (single proc... don't ask), the database is humming along, and I'm pulling my hair out. So, what metrics/tools do you find useful when evaluating IIS performance?

    Read the article

  • C++/CLI Missing MSVCR90.DLL

    - by Mitch
    I have a C++/CLI DLL that I load at runtime and which works great in debug mode. If I try to load the DLL in release mode, it fails to load, stating that one or more dependencies are missing. If I run Depends against it, I am missing MSVCR90.DLL, referenced from MSVCM90.DLL. If I check the debug version of the DLL, it has the same missing dependency, but against the debug (D) version. I have made sure both debug and release embed the manifest file. I read something about there being issues when the app loading the DLL is built as Any CPU and the DLL is built as x86, but I don't see how to set them both to x86. I am using VS2010. Anyway, I've been messing around for a while now and have no idea what is wrong. I'm sure someone out there knows what is going on. Let me know if I need to include additional info.

    Read the article

  • List of phones that will work with Eclipse?

    - by user1058647
    I need an Android phone to test my apps with that will work with Eclipse. It has to be low cost, run Gingerbread, and have modest memory and CPU. Thinking that any Android phone would work, I recently purchased a Virgin Mobile Chaser, but as it turns out it cannot be seen by either Eclipse or adb (Device Manager does see the phone). Another developer has had the same problem with the Chaser. I could keep buying phones and see if they work, but that could be long and frustrating. I hope to find a "no contract" phone. Is there any list of phones that work with Eclipse? Does anyone know of any other Virgin Mobile phones that will work? Thanks, Gary

    Read the article

  • Controlling read and write access width to memory-mapped registers in C

    - by srking
    I'm using an x86-based core to manipulate a 32-bit memory-mapped register. My hardware behaves correctly only if the CPU generates 32-bit wide reads and writes to this register. The register is aligned on a 32-bit address and is not addressable at byte granularity. What can I do to guarantee that my C (or C99) compiler will only generate full 32-bit wide reads and writes in all cases? For example, if I do a read-modify-write operation like this: volatile uint32_t* p_reg = (volatile uint32_t*)0xCAFE0000; *p_reg |= 0x01; I don't want the compiler to get smart about the fact that only the bottom byte changes and generate 8-bit wide reads/writes. Since the machine code is often denser for 8-bit operations on x86, I'm afraid of unwanted optimizations. Disabling optimizations in general is not an option. (A short sketch follows this entry.)

    Read the article
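
    A common pattern is to route the read-modify-write through an explicit 32-bit temporary so every access to the register is a full-width volatile load or store; a sketch below, using the question's 0xCAFE0000 address. Strictly speaking the C and C++ standards do not promise an access width, so the generated code should still be checked for the specific compiler:

        #include <cstdint>

        // Hypothetical register address taken from the question.
        static volatile std::uint32_t* const p_reg =
            reinterpret_cast<volatile std::uint32_t*>(0xCAFE0000u);

        void set_enable_bit()
        {
            // Read the full 32-bit register into a plain temporary, modify it,
            // then write the full 32 bits back. Every access to *p_reg goes
            // through a volatile uint32_t lvalue, so the compiler is expected
            // to emit one 32-bit load and one 32-bit store, not byte accesses.
            std::uint32_t value = *p_reg;   // 32-bit read
            value |= 0x01u;                 // modify in a CPU register
            *p_reg = value;                 // 32-bit write
        }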

  • How to implement HeightMap smoothing using Thrust

    - by igal k
    I'm trying to implement height map smoothing using Thrust. Let's say I have a big array (around 1 million floats). A well-known graphics algorithm for this is to average over the neighbourhood of a given cell. If, for example, I need to calculate the value of a given cell[i,j], I basically do: cell[i,j] = (cell[i-1,j-1] + cell[i-1,j] + cell[i-1,j+1] + cell[i,j-1] + cell[i,j] + cell[i,j+1] + cell[i+1,j-1] + cell[i+1,j] + cell[i+1,j+1]) / 9. That's the CPU code. Is there a way to implement it using Thrust? I know I could use the transform algorithm, but I'm not sure it's correct to access cells that are being worked on by different threads (bank conflicts and so on). (A short sketch follows this entry.)

    Read the article
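
    A minimal Thrust sketch of the idea, assuming the height map is a dense w-by-h float array already in device memory (requires nvcc; all names are illustrative). Reading from one buffer and writing into a second one sidesteps the concern about threads touching cells that other threads are updating:

        #include <thrust/device_vector.h>
        #include <thrust/iterator/counting_iterator.h>
        #include <thrust/transform.h>

        // Averages the 3x3 neighbourhood around flat index idx of a w x h grid.
        struct Smooth
        {
            const float* in;
            int w, h;

            __host__ __device__
            float operator()(int idx) const
            {
                int x = idx % w, y = idx / w;
                float sum = 0.0f;
                int count = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                    {
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && nx < w && ny >= 0 && ny < h)   // clamp at the edges
                        {
                            sum += in[ny * w + nx];
                            ++count;
                        }
                    }
                return sum / count;
            }
        };

        void smooth(const thrust::device_vector<float>& src,
                    thrust::device_vector<float>& dst, int w, int h)
        {
            thrust::counting_iterator<int> first(0), last(w * h);
            Smooth op{thrust::raw_pointer_cast(src.data()), w, h};
            // One thread per output cell; reads come from src, writes go to dst,
            // so no cell is read and written by different threads at the same time.
            thrust::transform(first, last, dst.begin(), op);
        }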

  • All things being equal, what is the fastest way to output data to disk in C++?

    - by user260197
    I am running simulation code that is largely bound by CPU speed. I am not interested in pushing data out to a user interface, just saving it to disk as it is computed. What would be the fastest solution that would reduce overhead? iostreams? printf? I have previously read that printf is faster. Will this depend on my code, and is it impossible to get an answer without profiling? Edit: the output data needs to be in text format, whether tab- or comma-separated, which will require formatting, precision, etc. Running on Windows. (A short sketch follows this entry.)

    Read the article
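
    Only profiling settles printf versus iostreams for a given workload, but the number of system calls usually dominates either choice; giving stdio a large user-space buffer amortizes them. A minimal sketch (the file name and values are placeholders):

        #include <cstdio>
        #include <vector>

        int main()
        {
            // A stand-in for whatever the simulation computes.
            std::vector<double> values(1000000, 3.14159);

            std::FILE* out = std::fopen("results.tsv", "w");
            if (!out) return 1;

            // Enlarging the stdio buffer cuts down on write() system calls,
            // which usually matters more than printf-vs-iostream differences.
            static char buf[1 << 20];
            std::setvbuf(out, buf, _IOFBF, sizeof buf);

            for (std::size_t i = 0; i < values.size(); ++i)
                std::fprintf(out, "%zu\t%.6f\n", i, values[i]);

            std::fclose(out);
            return 0;
        }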

  • How to read system information in C++ on Windows and Linux?

    - by f4
    I need to read system information like CPU/RAM/disk usage in C++. Maybe swap, network and process information too, but that's less important. It has probably been done thousands of times before, so I first tried to search for a library. Someone here suggested SIGAR, which seems to fit my needs, but it is GPL-licensed and this is for inclusion in a proprietary product, so it's not an option here. I feel like it's not that easy to implement, as it will need testing on several platforms, so a library would be welcome. If you don't know of any library, could you point me in the right direction for both platforms? (A short sketch follows this entry.)

    Read the article
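
    Without a library, the usual fallback is a thin #ifdef wrapper around the native calls: GlobalMemoryStatusEx on Windows, sysinfo() (or /proc parsing) on Linux. A minimal sketch for physical memory only, as an example of the pattern; CPU and disk usage need analogous calls (GetSystemTimes and PDH counters on Windows, /proc/stat and statvfs on Linux):

        #include <cstdint>
        #include <iostream>

        #ifdef _WIN32
          #include <windows.h>
        #else
          #include <sys/sysinfo.h>
        #endif

        // Fills total and available physical RAM in bytes; returns false on failure.
        bool physical_memory(std::uint64_t& total, std::uint64_t& avail)
        {
        #ifdef _WIN32
            MEMORYSTATUSEX st;
            st.dwLength = sizeof(st);                 // must be set before the call
            if (!GlobalMemoryStatusEx(&st)) return false;
            total = st.ullTotalPhys;
            avail = st.ullAvailPhys;
        #else
            struct sysinfo si;
            if (sysinfo(&si) != 0) return false;
            total = std::uint64_t(si.totalram) * si.mem_unit;
            avail = std::uint64_t(si.freeram)  * si.mem_unit;
        #endif
            return true;
        }

        int main()
        {
            std::uint64_t total = 0, avail = 0;
            if (physical_memory(total, avail))
                std::cout << "RAM: " << avail / (1024 * 1024) << " MiB free of "
                          << total / (1024 * 1024) << " MiB\n";
        }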

  • c++ programming for clusters and HPC

    - by Abruzzo Forte e Gentile
    Hi all. I need to write a scientific application in C++ that does a lot of computation and uses a lot of memory. I have part of the job done, but due to the high resource requirements I was thinking of moving to Open MPI. Before doing that I have a simple question: if I understand the principle of Open MPI correctly, it is the developer who has the task of splitting the job over different nodes, calling SEND and RECEIVE based on the nodes available at the time. Do you know whether there is a library, OS, or anything else with this capability that would let my code remain as it is now? Basically something that connects all the computers and lets them share their memory and CPUs as if they were one machine? I am a bit confused by the amount of material available on the topic. Should I look at cloud computing, or at distributed shared memory? Can you help me or point me in the right direction? Thanks. (A short MPI sketch follows this entry.)

    Read the article
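
    For contrast with the "single shared machine" model the question hopes for, explicit message passing with MPI looks roughly like the sketch below (two ranks exchanging one double). Distributed-shared-memory systems try to hide exactly this plumbing, generally at a significant performance cost:

        #include <mpi.h>
        #include <cstdio>

        int main(int argc, char** argv)
        {
            MPI_Init(&argc, &argv);

            int rank = 0;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            if (rank == 0)
            {
                double work = 42.0;                                   // some partial result
                MPI_Send(&work, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD); // explicit send to rank 1
            }
            else if (rank == 1)
            {
                double work = 0.0;
                MPI_Recv(&work, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                std::printf("rank 1 received %f\n", work);
            }

            MPI_Finalize();
            return 0;
        }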

  • Is there a faster way to draw text?

    - by mystify
    Shark complains about a big performance hit from this line, which takes about 80% of CPU time. I have a counter that is updated very frequently, and performance seriously sucks. It's a custom UILabel subclass with -drawRect: implemented. Every time the counter value changes, this is used to draw the new text: [self.text drawInRect:textRect withFont:correctedFont lineBreakMode:self.lineBreakMode alignment:self.textAlignment]; When I comment this line out, performance rocks: it's smooth and fast. So Shark isn't wrong about this. But what could I do to improve it? Maybe go a level deeper? Does that make any sense? Is drawing text really so incredibly heavy?

    Read the article

  • R problems using rpart with 4000 records and 13 attributes

    - by josh
    I have attempted to email the author of this package without success; I'm just wondering if anybody else has experienced this. I am having an issue using rpart on 4000 rows of data with 13 attributes. I can run the same test on 300 rows of the same data with no issue. When I run it on 4000 rows, Rgui.exe runs consistently at 50% CPU and the UI hangs; it will stay like this for at least 4-5 hours if I let it run, and never exits or becomes responsive. Here is the code I am using on both the 300-row and 4000-row subsets: train<-read.csv("input.csv",header=T) y<-train[,18] x<-train[,3:17] library(rpart) fit<-rpart(y~.,x) Is this a known limitation of rpart? Am I doing something wrong? Potential workarounds? Any assistance appreciated.

    Read the article

  • message queue: selection and sizing

    - by user238591
    Hi, I have 20 messages/s, each 1-1.5 MB. I need high availability (2 to 4 servers minimum). I need low latency (high daily volume; keeping everything in RAM is preferred). I need a persistent queue for poisoned messages. There are only a few clients (about 16), all local. I can have 12-16 GB of RAM per server (broker). Which JMS message queue / messaging product would you recommend, and on what configuration (CPU/RAM)? Can I add optional NAS persistence (in case of final delivery failure)? Thanks

    Read the article

  • Regex to find a word, then extract a line containing the first occurrence of a different word immedia

    - by Hinchy
    World's most convoluted title, I know; an example should explain it better. I have a large txt file in the format below, though the details and number of lines will change every time: Username: john_joe Owner: John Joe Account: CLI: Default: LGICMD: Flags: Primary days: Secondary days: No access restrictions Expiration: Pwdlifetime: Last Login: Maxjobs: Maxacctjobs: Maxdetach: Prclm: Prio: Queprio: CPU: Authorized Privileges: BYPASS Default Privileges: SYSPRV This sequence is repeated a couple of thousand times for different users. I need to find every user (ideally the entire first line of the above) that has SYSPRV under "Default Privileges". I know I could write an application to do this; I was just hoping there might be a nice regex I could use. Cheers (A short sketch follows this entry.)

    Read the article
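
    A single multi-line regex (one that matches from "Username:" through "Default Privileges:" up to SYSPRV with a non-greedy gap) can work but is easy to get wrong on records this long. A simpler alternative is a line-by-line scan that remembers the last Username: line; a sketch of that approach in C++ (the input file name and exact field handling are assumptions):

        #include <fstream>
        #include <iostream>
        #include <string>

        int main()
        {
            std::ifstream in("user_dump.txt");      // hypothetical dump file name
            std::string line, current_user;
            bool in_default_privs = false;

            while (std::getline(in, line))
            {
                if (line.rfind("Username:", 0) == 0)   // start of a new user record
                {
                    current_user = line;               // keep the whole first line
                    in_default_privs = false;
                }
                else if (line.find("Default Privileges:") != std::string::npos)
                    in_default_privs = true;
                else if (line.find("Authorized Privileges:") != std::string::npos)
                    in_default_privs = false;

                if (in_default_privs && line.find("SYSPRV") != std::string::npos)
                {
                    std::cout << current_user << '\n';
                    in_default_privs = false;          // report each user once
                }
            }
        }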

  • What alternatives are there to Google App Engine?

    - by Chris Marasti-Georg
    What alternatives are there to GAE, given that I already have a good bit of code working that I would like to keep? In other words, I'm digging Python. However, my use case is more of a low-number-of-requests, higher-CPU-usage type of use case, and I'm worried that I may not be able to stay with App Engine forever. I have heard a lot of people talking about Amazon Web Services and other cloud providers, but I am having a hard time seeing where most of these other offerings provide the range of services (data querying, user authentication, automatic scaling) that App Engine provides. What are my options here?

    Read the article

  • Oracle: TABLE ACCESS FULL with Primary key?

    - by tim
    There is a table:

        CREATE TABLE temp (
          IDR   decimal(9)  NOT NULL,
          IDS   decimal(9)  NOT NULL,
          DT    date        NOT NULL,
          VAL   decimal(10) NOT NULL,
          AFFID decimal(9),
          CONSTRAINT PKtemp PRIMARY KEY (IDR, IDS, DT)
        );

        SQL> explain plan for select * from temp;
        Explained.

        SQL> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));

        PLAN_TABLE_OUTPUT
        ---------------------------------------------------------------
        | Id | Operation         | Name | Rows | Bytes | Cost (%CPU)|
        ---------------------------------------------------------------
        |  0 | SELECT STATEMENT  |      |    1 |    61 |     2   (0)|
        |  1 |  TABLE ACCESS FULL| TEMP |    1 |    61 |     2   (0)|
        ---------------------------------------------------------------

        Note
        -----
           - 'PLAN_TABLE' is old version

        11 rows selected.

    SQL Server 2008 shows a clustered index scan in the same situation. What is the reason?

    Read the article

  • Faking a Single Address Space

    - by dsimcha
    I have a large scientific computing task that parallelizes very well with SMP, but at too fine-grained a level to be easily parallelized via explicit message passing. I'd like to parallelize it across address spaces and physical machines. Is it feasible to create a scheduler that would parallelize already multithreaded code across multiple physical computers under the following conditions: (1) the code is already multithreaded and can scale pretty well on SMP configurations; (2) the fact that not all of the threads are running in the same address space or on the same physical machine must be transparent to the program, even if this comes at a significant performance penalty in some use cases; (3) you may assume that all of the physical machines involved are running operating systems and CPU architectures that are binary compatible; (4) things like locks and atomic operations may be slow (having network latency to deal with and all) but must "just work".

    Read the article

  • prevent linux thread from being interrupted by scheduler

    - by johnnycrash
    How do you tell the thread scheduler in Linux not to interrupt your thread for any reason? I am programming in user mode. Does simply locking a mutex accomplish this? I want to prevent other threads in my process from being scheduled while a certain function is executing; they would block, and I would be wasting CPU cycles on context switches. I want any thread executing the function to be able to finish without interruption, even if the thread's timeslice is exceeded. (A short sketch follows this entry.)

    Read the article
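
    In ordinary user mode the scheduler cannot be told to never interrupt a thread, and locking a mutex only blocks threads that contend for that same mutex; it does not stop preemption. The closest user-space knob on Linux is a real-time scheduling policy such as SCHED_FIFO, which keeps the thread on the CPU until it blocks, yields, or a higher-priority real-time thread becomes runnable. A hedged sketch (needs root or CAP_SYS_NICE, and a runaway SCHED_FIFO thread can monopolize a core):

        #include <pthread.h>
        #include <sched.h>
        #include <cstdio>
        #include <cstring>

        // Give the calling thread a real-time FIFO priority so ordinary
        // (SCHED_OTHER) threads can no longer preempt it on this core.
        static int make_realtime(int priority)
        {
            sched_param param{};
            param.sched_priority = priority;              // 1..99 for SCHED_FIFO on Linux
            return pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
        }

        int main()
        {
            int err = make_realtime(10);
            if (err != 0)                                 // typically needs root or CAP_SYS_NICE
                std::fprintf(stderr, "pthread_setschedparam: %s\n", std::strerror(err));

            // ... run the time-critical function here; it keeps the CPU until it
            // blocks, yields, or a higher-priority real-time thread becomes runnable.
            return 0;
        }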

  • How to optimize indexing of large number of DB records using Zend_Lucene and Zend_Paginator

    - by jdichev
    So I have this cron script that is deployed and run using cron on a host and indexes all the records in a database table; the index is later used both for the front end of the site and for back-end operations. After the operation, the index is about 3-4 MB. The problem is it takes a lot of resources (CPU: 30+ and a good chunk of memory) and slows the machine down. My question is about how to optimize the operation described below. First there is a select query built using the Zend Framework API; this query is then passed to a Paginator factory that returns a paginator, which I am using to limit the number of items being indexed at once rather than iterating over too many items. The script iterates over the current items in the paginator object using a foreach loop until reaching the end, and then starts again after getting the items for the next page. I suspect this overhead is caused by Zend_Lucene, but I have no idea how it could be improved.

    Read the article

  • How to find root cause for "too many connections" error in MySQL/PHP

    - by Nir
    I'm running a web service which runs algorithms that serve millions of calls daily and also runs some background processing. Every now and then I see a "Too many connections" error on attempts to connect to the MySQL box, for a few seconds at a time. However, this is not necessarily attributable to high-traffic times or anything else I can put my finger on. I want to find the bottleneck causing it. Apart from the specific times this happens, the server isn't too loaded in terms of CPU and memory, has 2-3 connections (threads) open, and everything works smoothly. (I use Zabbix for monitoring.) Any creative ideas on how to trace it?

    Read the article

  • Endless saving of CoreData Context

    - by Robert
    Sometimes I notice that a save: operation on an NSManagedObjectContext never returns and consumes 100% CPU. I'm using a SQLite store in a garbage-collected environment (Mac OS X 10.6.3). Disk activity shows about 700 KB/s of writes. Watching the folder that contains the SQLite database file, the "-journal" file appears and disappears, appears and disappears, ... This is part of the call graph from the process analysis: 2203 -[NSManagedObjectContext save:] 1899 -[NSPersistentStoreCoordinator(_NSInternalMethods) executeRequest:withContext:] 1836 -[NSSQLCore executeRequest:withContext:] 1836 -[NSSQLCore saveChanges:] 1479 -[NSSQLCore performChanges] ... 335 -[NSSQLCore recordChangesInContext:] ... 20 -[NSSQLCore rollbackChanges] ... 2 -[NSSQLCore prepareForSave:] ... 62 -[NSPersistentStoreCoordinator(_NSInternalMethods) _checkRequestForStore:originalRequest:andOptimisticLocking:] ... 1 -[NSPersistentStore(_NSInternalMethods) _preflightCrossCheck] ... 184 -[NSMergePolicy resolveConflicts:] ... 120 -[NSManagedObjectContext(_NSInternalChangeProcessing) _prepareForPushChanges:] ... Everything is happening in the main GUI thread. Any ideas what I can do to resolve the problem?

    Read the article

  • How can two programs talk to each other in Java?

    - by Arnon
    I want to reduce the CPU usage/ROM usage/RAM usage - generally, all the system resources that my app uses - and who doesn't? :) For this reason I want to split the preferences window from the rest of the application and let the preferences window run as an independent program. The preferences program should write to a properties file (not a problem at all) and send an "update signal" to the main program - which means it should call the update method (that I wrote) found in the Main class. How can I call the update method in the main program from the preferences program? To put it another way, is there a way to build a preferences window that takes up system resources only while the window is shown? Is this approach - separating programs and letting them talk to each other (somehow) - the right approach for speeding up my programs?

    Read the article

  • Install the Visual C++ Runtime Library of the proper bitness via a Setup project

    - by chiru_valentin
    Hi all! The context: I have a solution that contains, among other C# projects, a VC++ project that supports compiling only as x64 or Win32 (not Any CPU). In order for the application (which in fact is a macro for a third-party application) to run, it requires the Visual C++ runtime libraries (x86) or (x64). (The macro will run on both x64 and x86 operating systems.) The problem: I want to create a Visual Studio setup project that would install the macro on both x86 and x64 operating systems, and the problem I have is specifying which Visual C++ runtime library to use as a prerequisite. If both are selected (x64 and x86), then I get a runtime error when running setup.exe, because on x86 operating systems you cannot run x64 executables such as the Visual C++ runtime libraries (x64) installer (which the setup calls behind the scenes). So I would need a bitness condition, or something like that, to tell the setup which bitness of the Visual C++ runtime library to try to install. I'm not sure if this is possible, or even where such a condition should be placed in the setup. Thank you for the support, Vali

    Read the article
