Search Results

Search found 5701 results on 229 pages for 'cpu cooler'.

Page 168/229

  • typesafe NotifyPropertyChanged using LINQ expressions

    - by bitbonk
    From "Build your own MVVM", I have the following code that gives us typesafe NotifyOfPropertyChange calls:

        public void NotifyOfPropertyChange<TProperty>(Expression<Func<TProperty>> property)
        {
            var lambda = (LambdaExpression)property;
            MemberExpression memberExpression;
            if (lambda.Body is UnaryExpression)
            {
                var unaryExpression = (UnaryExpression)lambda.Body;
                memberExpression = (MemberExpression)unaryExpression.Operand;
            }
            else
                memberExpression = (MemberExpression)lambda.Body;
            NotifyOfPropertyChange(memberExpression.Member.Name);
        }

    How does this approach compare to the standard simple-strings approach performance-wise? Sometimes I have properties that change at a very high frequency. Am I safe to use this typesafe approach? After some first tests it does seem to make a small difference. How much CPU and memory load does this approach potentially induce?

    Read the article

  • NSDictionary with specific property of plist

    - by Leonardo
    Hi, I have a plist with key/value pairs. Each key represents a language, let's say 'it', 'en', etc. The value of each key is another key/value set. At startup I would like to create a dictionary by reading only one specific key. Let's say the locale of my iPhone is 'it'; then the init method would parse only the 'it' key. The only way I have found so far is to make a dictionary from the whole plist file and then build another dictionary from that, but I can imagine this becoming quite CPU consuming if I add more languages in the future. thanks Leonardo

    Read the article

  • Virtual dedicated server repeatedly draining RAM, OOM constantly

    - by Deerly
    My Linux (Fedora Red Hat 7) virtual dedicated server has been experiencing OOM multiple times a day for the past several days. I thought the issue was with spamd/spamassassin, but after disabling this the errors remain. The highest usage displayed by ps faux --cumulative:

        USER  PID   %CPU %MEM VSZ    RSS    TTY STAT START TIME COMMAND
        root  28412 8.7  0.5  309572 109308 ?   Sl   22:15 0:17 /usr/java/jdk1.
        mysql 7716  0.0  0.0  136256 18000  ?   Sl   22:12 0:00 _ /usr/libexe
        named 17697 0.0  0.0  120904 15316  ?   Ssl  22:09 0:00 /usr/sbin/named

    I'm not running any Java applications, so I'm not sure why the top entry is showing up. It is frustrating, as I barely have anything running on the server and use the tiniest fraction of the bandwidth. Any help or suggestions on zeroing in on the source of the drain would be much appreciated! Thanks!

    Read the article

  • How to get regular expression matches between two boundaries

    - by Rubans
    Hi, I have the following text:

        started: Project: ProjectA, Configuration: Release Any CPU ------

    I would like to get just the actual project name, which in this example is "ProjectA". I do have a regular expression, "started:(\s)Project:(\s).*,", which gives me "started: Project: ProjectA,", and I can then use further basic string searching to return the project name. But I was wondering whether I can grab just the project name without the extra string searching, maybe using a better regular expression. What I need is the string value between the boundaries "started: Project: " and ",". Any ideas?
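
    A capture group removes the need for the follow-up search: anchor on the two boundaries and capture non-greedily up to the first comma. A minimal sketch in Python; the same group syntax works unchanged in .NET's Regex class:

        import re

        line = "started: Project: ProjectA, Configuration: Release Any CPU ------"
        # The non-greedy group (.*?) captures everything between the two
        # boundaries and stops at the first comma.
        m = re.search(r"started:\s+Project:\s+(.*?),", line)
        if m:
            print(m.group(1))  # ProjectA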

    Read the article

  • C/C++/Assembly Programmatically detect if hyper-threading is active on Windows, Mac and Linux

    - by HTASSCPP
    I can already detect the number of logical processors correctly on all three of these platforms. To be able to detect the number of physical processors/cores correctly, I'll have to detect whether hyper-threading is supported AND active (or enabled, if you prefer) and, if so, divide the number of logical processors by 2 to determine the number of physical processors. Perhaps I should provide an example: a quad-core Intel CPU with hyper-threading enabled has 4 physical cores yet 8 logical processors (hyper-threading creates 4 more logical processors). So my current function would detect 8 instead of the desired 4. My question therefore is whether there is a way to detect that hyper-threading is supported AND ENABLED.
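
    One Linux-only heuristic compares the per-package counts the kernel already exposes in /proc/cpuinfo: the ht bit in flags means hyper-threading is supported, and siblings exceeding cpu cores means it is actually enabled. A rough sketch of the latter check; on Windows the analogous data comes from GetLogicalProcessorInformation, and on the Mac from sysctl's hw.physicalcpu vs hw.logicalcpu:

        def hyperthreading_active():
            # Linux-only sketch: each package reports 'siblings' (logical
            # processors) and 'cpu cores' (physical cores); hyper-threading
            # is enabled exactly when siblings > cores.
            siblings = cores = None
            with open("/proc/cpuinfo") as f:
                for line in f:
                    if line.startswith("siblings"):
                        siblings = int(line.split(":")[1])
                    elif line.startswith("cpu cores"):
                        cores = int(line.split(":")[1])
                    if siblings is not None and cores is not None:
                        return siblings > cores
            return False

        print(hyperthreading_active())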

    Read the article

  • Make compiler copy characters using movsd

    - by Suma
    I would like to copy a relatively short sequence of memory (less than 1 KB, typically 2-200 bytes) in a time-critical function. The best code for this on the CPU side seems to be rep movsd. However, I somehow cannot make my compiler generate this code. I hoped (and I vaguely remember seeing so) that using memcpy would do this via a compiler built-in intrinsic, but based on disassembly and debugging it seems the compiler calls the memcpy/memmove library implementation instead. I also hoped the compiler might be smart enough to recognize the following loop and use rep movsd on its own, but it seems it does not.

        char *dst;
        const char *src;
        // ...
        for (int r = size; --r >= 0; )
            *dst++ = *src++;

    Is there some way to make the Visual Studio compiler generate a rep movsd sequence other than using inline assembly?

    Read the article

  • Is it more efficient to always run a delete query, or to check if that information exists first

    - by Scarface
    OK, this is going to seem like a dumb question to experienced developers, but questions like this bother me until I ask them. So... is it more efficient to always run a delete query by default whether an entry exists or not, for example to delete a user name after a certain period of time (DELETE FROM table WHERE username = user), or should you check whether that entry exists first, using a select query and mysql_num_rows? What uses more processor on the server side? Obviously one approach contains more code, but I was wondering whether certain MySQL operations use a lot more CPU than others.
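
    For a single-row delete with an indexed username, the DELETE alone is the cheaper path: its WHERE clause performs the same index lookup the SELECT would, so checking first roughly doubles the round trips and lookups, and the driver reports after the fact whether anything was removed. A sketch with Python's DB-API (the driver, table and column names are assumptions):

        import MySQLdb  # assumed driver; any DB-API module behaves the same

        conn = MySQLdb.connect(host="localhost", user="app", passwd="x", db="mydb")
        cur = conn.cursor()
        # The WHERE clause *is* the existence check: one round trip, and
        # rowcount tells you whether anything was actually deleted.
        cur.execute("DELETE FROM users WHERE username = %s", ("scarface",))
        print(cur.rowcount)  # 0 if the entry never existed
        conn.commit()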

    Read the article

  • Image Drawing on UIView

    - by user1180261
    I'm trying to create an application where I can draw a lot of pictures, each at a specific point, on one view. For each image I have the coordinates where it needs to be drawn, plus its width and height. For example: I have 2 billion JPEG images, and for each image I have a specific origin point and size. Within 1 second I need to draw 20-50 images on the view at specific points. I have already tried to solve this in the following way:

        UIGraphicsBeginImageContextWithOptions(self.previewScreen.bounds.size, YES, 0);
        [self.previewScreen.image drawAtPoint:CGPointMake(0, 0)];
        [image drawAtPoint:CGPointMake(nRect.left, nRect.top)];
        UIImage *imagew = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        [self.previewScreen setImage:imagew];

    But with this solution I get very high latency when displaying the images, and big CPU usage. WBR Maxim Tartachnik

    Read the article

  • Java Compiler: Optimization of "cascaded" ifs and best practices?

    - by jens
    Hello, does the Java compiler optimize a statement like this

        if (a == true) {
            if (b == true) {
                if (c == true) {
                    if (d == true) {
                        // code to process stands here
                    }
                }
            }
        }

    to

        if (a == true && b == true && c == true && d == true)

    So that's my first question: do both take exactly the same "CPU cycles", or is the first variant slower? My second question is: is the first variant, with the cascaded ifs, considered bad programming style because it is so verbose? (I like the first variant, as I can group my expressions more logically and comment them better; my if statements are more complex than in the example. But maybe that's bad programming style, and even slower, which is why I am asking...) Thanks, Jens
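
    For what it's worth, && in Java short-circuits, so the chained form evaluates exactly the same conditions in the same order as the nested ifs and a reasonable compiler emits essentially the same branches for both. A quick demonstration of the shared evaluation order, sketched here in Python since its `and` short-circuits the same way:

        def check(name, value):
            print("evaluated", name)
            return value

        # The nested ifs...
        if check("a", True):
            if check("b", False):
                if check("c", True):
                    print("nested body")

        # ...and the chained form evaluate the same conditions in the same
        # order, and both stop at the first False.
        if check("a", True) and check("b", False) and check("c", True):
            print("chained body")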

    Read the article

  • JetBrains dotTrace: is it possible to profile source code line by line? Otherwise I need another tool

    - by m3ntat
    I am using JetBrains dotTrace and have profiled my app, which is entirely CPU bound. But the results as you walk down the tree don't sum to the level above; I only see method calls, not the body lines of the method at the node in question. Is it possible to profile the source code line by line? For example, for one node:

        SimulatePair() 99.04%
        --nextUniform() 30.12%
        --IDCF() 24.08%

    The method calls nextUniform + IDCF use 54% of the time in SimulatePair (or 54% of total execution time; I'm not sure how to read this), but I get no detail on whatever is happening in the other 46% of SimulatePair. I need that detail on a line-by-line basis. Any help or alternative tools is much appreciated. Thanks

    Read the article

  • Filtering at server or at client?

    - by ablmf
    I am thinking about how to build an advertising site that works like Twitter. That means most users don't visit the site with a browser; they run a dedicated client application on their PC or smartphone. They set some filters about what kind of advertisements they like, and when a new post that fulfills their criteria appears, the client raises a notification. To keep the client as close to real time as possible, it has to poll the server at a short interval. The problem is: should I do the filtering on the server side when the client polls, or should I simply transfer all new posts to the client and let the client do the filtering? Server-side filtering might cost too many CPU cycles on the server, but transferring every post blindly to the client might waste a lot of bandwidth. Just a brain game. :)
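
    As a rough sketch of the server-side option: if the server keeps each client's filter plus a last-seen cursor, the per-poll cost is linear in the number of posts newer than that cursor, not in the total post count, which bounds the CPU side of the trade-off (all names below are hypothetical):

        # Hypothetical server-side poll handler: match only posts newer than
        # the client's cursor against that client's stored keyword filter.
        def poll(new_posts, last_seen_id, keywords):
            fresh = (p for p in new_posts if p["id"] > last_seen_id)
            return [p for p in fresh
                    if any(k in p["text"].lower() for k in keywords)]

        posts = [{"id": 2, "text": "CPU cooler, barely used"}]
        print(poll(posts, last_seen_id=1, keywords=["cooler"]))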

    Read the article

  • Rendering to a single Bitmap object from multiple threads

    - by Lee Treveil
    What I'm doing is rendering a number of bitmaps into a single bitmap. There could be hundreds of images, and the bitmap being rendered to could be over 1000x1000 pixels. I'm hoping to speed up this process by using multiple threads, but since the Bitmap object is not thread-safe it can't be rendered to directly and concurrently. What I'm thinking is to split the large bitmap into sections, one per CPU, render them separately, then join them back together at the end. I haven't done this yet, in case you guys/girls have any better suggestions. Any ideas? Thanks
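
    A sketch of that split-render-join shape, in Python/Pillow terms for illustration: each worker composes into its own private band, so nothing shared is written until the single-threaded join at the end. In C# each worker would get its own Bitmap and Graphics the same way; note that CPython threads add no CPU parallelism because of the GIL, so this only shows the structure, not the speedup:

        from concurrent.futures import ThreadPoolExecutor
        from PIL import Image  # pip install Pillow

        WIDTH, HEIGHT, WORKERS = 1000, 1000, 4
        BAND = HEIGHT // WORKERS

        def render_band(i):
            # Private per-worker bitmap: no locking needed while rendering.
            band = Image.new("RGB", (WIDTH, BAND), "black")
            # ... paste the source images that intersect band i here ...
            return i, band

        final = Image.new("RGB", (WIDTH, HEIGHT))
        with ThreadPoolExecutor(max_workers=WORKERS) as pool:
            for i, band in pool.map(render_band, range(WORKERS)):
                final.paste(band, (0, i * BAND))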

    Read the article

  • Raspberry Pi cluster, neural networks and brain simulation

    - by jokoon
    Since the RBPI (Raspberry Pi) has very low power consumption and a very low production price, one could build a very big cluster with them. I'm not sure, but a cluster of 100,000 RBPIs would take little power and little room. Now, it might not be as powerful as existing supercomputers in terms of FLOPS or other sorts of computing measurements, but could it allow better neural-network simulation? I'm not sure whether saying "1 CPU = 1 neuron" is a reasonable statement, but it seems valid enough. So would such a cluster be more efficient for neural-network simulation, since it's far more parallel than other classical clusters?

    Read the article

  • Why doesn't Linux use the hardware context switch via the TSS?

    - by smwikipedia
    Hi guys! I read the following statement: "The x86 architecture includes a specific segment type called the Task State Segment (TSS), to store hardware contexts. Although Linux doesn't use hardware context switches, it is nonetheless forced to set up a TSS for each distinct CPU in the system." I am wondering: why doesn't Linux use the hardware support for context switches? Isn't the hardware approach much faster than the software approach? Is there any OS which does take advantage of the hardware context switch? Does Windows use it? As usual, thanks for your patience and replies.

    Read the article

  • Inserting a large volume of data in SQL Server 2005

    - by Manjoor
    We have an application (written in C#) that stores live stock-market prices in a database (SQL Server 2005). It inserts about 1 million records in a single day. Now we are adding some more market segments, and the number of records will double (2 million/day). Currently the average record-insertion rate is about 50 per second; the maximum is 450 and the minimum is 0. To check certain conditions I have used Service Broker (as an asynchronous trigger) on my price table. It is running fine at this time (about 35% CPU utilization). Now I am planning to create an in-memory dataset of current stock prices, on which we would like to do some simple calculations. I would like to hear members' different views on this. Please share your way of dealing with such a situation.
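
    For the in-memory dataset, one simple shape is a lock-guarded map from symbol to latest tick, maintained on the insert path so the calculations never touch the price table. A sketch (all names are made up):

        import threading

        # Hypothetical in-memory snapshot of the latest price per symbol,
        # updated as each tick is written to the database.
        latest = {}
        lock = threading.Lock()

        def on_tick(symbol, price):
            with lock:
                latest[symbol] = price

        def spread(sym_a, sym_b):
            with lock:
                return latest[sym_a] - latest[sym_b]

        on_tick("ABC", 101.5)
        on_tick("XYZ", 99.0)
        print(spread("ABC", "XYZ"))  # 2.5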

    Read the article

  • What is good server performance monitoring software for Windows?

    - by Luke
    I'm looking for some software to monitor a single server for performance alerts, preferably free and with a reasonable default configuration. Edit: To clarify, I would like to run this software on a Windows machine and monitor a remote Windows server for CPU/memory/etc. usage alerts (not a single application). Edit: I suppose it's not necessary that this software be run remotely; I would also settle for something that ran on the server and emailed me if there was an alert. It seems like Windows performance logs and alerts might be usable for this purpose somehow, but it was not immediately obvious to me how. Edit: Found a neat tool on the Coding Horror blog; not as useful for remote monitoring, but very useful for the things you would worry about as a server admin: http://www.winsupersite.com/showcase/winvista_ff_rmon.asp

    Read the article

  • Django admin causes high load for one model...

    - by Joe
    In my Django admin, when I try to view/edit objects of one particular model class, memory usage and CPU shoot up and I have to restart the server. I can view the list of objects fine; the problem comes when I click on one of the objects. Other models are fine. Working with the object in code (i.e. creating and displaying it) is OK; the problem only arises when I try to view an object in the admin interface. The class isn't even particularly exotic:

        class Comment(models.Model):
            user = models.ForeignKey(User)
            thing = models.ForeignKey(Thing)
            date = models.DateTimeField(auto_now_add=True)
            content = models.TextField(blank=True, null=True)
            approved = models.BooleanField(default=True)

            class Meta:
                ordering = ['-date']

    Any ideas? I'm stumped. The only cause I could think of is that the thing is quite a large object (a few KB), but as I understand it, it wouldn't get loaded until it was needed (correct?).
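
    One common cause of exactly this symptom: the default admin change form renders each ForeignKey as a select box populated with every related row, so opening a single Comment pulls every User and every Thing out of the database. If that is what's happening here, raw_id_fields sidesteps it (a sketch; the import path is assumed):

        from django.contrib import admin
        from myapp.models import Comment  # import path assumed

        class CommentAdmin(admin.ModelAdmin):
            # Render the FKs as plain id inputs with a lookup popup instead
            # of select boxes listing every User and Thing in the database.
            raw_id_fields = ("user", "thing")

        admin.site.register(Comment, CommentAdmin)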

    Read the article

  • Compiling a C program for a specific architecture

    - by Marplesoft
    I was recently fighting some problems trying to compile an open-source library on my Mac that depended on another library, and I got some errors about incompatible library architectures. Can somebody explain the concept behind compiling a C program for a specific architecture? I have seen the -arch compiler flag before, and have seen values passed to it such as ppc, i386 and x86_64, which I assume map to the CPU "language", but my understanding stops there. If a program uses a particular architecture, do all the libraries it loads need to be built for the same architecture as well? How can I tell what architecture a given program/process is running under?

    Read the article

  • Fast multi-window rendering with C#

    - by seb
    I've been searching for and testing different kinds of rendering libraries for C# for many weeks now. So far I haven't found a single library that works well in multi-windowed rendering setups. The requirement is to be able to run the program on 12+ monitor setups (financial charting) without latencies on a fast computer. Each window needs to update multiple times every second. While doing this, the CPU needs to do lots of intensive and time-critical tasks, so some of the burden has to be shifted to the GPU. That's where hardware rendering steps in, in other words DirectX or OpenGL.

    I have tried GDI+ with Windows Forms and figured it's way too slow for my needs. I have tried OpenGL via OpenTK (on a Windows Forms control), which seemed decently quick (I still have some tests to run on it) but painfully difficult to get working properly (it is hard to find and program good text-rendering libraries). Recently I tried DirectX9, DirectX10 and Direct2D with Windows Forms via SharpDX. I tried a separate device for each window as well as a single device with multiple swap chains. All of these resulted in very poor performance on multiple windows. For example, if I set the target FPS to 20 and open 4 full-screen windows on different monitors, the whole operating system starts lagging very badly. Rendering is simply clearing the screen to black; no primitives are rendered. CPU usage in this test was about 0% and GPU usage about 10%; I don't understand what the bottleneck is here. My development computer is very fast (i7 2700K, AMD HD7900, 16 GB RAM), so the tests should definitely run on it.

    In comparison, I did some DirectX9 tests in C++/Win32 API with one device and multiple swap chains, and I could open 100 windows spread all over the 4-monitor workspace (with a 3D teapot rotating in them) and still had a perfectly responsive operating system (the FPS on the rendering windows was of course dropping quite badly, to around 5, which is what I would expect with 100 simultaneous renderings).

    Does anyone know any good ways to do multi-windowed rendering in C#, or am I forced to rewrite my program in C++ to get that performance (major pain)? I guess I'm giving OpenGL another shot before I go the C++ route... I'll report any findings here.
    Test methods for reference: for the C# DirectX one-device, multiple-swap-chain test I used the method from this excellent answer: Display Different images per monitor directX 10

    Direct3D10 version. I created the D3D10 device and DXGI factory like this:

        D3DDev = new SharpDX.Direct3D10.Device(SharpDX.Direct3D10.DriverType.Hardware, SharpDX.Direct3D10.DeviceCreationFlags.None);
        DXGIFac = new SharpDX.DXGI.Factory();

    Then I initialized the rendering windows like this:

        var scd = new SwapChainDescription();
        scd.BufferCount = 1;
        scd.ModeDescription = new ModeDescription(control.Width, control.Height, new Rational(60, 1), Format.R8G8B8A8_UNorm);
        scd.IsWindowed = true;
        scd.OutputHandle = control.Handle;
        scd.SampleDescription = new SampleDescription(1, 0);
        scd.SwapEffect = SwapEffect.Discard;
        scd.Usage = Usage.RenderTargetOutput;
        SC = new SwapChain(Parent.DXGIFac, Parent.D3DDev, scd);
        var backBuffer = Texture2D.FromSwapChain<Texture2D>(SC, 0);
        _rt = new RenderTargetView(Parent.D3DDev, backBuffer);

    The drawing command executed on each rendering iteration is simply:

        Parent.D3DDev.ClearRenderTargetView(_rt, new Color4(0, 0, 0, 0));
        SC.Present(0, SharpDX.DXGI.PresentFlags.None);

    The DirectX9 version is very similar. Device initialization:

        PresentParameters par = new PresentParameters();
        par.PresentationInterval = PresentInterval.Immediate;
        par.Windowed = true;
        par.SwapEffect = SharpDX.Direct3D9.SwapEffect.Discard;
        par.PresentationInterval = PresentInterval.Immediate;
        par.AutoDepthStencilFormat = SharpDX.Direct3D9.Format.D16;
        par.EnableAutoDepthStencil = true;
        par.BackBufferFormat = SharpDX.Direct3D9.Format.X8R8G8B8;
        // firsthandle is the handle of the first rendering window
        D3DDev = new SharpDX.Direct3D9.Device(new Direct3D(), 0, DeviceType.Hardware, firsthandle, CreateFlags.SoftwareVertexProcessing, par);

    Rendering window initialization:

        if (parent.D3DDev.SwapChainCount == 0)
        {
            SC = parent.D3DDev.GetSwapChain(0);
        }
        else
        {
            PresentParameters pp = new PresentParameters();
            pp.Windowed = true;
            pp.SwapEffect = SharpDX.Direct3D9.SwapEffect.Discard;
            pp.BackBufferFormat = SharpDX.Direct3D9.Format.X8R8G8B8;
            pp.EnableAutoDepthStencil = true;
            pp.AutoDepthStencilFormat = SharpDX.Direct3D9.Format.D16;
            pp.PresentationInterval = PresentInterval.Immediate;
            SC = new SharpDX.Direct3D9.SwapChain(parent.D3DDev, pp);
        }

    Code for the drawing loop:

        SharpDX.Direct3D9.Surface bb = SC.GetBackBuffer(0);
        Parent.D3DDev.SetRenderTarget(0, bb);
        Parent.D3DDev.Clear(ClearFlags.Target, Color.Black, 1f, 0);
        SC.Present(Present.None, new SharpDX.Rectangle(), new SharpDX.Rectangle(), HWND);
        bb.Dispose();

    The C++ DirectX9/Win32 API test with multiple swap chains and one device is here: http://pastebin.com/tjnRvATJ It's a modified version of Kevin Harris's nice example code.

    Read the article

  • haystack's RealTimeSearchIndex causes django to hang on data entry

    - by lsc
    I'm using django-haystack and a Xapian backend with real-time indexing (haystack.indexes.RealTimeSearchIndex) of model data, and it works fine on my Ubuntu server. However, it causes Django to hang on data entry when I deploy the app on a RHEL5 server. Everything is hunky-dory if I switch to a standard SearchIndex. Running ./manage.py rebuild_index manually works fine too. The major differences between the two setups would be the versions of Python (2.4.3 vs 2.6.4) and of Xapian (1.0.4-1 vs 1.0.15). Any suggestions on what the problem may be? Nothing interesting appears in the logs, and I've tried different databases (mysql, sqlite3) and deployment methods (mod_python, wsgi) with no luck yet. I have noted the warning in the haystack docs stating that RealTimeSearchIndex is only handled gracefully with a Solr backend; however, I'm running a very low-traffic site with only occasional writes, so I'm fine with some CPU overhead on writes.

    Read the article

  • Get Hardware Information for Hardware that is not installed

    - by Isaac
    I am pretty sure how to retrieve hardware information with WMI classes, but WMI has a big limitation: it can only get information for installed hardware. I need to retrieve information about the CPU (model, speed, etc.), video card, sound card, USB ports, etc. I found a really good piece of software (HWiNFO) that can do this even when the drivers for the hardware parts are not installed. It seems that HWiNFO uses an internal database to give a name to each hardware part. So, is there any free library/DLL/component that can do this on Windows XP or higher? Note: although the HWiNFO SDK seems good, it's not free. So it doesn't exist! ;) I need a free library.

    Read the article

  • Is it possible to cache JSP bytecode to avoid recompiles w/ Tomcat?

    - by Computer Guru
    Hi, is there any way of caching the bytecode for JSP webapps, in particular when using Tomcat as the servlet container? I'm getting really fed up with Tomcat taking up all the CPU for 10 minutes while it compiles 4 different webapps every time I restart it... I'm already using Jikes to "speed up" the compiles, but it's really killing me. The code does not change unless the webapp is upgraded (very rarely), and I cannot believe that there is no way to cache the compiled Java bytecode instead of recompiling it each and every time. I'd appreciate any advice on the matter!

    Read the article

  • Illegal instruction in Assembly

    - by Natasha
    I really do not understand why this simple code works fine in the first attempt, but when I put the cursor code in a procedure this error shows:

        NTVDM CPU has encountered an illegal instruction CS:db22 IP:4de4 OP:f0 ff ff ff ff

    The first code segment works just fine:

        .model small
        .stack 100h
        .code
        start:
            mov ax,@data
            mov ds,ax
            mov es,ax

            MOV AH,02H      ;sets cursor up
            MOV BH,00H
            MOV DH,02
            MOV DL,00
            INT 10H

        EXIT:
            MOV AH,4CH
            INT 21H
        END

    However, this generates an error:

        .model small
        .stack 100h
        .code
        start:
            mov ax,@data
            mov ds,ax
            mov es,ax
            call set_cursor

        PROC set_cursor near
            MOV AH,02H      ;sets cursor up
            MOV BH,00H
            MOV DH,02
            MOV DL,00
            INT 10H
            RET
        set_cursor ENDP

        EXIT:
            MOV AH,4CH
            INT 21H
        END

    Note: nothing is wrong with my Windows configuration; I have tried many sample codes that work fine. Thanks

    Read the article

  • writing a fast parser in python

    - by panzi
    I've written a hands-on, recursive, pure-Python parser for a file format (ARFF) we use in a lecture. Running my exercise submission is now awfully slow, and it turns out that by far the most time is spent in my parser. It's consuming a lot of CPU time; the HD is not the bottleneck. I wonder what performant ways there are to write a parser in Python? I'd rather not rewrite it in C. I tried to use Jython, but that decreased performance a lot! The files I parse are partially huge (> 150 MB) with very long lines, and my current parser only needs a look-ahead of one character. I'd post the source here, but I don't know if that's such a good idea; after all, the submission deadline has not yet passed. But then, the focus in this exercise is not the parser. You can choose whatever language you want to use, and there already is a parser for Java.
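
    The usual pure-Python wins are to eliminate per-character function calls: read the input in large chunks (or whole; 150 MB fits in memory on most machines), walk an integer cursor, and treat buf[pos] as the one-character look-ahead. A sketch of that shape, with whitespace splitting standing in for the real ARFF rules (the file name is assumed):

        def tokens(buf):
            # Integer cursor over one big string: buf[pos] is the one-char
            # look-ahead, with no per-character read() or method-call cost.
            pos, n, out = 0, len(buf), []
            while pos < n:
                if buf[pos] in " \t\r\n":
                    pos += 1
                    continue
                start = pos
                while pos < n and buf[pos] not in " \t\r\n":
                    pos += 1
                out.append(buf[start:pos])
            return out

        with open("data.arff") as f:  # file name assumed
            print(len(tokens(f.read())))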

    Read the article

  • mysql query performance help

    - by Stefano
    Hi, I have a quite large table storing the words contained in email messages:

        mysql> explain t_message_words;
        +----------------+---------+------+-----+---------+----------------+
        | Field          | Type    | Null | Key | Default | Extra          |
        +----------------+---------+------+-----+---------+----------------+
        | mwr_key        | int(11) | NO   | PRI | NULL    | auto_increment |
        | mwr_message_id | int(11) | NO   | MUL | NULL    |                |
        | mwr_word_id    | int(11) | NO   | MUL | NULL    |                |
        | mwr_count      | int(11) | NO   |     | 0       |                |
        +----------------+---------+------+-----+---------+----------------+

    The table contains about 100M rows. mwr_message_id is an FK to the messages table, mwr_word_id is an FK to the words table, and mwr_count is the number of occurrences of word mwr_word_id in message mwr_message_id. To calculate the most used words, I use the following query:

        SELECT SUM(mwr_count) AS word_count, mwr_word_id
        FROM t_message_words
        GROUP BY mwr_word_id
        ORDER BY word_count DESC
        LIMIT 100;

    It runs almost forever (more than half an hour on the test server):

        mysql> show processlist;
        | Id | User | Host           | db     | Command | Time | State                | Info
        | 41 | root | localhost:3148 | tst_db | Query   | 1955 | Copying to tmp table | SELECT SUM(mwr_count) AS word_count, mwr_word_id FROM t_message_words GROUP BY mwr_word_id |
        3 rows in set (0.00 sec)

    Is there anything I can do to "speed up" the query (apart from adding more RAM, more CPU, faster disks)? Thank you in advance, stefano
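
    The "Copying to tmp table" state is the GROUP BY materializing intermediate results for all 100M rows. One standard remedy is a composite index on (mwr_word_id, mwr_count): the query touches only those two columns, so MySQL can feed the GROUP BY by scanning the index in mwr_word_id order instead of building the temporary table. A sketch of the one-off change (the index name is arbitrary and the connection details are assumed):

        import MySQLdb  # assumed driver

        conn = MySQLdb.connect(host="localhost", user="root", passwd="", db="tst_db")
        # Composite "covering" index: both columns the query reads live in
        # the index, so the whole aggregation can be answered from it.
        conn.cursor().execute(
            "CREATE INDEX ix_word_count ON t_message_words (mwr_word_id, mwr_count)"
        )
        conn.commit()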

    Read the article
