Search Results

Search found 5756 results on 231 pages for 'cpu utilization'.

Page 221/231 | < Previous Page | 217 218 219 220 221 222 223 224 225 226 227 228  | Next Page >

  • How to produce precisely-timed tone and silence?

    - by Bob Denny
    I have a C# project that plays Morse code for RSS feeds. I wrote it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine-wave bursts interspersed with periods of silence (the code), precisely timed as to their duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep(), then play another, etc. At its fastest, the tones and spaces can be as short as 40 ms.

    It works quite well in Managed DirectX. To get a precisely timed tone I render 1 second of sine wave into a secondary buffer; then, to play a tone of a certain duration, I seek forward to within x milliseconds of the end of the buffer and play.

    I've tried System.Media.SoundPlayer. It's a loser because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, varying with CPU load; it takes an indeterminate amount of time to actually stop the tone.

    I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory-resident stream providing the tone data, again seeking forward to leave the desired length of tone remaining in the stream, then playing. This worked OK with the DirectSoundOut class for a while (see below), but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd, since I play to the end and then wait for the same duration between the end of one tone and the start of the next; you'd think that 80 ms after starting Play of a 40 ms tone there wouldn't be buffers on the queue.

    DirectSoundOut works well for a while, but its problem is that for every tone-burst Play() it spins off a separate thread. Eventually (5 min or so) it just stops working; you can see thread after thread exiting in the Output window while running the project in the VS2008 IDE. I don't create new objects during playback, I just Seek() the tone stream and call Play() over and over, so I don't think it's a problem with orphaned buffers (or whatever) piling up until it chokes.

    I'm out of patience on this one, so I'm asking in the hope that someone here has faced a similar requirement and can steer me in a direction with a likely solution.
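    A minimal sketch of the pre-rendered buffer idea described above, assuming 44.1 kHz 16-bit mono PCM (the class, names, and the 700 Hz sidetone pitch are illustrative, not from the original project; the original wrote into a DirectX secondary buffer rather than a plain array):

        using System;

        static class MorseTone
        {
            const int SampleRate = 44100;
            const double Frequency = 700.0;   // assumed sidetone pitch

            // Render 1 second of sine into a 16-bit mono buffer, once, up front.
            public static short[] BuildToneBuffer()
            {
                var samples = new short[SampleRate];
                for (int i = 0; i < samples.Length; i++)
                    samples[i] = (short)(0.8 * short.MaxValue *
                                         Math.Sin(2.0 * Math.PI * Frequency * i / SampleRate));
                return samples;
            }

            // Sample offset to seek to so that exactly 'milliseconds' of tone remain
            // between the play position and the end of the buffer.
            public static int SeekOffsetForTone(int milliseconds)
            {
                return SampleRate - SampleRate * milliseconds / 1000;
            }
        }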

    Read the article

  • Xcache - No different after using it

    - by Charles Yeung
    Hi, I have installed XCache on my site (using XAMPP). I have tested more than 10 times on several pages and the result is the same as the default (no cache installed at all). Is something wrong with the configuration? Updated:

        [xcache-common]
        ;; install as zend extension (recommended), normally "$extension_dir/xcache.so"
        zend_extension = /usr/local/lib/php/extensions/non-debug-non-zts-xxx/xcache.so
        zend_extension_ts = /usr/local/lib/php/extensions/non-debug-zts-xxx/xcache.so
        ;; For windows users, replace xcache.so with php_xcache.dll
        zend_extension_ts = C:\xampp\php\ext\php_xcache.dll
        ;; or install as extension, make sure your extension_dir setting is correct
        ; extension = xcache.so
        ;; or win32:
        ; extension = php_xcache.dll

        [xcache.admin]
        xcache.admin.enable_auth = On
        xcache.admin.user = "mOo"
        ; xcache.admin.pass = md5($your_password)
        xcache.admin.pass = ""

        [xcache]
        ; ini only settings, all the values here is default unless explained
        ; select low level shm/allocator scheme implemenation
        xcache.shm_scheme = "mmap"
        ; to disable: xcache.size=0
        ; to enable : xcache.size=64M etc (any size > 0) and your system mmap allows
        xcache.size = 60M
        ; set to cpu count (cat /proc/cpuinfo |grep -c processor)
        xcache.count = 1
        ; just a hash hints, you can always store count(items) > slots
        xcache.slots = 8K
        ; ttl of the cache item, 0=forever
        xcache.ttl = 0
        ; interval of gc scanning expired items, 0=no scan, other values is in seconds
        xcache.gc_interval = 0
        ; same as aboves but for variable cache
        xcache.var_size = 4M
        xcache.var_count = 1
        xcache.var_slots = 8K
        ; default ttl
        xcache.var_ttl = 0
        xcache.var_maxttl = 0
        xcache.var_gc_interval = 300
        xcache.test = Off
        ; N/A for /dev/zero
        xcache.readonly_protection = Off
        ; for *nix, xcache.mmap_path is a file path, not directory.
        ; Use something like "/tmp/xcache" if you want to turn on ReadonlyProtection
        ; 2 group of php won't share the same /tmp/xcache
        ; for win32, xcache.mmap_path=anonymous map name, not file path
        xcache.mmap_path = "/dev/zero"
        ; leave it blank(disabled) or "/tmp/phpcore/"
        ; make sure it's writable by php (without checking open_basedir)
        xcache.coredump_directory = ""
        ; per request settings
        xcache.cacher = On
        xcache.stat = On
        xcache.optimizer = Off

        [xcache.coverager]
        ; per request settings
        ; enable coverage data collecting for xcache.coveragedump_directory and
        ; xcache_coverager_start/stop/get/clean() functions (will hurt executing performance)
        xcache.coverager = Off
        ; ini only settings
        ; make sure it's readable (care open_basedir) by coverage viewer script
        ; requires xcache.coverager=On
        xcache.coveragedump_directory = ""

    Thank you.

    Read the article

  • Printf in assembler doesn't print

    - by Gaim
    Hi there, I have a homework assignment to hack a program using a buffer overflow (by disassembling it; the program was written in C++ and I don't have the source code). I have already managed that, but I have a problem. I have to print a message on the screen, so I found the address of the printf function, pushed the address of "HACKED" and the address of "%s" onto the stack (in this order), and called the function. The injected code executes, but nothing is printed. I have tried to reproduce the environment of other call sites in the program, but something must still be wrong. Do you have any idea why I get no output? Thanks a lot.

    EDIT: The program runs on Windows XP SP3, 32-bit; written in C++, Intel asm. Here is the "hack" code:

        CPU Disasm
        Address    Hex dump         Command                                    Comments
        0012F9A3   90               NOP                                        ;hack begins
        0012F9A4   90               NOP
        0012F9A5   90               NOP
        0012F9A6   89E5             MOV EBP,ESP
        0012F9A8   83EC 7F          SUB ESP,7F                                 ;creating a place for working data
        0012F9AB   83EC 7F          SUB ESP,7F
        0012F9AE   31C0             XOR EAX,EAX
        0012F9B0   50               PUSH EAX
        0012F9B1   50               PUSH EAX
        0012F9B2   50               PUSH EAX
        0012F9B3   89E8             MOV EAX,EBP
        0012F9B5   83E8 09          SUB EAX,9
        0012F9B8   BA 1406EDFF      MOV EDX,FFED0614                           ;address to jump, negative because there mustn't be 00 bytes
        0012F9BD   F7DA             NOT EDX
        0012F9BF   FFE2             JMP EDX                                    ;I have to jump because some values are overwritten by the program
        0012F9C1   90               NOP
        0012F9C2   0090 00000000    ADD BYTE PTR DS:[EAX],DL
        0012F9C8   90               NOP
        0012F9C9   90               NOP
        0012F9CA   90               NOP
        0012F9CB   90               NOP
        0012F9CC   6C               INS BYTE PTR ES:[EDI],DX                   ; I/O command
        0012F9CD   65:6E            OUTS DX,BYTE PTR GS:[ESI]                  ; I/O command
        0012F9CF   67:74 68         JE SHORT 0012FA3A                          ; Superfluous address size prefix
        0012F9D2   2069 73          AND BYTE PTR DS:[ECX+73],CH
        0012F9D5   203439           AND BYTE PTR DS:[EDI+ECX],DH
        0012F9D8   34 2C            XOR AL,2C
        0012F9DA   2066 69          AND BYTE PTR DS:[ESI+69],AH
        0012F9DD   72 73            JB SHORT 0012FA52
        0012F9DF   74 20            JE SHORT 0012FA01
        0012F9E1   3120             XOR DWORD PTR DS:[EAX],ESP
        0012F9E3   6C               INS BYTE PTR ES:[EDI],DX                   ; I/O command
        0012F9E4   696E 65 7300909  IMUL EBP,DWORD PTR DS:[ESI+65],-6F6FFF8D
        0012F9EB   90               NOP
        0012F9EC   90               NOP
        0012F9ED   90               NOP
        0012F9EE   31DB             XOR EBX,EBX                                ; hack continues
        0012F9F0   8818             MOV BYTE PTR DS:[EAX],BL                   ; writing 00 behind the word "HACKED"
        0012F9F2   83E8 06          SUB EAX,6
        0012F9F5   50               PUSH EAX                                   ; address of "HACKED"
        0012F9F6   B8 3B8CBEFF      MOV EAX,FFBE8C3B
        0012F9FB   F7D0             NOT EAX
        0012F9FD   50               PUSH EAX                                   ; address of "%s"
        0012F9FE   B8 FFE4BFFF      MOV EAX,FFBFE4FF
        0012FA03   F7D0             NOT EAX
        0012FA05   FFD0             CALL EAX                                   ;address of printf

    This code is really ugly because I am new to assembler, and there mustn't be any null bytes because of the buffer-overflow bug.

    Read the article

  • NSKeyedUnarchiver chokes when trying to unarchive more than one object

    - by ajduff574
    We've got a custom matrix class, and we're attempting to archive and unarchive an NSArray containing four of them. The first seems to get unarchived fine (we can see that initWithCoder is called once), but then the program simply hangs, using 100% CPU. It doesn't continue or output any errors. These are the relevant methods from the matrix class (rows, columns, and matrix are our only instance variables):

        -(void)encodeWithCoder:(NSCoder*) coder
        {
            float temp[rows * columns];
            for(int i = 0; i < rows; i++) {
                for(int j = 0; j < columns; j++) {
                    temp[columns * i + j] = matrix[i][j];
                }
            }
            [coder encodeBytes:(const void *)temp length:rows*columns*sizeof(float) forKey:@"matrix"];
            [coder encodeInteger:rows forKey:@"rows"];
            [coder encodeInteger:columns forKey:@"columns"];
        }

        -(id)initWithCoder:(NSCoder *) coder
        {
            if (self = [super init]) {
                rows = [coder decodeIntegerForKey:@"rows"];
                columns = [coder decodeIntegerForKey:@"columns"];
                NSUInteger * len;
                *len = (unsigned int)(rows * columns * sizeof(float));
                float * temp = (float * )[coder decodeBytesForKey:@"matrix" returnedLength:len];
                matrix = (float ** )calloc(rows, sizeof(float*));
                for (int i = 0; i < rows; i++) {
                    matrix[i] = (float*)calloc(columns, sizeof(float));
                }
                for(int i = 0; i < rows * columns; i++) {
                    matrix[i / columns][i % columns] = temp[i];
                }
            }
            return self;
        }

    And this is really all we're trying to do:

        NSArray * weightMatrices = [NSArray arrayWithObjects:w1,w2,w3,w4,nil];
        [NSKeyedArchiver archiveRootObject:weightMatrices toFile:@"weights.archive"];
        NSArray * newWeights = [NSKeyedUnarchiver unarchiveObjectWithFile:@"weights.archive"];

    What's driving us crazy is that we can archive and unarchive a single matrix just fine. We've done so (successfully) with a matrix many times larger than these four combined.

    Read the article

  • Fairness: Where can it be better handled?

    - by Srinivas Nayak
    Hi, I would like to share one of my practical experiences with multiprogramming here.

    Yesterday I wrote a multiprogram. Modifications to shared resources were put inside critical sections protected by P(mutex) and V(mutex), and that critical-section code was placed in a common library. The library is used by concurrent applications (of my own). I had three applications that use the common code from the library and do their work independently.

        my library
        ----------
        work_on_shared_resource {
            P(mutex)
            get_shared_resource
            work_with_it
            V(mutex)
        }

        my applications
        ---------------
        application1 {
            *[ work_on_shared_resource
               do_something_else_non_critical ]
        }
        application2 {
            *[ work_on_shared_resource
               do_something_else_non_critical ]
        }
        application3 {
            *[ work_on_shared_resource ]
        }

        *[...] denotes a loop.

    I had to run the applications on Linux. I had held the belief for years that the OS schedules all the processes running under it with complete fairness; in other words, that it gives every process an equal slice of resource usage. When the first two applications were put to work, they ran perfectly well without deadlock. But when the third application started running, it always got the resource: since it does nothing in its non-critical region, it re-acquires the shared resource far more often while the other tasks are busy elsewhere. So the other two applications were found almost totally halted. When the third application was terminated forcefully, the previous two applications resumed their work as before. I think this is a case of starvation: the first two applications had to starve.

    Now, how can we ensure fairness? I have started to believe that the OS scheduler is innocent and blind: whoever wins the race gets the largest slice of CPU and resource. Should we attempt to ensure fairness among resource users in the critical-section code in the library? Or should we leave it up to the applications to ensure fairness by being liberal, not greedy? To my knowledge, adding fairness code to the common library would be an overwhelming task. On the other hand, relying on the applications will also never ensure 100% fairness: an application that does very little work after using the shared resource will win the race, whereas an application that does heavy processing after working with the shared resource will always starve. What is the best practice in this case? Where should we ensure fairness, and how? Sincerely, Srinivas Nayak
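    One classic way to get FIFO fairness at the lock itself is a ticket lock: waiters are served strictly in arrival order, so a fast-looping caller cannot starve the others. A minimal C# sketch of the idea (illustrative only; the post's code is pseudocode, and whether a ticket discipline fits the actual library is an open design question):

        using System.Threading;

        class TicketLock
        {
            private int _nextTicket;   // next ticket to hand out
            private int _nowServing;   // ticket currently allowed into the critical section

            public void Enter()
            {
                // Take a ticket atomically; Increment returns the new value.
                int myTicket = Interlocked.Increment(ref _nextTicket) - 1;
                var spin = new SpinWait();
                while (Thread.VolatileRead(ref _nowServing) != myTicket)
                    spin.SpinOnce();   // backs off and yields as the wait grows longer
            }

            public void Exit()
            {
                // Hand the critical section to the next ticket holder in line.
                Interlocked.Increment(ref _nowServing);
            }
        }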

    Read the article

  • "Emulating" Application.Run using Application.DoEvents

    - by Luca
    I'm in trouble: I'm trying to emulate the call Application.Run using Application.DoEvents... this sounds bad, so I would also accept alternative solutions to my question... I have to handle a message pump the way Application.Run does, but I need to execute code before and after the message handling. Here is the main snippet of code:

        // Create barrier (multiple kernel synchronization)
        sKernelBarrier = new KernelBarrier(sKernels.Count);

        foreach (RenderKernel k in sKernels) {
            // Create rendering contexts (one for each kernel)
            k.CreateRenderContext();
            // Start render kernel threads
            k.mThread = new Thread(RenderKernelMain);
            k.mThread.Start(k);
        }

        while (sKernelBarrier.KernelCount > 0) {
            // Wait until all kernel loops have finished
            sKernelBarrier.WaitKernelBarrier();
            // Do application events
            Application.DoEvents();
            // Execute shared context services
            foreach (RenderKernelContextService s in sContextServices)
                s.Execute(sSharedContext);
            // Next kernel render loop
            sKernelBarrier.ReleaseKernelBarrier();
        }

    This snippet is executed by the Main routine. Practically, I have a list of Kernel classes which run in separate threads; these threads each handle a Form for rendering in OpenGL. I need to synchronize all the Kernel threads using a barrier, and this works perfectly. Of course, I need to handle Form messages in the main thread (the Main routine) for every Form created, and indeed I call Application.DoEvents() to do the job.

    Now I have to modify the snippet above to have a common Form (a simple dialog box) without consuming 100% of the CPU calling Application.DoEvents(), as Application.Run manages. The goal is for the snippet above to handle messages as they arrive and issue a rendering (releasing the barrier) only when necessary, without trying to reach the maximum FPS; there should still be the possibility of switching to a strict loop to render as much as possible. How could this be done?

    Note: the snippet above must be executed in the Main routine, since the OpenGL context is created on the main thread. Moving the snippet to a separate thread and calling Application.Run is quite unstable and buggy...
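    A sketch of one possible shape for such a pump, blocking with a short timeout instead of spinning (illustrative only: the AutoResetEvent and flag are assumed additions, and whether the render kernels can signal such an event is project-specific):

        using System.Threading;
        using System.Windows.Forms;

        static AutoResetEvent sRenderRequest = new AutoResetEvent(false); // signaled by kernels
        static volatile bool sKernelsRunning = true;

        static void PumpLoop()
        {
            while (sKernelsRunning)
            {
                // Block up to 10 ms instead of busy-spinning; wakes immediately
                // when a kernel signals that a render pass is wanted.
                bool renderRequested = sRenderRequest.WaitOne(10);

                Application.DoEvents();   // keep the forms responsive

                if (renderRequested)
                {
                    // ... run shared-context services and release the barrier here ...
                }
            }
        }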

    Read the article

  • Server configuration problem: a PHP script just dies with no error log and no reason

    - by Roberto
    Hi (first of all, thanks for your attention, and sorry for my bad English). I don't think this is a programming error; I suspect it is an error in some configuration of the server or something else, but I don't know what.

    I have a PHP script that runs as a Linux process (it does not run in the web browser). It sends SMS via SMPP on port 2055 (using sockets in PHP) and then inserts about 10,000 rows into a MySQL database; the script gets its data from an XML file. At first it ran on a shared server (HostGator is our hosting provider) and in the beginning it worked fine, with no trouble. But 5 months later an error appeared: the process just dies with no reason. The script had only sent and inserted 700 rows into the table of the database, the process didn't show any warning or error, nothing appears in the error logs, and I hadn't made any change to the script.

    HostGator never helped us, so we decided to move the script from the shared server to a dedicated server. I thought it was a memory problem or something like that, but when we moved the script to the dedicated server the problem just got worse: the script dies when it has sent and inserted only 40 to 50 rows into the database.

    Some information about this error:

    - The shared server runs Red Hat 4.1.2-46 and the dedicated server runs CentOS 5.4.
    - I have commented out the line that sends the SMS, and the problem remains. On the shared server the script originally died after inserting about 700 rows; now it dies after about 2500 rows, which is better, although we didn't change anything. On the dedicated server the script still dies after inserting about 40 rows.
    - Before it dies, the script becomes a zombie process, and we don't know why.
    - Memory usage appears to be 0.3%, and CPU usage 0.7% to 1%.
    - I have raised the PHP memory limit to 128MB, and even to -1 (so PHP has no limit), but the problem remains.
    - We have a limit of 50 simultaneous MySQL connections, so I don't think that is the problem.
    - I'm using mysqli to connect from PHP to MySQL.
    - HostGator reports that they haven't made any change or update on the servers.

    What could the problem be? What should I do, and what should I search for? Is there something in the logic I'm missing? What steps should I follow when managing and debugging problems with processes on Linux? Thank you very much; I think this is not a programming problem, but you have more experience than me. Thanks!!! Bye!!! :)

    Read the article

  • ASP.NET application developed in 32 bit environment not working in 64 bit environment

    - by jgonchik
    We have developed an ASP.NET website on a Windows 7 32-bit platform using Visual Studio 2008. This website is hosted at a hosting company where we share a server with hundreds of other ASP.NET websites. We are in the process of changing our hosting to a dedicated Windows 2008 64-bit server. We have installed Visual Studio on this new server in order to debug our application. If we try to start the application on this new server using Visual Studio 2008's own web server (not IIS 7), we get the error below. We have tried to compile the application in both 32-bit and 64-bit mode, and also for "Any CPU", but nothing helps. We also tried running Visual Studio as an administrator, without success. We get the following error:

        Server Error in '/' Application.
        The specified module could not be found. (Exception from HRESULT: 0x8007007E)

        Description: An unhandled exception occurred during the execution of the current web
        request. Please review the stack trace for more information about the error and where
        it originated in the code.

        Exception Details: System.IO.FileNotFoundException: The specified module could not be
        found. (Exception from HRESULT: 0x8007007E)

        Source Error: An unhandled exception was generated during the execution of the current
        web request. Information regarding the origin and location of the exception can be
        identified using the exception stack trace below.

        Stack Trace:

        [FileNotFoundException: The specified module could not be found. (Exception from HRESULT: 0x8007007E)]
          System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +0
          System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +43
          System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +127
          System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +142
          System.Reflection.Assembly.Load(String assemblyString) +28
          System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +46

        [ConfigurationErrorsException: The specified module could not be found. (Exception from HRESULT: 0x8007007E)]
          System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +613
          System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() +203
          System.Web.Configuration.CompilationSection.LoadAssembly(AssemblyInfo ai) +105
          System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) +178
          System.Web.Compilation.BuildProvidersCompiler..ctor(VirtualPath configPath, Boolean supportLocalization, String outputAssemblyName) +54
          System.Web.Compilation.ApplicationBuildProvider.GetGlobalAsaxBuildResult(Boolean isPrecompiledApp) +232
          System.Web.Compilation.BuildManager.CompileGlobalAsax() +51
          System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +337

        [HttpException (0x80004005): The specified module could not be found. (Exception from HRESULT: 0x8007007E)]
          System.Web.Compilation.BuildManager.ReportTopLevelCompilationException() +58
          System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +512
          System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters) +729

        [HttpException (0x80004005): The specified module could not be found. (Exception from HRESULT: 0x8007007E)]
          System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +8897659
          System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +85
          System.Web.HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr) +259

    Does anyone know why this error appears and how to solve it?

    Read the article

  • Performance Optimization for Matrix Rotation

    - by Summer_More_More_Tea
    Hello everyone: I'm stuck on a performance-optimization lab in the book "Computer Systems: A Programmer's Perspective", described as follows. In an N*N matrix M, where N is a multiple of 32, the rotate operation can be represented as:

    - Transpose: interchange elements M(i,j) and M(j,i)
    - Exchange rows: row i is exchanged with row N-1-i

    An example of matrix rotation (N is 3 instead of 32 for simplicity):

        -------                    -------
        |1|2|3|                    |3|6|9|
        -------                    -------
        |4|5|6|   after rotate is  |2|5|8|
        -------                    -------
        |7|8|9|                    |1|4|7|
        -------                    -------

    A naive implementation is:

        #define RIDX(i,j,n) ((i)*(n)+(j))

        void naive_rotate(int dim, pixel *src, pixel *dst)
        {
            int i, j;
            for (i = 0; i < dim; i++)
                for (j = 0; j < dim; j++)
                    dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)];
        }

    I came up with an idea based on inner-loop unrolling. The results:

        Code Version     Speed Up
        original         1x
        unrolled by 2    1.33x
        unrolled by 4    1.33x
        unrolled by 8    1.55x
        unrolled by 16   1.67x
        unrolled by 32   1.61x

    I also found a code snippet on pastebin.com that seems to solve this problem:

        void rotate(int dim, pixel *src, pixel *dst)
        {
            int stride = 32;
            int count = dim >> 5;
            src += dim - 1;
            int a1 = count;
            do {
                int a2 = dim;
                do {
                    int a3 = stride;
                    do {
                        *dst++ = *src;
                        src += dim;
                    } while(--a3);
                    src -= dim * stride + 1;
                    dst += dim - stride;
                } while(--a2);
                src += dim * (stride + 1);
                dst -= dim * dim - stride;
            } while(--a1);
        }

    After carefully reading the code, I think the main idea of this solution is to treat each block of 32 rows as a data zone and perform the rotation zone by zone. The speed-up of this version is 1.85x, beating all of the loop-unrolled versions. Here are my questions:

    - In the inner-loop-unroll versions, why does the incremental gain shrink as the unrolling factor increases? In particular, going from 8 to 16 does not have the same effect as going from 4 to 8. Does the result have some relationship with the depth of the CPU pipeline? If so, could the degradation reflect the pipeline length?
    - What is the probable reason for the speed-up of the data-zone version? There seems to be no essential difference from the original naive version.

    EDIT: My test environment is an Intel Centrino Duo processor, and the gcc version is 4.4. Any advice will be highly appreciated! Kind regards!

    Read the article

  • cuda/thrust: Trying to sort_by_key 2.8GB of data in 6GB of gpu RAM throws bad_alloc

    - by Sven K
    I have just started using Thrust, and one of the biggest issues I have so far is that there seems to be no documentation about how much memory operations require. So I am not sure why the code below throws bad_alloc when trying to sort (before the sort I still have 50% of GPU memory available, and 70GB of RAM available on the CPU side); can anyone shed some light on this?

        #include <thrust/device_vector.h>
        #include <thrust/sort.h>
        #include <thrust/random.h>

        void initialize_data(thrust::device_vector<uint64_t>& data)
        {
            thrust::fill(data.begin(), data.end(), 10);
        }

        #define BUFFERS 3

        int main(void)
        {
            size_t N = 120 * 1024 * 1024;
            char line[256];
            try {
                std::cout << "device_vector" << std::endl;
                typedef thrust::device_vector<uint64_t> vec64_t;

                // Each buffer is 960MB (120M elements x 8 bytes)
                vec64_t c[3] = { vec64_t(N), vec64_t(N), vec64_t(N) };
                initialize_data(c[0]);
                initialize_data(c[1]);
                initialize_data(c[2]);

                std::cout << "initialize_data finished... Press enter";
                std::cin.getline(line, 0);

                // nvidia-smi reports 48% memory usage at this point
                // (2959MB of 6143MB)
                std::cout << "sort_by_key col 0" << std::endl;

                // throws bad_alloc
                thrust::sort_by_key(c[0].begin(), c[0].end(),
                    thrust::make_zip_iterator(thrust::make_tuple(c[1].begin(), c[2].begin())));

                std::cout << "sort_by_key col 1" << std::endl;
                thrust::sort_by_key(c[1].begin(), c[1].end(),
                    thrust::make_zip_iterator(thrust::make_tuple(c[0].begin(), c[2].begin())));
            } catch (thrust::system_error &e) {
                std::cerr << "Error: " << e.what() << std::endl;
                exit(-1);
            }
            return 0;
        }

    Read the article

  • Why is cell phone software still so primitive?

    - by Tomislav Nakic-Alfirevic
    I don't do mobile development, but it strikes me as odd that features like these aren't available by default on most phones:

    - full-text search: searches all address book contents and messages; anything else is a plus
    - better call management: e.g. a rotating audio call log, meaning you always have the last N calls recorded for your listening pleasure later (your little girl just said her first "da-da" while you were on a business trip, you had a telephone job interview, you received complex instructions to do something, etc.)
    - bluetooth remote control (like e.g. anyRemote, but available by default on a bluetooth phone)
    - multitasking capabilities worth mentioning
    - and in general, e.g. weekly software updates, making the phone much more usable (even if they had to be done over USB rather than over the network)

    I'm sure I was dumbfounded by the lack or design of other features as well, but they don't come to mind right now. To clarify, I'm not talking about smartphones here: my plain, 2-year-old phone has a CPU an order of magnitude faster than my first PC and about as much storage space, and it's ridiculous how bad (slow, unwieldy) the software is; and it's not just one phone or one manufacturer. What keeps the (to me) obvious software functionality vacuum on a capable hardware platform from being filled?

    Edit: I believe a clarification on the multitasking point might be beneficial. I'll use my phone as an example, although the point is much more general. The phone can multitask and in fact does: you can listen to music and do something else at the same time. On the other hand, the way the software has been designed makes multitasking next to useless. (Ditto with the external touch screen: it can take touch commands, but only one application makes use of it, and only with 3 commands.) To take the multitasking example to the extreme: if I plug my phone into my laptop and it registers as an external disk, it doesn't allow any kind of operation. Messages, calling, calendar, everything is out of reach, although I can receive a call. No "battery life" issue there: it's charging while connected. BTW, another example of design below the current state of the art: I don't see a phone on the horizon which will remember where in an audio or video file you were when you stopped listening/watching last time (podcasts are a good use case). Simplistic rewind/fast-forward functionality only aggravates the problem.

    Read the article

  • WPF Application Slow and Unresponsive when demonstrating using remote sharing software

    - by Kev
    After spending 14 hours on this, I think it's time to share my woes and see if anyone has experienced this issue before. I'll describe the issue and the tests I have done to rule certain things out.

    I have a WPF application which loads data from an SQL database. I am using DevExpress components for data grids, ribbons, etc., and Fluent NHibernate to provide a session for database operations. I am also using log4net to log events to a text file. Using the application on my laptop with SQL Express 2008 works fine: the application starts up, retrieves 1000 records, and I can tab through the controls on the ribbon.

    Now, I decided to demo the application to a third party and used remote login/sharing software online to share my desktop with the other person, so that I could run the application on my laptop while they watched. During the sharing session the application took approx. 45 seconds to load (30 seconds with a blank database), whereas when I'm not sharing my screen it loads in about 7-10 seconds. On top of that, the controls in the application were very sticky, slow, and unresponsive during the demo. During the sharing session, however, I was able to use other applications without any problems; everything else worked fine. I cannot understand how my application works fine under normal conditions, even while browsing the net at the same time, etc., but totally fails to perform correctly while I am sharing a session with another user. CPU usage also shot up to 100% at times while the application was trying to start up.

    Please see below a list of the 3rd-party DLLs I am referencing in my project:

        DevExpress dlls
        FluidKit
        PixelLab.WPF
        PixelLab.Common
        Galasoft WPF Kit
        FluentNHibernate
        NHibernate
        NHibernate.ByteCode.Castle
        Skype4ComLib
        TXTEXTControl
        log4net
        LinqKit

    All of these DLLs are in the output folder along with the application DLLs created from the class assemblies in the project, so when installed via an installer on a machine the DLLs will be in the same application folder as the application itself. Many thanks.

    Read the article

  • Can GPU capabilities impact virtual machine performance?

    - by Dave White
    While this may not seem like a programming question directly, it impacts my development activities, so it seems like it belongs here.

    It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach.

    I recently purchased a new Dell E6510 to travel around with. It has an i7 620M (dual-core, hyper-threaded CPU running at 2.66GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on. Getting the laptop, though, I've been pretty disappointed with the user experience of developing in a virtual machine. Giving the virtual machine 4 GB of memory, it was slow: I could type complete sentences and watch the VM "catch up".

    My company has training laptops that we provide for our classes. They are Dell Precision M6400s with an Intel Core 2 Duo P8700 running at 2.54GHz and 8 GB of memory, and the experience on these laptops is night and day compared to the E6510. They are crisp, and you are barely aware that you are running in a virtual environment.

    Since the E6510 should be faster in all categories than the M6400, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison, and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 has an nVidia FX 2700M GPU and the E6510 an nVidia 3100M GPU. Benchmarks of the two GPUs suggest that the FX 2700M is twice as fast as the 3100M:

        http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html
        3100M          = 111th (E6510)
        FX 2700M       = 47th  (Precision M6400)
        Radeon HD 5870 = 8th   (Alienware)

    The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed on the guest. The IDE used in the virtual environment is VS 2010 Premium.

    So after that long setup, my question is: is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops on which we expect to run virtualized development environments? Thanks in advance. Cheers, Dave

    Read the article

  • Thread.CurrentThread.CurrentUICulture not working consistently

    - by xTRUMANx
    I've been working on a pet project on weekends to learn more about C# and have encountered an odd problem when working with localization. To be more specific, the problem is with System.Threading.Thread.CurrentThread.CurrentUICulture.

    I've set up my app so that the user can quickly change the language of the app by clicking a menu item. The menu item, in turn, saves the two-letter code for the language (e.g. "en", "fr", etc.) in a user setting called 'Language' and then restarts the application:

        Properties.Settings.Default.Language = "en";
        Properties.Settings.Default.Save();
        Application.Restart();

    When the application starts up, the first line of code in the form's constructor (even before InitializeComponent()) fetches the language string from the settings and sets the CurrentUICulture, like so:

        public Form1()
        {
            Thread.CurrentThread.CurrentUICulture = new CultureInfo(Properties.Settings.Default.Language);
            InitializeComponent();
        }

    The thing is, this doesn't work consistently. Sometimes all works well and the application loads the correct language based on the string saved in the settings file. Other times it doesn't, and the language remains the same after the application is restarted.

    At first I thought that I hadn't saved the language before restarting the application, but that is definitely not the case. When the correct language fails to load, if I close the application and run it again, the correct language comes up. So the Language string has been saved, but the CurrentUICulture assignment in my form constructor sometimes has no effect. Any help? Is there something I'm missing about how threading works in C#? This could be machine-specific; if it makes any difference, I'm using a Pentium Dual-Core CPU.

    UPDATE: Vlad asked me to check what the current thread's CurrentUICulture is. So I added a MessageBox to my constructor to tell me the CurrentUICulture two-letter code as well as the value of my Language user string:

        MessageBox.Show(string.Format("Current Language: {0}\nCurrent UI Culture: {1}",
            Properties.Settings.Default.Language,
            Thread.CurrentThread.CurrentUICulture.TwoLetterISOLanguageName));

    When the wrong language is loaded, both the Language string and the CurrentUICulture hold the wrong language. So I guess the CurrentUICulture has been set correctly and my problem is actually with the Language setting: my application sometimes loads the previously saved language string rather than the last saved one. If the app is then restarted, it loads the actually saved language string.
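    For reference, a common pattern is to apply the culture once in Program.Main, before any form is constructed, so every resource lookup sees the same culture. A minimal sketch (illustrative; it assumes the same Properties.Settings.Default.Language setting as in the post):

        using System.Globalization;
        using System.Threading;
        using System.Windows.Forms;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                // Apply the saved language before any UI is created.
                Thread.CurrentThread.CurrentUICulture =
                    new CultureInfo(Properties.Settings.Default.Language);

                Application.EnableVisualStyles();
                Application.Run(new Form1());
            }
        }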

    Read the article

  • Another C datatypes question

    - by b-gen-jack-o-neill
    Hello. Well, I completely get the most basic data types of C, like short, int, long and float; to be exact, all the numerical types. These types need to be known so the right operations are performed on the right numbers, for example using the FPU to add two float numbers. So the compiler must know what the type is.

    But when it comes to characters I am a little bit lost. I know that the basic C data type char is there for ASCII character coding. But what I don't know is why you even need another data type for characters. Why could you not just use a 1-byte integer value to store an ASCII character? If you call printf, you specify the data type in the call, so you could tell printf that the integer represents an ASCII character. I don't know how cout resolves the data type, but I guess you could just specify it somehow.

    Another thing: when you want to use Unicode, you must use the data type wchar. But what if I would like to use some other coding, for example ISO or a Windows code page, instead of UTF? Because wchar codes characters as UTF-16 or UTF-32 (I read it's compiler-specific). And what if I wanted to use, for example, some imaginary new 8-byte text coding? What data type should I use for it? I am actually pretty confused by this, because I always expected that if I want to use UTF-32 instead of ASCII, I could just tell the compiler "get the UTF-32 value of the character I typed and save it into a field of 4 chars". I thought that text coding was to be dealt with at the end, by the print function for example, and that I just need to specify the coding for the compiler to use. Since Windows doesn't use ASCII in Win32 apps, I guess the C compiler must convert the char I typed to ASCII from whatever the encoding is that Windows sends to the editor.

    And the last thing: what if I want to use, for example, a 25-byte integer for some high-precision math? C has no define-it-yourself data type. Yes, I know this would be difficult, since all the math operations would need to change, because the CPU cannot add 25-byte numbers together. But is there a way to do it? Or is there some math library for it? What if I want to compute pi to 1000000000000000 digits? :)

    I know my question is pretty long, but I just wanted to explain my thoughts as best I can in English, since it's not my native language. And I believe there is a simple answer to my question(s), something I missed that explains everything. I have read a lot about text coding and many C tutorials, but nothing about this. Thank you for your time.
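    On the last point: arbitrary-precision arithmetic is normally done in software by a bignum library rather than a custom CPU type; the library splits each operation into CPU-sized chunks internally. A minimal illustration in C# using System.Numerics.BigInteger (C# here only because it ships a built-in bignum type; a C programmer would typically reach for a library such as GMP):

        using System;
        using System.Numerics;   // .NET 4+; reference System.Numerics.dll

        class BigNumDemo
        {
            static void Main()
            {
                // A 25-byte (200-bit) integer is no problem in software:
                BigInteger big = BigInteger.Pow(2, 200) + 12345;

                Console.WriteLine(big);        // exact value, no overflow
                Console.WriteLine(big * big);  // still exact at 400 bits
            }
        }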

    Read the article

  • Excel UDF calculation should return 'original' value

    - by LeChe
    Hi all, I've been struggling with a VBA problem for a while now and I'll try to explain it as thoroughly as possible.

    I have created a VSTO plugin with my own RTD implementation that I am calling from my Excel sheets. To avoid having to use the full-fledged RTD syntax in the cells, I have created a UDF that hides that API from the sheet. The RTD server I created can be enabled and disabled through a button in a custom ribbon component. The behavior I want to achieve is as follows:

    - If the server is disabled and a reference to my function is entered in a cell, I want the cell to display Disabled.
    - If the server is disabled, but the function had been entered in a cell while the server was enabled (and the cell thus displays a value), I want the cell to keep displaying that value.
    - If the server is enabled, I want the cell to display Loading.

    Sounds easy enough. Here is an example of the (non-functional) code:

        Public Function RetrieveData(id As Long)
            Dim result As String
            ' This returns either 'Disabled' or 'Loading'
            result = Application.WorksheetFunction.RTD("SERVERNAME", "", id)
            RetrieveData = result
            If result = "Disabled" Then
                ' Obviously, this recurses (and fails), so that's not an option
                If Not IsEmpty(Application.Caller.Value2) Then
                    ' So does this
                    RetrieveData = Application.Caller.Value2
                End If
            End If
        End Function

    The function will be called in thousands of cells, so storing the 'original' values in another data structure would be a major overhead, and I would like to avoid it. The RTD server does not know the values either, since it keeps no history of them, more or less for the same reason. I was thinking there might be some way to exit the function that would force it not to change the displayed value, but so far I have been unable to find anything like that. Any ideas on how to solve this are greatly appreciated! Thanks, Che

    EDIT: By popular demand, some additional info on why I want to do all this. As I said, the function will be called in thousands of cells and the RTD server needs to retrieve quite a bit of information. This can be quite hard on both network and CPU. To allow the user to decide for himself whether he wants this load on his machine, he or she can disable the updates from the server. In that case, he or she should still be able to calculate the sheets with the values currently in the fields, with no updates pushed into them. Once new data is required, the server can be enabled and the fields will be updated. Again, since we are talking about quite a bit of data here, I would rather not store it somewhere in the sheet. Plus, the data should be usable even if the workbook is closed and loaded again.

    Read the article

  • response.redirect to classic asp failing {Unable to evaluate expression because the code is optimized}

    - by jeff
    I have the code pasted below. For some reason the Response.Redirect seems to be failing: it maxes out the CPU on my server and just doesn't do anything. The .NET code uploads the file fine, but does not redirect to the ASP page that does the processing. I know it's absolute rubbish to have .NET code redirecting to classic ASP; it is a legacy app. I have tried putting false or true, etc. at the end of the redirect, as I have read other people have had issues with this. Please help, as it's driving me insane! It's so strange: it runs locally on my machine but won't run on my server! I am getting the following error when I debug remotely:

        {Unable to evaluate expression because the code is optimized or a native frame is on top of the call stack.}

    (UPDATED) After debugging remotely and taking the redirect out of the try/catch, I have found that the redirect tries to go to the correct location, but after it leaves the redirect it just seems to get lost (almost as if it can't navigate away from the cobra_import project) back up a level to COBRA/pages. Why is this? It has worked previously!

        public void btnUploadTheFile_Click(object Source, EventArgs evArgs)
        {
            // need to check that the uploaded file is an xls file.
            string strFileNameOnServer = "PJI3.txt";
            string strBaseLocation = ConfigurationSettings.AppSettings["str_file_location"];

            if ("" == strFileNameOnServer)
            {
                txtOutput.InnerHtml = "Error - a file name must be specified.";
                return;
            }

            if (null != uplTheFile.PostedFile)
            {
                try
                {
                    uplTheFile.PostedFile.SaveAs(strBaseLocation + strFileNameOnServer);
                    txtOutput.InnerHtml = "File <b>" + strBaseLocation + strFileNameOnServer + "</b> uploaded successfully";
                    Response.Redirect("/COBRA/pages/sap_import_pji3_prc.asp");
                }
                catch (Exception e)
                {
                    txtOutput.InnerHtml = "Error saving <b>" + strBaseLocation + strFileNameOnServer + "</b><br>" + e.ToString();
                }
            }
        }
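    For reference, one commonly suggested pattern for redirects inside try/catch is to skip Response.End, which raises a ThreadAbortException that the catch block can swallow, and instead complete the request explicitly. A sketch, not a confirmed fix for this particular case:

        using System.Web;

        static class RedirectHelper
        {
            // Call from inside the try block in place of the plain Redirect.
            public static void RedirectWithoutAbort(HttpResponse response, string url)
            {
                // endResponse: false avoids Response.End and its ThreadAbortException.
                response.Redirect(url, false);
                HttpContext.Current.ApplicationInstance.CompleteRequest();
            }
        }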

    Read the article

  • How to do the processing and keep GUI refreshed using databinding?

    - by macias
    History of the problem: this is a continuation of my previous question, "How to start a thread to keep GUI refreshed?", but since Jon shed new light on the problem, I would have had to completely rewrite the original question, which would have made that topic unreadable. So: a new, very specific question.

    The problem. Two pieces:

    - CPU-hungry, heavy-weight processing as a library (back-end)
    - WPF GUI with data binding which serves as a monitor for the processing (front-end)

    Current situation: the library sends so many notifications about data changes that, despite working within its own thread, it completely jams the WPF data-binding mechanism. As a result, not only does monitoring the data not work (it is not refreshed), but the entire GUI freezes while the data is being processed.

    The aim: a well-designed, polished way to keep the GUI up to date. I am not saying it should display the data immediately (it can even skip some changes), but it cannot freeze while doing the computation.

    Example. This is a simplified example, but it shows the problem. XAML part:

        <StackPanel Orientation="Vertical">
            <Button Click="Button_Click">Start</Button>
            <TextBlock Text="{Binding Path=Counter}"/>
        </StackPanel>

    C# part (please NOTE this is one piece of code, but there are two sections to it):

        public partial class MainWindow : Window, INotifyPropertyChanged
        {
            // GUI part
            public MainWindow()
            {
                InitializeComponent();
                DataContext = this;
            }

            private void Button_Click(object sender, RoutedEventArgs e)
            {
                var thread = new Thread(doProcessing);
                thread.IsBackground = true;
                thread.Start();
            }

            // this is the non-GUI part -- do not mess with the GUI here
            public event PropertyChangedEventHandler PropertyChanged;

            public void OnPropertyChanged(string property_name)
            {
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs(property_name));
            }

            long counter;
            public long Counter
            {
                get { return counter; }
                set
                {
                    if (counter != value)
                    {
                        counter = value;
                        OnPropertyChanged("Counter");
                    }
                }
            }

            void doProcessing()
            {
                var tmp = 10000.0;
                for (Counter = 0; Counter < 10000000; ++Counter)
                {
                    if (Counter % 2 == 0)
                        tmp = Math.Sqrt(tmp);
                    else
                        tmp = Math.Pow(tmp, 2.0);
                }
            }
        }

    Known workarounds (please do not repost them as answers). These two are based on Jon's ideas:

    - Pass the GUI dispatcher to the library and use it for sending notifications. Why is it ugly? Because there could be no GUI at all.
    - Give up on data binding COMPLETELY (one widget with data binding is enough for the jamming) and instead check the data from time to time and update the GUI manually. Well, I didn't learn WPF just to give up on it now ;-)

    And this one is mine; it is ugly, but its simplicity kills:

    - Before sending a notification, pause the thread -- Thread.Sleep(1) -- to let the potential receiver "breathe". It works, it is minimalistic, it is ugly though, and it ALWAYS slows down the computation, even if no GUI is there.

    So... I am all ears for real solutions, not just tricks.

    Read the article

  • Memory corruption in System.Move due to changed 8087CW mode (png + stretchblt)

    - by André Mussche
    I have a strange memory corruption problem. After many hours of debugging and trying, I think I have found something. For example, I do a simple string assignment:

        sTest := 'SET LOCK_TIMEOUT ';

    However, the result sometimes becomes:

        sTest = 'SET LOCK'#0'TIMEOUT '

    So the _ gets replaced by a 0 byte. I have seen this happen once (reproducing is tricky, as it depends on timing) in the System.Move function, when it uses the FPU stack (fild, fistp) for fast memory copies (in the case of 9 to 32 bytes to move):

        ...
        @@SmallMove: {9..32 Byte Move}
          fild  qword ptr [eax+ecx] {Load Last 8}
          fild  qword ptr [eax]     {Load First 8}
          cmp   ecx, 8
          jle   @@Small16
          fild  qword ptr [eax+8]   {Load Second 8}
          cmp   ecx, 16
          jle   @@Small24
          fild  qword ptr [eax+16]  {Load Third 8}
          fistp qword ptr [edx+16]  {Save Third 8}
        ...

    Using the FPU view and two memory debug views (Delphi - View - Debug - CPU - Memory) I saw it go wrong... once... I could not reproduce it, however.

    This morning I read something about the 8087CW mode, and yes, if it is changed to $27F I get memory corruption! Normally it is $133F. The difference between $133F and $027F is that $027F sets the FPU up for less precise calculations (limiting to Double instead of Extended) and different infinity handling (which was used for older FPUs, but is not used any more).

    Okay, now I have found why, but not when! I extended my AsmProfiler with a simple check (so all functions are checked on enter and leave):

        if Get8087CW = $27F then  // normally $1372?
          if MainThreadID = GetCurrentThreadId then  // only check the main thread
            DebugBreak;

    I "profiled" some units and DLLs and bingo (see the stack):

        Windows.StretchBlt(3372289943,0,0,514,345,4211154027,0,0,514,345,13369376)
        pngimage.TPNGObject.DrawPartialTrans(4211154027,(0, 0, 514, 345, (0, 0), (514, 345)))
        pngimage.TPNGObject.Draw($7FF62450,(0, 0, 514, 345, (0, 0), (514, 345)))
        Graphics.TCanvas.StretchDraw((0, 0, 514, 345, (0, 0), (514, 345)),$7FECF3D0)
        ExtCtrls.TImage.Paint
        Controls.TGraphicControl.WMPaint((15, 4211154027, 0, 0))

    So it is happening in StretchBlt... What to do now? Is it a fault of Windows, or a bug in PNG (included in D2007)? Or is the System.Move function not fail-safe?

    Read the article

  • Implementing coroutines in Java

    - by JUST MY correct OPINION
    This question is related to my question on existing coroutine implementations in Java. If, as I suspect, it turns out that there is no full implementation of coroutines currently available in Java, what would be required to implement them? As I said in that question, I know about the following:

    - You can implement "coroutines" as threads/thread pools behind the scenes.
    - You can do tricksy things with JVM bytecode behind the scenes to make coroutines possible.
    - The so-called "Da Vinci Machine" JVM implementation has primitives that make coroutines doable without bytecode manipulation.
    - There are also various JNI-based approaches to coroutines possible.

    I'll address each one's deficiencies in turn.

    Thread-based coroutines. This "solution" is pathological. The whole point of coroutines is to avoid the overhead of threading, locking, kernel scheduling, etc. Coroutines are supposed to be light and fast and to execute only in user space. Implementing them in terms of full-tilt threads with tight restrictions gets rid of all the advantages.

    JVM bytecode manipulation. This solution is more practical, albeit a bit difficult to pull off. This is roughly the same as jumping down into assembly language for coroutine libraries in C (which is how many of them work), with the advantage that you have only one architecture to worry about and get right. It also ties you down to running your code only on fully compliant JVM stacks (which means, for example, no Android) unless you can find a way to do the same thing on the non-compliant stack. If you do find a way to do this, however, you have now doubled your system complexity and testing needs.

    The Da Vinci Machine. The Da Vinci Machine is cool for experimentation, but since it is not a standard JVM its features aren't going to be available everywhere. Indeed, I suspect most production environments would specifically forbid the use of the Da Vinci Machine. Thus I could use it for cool experiments but not for any code I expect to release to the real world. This also has a problem similar to the JVM bytecode manipulation solution above: it won't work on alternative stacks (like Android's).

    JNI implementation. This solution renders the point of doing this in Java at all moot. Each combination of CPU and operating system requires independent testing, and each is a point of potentially frustrating subtle failure. Alternatively, of course, I could tie myself down to one platform entirely, but this, too, makes the point of doing things in Java entirely moot.

    So... is there any way to implement coroutines in Java without using one of these four techniques? Or will I be forced to use the one of the four that smells the least (JVM manipulation) instead?

    Read the article

  • What is the fastest way to convert bool to byte?

    - by Amir Rezaei
    What is the fastest way to convert a bool to a byte? I want this mapping: False = 0, True = 1. Note: I don't want to use any if statement.

    Update: I don't want to use a conditional statement. I don't want the CPU to stall or mispredict the next instruction. I want to optimize this code:

        private static string ByteArrayToHex(byte[] barray)
        {
            char[] c = new char[barray.Length * 2];
            byte k;
            for (int i = 0; i < barray.Length; ++i)
            {
                k = (byte)(barray[i] >> 4);
                c[i * 2] = (char)(k > 9 ? k + 0x37 : k + 0x30);
                k = (byte)(barray[i] & 0xF);
                c[i * 2 + 1] = (char)(k > 9 ? k + 0x37 : k + 0x30);
            }
            return new string(c);
        }

    Update: The length of the array is very large; it's on the order of terabytes! Therefore I need to optimize where possible. I shouldn't need to explain myself; the question is still valid.

    Update: I'm working on a project and looking at other people's code. That's why I didn't provide the function in the first place; I didn't want to spend time explaining it when people have opinions about the code. I shouldn't need to provide the background of my work in my question, or a function that was not written by me. I have started to optimize it part by part. If I needed help with the whole function, I would ask that in another question. That is why I asked this very simply at the beginning. Unfortunately, people couldn't keep to the question. So please, if you want to help, answer the question.

    Update: For those who want to see the point of this question, this example shows how two if statements could be removed from the code:

        byte A = k > 9;  // if only (k > 9) evaluated to 0 or 1
        c[i * 2] = A * (k + 0x37) - (A - 1) * (k + 0x30);
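    A branch-free rewrite of the hex loop is also possible without any bool-to-byte conversion at all, via a nibble lookup table. A sketch producing the same uppercase hex output (illustrative, not from the original post):

        using System;

        static class HexFast
        {
            // One indexed lookup per nibble replaces the conditional entirely.
            private static readonly char[] HexDigits = "0123456789ABCDEF".ToCharArray();

            public static string ByteArrayToHex(byte[] barray)
            {
                var c = new char[barray.Length * 2];
                for (int i = 0; i < barray.Length; ++i)
                {
                    c[i * 2] = HexDigits[barray[i] >> 4];
                    c[i * 2 + 1] = HexDigits[barray[i] & 0xF];
                }
                return new string(c);
            }
        }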

    Read the article

  • C# DLL Deployed in COM+. Error while accessing the methods.

    - by Dakshinamurthy
    I have a C# DLL (ABService) deployed in COM+, and my OS is Windows 2008. I have given a strong name to this DLL and its dependent DLLs. When I access the methods of this DLL through localhost, or if I add a reference to it in the client project, the methods execute successfully. In short, if I access the DLL from the same machine via a reference, it works, so I don't think there is a problem with the way I deployed it in COM+. I suspect the problem lies in the OS and Visual Studio 2008 combination. I have built all the DLLs with Visual Studio 2008 with target CPU x86 and target framework 2.0. Below are the code variants I have tried and the errors they produce. I need to create the object for the DLL on the server machine (64-bit) and access its methods from the client (32-bit).

    Code:

        Type svr = Type.GetTypeFromProgID("ABService.Service", strserver1url[2], false);
        ABService.Service service1 = (ABService.Service)Activator.CreateInstance(svr);
        strresult = service1.ExecuteService(orequest.xml);

    Error:

        {"Retrieving the COM class factory for remote component with CLSID
        {77BF00E0-41AC-3967-9E72-A4927CC0B880} from machine 10.105.138.64 failed due to the
        following error: 80040154."}

    Code:

        Type svr = Type.GetTypeFromProgID("ABService.Service", strserver1url[2], true);
        object Service1 = null;
        Service1 = (ABService.Service)Activator.CreateInstance(svr, true);
        strresult = Convert.ToString(ReflectionHelper.Invoke(Service1, "ExecuteService", new object[] { orequest.xml }));
        Service1 = null;

    Error:

        Retrieving the COM class factory for remote component with CLSID
        {77BF00E0-41AC-3967-9E72-A4927CC0B880} from machine ftpsite failed due to the
        following error: 80040154.

    With the code below, if I put a VB DLL in COM+ instead of the C# DLL, the method executes successfully.

    Code:

        ords = new RDS.DataSpace();
        ords.InternetTimeout = 600000;
        object M_Service = null;
        ABService.Service oabservice = null;
        M_Service = ords.CreateObject("ABService.Service", url);
        strresult = Convert.ToString(ReflectionHelper.Invoke(M_Service, "ExecuteService", new object[] { orequest.xml }));

    Error:

        {"Object doesn't support this property or method 'ExecuteService'"}

    Code:

        object obj = Interaction.CreateObject("ABService.Service", "10.105.138.64");
        strresult = Convert.ToString(ReflectionHelper.Invoke(obj, "ExecuteService", new object[] { orequest.xml }));

    Error:

        {"Cannot create ActiveX component."}

    Code:

        object obj = Activator.GetObject(typeof(ABService.Service), @"http://10.105.138.64:80/ABANET");
        strresult = Convert.ToString(ReflectionHelper.Invoke(obj, "ExecuteService", new object[] { orequest.xml }));

    Error:

        InnerException: {"The remote server returned an error: (405) Method Not Allowed."}
        (System.Net.WebException)
        Message: "Exception has been thrown by the target of an invocation."

    Read the article

  • MySQL - Calculating fields on the fly vs storing calculated data

    - by Christian Varga
    Hi everyone, I apologise if this has been asked before, but I can't seem to find an answer to a question I have about calculating values on the fly vs. storing calculated data in a database. I've read a few articles that suggested it is preferable to calculate when you can, but I would like to know if that still applies to the following two examples.

    Example 1: say you are storing data relating to a car. You store the fuel tank size in litres and how many litres it uses per 100km. You also want to know how many kilometres it can travel, which can be calculated from the tank size and economy. I see two ways of doing this:

    1. When a car is added or updated, calculate the range and store it as a static field in the database.
    2. Every time a car is accessed, calculate the range on the fly.

    Because the car's economy and tank size don't change (although they could be edited), the range is a pretty static value. I don't see why we would calculate it every single time the car is accessed. Wouldn't this waste CPU time, as opposed to simply storing it in a separate field and recalculating only when a car is added or updated?

    My next example is almost an entirely different question, but on the same topic: it relates to counting children. Let's say we have an app which has categories and items. We have a view where we display all the categories and a count of all the items inside each category. Again, I'm wondering what's better: performing a MySQL query to count all the items in each category every single time the page is accessed, or storing the count in a field in the categories table and updating it when an item is added or deleted?

    I know it is redundant to store anything that can be calculated, but I worry that calculating fields or counting records might be slow compared to storing the data in a field. If it's not, then please let me know; I just want to learn when to use either method. On a small scale I guess it wouldn't matter either way, but for apps like Facebook: would they really count the number of friends you have every time someone views your profile, or would they just store it as a field?

    I'd appreciate any responses to both of these scenarios, and any resources that might explain the benefits of calculating vs. storing. Thanks in advance, Christian

    Read the article

  • Is it Bad Practice to use C++ only for the STL containers?

    - by gmatt
    First, a little background... In what follows, I use C, C++ and Java for coding (general) algorithms: not GUIs and fancy programs with interfaces, but simple command-line algorithms and libraries.

    I started out learning about programming in Java. I got pretty good with Java and I learned to use the Java containers a lot, as they tend to reduce the complexity of bookkeeping while guaranteeing great performance. I used C++ intermittently, but I was definitely not as good with it as with Java, and it felt cumbersome. I did not know C++ well enough to work in it without having to look up every single function, so I quickly reverted to sticking with Java as much as possible.

    I then made a sudden transition into cracking and hacking in assembly language, because I felt I was concentrating too much attention on a much too high-level language and needed more experience with how a CPU interacts with memory and what is really going on with the 1's and 0's. I have to admit this was one of the most educational and fun experiences I've had with computers to date. For obvious reasons, I could not use assembly language to code on a daily basis; it was mostly reserved for fun diversions.

    After learning more about the computer through this experience, I realized that C++ is much closer to the "level of 1's and 0's" than Java is, but I still felt it to be incredibly obtuse, like a Swiss Army knife with far too many gizmos to do any one task with elegance. I decided to give plain vanilla C a try, and I quickly fell in love. It was a happy medium between simplicity and enough "micromanagement" to not abstract away what is really going on. However, I did miss one thing about Java: the containers. In particular, a simple container (like the STL vector) that expands dynamically in size is incredibly useful, but quite a pain to have to implement in C every time. Hence my code currently looks almost entirely like C, with containers from C++ thrown in, the only feature I use from C++.

    I'd like to know if it is considered okay in practice to use just one feature of C++ and ignore the rest in favor of C-style code.

    Read the article

  • C++ DLL creation for C# project - No functions exported

    - by Yeti
    I am working on a project that requires some image processing. The front end of the program is C# (because the guys thought it would be a lot simpler to build the UI in it). However, as the image processing part needs a lot of CPU juice, I am writing that part in C++. The idea is to link it into the C# project and just call a function from a DLL to do the image processing, then let the C# side process the data afterwards.

    Now, the only problem is that it seems I am not able to build the DLL. Simply put, the compiler refuses to put any function into the DLL that I compile. Because the project requires testing during development, I created two projects in one C++ solution: one for the DLL and another console application. The console project holds all the files, and I just include the corresponding header in my DLL project file. I thought the compiler would take the functions that I marked for export and build the DLL from them. Nevertheless, this does not happen.

    Here is how I declared the functions in the header:

        extern "C" __declspec(dllexport) void _stdcall RobotData(BYTE* buf,
            int** pToNewBackgroundImage, int* pToBackgroundImage, bool InitFlag,
            ObjectInformation* robot1, ObjectInformation* robot2,
            ObjectInformation* robot3, ObjectInformation* robot4,
            ObjectInformation* puck);

        extern "C" __declspec(dllexport) CvPoint _stdcall RefPointFinder(IplImage* imgInput,
            CvRect& imgROI, CvScalar& refHSVColorLow, CvScalar& refHSVColorHi);

    Followed by the implementation in the cpp file:

        extern "C" __declspec(dllexport) CvPoint _stdcall RefPointFinder(IplImage* imgInput,
            CvRect& imgROI, CvScalar& refHSVColorLow, CvScalar& refHSVColorHi)
        {
            // ...
            return cvPoint((int)(M10 / M00) + imgROI.x, (int)(M01 / M00) + imgROI.y);
        }

        extern "C" __declspec(dllexport) void _stdcall RobotData(BYTE* buf,
            int** pToNewBackgroundImage, int* pToBackgroundImage, bool InitFlag,
            ObjectInformation* robot1, ObjectInformation* robot2,
            ObjectInformation* robot3, ObjectInformation* robot4,
            ObjectInformation* puck)
        {
            // ...
        }

    And my main file for the DLL project looks like:

        #ifdef _MANAGED
        #pragma managed(push, off)
        #endif

        /// <summary> Include files. </summary>
        #include "..\ImageProcessingDebug\ImageProcessingTest.h"
        #include "..\ImageProcessingDebug\ImageProcessing.h"

        BOOL APIENTRY DllMain(HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved)
        {
            return TRUE;
        }

        #ifdef _MANAGED
        #pragma managed(pop)
        #endif

    Needless to say, it does not work. A quick look with DLL Export Viewer 1.36 reveals that no functions are in the library. I don't get it. What am I doing wrong?

    As a side note, I am using C++ objects such as vector (and here lies the C++ part of the DLL), but only internally; they do not appear in the headers of either function, as you can observe from the previous code snippets. Any ideas? Thx, Bernat
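    For the C# side, consuming such an export would look roughly like the P/Invoke sketch below (the DLL name and the mirrored struct layouts are assumptions for illustration; the real OpenCV types would need matching C# declarations, and IplImage is passed as an opaque pointer here):

        using System;
        using System.Runtime.InteropServices;

        static class ImageProcessingNative
        {
            // Mirrors OpenCV's CvPoint (two ints) as a blittable struct.
            [StructLayout(LayoutKind.Sequential)]
            public struct CvPoint { public int x; public int y; }

            [StructLayout(LayoutKind.Sequential)]
            public struct CvRect { public int x, y, width, height; }

            // CvScalar is four doubles in OpenCV.
            [StructLayout(LayoutKind.Sequential)]
            public struct CvScalar { public double v0, v1, v2, v3; }

            // Matches: extern "C" __declspec(dllexport) CvPoint __stdcall RefPointFinder(...)
            [DllImport("ImageProcessing.dll", CallingConvention = CallingConvention.StdCall)]
            public static extern CvPoint RefPointFinder(IntPtr imgInput, ref CvRect imgROI,
                                                        ref CvScalar refHSVColorLow,
                                                        ref CvScalar refHSVColorHi);
        }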

    Read the article
