Search Results

Search found 7213 results on 289 pages for 'multi processor'.


  • Compiler optimization causing performance to slow down

    - by aJ
    I have one strange problem with the following piece of code:

        template<class index, class policy>
        inline int CBase<index, policy>::func(const A& test_in, int* srcPtr, int* dstPtr)
        {
            int width  = test_in.width();
            int height = test_in.height();
            double d = 0.0; // here is the problem
            for (int y = 0; y < height; y++)
            {
                // Pointer initializations
                // multiplication involving y
                // ex: int z = someBigNumber*y + someOtherBigNumber;
                for (int x = 0; x < width; x++)
                {
                    // multiplication involving x
                    // ex: int z = someBigNumber*x + someOtherBigNumber;
                    if (someCondition)
                    {
                        // floating point calculations
                    }
                    *dstPtr++ = array[*srcPtr++];
                }
            }
        }

    The inner loop gets executed nearly 200,000 times, and the entire function takes 100 ms to complete (profiled using AQTimer). I found an unused variable, double d = 0.0;, outside the outer loop and removed it. After this change, the method suddenly takes 500 ms for the same number of executions (5 times slower). This behavior is reproducible on different machines with different processor types (Core2 and dual-core processors). I am using the VC6 compiler with optimization level O2. The other compiler options used are:

        -MD -O2 -Z7 -GR -GX -G5 -X -GF -EHa

    I suspected compiler optimizations and removed /O2. After that, the function became normal and takes 100 ms like the old code. Could anyone throw some light on this strange behavior? Why should compiler optimization slow down performance when I remove an unused variable? Note: the assembly code (before and after the change) looked the same.


  • Java: volatile guarantees and out-of-order execution

    - by WizardOfOdds
    Note that this question is solely about the volatile keyword and the volatile guarantees: it is not about the synchronized keyword (so please don't answer "you must use synchronized", for I don't have any issue to solve: I simply want to understand the volatile guarantees, or lack thereof, regarding out-of-order execution). Say we have an object containing two volatile String references that are initialized to null by the constructor, and say the only way to modify the two Strings is by calling setBoth(...), which may only set them to non-null references afterwards (only the constructor is allowed to set them to null). For example (it's just an example, there's no question yet):

        public class SO {
            private volatile String a;
            private volatile String b;

            public SO() {
                a = null;
                b = null;
            }

            public void setBoth(@NotNull final String one, @NotNull final String two) {
                a = one;
                b = two;
            }

            public String getA() {
                return a;
            }

            public String getB() {
                return b;
            }
        }

    In setBoth(...), the line assigning the non-null parameter to "a" appears before the line assigning the non-null parameter to "b". Then if I do this (once again, there's no question yet, the question is coming next):

        if (so.getB() != null) {
            System.out.println(so.getA().length());
        }

    Am I correct in my understanding that due to out-of-order execution I can get a NullPointerException? In other words: there's no guarantee that because I read a non-null "b" I'll read a non-null "a"? Because due to an out-of-order (multi)processor and the way volatile works, "b" could be assigned before "a"? volatile guarantees that reads subsequent to a write shall always see the last written value, but here there's an out-of-order "issue", right? (Once again, the "issue" is made up on purpose to try to understand the semantics of the volatile keyword and the Java Memory Model, not to solve a problem.)


  • Using jQuery delay() with separate elements

    - by boomturn
    I want to fake a simple animation on page load by sequentially revealing several PNGs, each appearing instantly but with a unique, precisely timed delay before the next one. I'm using the latest version of jQuery (1.4.2 at the time of writing), which provides a delay() method. My first (braindead) impulse was:

        $('#image1').show('fast').delay(800);
        $('#image2').show('fast').delay(1200);
        $('#image3').show('fast');

    Of course all the images come up simultaneously. The jQuery doc examples refer to a single item being chained, not to handling multiple items. OK... try again using nested callbacks from show():

        $('#image1').show('fast', function() {
            $('#image2').show('fast', function() {
                $('#image3').show('fast', function() {
                });
            });
        });

    Looks ugly (imagine doing 10 of these; I also wonder what the processor hit would be), but hey, it works sequentially! Without the delay, though. So... chain delay() after show(), using a callback function from it (not even sure if it supports one):

        $('#image1').show('fast').delay(800, function() {
            $('#image2').show('fast').delay(1200, function() {
                $('#image3').show('fast');
            });
        });

    Hm. The opposite of what I thought might happen: the callbacks work, but still no delay. So... should I be using some combination of queue() and delay() instead of nested callbacks? The jQuery doc examples for queue again only use one element. I'm guessing this would involve a custom queue, but I'm uncertain how to implement it. Or maybe I'm missing a simple way? (For a simple problem!)
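
    One queue-based shape, as a hedged sketch (assuming jQuery 1.4+, where a queued function receives a next callback): run every step on a single element's fx queue, so each delay() holds back the step queued behind it.

        // All steps live on #image1's default 'fx' queue; delay() then
        // postpones whatever is queued next on that same element.
        $('#image1').show('fast')
            .delay(800)
            .queue(function(next) {
                $('#image2').show('fast');
                next();                  // hand control to the next queued step
            })
            .delay(1200)
            .queue(function(next) {
                $('#image3').show('fast');
                next();
            });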


  • parameterized doctype in xsl:stylesheet

    - by Flavius
    How could I inject a --stringparam (xsltproc) into the DOCTYPE of an XSL stylesheet? The --stringparam is specified from the command line. I have several books in DocBook 5 format that I want to process with the same customization layer, each book having a unique identifier, here "demo", so I'm running something like:

        xsltproc --stringparam course.name demo ...

    for each book. Obviously the parameter is not recognized as such, but treated as verbatim text, giving the error:

        warning: failed to load external entity "http://edu.yet-another-project.com/course/$(course.name)/entities.ent"

    Here is how I've tried it, which won't work:

        <?xml version='1.0'?>
        <!DOCTYPE stylesheet [
          <!ENTITY % myent SYSTEM "http://edu.yet-another-project.com/course/$(course.name)/entities.ent">
          %myent;
        ]>
        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">

          <!-- the docbook template used -->
          <xsl:import href="http://docbook.org/ns/docbook/xhtml/chunk.xsl"/>

          <!-- processor parameters -->
          <xsl:param name="html.stylesheet">default.css</xsl:param>
          <xsl:param name="use.id.as.filename">1</xsl:param>
          <xsl:param name="chunker.output.encoding">UTF-8</xsl:param>
          <xsl:param name="chunker.output.indent">yes</xsl:param>
          <xsl:param name="navig.graphics">1</xsl:param>
          <xsl:param name="generate.revhistory.link">1</xsl:param>
          <xsl:param name="admon.graphics">1</xsl:param>

          <!-- here more stuff -->
        </xsl:stylesheet>

    Ideas?
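
    Worth noting: the DTD and its parameter entities are resolved by the XML parser before xsltproc's --stringparam values even exist, so no stylesheet-internal trick can see the parameter. A hedged workaround sketch, with hypothetical file names: substitute the course name into a template copy of the stylesheet before invoking xsltproc.

        # stylesheet.xsl.in carries a placeholder @COURSE@ inside the SYSTEM URL
        sed "s/@COURSE@/demo/g" stylesheet.xsl.in > stylesheet-demo.xsl
        xsltproc stylesheet-demo.xsl book.xml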


  • Simple, fast SQL queries for flat files

    - by plinehan
    Does anyone know of any tools that provide simple, fast queries of flat files using a SQL-like declarative query language? I'd rather not pay the overhead of loading the file into a DB, since the input data is typically thrown out almost immediately after the query is run. Consider the data file "animals.txt":

        dog 15
        cat 20
        dog 10
        cat 30
        dog 5
        cat 40

    Suppose I want to extract the highest value for each unique animal. I would like to write something like:

        cat animals.txt | foo "select $1, max(convert($2 using decimal)) group by $1"

    I can get nearly the same result using sort:

        cat animals.txt | sort -t " " -k1,1 -k2,2nr

    And I can always drop into awk from there, but this all feels a bit awkward (couldn't resist) when a SQL-like language would seem to solve the problem so cleanly. I've considered writing a wrapper for SQLite that would automatically create a table based on the input data, and I've looked into using Hive in single-processor mode, but I can't help feeling this problem has been solved before. Am I missing something? Is this functionality already implemented by another standard tool? Halp!
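
    For comparison, a minimal awk sketch of the same group-by-max (assuming non-negative numeric values in column 2, since unset array entries compare as zero):

        awk '{ if ($2 > max[$1]) max[$1] = $2 }
             END { for (k in max) print k, max[k] }' animals.txt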


  • ARMv6 FIQ, acknowledge interrupt

    - by fastmonkeywheels
    I'm working with an i.MX35 ARMv6-core processor. I have interrupt 62 configured as an FIQ, with my handler installed and being called. At the moment the handler just toggles an output pin so I can test latency with a scope. With the code below, once I trigger the FIQ it fires continuously, as fast as it can, apparently never being acknowledged. I'm triggering the FIQ by means of the Interrupt Force Register, so I'm sure the source isn't actually triggering this fast. If I disable interrupt 62 in the AVIC inside my FIQ routine, the interrupt triggers only once. I have read the sections on the VIC port in the ARM1136JF-S and ARM1136J-S Technical Reference Manual, which cover the proper exit procedure. I have only one FIQ handler, so I have no need to branch. The line I don't understand is:

        STR R0, [R8, #AckFinished]

    I'm not sure what AckFinished is supposed to be or what this instruction is supposed to do. My FIQ handler is below:

        ldr  r9, IOMUX_ADDR12
        ldr  r8, [r9]
        orr  r8, #0x08      @ top LED
        str  r8, [r9]       @ turn on LED
        bic  r8, #0x08      @ top LED
        str  r8, [r9]       @ turn off LED
        subs pc, r14, #4

        IOMUX_ADDR12: .word 0xFC2A4000  @ remapped IOMUX addr

    My handler returns just fine, and normal system operation resumes if I disable the interrupt after the first go; otherwise it triggers constantly and the system appears to hang. Do you think my assumption is right that the core isn't acknowledging the AVIC, or could there be another cause of this FIQ triggering?


  • Unable to load huge XML document (incorrectly supposed it's due to the XSLT processing)

    - by krisvandenbergh
    I'm trying to match certain elements using XSLT. My input document is very large, and the source XML fails to load after processing the following code (consider especially the first line):

        <xsl:template match="XMI/XMI.content/Model_Management.Model/Foundation.Core.Namespace.ownedElement/Model_Management.Package/Foundation.Core.Namespace.ownedElement">
          <rdf:RDF>
            <rdf:Description rdf:about="">
              <xsl:for-each select="Foundation.Core.Class">
                <xsl:for-each select="Foundation.Core.ModelElement.name">
                  <owl:Class rdf:ID="@Foundation.Core.ModelElement.name" />
                </xsl:for-each>
              </xsl:for-each>
            </rdf:Description>
          </rdf:RDF>
        </xsl:template>

    Apparently the XSLT fails to load after "Model_Management.Model". The PHP code is as follows:

        if ($xml->loadXML($source_xml) == false) {
            die('Failed to load source XML: ' . $http_file);
        }

    It fails to perform loadXML and immediately dies. I think there are two options now: 1) set a maximum execution time; frankly, I don't know how to do that for the built-in PHP 5 XSLT processor. 2) Think about another way to match. What would be the best way to deal with this? The input document can be found at http://krisvandenbergh.be/uml_pricing.xml Any help would be appreciated! Thanks.
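
    On option 1, a hedged PHP sketch: the limits that usually bite first on large documents are script time and memory, and libxml additionally refuses very large texts unless told otherwise (the LIBXML_PARSEHUGE flag requires PHP 5.3.2+ with libxml 2.7+, so treat it as an assumption for this setup).

        set_time_limit(300);               // allow a longer run
        ini_set('memory_limit', '512M');   // DOM trees are memory-hungry
        $xml = new DOMDocument();
        if ($xml->loadXML($source_xml, LIBXML_PARSEHUGE) === false) {
            die('Failed to load source XML');
        }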


  • Locate modules by stack address

    - by PaulH
    I have a Windows Mobile 6.1 application running on an ARMV4I processor. Given a stack address (from unwinding an exception), I'd like to determine which module owns that address. Using the ToolHelp API, I'm able to determine most modules with the following method:

        HANDLE snapshot = ::CreateToolhelp32Snapshot( TH32CS_SNAPMODULE | TH32CS_GETALLMODS, 0 );
        if( INVALID_HANDLE_VALUE != snapshot )
        {
            MODULEENTRY32 mod = { 0 };
            mod.dwSize = sizeof( mod );
            if( ::Module32First( snapshot, &mod ) )
            {
                do
                {
                    if( stack_address > (DWORD)mod.modBaseAddr &&
                        stack_address < (DWORD)( mod.modBaseAddr + mod.modBaseSize ) )
                    {
                        // Found the module!
                        // offset = stack_address - mod.modBaseAddr
                        break;
                    }
                } while( ::Module32Next( snapshot, &mod ) );
            }
            ::CloseToolhelp32Snapshot( snapshot );
        }

    But I don't always seem to be able to find a module that matches an address. For example:

        stack address  module        offset
        0x03f65bd8     coredll.dll + 0x0001bbd8
        0x785cab1c     mylib.dll   + 0x0002ab1c
        0x785ca9e8     mylib.dll   + 0x0002a9e8
        0x785ca0a0     mylib.dll   + 0x0002a0a0
        0x785c8144     mylib.dll   + 0x00028144
        0x3001d95c     ???
        0x3001dd44     ???
        0x3001db90     ???
        0x03f88030     coredll.dll + 0x0003e030
        0x03f8e46c     coredll.dll + 0x0004446c
        0x801087c4     ???
        0x801367b4     ???
        0x8010ce78     ???
        0x801086dc     ???
        0x03f8e588     coredll.dll + 0x00044588
        0x785a56a4     mylib.dll   + 0x000056a4
        0x785bdd60     mylib.dll   + 0x0001dd60
        0x785bbd0c     mylib.dll   + 0x0001bd0c
        0x785bdb38     mylib.dll   + 0x0001db38
        0x3001db20     ???
        0x3001dc40     ???
        0x3001a8a4     ???
        0x3001a79c     ???
        0x03f67348     coredll.dll + 0x0001d348

    Where do I find those stack addresses that are missing? Any suggestions? Thanks, PaulH


  • What vertex shader code should be used for a pixel shader used for simple 2D SpriteBatch drawing in XNA?

    - by Michael
    Preface: first of all, why is a vertex shader required for a SilverlightEffect (.slfx file) in Silverlight 5? I'm trying to port a simple 2D XNA game to Silverlight 5 RC, and I would like to use a basic pixel shader. This shader works great in XNA for Windows and Xbox, but I can't get it to compile with Silverlight as a SilverlightEffect. The MS blog for the Silverlight Toolkit says that "there is no difference between .slfx and .fx", but apparently this isn't quite true; or at least SpriteBatch is working some magic for us in "regular XNA" that it isn't in "Silverlight XNA". If I directly copy my pixel shader file into a Silverlight project (and change it to the supported "Effect - Silverlight" importer/processor), when I try to compile I see the following error message:

        Invalid effect file. Unable to find vertex shader in pass "P0"

    Indeed, there isn't a vertex shader in my pixel shader file. I haven't needed one with my other 2D XNA apps, since I'm just doing basic SpriteBatch drawing. I tried adding a vertex shader to my shader file, using Remi Gillig's comment on this Shawn Hargreaves blog post for guidance, but it doesn't quite work. The shader file compiles successfully, and I see some semblance of my game on screen, but it's tiny, twisted, repeated, and all jumbled up. So clearly something's not quite right. The real question: since a vertex shader is required, is there a basic vertex shader function that works for simple 2D SpriteBatch drawing? And if the vertex shader requires world/view/projection matrices as parameters, what values am I supposed to use for a 2D game? Can any shader pros help? Thanks!
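
    A hedged sketch of the pass-through vertex shader that 2D sprite drawing typically needs (modeled on the shader XNA's own SpriteBatch applies internally; the parameter and function names here are illustrative). MatrixTransform would be an orthographic projection mapping pixel coordinates to clip space, e.g. Matrix.CreateOrthographicOffCenter(0, viewportWidth, viewportHeight, 0, 0, 1):

        float4x4 MatrixTransform;

        void SpriteVertexShader(inout float4 color    : COLOR0,
                                inout float2 texCoord : TEXCOORD0,
                                inout float4 position : POSITION0)
        {
            // Sprites arrive in screen pixels; project them to clip space.
            position = mul(position, MatrixTransform);
        }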


  • image analysis and 64-bit OS

    - by picciopiccio
    I developed a C# application that makes use of the Cognex vision library (VPro). My application is developed with Visual Studio 2008 Pro on a 32-bit Windows PC with 3 GB of RAM. During application startup I see that a large amount of memory is allocated. So far so good, but as I add more and more vision operations the memory allocation increases, and part of the application (only the Cognex OCX) stops working properly. The rest of the application keeps working (worker threads, COM on sockets, ...). I did whatever I could to save memory, but once about 700 MB is allocated the problems begin. A note in the documentation of the Cognex library says that /LARGEADDRESSAWARE is not supported. Anyway, I'm thinking of trying to migrate my app to Win64, but what do I have to do? Can I simply use a 64-bit processor and 64-bit Windows without recompiling my application, so that it remains a 32-bit application but takes advantage of 64 bits? Or should I recompile my application? If I don't need to recompile, can I link it with the 64-bit Cognex library? If I have to recompile, is it possible to cross-compile the application so that my development suite stays on a 32-bit PC? Any help will be very appreciated!! Thanks in advance.


  • Map element position in data file to class property

    - by Augusto
    I need to read/write files following a format provided by a third-party specification. The specification itself is pretty simple: it gives the position and the size of each piece of data that will be saved in the file. For example:

        Position  Size  Description
        --------------------------------------------------
        0001      10    Device serial number
        0011      02    Hour
        0013      02    Minute
        0015      02    Second
        0017      02    Day
        0019      02    Month
        0021      02    Year

    The list is very long, with about 400 elements, but many of them can be combined. For example, hour, minute, second, day, month and year can be combined into a single DateTime object. I've split the elements into about 4 categories and created separate classes for holding the data, so instead of one big structure representing the data I have several smaller classes. I've also created different classes for reading and writing the data. The problem is: how do I map the positions in the file to the object properties, so that I don't need to repeat the values in the reading/writing classes? I could use custom attributes and retrieve them via reflection, but since the code will be running on devices with little memory and processing power, it would be nice to find another way. My current read code looks like this:

        public void Read()
        {
            DataFile dataFile = new DataFile();
            // the arguments are: position, size
            dataFile.SerialNumber = ReadLong(1, 10);
            //...
        }

    Any ideas on this one? Thanks!
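
    One attribute-free possibility, as a hedged sketch (all names hypothetical, and a ReadString(position, size) helper is assumed): keep the layout in a single table of delegates that both the reader and the writer iterate, so each position appears exactly once.

        class Field
        {
            public int Position;
            public int Size;
            public Action<DataFile, string> Set;   // parse raw text into the object
            public Func<DataFile, string> Get;     // format the object back to raw text
        }

        static readonly Field[] Layout = new Field[]
        {
            new Field {
                Position = 1, Size = 10,
                Set = (d, v) => d.SerialNumber = long.Parse(v),
                Get = d => d.SerialNumber.ToString().PadLeft(10, '0')
            },
            // ... one entry per element (or combined element) of the spec
        };

        public void Read(DataFile dataFile)
        {
            foreach (Field f in Layout)
                f.Set(dataFile, ReadString(f.Position, f.Size));
        }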


  • Floating point precision in Visual C++

    - by Luigi Giaccari
    Hi, I am trying to use the robust predicates for computational geometry from Jonathan Richard Shewchuk. I am not a programmer, so I am not even sure of what I am saying; I may be making some basic mistake. The point is that the predicates should allow for exact arithmetic with adaptive floating-point precision. On my computer, an Asus Pro31/S (Core Duo Centrino processor), they do not work. The problem may be that my computer uses some floating-point feature that conflicts with the one assumed by Shewchuk. The author says:

        /* On some machines, the exact arithmetic routines might be defeated by the  */
        /* use of internal extended precision floating-point registers.  Sometimes   */
        /* this problem can be fixed by defining certain values to be volatile,      */
        /* thus forcing them to be stored to memory and rounded off.  This isn't     */
        /* a great solution, though, as it slows the arithmetic down.                */

    Now what I would like to know is whether there is a way, maybe some compiler option, to turn off the internal extended-precision floating-point registers. I really appreciate your help.
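
    A hedged sketch of the usual MSVC-side fix for x87 hardware (standard advice for Shewchuk's predicates, though whether it resolves this particular setup is an assumption): either build for SSE2 so the x87 stack isn't used (/arch:SSE2), or drop the FPU's internal precision to 53-bit significands at runtime before calling the predicates.

        #include <float.h>

        unsigned int previous;
        /* Round all x87 intermediate results to double (53-bit) precision. */
        _controlfp_s(&previous, _PC_53, _MCW_PC);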


  • ASP.NET MVC intermittent slow response

    - by arehman
    Problem: in our production environment, the system occasionally delays the page response of an ASP.NET MVC application by up to 30 seconds or so, even though the same page renders in 2-3 seconds most of the time. This happens randomly on any arbitrary page, with both GET and POST requests. For example, the log files indicate the system took 15 seconds to complete a request for the jQuery script file, or 10 seconds for another small CSS file. Similar problems have been reported elsewhere as random slowdowns.

    Production environment: Windows Server 2008 Standard (32-bit), app pool running in integrated mode, ASP.NET MVC 1.0.

    We have tried the following and observed:

        - Moved the application to a stand-alone web server, but it didn't help.
        - We have never noticed the same issue on the server with any other ASP.NET application.
        - App pool settings are fine: no abrupt recycles or shutdowns.
        - No CPU spikes or memory problems.
        - No delays due to SQL queries or the like.

    It seems as if something is causing a delay along the HTTP pipeline, or the worker process is picking the request up late. Looking for suggestions. Thanks.


  • Performance of Java on different hardware?

    - by tangens
    In another SO question I asked why my Java programs run faster on AMD than on Intel machines, but it seems that I'm the only one who has observed this. Now I would like to invite you to share the numbers of your local Java performance with the SO community. I observed a big performance difference when watching the startup of JBoss on different hardware, so I set this program as the base for the comparison. To participate, please download JBoss 5.1.0.GA and run:

        jboss-5.1.0.GA/bin/run.sh   (or run.bat)

    This starts a standard configuration of JBoss without any extra applications. Then look for the last line of the start procedure, which looks like this:

        [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221634)] Started in 25s:264ms

    Please repeat this procedure until the printed time is somewhat stable, and post this line together with some comments on your hardware (I used CPU-Z to get the info) and operating system, like this:

        java version: 1.6.0_13
        OS: Windows XP
        Board: ASUS M4A78T-E
        Processor: AMD Phenom II X3 720, 2.8 GHz
        RAM: 2*2 GB DDR3 (labeled 1333 MHz)
        GPU: NVIDIA GeForce 9400 GT
        Disk: Seagate 1.5 TB (ST31500341AS)

    Use your votes to bring the fastest configuration to the top. I'm very curious about the results. EDIT: up to now only a few members have shared their results. I'd really be interested in results obtained on some other architectures. If someone works with a Mac (desktop) or runs an Intel i7 at less than 3 GHz, please start JBoss once and share your results. It will only take a few minutes.


  • Help with Neuroph neural network

    - by user359708
    For my graduate research I am creating a neural network that trains to recognize images. I am going much more complex than just taking a grid of RGB values, downsampling, and sending them to the input of the network, like many examples do. I actually use over 100 independently trained neural networks that detect features such as lines, shading patterns, etc. Much more like the human eye, and it works really well so far! The problem is that I have quite a bit of training data: I show it over 100 examples of what a car looks like, then over 100 examples of what a person looks like, then over 100 of what a dog looks like, etc. That is a lot of training data! Currently it takes about one week to train the network, which is killing my progress, as I need to adjust and retrain. I am using Neuroph as the low-level neural network API. I am running a dual quad-core machine (16 hardware threads with hyperthreading), so this should be fast, yet my processor usage sits at only 5%. Are there any tricks for Neuroph performance? Or Java performance in general? Suggestions? I am a cognitive psychology doctoral student, and I am decent as a programmer, but I do not know a great deal about performance programming.
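
    Since the 100+ feature detectors are trained independently, one hedged direction is to train several of them in parallel: Neuroph's learning loop runs on a single thread, so a 5% CPU figure suggests only one core is doing work. A sketch (Neuroph package and method names are the 2.x-era ones and are assumed; trainingSetFor(...) is a hypothetical lookup of each net's data):

        import java.util.List;
        import java.util.concurrent.*;
        import org.neuroph.core.NeuralNetwork;
        import org.neuroph.core.learning.TrainingSet;

        class ParallelTrainer {
            void trainAll(List<NeuralNetwork> detectors) throws InterruptedException {
                ExecutorService pool =
                    Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
                for (final NeuralNetwork net : detectors) {
                    final TrainingSet data = trainingSetFor(net);
                    pool.submit(new Runnable() {
                        public void run() {
                            net.learn(data);   // blocking; one detector per task
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(7, TimeUnit.DAYS);   // generous upper bound
            }

            TrainingSet trainingSetFor(NeuralNetwork net) {
                return null;   // hypothetical: fetch the data set for this detector
            }
        }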


  • Triangle numbers problem: show output within 4 seconds

    - by Daredevil
    The sequence of triangle numbers is generated by adding the natural numbers, so the 7th triangle number is 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms are:

        1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...

    Let us list the factors of the first seven triangle numbers:

         1: 1
         3: 1,3
         6: 1,2,3,6
        10: 1,2,5,10
        15: 1,3,5,15
        21: 1,3,7,21
        28: 1,2,4,7,14,28

    We can see that 28 is the first triangle number to have over five divisors. Given an integer n, display the first triangle number having at least n divisors.

        Sample input: 5
        Output: 28
        Input constraints: 1 <= n <= 320

    I was obviously able to do this question, but I used a naive algorithm: get n, then generate triangle numbers and count their divisors using the mod operator. The challenge was to show the output within 4 seconds of input, but on high inputs like 190 and above it took almost 15-16 seconds. Then I tried to precompute the triangle numbers and their divisor counts into a 2D array first, and then take the input from the user and search the array. But somehow I couldn't get it to work: I got a lot of processor faults. Please try doing it with this method and paste the code, or if there are any better ways, please tell me.
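
    A hedged sketch of the standard speed-up (written in C, since the question doesn't name a language): T(k) = k(k+1)/2, and k and k+1 share no factors, so the divisor count of T(k) is d(k/2)*d(k+1) for even k and d(k)*d((k+1)/2) for odd k, where d() needs only trial division up to the square root.

        #include <stdio.h>

        static int d(long n)                    /* number of divisors of n */
        {
            int count = 0;
            for (long i = 1; i * i <= n; i++)
                if (n % i == 0)
                    count += (i * i == n) ? 1 : 2;
            return count;
        }

        int main(void)
        {
            int n;
            if (scanf("%d", &n) != 1)
                return 1;
            for (long k = 1; ; k++) {
                int nd = (k % 2 == 0) ? d(k / 2) * d(k + 1)
                                      : d(k) * d((k + 1) / 2);
                if (nd >= n) {                  /* at least n divisors */
                    printf("%ld\n", k * (k + 1) / 2);
                    return 0;
                }
            }
        }

    For input 5 this prints 28, matching the sample.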


  • When is a > a true?

    - by Cricri
    Right, I think I really am living a dream. I have the following piece of code which I compile and run on an AIX machine:

        AIX 3 5, PowerPC_POWER5 processor type
        IBM XL C/C++ for AIX, V10.1 (Version: 10.01.0000.0003)

        #include <stdio.h>
        #include <math.h>

        #define RADIAN(x) ((x) * acos(0.0) / 90.0)

        double nearest_distance(double radius, double lon1, double lat1, double lon2, double lat2)
        {
            double rlat1 = RADIAN(lat1);
            double rlat2 = RADIAN(lat2);
            double rlon1 = lon1;
            double rlon2 = lon2;
            double a = 0, b = 0, c = 0;

            a = sin(rlat1)*sin(rlat2) + cos(rlat1)*cos(rlat2)*cos(rlon2 - rlon1);
            printf("%lf\n", a);
            if (a > 1) {
                printf("aaaaaaaaaaaaaaaa\n");
            }
            b = acos(a);
            c = radius * b;
            return radius * (acos(sin(rlat1)*sin(rlat2) + cos(rlat1)*cos(rlat2)*cos(rlon2 - rlon1)));
        }

        int main(int argc, char** argv)
        {
            nearest_distance(6367.47, 10, 64, 10, 64);
            return 0;
        }

    Now, the value of 'a' after the calculation is reported as being '1'. And on this AIX machine, it looks like 1 > 1 is true, as my 'if' branch is entered!!! And my acos of what I think is '1' returns NaNQ, since 1 is bigger than 1. May I ask how that is even possible? I do not know what to think anymore! The code works just fine on other architectures, where 'a' really takes the value of what I think is 1 and acos(a) is 0.
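
    One hedged reading, not a verified diagnosis: the comparison sees a full-precision intermediate that exceeds 1.0 by a few ULPs (on PowerPC, fused multiply-add contraction can produce exactly this), while printf rounds the value to "1.000000". The usual fix is to force the intermediate through a stored double and clamp before acos:

        volatile double t = sin(rlat1)*sin(rlat2) + cos(rlat1)*cos(rlat2)*cos(rlon2 - rlon1);
        if (t > 1.0)  t = 1.0;    /* clamp away the few-ULP overshoot */
        if (t < -1.0) t = -1.0;
        b = acos(t);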


  • A very weird problem with localization (ASP.NET MVC)

    - by Alex
    I'm using Web Developer 2010. I just created a resource file lang.resx with different values in English. Then I created lang.FR-fr.resx with the French equivalents. However, after trying to compile, I get:

        Error 131: Task could not find "AL.exe" using the SdkToolsPath "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin\NETFX 4.0 Tools\" or the registry key "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v7.0A". Make sure the SdkToolsPath is set and the tool exists in the correct processor specific location under the SdkToolsPath and that the Microsoft Windows SDK is installed

    This is so weird! If I remove the FR-fr part it works, but of course there is no translation. I went to that directory and found out that I don't have AL.exe there. I managed to find it in the .NET 2 folder, but copying it over didn't help: it throws an exception. I tried to reinstall .NET 4 and to install the Windows SDK, and it still doesn't work. Maybe I can somehow get this AL.exe file?


  • Load balancing of a PHP/MySQL script without big code changes

    - by DR.GEWA
    Sorry for my dummy question, but... I am building a script in PHP/MySQL (CodeIgniter), and I am extremely interested in knowing whether there is a way, without big architectural changes to the script, to do load balancing. I mean that, for example, now I will rent a medium dedicated server with 2 GB RAM, 200 GB storage and a good processor, and this will be enough for, let's say, half a year for the users who will come. But when they become more and more numerous, and since it's a social network where at night the server may be serving 500-1500 or 5000-8000 users online, I wonder if there is a way to just add a second server with some config that will handle the additional load. After that, again one more, and so on...

        <?php
        if ($answer == YES) {
            how(??);
        } else {
            whatToDo(??);
        }
        ?>

    If there is no way, then maybe you could point me to the easiest load-balancing solution... I would also be extremely thankful if you could tell me, for such purposes, whether I should move to, let's say, PostgreSQL or Firebird, and which of them will be easier to handle in the future. I am getting something like 60 queries for all the data on the mysite.com/users/show/$userId page... maybe too much, but anyway... after some optimization it could be 20-30...
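
    One common incremental path, as a hedged sketch: move sessions into MySQL (CodeIgniter has a database-sessions config option), keep uploads on shared storage, and put a reverse proxy in front of two identical PHP boxes. An nginx-style config sketch with hypothetical IPs:

        upstream app_servers {
            server 10.0.0.1;
            server 10.0.0.2;    # add boxes here as the load grows
        }

        server {
            listen 80;
            location / {
                proxy_pass http://app_servers;
            }
        }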


  • Visual Studio 2008 freeze after save

    - by Klay
    I recently added about a dozen classes from another solution into my current solution in Visual Studio. After adding these classes, Visual Studio started freezing for about 10 seconds whenever I save: the cursor disappears, and mouse clicks and keys do nothing. Some interesting points:

        - Even after I removed the classes, the freezing behavior remained.
        - Freezing occurs whether I've made changes to the code or not.
        - This behavior ONLY seems to affect this particular version of this solution. No other solutions exhibit it, and older versions of this solution are not affected.
        - In Sysinternals Process Explorer, whenever I save in Visual Studio, the I/O bytes graph jumps from 0 to 2 MB for about 5 seconds, then drops to about 1 MB for a split second, then jumps back to 2 MB for another 5 seconds. Processor use goes up to about 3-5% during this time.

    Here are the details of my setup: C# Silverlight project (maybe 20 classes), .NET 3.5 SP1, Visual Studio 2008 v9.0.30729 SP1. EDIT: I edited this question extensively to reflect the more detailed information. I thought this might be preferable to starting a new question.


  • Advice on designing and building distributed application to track vehicles

    - by dario-g
    I'm working on an application for tracking vehicles. There will be about 10k or more vehicles, each sending ~250 bytes every minute. The data contain the GPS location and everything from the CAN bus (every item we can read from the vehicle computer and dashboard), sent over GSM/GPRS using UDP. The estimated volume is ~2,000k rows of this data per day. I see three main blocks:

        1. Multithreaded Socket Server (MSS) - I have this. MSS stores received data in a queue (using NServiceBus).
        2. Rule Processor Server (RPS) - the core of the system. This block is responsible for parsing received data, storing it in the database, processing rules, and sending messages to the Notifier Server (which sends e-mails/SMS texts). Rule example: the received bytes include the current speed; when the speed goes above 120, show an alert in the web application for specified users, send an e-mail, and send an SMS. (There can be more than one instance of RPS.)
        3. Web application - allows users to run reports and define rules, monitor alerts, etc.

    I'm looking for advice on how to design the communication between RPS and the web application. Some questions: Should the web application and RPS have separate databases, or will one central database be enough? I have one domain model in the web application; if there is one central database, can I use the same model (objects) in RPS? And how do I send changed rules to RPS? I'm trying to decouple these blocks as much as possible. I'm planning to create a separate instance of the application for each client (each client with its own database); one client will have 10k vehicles, others only 100.
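
    On pushing rule changes to RPS, a hedged sketch of one decoupled route: publish an event over the bus that is already in place (message and handler names here are hypothetical; the calls follow NServiceBus's general shape).

        public class RuleDefinitionChanged : IMessage
        {
            public Guid RuleId { get; set; }
        }

        // In the web application, after a user saves a rule:
        bus.Publish(new RuleDefinitionChanged { RuleId = savedRule.Id });

        // In each RPS instance, reload just the affected rule:
        public class RuleDefinitionChangedHandler : IHandleMessages<RuleDefinitionChanged>
        {
            public void Handle(RuleDefinitionChanged message)
            {
                // re-read the changed rule from the shared database, refresh caches
            }
        }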


  • Suitable ESXi Spec

    - by Canacourse
    Finally I have some money to buy a new server and replace the one I have been using for 10 years. I'm thinking of running ESXi on the new server and intend to use it as follows:

        - One W2008 R2 guest running Exchange, file store, SVN and an accounting application for the day-to-day running of the company.
        - Multiple guest VMs (W2K, XP, Vista & Win7) that were set up for in-house testing, plus real customer images, also for testing.
        - Probably two server guest OSs (W2003 & W2008) running at the same time, again for testing.
        - One guest VM for builds & continuous integration.
        - Possibly one guest running W2008 R2 for a customer website (portal).

    This server will have to last another 10 years, so I want to get the spec right. Although I am clear on the memory and disk requirements, I am not so clear on the processor(s). I'm thinking of two quad-core processors, but welcome advice on this. Proposed spec:

        - 10 GB RAM
        - 2 TB SATA drives (hardware RAID 1)
        - 2 processors (TBC)

    Normally 3 server VMs will be running concurrently, and the other VMs will be started as required. Max expected VMs running: about 7. Max users: 4. TIA.


  • memory alignment within gcc structs

    - by Mumbles
    I am porting an application to an ARM platform in C; the application also runs on an x86 processor and must remain backward compatible. I am now having some issues with variable alignment. I have read the gcc manual for __attribute__((aligned(4),packed)), and I interpret what is being said as: the start of the struct is aligned to the 4-byte boundary, while the inside remains untouched because of the packed statement. Originally I had this, but occasionally it gets placed unaligned with the 4-byte boundary:

        typedef struct
        {
            unsigned int   code;
            unsigned int   length;
            unsigned int   seq;
            unsigned int   request;
            unsigned char  nonce[16];
            unsigned short crc;
        } __attribute__((packed)) CHALLENGE;

    So I changed it to this:

        typedef struct
        {
            unsigned int   code;
            unsigned int   length;
            unsigned int   seq;
            unsigned int   request;
            unsigned char  nonce[16];
            unsigned short crc;
        } __attribute__((aligned(4),packed)) CHALLENGE;

    The understanding I stated earlier seems to be incorrect: the struct is now aligned to a 4-byte boundary, and so is the inside data, but because sizeof is rounded up to a multiple of the alignment, the size of the struct has increased from 42 to 44 bytes. This size is critical, as we have other applications that depend on the struct being 42 bytes. Could someone describe to me how to perform the operation I require? Any help is much appreciated.
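
    A hedged sketch of one way out: keep the 42-byte packed layout and align the storage rather than the type, since aligned(4) on the type necessarily rounds sizeof up to a multiple of 4.

        /* The buffer the message lives in is aligned; the struct stays 42 bytes. */
        static unsigned char rxbuf[64] __attribute__((aligned(4)));

        CHALLENGE *msg = (CHALLENGE *)rxbuf;   /* starts on a 4-byte boundary */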


  • Trouble using the ref keyword. Very newbie question!

    - by Sergio Tapia
    Here's my class:

        public class UserInformation
        {
            public string Username { get; set; }
            public string ComputerName { get; set; }
            public string Workgroup { get; set; }
            public string OperatingSystem { get; set; }
            public string Processor { get; set; }
            public string RAM { get; set; }
            public string IPAddress { get; set; }

            public UserInformation GetUserInformation()
            {
                var CompleteInformation = new UserInformation();
                GetPersonalDetails(ref CompleteInformation);
                GetMachineDetails(ref CompleteInformation);
                return CompleteInformation;
            }

            private void GetPersonalDetails(ref UserInformation CompleteInformation)
            {
            }

            private void GetMachineDetails(ref UserInformation CompleteInformation)
            {
            }
        }

    I'm under the impression that the ref keyword tells the compiler to use the same variable and not create a new one. Am I using it correctly? Do I have to use ref on both the calling code line and the actual method implementation?
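
    A hedged note with a sketch: since UserInformation is a class (a reference type), the methods receive a copy of the reference and can already fill in the object's properties without ref; ref is only needed when a method must make the caller's variable point at a different object. (Environment.UserName below is just an illustrative value.)

        // No ref needed: the callee mutates the same object the caller sees.
        private void GetPersonalDetails(UserInformation info)
        {
            info.Username = Environment.UserName;
        }

        // ref (or out) only matters when reassigning the caller's variable:
        private void ReplaceWithFresh(ref UserInformation info)
        {
            info = new UserInformation();   // the caller's variable now points here
        }

    And yes: when ref is used, it must appear both at the call site and in the method signature.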


  • How can I get bitfields to arrange my bits in the right order?

    - by Jim Hunziker
    To begin with, the application in question will always run on the same processor, and the compiler is always gcc, so I'm not concerned about bitfields not being portable. gcc lays out bitfields such that the first listed field corresponds to the least significant bits of a byte. So with the following structure, setting a=0, b=1, c=1, d=1 gives a byte of value e0:

        struct Bits {
            unsigned int a:5;
            unsigned int b:1;
            unsigned int c:1;
            unsigned int d:1;
        } __attribute__((__packed__));

    (Actually, this is C++, so I'm talking about g++.) Now let's say I'd like a to be a six-bit integer. I can see why this won't work, but I coded the following structure:

        struct Bits2 {
            unsigned int a:6;
            unsigned int b:1;
            unsigned int c:1;
            unsigned int d:1;
        } __attribute__((__packed__));

    Setting b, c, and d to 1 and a to 0 results in the following two bytes:

        c0 01

    This isn't what I wanted. I was hoping to see this:

        e0 00

    Is there any way to specify a structure that has three bits in the most significant bits of the first byte and six bits spanning the five least significant bits of the first byte and the most significant bit of the second? Please be aware that I have no control over where these bits are supposed to be laid out: it's a layout of bits defined by someone else's interface.
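
    gcc's bitfields can't express that byte-spanning split with this bit order, so the usual fallback is explicit shift-and-mask packing. A hedged sketch (which of a's six bits lands in the second byte is an assumption the interface has to pin down):

        #include <stdint.h>

        /* Byte 0: d, c, b in the top three bits, then a's five low bits.
           Byte 1: a's sixth bit in the MSB. */
        static void pack_bits(uint8_t out[2],
                              unsigned a, unsigned b, unsigned c, unsigned d)
        {
            out[0] = (uint8_t)(((d & 1u) << 7) | ((c & 1u) << 6) |
                               ((b & 1u) << 5) | (a & 0x1Fu));
            out[1] = (uint8_t)(((a >> 5) & 1u) << 7);
        }

    With a=0, b=1, c=1, d=1 this produces the hoped-for e0 00.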

