Search Results

Search found 7065 results on 283 pages for 'cpu sockets'.


  • A map and set which uses contiguous memory and has a reserve function

    - by edA-qa mort-ora-y
    I use several maps and sets. The lack of contiguous memory, and the high number of (de)allocations, is a performance bottleneck. I need a mostly STL-compatible map and set class which can use a contiguous block of memory for its internal objects (or multiple blocks). It also needs to have a reserve function so that I can preallocate for expected sizes. Before I write my own I'd like to check what is available first. Is there something in Boost which does this? Does somebody know of an available implementation elsewhere? Intrusive collection types are not usable here, as the same objects need to exist in several collections. As far as I know, STL memory pools are per-type, not per-instance. These global pools are not efficient with respect to memory locality in multi-CPU/core processing. Object pools don't work, as the types will be shared between instances but their pools should not be. A hash map may be an option in some cases.
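    One option worth checking before writing a custom container is Boost.Container's flat_map/flat_set, which keep their elements in a single contiguous, sorted vector and expose reserve(). A minimal sketch, assuming Boost.Container is available (the element types here are purely illustrative):

        #include <boost/container/flat_map.hpp>
        #include <boost/container/flat_set.hpp>

        int main() {
            boost::container::flat_map<int, double> prices;  // contiguous, sorted storage
            boost::container::flat_set<int> ids;

            prices.reserve(100000);   // one allocation up front
            ids.reserve(100000);

            prices[42] = 3.14;        // std::map-like interface for common operations
            ids.insert(42);
        }

    The trade-off is O(n) insertion in the middle of the vector, so this suits reserve-then-bulk-insert or lookup-heavy workloads best.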

    Read the article

  • JavaFX doesn't repaint label till method has finished, why?

    - by jeff porter
    Hi all, I have a JavaFX app with some code like this...

        public class MainListener extends EventListener {
            override public function event (arg0 : String) : Void {
                statusText.content = arg0;
            }
        }

    statusText is defined like this...

        var statusText = Text {
            x: 30
            y: stageHeight - 40
            font: Font { name: "Bitstream Vera Sans Bold" size: 10 }
            wrappingWidth: 420
            fill: Color.WHITE
            textAlignment: TextAlignment.CENTER
            content: "Status: awaiting DBF file."
        };

    I also have some other Java code that loads data, much like this...

        public ArrayList<CustomerRecord> read(EventListener listener) {
            ArrayList<CustomerRecord> listOfCustomerRecords = new ArrayList<CustomerRecord>();
            listener.event("Status: Starting read");
            // ** takes a while...
            List<Map<String, CustomerField>> customerRecords = new Reader(file).readData(listener);
            // ** long running method over.
            listener.event("Status: Loaded all customers, count:" + listOfCustomerRecords.size());
            return listOfCustomerRecords;
        }

    Now, while the last method is in its long-running call, I would expect to see my statusText updated to 'Status: Starting read', but it isn't. It's only when the read() method returns that the text is updated. If it was 'straight' Java I would presume that the long-running job is hogging the CPU, or that statusText needed to have repaint() called on it. Can anyone give me any ideas? Thanks, Jeff Porter
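    The usual fix for this pattern is to keep the long-running read() off the UI thread so the scene can repaint between status events. A rough sketch in plain Java; the reader/updateStatusOnUiThread names are assumptions, and the exact call for marshalling back onto the JavaFX thread depends on the JavaFX version in use:

        // Run the slow read on a worker thread so the UI thread stays free to repaint.
        new Thread(new Runnable() {
            public void run() {
                reader.read(new EventListener() {
                    public void event(final String msg) {
                        // Push the UI update back onto the JavaFX thread;
                        // in JavaFX Script this would be something like FX.deferAction(...).
                        updateStatusOnUiThread(msg);   // hypothetical helper
                    }
                });
            }
        }).start();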

    Read the article

  • Where is my Python script spending time? Is there "missing time" in my cProfile / pstats trace?

    - by fmark
    I am attempting to profile a long-running Python script. The script does some spatial analysis on a raster GIS data set using the gdal module. The script currently uses three files: the main script, find_pixel_pairs.py, which loops over the raster pixels; a simple cache in lrucache.py; and some misc classes in utils.py. I have profiled the code on a moderate-sized dataset. pstats returns:

        p.sort_stats('cumulative').print_stats(20)
        Thu May 6 19:16:50 2010    phes.profile

                 355483738 function calls in 11644.421 CPU seconds

           Ordered by: cumulative time
           List reduced from 86 to 20 due to restriction <20>

           ncalls     tottime    percall    cumtime    percall  filename:lineno(function)
                1       0.008      0.008  11644.421  11644.421  <string>:1(<module>)
                1   11064.926  11064.926  11644.413  11644.413  find_pixel_pairs.py:49(phes)
        340135349     544.143      0.000    572.481      0.000  utils.py:173(extent_iterator)
          8831020      18.492      0.000     18.492      0.000  {range}
           231922       3.414      0.000      8.128      0.000  utils.py:152(get_block_in_bands)
           142739       1.303      0.000      4.173      0.000  utils.py:97(search_extent_rect)
           745181       1.936      0.000      2.500      0.000  find_pixel_pairs.py:40(is_no_data)
           285478       1.801      0.000      2.271      0.000  utils.py:98(intify)
           231922       1.198      0.000      2.013      0.000  utils.py:116(block_to_pixel_extent)
           695766       1.990      0.000      1.990      0.000  lrucache.py:42(get)
          1213166       1.265      0.000      1.265      0.000  {min}
          1031737       1.034      0.000      1.034      0.000  {isinstance}
           142740       0.563      0.000      0.909      0.000  utils.py:122(find_block_extent)
           463844       0.611      0.000      0.611      0.000  utils.py:112(block_to_pixel_coord)
           745274       0.565      0.000      0.565      0.000  {method 'append' of 'list' objects}
           285478       0.346      0.000      0.346      0.000  {max}
           285480       0.346      0.000      0.346      0.000  utils.py:109(pixel_coord_to_block_coord)
              324       0.002      0.000      0.188      0.001  utils.py:27(__init__)
              324       0.016      0.000      0.186      0.001  gdal.py:848(ReadAsArray)
                1       0.000      0.000      0.160      0.160  utils.py:50(__init__)

    The top two calls contain the main loop, i.e. the entire analysis. The remaining calls sum to less than 625 of the 11644 seconds. Where are the remaining 11,000 seconds spent? Is it all within the main loop of find_pixel_pairs.py? If so, can I find out which lines of code are taking most of the time?
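    Since cProfile only attributes time to whole function calls, the "missing" 11,000 seconds are simply the tottime of phes() itself (11064.926 s in the trace above): lines executed inside the main loop rather than in any callee. One way to break that down per line is the third-party line_profiler package; a sketch, assuming it is installed and that the body shown stands in for the real phes():

        # Decorate the suspect function, then run:  kernprof -l -v find_pixel_pairs.py
        # (older installs ship the launcher as kernprof.py)

        @profile                    # injected by kernprof at runtime; do not import it
        def phes(dataset):
            total = 0.0
            for value in dataset:   # each line gets its own hit count and time
                total += value * value
            return total

        if __name__ == "__main__":
            phes(range(1000000))

    The per-line report then shows directly which statements inside the loop dominate.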

    Read the article

  • Bash: how to simply parallelize tasks?

    - by NoozNooz42
    I'm writing a tiny script that calls the "PNGOUT" util on a few hundred PNG files. I simply did this: find $BASEDIR -iname "*png" -exec pngout {} \; And then I looked at my CPU monitor and noticed only one of the cores was used, which is quite sad. In this day and age of dual-, quad-, hexa- and octo-core desktops, how do I simply parallelize this task with Bash? (It's not the first time I've had such a need; quite a lot of these utils are mono-threaded... I already had the case with MP3 encoders.) Would simply running all the pngout processes in the background do? What would my find command look like then? (I'm not too sure how to mix find and the '&' character.) If I have three hundred pictures, this would mean swapping between three hundred processes, which doesn't seem great anyway!? Or should I copy my three hundred files or so into "nb dirs", where "nb dirs" would be the number of cores, then run "nb finds" concurrently? (Which would be close enough.) But how would I do this?
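    A minimal sketch of one common approach, assuming GNU find and xargs and that pngout takes a single file argument: let xargs fan the file list out over a fixed number of worker processes instead of spawning one process per file.

        #!/bin/sh
        # Run up to 4 pngout processes at a time; adjust -P to the number of cores.
        # -print0/-0 keep filenames with spaces intact; -n 1 passes one file per invocation.
        find "$BASEDIR" -iname '*.png' -print0 | xargs -0 -P 4 -n 1 pngout

    GNU parallel is an alternative with a similar pipeline shape.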

    Read the article

  • Reasons for & against a Database

    - by dbemerlin
    Hi, I had a discussion with a coworker about the architecture of a program I'm writing and I'd like some more opinions. The situation: the program should update in near real time (+/- 1 minute). It involves the movement of objects on a coordinate system. There are some events that occur at regular intervals (i.e. creation of the objects). Movements can change at any time through user input. My solution was: build a server that runs continuously and stores the data internally. The server dumps a state-of-the-program snapshot at regular intervals to protect against power failures and/or crashes. He argued that the program requires a database and that I should use cronjobs to update the data; I can store movement information as start point, end point and speed, and update the position in the cronjob (and calculate collisions with other objects there) from direction and speed. His reasons: my approach requires more CPU & memory because it runs constantly; power failures/crashes might destroy data; databases are faster. My reasons against this are mostly: it is not very precise, as events can only occur at full minutes (wouldn't be that bad though); it requires (possibly costly) transformation of data on every run from relational data to objects; an RDBMS is a general solution for a specialized problem, so a specialized solution should be more efficient; and power failures (or other crashes) can leave the data in an undefined state with only partially updated data unless (possibly costly) precautions (like transactions) are taken. What are your opinions about that? Which arguments can you add for either side?

    Read the article

  • Android ANR keyDispatchingTimedOut error while tapping continuously on the screen

    - by user519846
    Hi all, I am getting an Application Not Responding (ANR) dialog while tapping continuously on the screen. There is no view on the screen where I am tapping. The frequency of this issue is low, but I am still not able to remove it completely. Here is the log I caught during this error:

        ERROR/ActivityManager(1322): ANR in com.test.mj.and.ui (com.test.mj.and.ui/.TermsAndCondActivity)
        ERROR/ActivityManager(1322): Reason: keyDispatchingTimedOut
        ERROR/ActivityManager(1322): Parent: com.test.mj.and.ui/.SplashActivity
        ERROR/ActivityManager(1322): Load: 6.59 / 6.37 / 5.21
        ERROR/ActivityManager(1322): CPU usage from 11430ms to 2196ms ago:
        ERROR/ActivityManager(1322): rtal.mj.and.ui: 9% = 7% user + 1% kernel / faults: 649 minor
        ERROR/ActivityManager(1322): system_server: 4% = 2% user + 2% kernel / faults: 10 minor
        ERROR/ActivityManager(1322): logcat: 3% = 1% user + 1% kernel / faults: 675 minor 1 major
        ERROR/ActivityManager(1322): synaptics_wq: 1% = 0% user + 1% kernel
        ERROR/ActivityManager(1322): ami304d: 1% = 0% user + 0% kernel
        ERROR/ActivityManager(1322): .process.lghome: 1% = 0% user + 0% kernel / faults: 47 minor
        ERROR/ActivityManager(1322): sync_supers: 0% = 0% user + 0% kernel
        ERROR/ActivityManager(1322): droid.DunServer: 0% = 0% user + 0% kernel / faults: 6 minor
        ERROR/ActivityManager(1322): events/0: 0% = 0% user + 0% kernel
        ERROR/ActivityManager(1322): oid.inputmethod: 0% = 0% user + 0% kernel / faults: 2 minor
        ERROR/ActivityManager(1322): m.android.phone: 0% = 0% user + 0% kernel / faults: 2 minor
        ERROR/ActivityManager(1322): ndroid.settings: 0% = 0% user + 0% kernel
        ERROR/ActivityManager(1322): sh: 0% = 0% user + 0% kernel / faults: 110 minor
        ERROR/ActivityManager(1322): -flush-179:0: 0% = 0% user + 0% kernel
        ERROR/ActivityManager(1322): TOTAL: 19% = 13% user + 6% kernel
        WARN/WindowManager(1322): Continuing to wait for key to be dispatched
        WARN/WindowManager(1322): No window to dispatch pointer action 1

    Can anyone please help me solve this issue? Thanks in advance.
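    keyDispatchingTimedOut generally means the main (UI) thread was busy for more than about 5 seconds, so input events could not be dispatched. The log above does not say what the UI thread was doing, but the usual remedy is to push any heavy work out of lifecycle methods and touch handlers onto a background thread. A minimal sketch; the loadTermsAndConditions() and termsView names are assumptions, purely illustrative:

        // Inside the Activity: run the slow work off the UI thread, then update views on it.
        private class LoadTask extends AsyncTask<Void, Void, String> {
            @Override
            protected String doInBackground(Void... params) {
                return loadTermsAndConditions();      // hypothetical slow call (disk/network)
            }

            @Override
            protected void onPostExecute(String text) {
                termsView.setText(text);              // back on the UI thread here
            }
        }

        // e.g. in onCreate():
        new LoadTask().execute();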

    Read the article

  • Run a PHP script every second using CLI

    - by Saif Bechan
    Hello, I have a dedicated server running CentOS with a Parallels Plesk panel. I need to run a PHP script every second to update my database. There is no alternative timewise; I have checked every method, and it needs to be updated every second. I can reach my script using the URL http://www.mysite.com/phpfile.php?key=123, and this has to be executed every second. Does anyone have any knowledge of how to do this? I cannot seem to find the answer. I heard about doing it with the CLI and PuTTY, but I have no knowledge of this at all. Or can this be done using the Plesk panel? And can the file be executed locally every second, like \phpfile.php? If someone helps me answer these questions I would really appreciate it. Regards. EDIT: It has been a few months since I added this question. I ended up using the following code:

        #!/usr/bin/php
        <?php
        $start = microtime(true);
        set_time_limit(60);
        for ($i = 0; $i < 59; ++$i) {
            doMyThings();
            time_sleep_until($start + $i + 1);
        }

    Thank you for this code, guys! My cronjob is set to every minute. I have been running this for some time now in a test environment, and it works out great. It runs really fast, and I see no increase in CPU or memory usage.

    Read the article

  • C & MinGW: Hello World gives me the error "program too big to fit in memory"

    - by user1692088
    I'm new here. Here's my problem: I installed MinGW on my Windows 7 Home Premium 32-bit netbook with an Intel Atom CPU N550, 1.50GHz and 2GB RAM. Now I made a file named hello.h and tried to compile it from CMD with the following command: "gcc c:\workspace\c\helloworld\hello.h -o out.exe". It compiles with no error, but when I try to run out.exe, it gives me the following error: "program too big to fit in memory". Things I have checked: I have added "C:\MinGW\bin" to the Windows PATH variable, and I have googled for about an hour, but since I'm a newbie I can't really figure out what the problem is. I have compiled the same code on my 64-bit machine; it compiles perfectly there too but cannot be run due to the 64-bit vs. 16-bit problem. I'd really appreciate it if someone could figure out what the problem is. Btw, here's my hello.h:

        #include <stdio.h>

        int main(void) {
            printf("Hello, World\n");
        }

    ... That's it. Thanks for your replies. Cheers, Boris
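    One likely culprit, offered here as a guess: because the source file ends in .h, gcc treats it as a header and writes a precompiled header to out.exe instead of a real executable, and Windows reports such a file as "program too big to fit in memory" when asked to run it. A quick check is to rename the file with a .c extension and recompile:

        rem rename the source so gcc compiles it as C instead of precompiling a header
        ren c:\workspace\c\helloworld\hello.h hello.c
        gcc c:\workspace\c\helloworld\hello.c -o out.exe
        out.exe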

    Read the article

  • Perl Parallel::ForkManager wait_all_children() takes excessively long time

    - by zhang18
    I have a script that uses Parallel::ForkManager. However, the wait_all_children() call takes an incredibly long time even after all child processes are completed. The way I know is by printing out some timestamps (see below). Does anyone have any idea what might be causing this (I have 16 CPU cores on my machine)?

        my $pm = Parallel::ForkManager->new(16);
        for my $i (1..16) {
            $pm->start($i) and next;
            # ... do something within the child process ...
            print scalar(localtime), " Process $i completed.\n";
            $pm->finish();
        }
        print scalar(localtime), " Waiting for some child process to finish.\n";
        $pm->wait_all_children();
        print scalar(localtime), " All processes finished.\n";

    Clearly, I'll get the "Waiting for some child process to finish" message first, with a timestamp of, say, 7:08:35. Then I'll get a list of "Process $i completed" messages, with the last one at 7:10:30. However, I do not receive the "All processes finished" message until 7:16:33(!). Why is there a 6-minute delay between 7:10:30 and 7:16:33? Thx!
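    One way to see where the time goes is to let Parallel::ForkManager report when each child is actually reaped, since a child can print its "completed" line long before the parent collects it (for example while a large child process is still tearing down). A sketch using the module's run_on_finish callback; do_work() is a stand-in for the real per-child job:

        use strict;
        use warnings;
        use Parallel::ForkManager;

        sub do_work { my ($i) = @_; sleep 1 }   # stand-in for the real per-child job

        my $pm = Parallel::ForkManager->new(16);

        # Called in the parent each time a child is reaped.
        $pm->run_on_finish(sub {
            my ($pid, $exit_code, $ident) = @_;
            print scalar(localtime), " child $ident (pid $pid) reaped, exit $exit_code\n";
        });

        for my $i (1 .. 16) {
            $pm->start($i) and next;
            do_work($i);
            $pm->finish(0);
        }
        $pm->wait_all_children();

    Comparing these "reaped" timestamps with the children's own "completed" timestamps shows whether the delay is in the children exiting or in the parent collecting them.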

    Read the article

  • How do I stop the m2eclipse plugin interfering with command line mvn builds?

    - by locka
    I use the m2eclipse plugin in Eclipse so that I can import a Maven project. The plugin reads the pom.xml and sorts out the dependencies of the projects in an Eclipse-friendly way, so I'm not looking at a sea of broken references and errors. I use Eclipse for code development, but I usually build the projects from the command line, e.g. "mvn clean install". Unfortunately when I do this, m2eclipse detects disk activity and attempts to rebuild the workspace. This interferes with the command-line build and sometimes results in a race condition. For example, the command-line build might be in its clean phase but fail because it tries to delete a file or directory which is locked during the workspace rebuild. Aside from that, workspace rebuilding is incredibly slow, and between failed builds and wasted CPU my build process is 2-3x longer than it should be. It isn't an option to not use Eclipse (e.g. to use NetBeans), or to disable m2eclipse; it is a useful plugin except for this behaviour. So my question is, how do I stop m2eclipse from rebuilding the workspace all the time? Can I invoke a manual refresh and otherwise disable this behaviour?

    Read the article

  • Improve Application Performance

    - by Gtest
    Hello, I want to improve the performance of a C#.NET application. In my application I use a third-party interop/DLL to process .doc files. It's a simple operation: I pass input and output file paths to the interop DLL, and the DLL extracts the text from the input file. To improve performance I have tried: running 2 threads to process 32 files (each thread processes 16 files); executing the application code in 2 new AppDomains (each AppDomain processes 16 files); and executing the code using the TPL (Task Parallel Library). But all options take around the same time (32 sec) to process 32 files; processing them one after another also takes 32 sec. Then I tried one more thing: I created a sample exe that processes 16 files, with the input and output paths given in a TextBox, and opened 2 instances of that exe. One instance had one set of 16 input files and an output path; the other instance had a different set of 16 input files and an output path. When I clicked the start button on both instances, CPU usage hit 100%, both cores were used significantly, and processing of the 32 files completed within 16 sec. Can I apply this kind of explicit parallelism to improve my application's performance? Thanks.
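    If the interop DLL really does serialize work inside a single process (a common symptom when in-process threads give no speedup but two separate exes do), one option is to make the fan-out explicit: split the file list into batches and launch one worker process per batch. A rough sketch, assuming a hypothetical worker.exe console app that processes the files listed in a text file passed on its command line:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;
        using System.IO;
        using System.Linq;

        class Launcher
        {
            static void Main()
            {
                string[] files = Directory.GetFiles(@"C:\docs", "*.doc");  // illustrative path
                int workers = Environment.ProcessorCount;                  // one batch per core

                var processes = new List<Process>();
                for (int w = 0; w < workers; w++)
                {
                    // Every w-th file goes to worker w; write the batch to a temp list file.
                    string listFile = Path.GetTempFileName();
                    File.WriteAllLines(listFile, files.Where((f, i) => i % workers == w));

                    // worker.exe is hypothetical: a small console app that processes the listed files.
                    processes.Add(Process.Start("worker.exe", "\"" + listFile + "\""));
                }

                foreach (var p in processes) p.WaitForExit();
            }
        }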

    Read the article

  • Weird behaviour of C++ destructors

    - by Vilx-
        #include <iostream>
        #include <vector>

        using namespace std;

        int main()
        {
            vector< vector<int> > dp(50000, vector<int>(4, -1));
            cout << dp.size();
        }

    This tiny program takes a split second to execute when simply run from the command line. But when run in a debugger, it takes over 8 seconds. Pausing the debugger reveals that it is in the middle of destroying all those vectors. WTF? Note - Visual Studio 2008 SP1, Core 2 Duo 6700 CPU with 2GB of RAM. Added: To clarify, no, I'm not confusing Debug and Release builds. These results are from one and the same .exe, without even any recompiling in between. In fact, switching between Debug and Release builds changes nothing.

    Read the article

  • Fetching real-time data from Excel

    - by Umesh Sharma
    I am seriously looking for your valuable help, first time here. If possible, please help me. I am developing a VB.NET app in which I read "real-time data" from an Excel sheet using "Microsoft.Office.Interop.Excel", i.e. Excel automation. All cells in the Excel sheet fetch stock data from some local DDE server, like "=XYZ|Bid!GOLD", "=XYZ|Bid!SILVER", "=XYZ|Ask!SILVER" and so on. Some cells also have fixed values like "Symbol", "Bid Rate", "32.90" etc. The values of the DDE-mapped cells (i.e. =XYZ|xxxx!yyy) change continuously. THE PROBLEM is here: the "fixed values" from the Excel cells come through to my app just fine, but all DDE-mapped cell values come through as "-2146826246" (when the data source, the local DDE server, is ON) or "-2146826265" (when it is OFF). If I use C#.NET it's all OK, just not with VB.NET. I want to display the Excel range A1 to J50 in a .NET ListView; the values change every 200ms (5 times every second). Important: is it possible to BIND the "ListView items/column values" to "Excel cells" or to some local memory variables? Currently I am reading Excel "cell by cell" and trying to put the values into the .NET ListView, but CPU usage is very high and it is a very slow process. If yes, then how, please? I am a VFP developer but new to .NET. It's very easy in VFP, so why not in .NET? Please guide me if someone has the solution...
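    For what it's worth, those two numbers look like Excel cell errors surfacing through COM rather than real data: -2146826246 is 0x800A07FA (error 2042, #N/A) and -2146826265 is 0x800A07E7 (error 2023, #REF!), which fits a DDE link with no value yet or no reachable source. A hedged VB.NET sketch (the sheet and listView names are assumptions) that reads the whole range in one call, which is also far cheaper than 500 cell-by-cell reads every 200ms, and skips error cells:

        ' Read the whole block in one COM round-trip instead of 500 separate cell reads.
        Dim raw As Object(,) = CType(sheet.Range("A1:J50").Value2, Object(,))

        For row As Integer = 1 To raw.GetLength(0)
            For col As Integer = 1 To raw.GetLength(1)
                Dim v As Object = raw(row, col)
                ' Error cells (#N/A, #REF!, ...) arrive as Int32 COM error codes.
                If TypeOf v Is Integer AndAlso (CInt(v) = -2146826246 OrElse CInt(v) = -2146826265) Then
                    Continue For        ' DDE value not available yet; keep the previous display
                End If
                ' Update the matching ListView cell here, e.g.
                ' listView.Items(row - 1).SubItems(col - 1).Text = CStr(v)
            Next
        Next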

    Read the article

  • MSTest on x64 C++/CLI

    - by Oyvind
    I have a problem using MSTest on x64: the test project depends on a couple of C++/CLI assemblies, and fails to load for some reason. In Visual Studio, I get (stripped down): Error loading D:\xxx\Xxx.Test.dll: Unable to load the test container 'D:\xxx\Xxx.Test.dll' or one of its dependencies. Error details: System.BadImageFormatException: Could not load file or assembly 'Common.Geometry.Native, Version=1.1.4574.22395, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An attempt was made to load a program with an incorrect format. Running MSTest manually in a command prompt, I get: Unable to load the test container 'D:\xxx\Xxx.Test.dll' or one of its dependencies. Error details: System.IO.FileNotFoundException: Could not load file or assembly 'Common.Geometry.Native, Version=1.1.4574.22395, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified. Details worth mentioning: the test project itself is compiled using 'Any CPU'; I use an x64-specific testrunconfig; Dependency Walker shows no missing native dependencies in the C++/CLI assembly (Common.Geometry.Native). Even more interesting, there is another test project in the same solution using the same C++/CLI assembly (Common.Geometry.Native), and it runs without any problems. I have also verified that there are no 32-bit assemblies/dlls interfering. Any suggestions are welcome!

    Read the article

  • Fast, very lightweight algorithm for camera motion detection?

    - by Ertebolle
    I'm working on an augmented reality app for iPhone that involves a very processor-intensive object recognition algorithm (pushing the CPU to 100%, it can get through maybe 5 frames per second), and in an effort to both save battery power and make the whole thing less "jittery" I'm trying to come up with a way to only run that object recognizer when the user is actually moving the camera around. My first thought was to simply use the iPhone's accelerometers / gyroscope, but in testing I found that very often people would move the iPhone at a consistent enough attitude and velocity that there wouldn't be any way to tell that it was still in motion. So that left the option of analyzing the actual video feed and detecting movement in that. I got OpenCV working and tried running their pyramidal Lucas-Kanade optical flow algorithm, which works well but seems to be almost as processor-intensive as my object recognizer - I can get it to an acceptable framerate if I lower the depth levels / downsample the image / track fewer points, but then accuracy suffers and it starts to miss some large movements and trigger on small hand-shaking-y ones. So my question is, is there another optical flow algorithm that's faster than Lucas-Kanade if I just want to detect the overall magnitude of camera movement? I don't need to track individual objects, I don't even need to know which direction the camera is moving; all I really need is a way to feed something two frames of video and have it tell me how far apart they are.
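    One cheaper alternative worth trying before any optical flow is plain frame differencing: downsample both frames, take the absolute per-pixel difference, and use its mean as a motion score. It cannot tell direction, but the question only needs "how different are these two frames". A sketch with OpenCV's C++ API; the resize target and the threshold are assumptions to tune:

        #include <opencv2/opencv.hpp>

        // Returns a scalar "motion score" for two consecutive grayscale frames.
        double motionScore(const cv::Mat& prevGray, const cv::Mat& currGray) {
            cv::Mat prevSmall, currSmall, diff;

            // Work on tiny images; global camera motion survives heavy downsampling.
            cv::resize(prevGray, prevSmall, cv::Size(80, 60));
            cv::resize(currGray, currSmall, cv::Size(80, 60));

            cv::absdiff(prevSmall, currSmall, diff);   // per-pixel |a - b|
            return cv::mean(diff)[0];                  // average difference, 0..255
        }

        // Usage sketch: only run the expensive recognizer when the score is high enough.
        // if (motionScore(prev, curr) > 8.0) runObjectRecognizer(curr);   // 8.0 is a guess to tune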

    Read the article

  • C# Multithreaded Domain Design

    - by Thijs Cramer
    Let's say I have a domain model that I'm trying to make compatible with multithreading. The prototype domain is a game domain that consists of Space, SpaceObject, and Location objects. A SpaceObject has the Move method, and Asteroid and Ship extend this object with properties specific to each (Ship has a name and Asteroid has a color). Let's say I want to make the Move method for each object run in a separate thread. That would be stupid, because with 10000 objects I would have 10000 threads. What would be the best way to separate the workload between cores/threads? I'm trying to learn the basics of concurrency, and I'm building a small game to prototype a lot of concepts. What I've already done is build a domain, and a threading model with a timer that launches events at set intervals. When the event occurs I want to update my entire model with the new locations of every SpaceObject. But I don't know how and when to launch new threads with workloads when the event occurs. Some people at work told me that you can't update your core domain multithreaded, because you have to synchronize everything. But in that case I can't run my game on a dual quad-core server, because it would only use 1 CPU for the hardest tasks. Anyone know what to do here?
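    A common pattern for this kind of per-tick update is data parallelism: on each timer tick, let a worker per core call Move on its own slice of the object list, and join before the next tick. A sketch using .NET 4's Parallel.ForEach; SpaceObject and Move come from the question, everything else (fields, the linear-motion body) is assumed for illustration:

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        class SpaceObject
        {
            public double X, Y, VX, VY;
            public void Move(TimeSpan dt)      // placeholder: simple linear motion
            {
                X += VX * dt.TotalSeconds;
                Y += VY * dt.TotalSeconds;
            }
        }

        class Simulation
        {
            private readonly List<SpaceObject> objects = new List<SpaceObject>();

            // Called from the timer event once per tick.
            public void Tick(TimeSpan elapsed)
            {
                // Each SpaceObject only touches its own state here, so no locking is needed
                // as long as Move does not read or write other objects.
                Parallel.ForEach(objects, obj => obj.Move(elapsed));

                // Cross-object work (collision detection, etc.) runs after this barrier,
                // either single-threaded or as a second pass over now read-only positions.
            }
        }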

    Read the article

  • Problem with load testing Web Service - VSTS 2008

    - by Carlos
    Hello, I have a web test which makes a simple call to a web service and looks like this:

        MyWebService webService = new MyWebService();
        webService.Timeout = 180000;
        webService.myMethod();

    I am not using think times, and the run duration is set to 5 minutes. When I ran this test simulating only 1 user, I checked the counters and found something like this: Tests Total: 4500; Network Interface\Bytes Sent (agent machine): 35,500. Then I ran the same test, but this time simulating 2 users, and I got something like this: Tests Total: 2225; Network Interface\Bytes Sent (agent machine): 30,500. So when I increased the number of users, tests/sec was half of what it was with 1 user, and the bytes sent by the agent were also lower. I think this is strange, because it doesn't seem that I have a bottleneck on my agent machine: CPU is never higher than 30%, I have over 1.5GB of RAM free, and my network utilization is about 0.5% of its capacity. To troubleshoot this I ran a test using a step pattern, with the simulated users going from 20 to 800. The requests/sec stayed practically constant through the whole test, so it is clear there is something in my test or my environment which is preventing the number of requests from getting higher. It would be expected behaviour if the response time were increasing, because that would tell me the requests weren't being processed properly, but the strange thing is that the response time is practically constant all the time, and it is actually pretty low. I have no idea why my agent can't send more requests when I increase the number of users; any help/tip/guess would be really appreciated.

    Read the article

  • Help Me With This Access Query

    - by yae
    I have 2 tables: "products" and "pieces".

        PRODUCTS: idProd, product, price
        PIECES:   id, idProdMain, idProdChild, quant

    idProdMain and idProdChild are both related to the table "products". Other considerations: 1 product can have several pieces, and 1 product can itself be a piece. A product's price equals the sum of quantity * price over all of its pieces. The "products" table contains all products. EXAMPLE:

        TABLE PRODUCTS (idProd - product - price)
        1 - Computer - 300€
        2 - Hard Disk - 100€
        3 - Memory - 50€
        4 - Main Board - 100€
        5 - Software - 50€
        6 - CDroms 100 un. - 30€

        TABLE PIECES (id - idProdMain - idProdChild - Quant.)
        1 - 1 - 2 - 1
        2 - 1 - 3 - 2
        3 - 1 - 4 - 1

    WHAT I NEED: I need to update the price of the main product when the price of a child product (piece) is changed. Following the previous example, if I change the price of the product "Memory" (which is also a piece) to 60€, then the product "Computer" must change its price to 320€. How can I do this using queries? I have already tried the following to obtain the price of the main product, but it doesn't work; the query returns no value:

        SELECT Sum(products.price*pieces.quant) AS Expr1
        FROM products LEFT JOIN pieces
          ON (products.idProd = pieces.idProdChild)
         AND (products.idProd = pieces.idProdChild)
         AND (products.idProd = pieces.idProdMain)
        WHERE (((pieces.idProdMain)=5));

    MORE INFO: The "products" table contains all the products for sale in the shop. The "pieces" table exists to keep track of the compound products, i.e. to know which products are the children. An example of a compound product is a computer: this product is composed of other products (motherboard, hard disk, memory, cpu, etc.).
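    For what it's worth, a hedged sketch of the SELECT part: the query above joins on the child id and the main id at the same time, so no row can ever satisfy the ON clause (and idProdMain = 5, Software, has no pieces at all). Joining on the child id only and filtering on the main id gives the component total for one product, e.g. 300€ for the Computer above:

        SELECT pieces.idProdMain, Sum(products.price * pieces.quant) AS piecesTotal
        FROM pieces INNER JOIN products ON products.idProd = pieces.idProdChild
        WHERE pieces.idProdMain = 1
        GROUP BY pieces.idProdMain;

    The write-back to products.price would then be a second step after each piece price change; Access often refuses UPDATE statements that aggregate directly in the SET clause.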

    Read the article

  • How can you start a process from asp.net without interfering with the website?

    - by Sem Dendoncker
    Hi, we have an ASP.NET application that is able to create .air files. To do this we use the following code:

        System.Diagnostics.Process process = new System.Diagnostics.Process();
        //process.StartInfo.FileName = strBatchFile;
        if (File.Exists(@"C:\Program Files\Java\jre6\bin\java.exe"))
        {
            process.StartInfo.FileName = @"C:\Program Files\Java\jre6\bin\java.exe";
        }
        else
        {
            process.StartInfo.FileName = @"C:\Program Files (x86)\Java\jre6\bin\java.exe";
        }
        process.StartInfo.Arguments = GetArguments();
        process.StartInfo.RedirectStandardOutput = true;
        process.StartInfo.RedirectStandardError = true;
        process.StartInfo.UseShellExecute = false;
        process.Start();
        process.PriorityClass = ProcessPriorityClass.Idle;
        string strOutput = process.StandardOutput.ReadToEnd();
        string strError = process.StandardError.ReadToEnd();
        HttpContext.Current.Response.Write(strOutput + "<p>" + strError + "</p>");
        process.WaitForExit();

    The problem is that sometimes the CPU of the server reaches 100%, causing the application to run very slowly and even lose sessions (we think this is the cause). Is there any other solution for generating .air files, or for running an external process without interfering with the ASP.NET application? Cheers, M.

    Read the article

  • Global.asax parser errors when deploying MVC 1 application to remote server.

    - by mannish
    So we're having some issues deploying an ASP.NET MVC app to a client site. Basically, when we try to test the app from localhost, we get the dreaded Global.asax parser error indicating it could not load the application global. Research indicates there are basically 4 possible reasons for the exception we're seeing:

    1. The solution hasn't been built. This clearly isn't the case, since we can deploy it here and it runs fine on any machine we deploy to, AND we had to build and publish the darn thing to deploy it anyway.
    2. The Global.asax namespace inheritance does not match the application global code file. Again, we double-checked this, and since it runs just fine here that can't be the issue.
    3. Miscellaneous non-descript IIS/VS.NET mischief. Basically something gets wonky in IIS or VS.NET and the web server won't behave correctly for this application. We've done cleans and rebuilds, we've deleted the virtual dir and recreated it, and we've performed all of the IIS munging that we've found elsewhere online, plus various combinations of IIS bounces, server reboots, virtual dir/application recreation, etc.
    4. Code-level permissions issue. We've verified full trust in machine/web config in the framework directory, we've set .NET trust to full in IIS, we've granted Everyone full control on the directories just to hit it with the security hammer, etc. etc.

    The pertinent details: Windows Server 2008 x64; IIS 7 with a 32-bit compatible app pool (the app was written on a 32-bit OS, compiled for Any CPU); app pool identity set to NetworkService; Microsoft ASP.NET MVC 1.0; XCopy deployment. We deployed another read-only app just fine. The significant difference in this app is the use of NHibernate and Log4Net, which require full trust. Additionally, the actual project name of the web project differs from the default namespace; however, the Inherits namespace in Global.asax and the Global.asax.cs file match, so this shouldn't be an issue. Anybody have any bright ideas? We're officially down to just the dim ones.

    Read the article

  • Better way of looping to detect change.

    - by Dremation
    As of now I'm using a while(true) loop to detect changes in memory. The problem with this is that it kills the application's performance. I have a list of 30 pointers that need to be checked as rapidly as possible for changes, without a huge performance loss. Anyone have ideas on this?

        memScan = new Thread(ScanMem);

        public static void ScanMem()
        {
            int i = addy.Length;
            while (true)
            {
                Thread.Sleep(30000); // I do this to cut down on cpu usage
                for (int j = 0; j < i; j++)
                {
                    string[] values = addy[j].Split(',');
                    //MessageBox.Show(values[2]);
                    try
                    {
                        if (Memory.Scanner.getIntFromMem(hwnd, (IntPtr)Convert.ToInt32(values[0], 16), 32).ToString() != values[1].ToString())
                        {
                            // Ok, it changed, let's do our work
                            if (Globals.Working) return;
                            SomeFunction("Results: " + values[2].ToString(), "Memory");
                            Globals.Working = true;
                        } // end if
                    } // end try
                    catch { }
                } // end for
            } // end while
        } // end void

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like:

        public double[] runMyAlgorithm(double[] inputData);

    or alternatively a reference could be passed to the array to store the output data:

        public void runMyAlgorithm(double[] inputData, double[] outputData);

    Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:

    1. Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.
    2. Pre-allocate temporary arrays and store them as final fields in the algorithm object - the big downside would be that this would mean that only one thread could run the algorithm at any one time.
    3. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously.
    4. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it will make the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space...
    5. Allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array so that different threads will use different areas of the array independently. This will obviously require some code to manage the offsets and allocation of the array ranges.

    Any thoughts on which approach would be best (and why)?
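    As a concrete illustration of option 3 above, here is a minimal sketch of per-thread scratch space; the 100000 buffer size and the placeholder loop body are assumptions, and the output array is still allocated fresh so callers can keep it:

        public class MyAlgorithm {
            // One scratch buffer per thread, reused across calls; never shared between threads.
            private static final ThreadLocal<double[]> SCRATCH = new ThreadLocal<double[]>() {
                @Override protected double[] initialValue() { return new double[100000]; }
            };

            public double[] runMyAlgorithm(double[] inputData) {
                double[] tmp = SCRATCH.get();            // no allocation after the first call per thread
                double[] output = new double[inputData.length];

                for (int i = 0; i < inputData.length; i++) {
                    tmp[i % tmp.length] = inputData[i] * 2.0;   // placeholder "work" using scratch space
                    output[i] = tmp[i % tmp.length];
                }
                return output;                           // freshly allocated, safe to hand to the caller
            }
        }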

    Read the article

  • How to build a web crawler to find a specific advert, which is in an iframe loaded by Javascript

    - by ZoFreX
    I'm trying to find all instances of an advert on a website. The advert is in an iframe which is loaded by javascript (it doesn't appear at all if javascript is turned off). Detecting the advert itself is extremely simple, both the name of the flash file and the target of the href always contain a certain string. What would be the best "starting point" for achieving this? At the moment I'm considering an Adobe AIR app, which could crawl the site and examine the DOM to find the ad, and would run javascript and load the content of the iframe. The other option I can think of is using Firefox as the platform (using maybe GreaseMonkey or Selenium? I don't really know how to leverage Firefox like this). Does anyone know of anything suitable to build this, or have any suggestions on using Firefox to do it? Extra details: Being CPU intensive isn't really an issue, nor is anything depending on a browser being open. This doesn't need to run on a headless server, it will be running on a powerful desktop box. OS is also not an issue. It would be advantageous if the crawler loaded each page multiple times, as the advert is in rotation. While the crawler does need to execute the javascript and load the content of the iframe, it does not need to be able to display flash files.

    Read the article

  • Strategy for animating a lot of "LED's" - thread?, UIView animations? NSOperation? (iPhone)

    - by RickiG
    Hi, I have to build several different views containing 72 LED lights. I built an LED class so I can loop through the LEDs and set them to different colors (green, red, orange, blue, none, etc.). The LED then loads the appropriate .png. This works fine: I loop over the LEDs and set them. Now I know that at some point they will need to not just turn on/off or change color, but will have to turn on with a small delay, like an equalizer. I have 5-10 views containing the 72 LEDs, and I would like to achieve the above with the minimum amount of memory/CPU strain.

        for (LED *l in self.ledArray) {
            [l display:Green];
        }

    I simply loop as shown above, and inside the LED is a switch case that does the correct logic. If these were actual LEDs and a microcontroller I would use sleep(100) or similar in the loop, but I would really like to avoid stuff like that for obvious reasons. I was thinking that a performOnThread-withDelay approach would be really costly, and that UIView animations changing the alpha or NSOperation would also be a lot of lifting for a small feature. Is there both an efficient and a clever way to go about this? Thanks for any inspiration given :)

    Read the article

  • What is the performance penalty of XML data type in SQL Server when compared to NVARCHAR(MAX)?

    - by Piotr Owsiak
    I have a DB that is going to keep log entries. One of the columns in the log table contains serialized (to XML) objects, and a guy on my team proposed to go with the XML data type rather than NVARCHAR(MAX). This table will have logs kept "forever" (archiving some very old entries may be considered in the future). I'm a little worried about the CPU overhead, but I'm even more worried that the DB can grow faster (FoxyBOA from the referenced question got a 70% bigger DB when using XML). I have read this question http://stackoverflow.com/questions/514827/microsoft-sql-server-2005-2008-xml-vs-text-varchar-data-type and it gave me some ideas, but I am particularly interested in clarification on whether the DB size increases or decreases. Can you please share your insight/experiences on this matter. BTW, I don't currently have any need to depend on XML features within SQL Server (there's nearly zero advantage to me in this specific case). Occasionally log entries will be extracted, but I prefer to handle the XML using .NET (either by writing a small client or using a function defined in a .NET assembly).

    Read the article
