Search Results

Search found 5279 results on 212 pages for 'execution counter'.


  • PHP: Prevent a function from returning a value?

    - by Industrial
    Hi everybody, how could I make sure that the startProcess() function is called, but without halting the execution of myFunction()? I guess there's a way to call a function and prevent it from returning its value, thereby accomplishing this? Pseudo-code:

        function myFunction() {
            startProcess();
            return $something;
        }

        function startProcess() {
            sleep(5); // Do stuff that the user shouldn't have to wait for.
        }
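    (A sketch of the general fire-and-forget idea, written in Python rather than PHP since the pattern is the same; the function names are illustrative. In PHP itself the usual routes are exec('... > /dev/null &'), a job queue, or fastcgi_finish_request() under PHP-FPM.)

        import threading
        import time

        def start_process():
            # Slow work the user should not have to wait for.
            time.sleep(5)

        def my_function():
            # Fire and forget: the worker runs in the background,
            # so my_function() can return immediately.
            worker = threading.Thread(target=start_process)
            worker.daemon = True
            worker.start()
            return "something"

        print(my_function())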

    Read the article

  • Convert a python dict to a string and back

    - by AJ00200
    I am writing a program that stores data in a dictionary object, but this data needs to be saved at some point during the program execution and loaded back into the dictionary object when the program is run again. How would I convert a dictionary object into a string that can be written to a file and loaded back into a dictionary object? This will hopefully support dictionaries containing dictionaries.
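    (A minimal sketch of one common answer, assuming the values are JSON-serializable - nested dictionaries are fine: round-trip the dict through the json module. pickle or repr()/ast.literal_eval() are alternatives when the data isn't JSON-friendly.)

        import json

        data = {"user": {"name": "AJ", "visits": 3}}

        # Dict -> string, written to a file.
        with open("state.json", "w") as f:
            f.write(json.dumps(data))

        # String -> dict, read back on the next run.
        with open("state.json") as f:
            restored = json.loads(f.read())

        assert restored == data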

    Read the article

  • What runs before main()?

    - by MikimotoH
    After testing on msvc8, I found that three things run before main() is entered: parsing GetCommandLine() into argc and argv, standard C library initialization, and the C++ constructors of global variables. My questions are: Will this execution order be different when I port my program to a different compiler (gcc or armcc) or a different platform? What does standard C library initialization do? So far I know setlocale() is a must. Is it safe to call standard C functions inside the C++ constructor of a global variable?

    Read the article

  • how to see the optimized code in c

    - by sganesh
    I can examine the optimization using a profiler, the size of the executable file, and the time taken for execution, so I can see the result of the optimization. But I have these questions: How can I get the optimized C code itself? Which algorithms or methods does the compiler use to optimize the code? Thanks in advance.

    Read the article

  • Running PHP, MySQL and Apache in Ubuntu 10.04 LTS

    - by Ramprakash
    Hello all, I have installed native Apache, MySQL and PHP on my Linux server. I tried a page using phpinfo() and it worked. But when I try my own pages, execution of the page stops when it reaches the PHP tag; even the CSS tag following it doesn't reach the browser. Please help me fix this issue. Thanks in advance

    Read the article

  • Why does a thread started by ScheduledExecutorService.schedule() never quit?

    - by moonese
    If I create a scheduled task by calling ScheduledExecutorService.schedule(), it never quits after execution. Is this a JDK bug, or am I missing something? Note: doSomething() is the empty method below.

        public static void doSomething() {
        }

        public static void main(String[] args) {
            ScheduledFuture scheduleFuture = Executors.newSingleThreadScheduledExecutor().schedule(new Callable() {
                public Void call() {
                    try {
                        doSomething();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                    return null;
                }
            }, 1, TimeUnit.SECONDS);
        }

    Read the article

  • C# stop and continue..

    - by Betamoo
    I need some way to do the following efficiently in C#: make program execution stop until a certain value is changed. Note: I do not want to do it with a while loop, to avoid wasting CPU power.
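    (The usual answer is a blocking synchronization primitive rather than a loop; a small sketch of the idea in Python - C# offers close equivalents such as ManualResetEventSlim.Wait() or Monitor.Wait/Pulse.)

        import threading

        value_changed = threading.Event()
        shared = {"value": None}

        def producer():
            shared["value"] = 42      # change the value...
            value_changed.set()       # ...then wake whoever is waiting

        threading.Timer(2.0, producer).start()

        value_changed.wait()          # blocks without burning CPU
        print("value is now", shared["value"])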

    Read the article

  • Why use a one-dimensional array instead of a two-dimensional array?

    - by user3869145
    I was doing some work handling a lot of information, and my partner told me that I was using too many matrices to manipulate the variables of the problem. The idea was to use one-dimensional arrays (int a[]) instead of two-dimensional arrays (int b[][]) to save memory and improve the processing speed of the algorithm. How certain is it that this change will speed up the execution or compilation of my C++ code?
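    (For context, a flattened one-dimensional array usually stands in for a two-dimensional one via row-major index arithmetic; a quick illustration in Python with made-up sizes. Whether the switch actually speeds up C++ code depends mostly on whether the 2-D version was already one contiguous block - measuring is the only reliable answer.)

        rows, cols = 3, 4
        b = [[r * cols + c for c in range(cols)] for r in range(rows)]   # 2-D layout
        a = [value for row in b for value in row]                        # flattened 1-D layout

        r, c = 2, 1
        assert b[r][c] == a[r * cols + c]   # same element, single index computation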

    Read the article

  • localhost remove HTML extension

    - by Cusa John
    I've been trying to clean up my URLs with .htaccess, but I can't seem to get it to work on my localhost. My website URL: localhost/index.html This is the default .htaccess file that's in my www folder:

        #------------------------------------------------------------------------------
        # To allow execution of cgi scripts in this directory uncomment next two lines.
        #------------------------------------------------------------------------------
        AddType application/x-httpd-php .html .htm .php
        AddHandler cgi-script .pl .cgi
        Options +ExecCGI +FollowSymLinks

    Read the article

  • Hive metadata permission issue

    - by Chandramohan
    We are getting this error on Hive, while creating a DB / table:

        hive> CREATE TABLE pokes (foo INT, bar STRING);
        FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
        NestedThrowables:
        org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
        FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

    Hive log:

        org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
            at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:298)
            at org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:601)
            at org.datanucleus.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:286)
            at org.datanucleus.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:182)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at javax.jdo.JDOHelper$16.run(JDOHelper.java:1958)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.jdo.JDOHelper.invoke(JDOHelper.java:1953)
            at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1159)
            at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:803)
            at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:698)
            at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:234)
            at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:261)
            at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:196)
            at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:171)
            at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
            at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:354)
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:306)
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:451)
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:232)
            at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:197)
            at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:108)
            at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:1868)
            at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:1878)
            at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:470)
            ... 15 more
        Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
            at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:114)
            at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:521)
            at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:290)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
            at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:588)
            at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:300)
            at org.datanucleus.ObjectManagerFactoryImpl.initialiseStoreManager(ObjectManagerFactoryImpl.java:161)
            at org.datanucleus.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:583)
            ... 42 more
        Caused by: java.util.NoSuchElementException: Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
            at org.apache.commons.pool.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:1191)
            at org.apache.commons.dbcp.PoolingDataSource.getConnection(PoolingDataSource.java:106)
            ... 52 more

        2011-08-11 18:02:36,964 ERROR ql.Driver (SessionState.java:printError(343)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

    Read the article

  • Strange Recurrent Excessive I/O Wait

    - by Chris
    I know quite well that I/O wait has been discussed multiple times on this site, but all the other topics seem to cover constant I/O latency, while the I/O problem we need to solve on our server occurs at irregular (short) intervals, but is ever-present with massive spikes of up to 20k ms a-wait and service times of 2 seconds. The disk affected is /dev/sdb (Seagate Barracuda, for details see below). A typical iostat -x output would at times look like this, which is an extreme sample but by no means rare: iostat (Oct 6, 2013) tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 16.00 0.00 156.00 9.75 21.89 288.12 36.00 57.60 5.50 0.00 44.00 8.00 48.79 2194.18 181.82 100.00 2.00 0.00 16.00 8.00 46.49 3397.00 500.00 100.00 4.50 0.00 40.00 8.89 43.73 5581.78 222.22 100.00 14.50 0.00 148.00 10.21 13.76 5909.24 68.97 100.00 1.50 0.00 12.00 8.00 8.57 7150.67 666.67 100.00 0.50 0.00 4.00 8.00 6.31 10168.00 2000.00 100.00 2.00 0.00 16.00 8.00 5.27 11001.00 500.00 100.00 0.50 0.00 4.00 8.00 2.96 17080.00 2000.00 100.00 34.00 0.00 1324.00 9.88 1.32 137.84 4.45 59.60 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 22.00 44.00 204.00 11.27 0.01 0.27 0.27 0.60 Let me provide you with some more information regarding the hardware. It's a Dell 1950 III box with Debian as OS where uname -a reports the following: Linux xx 2.6.32-5-amd64 #1 SMP Fri Feb 15 15:39:52 UTC 2013 x86_64 GNU/Linux The machine is a dedicated server that hosts an online game without any databases or I/O heavy applications running. The core application consumes about 0.8 of the 8 GBytes RAM, and the average CPU load is relatively low. The game itself, however, reacts rather sensitive towards I/O latency and thus our players experience massive ingame lag, which we would like to address as soon as possible. 
iostat: avg-cpu: %user %nice %system %iowait %steal %idle 1.77 0.01 1.05 1.59 0.00 95.58 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sdb 13.16 25.42 135.12 504701011 2682640656 sda 1.52 0.74 20.63 14644533 409684488 Uptime is: 19:26:26 up 229 days, 17:26, 4 users, load average: 0.36, 0.37, 0.32 Harddisk controller: 01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04) Harddisks: Array 1, RAID-1, 2x Seagate Cheetah 15K.5 73 GB SAS Array 2, RAID-1, 2x Seagate ST3500620SS Barracuda ES.2 500GB 16MB 7200RPM SAS Partition information from df: Filesystem 1K-blocks Used Available Use% Mounted on /dev/sdb1 480191156 30715200 425083668 7% /home /dev/sda2 7692908 437436 6864692 6% / /dev/sda5 15377820 1398916 13197748 10% /usr /dev/sda6 39159724 19158340 18012140 52% /var Some more data samples generated with iostat -dx sdb 1 (Oct 11, 2013) Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util sdb 0.00 15.00 0.00 70.00 0.00 656.00 9.37 4.50 1.83 4.80 33.60 sdb 0.00 0.00 0.00 2.00 0.00 16.00 8.00 12.00 836.00 500.00 100.00 sdb 0.00 0.00 0.00 3.00 0.00 32.00 10.67 9.96 1990.67 333.33 100.00 sdb 0.00 0.00 0.00 4.00 0.00 40.00 10.00 6.96 3075.00 250.00 100.00 sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 4.00 0.00 0.00 100.00 sdb 0.00 0.00 0.00 2.00 0.00 16.00 8.00 2.62 4648.00 500.00 100.00 sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.00 0.00 0.00 100.00 sdb 0.00 0.00 0.00 1.00 0.00 16.00 16.00 1.69 7024.00 1000.00 100.00 sdb 0.00 74.00 0.00 124.00 0.00 1584.00 12.77 1.09 67.94 6.94 86.00 Characteristic charts generated with rrdtool can be found here: iostat plot 1, 24 min interval: http://imageshack.us/photo/my-images/600/yqm3.png/ iostat plot 2, 120 min interval: http://imageshack.us/photo/my-images/407/griw.png/ As we have a rather large cache of 5.5 GBytes, we thought it might be a good idea to test if the I/O wait spikes would perhaps be caused by cache miss events. Therefore, we did a sync and then this to flush the cache and buffers: echo 3 > /proc/sys/vm/drop_caches and directly afterwards the I/O wait and service times virtually went through the roof, and everything on the machine felt like slow motion. During the next few hours the latency recovered and everything was as before - small to medium lags in short, unpredictable intervals. Now my question is: does anybody have any idea what might cause this annoying behaviour? Is it the first indication of the disk array or the raid controller dying, or something that can be easily mended by rebooting? (At the moment we're very reluctant to do this, however, because we're afraid that the disks might not come back up again.) Any help is greatly appreciated. Thanks in advance, Chris. Edited to add: we do see one or two processes go to 'D' state in top, one of which seems to be kjournald rather frequently. If I'm not mistaken, however, this does not indicate the processes causing the latency, but rather those affected by it - correct me if I'm wrong. Does the information about uninterruptibly sleeping processes help us in any way to address the problem? 
@Andy Shinn requested smartctl data, here it is: smartctl -a -d megaraid,2 /dev/sdb yields: smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Device: SEAGATE ST3500620SS Version: MS05 Serial number: Device type: disk Transport protocol: SAS Local Time is: Mon Oct 14 20:37:13 2013 CEST Device supports SMART and is Enabled Temperature Warning Disabled or Not Supported SMART Health Status: OK Current Drive Temperature: 20 C Drive Trip Temperature: 68 C Elements in grown defect list: 0 Vendor (Seagate) cache information Blocks sent to initiator = 1236631092 Blocks received from initiator = 1097862364 Blocks read from cache and sent to initiator = 1383620256 Number of read and write commands whose size <= segment size = 531295338 Number of read and write commands whose size > segment size = 51986460 Vendor (Seagate/Hitachi) factory information number of hours powered up = 36556.93 number of minutes until next internal SMART test = 32 Error counter log: Errors Corrected by Total Correction Gigabytes Total ECC rereads/ errors algorithm processed uncorrected fast | delayed rewrites corrected invocations [10^9 bytes] errors read: 509271032 47 0 509271079 509271079 20981.423 0 write: 0 0 0 0 0 5022.039 0 verify: 1870931090 196 0 1870931286 1870931286 100558.708 0 Non-medium error count: 0 SMART Self-test log Num Test Status segment LifeTime LBA_first_err [SK ASC ASQ] Description number (hours) # 1 Background short Completed 16 36538 - [- - -] # 2 Background short Completed 16 36514 - [- - -] # 3 Background short Completed 16 36490 - [- - -] # 4 Background short Completed 16 36466 - [- - -] # 5 Background short Completed 16 36442 - [- - -] # 6 Background long Completed 16 36420 - [- - -] # 7 Background short Completed 16 36394 - [- - -] # 8 Background short Completed 16 36370 - [- - -] # 9 Background long Completed 16 36364 - [- - -] #10 Background short Completed 16 36361 - [- - -] #11 Background long Completed 16 2 - [- - -] #12 Background short Completed 16 0 - [- - -] Long (extended) Self Test duration: 6798 seconds [113.3 minutes] smartctl -a -d megaraid,3 /dev/sdb yields: smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Device: SEAGATE ST3500620SS Version: MS05 Serial number: Device type: disk Transport protocol: SAS Local Time is: Mon Oct 14 20:37:26 2013 CEST Device supports SMART and is Enabled Temperature Warning Disabled or Not Supported SMART Health Status: OK Current Drive Temperature: 19 C Drive Trip Temperature: 68 C Elements in grown defect list: 0 Vendor (Seagate) cache information Blocks sent to initiator = 288745640 Blocks received from initiator = 1097848399 Blocks read from cache and sent to initiator = 1304149705 Number of read and write commands whose size <= segment size = 527414694 Number of read and write commands whose size > segment size = 51986460 Vendor (Seagate/Hitachi) factory information number of hours powered up = 36596.83 number of minutes until next internal SMART test = 28 Error counter log: Errors Corrected by Total Correction Gigabytes Total ECC rereads/ errors algorithm processed uncorrected fast | delayed rewrites corrected invocations [10^9 bytes] errors read: 610862490 44 0 610862534 610862534 20470.133 0 write: 0 0 0 0 0 5022.480 0 verify: 2861227413 203 0 2861227616 2861227616 100872.443 0 Non-medium error count: 1 SMART Self-test log Num Test Status segment 
LifeTime LBA_first_err [SK ASC ASQ] Description number (hours) # 1 Background short Completed 16 36580 - [- - -] # 2 Background short Completed 16 36556 - [- - -] # 3 Background short Completed 16 36532 - [- - -] # 4 Background short Completed 16 36508 - [- - -] # 5 Background short Completed 16 36484 - [- - -] # 6 Background long Completed 16 36462 - [- - -] # 7 Background short Completed 16 36436 - [- - -] # 8 Background short Completed 16 36412 - [- - -] # 9 Background long Completed 16 36404 - [- - -] #10 Background short Completed 16 36401 - [- - -] #11 Background long Completed 16 2 - [- - -] #12 Background short Completed 16 0 - [- - -] Long (extended) Self Test duration: 6798 seconds [113.3 minutes]

    Read the article

  • Should static analysis warnings fail the CI build?

    - by Cara
    Our team is investigating various options for static analysis in our project, and have mixed opinions about whether we want our Continuous Integration build to fail because of warnings from static analysis. The argument against failing the build is that there are often exceptions to the rules, and attempting to work around them just to make the build succeed reduces productivity. A better approach would be to generate reports with the build, and regularly dedicate developer time to addressing the reported issues. The counter-argument is that it is easy for the technical debt to build up if the bugs are not addressed immediately. Also, if the build fails when a potential bug is introduced, the amount of time required to fix it is reduced. What are your thoughts?

    Read the article

  • Auto_increment values in InnoDB?

    - by Timmy
    I've been using InnoDB for a project and relying on auto_increment. This is not a problem for most of the tables, but for tables with deletions it might be an issue, per "AUTO_INCREMENT Handling in InnoDB", particularly this part: for an AUTO_INCREMENT column named ai_col, after a server startup, for the first insert into a table t, InnoDB executes the equivalent of this statement: SELECT MAX(ai_col) FROM t FOR UPDATE; InnoDB increments by one the value retrieved by the statement and assigns it to the column and to the auto-increment counter for the table. This is a problem because, while it ensures the key is unique within the table, there are foreign keys to this table where those keys are no longer unique. The MySQL server does not (and should not) restart often, but when it does, this breaks. Are there any easy ways around this?

    Read the article

  • Ruby/Rails display general screen when modifications being performed on server

    - by john chan
    I have a Ruby on Rails app running on a server, and sometimes it needs to be taken down for updates, etc. As of now, one way I see to show a general display screen during update periods (when the app is down) is to substitute the files within the /srv/www/ directory so it just displays a general screen everywhere the user could possibly navigate to. I also thought of having a central controller file that connects all others (essentially a main), but this seems counter-intuitive for Rails. There are many external links to the different components of the site that the user could navigate to from outside, and I need to make sure they always receive this general update screen when the app is taken down for a little while. I was wondering if anyone had any other ideas, maybe a library or something like that; I can't seem to find anything online. Any suggestions would be appreciated. Thanks

    Read the article

  • Why is XHTML1.1 dated *before* XHTML1.0 ? What is the preferred XHTML today?

    - by Cheeso
    I'm not clear on the status of XHTML - v1.0 and v1.1. Can someone explain which is preferred at this point, and why? The specs from the W3C say that XHTML 1.1 predates XHTML 1.0, which is exactly counter-intuitive to me. http://www.w3.org/TR/xhtml11/ - W3C Recommendation, 31 May 2001. http://www.w3.org/TR/xhtml1/ - W3C Recommendation, updated 1 August 2002. Also, I noted earlier today that the latest version of htmltidy emits XHTML 1.0 when I request xhtml. Hmmm... even though the XHTML 1.1 spec is 9 years old, it's still not supported by mainstream tools. That suggests that XHTML 1.1 is either completely unnecessary or spurious. Which one should I use if I am authoring pages today? What if I am building tools - should I bother to support both? Or do I need only one? Thanks.

    Read the article

  • First TDD, Simple 2-tier C# Project - what do I unit test?

    - by Joel
    This is probably a stupid question, but my googling isn't finding a satisfactory answer. I'm starting a small project in C#, with just a business layer and a data access layer - strangely, the UI will come later, and I have very little (read: no) concept of / control over what it will look like. I would like to try TDD for this project. I'm using Visual Studio 2008 (soon to be 2010), and I have ReSharper 5 and NUnit. Again, I want to do Test-Driven Development, but not necessarily the entire XP system. My question is: when and where do I write the first unit test? Do I only test logic before I write it, or do I test everything? It seems counter-productive to test things that have no reason to fail (auto-properties, empty constructors)... but it seems like the "No new code without a failing test" maxim requires this. Links or references are fine (but preferably to online resources, not books - I would like to get started ASAP). Thanks in advance for any guidance!
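    (One common reading of the maxim: the first test targets the first piece of real business logic, not auto-properties or empty constructors. A tiny illustrative sketch in Python's unittest - the names are purely hypothetical, since the actual project is C#/NUnit.)

        import unittest

        def total_with_tax(amount, rate):
            # Deliberately unimplemented: the failing test below is written first,
            # then this body is filled in to make it pass.
            raise NotImplementedError

        class TotalWithTaxTest(unittest.TestCase):
            def test_applies_ten_percent_tax(self):
                self.assertEqual(total_with_tax(100.0, rate=0.10), 110.0)

        if __name__ == "__main__":
            unittest.main()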

    Read the article

  • Difference between performSelectorInBackground and NSOperation Subclass

    - by AmitSri
    I have created a test app that runs a deep counter loop. I run the loop function in a background thread using performSelectorInBackground, and also in an NSOperation subclass, separately. I am also using performSelectorOnMainThread to notify the main thread within the background-thread method, and [NSNotificationCenter defaultCenter] postNotificationName within the NSOperation subclass, to notify the main thread to update the UI. Initially both implementations give me the same result, and I am able to update the UI without any problem. The only difference I found is the thread count between the two implementations. The performSelectorInBackground implementation created one thread, which got terminated after the loop finished, and my app's thread count went back to 1. The NSOperation subclass implementation created two new threads that keep existing in the application, and I can see 3 threads after the loop has finished in the main() function. So, my question is: why are two threads created by NSOperation, and why don't they get terminated just like in the first background-thread implementation? I am a little bit confused and unable to decide which implementation is best in terms of performance and memory management. Thanks

    Read the article

  • PerformanceCounters on .NET 4.0 & Windows 7

    - by scott
    I have a program that works fine on VS2008 and Vista, but I'm trying it on Windows 7 and VS2010 / .NET Framework 4.0 and it's not working. Ultimately the problem is that System.Diagnostics.PerformanceCounterCategory.GetCategories() (and other PerformanceCounterCategory methods) is not working. I'm getting a System.InvalidOperationException with the message "Cannot load Counter Name data because an invalid index '' was read from the registry." I can reproduce this with the very simple program shown below:

        class Program {
            static void Main(string[] args) {
                foreach (var pc in System.Diagnostics.PerformanceCounterCategory.GetCategories()) {
                    Console.WriteLine(pc.CategoryName);
                }
            }
        }

    I did make sure I'm running the program as an admin. It doesn't matter if I run it with VS/the debugger attached or not. I don't have another machine with Windows 7 or VS2010 to test it on, so I'm not sure which is complicating things here (or both?). It is Windows 7 x64, and I've tried forcing the app to run as both 32-bit and 64-bit but get the same results.

    Read the article

  • Linux Kernel Module - Creating proc file - proc_root undeclared error

    - by Zach
    I copied and pasted code from this URL for creating and reading/writing a proc file using a kernel module, and I get the error that proc_root is undeclared. This same example is on a few sites, so I assume it works. Any ideas why I'd get this error? Does my makefile need something different? Example code for a basic proc file creation (direct copy and paste to get the initial test done): http://tldp.org/LDP/lkmpg/2.6/html/lkmpg.html#AEN769 The Makefile I'm using is below:

        obj-m := counter.o
        KDIR := /MY/LINUX/SRC
        PWD := $(shell pwd)

        default:
        	$(MAKE) ARCH=um -C $(KDIR) SUBDIRS=$(PWD) modules

    Read the article

  • Printing Stdout In Command Line App Without Overwriting Pending User Input

    - by Chris S
    In a basic Unix-shell app, how would you print to stdout without disturbing any pending user input? For example, below is a simple Python app that echoes user input. A thread running in the background prints a counter every second.

        import threading, time

        class MyThread( threading.Thread ):
            running = False
            def run(self):
                self.running = True
                i = 0
                while self.running:
                    i += 1
                    time.sleep(1)
                    print i

        t = MyThread()
        t.daemon = True
        t.start()

        try:
            while 1:
                inp = raw_input('command> ')
                print inp
        finally:
            t.running = False

    Note how the thread mangles the displayed user input as they type it (e.g. hell1o wo2rld3). How would you work around that, so that the shell writes a new line while preserving the line the user is currently typing on?
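    (One hedged approach, sketched here rather than taken from the question: have the background thread erase the current line, print its message, and then redraw the prompt plus whatever the user has typed so far, which the readline module exposes via readline.get_line_buffer().)

        import sys, threading, time, readline

        PROMPT = 'command> '

        def announce(message):
            pending = readline.get_line_buffer()
            # Erase the current line, print the message, then redraw the
            # prompt together with the user's pending input.
            sys.stdout.write('\r\x1b[K' + message + '\n')
            sys.stdout.write(PROMPT + pending)
            sys.stdout.flush()

        def ticker():
            i = 0
            while True:
                i += 1
                time.sleep(1)
                announce(str(i))

        t = threading.Thread(target=ticker)
        t.daemon = True
        t.start()

        while True:
            print(raw_input(PROMPT))   # use input(PROMPT) on Python 3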

    Read the article

  • How to save Visual Studio Load test results into database.

    - by SonOfOmer
    Hi everyone, I want to know how I can perform a VS2008 load test from five different machines and store the test result data in one place, for example one database. It is a load test that runs a unit test. I specify a scenario and counter sets, and I get results and reports that I can save as .trx files. How can I save specific data from the test results of the 5 different client computers that run the same test into one database, and still know which client computer the data is from? Thanks a lot.

    Read the article

  • Update {node_counter} table programmatically in Drupal

    - by wiifm69
    I currently use the statistics module (core) on my Drupal 6 install. This increments a count in the {node_counter} table every time the node is viewed, and this works. My question is: can I programmatically increment this counter as well? I am looking to achieve this when users interact with content created from views (say, clicking a lightbox), so being able to update the table with AJAX would be ideal. I have done a quick search on d.o and there don't appear to be any modules that stick out straight away. Does anyone have any experience with this?

    Read the article

  • Ignoring old multiple asynchronous ajax requests

    - by Travis
    I've got a custom JavaScript autocomplete script that hits the server with multiple asynchronous AJAX requests (every time a key gets pressed). I've noticed that sometimes an earlier AJAX request will be returned after a later request, which messes things up. The way I handle this now is that I have a counter that increments for each AJAX request; requests that come back with a lower count get ignored. I'm wondering: is this proper? Or is there a better way of dealing with this issue? Thanks in advance, Travis
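    (The incrementing-counter approach is the standard fix; a compact sketch of the same "only the newest request may win" idea, written in Python with simulated latency since the page mixes languages.)

        import random
        import threading
        import time

        latest_request_id = 0

        def autocomplete(query):
            # Issue a request for `query`; stale responses are simply dropped.
            global latest_request_id
            latest_request_id += 1
            request_id = latest_request_id

            def worker():
                time.sleep(random.uniform(0.05, 0.3))   # simulated network latency
                if request_id == latest_request_id:     # ignore anything but the newest request
                    print("suggestions for:", query)

            threading.Thread(target=worker).start()

        for prefix in ["t", "tr", "tra", "trav"]:
            autocomplete(prefix)
            time.sleep(0.05)
        time.sleep(0.5)   # let the last worker finish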

    Read the article

  • Using SqlBulkCopy in a multithread scenario with ThreadPool issue

    - by Ruben F.
    Hi. I'm facing a dilemma (!). In a first scenario, I implemented a solution that replicates data from one database to another using SqlBulkCopy synchronously, and I had no problem at all. Now, using ThreadPool, I implemented the same in an asynchronous scenario, one thread per table, and all works fine, but after some time (usually 1 hour, because the copy operations take about the same time), the operations sent to the ThreadPool stop being executed. There is one distinct SqlBulkCopy using one distinct SqlConnection per thread. I have already checked the number of free threads, and they are all free at the beginning of the invocation. I have one AutoResetEvent to wait for the threads to finish their job before launching again, and a FIFO semaphore that holds the count of active threads. Is there some issue that I have forgotten or that I should evaluate when using SqlBulkCopy? I'd appreciate some help, because I'm out of ideas ;) Thanks

    Read the article

  • What's the Best Practice for Firing Manual OnClick Events?

    - by Tyler Murry
    Hey guys, I've got an XNA project that will be drawing several objects on the screen, and I would like the user to be able to interact with those items. So I'm trying to build a method that checks which object the mouse is over, which of those is the top-most, and then fires an OnClick event for that object. Checking for the things above is not the problem; where to actually put that logic is most of the issue. My initial feeling is that the checking should be handled by a master object, since it doesn't make sense for an object, which ideally knows only about itself, to determine information about the other objects. However, calling OnClick events remotely from the master object seems counter-intuitive as well. What's the best practice in this situation? Thanks, Tyler
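    (One way to frame the split, sketched in Python with hypothetical names: a scene manager owns the hit test and picks the top-most object, while each object only knows its own bounds and its own on_click reaction.)

        class Sprite:
            def __init__(self, name, x, y, w, h, z):
                self.name, self.z = name, z
                self.bounds = (x, y, w, h)

            def contains(self, px, py):
                x, y, w, h = self.bounds
                return x <= px < x + w and y <= py < y + h

            def on_click(self):
                # Each object only decides how to react to its own click.
                print(self.name, "clicked")

        class Scene:
            def __init__(self, sprites):
                self.sprites = sprites

            def handle_click(self, px, py):
                # The manager decides *which* object was hit (top-most wins),
                # then delegates the reaction back to that object.
                hits = [s for s in self.sprites if s.contains(px, py)]
                if hits:
                    max(hits, key=lambda s: s.z).on_click()

        scene = Scene([Sprite("background", 0, 0, 100, 100, z=0),
                       Sprite("button", 10, 10, 20, 20, z=5)])
        scene.handle_click(15, 15)   # prints: button clicked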

    Read the article
