Search Results

Search found 10352 results on 415 pages for 'context switching'.


  • error: 'void Base::output()' is protected within this context

    - by Bill
    I'm confused about the errors generated by the following code. In Derived::doStuff, I can access Base::output directly by calling it. Why can't I create a pointer to output() in the same context in which I can call output()? (I thought protected/private governed whether you could use a name in a specific context, but apparently that is incomplete?) Is my fix of writing callback(this, &Derived::output); instead of callback(this, &Base::output); the correct solution?

    #include <iostream>
    using std::cout;
    using std::endl;

    template <typename T, typename U>
    void callback(T obj, U func) {
        ((obj)->*(func))();
    }

    class Base {
    protected:
        void output() { cout << "Base::output" << endl; }
    };

    class Derived : public Base {
    public:
        void doStuff() {
            // call it directly:
            output();
            Base::output();

            // create a pointer to it:
            // void (Base::*basePointer)() = &Base::output;
            // error: 'void Base::output()' is protected within this context
            void (Derived::*derivedPointer)() = &Derived::output;

            // call a function passing the pointer:
            // callback(this, &Base::output);
            // error: 'void Base::output()' is protected within this context
            callback(this, &Derived::output);
        }
    };

    int main() {
        Derived d;
        d.doStuff();
    }

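    One way to read the error: the C++ protected-access rule (the additional check often cited as [class.protected]) says that inside Derived, a protected member inherited from Base may only be accessed, and a pointer to member may only be formed, through Derived or a class derived from Derived, never through Base itself. So callback(this, &Derived::output) is indeed the usual fix. A minimal sketch of the rule:

    class B { protected: void f() {} };

    class D : public B {
    public:
        void g() {
            // void (B::*p1)() = &B::f;   // ill-formed here: the member must be named through D
            void (D::*p2)() = &D::f;      // OK: inherited member named via the derived class
            (this->*p2)();
        }
    };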

  • Why do browsers allow switching off Javascript?

    - by gath
    I am curious why modern browsers allow switching off JavaScript. It is clear by now that any substantial modern web application needs a fair amount of JavaScript, so why can't JavaScript be made an integral part of the browser? It becomes even more annoying when this option is off by default (IE!!). My opinion is that it should be standard for all browsers to ship with JavaScript enabled by default. What do you think?


  • Switching between landscape views

    - by Isacco
    What is the best way to switch between views that are both in landscape mode? I've tried a simple push-and-pop approach that works fine with portrait views, but when I do it with landscape views, for some reason the switch behaves as if the method were meant only for portrait views, and at the end of the switch it autorotates back to landscape. If anyone can help, that would be greatly appreciated. Thanks in advance.

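    One common cause, sketched below with the SDK-era rotation callback (assumed from the question's timeframe, not taken from the question): every view controller involved in the push has to report landscape support, otherwise UIKit re-evaluates the orientation during the transition and rotates the new view.

    // Objective-C sketch; implement in BOTH the pushing and the pushed view controller
    - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
        return UIInterfaceOrientationIsLandscape(interfaceOrientation);
    }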

  • Microbenchmark showing process-switching faster than thread-switching; what's wrong?

    - by Yang
    I have two simple microbenchmarks trying to measure thread- and process-switching overheads, but the process-switching benchmark reports a lower cost per switch than the thread-switching one, which is the opposite of what I expected. The code lives here; r1667 is pasted below:

    https://assorted.svn.sourceforge.net/svnroot/assorted/sandbox/trunk/src/c/process_switch_bench.c

    // on zs, ~2.1-2.4us/switch
    #include <stdlib.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <semaphore.h>
    #include <signal.h>   /* for kill() */
    #include <unistd.h>
    #include <sys/wait.h>
    #include <sys/types.h>
    #include <sys/time.h>
    #include <pthread.h>

    uint32_t COUNTER;
    pthread_mutex_t LOCK;
    pthread_mutex_t START;
    sem_t *s0, *s1, *s2;

    void * threads ( void * unused ) {
        // Wait till we may fire away
        sem_wait(s2);
        for (;;) {
            pthread_mutex_lock(&LOCK);
            pthread_mutex_unlock(&LOCK);
            COUNTER++;
            sem_post(s0);
            sem_wait(s1);
        }
        return 0;
    }

    int64_t timeInMS () {
        struct timeval t;
        gettimeofday(&t, NULL);
        return ( (int64_t)t.tv_sec * 1000 + (int64_t)t.tv_usec / 1000 );
    }

    int main ( int argc, char ** argv ) {
        int64_t start;
        pthread_t t1;
        pthread_mutex_init(&LOCK, NULL);
        COUNTER = 0;
        s0 = sem_open("/s0", O_CREAT, 0022, 0);
        if (s0 == 0) { perror("sem_open"); exit(1); }
        s1 = sem_open("/s1", O_CREAT, 0022, 0);
        if (s1 == 0) { perror("sem_open"); exit(1); }
        s2 = sem_open("/s2", O_CREAT, 0022, 0);
        if (s2 == 0) { perror("sem_open"); exit(1); }
        int x, y, z;
        sem_getvalue(s0, &x);
        sem_getvalue(s1, &y);
        sem_getvalue(s2, &z);
        printf("%d %d %d\n", x, y, z);
        pid_t pid = fork();
        if (pid) {
            pthread_create(&t1, NULL, threads, NULL);
            pthread_detach(t1);
            // Get start time and fire away
            start = timeInMS();
            sem_post(s2);
            sem_post(s2);
            // Wait for about a second
            sleep(1);
            // Stop thread
            pthread_mutex_lock(&LOCK);
            // Find out how much time has really passed. sleep won't guarantee me that
            // I sleep exactly one second, I might sleep longer since even after being
            // woken up, it can take some time before I gain back CPU time. Further
            // some more time might have passed before I obtained the lock!
            int64_t time = timeInMS() - start;
            // Correct the number of thread switches accordingly
            COUNTER = (uint32_t)(((uint64_t)COUNTER * 2 * 1000) / time);
            printf("Number of process switches in about one second was %u\n", COUNTER);
            printf("roughly %f microseconds per switch\n", 1000000.0 / COUNTER);
            // clean up
            kill(pid, 9);
            wait(0);
            sem_close(s0);
            sem_close(s1);
            sem_unlink("/s0");
            sem_unlink("/s1");
            sem_unlink("/s2");
        } else {
            if (1) { sem_t *t = s0; s0 = s1; s1 = t; }
            threads(0); // never return
        }
        return 0;
    }

    https://assorted.svn.sourceforge.net/svnroot/assorted/sandbox/trunk/src/c/thread_switch_bench.c

    // From <http://stackoverflow.com/questions/304752/how-to-estimate-the-thread-context-switching-overhead>
    // on zs, ~4-5us/switch; tried making COUNTER updated only by one thread, but no difference
    #include <stdlib.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <pthread.h>
    #include <unistd.h>
    #include <sys/time.h>

    uint32_t COUNTER;
    pthread_mutex_t LOCK;
    pthread_mutex_t START;
    pthread_cond_t CONDITION;

    void * threads ( void * unused ) {
        // Wait till we may fire away
        pthread_mutex_lock(&START);
        pthread_mutex_unlock(&START);
        int first = 1;
        pthread_mutex_lock(&LOCK);
        // If I'm not the first thread, the other thread is already waiting on
        // the condition, thus I have to wake it up first, otherwise we'll deadlock
        if (COUNTER > 0) {
            pthread_cond_signal(&CONDITION);
            first = 0;
        }
        for (;;) {
            if (first) COUNTER++;
            pthread_cond_wait(&CONDITION, &LOCK);
            // Always wake up the other thread before processing. The other
            // thread will not be able to do anything as long as I don't go
            // back to sleep first.
            pthread_cond_signal(&CONDITION);
        }
        pthread_mutex_unlock(&LOCK);
        return 0;
    }

    int64_t timeInMS () {
        struct timeval t;
        gettimeofday(&t, NULL);
        return ( (int64_t)t.tv_sec * 1000 + (int64_t)t.tv_usec / 1000 );
    }

    int main ( int argc, char ** argv ) {
        int64_t start;
        pthread_t t1;
        pthread_t t2;
        pthread_mutex_init(&LOCK, NULL);
        pthread_mutex_init(&START, NULL);
        pthread_cond_init(&CONDITION, NULL);
        pthread_mutex_lock(&START);
        COUNTER = 0;
        pthread_create(&t1, NULL, threads, NULL);
        pthread_create(&t2, NULL, threads, NULL);
        pthread_detach(t1);
        pthread_detach(t2);
        // Get start time and fire away
        start = timeInMS();
        pthread_mutex_unlock(&START);
        // Wait for about a second
        sleep(1);
        // Stop both threads
        pthread_mutex_lock(&LOCK);
        // Find out how much time has really passed. sleep won't guarantee me that
        // I sleep exactly one second, I might sleep longer since even after being
        // woken up, it can take some time before I gain back CPU time. Further
        // some more time might have passed before I obtained the lock!
        int64_t time = timeInMS() - start;
        // Correct the number of thread switches accordingly
        COUNTER = (uint32_t)(((uint64_t)COUNTER * 2 * 1000) / time);
        printf("Number of thread switches in about one second was %u\n", COUNTER);
        printf("roughly %f microseconds per switch\n", 1000000.0 / COUNTER);
        return 0;
    }


  • Switching MySql Replication Format

    - by NNN
    I'm currently using statement-based replication. After upgrading to MySql 5.1, I'm considering using row-based replication. After reading the docs it seems that you can change the format of the master on the fly. Will the slave automatically adapt to whatever type of binary log it is sent? Do I have to make any changes to the slave or master to get ready for switching or can I simply modify the binlog_format variable on the master?

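    A sketch of the usual way to flip the format (assuming MySQL 5.1 or later on both master and slave, since the slave's SQL thread has to understand row events; otherwise the slave simply replays whatever event types arrive in the binary log):

    -- on the master
    SET GLOBAL binlog_format = 'ROW';   -- affects new sessions
    SHOW VARIABLES LIKE 'binlog_format';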

  • Switching 2003 SRV to 2008 caused Asp.net application not to load DLL

    - by Tom
    Switching an aged 2003 server to 2008 caused my ASP.NET 2.0 application to fail: the application no longer loads the required library DLL from the /bin/ folder. What should I change in my code or web.config to make this web app load correctly on the new 2008 server as well? This is the error I receive when I access the application (the type comes from an Imports statement against that DLL): Compiler Error Message: BC30002: Type 'Facebook.Entity.User' is not defined.

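    One guess worth checking first, sketched with placeholder names (the site, path and folder below are not from the question): on IIS 7 a copied-over folder that is still a plain virtual directory rather than an application does not get its own /bin probed, which can surface as exactly this kind of BC30002 "type not defined" compile error. Converting the folder to an application restores the /bin lookup.

    REM run on the 2008 server (or use "Convert to Application" in IIS Manager)
    %windir%\system32\inetsrv\appcmd.exe add app /site.name:"Default Web Site" /path:/myapp /physicalPath:"C:\inetpub\wwwroot\myapp"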

  • Tools and tips for switching CMS

    - by Jimmy
    I work for a university, and in the past year we finally broke away from our static HTML site of several thousand pages and moved to a Drupal site. This obviously entails massive amounts of data entry. What if you're already using a CMS and are switching to another one that better suits your needs? How do you minimize the mountain of data entry during such a huge change? Are there tools built for this, or some best practices one should follow?


  • xl2tpd[845]: parse_config: line 13: data 'ipsec sared=yes' occurs with no context

    - by mmc18
    When I execute xl2tpd I get the following error:

    # xl2tpd -D
    xl2tpd[845]: parse_config: line 13: data 'ipsec sared=yes' occurs with no context
    xl2tpd[845]: init: Unable to load config file

    When I remove line 13, I get the same error for line 14; therefore I do not think the problem is with 'ipsec sared'. Here is my configuration file xl2tpd.conf (Linux Ubuntu 12.04):

    ;Openswan IPsec 2.6.37; xl2tpd version: xl2tpd-1.3.1
    ; [global]
    ipsec sared=yes
    listen-addr=47.168.137.27
    ; [lns default]
    ip range = 192.168.1.10-192.168.1.20
    local ip = 192.168.1.1
    require chap = yes
    refuse pap = yes
    require authentication = yes
    ppp debug = yes
    pppoptfile = /etc/ppp/options.xl2tpd
    length bit = yes
    name=LinuxIPSECVPN

    ANSWER (added here since I do not have enough reputation to post it separately): removing the ";" character at the beginning of [global] and [lns default] solved the issue. At first I thought [global] and [lns default] were just comments.

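    For reference, a sketch of the layout the answer implies, with the section headers as real lines rather than comments (note also that the option name xl2tpd documents is "ipsec saref", so "sared" is most likely a typo on top of this):

    [global]
    ipsec saref = yes
    listen-addr = 47.168.137.27

    [lns default]
    ip range = 192.168.1.10-192.168.1.20
    local ip = 192.168.1.1
    require chap = yes
    refuse pap = yes
    require authentication = yes
    ppp debug = yes
    pppoptfile = /etc/ppp/options.xl2tpd
    length bit = yes
    name = LinuxIPSECVPN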

  • Unable to create context rendering error when run OpenGL application

    - by Rodnower
    Hello, I am trying to run the Mesa gears example and I get the following error:

    freeglut (./gears): Unable to create direct context rendering for window 'Gears'
    This may hurt performance.

    The application still runs, but I expect I will have performance problems later on. I am running Linux CentOS 5 on VMware 7. Mesa's version is 6.5. The relevant output of lspci -v gives:

    00:0f.0 VGA compatible controller: VMware SVGA II Adapter (prog-if 00 [VGA controller])
        Subsystem: VMware SVGA II Adapter
        Flags: bus master, medium devsel, latency 64, IRQ 9
        I/O ports at 10d0 [size=16]
        Memory at d0000000 (32-bit, non-prefetchable) [size=128M]
        Memory at d8000000 (32-bit, non-prefetchable) [size=8M]
        [virtual] Expansion ROM at 30000000 [disabled] [size=32K]
        Capabilities: [40] Vendor Specific Information

    Does anyone have an idea? Is there a VMware driver for CentOS? Thanks in advance.

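    A sketch of one thing to try (the setting below is VMware's standard 3D-acceleration switch; whether guest 3D actually works for this particular host/guest combination is an assumption): without 3D acceleration the guest only ever gets indirect/software rendering, which is exactly what the freeglut warning reports.

    # in the VM's .vmx file, with the virtual machine powered off
    mks.enable3d = "TRUE"

    # then reinstall VMware Tools in the CentOS guest so the vmware X driver is present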

  • Remove context menu items

    - by velvetkevorkian
    When I right click on a file and select open with, there are a bunch of irrelevant entries that I will never use. A lot of them are bits of the OS (e.g. Core Image Funhouse.app on image files) or at best dubious (I doubt I will ever want to open a .txt using Illustrator). Is there any way to remove items from the Open With menu? This is on OSX Lion 10.7.5. The only questions I could find related to this were about uninstalled programs remaining in the context menu. Thanks!

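    A sketch of the usual cleanup (the lsregister path below is the commonly cited location and can differ between OS X releases): rebuilding the Launch Services database drops stale and irrelevant entries from the Open With list.

    /System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister \
        -kill -r -domain local -domain system -domain user
    killall Finder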

  • Ghosting context menu clicks in WinXP

    - by Swish
    Let me preface by saying I have a lot of windows open most of the time, although not resource-intensive ones: just browsers, SSH sessions, a music player, an FTP client, Notepad++, IM clients, etc. Anyway, I get a lot of weird visual "ghosting"-type effects. For example, when right-clicking and then selecting an option from a context menu, the selected item remains in view until I right-click somewhere on the desktop. The same thing happens when selecting items from the File, Edit, etc. menus in various programs. I'm assuming this is just the result of a less-than-high-quality video card (NVIDIA GeForce FX 5200); all the other hardware in the machine is newer and higher quality, and that specific video card was added after the fact for multiple monitors. I have looked all over the web for solutions and have increased the number of GDI handles for Windows, reduced the hardware acceleration on the card, etc. Any suggestions other than replacing the card?


  • Switch user from locked screensaver?

    - by desert69
    I'm having the same problem as this, but the answer from there doesn't work for me, neither on my iMac upgraded from Lion to Mountain Lion (currently at 10.8) nor on a Mac mini with Lion Server (10.7.4). I have "Show fast user switching menu as" enabled under "Users & Groups" Login Options, but when I have to log in from the screensaver, there's no option to switch user. How can I enable such an option? EDIT: I think I forgot to mention that we believe (just guessing) that it has to do with network accounts. We use network accounts here and have that issue. My boss says he could solve this issue on his Mac at home, which doesn't use network accounts. Does it make a difference?


  • Windows Desktop Randomly Switches Resolution

    - by bobber205
    Once per login on a specific account, Windows XP switches to 800x600 and what looks like 8-bit color. After a few seconds it switches back to the regular settings. So far this has only happened once per login session, but it is becoming very annoying for the end user. :P Any ideas? I've tried turning off hardware acceleration as well as getting rid of her picture desktop wallpaper, but nothing has made a difference. Thanks for any help!


  • What does CONTROL mean in the context of the Certificate

    - by Ram
    Hi everyone, I am trying to implement encryption in SQL Server 2005 through a certificate and a symmetric key, and I understand that the application user should have the following permissions in order to encrypt and decrypt data: 1) CONTROL permission on the certificate and 2) REFERENCES on the symmetric key (let me know if I am wrong). Now my concern is: what does CONTROL mean in the context of a certificate? If my user User1 has CONTROL permission on my certificate Cert1, what all can he do? Is there a way to restrict him further while still letting User1 encrypt/decrypt the data? I could not find any good-practice document for certificate and key management, so can someone advise on good practice for this? Thanks, Ram

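    A minimal T-SQL sketch of the grants being discussed (Cert1 and User1 are from the question; Key1 is a placeholder for the symmetric key's name):

    -- CONTROL is effectively ownership of the certificate: it implies all other
    -- certificate permissions (ALTER, VIEW DEFINITION, ...), so it is the broad option
    GRANT CONTROL ON CERTIFICATE::Cert1 TO User1;
    GRANT REFERENCES ON SYMMETRIC KEY::Key1 TO User1;

    -- a narrower permission worth testing in place of CONTROL
    GRANT VIEW DEFINITION ON CERTIFICATE::Cert1 TO User1;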

  • Java - How to change context root of a dynamic web project in eclipse

    - by Yatendra Goel
    I have developed a dynamic web project in Eclipse. I can currently access it through my browser using the following URL: http://localhost:8080/MyDynamicWebApp Now I want to change the access URL to http://localhost:8080/app I changed the context root under Project Properties | Web Project Settings | Context Root, but it is not working: the web app still has the same access URL as earlier. I have re-deployed the application on Tomcat, restarted Tomcat and done everything else that should be done, but the access URL is the same as before. I found that there is no server.xml file inside the WAR file, so how does Tomcat determine that the context root of my web app is /MyDynamicWebApp and allow me to access the application through that URL?

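    A sketch of what is usually going on (standard Tomcat behaviour; the snippet below is only one of the options): when a WAR is dropped into webapps, Tomcat derives the context path from the WAR file name and ignores Eclipse's Context Root setting, so MyDynamicWebApp.war always comes up as /MyDynamicWebApp. The simplest change is to name the WAR app.war; alternatively the context can be pinned in the server configuration.

    <!-- one option: inside the <Host> element of Tomcat's conf/server.xml -->
    <Context path="/app" docBase="MyDynamicWebApp" />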

  • Apache - Tomcat ProxyPass VirtualHost - Context Path

    - by Arne
    Hi, I have a problem configuring the Apache-to-Tomcat ProxyPass directives for two applications that have two different context paths in Tomcat. Tomcat is running behind Apache, and I use Apache to proxy requests through to Tomcat. In Apache I want to access both applications via a hostname instead of a context path. Scenario: in Tomcat the applications have the context paths app1 and app2:

    https://domain:8443/app1
    https://domain:8443/app2

    In Apache I want to expose both applications as follows:

    https://app1.host/
    https://app2.host/

    In Apache I have created a configuration for each domain:

    ProxyPass / https://localhost:8443/app1
    ProxyPassReverse / https://localhost:8443/app1

    The strange thing is that app1 is only reachable through Apache using the context path: https://app1.host/app1 Is it possible to realize such a setup with the Apache ProxyPass module? Thanks for your help.

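    A sketch of the usual per-vhost layout (standard mod_proxy directives; the trailing slashes matter, and application-generated redirects or cookie paths under /app1 may still need rewriting):

    <VirtualHost *:443>
        ServerName app1.host
        SSLProxyEngine On
        ProxyPass        / https://localhost:8443/app1/
        ProxyPassReverse / https://localhost:8443/app1/
        ProxyPassReverseCookiePath /app1 /
    </VirtualHost>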

  • How to dispose data context after usage

    - by Erwin
    Hi fellow programmers, I have a member class that returns an IQueryable from a data context:

    public static IQueryable<TB_Country> GetCountriesQ()
    {
        IQueryable<TB_Country> country;
        Bn_Master_DataDataContext db = new Bn_Master_DataDataContext();
        country = db.TB_Countries
                    .OrderBy(o => o.CountryName);
        return country;
    }

    As you can see, I don't dispose of the data context after usage, because if I dispose of it, the code that calls this method cannot use the IQueryable (perhaps because of deferred execution?). How do I force immediate execution in this method, so that I can dispose of the data context? Thank you :D

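    A minimal sketch of the usual fix, reusing the question's types: materialize the query with ToList() inside a using block, so the query executes while the context is still alive and callers receive a plain list instead of a live IQueryable.

    public static List<TB_Country> GetCountries()
    {
        using (var db = new Bn_Master_DataDataContext())
        {
            return db.TB_Countries
                     .OrderBy(o => o.CountryName)
                     .ToList();   // runs the query now, before db is disposed
        }
    }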

  • Using sphinx to create context sensitive html help

    - by bluebill
    Hi all, I am currently using AsciiDoc (http://www.methods.co.nz/asciidoc/) for documenting my software projects because it supports PDF and HTML Help generation. I am currently running it through Cygwin so that the a2x tool chain functions properly. This works well for me but is a pain to set up on other Windows computers. I have been looking for alternative methods and recently revisited Sphinx. Noticing that it now produces HTML Help files, I gave it a try and it seems to work well in the small tests I performed. My question is: is there a way to specify map IDs for context-sensitive help in the text, so that my Windows programs can call the proper help API and the file is launched and opened at the desired location? In AsciiDoc I am using "pass::[]". By using these constructs, a context.h and alias.h are generated along with the other HTML Help files (the context-sensitive help information).


  • Creating a mask from a graphics context

    - by Magic Bullet Dave
    I want to be able to create a greyscale image, with no alpha, from a PNG in the app bundle. This works, and I get an image created:

    // Create graphics context the size of the overlapping rectangle
    UIGraphicsBeginImageContext(rectangleOfOverlap.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // More stuff
    CGContextDrawImage(ctx, drawRect2, [UIImage imageNamed:@"Image 01.png"].CGImage);
    // Create the new UIImage from the context
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();

    However, the resulting image is 32 bits per pixel and has an alpha channel, so when I use CGImageCreateWithMask it doesn't work. I've tried creating a bitmap context thus:

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(nil, rectangleOfOverlap.size.width, rectangleOfOverlap.size.height, 8, rectangleOfOverlap.size.width, colorSpace, kCGImageAlphaNone);

    but UIGraphicsGetImageFromCurrentImageContext returns zero and the resulting image is not created. Am I doing something dumb here? Any help would be greatly appreciated. Regards, Dave

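    A sketch of the likely mismatch, reusing the question's variable names: UIGraphicsGetImageFromCurrentImageContext only looks at the context pushed by UIGraphicsBeginImageContext, not at a context made directly with CGBitmapContextCreate, so it returns nil there; with a hand-made bitmap context the image has to be pulled out with CGBitmapContextCreateImage instead.

    // draw into the hand-made greyscale context, then snapshot that context directly
    CGContextDrawImage(ctx, drawRect2, [UIImage imageNamed:@"Image 01.png"].CGImage);
    CGImageRef grayImage = CGBitmapContextCreateImage(ctx);
    UIImage *newImage = [UIImage imageWithCGImage:grayImage];
    CGImageRelease(grayImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);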

  • State pattern: Why doesn't the context class implement or inherit the State abstract interface/class

    - by Ricket
    I'm reading about the State pattern. I have only just begun, so of course I begin by reading the entire Wikipedia article on it. I noticed that both of the examples in the article have some base abstract class or Java interface for a generic State's methods/functions. Then there are some states which inherit from the base and implement those methods/functions in different ways. Then there's a Context class which has a private member of type State and which, at any time, can be equal to an instance of one of the implementations. That context class also implements the same methods, and passes them onto the current state instance, and then has an additional method to change the state (or depending on design I understand the change of state could be a reaction to one of the implemented methods). Why doesn't this context class specifically "extend" or "implement" the generic State base class/interface?

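    A compact Java sketch of the structure being described, to make the delegation concrete (the names are illustrative, not from the article): the context holds a State and forwards calls to it, but its own public surface is free to differ from the State interface, which is one practical reason it does not implement it.

    interface State {
        void handle(Context ctx);
    }

    class Context {                       // delegates to, but does not implement, State
        private State state = new IdleState();
        void setState(State s) { state = s; }
        void request() { state.handle(this); }
    }

    class IdleState implements State {
        public void handle(Context ctx) { ctx.setState(new ActiveState()); }
    }

    class ActiveState implements State {
        public void handle(Context ctx) { /* ... */ }
    }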

  • OpenAL device, buffer and context relationship

    - by Markus
    I'm trying to create an object oriented model to wrap OpenAL and have a little problem understanding the devices, buffers and contexts. From what I can see in the Programmer's Guide, there are multiple devices, each of which can have multiple contexts as well as multiple buffers. Each context has a listener, and the alListener*() functions all operate on the listener of the active context. (Meaning that I have to make another context active first if I wanted to change it's listener, if I got that right.) So far, so good. What irritates me though is that I need to pass a device to the alcCreateContext() function, but none to alGenBuffers(). How does this work then? When I open multiple devices, on which device are the buffers created? Are the buffers shared between all devices? What happens to the buffers if I close all open devices? (Or is there something I missed?)

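    A sketch of the relationship as it is commonly used (the ownership comments reflect the usual reading that buffers belong to a device and are shared by that device's contexts; they are not quotations from the spec):

    #include <AL/al.h>
    #include <AL/alc.h>

    ALCdevice  *dev = alcOpenDevice(NULL);        // default device
    ALCcontext *ctx = alcCreateContext(dev, NULL);
    alcMakeContextCurrent(ctx);                   // alGenBuffers has no device parameter:
    ALuint buf;                                   // it acts on the device behind the current context
    alGenBuffers(1, &buf);                        // so buf lives on dev, visible to all of dev's contexts
    alDeleteBuffers(1, &buf);
    alcMakeContextCurrent(NULL);
    alcDestroyContext(ctx);
    alcCloseDevice(dev);                          // closing the device invalidates its buffers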
