Search Results

Search found 7601 results on 305 pages for 'auto ptr'.

  • Use 3 monitors w/built-in intel adapter + two old nvidia PCI cards on 10.10?

    - by Kendall Gifford
    I'd like to move from windows with my current workstation. The only thing holding me back is that I have 3 monitors connected to the system and I really take advantage of the real estate when working. I just installed Ubuntu 10.10 on the system and one of the monitors is up and running just fine. This monitor is connected to the built-in Intel adapter. I also have two old nVidia GeForce4 MX 4000 (nv19pl) cards in my two PCI slots with two monitors connected to them respectively. I installed the legacy (and proprietary) nVidia drivers (the nvidia-96 package) that claims to support these old cards.

    Now the question is how to get X configured to use all adapters (using two different drivers) so I can use all three monitors (and is this even possible)? From what I've read, it looks like I'll have to write an xorg.conf file since the nVidia driver doesn't support the auto-magic configuration supported by other drivers. On this site: http://wiki.ubuntu.com/X/Config it says that on 10.10 I just need to write an xorg.conf "containing only those sections and options that you need to override Xorg's autoconfigurated settings". So, does this mean I can get away with only including the nVidia-specific configuration stuff and all else will get auto-configured? Or, will providing a config with a "Device" section overrule the auto-magic from detecting/using the Intel adapter? I ran the included nvidia-xconfig to generate a basic, nVidia-specific xorg.conf but I'm hesitant to reboot with it in place, suspecting I'll have a screwed up display.

    Also, is there any way (any tool or command) to generate an xorg.conf from the current, auto-configured running state of an X session? If I have to write a full, complete config, I'd rather start with one that includes everything that's been auto-detected thus far (and merge it with my nVidia version). Anyhow, any info and thoughts are greatly appreciated (as are answers).

    Read the article

  • GCC emits extra code for boost::shared_ptr dereference

    - by Checkers
    I have the following code:

        #include <boost/shared_ptr.hpp>

        struct Foo {
            int a;
        };

        static int A;

        void func_shared(const boost::shared_ptr<Foo> &foo) {
            A = foo->a;
        }

        void func_raw(Foo * const foo) {
            A = foo->a;
        }

    I thought the compiler would create identical code, but for the shared_ptr version an extra, seemingly redundant instruction is emitted:

        Disassembly of section .text:

        00000000 <func_raw(Foo*)>:
           0:   55                      push   ebp
           1:   89 e5                   mov    ebp,esp
           3:   8b 45 08                mov    eax,DWORD PTR [ebp+8]
           6:   5d                      pop    ebp
           7:   8b 00                   mov    eax,DWORD PTR [eax]
           9:   a3 00 00 00 00          mov    ds:0x0,eax
           e:   c3                      ret
           f:   90                      nop

        00000010 <func_shared(boost::shared_ptr<Foo> const&)>:
          10:   55                      push   ebp
          11:   89 e5                   mov    ebp,esp
          13:   8b 45 08                mov    eax,DWORD PTR [ebp+8]
          16:   5d                      pop    ebp
          17:   8b 00                   mov    eax,DWORD PTR [eax]
          19:   8b 00                   mov    eax,DWORD PTR [eax]
          1b:   a3 00 00 00 00          mov    ds:0x0,eax
          20:   c3                      ret

    I'm just curious: is this necessary, or is it just an optimizer's shortcoming? Compiling with g++ 4.1.2, -O3 -NDEBUG.
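    A plausible reading of the extra instruction, sketched below with a hypothetical FakeSharedPtr standing in for boost's layout: func_shared receives a reference, i.e. the address of the shared_ptr object, so the stored pointer member must itself be loaded before the int can be read. The extra mov appears to be that member load, not reference-counting overhead, and a plain struct reproduces it.

        // Minimal sketch; FakeSharedPtr is a stand-in, not boost's real class.
        struct FakeSharedPtr {
            int *px;    // the stored pointer (the control block would follow)
        };

        static int A;

        // Three loads, matching func_shared above: the argument ([ebp+8]),
        // then p.px (the "extra" mov), then *px itself.
        void func_fake(const FakeSharedPtr &p) {
            A = *p.px;
        }

    In other words, the extra mov looks like the price of the extra level of indirection (reference to wrapper to pointee) rather than a missed optimization.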

    Read the article

  • Call a void* as a function without declaring a function pointer

    - by ToxIk
    I've searched but couldn't find any results (my terminology may be off), so forgive me if this has been asked before. I was wondering if there is an easy way to call a void* as a function in C without first declaring a function pointer and then assigning the function pointer the address; i.e., assuming the function to be called is of type void(void):

        void *ptr;
        ptr = <some address>;
        ((void*())ptr)();    /* call ptr as function here */

    With the above code, I get error C2066: cast to function type is illegal in VC2008. If this is possible, how would the syntax differ for functions with return types and multiple parameters?
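    A sketch of the syntax that cast seems to be reaching for: (void*()) names a function type, which is what C2066 objects to; casting to pointer-to-function instead compiles. (hello is a placeholder function added for illustration.)

        #include <stdio.h>

        static void hello(void) { puts("hello"); }

        int main(void)
        {
            /* storing a function address in a void* is technically
               non-portable, but ubiquitous (dlsym depends on it) */
            void *ptr = (void *)&hello;

            /* cast to pointer-to-function, then call */
            ((void (*)(void))ptr)();

            /* with a return type and parameters the shape is the same,
               e.g. for a hypothetical ptr2 holding an int(int, int):
               int r = ((int (*)(int, int))ptr2)(1, 2);                */
            return 0;
        }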

    Read the article

  • array of pointers

    - by tushar
        char *a[] = {"diamonds", "clubs", "spades", "hearts"};
        char **p[] = {a+3, a+2, a+1, a};
        char ***ptr = p;
        cout << *ptr[2][2];

    Why does this display h? Please explain how this array of pointers to pointers is laid out and how its elements are resolved.
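    One way to see it is to step through the subscripts; a short sketch with the intermediate values written out as comments:

        #include <iostream>

        int main()
        {
            const char *a[] = {"diamonds", "clubs", "spades", "hearts"};
            const char **p[] = {a + 3, a + 2, a + 1, a};
            const char ***ptr = p;

            // ptr[2]     == p[2]          == a + 1
            // ptr[2][2]  == *(a + 1 + 2)  == a[3]     == "hearts"
            // *ptr[2][2] == "hearts"[0]   == 'h'
            std::cout << *ptr[2][2] << std::endl;   // prints: h
            return 0;
        }

    Note that p is not a 2-D array: it is a one-dimensional array of char** values, each pointing somewhere into a, so every extra subscript just follows one more pointer.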

    Read the article

  • Can't get bonding and bridging to work for KVM

    - by user9546
    Hi everyone. I can't for the life of me get bonding and bridging to work for the KVM setup I'm building. I'm using a fresh install (not an upgrade) of Ubuntu Server 10.10. I have 4 NICs on the same subnet (two intended for each of my two VMs). I'm trying to achieve the setup that Uthark describes here, but following his guidelines didn't work for me: my eth0 and eth1 did not come up, and "brctl show" showed that br0 didn't have any interfaces (the bond). I assumed it didn't work because he's using 10.04, and this article says there's a recent change in bonding: [I can't post more than one hyperlink per post because I'm a newbie.]

    I had to use this article to get my interfaces to work at all on the same subnet, which is why I have the post-up lines on some of my interfaces: [I can't post more than one hyperlink per post because I'm a newbie.]

    I installed ifenslave and ethtool. I also created /etc/modprobe.d/aliases.conf with the following content:

        alias bond0 bonding
        options bonding mode=6 miimon=100 downdelay=200 updelay=200

    And I included "bonding" in /etc/modules. So, after several approaches, here is my latest interfaces file:

        auto lo
        iface lo inet loopback

        auto eth5
        iface eth5 inet manual

        auto br5
        iface br5 inet static
            post-up /sbin/ip rule add from [network].79 lookup 10
            post-up /sbin/ip route add table 10 default via [network].1 src [network].79 dev br5
            address [network].79
            netmask 255.255.255.0
            network [network].0
            broadcast [network].255
            gateway [network].1
            bridge_ports eth5
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0

        auto eth2
        iface eth2 inet manual

        auto br2
        iface br2 inet static
            post-up /sbin/ip rule add from [network].78 lookup 11
            post-up /sbin/ip route add table 11 default via [network].1 src [network].78 dev br2
            address [network].78
            netmask 255.255.255.0
            network [network].0
            broadcast [network].255
            gateway [network].1
            bridge_ports eth2
            bridge_stp off
            bridge_fd 0
            bridge_maxwait 0

        iface eth0 inet manual
        iface eth1 inet manual

        auto bond0
        iface bond0 inet static
            bond_miimon 100
            bond_mode balance-alb
            up /sbin/ifenslave bond0 eth0 eth1
            down /sbin/ifenslave -d bond0 eth0 eth1

        auto br0
        iface br0 inet static
            address [network].60
            netmask 255.255.255.0
            network [network].0
            broadcast [network].255
            gateway [network].1
            bridge_ports bond0

    eth2, eth5, br2, and br5 all seem to be working fine. The only other thing I could find that looked suspicious is an error regarding bonding in /var/log/messages:

        kernel: [ 3.828684] bonding: Warning: either miimon or arp_interval and arp_ip_target module parameters must be specified, otherwise bonding will not detect link failures! see bonding.txt for details.

    ...even though there is a bond-miimon line in /etc/network/interfaces (if that's what they're talking about). Also, the bond seems to go in and out of promiscuous mode several times on boot:

        Jan 20 14:19:02 kvmhost kernel: [ 3.902378] device bond0 entered promiscuous mode
        Jan 20 14:19:02 kvmhost kernel: [ 3.902390] device bond0 left promiscuous mode
        Jan 20 14:19:02 kvmhost kernel: [ 3.902393] device bond0 entered promiscuous mode
        Jan 20 14:19:02 kvmhost kernel: [ 3.902397] device bond0 left promiscuous mode
        Jan 20 14:19:03 kvmhost kernel: [ 4.998990] device bond0 entered promiscuous mode
        Jan 20 14:19:03 kvmhost kernel: [ 4.999005] device bond0 left promiscuous mode
        Jan 20 14:19:03 kvmhost kernel: [ 4.999008] device bond0 entered promiscuous mode
        Jan 20 14:19:03 kvmhost kernel: [ 4.999012] device bond0 left promiscuous mode

    Any advice would be greatly appreciated. It seems that this must be possible, based on other posts, but I can't see what I'm doing wrong. Thanks.

    Read the article

  • UTF-8 to Unicode conversion

    - by sandeep
    Hi, I am having problems converting UTF-8 to Unicode. Below is the code:

        int charset_convert(char *string, char *to_string, char *charset_from, char *charset_to)
        {
            char *from_buf, *to_buf, *pointer;
            size_t inbytesleft, outbytesleft, ret;
            size_t TotalLen;
            iconv_t cd;

            if (!charset_from || !charset_to || !string) /* sanity check */
                return -1;

            if (strlen(string) < 1)
                return 0; /* we are done, nothing to convert */

            cd = iconv_open(charset_to, charset_from);
            /* Did I succeed in getting a conversion descriptor? */
            if (cd == (iconv_t)(-1)) {
                /* I guess not */
                printf("Failed to convert string from %s to %s ",
                       charset_from, charset_to);
                return -1;
            }

            from_buf = string;
            inbytesleft = strlen(string);
            /* allocate max sized buffer, assuming target encoding may be 4 byte unicode */
            outbytesleft = inbytesleft * 4;
            pointer = to_buf = (char *)malloc(outbytesleft);
            memset(to_buf, 0, outbytesleft);
            memset(pointer, 0, outbytesleft);

            ret = iconv(cd, &from_buf, &inbytesleft, &pointer, &outbytesleft);
            memcpy(to_string, to_buf, (pointer - to_buf));
        }

    main():

        int main()
        {
            char UTF[] = {'A', 'B'};
            char Unicode[1024] = {0};
            char *ptr;
            int x = 0;
            iconv_t cd;

            charset_convert(UTF, Unicode, "UTF-8", "UNICODE");

            ptr = Unicode;
            while (*ptr != '\0') {
                printf("Unicode %x \n", *ptr);
                ptr++;
            }
            return 0;
        }

    It should give A and B, but I am getting: ffffffff fffffffe 41. Thanks, Sandeep
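    A sketch of a likely explanation, under two assumptions: the generic "UNICODE" target picks UTF-16 and prepends a byte-order mark (the ff fe), and printing through plain signed char sign-extends bytes >= 0x80 (hence ffffffff fffffffe). The while loop also stops at the first 0x00 byte of the UTF-16 data, which is why B never appears. Requesting an explicit byte order and printing unsigned bytes sidesteps all three; note the source buffer also needs NUL termination, which {'A', 'B'} lacks.

        #include <stdio.h>
        #include <string.h>
        #include <iconv.h>

        int main(void)
        {
            char in[] = "AB";              /* NUL-terminated, unlike {'A', 'B'} */
            char out[64] = {0};
            char *inp = in, *outp = out;
            size_t inleft = strlen(in), outleft = sizeof out;

            /* "UTF-16LE" requests an explicit byte order, so no BOM is prepended */
            iconv_t cd = iconv_open("UTF-16LE", "UTF-8");
            if (cd == (iconv_t)-1)
                return 1;
            if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1)
                return 1;
            iconv_close(cd);

            /* unsigned char keeps %x from sign-extending high bytes */
            for (unsigned char *p = (unsigned char *)out;
                 p < (unsigned char *)outp; p++)
                printf("%x ", *p);
            printf("\n");                  /* expected: 41 0 42 0 */
            return 0;
        }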

    Read the article

  • When does printf("%s", char*) stop printing?

    - by remagen
    In my class we are writing our own copy of C's malloc() function. To test my code (which can currently allocate space fine) I was using:

        char* ptr = my_malloc(6*sizeof(char));
        memcpy(ptr, "Hello\n", 6*sizeof(char));
        printf("%s", ptr);

    The output would typically be this:

        Hello
        [unprintable character]

    Some debugging showed that my code wasn't causing this per se, as ptr's memory is laid out as follows:

        [24 bytes of meta info][Number of requested bytes][Padding]

    So I figured that printf was reaching into the padding, which is just garbage. So I ran a test of:

        printf("%s", "test\nd");

    and got:

        test
        d

    Which makes me wonder: when DOES printf("%s", char*) stop printing chars?
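    The short answer: %s prints bytes until the first '\0'. "Hello\n" occupies seven bytes once the terminator is counted, so a six-byte copy drops the NUL and printf keeps walking into the padding. A minimal sketch, with plain malloc standing in for my_malloc:

        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>

        int main(void)
        {
            /* "Hello\n" is 6 visible chars plus a terminating '\0': 7 bytes */
            char *ptr = (char *)malloc(7);
            if (!ptr)
                return 1;
            memcpy(ptr, "Hello\n", 7);   /* copy the NUL too */
            printf("%s", ptr);           /* stops at the NUL, no garbage */
            free(ptr);
            return 0;
        }

    The "test\nd" literal printed fully because string literals carry an implicit '\0' after the d.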

    Read the article

  • Ubuntu 12.10 wake on lan not working with Realtek 8139

    - by f.cipriani
    My PC doesn't wake up when receiving a magic packet from a PC connected to the same router. ethtool:

        fcipriani@ubuntu:~$ sudo ethtool eth0
        Settings for eth0:
                Supported ports: [ TP MII ]
                Supported link modes:   10baseT/Half 10baseT/Full
                                        100baseT/Half 100baseT/Full
                Supported pause frame use: No
                Supports auto-negotiation: Yes
                Advertised link modes:  10baseT/Half 10baseT/Full
                                        100baseT/Half 100baseT/Full
                Advertised pause frame use: No
                Advertised auto-negotiation: Yes
                Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                                     100baseT/Half 100baseT/Full
                Link partner advertised pause frame use: No
                Link partner advertised auto-negotiation: Yes
                Speed: 100Mb/s
                Duplex: Full
                Port: MII
                PHYAD: 32
                Transceiver: internal
                Auto-negotiation: on
                Supports Wake-on: pumbg
                Wake-on: g
                Current message level: 0x00000007 (7)
                                       drv probe link
                Link detected: yes

    I have enabled all the wake-up features in my BIOS, and I have verified the magic packet gets to the PC. I suspect the main problem is that the NIC light is completely turned off after the shutdown, but even after spending a lot of time researching I can't work out whether this is a limit of my network card, my mobo, or something in the OS which needs to be configured correctly in order to leave the NIC in standby mode with the light flashing. The NIC is a Realtek 8139, the motherboard an Asus P5L13L-X.

    Read the article

  • bash-completion for xelatex

    - by andreas-h
    I'm on Ubuntu 12.04. When using bash as my shell, I can just type latex my_do <TAB> to compile a file my_document.tex, as bash_completion does the auto-complete. However, this auto-completion does not work for the xelatex executable. So I would like to add the same auto-complete functionality for xelatex as exists for latex.

    I could find that the latex auto-complete feature comes from the file /etc/bash_completion, where I could find the line:

        complete -f -X '!*.@(?(la)tex|texi|dtx|ins|ltx)' tex latex slitex jadetex pdfjadetex pdftex pdflatex texi2dvi

    Now I could of course just add xelatex to that line and everything is fine. However, I'm wondering if I could instead put a file in /etc/bash_completion.d, as this would leave the system file untouched. Unfortunately, I'm at a complete loss about the syntax -- but maybe someone can help me here?

    Read the article

  • Accessing structure elements using pointers

    - by Arun Nadesh
    Hi everybody, greetings! I was surprised when the following program did not crash:

        typedef struct _x {
            int a;
            char b;
            int c;
        } x;

        main()
        {
            x *ptr = 0;
            char *d = &ptr->b;
        }

    As per my understanding, the -> operator has higher precedence than the & operator, so I expected the program to crash at the statement below, where we try to dereference the NULL pointer ptr:

        char *d = &ptr->b;

    But the statement &ptr->b evaluates to a valid address. Could somebody please explain where I'm wrong? Thanks & Regards, Arun
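    A sketch of why there is no crash: &ptr->b never loads anything from memory. The compiler only adds the offset of b to the base address (here 0), which is exactly how the offsetof macro was classically implemented. It is still formally undefined behavior on a null pointer; it just happens not to fault:

        #include <cstdio>
        #include <cstddef>

        struct X { int a; char b; int c; };

        int main()
        {
            X *ptr = 0;
            // Address arithmetic only: base (0) + offset of b. No load occurs,
            // so nothing dereferences the null pointer (though this is still
            // formally undefined behavior).
            char *d = &ptr->b;
            std::printf("d = %p (the offset of b)\n", (void *)d);
            std::printf("offsetof(X, b) = %zu\n", offsetof(X, b));  // the sanctioned way
            return 0;
        }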

    Read the article

  • Compiled Haskell libraries with FFI imports are invalid when imported into GHCI

    - by John Millikin
    I am using GHC 6.12.1, on Ubuntu 10.04. When I try to use the FFI syntax for static storage, only modules running in interpreted mode (i.e. GHCI) work properly. Compiled modules have invalid pointers, and do not work. I'd like to know whether anybody can reproduce the problem, whether this is an error in my code or in GHC, and (if the latter) whether it's a known issue.

    I'm using sys_siglist because it's present in a standard library on my system, but I don't believe the actual storage used matters (I discovered this while writing a binding to libidn). If it helps, sys_siglist is defined in <signal.h> as:

        extern __const char *__const sys_siglist[_NSIG];

    I thought this type might be the problem, so I also tried wrapping it in a plain C procedure:

        #include <stdio.h>

        const char **test_ffi_import()
        {
            printf("C think sys_siglist = %X\n", sys_siglist);
            return sys_siglist;
        }

    However, importing that doesn't change the result, and the printf() call prints the same pointer value as show siglist_a. My suspicion is that it's something to do with static and dynamic library loading.

    Update: somebody in #haskell suggested this might be 64-bit specific; if anybody tries to reproduce it, can you mention your architecture and whether it worked in a comment?

    Code as follows:

        -- A.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module A where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_a :: Ptr CString

        -- B.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module B where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_b :: Ptr CString

        -- Main.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module Main where

        import Foreign
        import Foreign.C

        import A
        import B

        foreign import ccall "&sys_siglist" siglist_main :: Ptr CString

        main = do
            putStrLn $ "siglist_a = " ++ show siglist_a
            putStrLn $ "siglist_b = " ++ show siglist_b
            putStrLn $ "siglist_main = " ++ show siglist_main
            peekSiglist "a " siglist_a
            peekSiglist "b " siglist_b
            peekSiglist "main" siglist_main

        peekSiglist name siglist = do
            ptr <- peekElemOff siglist 2
            str <- maybePeek peekCString ptr
            putStrLn $ "siglist_" ++ name ++ "[2] = " ++ show str

    I would expect something like this output, with all pointer values identical and valid:

        $ runhaskell Main.hs
        siglist_a = 0x00007f53a948fe00
        siglist_b = 0x00007f53a948fe00
        siglist_main = 0x00007f53a948fe00
        siglist_a [2] = Just "Interrupt"
        siglist_b [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    However, if I compile A.hs (with ghc -c A.hs), then the output changes to:

        $ runhaskell Main.hs
        siglist_a = 0x0000000040378918
        siglist_b = 0x00007fe7c029ce00
        siglist_main = 0x00007fe7c029ce00
        siglist_a [2] = Nothing
        siglist_b [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    Read the article

  • Why no switch on pointers?

    - by meeselet
    For instance:

        #include <stdio.h>

        void why_cant_we_switch_him(void *ptr)
        {
            switch (ptr) {
            case NULL:
                printf("NULL!\n");
                break;
            default:
                printf("%p!\n", ptr);
                break;
            }
        }

        int main(void)
        {
            void *foo = "toast";
            why_cant_we_switch_him(foo);
            return 0;
        }

        gcc test.c -o test
        test.c: In function 'why_cant_we_switch_him':
        test.c:5: error: switch quantity not an integer
        test.c:6: error: pointers are not permitted as case values

    Just curious. Is this a technical limitation?

    EDIT: People seem to think there is only one constant pointer expression. Is that really true, though? For instance, here is a common paradigm in Objective-C (it is really only C aside from NSString, id and nil, which are merely pointers, so it is still relevant; I just wanted to point out that there is, in fact, a common use for it, despite this being only a technical question):

        #include <stdio.h>
        #include <Foundation/Foundation.h>

        static NSString * const kMyConstantObject = @"Foo";

        void why_cant_we_switch_him(id ptr)
        {
            switch (ptr) {
            // (Note that we are comparing pointers, not string values.)
            case kMyConstantObject:
                printf("We found him!\n");
                break;
            case nil:
                printf("He appears to be nil (or NULL, whichever you prefer).\n");
                break;
            default:
                printf("%p!\n", ptr);
                break;
            }
        }

        int main(void)
        {
            NSString *foo = @"toast";
            why_cant_we_switch_him(foo);
            foo = kMyConstantObject;
            why_cant_we_switch_him(foo);
            return 0;
        }

        gcc test.c -o test -framework Foundation
        test.c: In function 'why_cant_we_switch_him':
        test.c:5: error: switch quantity not an integer
        test.c:6: error: pointers are not permitted as case values

    It appears that the reason is that switch only allows integral values (as the compiler error said). So I suppose a better question would be to ask why this is the case? (Though it is probably too late now.)
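    A sketch of the usual workaround: the controlling expression must have integer type and each case label must be an integer constant expression, so converting the pointer explicitly at least handles the NULL case (this assumes a platform where a null pointer converts to 0, which holds on mainstream targets). Arbitrary pointer constants such as kMyConstantObject still cannot be labels, because their addresses are only fixed at link time:

        #include <stdint.h>
        #include <stdio.h>

        void now_we_can_switch_him(void *ptr)
        {
            switch ((uintptr_t)ptr) {   /* integral controlling expression */
            case 0:                     /* 0 is an integer constant */
                printf("NULL!\n");
                break;
            default:
                printf("%p!\n", ptr);
                break;
            }
        }

        int main(void)
        {
            now_we_can_switch_him(NULL);
            now_we_can_switch_him((void *)"toast");
            return 0;
        }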

    Read the article

  • using one data template in another data template in WPF

    - by Sowmya
    Hi, I have two data templates, one of which is a subset of the other. The first is:

        <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                            xmlns:igEditors="http://infragistics.com/Editors"
                            xmlns:sys="clr-namespace:System;assembly=mscorlib"
                            xmlns:controls="clr-namespace:Client.UI.WPF;assembly=Client.UI.WPF"
                            xmlns:d="http://schemas.microsoft.com/expression/blend/2006" >

            <ResourceDictionary.MergedDictionaries>
                <ResourceDictionary Source="pack://application:,,,/Client.Resources.WPF.Styles;Component/Styles/CommonStyles.xaml"/>
            </ResourceDictionary.MergedDictionaries>

            <DataTemplate x:Key="XYZDataTemplate">
                <Grid x:Name="_rootGrid" DataContext="{Binding DataContext}"
                      HorizontalAlignment="Left" VerticalAlignment="Top">
                    <Grid.RowDefinitions>
                        <RowDefinition/>
                        <RowDefinition/>
                        <RowDefinition/>
                    </Grid.RowDefinitions>
                    <Grid.ColumnDefinitions>
                        <ColumnDefinition Width="Auto"/>
                        <ColumnDefinition Width="Auto"/>
                        <ColumnDefinition Width="Auto"/>
                        <ColumnDefinition Width="Auto"/>
                        <ColumnDefinition Width="Auto"/>
                    </Grid.ColumnDefinitions>
                    <controls:ValueDisplay Grid.Row="0" Grid.Column="0" LabelText="Build number"
                                           x:Name="buildNumber" HorizontalAlignment="Left" VerticalAlignment="Top"
                                           Width="120" Margin="5,10,0,0">
                        <igEditors:XamTextEditor />
                    </controls:ValueDisplay>
                    <controls:ValueDisplay Grid.Row="0" Grid.Column="1" LabelText="Tool version"
                                           x:Name="toolVersion" HorizontalAlignment="Left" VerticalAlignment="Top"
                                           Width="120" Margin="20,10,0,0">
                        <igEditors:XamTextEditor IsReadOnly="True"/>
                    </controls:ValueDisplay>
                </Grid>
            </DataTemplate>

    and the other is:

        <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                            xmlns:igEditors="http://infragistics.com/Editors"
                            xmlns:sys="clr-namespace:System;assembly=mscorlib"
                            xmlns:controls="clr-namespace:BHI.ULSS.Client.UI.WPF;assembly=ULSS.Client.UI.WPF"
                            xmlns:d="http://schemas.microsoft.com/expression/blend/2006" >

            <DataTemplate x:Key="ABCDataTemplate" >
                <Grid x:Name="_rootGrid" DataContext="{Binding DataContext}"
                      HorizontalAlignment="Left" VerticalAlignment="Top">
                    <Grid.RowDefinitions>
                        <RowDefinition/>
                        <RowDefinition/>
                        <RowDefinition/>
                    </Grid.RowDefinitions>
                    <Grid.ColumnDefinitions>
                        <ColumnDefinition Width="Auto"/>
                        <ColumnDefinition Width="Auto"/>
                        <ColumnDefinition Width="Auto"/>
                        <ColumnDefinition Width="Auto"/>
                        <ColumnDefinition Width="Auto"/>
                    </Grid.ColumnDefinitions>
                    <controls:ValueDisplay Grid.Row="0" Grid.Column="0" LabelText="Build number"
                                           x:Name="buildNumber" HorizontalAlignment="Left" VerticalAlignment="Top"
                                           Width="120" Margin="5,10,0,0">
                        <igEditors:XamTextEditor />
                    </controls:ValueDisplay>
                    <controls:ValueDisplay Grid.Row="0" Grid.Column="1" LabelText="Tool version"
                                           x:Name="toolVersion" HorizontalAlignment="Left" VerticalAlignment="Top"
                                           Width="120" Margin="20,10,0,0">
                        <igEditors:XamTextEditor IsReadOnly="True"/>
                    </controls:ValueDisplay>
                    <controls:ValueDisplay Grid.Row="0" Grid.Column="2" LabelText="Size" ShowUnit="True"
                                           x:Name="size" HorizontalAlignment="Left" VerticalAlignment="Top"
                                           Width="120" Margin="20,10,0,0">
                        <igEditors:XamTextEditor/>
                    </controls:ValueDisplay>
                </Grid>
            </DataTemplate>

    XYZDataTemplate is a subset of ABCDataTemplate, as the first two fields in both templates are common, so I was wondering if it is possible to replace the redundant code in ABCDataTemplate with XYZDataTemplate, for maintainability?

    Could anyone please suggest whether this would be the right approach, and if so, how I can achieve it? Thanks in advance, Sowmya

    Read the article

  • Bumblebee [ERROR]Cannot access secondary GPU - error: [XORG]

    - by Lunchbox
    Though this may seem like a duplicate question, none of the suggestions I've seen have worked for me, however nearly all posters get good results. I'll start with hardware:

        Metabox W350ST notebook
        Intel Core i7 4700
        16GB RAM
        GTX 765M (with Optimus)
        128GB SSD
        1TB SSHD

    My initial error output when trying to optirun a game is:

        [ERROR]Cannot access secondary GPU - error: [XORG] (EE) NVIDIA(0): Failed to initialize the NVIDIA GPU at PCI:1:0:0. Please
        [133.973920] [ERROR]Aborting because fallback start is disabled.

    If anything else is needed to troubleshoot this just let me know. Adding bumblebee.conf:

        # Configuration file for Bumblebee. Values should **not** be put between quotes

        ## Server options. Any change made in this section will need a server restart
        # to take effect.
        [bumblebeed]
        # The secondary Xorg server DISPLAY number
        VirtualDisplay=:8
        # Should the unused Xorg server be kept running? Set this to true if waiting
        # for X to be ready is too long and don't need power management at all.
        KeepUnusedXServer=false
        # The name of the Bumbleblee server group name (GID name)
        ServerGroup=bumblebee
        # Card power state at exit. Set to false if the card shoud be ON when Bumblebee
        # server exits.
        TurnCardOffAtExit=false
        # The default behavior of '-f' option on optirun. If set to "true", '-f' will
        # be ignored.
        NoEcoModeOverride=false
        # The Driver used by Bumblebee server. If this value is not set (or empty),
        # auto-detection is performed. The available drivers are nvidia and nouveau
        # (See also the driver-specific sections below)
        Driver=nvidia
        # Directory with a dummy config file to pass as a -configdir to secondary X
        XorgConfDir=/etc/bumblebee/xorg.conf.d

        ## Client options. Will take effect on the next optirun executed.
        [optirun]
        # Acceleration/ rendering bridge, possible values are auto, virtualgl and
        # primus.
        Bridge=auto
        # The method used for VirtualGL to transport frames between X servers.
        # Possible values are proxy, jpeg, rgb, xv and yuv.
        VGLTransport=proxy
        # List of paths which are searched for the primus libGL.so.1 when using
        # the primus bridge
        PrimusLibraryPath=/usr/lib/x86_64-linux-gnu/primus:/usr/lib/i386-linux-gnu/primus
        # Should the program run under optirun even if Bumblebee server or nvidia card
        # is not available?
        AllowFallbackToIGC=false

        # Driver-specific settings are grouped under [driver-NAME]. The sections are
        # parsed if the Driver setting in [bumblebeed] is set to NAME (or if auto-
        # detection resolves to NAME).
        # PMMethod: method to use for saving power by disabling the nvidia card, valid
        # values are: auto - automatically detect which PM method to use
        #             bbswitch - new in BB 3, recommended if available
        #             switcheroo - vga_switcheroo method, use at your own risk
        #             none - disable PM completely
        # https://github.com/Bumblebee-Project/Bumblebee/wiki/Comparison-of-PM-methods

        ## Section with nvidia driver specific options, only parsed if Driver=nvidia
        [driver-nvidia]
        # Module name to load, defaults to Driver if empty or unset
        KernelDriver=nvidia
        PMMethod=auto
        # colon-separated path to the nvidia libraries
        LibraryPath=/usr/lib/nvidia-current:/usr/lib32/nvidia-current
        # comma-separated path of the directory containing nvidia_drv.so and the
        # default Xorg modules path
        XorgModulePath=/usr/lib/nvidia-current/xorg,/usr/lib/xorg/modules
        XorgConfFile=/etc/bumblebee/xorg.conf.nvidia

        ## Section with nouveau driver specific options, only parsed if Driver=nouveau
        [driver-nouveau]
        KernelDriver=nouveau
        PMMethod=auto
        XorgConfFile=/etc/bumblebee/xorg.conf.nouveau

    DRIVER VERSION - Output of jockey-text -l:

        nvidia_304_updates - nvidia_304_updates (Proprietary, Enabled, Not in use)

    Read the article

  • Game Patching Mac/PC

    - by Centurion Games
    Just wondering what types of solutions are available to handle patching of PC/Mac games that don't have any sort of auto-updater built into them. On Windows, do you just spin off some sort of new install shield for the game that includes the updated files, hope you can read a valid registry key to point to the right directory, and overwrite files? If so, how does that translate over to Mac, where the game is normally just distributed as a straight-up .app file? Is there a better approach than the above for an already-released product? (Assuming direct sales, and not through a marketplace that features auto-updating, like Steam.) Are there any off-the-shelf auto-updater libraries that could be easily integrated with a C/C++ code base even after a game has shipped, to make this a lot simpler, and that are cross-platform? Also, how do auto-updaters work with new OSs that want applications and files digitally signed?

    Read the article

  • How do I separate code into classes?

    - by Trycon
    I have this main class:

        package javagame;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.SlickException;
        import org.newdawn.slick.state.BasicGameState;
        import org.newdawn.slick.state.StateBasedGame;

        public class tests extends BasicGameState {

            public boolean render = false;
            tests1 test = new tests1();

            public tests(int test) {
                // TODO Auto-generated constructor stub
            }

            @Override
            public void init(GameContainer arg0, StateBasedGame arg1)
                    throws SlickException {
                // TODO Auto-generated method stub
            }

            @Override
            public void render(GameContainer arg0, StateBasedGame arg1, Graphics g)
                    throws SlickException {
                // TODO Auto-generated method stub
                if (render == true) {
                    g.drawString("Hello", 100, 100);
                }
            }

            @Override
            public void update(GameContainer gc, StateBasedGame s, int delta)
                    throws SlickException {
                // TODO Auto-generated method stub
                test.render = render;
                test.update(gc, s, delta);
            }

            @Override
            public int getID() {
                // TODO Auto-generated method stub
                return 1000;
            }
        }

    and its sub-class:

        package javagame;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Input;
        import org.newdawn.slick.state.StateBasedGame;

        public class tests1 {

            public boolean render;

            public void update(GameContainer gc, StateBasedGame s, int delta) {
                Input input = gc.getInput();
                if (input.isKeyPressed(Input.KEY_X)) {
                    render = true;
                }
            }
        }

    I was looking for a way to avoid putting too much code in one class. I'm new to Java. When I try running my game and press X, nothing happens. How am I supposed to fix that?

    Read the article

  • template specialization of an auto_ptr<T>

    - by Chris Kaminski
    Maybe I'm overcomplicating things, but then again, I do sort of like clean interfaces. Let's say I want a specialization of auto_ptr for an fstream: I want a default fstream for the generic case, but allow a replacement pointer:

        template <>
        class auto_ptr<fstream> {
            static fstream myfStream;
            fstream* ptr;
        public:
            auto_ptr() {
                // set ptr to &myfStream;
            }
            void reset(fstream* newPtr) {
                // free old ptr if not the static one.
                ptr = newPtr;
            }
        };

    Would you consider something different or more elegant? And how would you keep something like the above from propagating outside this particular compilation unit? [The actual template is a boost::scoped_ptr.]

    EDIT: It's a contrived example; ignore the fstream. It's about providing a default instance of the object for an auto_ptr. I may not want to provide a specialized instance, but would like to keep the auto_ptr semantics for this static default object.

        class UserClass {
        public:
            auto_ptr<fstream> ptr;
            UserClass() { }
        };

    I may not provide a dynamic object at construction time; I still want it to have a meaningful default. Since I'm not looking at ownership-transfer semantics, it really shouldn't matter that my pointer class is pointing to a statically allocated object, no?
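    One alternative, sketched below (default_ptr is a hypothetical name, not a standard or boost class): instead of specializing auto_ptr, which silently changes its semantics for every translation unit that sees the specialization, a small dedicated wrapper can hold a non-owned static default and only delete pointers it actually owns.

        #include <fstream>

        template <typename T>
        class default_ptr {
            T *ptr_;                               // not owned while == &fallback()
            static T &fallback() { static T obj; return obj; }
        public:
            default_ptr() : ptr_(&fallback()) {}
            ~default_ptr() { if (ptr_ != &fallback()) delete ptr_; }
            void reset(T *p) {
                if (ptr_ != &fallback()) delete ptr_;
                ptr_ = p ? p : &fallback();        // null resets to the default
            }
            T &operator*() const { return *ptr_; }
            T *operator->() const { return ptr_; }
        private:
            default_ptr(const default_ptr &);      // non-copyable, like scoped_ptr
            default_ptr &operator=(const default_ptr &);
        };

        int main() {
            default_ptr<std::fstream> f;           // points at the shared default
            f->open("log.txt", std::ios::out);     // usable immediately
            f.reset(new std::fstream);             // now owns a replacement
            return 0;
        }

    Keeping the wrapper in an unnamed namespace inside the one .cpp file that needs it would also keep it from propagating outside that compilation unit.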

    Read the article

  • c++-to-python swig caused memory leak! Related to Py_BuildValue and SWIG_NewPointerObj

    - by usfree74
    Hey gurus, I have the following SWIG code that caused a memory leak:

        PyObject* FindBestMatch(const Bar& fp) {
            Foo* ptr(new Foo());
            float match;
            // call a function to fill the foo pointer
            return Py_BuildValue(
                "(fO)", match,
                SWIG_NewPointerObj(ptr, SWIGTYPE_p_Foo, 0 /* own */));
        }

    I figured that ptr is not freed properly. So I did the following:

        PyObject* FindBestMatch(const Bar& fp) {
            Foo* ptr(new Foo());
            float match;
            // call a function to fill the foo pointer

            // 1 means pass the ownership to Python
            PyObject *o = SWIG_NewPointerObj(ptr, SWIGTYPE_p_Foo, 1 /* own */);
            PyObject *result = Py_BuildValue("(fO)", match, o);
            Py_XDECREF(o);
            return result;
        }

    But I am not very sure whether this will cause memory corruption. Here, Py_XDECREF(o) will decrease the ref count, which can free the memory used by object "o". But o is part of the return value "result". Freeing "o" could corrupt the data, I guess? I tried my change: it works fine and the caller (Python code) does see the expected data, but this could be because nobody else has overwritten that memory area yet. So what's the right way to deal with memory management in the above code? I searched the SWIG docs, but don't see a very concrete description. Please help! Thanks, xin
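    For reference, a sketch of the pattern the Python C API provides for exactly this case: the "O" format in Py_BuildValue increments the object's refcount, so the tuple holds its own reference and the Py_XDECREF above is safe rather than corrupting; the "N" format instead steals the reference and avoids the extra bookkeeping entirely.

        // Sketch only: assumes the usual SWIG-generated declarations
        // (SWIG_NewPointerObj, SWIGTYPE_p_Foo, Bar, Foo) are in scope.
        PyObject* FindBestMatch(const Bar& fp) {
            Foo* ptr = new Foo();
            float match = 0.0f;
            // ... fill ptr and match ...

            // "N" steals the reference returned by SWIG_NewPointerObj, so the
            // tuple becomes the sole owner and no Py_XDECREF is needed.
            return Py_BuildValue("(fN)", match,
                                 SWIG_NewPointerObj(ptr, SWIGTYPE_p_Foo, 1 /* own */));
        }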

    Read the article

  • C++ converting binary(P5) image to ascii(P2) image (.pgm)

    - by tubby
    I am writing a simple program to convert grayscale binary (P5) to grayscale ascii (P2) but am having trouble reading in the binary and converting it to int.

        #include <iostream>
        #include <fstream>
        #include <sstream>
        using namespace::std;

        int usage(char* arg) {
            // exit program
            cout << arg << ": Error" << endl;
            return -1;
        }

        int main(int argc, char* argv[]) {
            int rows, cols, size, greylevels;
            string filetype;

            // open stream in binary mode
            ifstream istr(argv[1], ios::in | ios::binary);
            if(istr.fail()) return usage(argv[1]);

            // parse header
            istr >> filetype >> rows >> cols >> greylevels;
            size = rows * cols;

            // check data
            cout << "filetype: " << filetype << endl;
            cout << "rows: " << rows << endl;
            cout << "cols: " << cols << endl;
            cout << "greylevels: " << greylevels << endl;
            cout << "size: " << size << endl;

            // parse data values
            int* data = new int[size];
            int fail_tracker = 0; // find which pixel failing on
            for(int* ptr = data; ptr < data+size; ptr++) {
                char t_ch;
                // read in binary char
                istr.read(&t_ch, sizeof(char));
                // convert to integer
                int t_data = static_cast<int>(t_ch);
                // check if legal pixel
                if(t_data < 0 || t_data > greylevels) {
                    cout << "Failed on pixel: " << fail_tracker << endl;
                    cout << "Pixel value: " << t_data << endl;
                    return usage(argv[1]);
                }
                // if passes add value to data array
                *ptr = t_data;
                fail_tracker++;
            }

            // close the stream
            istr.close();

            // write a new P2 binary ascii image
            ofstream ostr("greyscale_ascii_version.pgm");
            // write header
            ostr << "P2 " << rows << cols << greylevels << endl;
            // write data
            int line_ctr = 0;
            for(int* ptr = data; ptr < data+size; ptr++) {
                // print pixel value
                ostr << *ptr << " ";
                // endl every ~20 pixels for some readability
                if(++line_ctr % 20 == 0) ostr << endl;
            }
            ostr.close();

            // clean up
            delete [] data;
            return 0;
        }

    sample image - Pulled this from an old post. Removed the comment within the image file as I am not worried about this functionality now. When compiled with g++ I get output:

        $> ./a.out a.pgm
        filetype: P5
        rows: 1024
        cols: 768
        greylevels: 255
        size: 786432
        Failed on pixel: 1
        Pixel value: -110
        a.pgm: Error

    The image is a little duck and there's no way the pixel value can be -110... where am I going wrong? Thanks.
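    Two likely culprits, sketched below as a hypothetical patch: plain char is signed on x86, so any raster byte above 127 reads back negative (e.g. -110), and after operator>> parses the header, exactly one whitespace byte still sits before the raster and gets consumed by read() as if it were pixel data. Reading into unsigned char and eating that separator byte addresses both (note also that PGM headers store width before height):

        #include <fstream>
        #include <iostream>
        #include <string>

        int main(int argc, char* argv[]) {
            if (argc < 2) return -1;
            std::ifstream istr(argv[1], std::ios::in | std::ios::binary);

            std::string filetype;
            int cols, rows, greylevels;
            istr >> filetype >> cols >> rows >> greylevels; // width, then height
            istr.get();                  // consume the single separator byte

            unsigned char t_ch;          // 0..255, never negative
            istr.read(reinterpret_cast<char*>(&t_ch), 1);
            std::cout << "first pixel: " << static_cast<int>(t_ch) << std::endl;

            // the P2 header also needs separators between its fields:
            // ostr << "P2\n" << cols << " " << rows << "\n" << greylevels << "\n";
            return 0;
        }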

    Read the article

  • Update to Alert on Java Runtime Environment (JRE) for EBS end-users on Windows

    - by user793553
    To ensure that Java users remain on a secure version, Windows systems that rely on auto-update will be auto-updated from JRE 6 to JRE 7. Until E-Business Suite is certified with JRE 7, EBS users should not rely on the Windows auto-update mechanism for their client machines and should manually keep the JRE up to date with the latest version of JRE 6 until further notice.   Click here for more details and for instructions on how to get the latest version of JRE 6  

    Read the article

  • Wake-on-lan under Ubuntu 12.04

    - by iUngi
    I would like to set up wake-on-lan; the two PCs are connected through a switch. Here is the configuration of eth0 (in the BIOS I couldn't find any information regarding wake-on-lan):

        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Advertised pause frame use: Symmetric Receive-only
        Advertised auto-negotiation: Yes
        Link partner advertised link modes:  10baseT/Half 10baseT/Full
                                             100baseT/Half 100baseT/Full
                                             1000baseT/Full
        Link partner advertised pause frame use: Symmetric
        Link partner advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000033 (51)
                               drv probe ifdown ifup
        Link detected: yes

    After I shut down the PC, I used different tools to send out the magic packet, but nothing happens. Any suggestions?

    Read the article

  • Dangling pointer

    - by viswanathan
    Does this piece of code lead to a dangling pointer? My guess is no.

        class Sample {
        public:
            int *ptr;
            Sample(int i) {
                ptr = new int(i);
            }
            ~Sample() {
                delete ptr;
            }
            void PrintVal() {
                cout << "The value is " << *ptr;
            }
        };

        void SomeFunc(Sample x) {
            cout << "Say i am in someFunc " << endl;
        }

        int main() {
            Sample s1 = 10;
            SomeFunc(s1);
            s1.PrintVal();
        }
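    It does, as a sketch of the object lifetimes shows: SomeFunc takes Sample by value, the compiler-generated copy constructor duplicates the raw ptr, and the copy's destructor deletes it when SomeFunc returns. s1.PrintVal() then reads freed memory, and s1's destructor deletes the same pointer a second time. Following the rule of three is one fix (passing by const reference is another):

        #include <iostream>
        #include <algorithm>

        class Sample {
            int *ptr;
        public:
            explicit Sample(int i) : ptr(new int(i)) {}
            Sample(const Sample &o) : ptr(new int(*o.ptr)) {}  // deep copy
            Sample &operator=(Sample o) {                      // copy-and-swap
                std::swap(ptr, o.ptr);
                return *this;
            }
            ~Sample() { delete ptr; }
            void PrintVal() const {
                std::cout << "The value is " << *ptr << std::endl;
            }
        };

        void SomeFunc(Sample x) {
            std::cout << "Say i am in someFunc" << std::endl;
        }   // x's destructor now deletes x's own copy, not s1's pointer

        int main() {
            Sample s1(10);
            SomeFunc(s1);
            s1.PrintVal();   // safe: s1.ptr was never shared
            return 0;
        }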

    Read the article

  • How to access class member from bison action

    - by yodhevauhe
    I am calling yyparse from a member function. How can I access the member variables/functions from the bison actions? I am currently doing this:

        %{
        #include "myclass.h"
        #include "parse.tab.hh"

        MyClass *ptr = NULL;

        void MyClass::evaluate(string expression)
        {
            ptr = this;
            yy_scan_string(expression.c_str());
            yyparse();
        }
        %}

        %%

        EXPR : EXPR PLUS EXPR   { $$ = ptr->memberFunction("+", $1, $3); }

    Read the article

  • Problems with passing an anonymous temporary function-object to a templatized constructor.

    - by Akanksh
    I am trying to attach a function-object to be called on destruction of a templatized class. However, I cannot seem to pass the function-object as a temporary. The warning I get (if I comment out the line xi.data = 5;) is:

        warning C4930: 'X<T> xi2(writer (__cdecl *)(void))': prototyped function not called (was a variable definition intended?)
                with
                [
                    T=int
                ]

    and if I try to use the constructed object, I get a compilation error saying:

        error C2228: left of '.data' must have class/struct/union

    I apologize for the lengthy piece of code, but I think all the components need to be visible to assess the situation.

        template<typename T>
        struct Base {
            virtual void run( T& ){}
            virtual ~Base(){}
        };

        template<typename T, typename D>
        struct Derived : public Base<T> {
            virtual void run( T& t ) {
                D d;
                d(t);
            }
        };

        template<typename T>
        struct X {
            template<typename R>
            X(const R& r) {
                std::cout << "X(R)" << std::endl;
                ptr = new Derived<T,R>();
            }
            X():ptr(0) {
                std::cout << "X()" << std::endl;
            }
            ~X() {
                if(ptr) {
                    ptr->run(data);
                    delete ptr;
                } else {
                    std::cout << "no ptr" << std::endl;
                }
            }
            Base<T>* ptr;
            T data;
        };

        struct writer {
            template<typename T>
            void operator()( const T& i ) {
                std::cout << "T : " << i << std::endl;
            }
        };

        int main() {
            {
                writer w;
                X<int> xi2(w);
                //X<int> xi2(writer()); //This does not work!
                xi2.data = 15;
            }
            return 0;
        };

    The reason I am trying this out is so that I can "somehow" attach function-object types to the object without keeping an instance of the function-object itself within the class. Thus when I create an object of class X, I do not have to keep an object of class writer within it, but only a pointer to Base<T> (I'm not sure if I need the <T> here, but for now it's there). The problem is that I seem to have to create an object of writer and then pass it to the constructor of X, rather than calling it like X<int> xi(writer()). I might be missing something completely stupid and obvious here; any suggestions?
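    The symptoms match C++'s "most vexing parse", sketched below on a reduced version of the class: X<int> xi2(writer()); is grammatically a declaration of a function named xi2 taking a pointer to a function returning writer, which is precisely what warning C4930 describes, and why .data then fails (xi2 names a function, not an object). An extra pair of parentheses around the argument forces an object definition:

        #include <iostream>

        struct writer {
            template <typename T>
            void operator()(const T &i) { std::cout << "T : " << i << std::endl; }
        };

        template <typename T>
        struct X {
            template <typename R>
            X(const R &) { std::cout << "X(R)" << std::endl; }
            T data;
        };

        int main() {
            // X<int> xi2(writer());   // parsed as a function declaration
            X<int> xi2((writer()));    // extra parens: unambiguously an object
            xi2.data = 15;             // now compiles
            // In C++11, brace initialization avoids the ambiguity entirely:
            //     X<int> xi3{writer()};
            return 0;
        }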

    Read the article
