Search Results



  • Broadcom 5722 NIC not installed on Ubuntu Server, although driver present

    - by Bastien
    Hello, I just installed Ubuntu Server 10.04 LTS, running kernel 2.6.32-24-server, on a brand new Dell T110 server, supposedly fully compatible with Ubuntu Server. I have two NICs: one onboard, the other an additional PCI card. Both are Broadcom NetXtreme 5722. On the first boot of the system I could see both cards as eth0 and eth1 (with ifconfig). I configured eth0 with a static IP (as planned) and did not configure eth1. After rebooting, one of the two NICs "disappeared": it does not appear in ifconfig at all. The one that disappeared is the onboard one.

    I investigated a bit and found the following things. The card is SEEN, but not "installed"; it appears as UNCLAIMED in lshw:

        *-network UNCLAIMED
             description: Ethernet controller
             product: NetXtreme BCM5722 Gigabit Ethernet PCI Express
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:04:00.0
             version: 00
             width: 64 bits
             clock: 33MHz
             capabilities: pm vpd msi pciexpress cap_list
             configuration: latency=0
             resources: memory:df9f0000-df9fffff
        *-network
             description: Ethernet interface
             product: NetXtreme BCM5722 Gigabit Ethernet PCI Express
             vendor: Broadcom Corporation
             physical id: 0
             bus info: pci@0000:05:00.0
             logical name: eth0
             version: 00
             serial: 00:10:18:60:23:64
             size: 100MB/s
             capacity: 1GB/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.102 duplex=full firmware=5722-v3.09 ip=10.129.167.25 latency=0 link=yes multicast=yes port=twisted pair speed=100MB/s
             resources: irq:35 memory:dfaf0000-dfafffff

    So I checked my dmesg and found a few strange lines showing there actually is a problem bringing up this card:

        [    3.737506] tg3: Could not obtain valid ethernet address, aborting.
        [    3.737527] tg3 0000:04:00.0: PCI INT A disabled
        [    3.737535] tg3: probe of 0000:04:00.0 failed with error -22
        [    3.737553] alloc irq_desc for 17 on node -1
        [    3.737555] alloc kstat_irqs on node -1
        [    3.737560] tg3 0000:05:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
        [    3.737566] tg3 0000:05:00.0: setting latency timer to 64
        [    3.793529] eth0: Tigon3 [partno(BCM95722A2202G) rev a200] (PCI Express) MAC address 00:10:18:60:23:64
        [    3.793532] eth0: attached PHY is 5722/5756 (10/100/1000Base-T Ethernet) (WireSpeed[1])
        [    3.793534] eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] TSOcap[1]
        [    3.793536] eth0: dma_rwctrl[76180000] dma_mask[64-bit]

    That shows that one NIC is recognized and the other is not. I researched a bit more with lspci -v:

        04:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
                Subsystem: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
                Flags: fast devsel, IRQ 16
                Memory at df9f0000 (64-bit, non-prefetchable) [size=64K]
                Capabilities: [48] Power Management version 3
                Capabilities: [50] Vital Product Data <?>
                Capabilities: [58] Vendor Specific Information <?>
                Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
                Capabilities: [d0] Express Endpoint, MSI 00
                Capabilities: [100] Advanced Error Reporting <?>
                Capabilities: [13c] Virtual Channel <?>
                Capabilities: [160] Device Serial Number 00-00-00-fe-ff-00-00-00
                Kernel modules: tg3

        05:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
                Subsystem: Broadcom Corporation NetXtreme BCM5722 Gigabit Ethernet PCI Express
                Flags: bus master, fast devsel, latency 0, IRQ 35
                Memory at dfaf0000 (64-bit, non-prefetchable) [size=64K]
                Expansion ROM at <ignored> [disabled]
                Capabilities: [48] Power Management version 3
                Capabilities: [50] Vital Product Data <?>
                Capabilities: [58] Vendor Specific Information <?>
                Capabilities: [e8] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable+
                Capabilities: [d0] Express Endpoint, MSI 00
                Capabilities: [100] Advanced Error Reporting <?>
                Capabilities: [13c] Virtual Channel <?>
                Capabilities: [160] Device Serial Number 64-23-60-fe-ff-18-10-00
                Capabilities: [16c] Power Budgeting <?>
                Kernel driver in use: tg3
                Kernel modules: tg3

    Here I could see that the failing card's device serial number is 00-00-00-fe-ff-00-00-00 — apparently encoding an all-zero MAC address — which, according to some forum posts on several websites, could be the issue. I've researched everything I could on the net and found several people having slightly comparable issues, but they usually involve different hardware and do not provide a proper explanation or solution. I would appreciate it if anyone around here has some info to share! Thanks.


  • OpenVPN issue with Linux

    - by catsy
    So I've tried to set up OpenVPN. I followed a guide, but it gets stuck at "Initialization Sequence Completed" with no connection, and I can't find any working solution. Here's the log:

        Sun Sep 23 19:14:32 2012 OpenVPN 2.1.0 i486-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Jul 20 2010
        Enter Auth Username:pumpedup
        Enter Auth Password:
        Sun Sep 23 19:14:37 2012 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info.
        Sun Sep 23 19:14:37 2012 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
        Sun Sep 23 19:14:37 2012 LZO compression initialized
        Sun Sep 23 19:14:37 2012 Control Channel MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ]
        Sun Sep 23 19:14:38 2012 Data Channel MTU parms [ L:1542 D:1450 EF:42 EB:135 ET:0 EL:0 AF:3/1 ]
        Sun Sep 23 19:14:38 2012 Local Options hash (VER=V4): '41690919'
        Sun Sep 23 19:14:38 2012 Expected Remote Options hash (VER=V4): '530fdded'
        Sun Sep 23 19:14:38 2012 Socket Buffers: R=[163840-131072] S=[163840-131072]
        Sun Sep 23 19:14:38 2012 UDPv4 link local: [undef]
        Sun Sep 23 19:14:38 2012 UDPv4 link remote: [AF_INET]192.162.102.162:1194
        Sun Sep 23 19:14:38 2012 TLS: Initial packet from [AF_INET]192.162.102.162:1194, sid=87a95723 a6d7b7f9
        Sun Sep 23 19:14:38 2012 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
        Sun Sep 23 19:14:38 2012 VERIFY OK: depth=1, /C=NV/ST=NV/L=nVPN/O=nVpn/CN=nVpn_CA/[email protected]
        Sun Sep 23 19:14:38 2012 VERIFY OK: depth=0, /C=NV/ST=NV/L=nVPN/O=nVpn/CN=server/[email protected]
        Sun Sep 23 19:14:39 2012 WARNING: 'link-mtu' is used inconsistently, local='link-mtu 1542', remote='link-mtu 6042'
        Sun Sep 23 19:14:39 2012 WARNING: 'tun-mtu' is used inconsistently, local='tun-mtu 1500', remote='tun-mtu 6000'
        Sun Sep 23 19:14:39 2012 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Sun Sep 23 19:14:39 2012 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Sun Sep 23 19:14:39 2012 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key
        Sun Sep 23 19:14:39 2012 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
        Sun Sep 23 19:14:39 2012 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA
        Sun Sep 23 19:14:39 2012 [server] Peer Connection Initiated with [AF_INET]192.162.102.162:1194
        Sun Sep 23 19:14:41 2012 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1)
        Sun Sep 23 19:14:41 2012 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,dhcp-option DNS 8.8.8.8,dhcp-option DNS 8.8.8.8,route 10.102.162.1,topology net30,ping 10,ping-restart 120,ifconfig 10.102.162.6 10.102.162.5'
        Sun Sep 23 19:14:41 2012 OPTIONS IMPORT: timers and/or timeouts modified
        Sun Sep 23 19:14:41 2012 OPTIONS IMPORT: --ifconfig/up options modified
        Sun Sep 23 19:14:41 2012 OPTIONS IMPORT: route options modified
        Sun Sep 23 19:14:41 2012 OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
        Sun Sep 23 19:14:41 2012 ROUTE default_gateway=10.0.2.2
        Sun Sep 23 19:14:41 2012 TUN/TAP device tun0 opened
        Sun Sep 23 19:14:41 2012 TUN/TAP TX queue length set to 100
        Sun Sep 23 19:14:41 2012 /sbin/ifconfig tun0 10.102.162.6 pointopoint 10.102.162.5 mtu 1500
        Sun Sep 23 19:14:41 2012 /sbin/route add -net 192.162.102.162 netmask 255.255.255.255 gw 10.0.2.2
        Sun Sep 23 19:14:41 2012 /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 10.102.162.5
        Sun Sep 23 19:14:41 2012 /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 10.102.162.5
        Sun Sep 23 19:14:41 2012 /sbin/route add -net 10.102.162.1 netmask 255.255.255.255 gw 10.102.162.5
        Sun Sep 23 19:14:41 2012 Initialization Sequence Completed


  • Should I upgrade to "Ubuntu 14.04 'Trusty Tahr'" from "Ubuntu 12.04 LTS" and what care do I need to take if I upgrade?

    - by PHPLover
    I'm basically a web developer (PHP developer) by profession. I mainly work with PHP, jQuery, AJAX, Smarty, HTML and CSS, and the Bootstrap front-end framework. I use IDEs/editors such as Sublime Text and NetBeans, and I use a Git repository as a versioning tool for my website development. I've been using Ubuntu 12.04 LTS on my machine for almost two years.

    My machine configuration is as follows:

        Memory    : 3.7 GiB
        Processor : Intel® Core™ i3 CPU M 370 @ 2.40GHz × 4
        Graphics  : Unknown
        OS type   : 64-bit
        Disk      : 64-bit

    The important software on my machine, which I use daily for my work:

        PHP          : PHP 5.3.10-1ubuntu3.13 with Suhosin-Patch (cli) (built: Jul 7 2014 18:54:55)
                       Copyright (c) 1997-2012 The PHP Group
                       Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies
        Apache       : /usr/sbin/apachectl: 87: ulimit: error setting limit (Operation not permitted)
                       Server version: Apache/2.2.22 (Ubuntu)
                       Server built: Jul 22 2014 14:35:25
                       Server's Module Magic Number: 20051115:30
                       Server loaded: APR 1.4.6, APR-Util 1.3.12
                       Compiled using: APR 1.4.6, APR-Util 1.3.12
                       Architecture: 64-bit
                       Server MPM: Prefork, threaded: no, forked: yes (variable process count)
                       Server compiled with....
                         -D APACHE_MPM_DIR="server/mpm/prefork"
                         -D APR_HAS_SENDFILE
                         -D APR_HAS_MMAP
                         -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
                         -D APR_USE_SYSVSEM_SERIALIZE
                         -D APR_USE_PTHREAD_SERIALIZE
                         -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
                         -D APR_HAS_OTHER_CHILD
                         -D AP_HAVE_RELIABLE_PIPED_LOGS
                         -D DYNAMIC_MODULE_LIMIT=128
                         -D HTTPD_ROOT="/etc/apache2"
                         -D SUEXEC_BIN="/usr/lib/apache2/suexec"
                         -D DEFAULT_PIDLOG="/var/run/apache2.pid"
                         -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
                         -D DEFAULT_LOCKFILE="/var/run/apache2/accept.lock"
                         -D DEFAULT_ERRORLOG="logs/error_log"
                         -D AP_TYPES_CONFIG_FILE="mime.types"
                         -D SERVER_CONFIG_FILE="apache2.conf"
        MySQL        : 5.5.38-0ubuntu0.12.04.1
        Smarty       : 2.6.18
        NetBeans     : NetBeans IDE 8.0 (Build 201403101706)
        Sublime Text : Version 2.0.2, Build 2221

    Yesterday a pop-up message suddenly appeared on my screen asking me to upgrade to Ubuntu 14.04 "Trusty Tahr". I'd be happy to upgrade, but the following are the issues I'm a little scared about and on which I need expert advice/help/suggestions:

    1. Will upgrading to Ubuntu 14.04 affect the software mentioned above? I mean, will I need to uninstall/re-install any of it?
    2. Do I really need to upgrade from Ubuntu 12.04 LTS now, and is it worth it?
    3. If I upgrade, what advantage will I get from a web developer's point of view?
    4. Will the upgrade be hassle-free, and will I be able to continue my ongoing work without any difficulties?
    5. Is Ubuntu 14.04 an LTS version, and if so, until when will it be supported?

    These are my five crucial queries. If you'd like any further explanation, please feel free to ask. Thanks for spending some of your valuable time reading and understanding my issue; any help/comment/suggestion/answer would be highly appreciated. A canonical, precise, to-the-point answer will be of great help to me as well as to other web developers using Ubuntu around the world. Waiting for your precious replies.


  • How to set default xrandr settings?

    - by echo-flow
    I'm trying to enable dual monitors in Ubuntu. This is working fine, but every time I do it, desktop effects is disabled. I think I've found the reason why: https://wiki.ubuntu.com/X/Config/Multihead/ says:

        As with the GNOME XRandR configuration method, setting Virtual to too large a value may result in a loss of hardware acceleration, and thus an inability to use Compiz and its desktop effects.

    When I use the GNOME monitor applet, or the Monitors configuration in the System menu, the default xrandr settings put the second monitor to the right of the first, and, as I found with this bug, for most monitors this creates a virtual desktop wider than the 2048-pixel horizontal maximum that hardware acceleration supports on my netbook hardware. So it seems that if I can change xrandr's defaults so that it places the new desktop above or below (north or south of) the main LVDS display, then hardware acceleration, and therefore Compiz, will continue to work. Can anyone tell me the easiest way to achieve this?

    UPDATE: I have confirmed that multihead support with desktop effects and hardware acceleration works when I move the external monitor display north of the main LVDS display. Right now this involves the following process: plugging in the external monitor; starting the Monitors configuration menu (desktop effects are disabled automatically, and all of the windows on my workspaces are moved to the first workspace); repositioning the external display so that it is north of the LVDS display and clicking Apply; and then navigating to the Appearance menu and telling it to re-enable desktop effects. Is there a simpler way to do this?

    UPDATE 2: OK, so I thought that perhaps the GNOME Monitors configuration screen was trying to be clever and might be disabling desktop effects. So I tried using the xrandr command-line client instead, as follows:

        xrandr --output VGA1 --above LVDS1

    When I do that, desktop effects are still disabled, and I need to re-enable them manually. This despite the fact that hardware acceleration works, and there is never a point where hardware acceleration stops working because the horizontal dimension of the virtual display is too large. So what program is trying to be clever and turning off desktop effects when it doesn't need to? And how do I make it stop? If there were a way to re-enable desktop effects from the command line, which I could then put into a script along with the proper xrandr invocation, I would accept that as a workaround.

    UPDATE 3: OK, here's my script to enable a second monitor with desktop effects. It might be evil, I'm not sure:

        # second-monitor.sh
        xrandr --output VGA1 --above LVDS1
        sleep 3
        compiz --replace &

    The sleep statement might not be necessary. If there's a better way to do this, please let me know.

    UPDATE 4: This is a Dell Mini Inspiron 1012. Here are my system specifications:

        lspci -vv
        00:02.0 VGA compatible controller: Intel Corporation N10 Family Integrated Graphics Controller
                Subsystem: Dell Device 041a
                Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
                Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Latency: 0
                Interrupt: pin A routed to IRQ 29
                Region 0: Memory at f0b00000 (32-bit, non-prefetchable) [size=512K]
                Region 1: I/O ports at 18d0 [size=8]
                Region 2: Memory at d0000000 (32-bit, prefetchable) [size=256M]
                Region 3: Memory at f0900000 (32-bit, non-prefetchable) [size=1M]
                Capabilities: <access denied>
                Kernel driver in use: i915
                Kernel modules: i915

        00:02.1 Display controller: Intel Corporation N10 Family Integrated Graphics Controller
                Subsystem: Dell Device 041a
                Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
                Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Latency: 0
                Region 0: Memory at f0b80000 (32-bit, non-prefetchable) [size=512K]
                Capabilities: <access denied>

        lsmod | grep i915
        i915                  287458  2
        drm_kms_helper         29329  1 i915
        drm                   162409  3 i915,drm_kms_helper
        intel_agp              24375  2 i915
        i2c_algo_bit            5028  1 i915
        video                  17375  1 i915


  • SharePoint 2013 Developer Ramp-Up - Part 1

    Today I had a little spare time during the morning hours, and after checking MVA I decided to query the available course material over at Pluralsight. Wow — thanks to fantastic corporations and acquisitions, there are lots of courses available, nicely split by SharePoint version as well as by particular interest group. Additionally, I found a couple of online blogs and community sites that I'm going to visit regularly during the next couple of weeks.

    Today's resource(s)

    Of course, I'm "all in" for the latest developer resources:

    - SharePoint 2013 Developer Ramp-Up - Part 1 - Understanding the Platform and Developer Experience
    - SharePoint 2013 Developer Ramp-Up - Part 2
    - SharePoint 2013 Developer Ramp-Up - Part 3
    - SharePoint 2013 Developer Ramp-Up - Part 4
    - SharePoint 2013 Developer Ramp-Up - Part 5
    - SharePoint 2013 Developer Ramp-Up - Part 6

    I guess I'm going to stick to the Pluralsight library until the end of this week. We'll see... Anyway, apart from the video material, I came across a couple of other websites which I'd like to list here, too — mainly for personal reference instead of bookmarking them in the browser; I'll use my own blog for that purpose:

    - Atkinson's SharePoint Blog
    - Düsseldorfer Jung Doerflers SharePoint Blog
    - SharePoint Community
    - Absolute SharePoint

    The links are in no preferential order; I added them as soon as I found them. Most probably I'm going to report about specific articles from those resources during this challenge. So stay tuned, and I'll try to provide more details on certain topics.

    Takeaway

    First contact with the 'real stuff', in order to get an idea about software development in Microsoft SharePoint and beyond. Unfortunately, and as already expected, the marketing department over at Microsoft seemed to have nothing better to do than to invent new names and baptise literally the same product with every release. Luckily, the release cycles between versions have been roughly three years: 2007, 2010, and 2013. Nonetheless, there will be a lot of version-specific issues to tackle during this learning phase, especially when it comes to historical expressions like 'WSS'*, as I had yesterday... It's going to be exciting and demanding to catch up with roughly 6-7 years of development and changes. Okay, let's face it.

    * WSS stands for Windows SharePoint Services 3.0, which forms the 'core engine' of SharePoint 2007.

    Part 1 of Andrew Connell's series on SharePoint 2013 for developers provides a brief history and overview of the various product names and their relation to the actual SharePoint version. I might create a cheat sheet or something comparable in order to reduce the level of confusion while reading through other material:

        SharePoint 2007 (aka SharePoint v3, aka SharePoint 12)
          - Windows SharePoint Services (WSS) 3.0
          - Microsoft Office SharePoint Server (MOSS) 2007
          - .NET Framework 3.0, 32-bit or 64-bit OS

        SharePoint 2010 (aka SharePoint v4, aka SharePoint 14)
          - Microsoft SharePoint Foundation (SPF) 2010
          - Microsoft SharePoint Server (SPS) 2010
          - .NET Framework 3.5 SP1, 64-bit OS only

        SharePoint 2013
          - Microsoft SharePoint Foundation (SPF) 2013
          - Microsoft SharePoint Server (SPS) 2013
          - .NET Framework 4.5, 64-bit OS only

    After this quick excursion it gets more interesting. SharePoint 2013 has a number of development practices and techniques under the hood, and depending on the task requirements it will be quite a decision process to choose the correct path. At the moment, the following two options seem to be my future fields of operation:

    - Client-Side Object Model (CSOM)
    - REST API and OData syntax

    As part of my job assignment, I see myself developing within Visual Studio 2012/2013. Most probably the client development in C# will use CSOM, but of course I'll keep an eye on the REST API, too. JavaScript has had quite a momentum for a while now, and it would be a shame to ignore that kind of opportunity and its possibilities.


  • How to get faster graphics in KVM? VNC is painfully slow with Haiku OS guest, Spice won't install and SDL doesn't work

    - by Don Quixote
    I've been coming up to speed on the Haiku operating system, an open-source clone of BeOS 5 Pro. I'm using an Apple MacBook Pro as my development machine. Apple's Boot Camp BIOS does not support more than four partitions on the internal hard drive. While I can set up extended and logical partitions, doing so will prevent any of the installed operating systems from booting. To run Haiku directly on the iron, I boot it off a USB stick. Using external storage is also helpful because I am perpetually out of filesystem space.

    While VirtualBox is documented to allow access to physical drives, I could not actually get it to work. Also, VirtualBox can only use one of the host CPU's cores. While VB guests can be configured for more than one CPU, they are only emulated. A full build of the Haiku OS takes 4.5 hours under VB. I had hoped to reduce build times by using KVM instead, but it's not working nearly as well as VirtualBox did. The Linux Kernel Virtual Machine is broken in all manner of fundamental ways as seen from Haiku. But I'm a coder; maybe I could contribute to fixing some of those problems.

    The first problem I've got is that Haiku's video in virt-manager is quite painfully slow. When I drag Haiku windows around the desktop, they lag quite far behind where my mouse is. It's quite difficult to move a window to a precise position on the screen — just imagine that the mouse was connected to the window title bar with a really stretchy spring. Haiku's mouse pointer also lags quite far behind where I have moved it.

    I found lots of Personal Package Archives that enable Spice for QEMU/KVM at the Ubuntu Personal Package Archives. I tried a few of the PPAs, but none of them worked; with one of them, the command "add-apt-repository" crashed with a traceback. There is a wiki page about Spice, but it says that it only works on 64-bit. My early-2006 MacBook Pro is 32-bit; its Apple model identifier is MacBookPro1,1, and these use Core Duos, NOT Core 2 Duos. I don't mind building a source deb for 32-bit if I can expect it to work. Is there some reason that Spice should be 64-bit only? Does it need features of the x86_64 instruction set architecture that x86 does not have?

    When I try using SDL from virt-manager, the configuration for Local SDL Window says "Xauth: /home/mike/.Xauthority". When I try to start my guest, virt-manager emits an error. When I googled the error message, the usual solution was to make ~/.Xauthority readable. However, .Xauthority does not exist in my home directory; instead I have a $XAUTHORITY environment variable. There is no way to configure SDL in virt-manager to use $XAUTHORITY instead of ~/.Xauthority, and it does not work to copy the value of $XAUTHORITY into the file.

    I am ready to scream, because I've spent five fscking days trying to make KVM work for Haiku development. There is a whole lot more that is broken than the slow video. All I really want to do for now is speed up my full builds of Haiku by using "jam -j2" to use both cores in my CPU. I may try Xen next, but the last time I monkeyed with Xen it was far, far more broken than I am finding KVM to be. Just for now, I would be satisfied if there were some way to use my USB stick as a drive in VirtualBox. VB does allow me to configure /dev/sdb as a drive, but it always causes a fatal error when I try to launch the guest.

    Thank you for any advice you can give me.


  • GameStateManagement and inputs not being recognized

    - by Dave Voyles
    EDIT: I've removed a bit of code from the input class to make this more readable, and updated my StartScreen class, which is now at the bottom. I have the same issues though, and they are explained in my comments at the bottom of this page. It won't let me paste my additional code here (the format comes out crazy), so I've linked to it on pastebin instead.

    I've been trying to implement the Microsoft-provided GameStateManagement sample with my game, but it has proven a bit difficult. Really, I'm using Oneksoft's Starter Kit, which uses the MS-provided sample, so they are identical except for my splash screen. I'm able to get the splash screen to launch, where it informs the player to press A to advance the screen, but it doesn't seem to accept any of my inputs.

    I've also added Console.WriteLine("Pressing A") under the IsMenuSelect method in Input.cs to verify that it is getting called, but for some reason it is constantly spamming my log rather than appearing only each time I press A. Not sure why this is happening.

    I have a bit too much code to post it all here, so I've attached a link to my .rar with my classes, but I'll also leave a bit here which I think may be applicable: https://www.dropbox.com/sh/6ek4uru2jc2ch0k/JTeBWN_3PQ

    What do you guys think the issue is?

        namespace Pong
        {
            public class Input
            {
                public const int MaxInputs = 4;

                public readonly KeyboardState[] CurrentKeyboardState;
                public readonly GamePadState[] CurrentGamePadState;
                public KeyboardState[] LastKeyboardState;
                public GamePadState[] LastGamePadState;
                public readonly bool[] GamePadWasConnected;

                public Input()
                {
                    // Get input state
                    CurrentKeyboardState = new KeyboardState[MaxInputs];
                    CurrentGamePadState = new GamePadState[MaxInputs];

                    // Preserving last states to check for isKeyUp events
                    LastKeyboardState = CurrentKeyboardState;
                    LastGamePadState = CurrentGamePadState;
                }

                /// <summary>
                /// Checks for a "menu select" input action.
                /// The controllingPlayer parameter specifies which player to read input for.
                /// If this is null, it will accept input from any player. When the action
                /// is detected, the output playerIndex reports which player pressed it.
                /// </summary>
                public bool IsMenuSelect(PlayerIndex? controllingPlayer, out PlayerIndex playerIndex)
                {
                    Console.WriteLine("Pressing A");

                    return IsNewKeyPress(Keys.Space, controllingPlayer, out playerIndex) ||
                           IsNewKeyPress(Keys.Enter, controllingPlayer, out playerIndex) ||
                           IsNewButtonPress(Buttons.A, controllingPlayer, out playerIndex) ||
                           IsNewButtonPress(Buttons.Start, controllingPlayer, out playerIndex);
                }

                /// <summary>
                /// Checks for a "menu cancel" input action.
                /// The controllingPlayer parameter specifies which player to read input for.
                /// If this is null, it will accept input from any player. When the action
                /// is detected, the output playerIndex reports which player pressed it.
                /// </summary>
                public bool IsMenuCancel(PlayerIndex? controllingPlayer, out PlayerIndex playerIndex)
                {
                    return IsNewKeyPress(Keys.Escape, controllingPlayer, out playerIndex) ||
                           IsNewButtonPress(Buttons.B, controllingPlayer, out playerIndex) ||
                           IsNewButtonPress(Buttons.Back, controllingPlayer, out playerIndex);
                }
            }
        }
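    For reference: IsMenuSelect is typically invoked every frame from the screen-update loop, so a WriteLine at the top of the method will print every frame whether or not a press occurred — the actual edge test lives in IsNewKeyPress, which compares the current input snapshot against the previous frame's copy. Note also that the constructor above assigns LastKeyboardState = CurrentKeyboardState, which in C# aliases the same array object, so "last" and "current" can never differ and no press will ever look new; that may be the culprit. Here is a minimal sketch of the edge-detection pattern itself (C++ for illustration; the names are hypothetical, not from the XNA sample):

        #include <cstdio>

        // Minimal frame-to-frame edge detection: a press counts as "new" only
        // when the key is down this frame and was up on the previous snapshot.
        // The previous snapshot must be an independent copy, not an alias.
        struct KeyEdge {
            bool wasDown = false;

            bool isNewPress(bool isDownNow) {
                bool newPress = isDownNow && !wasDown; // down now, up before
                wasDown = isDownNow;                   // roll the snapshot forward
                return newPress;
            }
        };

        int main() {
            KeyEdge a;
            bool frames[] = {false, true, true, false, true};
            for (bool down : frames)
                std::printf("%d", a.isNewPress(down)); // prints 01001
        }

    The sample does this per key and per player, but the invariant is the same: the previous snapshot has to be a real copy taken once per frame.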


  • How To Create Your Own x86 Operating System for Modern PC Computers

    - by mudge
    I'd like to create a new operating system for x86 PC computers. I'd like it to be 64-bit, but possibly able to run as 32-bit as well. I have these kinds of questions:

    1. What kinds of things do you start working on first? Knowing where to start in writing your own operating system seems to me to be a tricky subject, so I am interested in your input.

    2. Generally, how do you go about making your own 32-bit/64-bit operating system? Good resources that mention useful information about writing your own operating system for x86 computers are welcome; I don't care how old sources are, as long as they are still relevant and useful to what I am doing.

    3. I know that I will want the kernel to have drivers that access peripheral hardware directly. Where should I look for advice and documentation on programming and understanding the interface to the peripheral hardware the operating system will communicate with? I will need to understand how the operating system will receive input from and interact with keyboards, mice, monitors, hard drives, USB, etc. This is probably the area I know least about. I have the Intel instruction set manuals and have been getting more familiar with assembly programming, so the CPU side of things is what I know the most about.

    4. At this point I'm thinking that I'd like to implement the Linux system calls within my operating system, so that programs that run on Linux can run on it, and I want my operating system to use the ELF binary format. I wonder what obstacles I have to overcome to achieve this Linux compatibility. Are the main things implementing the system calls that Linux provides and using the ELF format? What else?

    5. I am also interested in people's thoughts about why it might not be a good idea to make your own operating system, and why it is a good idea.

    Thank you for any input.
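    To make the Linux-compatibility question concrete: "implementing the Linux system calls" means honoring the exact entry convention Linux binaries use — on x86-64, the syscall number goes in rax and the arguments in rdi, rsi, rdx (then r10, r8, r9). A small user-space sketch of that convention (an illustration that runs on Linux with GCC or Clang, showing the interface a compatible kernel would have to reproduce):

        #include <cstddef>

        // Direct x86-64 Linux syscall: write(2) is syscall number 1.
        // The syscall instruction clobbers rcx and r11 by convention.
        long sys_write(int fd, const void* buf, std::size_t len) {
            long ret;
            asm volatile("syscall"
                         : "=a"(ret)
                         : "a"(1), "D"(fd), "S"(buf), "d"(len)
                         : "rcx", "r11", "memory");
            return ret; // >= 0: bytes written; negative: -errno
        }

        int main() {
            sys_write(1, "hello from a raw syscall\n", 25);
        }

    A Linux-compatible kernel would dispatch on that same numbering and register layout, and its program loader would need to map ELF program headers the way execve does; dynamically linked programs additionally expect the dynamic linker and familiar paths such as /proc and /dev to exist.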


  • PyScripter rpyc error

    - by jf328
    PyScripter 2.5.3.0 x64, Python 2.7.7 (Anaconda 2.0.1), Windows 7.

    I was using PyScripter and EPD Python happily in 32-bit with no problems. I just changed to the 64-bit Anaconda version and re-installed everything, but now PyScripter cannot import rpyc. The failure occurs when it runs with the internal engine (no Anaconda); there is no such error in pure Python. Thanks very much! By the way, there is a similar SO post from a few years ago, but the answer there does not work.

        *** Python 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)] on win32. ***
        *** Internal Python engine is active ***
        >>> import rpyc
        Traceback (most recent call last):
          File "<interactive input>", line 1, in <module>
          File "C:\Anaconda\lib\site-packages\rpyc\__init__.py", line 44, in <module>
            from rpyc.core import (SocketStream, TunneledSocketStream, PipeStream, Channel,
          File "C:\Anaconda\lib\site-packages\rpyc\core\__init__.py", line 1, in <module>
            from rpyc.core.stream import SocketStream, TunneledSocketStream, PipeStream
          File "C:\Anaconda\lib\site-packages\rpyc\core\stream.py", line 7, in <module>
            import socket
          File "C:\Anaconda\Lib\socket.py", line 47, in <module>
            import _socket
        ImportError: DLL load failed: The specified procedure could not be found.

        C:\research>python
        Python 2.7.7 |Anaconda 2.0.1 (64-bit)| (default, Jun 11 2014, 10:40:02) [MSC v.1500 64bit (AMD64)] on win32
        Type "help", "copyright", "credits" or "license" for more information.
        Anaconda is brought to you by Continuum Analytics.
        Please check out: http://continuum.io/thanks and https://binstar.org
        >>> import rpyc
        >>>


  • Installing SQL Server 2005 on Windows 7 64-bit

    - by Mostafa
    Hi. For three days I have been trying to install SQL Server 2005 under Windows 7 64-bit on my machine. First let me tell you what I've done and what I've got so far:

    1. I installed Windows 7 64-bit on my computer.
    2. I tried to install SQL Server 2005 "Developer Edition":
       - On the "System Configuration Check" page I received two warnings, one for "IIS Feature Requirement" and another for "ASP.NET Version Registration Required".
         - I installed "Internet Information Services" from the "Turn Windows features on or off" section in Control Panel.
         - I enabled the 32-bit Reporting Services setting via Inetpub > AdminScripts > adsutil.vbs.
       - At this stage there were no warnings in the System Configuration Check.
    3. I installed SQL Server 2005 Developer Edition with all default settings.
    4. I installed SQL Server 2005 Service Pack 3 (64-bit).

    Now when I run Management Studio, there is no name in the "Server name" section. I typed my computer name, or ".", and I got this error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server.
        The server was not found or was not accessible. Verify that the instance name is correct and that
        SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 -
        Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2)

    I googled this error, and some people said to follow this instruction: Start > SQL Server 2005 > Configuration Tools > SQL Server Surface Area Configuration > Surface Area Configuration for Services and Connections. But I got this error:

        No SQL Server 2005 components were found on the specified computer. Either no components are
        installed, or you are not an administrator on this computer. (SQLSAC)

    I'm really tired because of this, and I don't know what's wrong. Some more information:

    - I have no additional software on my computer, like antivirus or a proxy.
    - I tried all the steps with "Standard Edition" as well — no difference.
    - My user is an administrator.
    - I tried all those steps more than five times, including re-installing Windows 7.

    Please help me, I'm losing all my hair.


  • SQL Compact error: Unable to load DLL 'sqlceme35.dll'. The specified module could not be found

    - by Ciaran Bruen
    Hi - I'm developing a WinForms application in Visual Studio 2008 (C#) that uses a SQL Server Compact 3.5 database on the client. The clients will most likely be 32-bit XP or Vista machines. I'm using a standard Windows Installer project that creates an .msi file and a setup.exe to install the app on a client machine. I'm new to SQL Compact, so I haven't had to distribute a client database like this before now.

    When I run the setup.exe (on a fresh Windows XP 32-bit with SP2 and IE 7) it installs fine, but when I run the app I get the error below:

        Unable to load DLL 'sqlceme35.dll'. The specified module could not be found.

    I spent a few hours searching for this error, but all I can find are issues relating to installing on 64-bit Windows — nothing relating to the normal 32-bit that I'm using. The installer copies all the dependent files it found into the specified install directory, including the System.Data.SqlServerCe.dll file (assembly version 3.5.1.0). The database file is in a directory called 'data' under the application directory, and its connection string is:

        <add name="Tickets.ieOutlet.Properties.Settings.TicketsLocalConnectionString"
             connectionString="Data Source=|DataDirectory|\data\TicketsLocal.sdf"
             providerName="Microsoft.SqlServerCe.Client.3.5" />

    Some questions I have:

    - Should the app be able to find the DLL if it's in the same directory, i.e. local to the app, or do I need to install it in the GAC? (If so, can I use the Windows Installer project to install a DLL in the GAC?)
    - Is there anything else I need to distribute with the app in order to use a SQL Compact database?
    - There are other DLLs as well, such as the MS interop assemblies for exporting data to Excel on the client. Do these need to be installed in the GAC, or will locating them in the application directory suffice?

    TIA, Ciaran.


  • 2-step user registration with Django

    - by David S
    I'm creating a website with Django and want a fairly common two-step user registration. What I mean by this is that the user fills in some basic user information plus some application-specific information (sort of like a coupon value). Upon submit, an email is sent to ensure the email address is valid. This email contains a link to click to "finish" the registration. When the link is clicked, the user is marked as validated, and they are directed to a new page where they can complete optional "user profile" type information. So, pretty basic stuff.

    I have done some research and found django-registration by James Bennett. I do know who James is and have seen him at PyCons and DjangoCons in the past. There are obviously very few people in the world who know Django better than James, so I know the quality of the code/app is good. But it almost seems like a bit of overkill. I've read through the docs and was a bit confused (maybe I'm just being a bit dense today). I believe that if I do use django-registration, I will need some custom forms, etc.

    Is there anything else out there I should evaluate? Or are there any good tutorials or videos on using django-registration? I've done a bit of googling but haven't found anything — I suspect it might be a case of a lot of very common words that don't really find what you are looking for (django user registration tutorial/example). Or is it just a case where it would be about as easy to build your own solution with Django forms, etc.?

    Here is the tech stack I'm using:

    - Python 2.7.2
    - Django 1.3.1
    - PostgreSQL 9.1
    - psycopg2 2.4.1
    - Twitter Bootstrap 2.0.2


  • Why can't nvcc find my Visual C++ installation?

    - by Jack Lloyd
    I'm running Windows 7 Pro x64 on a Core i5 with an NVIDIA 3100M, which is CUDA compatible. I've tried installing both the 32-bit and 64-bit CUDA toolkits from NVIDIA; unfortunately, with either of them I cannot compile anything. nvcc says:

        cannot find a supported cl version. Only MSVC 8.0 and MSVC 9.0 are supported

    I have the x86 and x86-64 compilers installed via the Windows 7 SDK (compiler version 15.00.30729.01 for both arches). Both compilers are operating correctly; I've built and tested C and C++ code using them. I've tried running nvcc from command shells set up for both 32-bit and 64-bit compilation, and using the -ccbin command-line option to point nvcc at the Visual C++ install directory.

    What is the right way of handling this setup? Is there some way to make nvcc be more verbose about what is going on? The -v flag isn't terribly helpful; ideally there would be some way to make it show what it is finding versus what it is expecting to find. Will this work better if I install Visual C++ Express instead? Or is only a commercial version of VC++ supported for use with CUDA?


  • Log off from Remote Desktop session does not close the session; the login screen appears again on Windows XP

    - by Santhosha
    Hi. As per a requirement, we have written a custom GINA, and I have observed one interesting behavior on Windows XP 32-bit (SP2). The customized GINA internally calls the default Windows GINA (msgina.dll) and shows one extra window, as per our requirement. I log in to the XP machine from my machine using Remote Desktop. After replacing the Windows GINA with the customized GINA, I tried to log off from the XP machine (where I am logged in via a Remote Desktop connection). The log off completes successfully (after showing "Saving your settings", "Closing network connections", etc.), and then I get the login screen that normally appears during logon — which is not expected, compared to other flavors of Windows.

    On other operating systems, such as Windows XP 64-bit and Windows 2003 32/64-bit, the Remote Desktop session closes after logging off, even after replacing the Windows GINA with the custom GINA. I tried installing the Novell GINA on Windows XP 32-bit, and I did not find any such issue with it. I also tried upgrading XP SP2 to SP3, and I still face the same issue.

    Has anyone faced such issues when working with Windows GINA? Thanks in advance, Santhosha K S


  • C & MinGW: Hello World gives me the error "programm too big to fit in memory"

    - by user1692088
    I'm new here. Here's my problem: I installed MinGW on my Windows 7 Home Premium 32-bit netbook with an Intel Atom CPU N550 (1.50GHz) and 2GB RAM. I made a file named hello.h and tried to compile it via CMD with the following command:

        gcc c:\workspace\c\helloworld\hello.h -o out.exe

    It compiles with no error, but when I try to run out.exe, it gives me the following error:

        program too big to fit in memory

    Things I have checked:

    - I have added C:\MinGW\bin to the Windows PATH variable.
    - I have googled for about an hour but, being a newbie, I can't really figure out what the problem is.
    - I have compiled the same code on my 64-bit machine: it compiles perfectly, but the result cannot be run there either (a 64-bit vs. 16-bit complaint).

    I'd really appreciate it if someone could figure out what the problem is. By the way, here's my hello.h:

        #include <stdio.h>

        int main(void) {
            printf("Hello, World\n");
        }

    ... That's it. Thanks for your replies. Cheers, Boris


  • Convert pre-IEEE-754 C++ floating-point numbers to/from C#

    - by Richard Kucia
    Before .NET, before math coprocessors, before IEEE 754, Microsoft defined its own bit pattern for floating-point numbers. Old versions of the C++ compiler happily used that definition. I am writing a C# app that needs to read/write such floating-point numbers in a file. How can I do the conversions between the two bit formats? I need conversion methods in both directions.

    This app is going to run in a PocketPC/WinCE environment, and changing the structure of the file is out of scope for this project.

    Is there a C++ compiler option that instructs it to use the old FP format? That would be ideal. I could then exchange data between the C# code and the C++ code using a null-terminated text string, and the C++ methods would be simple wrappers around the sprintf and atof functions.

    At the very least, I'm hoping someone can reply with the bit definitions for the old FP format, so I can put together a low-level bit-manipulation algorithm if necessary. Thanks.
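    For reference, the old format is generally identified as the Microsoft Binary Format (MBF). The commonly documented layout of an MBF single is: exponent byte (bias 128) in the high byte, then one sign bit, then a 23-bit mantissa with an implicit leading 1 immediately after the binary point. Under that assumption, the IEEE 754 biased exponent is the MBF exponent minus 2 and the mantissa carries straight over, so each direction is a few shifts and a re-bias. A hedged C++ sketch of both directions follows — verify the layout against real data before trusting it; the same masks and shifts port directly to C# via BitConverter. IEEE infinities, NaNs, and subnormals have no MBF encoding and are collapsed here:

        #include <cstdint>
        #include <cstdio>
        #include <cstring>

        // Assumed MBF single layout (little-endian word): bits 31..24 exponent
        // (bias 128, 0 means the value is zero), bit 23 sign, bits 22..0 mantissa
        // with an implicit leading 1 just after the binary point.
        float mbfToIeee(uint32_t mbf) {
            uint32_t exp = mbf >> 24;          // MBF biased exponent
            if (exp <= 2) return 0.0f;         // zero, or below IEEE normal range
            uint32_t sign = (mbf >> 23) & 1u;
            uint32_t mant = mbf & 0x007FFFFFu;
            // 0.1m * 2^(exp-128) == 1.m * 2^(exp-129), so IEEE (bias 127)
            // biased exponent = exp - 2; mantissa bits are unchanged.
            uint32_t ieee = (sign << 31) | ((exp - 2u) << 23) | mant;
            float out;
            std::memcpy(&out, &ieee, sizeof out);
            return out;
        }

        uint32_t ieeeToMbf(float f) {
            uint32_t ieee;
            std::memcpy(&ieee, &f, sizeof ieee);
            uint32_t exp = (ieee >> 23) & 0xFFu;
            if (exp == 0) return 0;            // zero/subnormal -> MBF zero
            // exp == 255 (Inf/NaN) and exp + 2 > 255 are unrepresentable in MBF.
            uint32_t sign = ieee >> 31;
            uint32_t mant = ieee & 0x007FFFFFu;
            return ((exp + 2u) << 24) | (sign << 23) | mant;
        }

        int main() {
            uint32_t mbf = ieeeToMbf(1.5f);
            std::printf("%08x -> %g\n", (unsigned)mbf, mbfToIeee(mbf)); // round-trips 1.5
        }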


  • Am I making the right choice in choosing Yii as my PHP Framework?

    - by Bara
    I am about to begin development of a new website and have been doing research on PHP frameworks. I'm not an advanced PHP developer, but I have been developing websites and apps (in ASP.NET) for a few years now. My website will be primarily AJAX-based (using jQuery), making lots of calls to web services. After some research, here's what I came up with:

    - CakePHP: I originally started developing in this but found it too complex. The fact that it forces you to learn and use so much new material just to get started was a bit daunting, so I put it aside for the time being.
    - Zend: The performance of the framework leaves me a bit skeptical, but I hear it has great support for creating web services. I also hear it is a bit complex.
    - CodeIgniter: No real reason for not using this one. Based on what I've read, CodeIgniter and Yii are very similar, but Yii is a bit faster and doesn't carry unneeded code for PHP 4 (I plan on developing exclusively in PHP 5).

    As for Yii, the only things that scare me about it are that it is newer than the other frameworks, so it has a smaller community, and that it doesn't seem to have much web service support (only SOAP, from my understanding) as opposed to Zend.

    So my questions come down to:

    1. Should these things worry me? (the smaller community, the weaker web service support)
    2. Is there anything else I should look into?
    3. Is my choice of Yii over the other frameworks OK for a primarily AJAX-based web app?

    Bara


  • Is the size of a struct required to be an exact multiple of the alignment of that struct?

    - by Steve314
    Once again, I'm questioning a longstanding belief. Until today, I believed that the alignment of the following struct would normally be 4 and the size would normally be 5...

        struct example
        {
            int  m_Assume_32_Bits;
            char m_Assume_8_Bit_Bytes;
        };

    Because of this assumption, I have data structure code that uses offsetof to determine the distance in bytes between two adjacent items in an array. Today I spotted some old code that was using sizeof where it shouldn't, couldn't understand why I hadn't had bugs from it, coded up a unit test — and the test surprised me by passing. A bit of investigation showed that the sizeof of the type I used for the test (similar to the struct above) was an exact multiple of the alignment, i.e. 8 bytes. It had padding after the final member. Here is an example of why I never expected this...

        struct example2
        {
            example m_Example;
            char    m_Why_Cant_This_Be_At_Offset_6_Bytes;
        };

    A bit of googling showed examples that make it clear that this padding after the final member is allowed — for example http://en.wikipedia.org/wiki/Data_structure_alignment#Data_structure_padding (the "or at the end of the structure" bit). This is a bit embarrassing, as I recently posted this comment - Use of struct padding (my first comment to that answer).

    What I can't seem to determine is whether this padding to an exact multiple of the alignment is guaranteed by the C++ standard, or whether it is just something that is permitted and that some (but maybe not all) compilers do. So: is the size of a struct required to be an exact multiple of the alignment of that struct, according to the C++ standard? If the C standard makes different guarantees, I'm interested in that too, but the focus is on C++.
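    For what it's worth, the usual argument for why this must hold: array elements are required to be contiguous, and every element of an array must itself be correctly aligned, so the array stride — which is exactly sizeof — has to be a multiple of the alignment. A small C++11 sketch that makes the relationship visible (illustrative, not a quotation of the standard's wording):

        #include <cstdio>

        struct example {
            int  m_Assume_32_Bits;
            char m_Assume_8_Bit_Bytes;
        };

        // Arrays have no gaps between elements, and every element of example[N]
        // must itself be aligned, so sizeof(example) must be a multiple of
        // alignof(example) -- tail padding is what makes that work.
        static_assert(sizeof(example) % alignof(example) == 0,
                      "sizeof is a multiple of alignof");

        int main() {
            example arr[2];
            // &arr[1] is exactly sizeof(example) bytes past &arr[0]: stepping by
            // sizeof always lands on a properly aligned element.
            std::printf("sizeof=%zu alignof=%zu\n", sizeof(example), alignof(example));
            std::printf("stride=%td\n",
                        reinterpret_cast<char*>(&arr[1]) - reinterpret_cast<char*>(&arr[0]));
        }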


  • Project builds skipped with Any CPU build platform

    - by JMarsch
    All: We are using Visual Studio 2010, and we have recently upgraded our workstations to Windows 7 64-bit. I have a question: when I create a new solution, it seems to want to use the x86 platform. If I change the solution to "Any CPU" and then add a new project to the solution, the project will not have an "Any CPU" build option, and it will be deselected from building (in Configuration Manager). Something seems wrong here.

    Here's what I want to have (assuming that it is supported):

    - I want my solutions' platforms to default to "Any CPU" (I believe that means that at JIT time the assembly will run as either x86 or 64-bit, based on the machine that loaded it).
    - When I add a new project to the solution, I want it to have an "Any CPU" configuration, and I want that project to build by default — basically, the same behavior that we had in VS 2008 on 32-bit workstations.

    How do I do that? Is there something additional that I need to know now that I am using a 64-bit workstation?


  • Multi-Precision Arithmetic on MIPS

    - by Rob
    Hi, I am trying to implement multi-precision arithmetic on native MIPS. Assume that one 64-bit integer is in registers $12 and $13 and another is in registers $14 and $15. The sum is to be placed in registers $10 and $11. The most significant word of each 64-bit integer is found in the even-numbered register, and the least significant word in the odd-numbered register. On the internet it is said that this is the shortest possible implementation:

        addu    $11, $13, $15    # add least significant words
        sltu    $10, $11, $15    # set carry-in bit
        addu    $10, $10, $12    # add in first most significant word
        addu    $10, $10, $14    # add in second most significant word

    I just want to double-check that I understand correctly: the sltu checks whether the sum of the two least significant words is smaller (as an unsigned value) than one of the operands. If this is the case, then a carry occurred — is this right?

    To check whether a carry occurred when adding the two most significant words, and store the result in $9, would I do:

        sltu    $9, $10, $12     # set carry-out bit

    Does this make any sense?
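    If it helps to sanity-check the idea outside assembly: unsigned addition wraps, so when a carry occurs the 32-bit sum is strictly smaller than either operand — exactly the condition sltu tests. A C++ sketch of the same four-instruction sequence:

        #include <cstdint>
        #include <cstdio>

        // 64-bit add from 32-bit halves, mirroring the MIPS sequence: after an
        // unsigned add wraps, the result is smaller than either operand, which
        // is what sltu detects.
        void add64(uint32_t aHi, uint32_t aLo, uint32_t bHi, uint32_t bLo,
                   uint32_t& sumHi, uint32_t& sumLo) {
            sumLo = aLo + bLo;                       // addu $11, $13, $15
            uint32_t carry = (sumLo < bLo) ? 1 : 0;  // sltu $10, $11, $15
            sumHi = carry + aHi + bHi;               // addu; addu
        }

        int main() {
            uint32_t hi, lo;
            add64(0x00000000u, 0xFFFFFFFFu, 0x00000000u, 0x00000001u, hi, lo);
            std::printf("%08x%08x\n", (unsigned)hi, (unsigned)lo); // 0000000100000000
        }

    One caveat on the second question: the final sltu compares $10 against only one of the three terms summed into it, so it can miss a carry in corner cases (for example, both high words 0xFFFFFFFF with a carry-in leaves $10 equal to $12, so the test reports no carry); a fully reliable carry-out needs a check after each addition.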


  • Partial class or "chained inheritance"

    - by Charlie boy
    Hi. From my understanding, partial classes are a bit frowned upon by professional developers, but I've come up against a bit of an issue. I have made an implementation of the RichTextBox control that uses user32.dll calls for faster editing of large texts. That results in quite a bit of code. Then I added spell-checking capabilities to the control; this was done in another class, also inheriting from the RichTextBox control. That also makes up quite a bit of code.

    These two functionalities are quite separate, but I would like them merged, so that I can drop a single control on my form that has both fast editing and spell checking built in. I feel that simply adding the code from one class to the other would result in too large a code file, especially since there are two very distinct areas of functionality, so I seem to need another approach.

    Now to my question: to merge these two classes, should I make the spell-checking RichTextBox inherit from the fast-edit one, which in turn inherits from RichTextBox? Or should I make the two classes partials of a single class, thus making them more "equal", so to speak?

    This is more a question of OO principles, and exercise on my part, than me trying to reinvent the wheel. I know there are plenty of good text-editing controls out there, but this is just a hobby for me, and I just want to know how this kind of solution would be handled by a professional. Thanks!


  • How to use a TFileStream to read 2D matrices into a dynamic array?

    - by Robert Frank
    I need to read a large (2000x2000) matrix of binary data from a file into a dynamic array with Delphi 2010. I don't know the dimensions until run time, and I've never read raw data like this and don't know the IEEE format details, so I'm posting this to see if I'm on track. I plan to use a TFileStream to read one row at a time.

    I need to be able to read as many of these formats as possible:

    - 16-bit two's complement binary integer
    - 32-bit two's complement binary integer
    - 64-bit two's complement binary integer
    - IEEE single-precision floating point

    For 32-bit two's complement, I'm thinking of something like the code below; changing to Int64 and Int16 should be straightforward. How can I read the IEEE singles? Am I on the right track? Any suggestions on this code, or on how to elegantly extend it to all four data types above?

    Since my post-processing will be the same after reading this data, I guess I'll have to copy the matrix into a common format when done. I have no problem just having four procedures (one for each data type) like the one below, but perhaps there's an elegant way to use RTTI, or buffers and then Move()s, so that the same code works for all four data types. Thanks!

        type
          TRowData = array of Int32;

        procedure ReadMatrix;
        var
          Matrix: array of TRowData;
          NumberOfRows: Cardinal;
          NumberOfCols: Cardinal;
          CurRow: Integer;
        begin
          NumberOfRows := 20;  // not known until run time
          NumberOfCols := 100; // not known until run time
          SetLength(Matrix, NumberOfRows);
          for CurRow := 0 to NumberOfRows - 1 do  // dynamic arrays run 0..Length-1
          begin
            SetLength(Matrix[CurRow], NumberOfCols);
            // read into the row's first element, not the array reference itself
            FileStream.ReadBuffer(Matrix[CurRow][0], NumberOfCols * SizeOf(Int32));
          end;
        end;
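    On the "elegant way to handle all four formats" question: the routine's shape is identical for every element width, which generics capture directly. Below is a hedged C++ sketch of that single-generic-reader idea (assuming a row-major file whose byte order matches the host; in Delphi the analogue would be four thin typed wrappers or a generics-based reader). Note that IEEE singles need no decoding on ordinary hardware — reading the raw 4 bytes into a Single/float is enough:

        #include <cstddef>
        #include <cstdint>
        #include <fstream>
        #include <vector>

        // Read a rows x cols row-major binary matrix whose elements are all of
        // one type T: int16_t, int32_t, int64_t, or float (IEEE single).
        template <typename T>
        std::vector<std::vector<T>> readMatrix(std::istream& in,
                                               std::size_t rows, std::size_t cols) {
            std::vector<std::vector<T>> m(rows, std::vector<T>(cols));
            for (auto& row : m)  // one row per read, as in the Delphi version
                in.read(reinterpret_cast<char*>(row.data()),
                        static_cast<std::streamsize>(cols * sizeof(T)));
            return m;
        }

        int main() {
            std::ifstream f("matrix.bin", std::ios::binary); // hypothetical file name
            auto m = readMatrix<float>(f, 2000, 2000);       // same call for int16_t etc.
            return m.empty();
        }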


  • Do I need to use locking for integers shared between C++ threads?

    - by Shane MacLaughlin
    The title says it all, really. If I am accessing a single integer type (e.g. long, int, bool, etc.) from multiple threads, do I need to use a synchronisation mechanism such as a mutex to lock it? My understanding is that, as atomic types, I don't need to lock access to a single variable, but I see a lot of code out there that does use locking. Profiling such code shows that there is a significant performance hit for using locks, so I'd rather not.

    So: if the item I'm accessing corresponds to a bus-width integer (e.g. 4 bytes on a 32-bit processor), do I need to lock access to it when it is being used across multiple threads? Put another way, if thread A is writing to integer variable X at the same time as thread B is reading from the same variable, is it possible that thread B could end up with a few bytes of the previous value mixed in with a few bytes of the value being written? Is this architecture-dependent — e.g. OK for 4-byte integers on 32-bit systems but unsafe for 8-byte integers on 64-bit systems?

    Edit: Just saw this related post, which helps a fair bit.
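    For what it's worth, C++11 and later answer this directly: concurrent unsynchronised access to a plain int, with at least one writer, is a data race and formally undefined behaviour, while std::atomic guarantees indivisible loads and stores — typically lock-free for bus-width types, so it avoids the mutex cost measured above. A brief sketch:

        #include <atomic>
        #include <cstdio>
        #include <thread>

        std::atomic<long> counter{0};   // usually lock-free for bus-width types

        void writer() {
            for (int i = 0; i < 100000; ++i)
                counter.fetch_add(1, std::memory_order_relaxed); // no mutex needed
        }

        int main() {
            std::thread a(writer), b(writer);
            a.join();
            b.join();
            // Loads and stores of the atomic are indivisible: a reader can never
            // see a few bytes of the old value mixed with bytes of the new one.
            std::printf("%ld\n", counter.load());  // prints 200000
            // Reports whether the platform needs an internal lock at all:
            std::printf("lock-free: %d\n", (int)counter.is_lock_free());
        }

    Torn reads of the kind described — a few bytes old, a few bytes new — are precisely what the atomic type rules out; whether a plain aligned integer happens to tear on a given CPU is architecture-specific, which is why the portable answer is the atomic wrapper rather than guessing per platform.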


  • Has anyone ever had OpenCV work with Python 2.7 on MacOS 10.6?

    - by ?????
    I've been trying on and off for the past six months to get OpenCV to work with Python on Mac OS X. Every time there's a new release, I try again and fail. I've tried both 64-bit and 32-bit, and both the Xcode gcc and the gcc installed via MacPorts. I just spent the past two days on it, hopeful that the latest OpenCV release, which appears to include Python support directly, would work. It doesn't. I've also tried and failed to use this: http://code.google.com/p/pyopencv/

    I've been using OpenCV with C++ or Microsoft C++/CLI for the past few years, but I'd love to use it with Python on a Mac, because that is my primary development environment. I'd love to hear from anyone who has actually been able to get the OpenCV Python examples to run under Mac OS 10.6, either 32- or 64-bit.

    My last attempt was to follow the instructions on this page, with a clean, fresh install of 10.6 on a 64-bit-capable Mac: http://recursive-design.com/blog/2010/12/14/face-detection-with-osx-and-python/ My PYTHONPATH is set, and I can see the cv library in it, but an "import cv" from Python fails.

    Previously, the closest I've ever gotten (again, starting from a clean, fresh 10.6 install) was this:

        Python 2.7.1 (r271:86882M, Nov 30 2010, 10:35:34)
        [GCC 4.2.1 (Apple Inc. build 5664)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import cv
        Fatal Python error: Interpreter not initialized (version mismatch?)
        Abort trap
        thrilllap-2:~ swirsky$

    I've seen a lot of folks answering similar questions here, but have never seen a definitive answer for it.


  • How to access a parent's members from an inner class in WPF?

    - by black sensei
    Hello, experts! I'm trying to run a scheduled operation — say, checking how much credit the user has left via a web service call — and update the user interface with the result. I tried quartz.net to implement the scheduling bit: I created an inner class in the window class I need to update, and that inner class has the method that calls the web service; the result needs to be displayed back in the UI. Here is an example of what I did:

        public partial class Window2 : Window
        {
            private int i;

            public Window2()
            {
                InitializeComponent();
            }

            public class Myclass : IJob
            {
                public void Execute(JobExecutionContext context)
                {
                    string result = doMyOperation();
                    // I'd like to set the parent's label member lblNotif here,
                    // something like: parent.lblNotif.Content = result; // possible?
                }

                public string doMyOperation()
                {
                    // calling the web method to retrieve the user's balance
                    return service.GetUsersBalace(user);
                }
            }
        }

    The quartz bit is working, and this post is not about quartz. Here are my questions:

    1. How is it possible to access Window2's controls, for instance the label member lblNotif?
    2. If my thinking about this is wrong, what is the best practice for solving this kind of problem, where an application needs to run an operation every 5 minutes, say, and update the UI?
    3. At first I tried to use the BackgroundWorker, but I felt I couldn't do the scheduling bit with it. Is that correct, or am I wrong?

    Thanks to those who have commented already, and sorry to those who didn't get the meaning of my post — I hope this is a bit clearer. Thanks for reading.

