Search Results

Search found 5772 results on 231 pages for 'authorized keys'.


  • OpenVPN: Connection established but can’t connect to server

    - by Maik
    I am trying to set up OpenVPN to allow me to connect a number of laptops to my network in a way that allows the laptops to connect to specific computers via HTTP (to e.g. a server management page) and windows shares (to access files) In the test environment my laptops live in a network with a 192.168.1.X address range. The host-network has a 10.66.77.X address range The server hosting the OpenVPN server has address 10.77.10.20. I need to access some application server web pages on this machine, accessible on various ports The server with the windows shares as well as some other web based pages I need to access is on address 10.66.77.20 The config files for server and laptop are attached below. The laptop establishes the VPN connection without problems, but I cannot access any of the machines, even a simple ping fails. Maybe a routing problem? The routing table for the laptop is shown below as well - every idea is appreciated! Thanks! Maik Server config file port 1194 dev tun tls-server ca /etc/openvpn/keys/ca.crt cert /etc/openvpn/keys/projects.crt key /etc/openvpn/keys/projects.key dh /etc/openvpn/keys/dh1024.pem server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt push "route 10.66.77.0 255.255.255.0" keepalive 10 60 inactive 600 route 10.8.0.1 255.255.255.0 user openvpn group openvpn persist-tun persist-key verb 4 client config file dev tun proto udp remote SERVERADDR 1194 resolv-retry infinite nobind persist-key persist-tun ca ca.crt cert accountingLaptop.crt key accountingLaptop.key ns-cert-type server comp-lzo verb 3 Resulting routing table on client laptop C:\Documents and Settings\User>route print =========================================================================== Interface List 0x1 ........................... MS TCP Loopback interface 0x2 ...00 23 5a 9b 64 9b ...... Atheros AR8132 PCI-E Fast Ethernet Controller - Packet Scheduler Miniport 0x3 ...00 24 2c 35 c9 6b ...... Dell Wireless 1395 WLAN Mini-Card - Packet Sched uler Miniport 0x4 ...00 ff 5e 03 43 9b ...... TAP-Win32 Adapter V9 - Packet Scheduler Miniport =========================================================================== =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.129 25 10.8.0.1 255.255.255.255 10.8.0.5 10.8.0.6 1 10.8.0.4 255.255.255.252 10.8.0.6 10.8.0.6 30 10.8.0.6 255.255.255.255 127.0.0.1 127.0.0.1 30 10.66.77.0 255.255.255.0 10.8.0.5 10.8.0.6 1 10.255.255.255 255.255.255.255 10.8.0.6 10.8.0.6 30 127.0.0.0 255.0.0.0 127.0.0.1 127.0.0.1 1 192.168.1.0 255.255.255.0 192.168.1.129 192.168.1.129 25 192.168.1.129 255.255.255.255 127.0.0.1 127.0.0.1 25 192.168.1.255 255.255.255.255 192.168.1.129 192.168.1.129 25 224.0.0.0 240.0.0.0 10.8.0.6 10.8.0.6 30 224.0.0.0 240.0.0.0 192.168.1.129 192.168.1.129 25 255.255.255.255 255.255.255.255 10.8.0.6 2 1 255.255.255.255 255.255.255.255 10.8.0.6 10.8.0.6 1 255.255.255.255 255.255.255.255 192.168.1.129 192.168.1.129 1 Default Gateway: 192.168.1.1 =========================================================================== Persistent Routes: None
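    A routing problem is the most likely culprit: the client clearly receives the pushed 10.66.77.0/24 route, but the VPN server also has to forward traffic and the LAN hosts need a way back to 10.8.0.0/24. A minimal sketch of the usual server-side checks; interface names are assumptions, and the MASQUERADE rule can be replaced by a 10.8.0.0/24 return route on the LAN gateway:

        # enable forwarding between tun0 and the LAN (persist via /etc/sysctl.conf)
        sysctl -w net.ipv4.ip_forward=1
        # NAT the tunnel subnet out of the LAN-facing NIC (assumed here to be eth0)
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        # the config only pushes 10.66.77.0/24; if 10.77.10.20 must be reachable, also push:
        #   push "route 10.77.10.0 255.255.255.0"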

    Read the article

  • Cisco VPN Client dropping connection

    - by IT Team
    Using Windows XP and Cisco VPN client version 5.0.4.xxx to connect to a remote customer site. We are able to establish the connection and start an RDP session, but within 1-2 minutes the connection drops and the VPN connection disconnects. The PC making the connection is on a DMZ which is NATed to a public IP address. If we move the PC directly onto the internet without being on the DMZ the connection works and we don't encounter any disconnects. We use a PIX 515E running 7.2.4 and don't have any problems with similar setups connecting to other customer sites from the DMZ. The VPN setup on the client side is pretty basic, using IPSec over TCP port 10000. Not sure what device they are using on the peer, but my guess would be an ASA. Any idea as to what the problem would be? Below is the logs from the VPN client when the problem occurs. The real IP address has been changed to: RemotePeerIP. 4 14:39:30.593 09/23/09 Sev=Info/4 CM/0x63100024 Attempt connection with server "RemotePeerIP" 5 14:39:30.593 09/23/09 Sev=Info/6 CM/0x6310002F Allocated local TCP port 1942 for TCP connection. 6 14:39:30.796 09/23/09 Sev=Info/4 IPSEC/0x63700008 IPSec driver successfully started 7 14:39:30.796 09/23/09 Sev=Info/4 IPSEC/0x63700014 Deleted all keys 8 14:39:30.796 09/23/09 Sev=Info/6 IPSEC/0x6370002C Sent 256 packets, 0 were fragmented. 9 14:39:30.796 09/23/09 Sev=Info/6 IPSEC/0x63700020 TCP SYN sent to RemotePeerIP, src port 1942, dst port 10000 10 14:39:30.796 09/23/09 Sev=Info/6 IPSEC/0x6370001C TCP SYN-ACK received from RemotePeerIP, src port 10000, dst port 1942 11 14:39:30.796 09/23/09 Sev=Info/6 IPSEC/0x63700021 TCP ACK sent to RemotePeerIP, src port 1942, dst port 10000 12 14:39:30.796 09/23/09 Sev=Warning/3 IPSEC/0xA370001C Bad cTCP trailer, Rsvd 26984, Magic# 63697672h, trailer len 101, MajorVer 13, MinorVer 10 13 14:39:30.796 09/23/09 Sev=Info/4 CM/0x63100029 TCP connection established on port 10000 with server "RemotePeerIP" 14 14:39:31.296 09/23/09 Sev=Info/4 CM/0x63100024 Attempt connection with server "RemotePeerIP" 15 14:39:31.296 09/23/09 Sev=Info/6 IKE/0x6300003B Attempting to establish a connection with RemotePeerIP. 16 14:39:31.296 09/23/09 Sev=Info/4 IKE/0x63000013 SENDING ISAKMP OAK AG (SA, KE, NON, ID, VID(Xauth), VID(dpd), VID(Frag), VID(Unity)) to RemotePeerIP 17 14:39:36.296 09/23/09 Sev=Info/4 IKE/0x63000021 Retransmitting last packet! 18 14:39:36.296 09/23/09 Sev=Info/4 IKE/0x63000013 SENDING ISAKMP OAK AG (Retransmission) to RemotePeerIP 19 14:39:41.296 09/23/09 Sev=Info/4 IKE/0x63000021 Retransmitting last packet! 20 14:39:41.296 09/23/09 Sev=Info/4 IKE/0x63000013 SENDING ISAKMP OAK AG (Retransmission) to RemotePeerIP 21 14:39:46.296 09/23/09 Sev=Info/4 IKE/0x63000021 Retransmitting last packet! 
22 14:39:46.296 09/23/09 Sev=Info/4 IKE/0x63000013 SENDING ISAKMP OAK AG (Retransmission) to RemotePeerIP 23 14:39:51.328 09/23/09 Sev=Info/4 IKE/0x63000017 Marking IKE SA for deletion (I_Cookie=AEFC3FFF0405BBD6 R_Cookie=0000000000000000) reason = DEL_REASON_PEER_NOT_RESPONDING 24 14:39:51.828 09/23/09 Sev=Info/4 IKE/0x6300004B Discarding IKE SA negotiation (I_Cookie=AEFC3FFF0405BBD6 R_Cookie=0000000000000000) reason = DEL_REASON_PEER_NOT_RESPONDING 25 14:39:51.828 09/23/09 Sev=Info/4 CM/0x63100014 Unable to establish Phase 1 SA with server "RemotePeerIP" because of "DEL_REASON_PEER_NOT_RESPONDING" 26 14:39:51.828 09/23/09 Sev=Info/5 CM/0x63100025 Initializing CVPNDrv 27 14:39:51.828 09/23/09 Sev=Info/4 CM/0x6310002D Resetting TCP connection on port 10000 28 14:39:51.828 09/23/09 Sev=Info/6 CM/0x63100030 Removed local TCP port 1942 for TCP connection. 29 14:39:51.828 09/23/09 Sev=Info/6 CM/0x63100046 Set tunnel established flag in registry to 0. 30 14:39:51.828 09/23/09 Sev=Info/4 IKE/0x63000001 IKE received signal to terminate VPN connection 31 14:39:52.328 09/23/09 Sev=Info/6 IPSEC/0x63700023 TCP RST sent to RemotePeerIP, src port 1942, dst port 10000 32 14:39:52.328 09/23/09 Sev=Info/4 IPSEC/0x63700014 Deleted all keys 33 14:39:52.328 09/23/09 Sev=Info/4 IPSEC/0x63700014 Deleted all keys 34 14:39:52.328 09/23/09 Sev=Info/4 IPSEC/0x63700014 Deleted all keys 35 14:39:52.328 09/23/09 Sev=Info/4 IPSEC/0x6370000A IPSec driver successfully stopped Thank you for any help you can provide.
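    The "Bad cTCP trailer" warning suggests the IPSec-over-TCP stream is being altered on its way out. One commonly suggested cause when cTCP crosses a PIX/ASA doing NAT is TCP sequence-number randomization; below is a hedged sketch of a 7.x MPF exemption for the DMZ host (the object names and the 10.1.1.10 address are placeholders, not taken from the original setup):

        access-list CTCP10000 extended permit tcp host 10.1.1.10 any eq 10000
        class-map CTCP10000
          match access-list CTCP10000
        policy-map global_policy
          class CTCP10000
            set connection random-sequence-number disable
        ! global_policy is normally already applied with: service-policy global_policy global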

    Read the article

  • Connection Reset on MySQL query

    - by sunwukung
    OK, I'm flummoxed.(i've asked this question over on Stack too - but I need to get it fixed so I'm asking here too - any help is GREATLY appreciated) I'm trying to execute a query on a database (locally) and I keep getting a connection reset error. I've been using the method below in a generic DAO class to build a query string and pass to Zend_Db API. public function insert($params) { $loop = false; $keys = $values = ''; foreach($params as $k => $v){ if($loop == true){ $keys .= ','; $values .= ','; } $keys .= $this->db->quoteIdentifier($k); $values .= $this->db->quote($v); $loop = true; } $sql = "INSERT INTO " . $this->table_name . " ($keys) VALUES ($values)"; //formatResult returns an array of info regarding the status and any result sets of the query //I've commented that method call out anyway, so I don't think it's that try { $this->db->query($sql); return $this->formatResult(array( true, 'New record inserted into: '.$this->table_name )); }catch(PDOException $e) { return $this->formatResult($e); } } So far, this has worked fine - the errors have been occurring since we generated new tables to record user input. The insert string looks like this: INSERT INTO tablename(`id`,`title`,`summary`,`description`,`keywords`,`type_id`,`categories`) VALUES ('5539','Sample Title','Sample content',' \'Lorem ipsum dolor sit amet, consectetur adipiscing elit. In et pellentesque mauris. Curabitur hendrerit, leo id ultrices pellentesque, est purus mattis ligula, vitae imperdiet neque ligula bibendum sapien. Curabitur aliquet nisi et odio pharetra tincidunt. Phasellus sed iaculis nisl. Fusce commodo mauris et purus vehicula dictum. Nulla feugiat molestie accumsan. Donec fermentum libero in risus tempus elementum aliquam et magna. Fusce vitae sem metus. Aenean commodo pharetra risus, nec pellentesque augue ullamcorper nec. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos. Nullam vel elit libero. Vestibulum in turpis nunc.\'','this,is,a,sample,array',1,'category title') Here are the parameters it's getting before assembling the query (var_dump): array 'id' => string '1' (length=4) 'title' => string 'Sample Title' (length=12) 'summary' => string 'Sample content' (length=14) 'description' => string '<p>'Lorem ipsum dolor sit amet, consectetur adipiscing elit. In et pellentesque mauris. Curabitur hendrerit, leo id ultrices pellentesque, est purus mattis ligula, vitae imperdiet neque ligula bibendum sapien. Curabitur aliquet nisi et odio pharetra tincidunt. Phasellus sed iaculis nisl. Fusce commodo mauris et purus vehicula dictum. Nulla feugiat molestie accumsan. Donec fermentum libero in risus tempus elementum aliquam et magna. Fusce vitae sem metus. Aenean commodo pharetra risus, nec pellentesque augue'... (length=677) 'keywords' => string 'this,is,a,sample,array' (length=22) 'type_id' => int 1 'categories' => string 'category title' (length=43) The next port of call was checking the limits on the table, since it seems to insert if the length of "description" is around the 300 mark (it varies between 310 - 330). The field limit is set to VARCHAR(1500) and the validation on this field won't allow anything past bigger than 1200 with HTML, 800 without. The real kicker is that if I take this sql string and execute it via the command line, it works fine - so I can't for the life of me figure out what's wrong. I've tried extending the server parameters i.e. 
http://stackoverflow.com/questions/1964554/unexpected-connection-reset-a-php-or-an-apache-issue So, in a nutshell, I'm stumped. Any ideas?
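    Since the same statement runs fine from the command line, the hand-built quoting is worth ruling out. A minimal sketch that lets the adapter do the quoting and binding instead of string concatenation (Zend_Db_Adapter's insert() takes the table name and a column => value array):

        try {
            // the adapter builds INSERT INTO table (cols...) VALUES (?...) and binds $params itself
            $this->db->insert($this->table_name, $params);
            return $this->formatResult(array(true, 'New record inserted into: ' . $this->table_name));
        } catch (Zend_Db_Exception $e) {
            return $this->formatResult($e);
        }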

    Read the article

  • Remote NX login to Ubuntu, Gnome can't mount CD/DVD drive

    - by T.J. Crowder
    Even though I'm sitting next to it, I log into my Ubuntu 10.04 LTS system via NX Free Edition from another system at the moment (this is temporary, not worth buying a KVM for). Curiously, though, when I do that Gnome's auto-mounting fails for CD/DVD media (I haven't tried other kinds) with a "Not Authorized" error. For instance here's what happens when I put the Ubuntu 10.04 LTS installation CD in: This does not happen if I log into it locally (not via NX) with the same user account. When using NX, I can mount the media if I go to mount directly: tjc@midnight:~$ sudo mkdir /media/dvd tjc@midnight:~$ sudo mount -r -t iso9660 /dev/sr0 /media/dvd tjc@midnight:~$ ls /media/dvd autorun.inf casper dists install isolinux md5sum.txt pics pool preseed README.diskdefines ubuntu wubi.exe ...which, along with the "not authorized" error, suggests some kind of permissions problem to me (doh). What I find odd is that the same user is involved in both cases (local and via NX). I'm new to Ubuntu on the desktop (used it and other distros on servers for years), so I'm afraid I don't know how this auto-mounting is happening. I think it's handled by the gvfs package and its daemon, but that's about as far as I got (and perhaps I've taken a left turn even getting that far). Although I can work around it with mount, does anyone know how I might get auto-mounting to work?
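    The "Not Authorized" message points at PolicyKit: an NX session is treated as inactive/remote, so the udisks mount action is refused even though the same user can mount locally. A sketch of a local-authority override follows, with the caveat that the file path and action names are assumptions for Ubuntu 10.04 (pkaction | grep -i disks lists the real action ids):

        # /etc/polkit-1/localauthority/50-local.d/55-allow-nx-mount.pkla
        [Allow mounting of removable media from remote/inactive sessions]
        Identity=unix-user:tjc
        Action=org.freedesktop.udisks.filesystem-mount;org.freedesktop.udisks.drive-eject
        ResultAny=yes
        ResultInactive=yes
        ResultActive=yes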

    Read the article

  • BOINC error code -1200 when opening...

    - by Erik Vold
    I installed BOINC 6.10.21 on my Mac OS X 10.5 machine today to upgrade from a 6.6 version I had been running. I am the admin user, and I was logged in as the admin user. While installing 6.10.21 I was asked whether non-admin users should be allowed to use BOINC, and I said 'yes'. Then when I tried to open BOINC I got a message like the following: "You currently are not authorized to manage the client. Either re-install and allow non-admin users or contact your administrator to add you to the 'boinc_master' user group." So I tried to reinstall first, but this time I was not asked whether non-admin users should be allowed to use BOINC; I retried a few times and got no different result. I then downloaded 6.10.43 and installed that, and again I was not asked about non-admin users. When I tried to run BOINC I got the same message: "You currently are not authorized to manage the client. Either re-install and allow non-admin users or contact your administrator to add you to the 'boinc_master' user group." A Google search for how to add my admin user to the boinc_master user group turned up a suggestion to run the following in Terminal: "sudo dscl . -append /Groups/boinc_master GroupMembership <your user's short name> CR" I did this, and now I get the following error: BOINC ownership or permissions are not set properly; please reinstall BOINC (Error code -1200). I reinstall, I am never asked the question about allowing non-admin users again, and I still get this error message after every reinstall attempt. What should I do?
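    A hedged sketch of how to inspect and clean up the group before reinstalling, on the assumption that the trailing "CR" in the quoted command was meant as "press return" and may have been appended as a literal group member:

        dscl . -read /Groups/boinc_master GroupMembership               # see what actually got appended
        sudo dscl . -delete /Groups/boinc_master GroupMembership CR     # drop a literal "CR" entry if present
        sudo dscl . -append /Groups/boinc_master GroupMembership yourshortname
        # replace 'yourshortname' with your short user name (as reported by whoami)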

    Read the article

  • pam debugging "check pass; user unknown"

    - by lvc
    I am attempting to get Prosody authenticating with its auth_pam module. It is configured to use the pam service name xmpp. The pam.d/xmpp file is copied straight from the one configured for dovecot (originally taken from, I think, dovecot's documentation), which is known to be working: # cat /etc/pam.d/xmpp auth required pam_unix.so nullok debug account required pam_unix.so debug Logging in with dovecot works wonderfully. Logging in with prosody, with exactly the same username and password, causes Prosody to return 'Not authorized', and the following in journalctl -f: Oct 29 22:12:14 riscque.net prosody[9396]: c2s1d010b0: Client sent opening <stream:stream> to riscque.net Oct 29 22:12:14 riscque.net prosody[9396]: c2s1d010b0: Sent reply <stream:stream> to client Oct 29 22:12:14 riscque.net prosody[9396]: [178B blob data] Oct 29 22:12:14 riscque.net unix_chkpwd[9408]: check pass; user unknown Oct 29 22:12:14 riscque.net prosody[9396]: pam_unix(xmpp:auth): conversation failed Oct 29 22:12:14 riscque.net prosody[9396]: pam_unix(xmpp:auth): unable to obtain a password Oct 29 22:12:14 riscque.net prosody[9396]: pam_unix(xmpp:auth): auth could not identify password for [lvc] Oct 29 22:12:14 riscque.net prosody[9396]: riscque.net:saslauth: sasl reply: <failure xmlns='urn:ietf:params:xml:ns:xmpp-sasl'><not-authorized/><text>Unable to authorize you with the authentication credentials you&apos;ve sent.</text></failure> This series of errors seems mutually contradictory - first it says "user unknown", but then that it can't obtain the password for lvc - this username certainly exists on the system. What is likely going on here, and how would I debug this further?
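    The messages are not really contradictory: pam_unix cannot read /etc/shadow from Prosody's unprivileged process, so it falls back to the unix_chkpwd helper, which only verifies the calling user's own password and logs "user unknown" for anyone else. A sketch of the usual checks and of the Debian/Ubuntu-style workaround of letting the prosody account read the shadow database (group name and service name are assumptions):

        ps -o user= -C prosody          # confirm which account the daemon runs as
        ls -l /etc/shadow               # typically root:shadow 640; dovecot usually succeeds because its auth worker runs as root
        sudo usermod -a -G shadow prosody
        sudo systemctl restart prosody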

    Read the article

  • Boinc permissions problem on OS X

    - by Erik Vold
    I installed BOINC 6.10.21 on my OS X 10.5 machine today to upgrade from a 6.6 version I had been running. I am the admin user, and I was logged in as the admin user. While installing 6.10.21 I was asked whether non-admin users should be allowed to use BOINC, and I said 'yes'. Then when I tried to open BOINC I got a message like the following: "You currently are not authorized to manage the client. Either re-install and allow non-admin users or contact your administrator to add you to the 'boinc_master' user group." So I tried to reinstall first, but this time I was not asked whether non-admin users should be allowed to use BOINC; I retried a few times and got no different result. I then downloaded 6.10.43 and installed that, and again I was not asked about non-admin users. When I tried to run BOINC I got the same message: "You currently are not authorized to manage the client. Either re-install and allow non-admin users or contact your administrator to add you to the 'boinc_master' user group." A Google search for how to add my admin user to the boinc_master user group turned up a suggestion to run the following in Terminal: "sudo dscl . -append /Groups/boinc_master GroupMembership <your user's short name> CR" I did this, and now I get the following error: BOINC ownership or permissions are not set properly; please reinstall BOINC (Error code -1200). I reinstall, I am never asked the question about allowing non-admin users again, and I still get this error message after every reinstall attempt. What should I do?
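    A sketch of read-only checks worth running before yet another reinstall; error -1200 comes from BOINC's own ownership/permissions test, so it helps to compare group membership and data-directory ownership against what the installer is supposed to set (paths assume a standard Mac installation):

        dscl . -read /Groups/boinc_master GroupMembership
        dscl . -read /Groups/boinc_project GroupMembership
        ls -ld "/Library/Application Support/BOINC Data"
        ls -l /Applications/BOINCManager.app/Contents/MacOS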

    Read the article

  • how does openvpn decide which interface to get IP addrs from

    - by bkrupa
    Using ubuntu 10.04 on both ends. We have a client and server machine on the SAME network attempting to make a vpn connection. We use the config files from here and made minimal changes. The server and client start and seem to connect without any trouble. The server looks like: Wed Feb 23 22:13:22 2011 MULTI: multi_create_instance called Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Re-using SSL/TLS context Wed Feb 23 22:13:22 2011 192.168.1.55:47166 LZO compression initialized Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Control Channel MTU parms [ L:1574 D:138 EF:38 EB:0 ET:0 EL:0 ] Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Data Channel MTU parms [ L:1574 D:1450 EF:42 EB:135 ET:32 EL:0 AF:3/1 ] Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Local Options hash (VER=V4): 'f7df56b8' Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Expected Remote Options hash (VER=V4): 'd79ca330' Wed Feb 23 22:13:22 2011 192.168.1.55:47166 TLS: Initial packet from 192.168.1.55:47166, sid=69112e42 5458135b *...* Wed Feb 23 22:13:22 2011 192.168.1.55:47166 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Wed Feb 23 22:13:22 2011 192.168.1.55:47166 [client1] Peer Connection Initiated with 192.168.1.55:47166 On the client side the connection looks like: Wed Feb 23 22:20:07 2011 [server] Peer Connection Initiated with [AF_INET]192.168.1.41:1194 Wed Feb 23 22:20:10 2011 SENT CONTROL [server]: 'PUSH_REQUEST' (status=1) Wed Feb 23 22:20:10 2011 PUSH: Received control message: 'PUSH_REPLY,route-gateway 10.8.0.4,ping 10,ping-restart 120,ifconfig 10.8.0.50 255.255.255.0' ... Wed Feb 23 22:20:10 2011 /sbin/ifconfig tap0 10.8.0.50 netmask 255.255.255.0 mtu 1500 broadcast 10.8.0.255 Wed Feb 23 22:20:10 2011 Initialization Sequence Completed The openvpn server has been configured to assign ip addresses in the range 10.8.0.* and the client has been given 10.8.0.50. When I run the following nmap from the client: Starting Nmap 5.00 ( http://nmap.org ) at 2011-02-23 22:04 EST Host 10.8.0.50 is up (0.00047s latency). Nmap done: 256 IP addresses (1 host up) scanned in 30.34 seconds Host 192.168.1.1 is up (0.0025s latency). Host 192.168.1.18 is up (0.074s latency). Host 192.168.1.41 is up (0.0024s latency). Host 192.168.1.55 is up (0.00018s latency). Nmap done: 256 IP addresses (4 hosts up) scanned in 6.33 seconds If I run an nmap from the server on 10.8.0.* I get nothing. If the client has two interfaces (wireless and tap device) when you look for a certain ip address, how does it decide which interface to connect on? edit I am trying to set up a vpn so that I can connect to my home network from a remote network. It seems like openvpn is connecting but none of the computers on my home network appear as network machines even after the connection is "Established". Stripped versions of the client and server config files are posted below. Thanks for any help you can offer. 
server.conf port 1194 proto udp dev tap ca /etc/openvpn/easy-rsa/keys/ca.crt cert /etc/openvpn/easy-rsa/keys/server.crt key /etc/openvpn/easy-rsa/keys/server.key # This file should be kept secret dh /etc/openvpn/easy-rsa/keys/dh1024.pem ifconfig-pool-persist ipp.txt server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100 keepalive 10 120 comp-lzo persist-key persist-tun status openvpn-status.log verb 3 client.conf client dev tap dev-node tap0901 proto udp remote ********** 1194 resolv-retry infinite nobind persist-key persist-tun ca ca.crt cert client1.crt key client1.key comp-lzo verb 3 one other thing that might be helpful, I tried to connect using the openvpn gui for windows and the connection stalls out on "obtaining configuration" and the bar just scrolls forever.
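    With dev tap plus server-bridge, OpenVPN expects tap0 to already be bridged to the LAN NIC; without that bridge the clients get an address but have no path to the 192.168.1.x hosts. A sketch of the bridge setup on the server (interface names are assumptions, bridge-utils required); note also that a bridged setup would normally hand clients addresses from the LAN subnet rather than 10.8.0.x:

        openvpn --mktun --dev tap0
        brctl addbr br0
        brctl addif br0 eth0
        brctl addif br0 tap0
        ifconfig eth0 0.0.0.0 promisc up
        ifconfig tap0 0.0.0.0 promisc up
        ifconfig br0 192.168.1.41 netmask 255.255.255.0 up
        # and in server.conf, e.g.: server-bridge 192.168.1.41 255.255.255.0 192.168.1.200 192.168.1.220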

    Read the article

  • Cisco VPNClient from Mac won't connect using iPhone Tethering

    - by Dan Short
    I just set up iPhone tethering from my Snow Leopard Macbook Pro to my iPhone 3GS with the Datapro 4GB plan from AT&T. When attempting to connect to my corporate VPN from the MacBook Pro with Cisco VPNClient 4.9.01 (0100) I get the following log information: Cisco Systems VPN Client Version 4.9.01 (0100) Copyright (C) 1998-2006 Cisco Systems, Inc. All Rights Reserved. Client Type(s): Mac OS X Running on: Darwin 10.6.0 Darwin Kernel Version 10.6.0: Wed Nov 10 18:13:17 PST 2010; root:xnu-1504.9.26~3/RELEASE_I386 i386 Config file directory: /etc/opt/cisco-vpnclient 1 13:02:50.791 02/22/2011 Sev=Info/4 CM/0x43100002 Begin connection process 2 13:02:50.791 02/22/2011 Sev=Warning/2 CVPND/0x83400011 Error -28 sending packet. Dst Addr: 0x0AD337FF, Src Addr: 0x0AD33702 (DRVIFACE:1158). 3 13:02:50.791 02/22/2011 Sev=Warning/2 CVPND/0x83400011 Error -28 sending packet. Dst Addr: 0x0A2581FF, Src Addr: 0x0A258102 (DRVIFACE:1158). 4 13:02:50.792 02/22/2011 Sev=Info/4 CM/0x43100004 Establish secure connection using Ethernet 5 13:02:50.792 02/22/2011 Sev=Info/4 CM/0x43100024 Attempt connection with server "209.235.253.115" 6 13:02:50.792 02/22/2011 Sev=Info/4 CVPND/0x43400019 Privilege Separation: binding to port: (500). 7 13:02:50.793 02/22/2011 Sev=Info/4 CVPND/0x43400019 Privilege Separation: binding to port: (4500). 8 13:02:50.793 02/22/2011 Sev=Info/6 IKE/0x4300003B Attempting to establish a connection with 209.235.253.115. 9 13:02:51.293 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 10 13:02:51.894 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 11 13:02:52.495 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 12 13:02:53.096 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 13 13:02:53.698 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 14 13:02:54.299 02/22/2011 Sev=Warning/2 CVPND/0x83400018 Output size mismatch. Actual: 0, Expected: 237. (DRVIFACE:1319) 15 13:02:54.299 02/22/2011 Sev=Info/4 IKE/0x43000075 Unable to acquire local IP address after 5 attempts (over 5 seconds), probably due to network socket failure. 16 13:02:54.299 02/22/2011 Sev=Warning/2 IKE/0xC300009A Failed to set up connection data 17 13:02:54.299 02/22/2011 Sev=Info/4 CM/0x4310001C Unable to contact server "209.235.253.115" 18 13:02:54.299 02/22/2011 Sev=Info/5 CM/0x43100025 Initializing CVPNDrv 19 13:02:54.300 02/22/2011 Sev=Info/4 CVPND/0x4340001F Privilege Separation: restoring MTU on primary interface. 20 13:02:54.300 02/22/2011 Sev=Info/4 IKE/0x43000001 IKE received signal to terminate VPN connection 21 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x43700008 IPSec driver successfully started 22 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x43700014 Deleted all keys 23 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x4370000D Key(s) deleted by Interface (192.168.0.171) 24 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x43700014 Deleted all keys 25 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x43700014 Deleted all keys 26 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x43700014 Deleted all keys 27 13:02:54.300 02/22/2011 Sev=Info/4 IPSEC/0x4370000A IPSec driver successfully stopped The key line is 15: 15 13:02:54.299 02/22/2011 Sev=Info/4 IKE/0x43000075 Unable to acquire local IP address after 5 attempts (over 5 seconds), probably due to network socket failure. 
I can't find anything online about this. I found a single entry for the error message in Google, and it was a swedish (or some other nordic language site) that didn't have an answer to the question. I've tried connecting through both USB and Bluetooth tethering to the iPhone, and they both return the exact same results. I don't have direct control over the firewall, but if changes are necessary to make it work, I may be able to get the powers-that-be to make adjustments. A solution that doesn't require reconfiguring the firewall would be far better of course... Does anyone know what I can do to make this behave? Thanks, Dan
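    Log line 15 says the client never managed to bind a local address on the tethered link, so before blaming the firewall it is worth confirming the tether interface really owns the default route when the VPN is started. A minimal sanity-check sketch (interface names vary):

        ifconfig -a                      # look for the iPhone USB/Bluetooth interface, e.g. en2/en3
        netstat -rn | grep default       # the default route should point at that interface
        ping -c 3 209.235.253.115        # basic reachability of the VPN peer over the tether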

    Read the article

  • Linux: Apple Wireless A1314 Fn key not registered, looks like software bug

    - by ramplank
    I'm trying to set up my Apple Wireless Keyboard with my Kubuntu systems. These are PC hardware powered by Intel Atom and Intel i5 respectively. The keyboard has a US keyboard layout and has model number A1314 written on the back. It takes two AA batteries. I'm saying that because it appears there are multiple types of model A1314. I have tried this on a 10.04, 11.04, 11.10 and 12.04 system with no success. Every time using a bluetooth dongle and the KDE bluetooth notification tray applet, the keyboard can be connected. In both cases it shows up as "Apple Wireless Keyboard". Almost everything works as expected, in fact, I'm typing on it right now. But one thing doesn't: The Fn key. I'd like to use Fn + Down Arrow as PgDn / Page Down, I understand this is default behaviour on Apple keyboards. And of course I'd like the same for Page Up, Home and End. I'll stick to Page Down in my example. I used the xev tool to see the keycodes the system receives, and if I press on Fn nothing happens, and nothing is registered. If I press Fn + Down Arrow, xev only registers the down arrow. Here's the output from my 11.04 system to illustrate: Press just the Fn key: no output Press Down Arrow key: KeyPress event, serial 36, synthetic NO, window 0x4400001, root 0x15d, subw 0x4400002, time 2699773, (44,45), root:(1352,298), state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False KeyRelease event, serial 36, synthetic NO, window 0x4400001, root 0x15d, subw 0x4400002, time 2699860, (44,45), root:(1352,298), state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False Press Fn+Down Arrow Keys together: KeyPress event, serial 36, synthetic NO, window 0x4400001, root 0x15d, subw 0x4400002, time 2701548, (44,45), root:(1352,298), state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES, XLookupString gives 0 bytes: XmbLookupString gives 0 bytes: XFilterEvent returns: False KeyRelease event, serial 36, synthetic NO, window 0x4400001, root 0x15d, subw 0x4400002, time 2701623, (44,45), root:(1352,298), state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES, XLookupString gives 0 bytes: XFilterEvent returns: False I've been searching this forum and other Linux-related forums for hours but I still have not found a solution. I mostly found advice on how to fix this when using an actual apple laptop or desktop, but I don't have that. They said to try something like the following echo 2 > /sys/module/hid_apple/ ... But since there's no hid_apple directory present on my systems, I've needed to modprobe hid_apple first. That didn't help either. I'm cool with changing some config files, or compiling my own patched kernel if that's necessary. I currently have a 10.04 and 12.04 system available to test. The same issue occurs when hooked up to Windows 7. Fn key still does nothing, not by itself or in combination with other keys. With some AutoHotkey fiddling, I was able to confirm the key is registered as pressed, but ignored by default. A custom AutoHotkey script can fix that. But AutoHotkey is only for Windows, I want my problem fixed on Linux. Hooked up to an iPad 2 it only works in combination with the F1-F12 keys. Not with the arrow keys. If the screen of the ipad is off, and I press just the Fn key, the screen will come on, so the key itself is registered as pressed. 
So to sum up my question: Can anyone help me get Page Up, Page Down, Home and End to work on this keyboard, when that requires me to use an Fn key which is currently not registered?
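    On the kernel side the Fn key is consumed by the hid_apple driver rather than delivered to X, which is why xev never sees it; the arrows only turn into PgUp/PgDn/Home/End if the keyboard is actually bound to hid_apple. A sketch of what to check, plus the fnmode setting usually quoted for Apple keyboards (file paths follow the usual modprobe.d convention):

        dmesg | grep -i apple                         # confirm hid_apple claimed the bluetooth keyboard
        sudo modprobe hid_apple fnmode=2
        echo 2 | sudo tee /sys/module/hid_apple/parameters/fnmode
        echo "options hid_apple fnmode=2" | sudo tee /etc/modprobe.d/hid_apple.conf   # make it persistent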

    Read the article

  • HttpContext.Items and Server.Transfer/Execute

    - by Rick Strahl
    A few days ago my buddy Ben Jones pointed out that he ran into a bug in the ScriptContainer control in the West Wind Web and Ajax Toolkit. The problem was basically that when a Server.Transfer call was applied the script container (and also various ClientScriptProxy script embedding routines) would potentially fail to load up the specified scripts. It turns out the problem is due to the fact that the various components in the toolkit use request specific singletons via a Current property. I use a static Current property tied to a Context.Items[] entry to handle this type of operation which looks something like this: /// <summary> /// Current instance of this class which should always be used to /// access this object. There are no public constructors to /// ensure the reference is used as a Singleton to further /// ensure that all scripts are written to the same clientscript /// manager. /// </summary> public static ClientScriptProxy Current { get { if (HttpContext.Current == null) return new ClientScriptProxy(); ClientScriptProxy proxy = null; if (HttpContext.Current.Items.Contains(STR_CONTEXTID)) proxy = HttpContext.Current.Items[STR_CONTEXTID] as ClientScriptProxy; else { proxy = new ClientScriptProxy(); HttpContext.Current.Items[STR_CONTEXTID] = proxy; } return proxy; } } The proxy is attached to a Context.Items[] item which makes the instance Request specific. This works perfectly fine in most situations EXCEPT when you’re dealing with Server.Transfer/Execute requests. Server.Transfer doesn’t cause Context.Items to be cleared so both the current transferred request and the original request’s Context.Items collection apply. For the ClientScriptProxy this causes a problem because script references are tracked on a per request basis in Context.Items to check for script duplication. Once a script is rendered an ID is written into the Context collection and so considered ‘rendered’: // No dupes - ref script include only once if (HttpContext.Current.Items.Contains( STR_SCRIPTITEM_IDENTITIFIER + fileId ) ) return; HttpContext.Current.Items.Add(STR_SCRIPTITEM_IDENTITIFIER + fileId, string.Empty); where the fileId is the script name or unique identifier. The problem is on the Transferred page the item will already exist in Context and so fail to render because it thinks the script has already rendered based on the Context item. Bummer. The workaround for this is simple once you know what’s going on, but in this case it was a bitch to track down because the context items are used in many places throughout this class. The trick is to determine when a request is transferred and then removing the specific keys. The first issue is to determine if a script is in a Trransfer or Execute call: if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler) Context.Handler is the original handler and CurrentHandler is the actual currently executing handler that is running when a Transfer/Execute is active. You can also use Context.PreviousHandler to get the last handler and chain through the whole list of handlers applied if Transfer calls are nested (dog help us all for the person debugging that). For the ClientScriptProxy the full logic to check for a transfer and remove the code looks like this: /// <summary> /// Clears all the request specific context items which are script references /// and the script placement index. 
/// </summary> public void ClearContextItemsOnTransfer() { if (HttpContext.Current != null) { // Check for Server.Transfer/Execute calls - we need to clear out Context.Items if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler) { List<string> Keys = HttpContext.Current.Items.Keys.Cast<string>().Where(s => s.StartsWith(STR_SCRIPTITEM_IDENTITIFIER) || s == STR_ScriptResourceIndex).ToList(); foreach (string key in Keys) { HttpContext.Current.Items.Remove(key); } } } } along with a small update to the Current property getter that sets a global flag to indicate whether the request was transferred: if (!proxy.IsTransferred && HttpContext.Current.Handler != HttpContext.Current.CurrentHandler) { proxy.ClearContextItemsOnTransfer(); proxy.IsTransferred = true; } return proxy; I know this is pretty ugly, but it works and it’s actually minimal fuss without affecting the behavior of the rest of the class. Ben had a different solution that involved explicitly clearing out the Context items and replacing the collection with a manually maintained list of items which also works, but required changes through the code to make this work. In hindsight, it would have been better to use a single object that encapsulates all the ‘persisted’ values and store that object in Context instead of all these individual small morsels. Hindsight is always 20/20 though :-}. If possible use Page.Items ClientScriptProxy is a generic component that can be used from anywhere in ASP.NET, so there are various methods that are not Page specific on this component which is why I used Context.Items, rather than the Page.Items collection.Page.Items would be a better choice since it will sidestep the above Server.Transfer nightmares as the Page is reloaded completely and so any new Page gets a new Items collection. No fuss there. So for the ScriptContainer control, which has to live on the page the behavior is a little different. It is attached to Page.Items (since it’s a control): /// <summary> /// Returns a current instance of this control if an instance /// is already loaded on the page. Otherwise a new instance is /// created, added to the Form and returned. /// /// It's important this function is not called too early in the /// page cycle - it should not be called before Page.OnInit(). /// /// This property is the preferred way to get a reference to a /// ScriptContainer control that is either already on a page /// or needs to be created. Controls in particular should always /// use this property. /// </summary> public static ScriptContainer Current { get { // We need a context for this to work! if (HttpContext.Current == null) return null; Page page = HttpContext.Current.CurrentHandler as Page; if (page == null) throw new InvalidOperationException(Resources.ERROR_ScriptContainer_OnlyWorks_With_PageBasedHandlers); ScriptContainer ctl = null; // Retrieve the current instance ctl = page.Items[STR_CONTEXTID] as ScriptContainer; if (ctl != null) return ctl; ctl = new ScriptContainer(); page.Form.Controls.Add(ctl); return ctl; } } The biggest issue with this approach is that you have to explicitly retrieve the page in the static Current property. Notice again the use of CurrentHandler (rather than Handler which was my original implementation) to ensure you get the latest page including the one that Server.Transfer fired. Server.Transfer and Server.Execute are Evil All that said – this fix is probably for the 2 people who are crazy enough to rely on Server.Transfer/Execute. 
:-} There are so many weird behavior problems with these commands that I avoid them at all costs. I don’t think I have a single application that uses either of these commands… Related Resources Full source of ClientScriptProxy.cs (repository) Part of the West Wind Web Toolkit Static Singletons for ASP.NET Controls Post © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  
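    A tiny repro sketch of the behaviour the post describes, using hypothetical Source/Target pages: the Context.Items entry survives the Server.Transfer while the Page.Items entry does not, which is exactly why Page.Items sidesteps the duplicate-script problem:

        // Source.aspx.cs
        protected void Page_Load(object sender, EventArgs e)
        {
            HttpContext.Current.Items["marker"] = "set on Source";
            Page.Items["marker"] = "set on Source";
            Server.Transfer("Target.aspx");
        }

        // Target.aspx.cs
        protected void Page_Load(object sender, EventArgs e)
        {
            object fromContext = HttpContext.Current.Items["marker"];   // still "set on Source"
            object fromPage = Page.Items["marker"];                      // null - the transfer created a new Page
            Response.Write(fromContext + " / " + (fromPage ?? "null"));
        }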

    Read the article

  • SQL SERVER – Guest Post – Architecting Data Warehouse – Niraj Bhatt

    - by pinaldave
    Niraj Bhatt works as an Enterprise Architect for a Fortune 500 company and has an innate passion for building / studying software systems. He is a top rated speaker at various technical forums including Tech·Ed, MCT Summit, Developer Summit, and Virtual Tech Days, among others. Having run a successful startup for four years Niraj enjoys working on – IT innovations that can impact an enterprise bottom line, streamlining IT budgets through IT consolidation, architecture and integration of systems, performance tuning, and review of enterprise applications. He has received Microsoft MVP award for ASP.NET, Connected Systems and most recently on Windows Azure. When he is away from his laptop, you will find him taking deep dives in automobiles, pottery, rafting, photography, cooking and financial statements though not necessarily in that order. He is also a manager/speaker at BDOTNET, Asia’s largest .NET user group. Here is the guest post by Niraj Bhatt. As data in your applications grows it’s the database that usually becomes a bottleneck. It’s hard to scale a relational DB and the preferred approach for large scale applications is to create separate databases for writes and reads. These databases are referred as transactional database and reporting database. Though there are tools / techniques which can allow you to create snapshot of your transactional database for reporting purpose, sometimes they don’t quite fit the reporting requirements of an enterprise. These requirements typically are data analytics, effective schema (for an Information worker to self-service herself), historical data, better performance (flat data, no joins) etc. This is where a need for data warehouse or an OLAP system arises. A Key point to remember is a data warehouse is mostly a relational database. It’s built on top of same concepts like Tables, Rows, Columns, Primary keys, Foreign Keys, etc. Before we talk about how data warehouses are typically structured let’s understand key components that can create a data flow between OLTP systems and OLAP systems. There are 3 major areas to it: a) OLTP system should be capable of tracking its changes as all these changes should go back to data warehouse for historical recording. For e.g. if an OLTP transaction moves a customer from silver to gold category, OLTP system needs to ensure that this change is tracked and send to data warehouse for reporting purpose. A report in context could be how many customers divided by geographies moved from sliver to gold category. In data warehouse terminology this process is called Change Data Capture. There are quite a few systems that leverage database triggers to move these changes to corresponding tracking tables. There are also out of box features provided by some databases e.g. SQL Server 2008 offers Change Data Capture and Change Tracking for addressing such requirements. b) After we make the OLTP system capable of tracking its changes we need to provision a batch process that can run periodically and takes these changes from OLTP system and dump them into data warehouse. There are many tools out there that can help you fill this gap – SQL Server Integration Services happens to be one of them. c) So we have an OLTP system that knows how to track its changes, we have jobs that run periodically to move these changes to warehouse. The question though remains is how warehouse will record these changes? This structural change in data warehouse arena is often covered under something called Slowly Changing Dimension (SCD). 
While we will talk about dimensions in a while, SCD can be applied to pure relational tables too. SCD enables a database structure to capture historical data. This would create multiple records for a given entity in relational database and data warehouses prefer having their own primary key, often known as surrogate key. As I mentioned a data warehouse is just a relational database but industry often attributes a specific schema style to data warehouses. These styles are Star Schema or Snowflake Schema. The motivation behind these styles is to create a flat database structure (as opposed to normalized one), which is easy to understand / use, easy to query and easy to slice / dice. Star schema is a database structure made up of dimensions and facts. Facts are generally the numbers (sales, quantity, etc.) that you want to slice and dice. Fact tables have these numbers and have references (foreign keys) to set of tables that provide context around those facts. E.g. if you have recorded 10,000 USD as sales that number would go in a sales fact table and could have foreign keys attached to it that refers to the sales agent responsible for sale and to time table which contains the dates between which that sale was made. These agent and time tables are called dimensions which provide context to the numbers stored in fact tables. This schema structure of fact being at center surrounded by dimensions is called Star schema. A similar structure with difference of dimension tables being normalized is called a Snowflake schema. This relational structure of facts and dimensions serves as an input for another analysis structure called Cube. Though physically Cube is a special structure supported by commercial databases like SQL Server Analysis Services, logically it’s a multidimensional structure where dimensions define the sides of cube and facts define the content. Facts are often called as Measures inside a cube. Dimensions often tend to form a hierarchy. E.g. Product may be broken into categories and categories in turn to individual items. Category and Items are often referred as Levels and their constituents as Members with their overall structure called as Hierarchy. Measures are rolled up as per dimensional hierarchy. These rolled up measures are called Aggregates. Now this may seem like an overwhelming vocabulary to deal with but don’t worry it will sink in as you start working with Cubes and others. Let’s see few other terms that we would run into while talking about data warehouses. ODS or an Operational Data Store is a frequently misused term. There would be few users in your organization that want to report on most current data and can’t afford to miss a single transaction for their report. Then there is another set of users that typically don’t care how current the data is. Mostly senior level executives who are interesting in trending, mining, forecasting, strategizing, etc. don’t care for that one specific transaction. This is where an ODS can come in handy. ODS can use the same star schema and the OLAP cubes we saw earlier. The only difference is that the data inside an ODS would be short lived, i.e. for few months and ODS would sync with OLTP system every few minutes. Data warehouse can periodically sync with ODS either daily or weekly depending on business drivers. Data marts are another frequently talked about topic in data warehousing. They are subject-specific data warehouse. Data warehouses that try to span over an enterprise are normally too big to scope, build, manage, track, etc. 
Hence they are often scaled down to something called Data mart that supports a specific segment of business like sales, marketing, or support. Data marts too, are often designed using star schema model discussed earlier. Industry is divided when it comes to use of data marts. Some experts prefer having data marts along with a central data warehouse. Data warehouse here acts as information staging and distribution hub with spokes being data marts connected via data feeds serving summarized data. Others eliminate the need for a centralized data warehouse citing that most users want to report on detailed data. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Business Intelligence, Data Warehousing, Database, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
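    A minimal T-SQL sketch of the star-schema shape described above, matching the sales/agent/time example (table and column names are invented for illustration):

        CREATE TABLE DimAgent (
            AgentKey  INT IDENTITY PRIMARY KEY,     -- surrogate key, as discussed under SCD
            AgentName NVARCHAR(100)
        );
        CREATE TABLE DimDate (
            DateKey      INT PRIMARY KEY,           -- e.g. 20100315
            CalendarDate DATE
        );
        CREATE TABLE FactSales (
            SalesKey    INT IDENTITY PRIMARY KEY,
            AgentKey    INT REFERENCES DimAgent(AgentKey),   -- dimensions give the numbers context
            DateKey     INT REFERENCES DimDate(DateKey),
            SalesAmount DECIMAL(18, 2)                       -- the measure that gets sliced and diced
        );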

    Read the article

  • XNA 2D Collision with specific tiles

    - by zenzero
    I am new to game programming and to these sites for help. I am making a 2D game but I can't seem to get the collision between my character and certain tiles. I have a map filled with grass tiles and water tiles and I want to keep my character from walking on the water tiles. I have a Tiles class that I use so that the tiles are objects and also has the collision method in it, a TileEngine class used create the map and it also holds a list of Tiles, and the class James which is for my character. I also have a Camera class that centers the camera on my character if that has anything to do with the problem. The character's movement is intended to be restricted to 4 directions(up, down, left, right). As an extra note, the bottom right water tile does have collision, but the collision does not occur for any of the other water tiles. Here is my TileEngine class using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace Test2DGame2 { class TileEngine : Microsoft.Xna.Framework.Game { //makes a list of Tiles objects public List<Tiles> tilesList = new List<Tiles>(); public TileEngine() {} public static int tileWidth = 64; public static int tileHeight = 64; public int[,] map = { {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, }, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,}, {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,}, }; public void drawMap(SpriteBatch spriteBatch) { for (int y = 0; y < map.GetLength(0); y++) { for (int x = 0; x < map.GetLength(1); x++) { //make a Rectangle tilesList[map[y, x]].rectangle = new Rectangle(x * tileWidth, y * tileHeight, tileWidth, tileHeight); //draw the Tiles objects spriteBatch.Draw(tilesList[map[y, x]].texture, tilesList[map[y, x]].rectangle, Color.White); } } } } } Here is my Tiles class using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using 
Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace Test2DGame2 { class Tiles { public Texture2D texture; public Rectangle rectangle; public Tiles(Texture2D texture) { this.texture = texture; } //check to see if james collides with the tile from the right side public void rightCollision(James james) { if (james.GetBounds().Intersects(rectangle)) { james.position.X = rectangle.Left - james.front.Width; } } } } I have a method for rightCollision because I could only figure out how to get the collisions from specifying directions. and here is the James class for my character using System; using System.Collections.Generic; using System.Linq; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Audio; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.GamerServices; using Microsoft.Xna.Framework.Graphics; using Microsoft.Xna.Framework.Input; using Microsoft.Xna.Framework.Media; namespace Test2DGame2 { class James { public Texture2D front; public Texture2D back; public Texture2D left; public Texture2D right; public Vector2 center; public Vector2 position; public James(Texture2D front) { position = new Vector2(0, 0); this.front = front; center = new Vector2(front.Width / 2, front.Height / 2); } public James(Texture2D front, Vector2 newPosition) { this.front = front; position = newPosition; center = new Vector2(front.Width / 2, front.Height / 2); } public void move(GameTime gameTime) { KeyboardState keyboard = Keyboard.GetState(); float SCALE = 20.0f; float speed = gameTime.ElapsedGameTime.Milliseconds / 100.0f; if (keyboard.IsKeyDown(Keys.Up)) { position.Y -=speed * SCALE; } else if (keyboard.IsKeyDown(Keys.Down)) { position.Y += speed * SCALE; } else if (keyboard.IsKeyDown(Keys.Left)) { position.X -= speed * SCALE; } else if (keyboard.IsKeyDown(Keys.Right)) { position.X += speed * SCALE; } } public void draw(SpriteBatch spriteBatch) { spriteBatch.Draw(front, position, null, Color.White, 0, center, 1.0f, SpriteEffects.None, 0.0f); } //get the boundingbox for James public Rectangle GetBounds() { return new Rectangle( (int)position.X, (int)position.Y, front.Width, front.Height); } } }
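    One thing the posted code never does is test every water cell: drawMap() keeps overwriting the shared Tiles.rectangle, so only the last water tile drawn has a usable rectangle when rightCollision runs. A hedged sketch of a per-cell check driven by the map array instead (assumes tile index 1 is water; it would be called from Update after move, and a full solution would push out of all four sides, not just the right):

        // inside TileEngine: block James on any water cell, computing each cell's rectangle on the fly
        public void CheckWaterCollision(James james)
        {
            for (int y = 0; y < map.GetLength(0); y++)
            {
                for (int x = 0; x < map.GetLength(1); x++)
                {
                    if (map[y, x] != 1) continue;   // only water tiles are solid
                    Rectangle cell = new Rectangle(x * tileWidth, y * tileHeight, tileWidth, tileHeight);
                    if (james.GetBounds().Intersects(cell))
                    {
                        james.position.X = cell.Left - james.front.Width;   // same push-out as rightCollision
                    }
                }
            }
        }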

    Read the article

  • Single Key Multiple Values Data Structure for one to many mapping

    - by nijhawan.saurabh
    Dictionaries are good, they are great to store Key / Value pairs but what if you want to store multiple values for a single key? Dictionaries would not allow duplicate keys. I came across a nice way to represent such a Data Structure using one of the Extension Methods (ToLookup) present in the System.Linq namespace, which converts an IEnumerable<T> to an ILookup<TKey, TElement>. Now, there are two parameters this method expects (the other overload expects 3 parameters): IEnumerable<TSource> - this list would contain the actual data; Func<TSource, TKey> keySelector - the delegate which computes the keys. The method returns the following: ILookup<TKey, TElement>. This DS would store keys and multiple values along those keys. Let's see a small example:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        internal class Program
        {
            private static void Main(string[] args)
            {
                // Create a list of strings.
                var list = new List<string> { "IceCream1", "Chocolate Moose", "IceCream2" };

                // Generate a lookup Data Structure keyed on string length.
                ILookup<int, string> lookupDs = list.ToLookup(item => item.Length);

                // Enumerate groupings.
                foreach (var group in lookupDs)
                {
                    foreach (string element in group)
                    {
                        Console.WriteLine(element);
                    }
                }
            }
        }
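    As a small follow-up, a single key can then be read back through the lookup's indexer; both nine-character strings come out under key 9:

        // values stored under one key; ILookup's indexer returns an IEnumerable<string>
        foreach (string s in lookupDs[9])
        {
            Console.WriteLine(s);   // IceCream1, IceCream2
        }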

    Read the article

  • Performance of delegate and method group

    - by BlueFox
    Hi I was investigating the performance hit of creating Cachedependency objects, so I wrote a very simple test program as follows: using System; using System.Collections.Generic; using System.Diagnostics; using System.Web.Caching; namespace Test { internal class Program { private static readonly string[] keys = new[] {"Abc"}; private static readonly int MaxIteration = 10000000; private static void Main(string[] args) { Debug.Print("first set"); test7(); test6(); test5(); test4(); test3(); test2(); Debug.Print("second set"); test2(); test3(); test4(); test5(); test6(); test7(); } private static void test2() { DateTime start = DateTime.Now; var list = new List<CacheDependency>(); for (int i = 0; i < MaxIteration; i++) { list.Add(new CacheDependency(null, keys)); } Debug.Print("test2 Time: " + (DateTime.Now - start)); } private static void test3() { DateTime start = DateTime.Now; var list = new List<Func<CacheDependency>>(); for (int i = 0; i < MaxIteration; i++) { list.Add(() => new CacheDependency(null, keys)); } Debug.Print("test3 Time: " + (DateTime.Now - start)); } private static void test4() { var p = new Program(); DateTime start = DateTime.Now; var list = new List<Func<CacheDependency>>(); for (int i = 0; i < MaxIteration; i++) { list.Add(p.GetDep); } Debug.Print("test4 Time: " + (DateTime.Now - start)); } private static void test5() { var p = new Program(); DateTime start = DateTime.Now; var list = new List<Func<CacheDependency>>(); for (int i = 0; i < MaxIteration; i++) { list.Add(() => { return p.GetDep(); }); } Debug.Print("test5 Time: " + (DateTime.Now - start)); } private static void test6() { DateTime start = DateTime.Now; var list = new List<Func<CacheDependency>>(); for (int i = 0; i < MaxIteration; i++) { list.Add(GetDepSatic); } Debug.Print("test6 Time: " + (DateTime.Now - start)); } private static void test7() { DateTime start = DateTime.Now; var list = new List<Func<CacheDependency>>(); for (int i = 0; i < MaxIteration; i++) { list.Add(() => { return GetDepSatic(); }); } Debug.Print("test7 Time: " + (DateTime.Now - start)); } private CacheDependency GetDep() { return new CacheDependency(null, keys); } private static CacheDependency GetDepSatic() { return new CacheDependency(null, keys); } } } But I can't understand why these result looks like this: first set test7 Time: 00:00:00.4840277 test6 Time: 00:00:02.2041261 test5 Time: 00:00:00.1910109 test4 Time: 00:00:03.1401796 test3 Time: 00:00:00.1820105 test2 Time: 00:00:08.5394884 second set test2 Time: 00:00:07.7324423 test3 Time: 00:00:00.1830105 test4 Time: 00:00:02.3561347 test5 Time: 00:00:00.1750100 test6 Time: 00:00:03.2941884 test7 Time: 00:00:00.1850106 In particular: 1. Why is test4 and test6 much slower than their delegate version? I also noticed that Resharper specifically has a comment on the delegate version suggesting change test5 and test7 to "Covert to method group". Which is the same as test4 and test6 but they're actually slower? 2. I don't seem a consistent performance difference when calling test4 and test6, shouldn't static calls to be always faster?
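    A plausible explanation, sketched below: a lambda that captures nothing is compiled into a single cached delegate, whereas a method-group conversion like list.Add(GetDepSatic) allocates a brand-new Func<CacheDependency> on every iteration (and test2 is slowest simply because it constructs ten million CacheDependency objects). Hoisting the delegate makes the method-group version behave like the lambda versions:

        private static void test6_cached()
        {
            DateTime start = DateTime.Now;
            var list = new List<Func<CacheDependency>>();
            Func<CacheDependency> dep = GetDepSatic;     // convert the method group once
            for (int i = 0; i < MaxIteration; i++)
            {
                list.Add(dep);                           // no per-iteration delegate allocation
            }
            Debug.Print("test6_cached Time: " + (DateTime.Now - start));
        }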

    Read the article

  • Unable to Calculate Position within Owner-Draw Text

    - by Jonathan Wood
    I'm trying to use Visual Studio 2012 to create a Windows Forms application that can place the caret at the current position within an owner-drawn string. However, I've been unable to find a way to accurately calculate that position. I've done this successfully before in C++. I've now tried numerous methods in C#. Originally, I tried using .NET classes to determine the correct position, but then I tried accessing the Windows API directly. In some cases, I came close, but after some time I still cannot place the caret accurately. I've created a small test program and posted key parts below. I've also posted the entire project here. The exact font used is not important to me; however, my application assumes a mono-spaced font. Any help is appreciated. Form1.cs This is my main form. public partial class Form1 : Form { private string TestString; private int AveCharWidth; private int Position; public Form1() { InitializeComponent(); TestString = "123456789012345678901234567890123456789012345678901234567890"; AveCharWidth = GetFontWidth(); Position = 0; } private void Form1_Load(object sender, EventArgs e) { Font = new Font(FontFamily.GenericMonospace, 12, FontStyle.Regular, GraphicsUnit.Pixel); } protected override void OnGotFocus(EventArgs e) { Windows.CreateCaret(Handle, (IntPtr)0, 2, (int)Font.Height); Windows.ShowCaret(Handle); UpdateCaretPosition(); base.OnGotFocus(e); } protected void UpdateCaretPosition() { Windows.SetCaretPos(Padding.Left + (Position * AveCharWidth), Padding.Top); } protected override void OnLostFocus(EventArgs e) { Windows.HideCaret(Handle); Windows.DestroyCaret(); base.OnLostFocus(e); } protected override void OnPaint(PaintEventArgs e) { e.Graphics.DrawString(TestString, Font, SystemBrushes.WindowText, new PointF(Padding.Left, Padding.Top)); } protected override bool IsInputKey(Keys keyData) { switch (keyData) { case Keys.Right: case Keys.Left: return true; } return base.IsInputKey(keyData); } protected override void OnKeyDown(KeyEventArgs e) { switch (e.KeyCode) { case Keys.Left: Position = Math.Max(Position - 1, 0); UpdateCaretPosition(); break; case Keys.Right: Position = Math.Min(Position + 1, TestString.Length); UpdateCaretPosition(); break; } base.OnKeyDown(e); } protected int GetFontWidth() { int AverageCharWidth = 0; using (var graphics = this.CreateGraphics()) { try { Windows.TEXTMETRIC tm; var hdc = graphics.GetHdc(); IntPtr hFont = this.Font.ToHfont(); IntPtr hOldFont = Windows.SelectObject(hdc, hFont); var a = Windows.GetTextMetrics(hdc, out tm); var b = Windows.SelectObject(hdc, hOldFont); var c = Windows.DeleteObject(hFont); AverageCharWidth = tm.tmAveCharWidth; } catch { } finally { graphics.ReleaseHdc(); } } return AverageCharWidth; } } Windows.cs Here are my Windows API declarations. 
public static class Windows { [Serializable, StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)] public struct TEXTMETRIC { public int tmHeight; public int tmAscent; public int tmDescent; public int tmInternalLeading; public int tmExternalLeading; public int tmAveCharWidth; public int tmMaxCharWidth; public int tmWeight; public int tmOverhang; public int tmDigitizedAspectX; public int tmDigitizedAspectY; public short tmFirstChar; public short tmLastChar; public short tmDefaultChar; public short tmBreakChar; public byte tmItalic; public byte tmUnderlined; public byte tmStruckOut; public byte tmPitchAndFamily; public byte tmCharSet; } [DllImport("user32.dll")] public static extern bool CreateCaret(IntPtr hWnd, IntPtr hBitmap, int nWidth, int nHeight); [DllImport("User32.dll")] public static extern bool SetCaretPos(int x, int y); [DllImport("User32.dll")] public static extern bool DestroyCaret(); [DllImport("User32.dll")] public static extern bool ShowCaret(IntPtr hWnd); [DllImport("User32.dll")] public static extern bool HideCaret(IntPtr hWnd); [DllImport("gdi32.dll", CharSet = CharSet.Auto)] public static extern bool GetTextMetrics(IntPtr hdc, out TEXTMETRIC lptm); [DllImport("gdi32.dll")] public static extern IntPtr SelectObject(IntPtr hdc, IntPtr hgdiobj); [DllImport("GDI32.dll")] public static extern bool DeleteObject(IntPtr hObject); }
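    A hypothetical cross-check (not code from the post): rather than deriving the caret offset from tmAveCharWidth, measure the substring to the left of the caret with the same GDI+ call that draws the text. Graphics.MeasureString with StringFormat.GenericTypographic strips most of the padding GDI+ normally adds, so the measured width lines up with DrawString output, assuming OnPaint is switched to draw with that same StringFormat. A sketch of a helper that could sit in Form1:

        protected int MeasureCaretX(Graphics g, string text, int position)
        {
            if (position <= 0)
                return Padding.Left;

            using (var format = (StringFormat)StringFormat.GenericTypographic.Clone())
            {
                // Include trailing spaces so the caret does not collapse onto them.
                format.FormatFlags |= StringFormatFlags.MeasureTrailingSpaces;
                SizeF size = g.MeasureString(text.Substring(0, position), Font,
                                             int.MaxValue, format);
                return Padding.Left + (int)Math.Round(size.Width);
            }
        }

    UpdateCaretPosition could then obtain a Graphics from CreateGraphics() and call Windows.SetCaretPos(MeasureCaretX(g, TestString, Position), Padding.Top); how closely the two line up still depends on drawing and measuring with the same font and format.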

    Read the article

  • Restricting Input in HTML Textboxes to Numeric Values

    - by Rick Strahl
    Ok, here’s a fairly basic one – how to force a textbox to accept only numeric input. Somebody asked me this today on a support call so I did a few quick lookups online and found the solutions listed rather unsatisfying. The main problem with most of the examples I could dig up was that they only include numeric values, but that provides a rather lame user experience. You need to still allow basic operational keys for a textbox – navigation keys, backspace and delete, tab/shift tab and the Enter key - to work or else the textbox will feel very different than a standard text box. Yes there are plug-ins that allow masked input easily enough but most are fixed width which is difficult to do with plain number input. So I took a few minutes to write a small reusable plug-in that handles this scenario. Imagine you have a couple of textboxes on a form like this: <div class="containercontent"> <div class="label">Enter a number:</div> <input type="text" name="txtNumber1" id="txtNumber1" value="" class="numberinput" /> <div class="label">Enter a number:</div> <input type="text" name="txtNumber2" id="txtNumber2" value="" class="numberinput" /> </div> and you want to restrict input to numbers. Here’s a small .forceNumeric() jQuery plug-in that does what I like to see in this case: [Updated thanks to Elijah Manor for a couple of small tweaks for additional keys to check for] <script type="text/javascript"> $(document).ready(function () { $(".numberinput").forceNumeric(); }); // forceNumeric() plug-in implementation jQuery.fn.forceNumeric = function () { return this.each(function () { $(this).keydown(function (e) { var key = e.which || e.keyCode; if (!e.shiftKey && !e.altKey && !e.ctrlKey && // numbers key >= 48 && key <= 57 || // Numeric keypad key >= 96 && key <= 105 || // comma, period and minus key == 190 || key == 188 || key == 109 || // Backspace and Tab and Enter key == 8 || key == 9 || key == 13 || // Home and End key == 35 || key == 36 || // left and right arrows key == 37 || key == 39 || // Del and Ins key == 46 || key == 45) return true; return false; }); }); } </script> With the plug-in in place in your page or an external .js file you can now simply use a selector to apply it: $(".numberinput").forceNumeric(); The plug-in basically goes through each selected element and hooks up a keydown() event handler. When a key is pressed the handler is fired and the keyCode of the event object is sent. Recall that jQuery normalizes the JavaScript Event object between browsers. The code basically white-lists a few key codes and rejects all others. It returns true to indicate the keypress is to go through or false to eat the keystroke and not process it which effectively removes it. Simple and low tech, and it works without too much change of typical text box behavior.© Rick Strahl, West Wind Technologies, 2005-2011Posted in JavaScript  jQuery  HTML  

    Read the article

  • bluez 5.19 PS4 controller

    - by Athanase
    I currently have a problem when pairing my computer with a PS4 remote. On my Ubuntu 14.04 I removed everything related with bluez and bluetooth, and I built and installed bluez 5.19. Here are some useful command outputs: jean@system ~ hcitool hcitool - HCI Tool ver 5.19 jean@system ~ hcitool dev Devices: hci0 00:15:83:4C:0C:BB jean@system ~ bluetoothctl [bluetooth]# version Version 5.19 jean@system ~ bluetoothctl [NEW] Controller 00:15:83:4C:0C:BB BlueZ [default] jean@system ~ lsusb Bus 003 Device 012: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode) So here is what happens. When I try to hard pair the controller with the computer, by holding the share and ps button for a while, everything works as expected and the pairing is done properly. After a hard pairing if I try the pairing by pressing the ps button only, nothings happen. In order to go it, I first power up the bluetooth dongle: jean@system ~ sudo hciconfig hciX up and then I run the bluetooh deamon bluetoothd: jean@system /usr/libexec/bluetooth ~ ./bluetoothd -d -n bluetoothd[11270]: Bluetooth daemon 5.19 bluetoothd[11270]: src/main.c:parse_config() parsing main.conf bluetoothd[11270]: src/main.c:parse_config() discovto=0 bluetoothd[11270]: src/main.c:parse_config() pairto=0 bluetoothd[11270]: src/main.c:parse_config() auto_to=60 bluetoothd[11270]: src/main.c:parse_config() name=%h-%d bluetoothd[11270]: src/main.c:parse_config() class=0x000100 bluetoothd[11270]: src/main.c:parse_config() Key file does not have key 'DeviceID' bluetoothd[11270]: src/gatt.c:gatt_init() Starting GATT server bluetoothd[11270]: src/adapter.c:adapter_init() sending read version command bluetoothd[11270]: Starting SDP server bluetoothd[11270]: src/sdpd-service.c:register_device_id() Adding device id record for 0002:1d6b:0246:0513 ... 
bluetoothd[11270]: src/adapter.c:adapter_service_insert() /org/bluez/hci0 bluetoothd[11270]: src/adapter.c:add_uuid() sending add uuid command for index 0 bluetoothd[11270]: profiles/audio/a2dp.c:a2dp_sink_server_probe() path /org/bluez/hci0 bluetoothd[11270]: profiles/audio/a2dp.c:a2dp_source_server_probe() path /org/bluez/hci0 bluetoothd[11270]: src/adapter.c:btd_adapter_unblock_address() hci0 00:00:00:00:00:00 bluetoothd[11270]: src/adapter.c:get_ltk_info() A4:15:66:C1:0D:2A bluetoothd[11270]: src/device.c:device_create_from_storage() address A4:15:66:C1:0D:2A bluetoothd[11270]: src/device.c:device_new() address A4:15:66:C1:0D:2A bluetoothd[11270]: src/device.c:device_new() Creating device /org/bluez/hci0/dev_A4_15_66_C1_0D_2A bluetoothd[11270]: src/device.c:device_set_bonded() bluetoothd[11270]: src/adapter.c:get_ltk_info() A4:15:66:88:5E:9A bluetoothd[11270]: src/device.c:device_create_from_storage() address A4:15:66:88:5E:9A bluetoothd[11270]: src/device.c:device_new() address A4:15:66:88:5E:9A bluetoothd[11270]: src/device.c:device_new() Creating device /org/bluez/hci0/dev_A4_15_66_88_5E_9A bluetoothd[11270]: src/device.c:device_set_bonded() bluetoothd[11270]: src/adapter.c:load_link_keys() hci0 keys 2 debug_keys 0 bluetoothd[11270]: src/adapter.c:load_ltks() hci0 keys 0 bluetoothd[11270]: src/adapter.c:load_connections() sending get connections command for index 0 bluetoothd[11270]: src/adapter.c:adapter_service_insert() /org/bluez/hci0 bluetoothd[11270]: src/adapter.c:add_uuid() sending add uuid command for index 0 bluetoothd[11270]: src/adapter.c:set_did() hci0 source 2 vendor 1d6b product 246 version 513 bluetoothd[11270]: src/adapter.c:adapter_register() Adapter /org/bluez/hci0 registered bluetoothd[11270]: src/adapter.c:set_dev_class() sending set device class command for index 0 bluetoothd[11270]: src/adapter.c:set_name() sending set local name command for index 0 bluetoothd[11270]: src/adapter.c:set_mode() sending set mode command for index 0 bluetoothd[11270]: src/adapter.c:set_mode() sending set mode command for index 0 bluetoothd[11270]: src/adapter.c:adapter_start() adapter /org/bluez/hci0 has been enabled bluetoothd[11270]: src/adapter.c:trigger_passive_scanning() bluetoothd[11270]: plugins/hostname.c:property_changed() static hostname: system bluetoothd[11270]: plugins/hostname.c:property_changed() pretty hostname: bluetoothd[11270]: plugins/hostname.c:update_name() name: system bluetoothd[11270]: src/adapter.c:adapter_set_name() name: system bluetoothd[11270]: plugins/hostname.c:property_changed() chassis: desktop bluetoothd[11270]: plugins/hostname.c:update_class() major: 0x01 minor: 0x01 bluetoothd[11270]: src/adapter.c:load_link_keys_complete() link keys loaded for hci0 bluetoothd[11270]: src/adapter.c:load_ltks_complete() LTKs loaded for hci0 bluetoothd[11270]: src/adapter.c:get_connections_complete() Connection count: 0 And then I press the ps button of the PS4 controller bluetoothd[11270]: src/adapter.c:connected_callback() hci0 device A4:15:66:C1:0D:2A connected eir_len 5 bluetoothd[11270]: profiles/input/server.c:connect_event_cb() Incoming connection from A4:15:66:C1:0D:2A on PSM 17 bluetoothd[11270]: profiles/input/device.c:input_device_set_channel() idev (nil) psm 17 bluetoothd[11270]: Refusing input device connect: No such file or directory (2) bluetoothd[11270]: profiles/input/server.c:confirm_event_cb() bluetoothd[11270]: Refusing connection from A4:15:66:C1:0D:2A: unknown device bluetoothd[11270]: src/adapter.c:dev_disconnected() Device 
A4:15:66:C1:0D:2A disconnected, reason 3 bluetoothd[11270]: src/adapter.c:adapter_remove_connection() bluetoothd[11270]: plugins/policy.c:disconnect_cb() reason 3 bluetoothd[11270]: src/adapter.c:bonding_attempt_complete() hci0 bdaddr A4:15:66:C1:0D:2A type 0 status 0xe bluetoothd[11270]: src/device.c:device_bonding_complete() bonding (nil) status 0x0e bluetoothd[11270]: src/device.c:device_bonding_failed() status 14 bluetoothd[11270]: src/adapter.c:resume_discovery() So I don't know what is happening here and a bit of help would be appreciated.

    Read the article

  • MySQL Cluster 7.3: On-Demand Webinar and Q&A Available

    - by Mat Keep
    The on-demand webinar for the MySQL Cluster 7.3 Development Release is now available. You can learn more about the design, implementation and getting started with all of the new MySQL Cluster 7.3 features from the comfort and convenience of your own device, including: - Foreign Key constraints in MySQL Cluster - Node.js NoSQL API - Auto-installation of high-performance distributed clusters We received some great questions over the course of the webinar, and I wanted to share those for the benefit of a broader audience. Q. What Foreign Key actions are supported? A. The core referential actions defined in the SQL:2003 standard are implemented: CASCADE RESTRICT NO ACTION SET NULL Q. Where are Foreign Keys implemented, i.e. data nodes or SQL nodes? A. They are implemented in the data nodes, so they can be enforced for both the SQL and NoSQL APIs. Q. Are they compatible with the InnoDB Foreign Key implementation? A. Yes, with the following exceptions: - InnoDB doesn’t support “No Action” constraints, MySQL Cluster does - You can choose to suspend FK constraint enforcement with InnoDB using the FOREIGN_KEY_CHECKS parameter; at the moment, MySQL Cluster ignores that parameter. - You cannot set up FKs between 2 tables where one is stored using MySQL Cluster and the other InnoDB. - You cannot change primary keys through the NDB API which means that the MySQL Server actually has to simulate such operations by deleting and re-adding the row. If the PK in the parent table has a FK constraint on it then this causes non-ideal behaviour. With Restrict or No Action constraints, the change will result in an error. With Cascaded constraints, you’d want the rows in the child table to be updated with the new FK value, but the implicit delete of the row from the parent table would remove the associated rows from the child table and the subsequent implicit insert into the parent wouldn’t reinstate the child rows. For this reason, an attempt to add an ON UPDATE CASCADE where the parent column is a primary key will be rejected. Q. Does adding or dropping Foreign Keys cause downtime due to a schema change? A. Nope, this is an online operation. MySQL Cluster supports a number of on-line schema changes, i.e. adding and dropping indexes, adding columns, etc. Q. Where can I see an example of node.js with MySQL Cluster? A. Check out the tutorial and download the code from GitHub Q. Can I get a demo? A. Check out the tutorial. You can download the code from http://labs.mysql.com/ - go to the Select Build drop-down box. Q. Can I use the auto-installer to support remote deployments? How about setting up MySQL Cluster 7.2? A. Yes to both! Q. What is the minimum internet speed required for a geo-distributed cluster with synchronous replication? A. If you're splitting your cluster between sites then we recommend a network latency of 20ms or less. Alternatively, use MySQL asynchronous replication where the latency of your WAN doesn't impact the latency of your reads/writes. Q. Where can one learn more about the PayPal project with MySQL Cluster? A. Take a look at the following - you'll find press coverage, a video and slides from their keynote presentation. So, if you want to learn more, listen to the new MySQL Cluster 7.3 on-demand webinar. MySQL Cluster 7.3 is still in the development phase, so it would be great to get your feedback on these new features, and things you want to see!

    Read the article

  • All libGDX input statements are returning TRUE at once

    - by MowDownJoe
    I'm fooling around with Box2D and libGDX and running into a peculiar problem with polling for input. Here's the code for the Screen's render() loop: @Override public void render(float delta) { Gdx.gl20.glClearColor(0, 0, .2f, 1); Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT); camera.update(); game.batch.setProjectionMatrix(camera.combined); debugRenderer.render(world, camera.combined); if(Gdx.input.isButtonPressed(Keys.LEFT)){ Gdx.app.log("Input", "Left is being pressed."); pushyThingyBody.applyForceToCenter(-10f, 0); } if(Gdx.input.isButtonPressed(Keys.RIGHT)){ Gdx.app.log("Input", "Right is being pressed."); pushyThingyBody.applyForceToCenter(10f, 0); } world.step((1f/45f), 6, 2); } And the constructor is largely just setting up the World, Box2DDebugRenderer, and all the Bodies in the world: public SandBox(PhysicsSandboxGame game) { this.game = game; camera = new OrthographicCamera(800, 480); camera.setToOrtho(false); world = new World(new Vector2(0, -9.8f), true); debugRenderer = new Box2DDebugRenderer(); BodyDef bodyDef = new BodyDef(); bodyDef.type = BodyType.DynamicBody; bodyDef.position.set(100, 300); body = world.createBody(bodyDef); CircleShape circle = new CircleShape(); circle.setRadius(6f); FixtureDef fixtureDef = new FixtureDef(); fixtureDef.shape = circle; fixtureDef.density = .5f; fixtureDef.friction = .4f; fixtureDef.restitution = .6f; fixture = body.createFixture(fixtureDef); circle.dispose(); BodyDef groundBodyDef = new BodyDef(); groundBodyDef.position.set(new Vector2(0, 10)); groundBody = world.createBody(groundBodyDef); PolygonShape groundBox = new PolygonShape(); groundBox.setAsBox(camera.viewportWidth, 10f); groundBody.createFixture(groundBox, 0f); groundBox.dispose(); BodyDef pushyThingyBodyDef = new BodyDef(); pushyThingyBodyDef.type = BodyType.DynamicBody; pushyThingyBodyDef.position.set(new Vector2(400, 30)); pushyThingyBody = world.createBody(pushyThingyBodyDef); PolygonShape pushyThingyShape = new PolygonShape(); pushyThingyShape.setAsBox(40f, 10f); FixtureDef pushyThingyFixtureDef = new FixtureDef(); pushyThingyFixtureDef.shape = pushyThingyShape; pushyThingyFixtureDef.density = .4f; pushyThingyFixtureDef.friction = .1f; pushyThingyFixtureDef.restitution = .5f; pushyFixture = pushyThingyBody.createFixture(pushyThingyFixtureDef); pushyThingyShape.dispose(); } Testing this on the desktop. Basically, whenever I hit the appropriate keys, neither of the if statements in the loop return true. However, when I click in the window, both statements return true, resulting in a 0 net force on the body. Why is this?

    Read the article

  • Difficulties with rotation of a sprite

    - by Johnny
    I want to program a dolphin that jumps and rotates like a real dolphin. Jumping is not the problem, but I don't know how to make the rotation. At the moment, my dolphin rotates a little oddly, but I want it to rotate the way a real dolphin does. How can I improve the rotation? public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; Texture2D image, water; float Gravity = 5.0F; float Acceleration = 20.0F; Vector2 Position = new Vector2(1200,720); Vector2 Velocity; float rotation = 0; SpriteEffects flip; Vector2 Speed = new Vector2(0, 0); public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; graphics.PreferredBackBufferWidth = 1280; graphics.PreferredBackBufferHeight = 720; } protected override void Initialize() { base.Initialize(); } protected override void LoadContent() { spriteBatch = new SpriteBatch(GraphicsDevice); image = Content.Load<Texture2D>("cartoondolphin"); water = Content.Load<Texture2D>("background"); flip = SpriteEffects.None; } protected override void Update(GameTime gameTime) { float VelocityX = 0f; float VelocityY = 0f; float time = (float)gameTime.ElapsedGameTime.TotalSeconds; KeyboardState kbState = Keyboard.GetState(); if(kbState.IsKeyDown(Keys.Left)) { rotation = 0; flip = SpriteEffects.None; VelocityX += -5f; } if(kbState.IsKeyDown(Keys.Right)) { rotation = 0; flip = SpriteEffects.FlipHorizontally; VelocityX += 5f; } // jump if the dolphin is under water if(Position.Y >= 670) { if (kbState.IsKeyDown(Keys.A)) { if (flip == SpriteEffects.None) { rotation += 0.01f; VelocityY += 40f; } else { rotation -= 0.01f; VelocityY += 40f; } } } else { if (flip == SpriteEffects.None) { rotation -= 0.01f; VelocityY += -10f; } else { rotation += 0.01f; VelocityY += -10f; } } float deltaY = 0; float deltaX = 0; deltaY = Gravity * (float)gameTime.ElapsedGameTime.TotalSeconds; deltaX += VelocityX * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration; deltaY += -VelocityY * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration; Speed = new Vector2(Speed.X + deltaX, Speed.Y + deltaY); Position += Speed * (float)gameTime.ElapsedGameTime.TotalSeconds; Velocity.X = 0; if (Position.Y + image.Height/2 > graphics.PreferredBackBufferHeight) Position.Y = graphics.PreferredBackBufferHeight - image.Height/2; base.Update(gameTime); } protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); spriteBatch.Begin(); spriteBatch.Draw(water, new Rectangle(0, graphics.PreferredBackBufferHeight -100, graphics.PreferredBackBufferWidth, 100), Color.White); spriteBatch.Draw(image, Position, null, Color.White, rotation, new Vector2(image.Width / 2, image.Height / 2), 1, flip, 1); spriteBatch.End(); base.Draw(gameTime); } }
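    One common way to get a more dolphin-like arc (a hypothetical sketch, not code from the post) is to stop nudging rotation by a fixed 0.01f per frame and instead aim the sprite along its current velocity while it is airborne, easing towards that angle. Something along these lines could replace the rotation adjustments inside Update, under the assumption that Speed is the per-second velocity used to move Position and that screen Y grows downwards:

        // Hypothetical: orient the dolphin along its velocity while it is in the air.
        if (Position.Y < 670)
        {
            float targetRotation = (flip == SpriteEffects.None)
                ? (float)Math.Atan2(-Speed.Y, -Speed.X)   // unflipped artwork faces left
                : (float)Math.Atan2(Speed.Y, Speed.X);    // flipped artwork faces right
            // Ease towards the target so the turn looks smooth instead of snapping.
            rotation = MathHelper.Lerp(rotation, targetRotation,
                5f * (float)gameTime.ElapsedGameTime.TotalSeconds);
        }

    Lerp can take the long way around when the angle crosses the -PI/+PI boundary, so a fuller version would wrap the angles, but it illustrates the idea of deriving rotation from velocity rather than accumulating fixed increments.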

    Read the article

  • Physics like asteroides

    - by user2933016
    I am trying to make a ship that has physics like the ship in Asteroids. I have this for now (all in Java): Ship.class public class Ship { public static final float sMaxHealth = 0.1F; public static final float sMaxMoveVelocity = 5.0F; public static final float sMaxAngleVelocity = 20.0F; public static final float sRadius = 1.0F; public static final float sMoveDeceleration = 10.0F; public static final float sMoveAcceleration = 2.0F; public static final float sAngleDeceleration = 15.0F; public static final float sAngleAcceleration = 20.0F; private float mHealth; private float mXVelocity; private float mYVelocity; private float mAngleVelocity; private float mX; private float mY; private float mAngle; } (I left the getters and setters out for now) Controller code // Player input if(Gdx.input.isKeyPressed(Keys.UP)) { mPlayer.setXVelocity(mPlayer.getXVelocity() + (float) Math.cos(mPlayer.getAngle()) * Ship.sMoveAcceleration); mPlayer.setYVelocity(mPlayer.getYVelocity() + (float) Math.sin(mPlayer.getAngle()) * Ship.sMoveAcceleration); } if(Gdx.input.isKeyPressed(Keys.LEFT)) { mPlayer.setAngleVelocity(mPlayer.getAngleVelocity() + Ship.sAngleAcceleration * pDeltaTime); } if(Gdx.input.isKeyPressed(Keys.RIGHT)) { mPlayer.setAngleVelocity(mPlayer.getAngleVelocity() - Ship.sAngleAcceleration * pDeltaTime); } // X velocity if(mPlayer.getXVelocity() < 0) { if(-mPlayer.getXVelocity() > Ship.sMaxMoveVelocity) { mPlayer.setXVelocity(-Ship.sMaxMoveVelocity); } mPlayer.setXVelocity(mPlayer.getXVelocity() + Ship.sMoveDeceleration * pDeltaTime); if(mPlayer.getXVelocity() > 0) { mPlayer.setXVelocity(0); } } else if(mPlayer.getXVelocity() > 0) { if(mPlayer.getXVelocity() > Ship.sMaxMoveVelocity) { mPlayer.setXVelocity(Ship.sMaxMoveVelocity); } mPlayer.setXVelocity(mPlayer.getXVelocity() - Ship.sMoveDeceleration * pDeltaTime); if(mPlayer.getXVelocity() < 0) { mPlayer.setXVelocity(0); } } // Y velocity if(mPlayer.getYVelocity() < 0) { if(-mPlayer.getYVelocity() > Ship.sMaxMoveVelocity) { mPlayer.setYVelocity(-Ship.sMaxMoveVelocity); } mPlayer.setYVelocity(mPlayer.getYVelocity() + Ship.sMoveDeceleration * pDeltaTime); if(mPlayer.getYVelocity() > 0) { mPlayer.setYVelocity(0); } } else if(mPlayer.getYVelocity() > 0) { if(mPlayer.getYVelocity() > Ship.sMaxMoveVelocity) { mPlayer.setYVelocity(Ship.sMaxMoveVelocity); } mPlayer.setYVelocity(mPlayer.getYVelocity() - Ship.sMoveDeceleration * pDeltaTime); if(mPlayer.getYVelocity() < 0) { mPlayer.setYVelocity(0); } } // Angle velocity if(mPlayer.getAngleVelocity() < 0) { if(-mPlayer.getAngleVelocity() > Ship.sMaxAngleVelocity) { mPlayer.setAngleVelocity(-Ship.sMaxAngleVelocity); } mPlayer.setAngleVelocity(mPlayer.getAngleVelocity() + Ship.sAngleDeceleration * pDeltaTime); if(mPlayer.getAngleVelocity() > 0) { mPlayer.setAngleVelocity(0); } } else if(mPlayer.getAngleVelocity() > 0) { if(mPlayer.getAngleVelocity() > Ship.sMaxAngleVelocity) { mPlayer.setAngleVelocity(Ship.sMaxAngleVelocity); } mPlayer.setAngleVelocity(mPlayer.getAngleVelocity() - Ship.sAngleDeceleration * pDeltaTime); if(mPlayer.getAngleVelocity() < 0) { mPlayer.setAngleVelocity(0); } } mPlayer.setX(mPlayer.getX() + mPlayer.getXVelocity() * pDeltaTime); mPlayer.setY(mPlayer.getY() + mPlayer.getYVelocity() * pDeltaTime); mPlayer.setAngle(mPlayer.getAngle() + mPlayer.getAngleVelocity() * pDeltaTime); Why does the ship not behave like the one in Asteroids? What am I doing wrong?

    Read the article

  • Programmatically disclosing a node in af:tree and af:treeTable

    - by Frank Nimphius
    A common developer requirement when working with af:tree or af:treeTable components is to programmatically disclose (expand) a specific node in the tree. If the node to disclose is not a top level node, like a location in a LocationsView -> DepartmentsView -> EmployeesView hierarchy, you need to also disclose the node's parent node hierarchy for application users to see the fully expanded tree node structure. Working on ADF Code Corner sample #101, I wrote the following code lines that show a generic option for disclosing a tree node starting from a handle to the node to disclose. The use case in ADF Code Corner sample #101 is a drag and drop operation from a table component to a tree to relocate employees to a new department. The tree node that receives the drop is a department node contained in a location. In theory the location could be part of a country and so on to indicate the depth the tree may have. Based on this structure, the code below provides a generic solution to parse the current node's parent nodes and its child nodes. The drop event provided a rowKey for the tree node that received the drop. Like in af:table, the tree row key is not of type oracle.jbo.domain.Key but an implementation of java.util.List that contains the row keys. The JUCtrlHierBinding class in the ADF Binding layer that represents the ADF tree binding at runtime provides a method named findNodeByKeyPath that allows you to get a handle to the JUCtrlHierNodeBinding instance that represents a tree node in the binding layer. CollectionModel model = (CollectionModel) your_af_tree_reference.getValue(); JUCtrlHierBinding treeBinding = (JUCtrlHierBinding) model.getWrappedData(); JUCtrlHierNodeBinding treeDropNode = treeBinding.findNodeByKeyPath(dropRowKey); To disclose the tree node, you need to create a RowKeySet, which you do using the RowKeySetImpl class. Because the RowKeySet replaces any existing row key set in the tree, all other nodes are automatically closed. RowKeySetImpl rksImpl = new RowKeySetImpl(); //the first key to add is the node that received the drop //operation (departments). rksImpl.add(dropRowKey); Similarly, the root node can be obtained from the tree binding. The root node is the end of all parent node iteration and therefore important. JUCtrlHierNodeBinding rootNode = treeBinding.getRootNodeBinding(); The following code obtains a reference to the hierarchy of parent nodes until the root node is found. JUCtrlHierNodeBinding dropNodeParent = treeDropNode.getParent(); //walk up the tree to expand all parent nodes while(dropNodeParent != null && dropNodeParent != rootNode){ //add the node's keyPath (remember it's a List) to the row key set rksImpl.add(dropNodeParent.getKeyPath()); dropNodeParent = dropNodeParent.getParent(); } Next, you disclose the drop node's immediate child nodes, as otherwise all you see is the department node. 
It's not quite exactly "dinner for one", but the procedure is very similar to the one handling the parent node keys: ArrayList<JUCtrlHierNodeBinding> childList = (ArrayList<JUCtrlHierNodeBinding>) treeDropNode.getChildren(); for(JUCtrlHierNodeBinding nb : childList){ rksImpl.add(nb.getKeyPath()); } Next, the row key set is defined as the disclosed row keys on the tree, so when you refresh (PPR) the tree, the new disclosed state shows: tree.setDisclosedRowKeys(rksImpl); AdfFacesContext.getCurrentInstance().addPartialTarget(tree.getParent()); The refresh in my use case is on the tree parent component (a layout container), which usually shows the best effect for refreshing the tree component.

    Read the article

  • Using LINQ to Twitter OAuth with Windows 8

    - by Joe Mayo
    In previous posts, I explained how to use LINQ to Twitter with Windows 8, but the example was a Twitter Search, which didn’t require authentication. Much of the Twitter API requires authentication, so this post will explain how you can perform OAuth authentication with LINQ to Twitter in a Windows 8 Metro-style application. Getting Started I have earlier posts on how to create a Windows 8 app and add pages, so I’ll assume it isn’t necessary to repeat here. One difference is that I’m using Visual Studio 2012 RC and some of the terminology and/or library code might be slightly different.  Here are steps to get started: Create a new Windows metro style app, selecting the Blank App project template. Create a new Basic Page and name it OAuth.xaml.  Note: You’ll receive a prompt window for adding files and you should click Yes because those files are necessary for this demo. Add a new Basic Page named TweetPage.xaml. Open App.xaml.cs and change !rootFrame.Navigate(typeof(MainPage)) to !rootFrame.Navigate(typeof(TweetPage)). Now that the project is set up you’ll see the reason why authentication is required by setting up the TweetPage. Setting Up to Tweet a Status In this section, I’ll show you how to set up the XAML and code-behind for a tweet.  The tweet logic will check to see if the user is authenticated before performing the tweet. To tweet, I put a TextBox and Button on the XAML page. The following code omits most of the page, concentrating primarily on the elements of interest in this post: <StackPanel Grid.Row="1"> <TextBox Name="TweetTextBox" Margin="15" /> <Button Name="TweetButton" Content="Tweet" Click="TweetButton_Click" Margin="15,0" /> </StackPanel> Given the UI above, the user types the message they want to tweet, and taps Tweet. This invokes TweetButton_Click, which checks to see if the user is authenticated.  If the user is not authenticated, the app navigates to the OAuth page.  If they are authenticated, LINQ to Twitter does an UpdateStatus to post the user’s tweet.  Here’s the TweetButton_Click implementation: void TweetButton_Click(object sender, RoutedEventArgs e) { PinAuthorizer auth = null; if (SuspensionManager.SessionState.ContainsKey("Authorizer")) { auth = SuspensionManager.SessionState["Authorizer"] as PinAuthorizer; } if (auth == null || !auth.IsAuthorized) { Frame.Navigate(typeof(OAuthPage)); return; } var twitterCtx = new TwitterContext(auth); Status tweet = twitterCtx.UpdateStatus(TweetTextBox.Text); new MessageDialog(tweet.Text, "Successful Tweet").ShowAsync(); } For authentication, this app uses PinAuthorizer, one of several authorizers available in the LINQ to Twitter library. I’ll explain how PinAuthorizer works in the next section. What’s important here is that LINQ to Twitter needs an authorizer to post a Tweet. The code above checks to see if a valid authorizer is available. To do this, it uses the SuspensionManager class, which is part of the code generated earlier when creating OAuthPage.xaml. The SessionState property is a Dictionary<string, object> and I’m using the Authorizer key to store the PinAuthorizer.  If the user previously authorized during this session, the code reads the PinAuthorizer instance from SessionState and assigns it to the auth variable. If the user is authorized, auth would not be null and IsAuthorized would be true. Otherwise, the app navigates the user to OAuthPage.xaml, which I’ll discuss in more depth in the next section. When the user is authorized, the code passes the authorizer, auth, to the TwitterContext constructor. 
LINQ to Twitter uses the auth instance to build OAuth signatures for each interaction with Twitter.  You no longer need to write any more code to make this happen. The code above accepts the tweet just posted in the Status instance, tweet, and displays a message with the text to confirm success to the user. You can pull the PinAuthorizer instance from SessionState, instantiate your TwitterContext, and use it as you need. Just remember to make sure you have a valid authorizer, like the code above. As shown earlier, the code navigates to OAuthPage.xaml when a valid authorizer isn’t available. The next section shows how to perform the authorization upon arrival at OAuthPage.xaml. Doing the OAuth Dance This section shows how to authenticate with LINQ to Twitter’s built-in OAuth support. From the user perspective, they must be navigated to the Twitter authentication page, add credentials, be navigated to a Pin number page, and then enter that Pin in the Windows 8 application. The following XAML shows the relevant elements that the user will interact with during this process. <StackPanel Grid.Row="2"> <WebView x:Name="OAuthWebBrowser" HorizontalAlignment="Left" Height="400" Margin="15" VerticalAlignment="Top" Width="700" /> <TextBlock Text="Please perform OAuth process (above), enter Pin (below) when ready, and tap Authenticate:" Margin="15,15,15,5" /> <TextBox Name="PinTextBox" Margin="15,0,15,15" Width="432" HorizontalAlignment="Left" IsEnabled="False" /> <Button Name="AuthenticatePinButton" Content="Authenticate" Margin="15" IsEnabled="False" Click="AuthenticatePinButton_Click" /> </StackPanel> The WebView in the code above is what allows the user to see the Twitter authentication page. The TextBox is for entering the Pin, and the Button invokes code that will take the Pin and allow LINQ to Twitter to complete the authentication process. As you can see, there are several steps to OAuth authentication, but LINQ to Twitter tries to minimize the amount of code you have to write. The two important parts of the code to make this happen are the part that starts the authentication process and the part that completes the authentication process. The following code, from OAuthPage.xaml.cs, shows a couple events that are instrumental in making this process happen: public OAuthPage() { this.InitializeComponent(); this.Loaded += OAuthPage_Loaded; OAuthWebBrowser.LoadCompleted += OAuthWebBrowser_LoadCompleted; } The OAuthWebBrowser_LoadCompleted event handler enables UI controls when the browser is done loading – notice that the TextBox and Button in the previous XAML have their IsEnabled attributes set to False. When the Page.Loaded event is invoked, the OAuthPage_Loaded handler starts the OAuth process, shown here: void OAuthPage_Loaded(object sender, RoutedEventArgs e) { auth = new PinAuthorizer { Credentials = new InMemoryCredentials { ConsumerKey = "", ConsumerSecret = "" }, UseCompression = true, GoToTwitterAuthorization = pageLink => Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => OAuthWebBrowser.Navigate(new Uri(pageLink, UriKind.Absolute))) }; auth.BeginAuthorize(resp => Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => { switch (resp.Status) { case TwitterErrorStatus.Success: break; case TwitterErrorStatus.RequestProcessingException: case TwitterErrorStatus.TwitterApiError: new MessageDialog(resp.Error.ToString(), resp.Message).ShowAsync(); break; } })); } The PinAuthorizer, auth, a field of this class instantiated in the code above, assigns keys to the Credentials property. 
These are credentials that come from registering an application with Twitter, explained in the LINQ to Twitter documentation, Securing Your Applications. Notice how I use Dispatcher.RunAsync to marshal the web browser navigation back onto the UI thread. Internally, LINQ to Twitter invokes the lambda expression assigned to GoToTwitterAuthorization when starting the OAuth process.  In this case, we want the WebView control to navigate to the Twitter authentication page, which is defined with a default URL in LINQ to Twitter and passed to the GoToTwitterAuthorization lambda as pageLink. Then you need to start the authorization process by calling BeginAuthorize. This starts the OAuth dance, running asynchronously.  LINQ to Twitter invokes the callback assigned to the BeginAuthorize parameter, allowing you to take whatever action you need, based on the Status of the response, resp. As mentioned earlier, this is where the user performs the authentication process, enters the Pin, and clicks authenticate. The handler for authenticate completes the process and saves the authorizer for subsequent use by the application, as shown below: void AuthenticatePinButton_Click(object sender, RoutedEventArgs e) { auth.CompleteAuthorize( PinTextBox.Text, completeResp => Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => { switch (completeResp.Status) { case TwitterErrorStatus.Success: SuspensionManager.SessionState["Authorizer"] = auth; Frame.Navigate(typeof(TweetPage)); break; case TwitterErrorStatus.RequestProcessingException: case TwitterErrorStatus.TwitterApiError: new MessageDialog(completeResp.Error.ToString(), completeResp.Message).ShowAsync(); break; } })); } The PinAuthorizer CompleteAuthorize method takes two parameters: Pin and callback. The Pin is from what the user entered in the TextBox prior to clicking the Authenticate button that invoked this method. The callback handles the response from completing the OAuth process. The completeResp holds information about the results of the operation, indicated by a Status property of type TwitterErrorStatus. On success, the code assigns auth to SessionState. You might remember SessionState from the previous description of TweetPage – this is where the valid authorizer comes from. After saving the authorizer, the code navigates the user back to TweetPage, where they can type in a message, click the Tweet button, and observe that they have successfully tweeted. Summary You’ve seen how to get started with using LINQ to Twitter in a Metro-style application. The generated code contained a SuspensionManager class with way to manage information across multiple pages via its SessionState property. You also saw how LINQ to Twitter performs authorization in two steps of starting the process and completing the process when the user provides a Pin number. Remember to marshal callback thread back onto the UI – you saw earlier how to use Dispatcher.RunAsync to accomplish this. There were a few steps in the process, but LINQ to Twitter did minimize the amount of code you needed to write to make it happen. You can download the MetroOAuthDemo.zip sample on the LINQ to Twitter Samples Page.   @JoeMayo
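    As a quick recap of the pattern described above (a minimal sketch; the home-timeline query simply mirrors the standard LINQ to Twitter examples), any other page can pull the saved authorizer back out of SessionState and query with it:

        PinAuthorizer auth = null;
        if (SuspensionManager.SessionState.ContainsKey("Authorizer"))
        {
            auth = SuspensionManager.SessionState["Authorizer"] as PinAuthorizer;
        }

        if (auth != null && auth.IsAuthorized)
        {
            var twitterCtx = new TwitterContext(auth);

            // Example query: read the authenticated user's home timeline.
            var tweets =
                (from tweet in twitterCtx.Status
                 where tweet.Type == StatusType.Home
                 select tweet)
                .ToList();
        }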

    Read the article

  • Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

    - by darrenm
    Solaris 11 brought both ZFS encryption and the Immutable Zones feature and I've talked about the combination in the past.  Solaris 11.1 adds a fully supported method of storing zones in their own ZFS using shared storage so lets update things a little and put all three parts together. When using an iSCSI (or other supported shared storage target) for a Zone we can either let the Zones framework setup the ZFS pool or we can do it manually before hand and tell the Zones framework to use the one we made earlier.  To enable encryption we have to take the second path so that we can setup the pool with encryption before we start to install the zones on it. We start by configuring the zone and specifying an rootzpool resource: # zonecfg -z eizoss Use 'create' to begin configuring a new zone. zonecfg:eizoss> create create: Using system default template 'SYSdefault' zonecfg:eizoss> set zonepath=/zones/eizoss zonecfg:eizoss> set file-mac-profile=fixed-configuration zonecfg:eizoss> add rootzpool zonecfg:eizoss:rootzpool> add storage \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 zonecfg:eizoss:rootzpool> end zonecfg:eizoss> verify zonecfg:eizoss> commit zonecfg:eizoss> Now lets create the pool and specify encryption: # suriadm map \ iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 PROPERTY VALUE mapped-dev /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # echo "zfscrypto" > /zones/p # zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \ /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0 # zpool export eizoss Note that the keysource example above is just for this example, realistically you should probably use an Oracle Key Manager or some other better keystorage, but that isn't the purpose of this example.  Note however that it does need to be one of file:// https:// pkcs11: and not prompt for the key location.  Also note that we exported the newly created pool.  The name we used here doesn't actually mater because it will get set properly on import anyway. So lets go ahead and do our install: zoneadm -z eizoss install -x force-zpool-import Configured zone storage resource(s) from: iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001 Imported zone zpool: eizoss_rpool Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install Image: Preparing at /zones/eizoss/root. AI Manifest: /tmp/manifest.xml.ujaq54 SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: eizoss Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/solaris/release/ Please review the licenses for the following packages post-install: consolidation/osnet/osnet-incorporation (automatically accepted, not displayed) Package licenses may be viewed using the command: pkg info --license <pkg_fmri> DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 187/187 33575/33575 227.0/227.0 384k/s PHASE ITEMS Installing new actions 47449/47449 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 929.606 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install That was really all we had to do, when the install is done boot it up as normal. The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool zone is stored on.  Due to how inheritance works in ZFS he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone) or he can create encrypted datasets inside the zone that use keys of his own choosing, the output below shows the two cases: rpool is inheriting the key material from the global zone (note we can see the value of the keysource property but we don't use it inside the zone nor does that path need to be (or is) accessible inside the zone). Whereas rpool/export/home/bob has set keysource locally. # zfs get encryption,keysource rpool rpool/export/home/bob NAME PROPERTY VALUE SOURCE rpool encryption on inherited from $globalzone rpool keysource passphrase,file:///zones/p inherited from $globalzone rpool/export/home/bob encryption on local rpool/export/home/bob keysource passphrase,prompt local  

    Read the article
