Search Results

Search found 7669 results on 307 pages for 'dealing with clients'.


  • Cluster failover and strange gratuitous arp behavior

    - by lazerpld
    I am experiencing a strange Windows 2008 R2 cluster-related issue that is bothering me. I feel that I have come close to what the issue is, but still don't fully understand what is happening. I have a two-node Exchange 2007 cluster running on two 2008 R2 servers. The Exchange cluster application works fine when running on the "primary" cluster node. The problem occurs when failing over the cluster resource to the secondary node. When failing over the cluster to the "secondary" node, which is on the same subnet as the "primary", the failover initially works OK and the cluster resource continues to work for a couple of minutes on the new node, which means that the receiving node does send out a gratuitous ARP reply that updates the ARP tables on the network. But after some amount of time (typically within 5 minutes) something updates the ARP tables again, because all of a sudden the cluster service no longer answers pings. So basically: I start a ping to the Exchange cluster address while it is running on the "primary" node. It works just great. I fail the cluster resource group over to the "secondary" node and I only lose one ping, which is acceptable. The cluster resource still answers for some time after being failed over, and then all of a sudden the ping starts timing out. This tells me that the ARP table is initially updated by the secondary node, but then something (which I haven't found yet) wrongfully updates it again, probably with the primary node's MAC. Why does this happen - has anyone experienced the same problem? The cluster is NOT running NLB, and the problem stops immediately after failing back to the primary node, where there are no problems. Each node is using Intel NIC teaming with ALB. Each node is on the same subnet and has its gateway and so on entered correctly as far as I can tell.
    Edit: I was wondering if it could be related to network binding order, because the only difference I can see from node to node is in the local ARP table. On the "primary" node the ARP table is generated with the cluster address as the source, while on the "secondary" it is generated from the node's own network card. Any input on this?
    Edit: OK, here is the connection layout. Cluster address: A.B.6.208/25. Exchange application address: A.B.6.212/25. Node A: 3 physical NICs - two teamed using Intel's teaming with the address A.B.6.210/25, called public; the last one used for cluster traffic, called private, with 10.0.0.138/24. Node B: 3 physical NICs - two teamed using Intel's teaming with the address A.B.6.211/25, called public; the last one used for cluster traffic, called private, with 10.0.0.139/24. Each node sits in a separate datacenter, connected together; the end switches are Cisco in DC1 and Nexus 5000/2000 in DC2.
    Edit: I have been testing a little more. I have now created an empty application on the same cluster and given it another IP address on the same subnet as the Exchange application. After failing this empty application over, I see exactly the same problem occurring. After one or two minutes, clients on other subnets cannot ping the virtual IP of the application - yet while clients on other subnets cannot, another server from another cluster on the same subnet has no trouble pinging it. And if I then fail back to the original state, the situation is the opposite: now clients on the same subnet cannot, and clients on other subnets can.
    We have another cluster set up the same way and on the same subnet, with the same Intel network cards, the same drivers and the same teaming settings. There we are not seeing this, so it's somewhat confusing.
    Edit: OK, done some more research. I removed the NIC teaming on the secondary node, since it didn't work anyway. After some standard problems following that, I finally managed to get it up and running again with the old NIC teaming settings on one single physical network card. Now I am not able to reproduce the problem described above, so it is somehow related to the teaming - maybe some kind of bug?
    Edit: Did some more failing over without being able to make it fail, so removing the NIC team looks like it was a workaround. I then tried to re-establish the Intel NIC teaming with ALB (as it was before) and I still cannot make it fail. This is annoying, because now I actually cannot pinpoint the root of the problem; it just seems to be some kind of MS/Intel hiccup - which is hard to accept, because what if the problem reoccurs in 14 days? One strange thing did happen, though: after recreating the NIC team I was not able to rename the team to "PUBLIC", which is what the old team was called. So something has not been cleaned up in Windows - although the server HAS been restarted!
    Edit: OK, after re-establishing the ALB teaming the error came back, so I am now going to do some thorough testing and will get back with my observations. One thing is for sure: it is related to the Intel 82575EB NICs, ALB and gratuitous ARP.
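    A quick way to confirm whether the ARP entry really is being rewritten, and by which MAC, is to watch ARP traffic for the application address while reproducing the failover. A rough sketch for a Linux host sitting in the same VLAN (substitute the real application address for the placeholder; on Windows clients, arp -a gives the equivalent cache view):

        # show every ARP reply / gratuitous ARP claiming the application IP, with source MACs
        tcpdump -n -e -i eth0 arp and host A.B.6.212
        # in a second terminal, watch the cached MAC flip when the pings start failing
        watch -n 2 "arp -n | grep 'A.B.6.212'"

    If the cached MAC flips to one of the team's member adapters rather than staying on the address the new owner advertised, that points at the ALB team answering ARP from the wrong member rather than at the cluster itself.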

    Read the article

  • sshd running but no PID file

    - by dunxd
    I recently started using monit to monitor the status of sshd on my CentOS 5.4 server. This works fine, but every so often monit reports that sshd is no longer running. This isn't true - I am still able to log in to the server via ssh - however I note the following: There is no longer any PID file at /var/run/sshd.pid; after a reboot this file exists. Once it is gone, restarting sshd via service sshd restart does not create the PID file. sudo service sshd status reports openssh-daemon is stopped - again, restarting sshd does not change this, but a reboot does. sudo service sshd stop reports failed, presumably because of the missing PID file. Any idea what is going on?
    Update: sudo netstat -lptun gives the following output relating to port 22: tcp 0 0 :::22 :::* LISTEN 20735/sshd Killing the process with this PID as suggested by @Henry and then starting sshd via service results in service sshd status recognising the process by PID again. Would still like to understand this better. RPM verify, suggested by a couple of answerers, shows this: sudo rpm -vV openssh openssh-server openssh-clients | grep 'S\.5' S.5....T c /etc/pam.d/sshd S.5....T c /etc/ssh/sshd_config /etc/pam.d/sshd has the following contents: #%PAM-1.0 auth include system-auth account required pam_nologin.so account include system-auth password include system-auth session optional pam_keyinit.so force revoke session include system-auth #session required pam_loginuid.so Should that last line be commented out?
    Update: Here's the output of @YannickGirouard's script: $ sudo ./sshd_test Searching for the process listening on port 22... Found the following PID: 21330 Command line for PID 21330: /usr/sbin/sshd Listing process(es) relating to PID 21330: UID PID PPID C STIME TTY TIME CMD root 21330 1 0 14:04 ?
00:00:00 /usr/sbin/sshd Listing RPM information about openssh packages: Name : openssh Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:50:57 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 745390 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH implementation of SSH protocol versions 1 and 2 ------------------------------------------------------ Name : openssh-clients Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 871132 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH client applications ------------------------------------------------------ Name : openssh-server Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : System Environment/Daemons Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 492478 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH server daemon ------------------------------------------------------ However, I've since got things working by killing the process and starting afresh, as suggested by @Henry below, so perhaps I am no longer seeing the same thing. Will try again if I am seeing the issue again after next reboot. Update - 14 March Monit alerted me that sshd had disappeared, and again I am able to ssh onto the server. So now I can run the script $ sudo ./sshd_test Searching for the process listening on port 22... Found the following PID: 2208 Command line for PID 2208: /usr/sbin/sshd Listing process(es) relating to PID 2208: UID PID PPID C STIME TTY TIME CMD root 2208 1 0 Mar13 ? 00:00:00 /usr/sbin/sshd root 1885 2208 0 21:50 ? 
00:00:00 sshd: dunx [priv] Listing RPM information about openssh packages: Name : openssh Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:50:57 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 745390 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH implementation of SSH protocol versions 1 and 2 ------------------------------------------------------ Name : openssh-clients Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 871132 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH client applications ------------------------------------------------------ Name : openssh-server Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : System Environment/Daemons Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 492478 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH server daemon ------------------------------------------------------ Again, when I look for /var/run/sshd.pid I don't find it. $ cat /var/run/sshd.pid cat: /var/run/sshd.pid: No such file or directory $ sudo netstat -anp | grep sshd tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 2208/sshd $ sudo kill 2208 $ sudo service sshd start Starting sshd: [ OK ] $ cat /var/run/sshd.pid 3794 $ sudo service sshd status openssh-daemon (pid 3794) is running... Is it possible that sshd is restarting and not creating a pidfile for some reason?
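    If it happens again, the sketch below lines up the sshd that is actually listening with what the init script thinks is running, then restarts it cleanly so the PID file is recreated (same commands already used in this question; the awk parsing of the netstat output is an assumption about its exact layout):

        # which sshd owns port 22 right now?
        sudo netstat -lptn | grep ':22 '
        # does the init script's PID file exist, and does it match?
        cat /var/run/sshd.pid 2>/dev/null || echo "no pid file"
        # kill the orphaned listener, then start sshd through the init script
        pid=$(sudo netstat -lptn | awk '/:22 / && /sshd/ { split($NF, a, "/"); print a[1]; exit }')
        sudo kill "$pid"
        sudo service sshd start
        sudo service sshd status

    It is also worth checking /var/log/messages and /var/log/secure around the time monit alerts, in case something is removing files under /var/run or restarting sshd outside the init script.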

    Read the article

  • IPTables masquerading with one NIC

    - by Tuinslak
    Hi, I am running an OpenVPN server with only one NIC. This is my current layout: public.ip > Cisco firewall > lan.ip > OpenVPN server lan.ip = 192.168.22.70 The Cisco firewall forwards the requests to the oVPN server, thus so far everything works and clients are able to connect. However, all clients connected should be able to access 3 networks: lan1: 192.168.200.0 (vpn lan) > tun0 lan2: 192.168.110.0 (office lan) > eth1 (gw 192.168.22.1) lan3: 192.168.22.0 (server lan) > eth1 (broadcast network) So tun0 is mapped to eth1. Iptables output: # iptables-save # Generated by iptables-save v1.4.2 on Wed Feb 16 14:14:20 2011 *filter :INPUT ACCEPT [327:26098] :FORWARD DROP [305:31700] :OUTPUT ACCEPT [291:27378] -A INPUT -i lo -j ACCEPT -A INPUT -i tun0 -j ACCEPT -A INPUT -i ! tun0 -p udp -m udp --dport 67 -j REJECT --reject-with icmp-port-unreachable -A INPUT -i ! tun0 -p udp -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable -A FORWARD -d 192.168.200.0/24 -i tun0 -j DROP -A FORWARD -s 192.168.200.0/24 -i tun0 -j ACCEPT -A FORWARD -d 192.168.200.0/24 -i eth1 -j ACCEPT COMMIT # Completed on Wed Feb 16 14:14:20 2011 # Generated by iptables-save v1.4.2 on Wed Feb 16 14:14:20 2011 *nat :PREROUTING ACCEPT [302:26000] :POSTROUTING ACCEPT [3:377] :OUTPUT ACCEPT [49:3885] -A POSTROUTING -o eth1 -j MASQUERADE COMMIT # Completed on Wed Feb 16 14:14:20 2011 Yet, clients are unable to ping any ip (including 192.168.200.1, which is the oVPN's IP) When the machine was directly connected to the internet, with 2 NICs, it was quite simply solved with masquerading and adding static routes in the oVPN client's config. However, as masquerading won't accept virtual interfaces (eth0:0, etc) I am unable to get masquerading to work again (and I'm not even sure whether I need virtual interfaces). Thanks. 
Edit: OpenVPN server: # ifconfig eth1 Link encap:Ethernet HWaddr ba:e6:64:ec:57:ac inet addr:192.168.22.70 Bcast:192.168.22.255 Mask:255.255.255.0 inet6 addr: fe80::b8e6:64ff:feec:57ac/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:6857 errors:0 dropped:0 overruns:0 frame:0 TX packets:4044 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:584046 (570.3 KiB) TX bytes:473691 (462.5 KiB) Interrupt:14 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:334 errors:0 dropped:0 overruns:0 frame:0 TX packets:334 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:33773 (32.9 KiB) TX bytes:33773 (32.9 KiB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:192.168.200.1 P-t-P:192.168.200.2 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) ifconfig on a client: # ifconfig eth0 Link encap:Ethernet HWaddr 00:22:64:71:11:56 inet addr:192.168.110.94 Bcast:192.168.110.255 Mask:255.255.255.0 inet6 addr: fe80::222:64ff:fe71:1156/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3466 errors:0 dropped:0 overruns:0 frame:0 TX packets:1838 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:997924 (974.5 KiB) TX bytes:332406 (324.6 KiB) Interrupt:17 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:37847 errors:0 dropped:0 overruns:0 frame:0 TX packets:37847 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2922444 (2.7 MiB) TX bytes:2922444 (2.7 MiB) tun0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:192.168.200.30 P-t-P:192.168.200.29 Mask:255.255.255.255 UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:689 errors:0 dropped:18 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 B) TX bytes:468778 (457.7 KiB) wlan0 Link encap:Ethernet HWaddr 00:16:ea:db:ae:86 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:704699 errors:0 dropped:0 overruns:0 frame:0 TX packets:730176 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:520385963 (496.2 MiB) TX bytes:225210422 (214.7 MiB) static routes line at the end of the client's config (I've been playing around with the 192.168.200.0 -- (un)commenting to see if anything changes): route 192.168.200.0 255.255.255.0 route 192.168.110.0 255.255.255.0 route 192.168.22.0 255.255.255.0 route on a vpn client: # route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.200.29 0.0.0.0 255.255.255.255 UH 0 0 0 tun0 192.168.22.0 192.168.200.29 255.255.255.0 UG 0 0 0 tun0 192.168.200.0 192.168.200.29 255.255.255.0 UG 0 0 0 tun0 192.168.110.0 192.168.200.29 255.255.255.0 UG 0 0 0 tun0 192.168.110.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 0.0.0.0 192.168.110.1 0.0.0.0 UG 0 0 0 eth0 edit: Weirdly enough, if I set push "redirect-gateway def1" in the server config, (and thus routes all traffic through VPN, which is not what I want), it seems to work.
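    For comparison, the minimal single-NIC setup that usually works here needs IP forwarding enabled, a state rule for return traffic (the FORWARD policy above is DROP, and the -d 192.168.200.0/24 DROP rule also sits before its ACCEPT counterpart), and no virtual interfaces at all. Treat the following as a sketch using the addresses from the layout above, not a drop-in config:

        # let the kernel route between tun0 and eth1
        echo 1 > /proc/sys/net/ipv4/ip_forward
        # forward VPN clients out through eth1, and let replies back in
        iptables -A FORWARD -i tun0 -o eth1 -s 192.168.200.0/24 -j ACCEPT
        iptables -A FORWARD -i eth1 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # hide VPN clients behind the server's LAN address (fine on a single NIC)
        iptables -t nat -A POSTROUTING -s 192.168.200.0/24 -o eth1 -j MASQUERADE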

    Read the article

  • OpenVPN Server Ethernet Bridging Question

    - by Hooplad
    Hello All, I am having a difficult time properly configuring an ethernet bridge using OpenVPN 2.0.9 install on CentOS 5 ( VPN server ). The goal that I am trying to complete is to connect a VM ( instance running on the same CentOS machine ) acting as a Microsoft Business Contact Manager server. I would then like this "BCM server" to serve Windows XP clients on 192.168.1.0/24 network as well as clients connecting from VPN ( 10.8.0.0/24 ). The setup as it is now was based off a known working configuration. The problem with the working configuration was that it would allow to the client to connect and access everything running on the VPN server ( SVN, Samba, VM Server ) but not any computers on the 192.168.1.0/24 network. I must disclose that the VPN server is behind a router/firewall. Ports are being forwarded correctly ( again, clients were able to connect to the VPN server with no problem. netcat confirms the udp port is open as well ). current ifconfig output br0 Link encap:Ethernet HWaddr 00:21:5E:4D:3A:C2 inet addr:192.168.1.169 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::221:5eff:fe4d:3ac2/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:846890 errors:0 dropped:0 overruns:0 frame:0 TX packets:3072351 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:42686842 (40.7 MiB) TX bytes:4540654180 (4.2 GiB) eth0 Link encap:Ethernet HWaddr 00:21:5E:4D:3A:C2 UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1 RX packets:882641 errors:0 dropped:0 overruns:0 frame:0 TX packets:1781383 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:82342803 (78.5 MiB) TX bytes:2614727660 (2.4 GiB) Interrupt:169 eth1 Link encap:Ethernet HWaddr 00:21:5E:4D:3A:C3 UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1 RX packets:650 errors:0 dropped:0 overruns:0 frame:0 TX packets:1347223 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:67403 (65.8 KiB) TX bytes:1959529142 (1.8 GiB) Interrupt:233 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:17452058 errors:0 dropped:0 overruns:0 frame:0 TX packets:17452058 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:94020256229 (87.5 GiB) TX bytes:94020256229 (87.5 GiB) tap0 Link encap:Ethernet HWaddr DE:18:C6:D7:01:63 inet6 addr: fe80::dc18:c6ff:fed7:163/64 Scope:Link UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:3086 errors:0 dropped:166 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:0 (0.0 b) TX bytes:315099 (307.7 KiB) vmnet1 Link encap:Ethernet HWaddr 00:50:56:C0:00:01 inet addr:192.168.177.1 Bcast:192.168.177.255 Mask:255.255.255.0 inet6 addr: fe80::250:56ff:fec0:1/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:4224 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) vmnet8 Link encap:Ethernet HWaddr 00:50:56:C0:00:08 inet addr:192.168.55.1 Bcast:192.168.55.255 Mask:255.255.255.0 inet6 addr: fe80::250:56ff:fec0:8/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:4226 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) current route table Kernel IP routing table Destination Gateway 
Genmask Flags Metric Ref Use Iface 192.168.55.0 * 255.255.255.0 U 0 0 0 vmnet8 192.168.177.0 * 255.255.255.0 U 0 0 0 vmnet1 192.168.1.0 * 255.255.255.0 U 0 0 0 br0 current iptables output Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination server_known_working.conf local banshee port 1194 proto udp dev tap0 ca ca.crt cert banshee_server.crt key banshee_server.key dh dh1024.pem server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt push "route 192.168.1.0 255.255.255.0" client-to-client keepalive 10 120 tls-auth ta.key 0 user nobody group nobody persist-key persist-tun status openvpn-status.log verb 4 The following is the current CentOS server config file. server_ethernet_bridged.conf ( current ) local 192.168.1.169 port 1194 proto udp dev tap0 ca ca.crt cert server.crt key server.key dh dh1024.pem ifconfig-pool-persist ipp.txt server-bridge 192.168.1.169 255.255.255.0 192.168.1.200 192.168.1.210 push "route 192.168.1.0 255.255.255.0 192.168.1.1" client-to-client keepalive 10 120 tls-auth ta.key 0 user nobody group nobody persist-key persist-tun status openvpn-status.log verb 6 The following is one of the client's config file that was used with the known working configuration. client.opvn client dev tap proto udp remote XXX.XXX.XXX 1194 resolv-retry infinite nobind persist-key persist-tun ca client.crt cert client.crt key client.key tls-auth client.key 1 verb 3 I have tried the HOWTO provided by OpenVPN as well as others http://www.thebakershome.net/openvpn%5Ftutorial?page=1 with no success. Any help or suggestions would be appreciated.
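    With server-bridge, the tap interface has to be attached to the same bridge as the LAN-facing NIC; ifconfig does not show bridge membership, so it is worth confirming with brctl show that tap0 is actually in br0. For comparison, this is roughly the bridge-start sequence from the OpenVPN bridging HOWTO (run before starting the daemon; it assumes bridge-utils is installed and that br0 already exists, as it does here):

        # create a persistent tap0 and attach it to the existing bridge
        openvpn --mktun --dev tap0
        brctl addif br0 tap0
        ifconfig tap0 0.0.0.0 promisc up
        # make sure bridged traffic is not being filtered away
        iptables -A INPUT -i tap0 -j ACCEPT
        iptables -A INPUT -i br0 -j ACCEPT
        iptables -A FORWARD -i br0 -j ACCEPT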

    Read the article

  • MacBook - Where to fix?

    - by user18731
    Not too sure if this is the right website to post on - please direct me elsewhere if not. I have a crack on my MacBook, underneath the trackpad. Will Apple fix this under warranty? If not, what would be the estimated cost? Has anyone else had experience dealing with Apple customer service? Thanks!

    Read the article

  • How to design a C / C++ library to be usable in many client languages?

    - by Brian Schimmel
    I'm planning to code a library that should be usable by a large number of people on a wide spectrum of platforms. What do I have to consider to design it right? To make this question more specific, there are four "subquestions" at the end.
    Choice of language: Considering all the known requirements and details, I concluded that a library written in C or C++ was the way to go. I think the primary usage of my library will be in programs written in C, C++ and Java SE, but I can also think of reasons to use it from Java ME, PHP, .NET, Objective-C, Python, Ruby, bash scripts, etc. Maybe I cannot target all of them, but if it's possible, I'll do it.
    Requirements: It would be too much to describe the full purpose of my library here, but there are some aspects that might be important to this question:
    - The library itself will start out small, but will definitely grow to enormous complexity, so it is not an option to maintain several versions in parallel. Most of the complexity will be hidden inside the library, though.
    - The library will construct an object graph that is used heavily inside. Some clients of the library will only be interested in specific attributes of specific objects, while other clients must traverse the object graph in some way.
    - Clients may change the objects, and the library must be notified thereof.
    - The library may change the objects, and the client must be notified thereof, if it already has a handle to that object.
    - The library must be multi-threaded, because it will maintain network connections to several other hosts.
    - While some requests to the library may be handled synchronously, many of them will take too long and must be processed in the background, notifying the client on success (or failure).
    Of course, answers are welcome whether they address my specific requirements or answer the question in a general way that matters to a wider audience!
    My assumptions, so far: Here are some of the assumptions and conclusions I have gathered over the past months:
    - Internally I can use whatever I want, e.g. C++ with operator overloading, multiple inheritance, template metaprogramming... as long as there is a portable compiler which handles it (think of gcc/g++).
    - But my interface has to be a clean C interface that does not involve name mangling.
    - Also, I think my interface should only consist of functions, with basic/primitive data types (and maybe pointers) passed as parameters and return values.
    - If I use pointers, I think I should only use them to pass back to the library, not to operate directly on the referenced memory.
    - For usage in a C++ application, I might also offer an object-oriented interface (which is also prone to name mangling, so the app must either use the same compiler, or include the library in source form). Is this also true for usage in C#?
    - For usage in Java SE / Java EE, the Java Native Interface (JNI) applies. I have some basic knowledge about it, but I should definitely double-check it.
    - Not all client languages handle multithreading well, so there should be a single thread talking to the client.
    - For usage on Java ME, there is no such thing as JNI, but I might go with NestedVM.
    - For usage in bash scripts, there must be an executable with a command-line interface.
    - For the other client languages, I have no idea.
    - For most client languages, it would be nice to have some kind of adapter interface written in that language.
    I think there are tools to automatically generate this for Java and some others.
    - For object-oriented languages, it might be possible to create an object-oriented adapter which hides the fact that the interface to the library is function-based - but I don't know if it's worth the effort.
    Possible subquestions:
    - Is this possible with manageable effort, or is it just too much portability?
    - Are there any good books / websites about this kind of design criteria?
    - Are any of my assumptions wrong?
    - Which open source libraries are worth studying to learn from their design / interface / source?
    - Meta: this question is rather long; do you see any way to split it into several smaller ones? (If you reply to this, do it as a comment, not as an answer.)

    Read the article

  • local user cannot access vsftpd server

    - by Zloy Smiertniy
    I'm currently running a vsftpd server and I added the necessary configurations in vsftpd.conf so that local users can use clients like FileZilla to manage their homes in a server. I found out that only users in the sudoers list access without a problem only they can't download the files, but users that are not sudoers cannot even access their homes from a client but they can access by a web browser using the FTP protocol and they can only access their home directories (as intented) Im running a fedora 14 on my server and my vsftpd.conf looks like this: # Example config file /etc/vsftpd/vsftpd.conf # # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # Allow anonymous FTP? (Beware - allowed by default if you comment this out). anonymous_enable=NO # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. #anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. #anon_mkdir_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # The target log file can be vsftpd_log_file or xferlog_file. # This depends on setting xferlog_std_format parameter xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # The name of log file when xferlog_enable=YES and xferlog_std_format=YES # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log #xferlog_file=/var/log/xferlog # # Switches between logging into vsftpd_log_file and xferlog_file files. # NO writes to vsftpd_log_file, YES to xferlog_file xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. #data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. 
vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. ascii_upload_enable=YES ascii_download_enable=YES # # You may fully customise the login banner string: ftpd_banner=Welcome to GAMBITA FTP service # # You may specify a file of disallowed anonymous e-mail addresses. Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd/banned_emails # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). chroot_local_user=YES chroot_list_enable=YES # (default follows) chroot_list_file=/etc/vsftpd/chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. ls_recurse_enable=YES # # When "listen" directive is enabled, vsftpd runs in standalone mode and # listens on IPv4 sockets. This directive cannot be used in conjunction # with the listen_ipv6 directive. listen=YES # # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6 # sockets, you must run two copies of vsftpd with two configuration files. # Make sure, that one of the listen options is commented !! #listen_ipv6=YES pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES use_localtime=YES Anyone has an idea of what might be happening? Nothing concerning vsftpd is written in any log
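    On Fedora, the first things worth ruling out given this config are SELinux (which by default stops vsftpd from reading local users' home directories, and logs denials to the audit log rather than to any vsftpd/xferlog file, which would explain the empty logs) and the two list files the config references. A hedged checklist rather than a guaranteed fix:

        # SELinux boolean that commonly blocks local-user FTP access to home directories
        getsebool ftp_home_dir
        setsebool -P ftp_home_dir on
        # userlist_enable=YES with the default userlist_deny=YES refuses users in this file
        cat /etc/vsftpd/user_list
        # with chroot_local_user=YES, users listed here are the ones NOT chrooted
        cat /etc/vsftpd/chroot_list
        # watch for denials while an affected user tries to log in
        tail -f /var/log/audit/audit.log | grep -i denied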

    Read the article

  • problems with ASA 8.4 Nat Rules for a Web Server

    - by Marko
    I'm having problems with the NAT rule and access rule changes on my ASA 5505. I want to do a straight replacement of a 5505 with a newer 5505, and unfortunately this means dealing with the old version 7.2 and the newer 8.4 configurations. My old NAT rule: static (inside,outside) WebOutside WebInside netmask 255.255.255.255 and an access rule of: access-list outside_access_in extended permit tcp any host WebOutside eq www These don't work in 8.4. I understand there are some changes, but I can find little information that makes any sense on how to configure these. Any pointers welcomed.
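    For reference, the usual 8.3+ equivalent is object NAT plus an access list that now matches the real (inside) address instead of the translated one. The object name and addresses below are placeholders, so substitute your own; the access-group line is only needed if the ACL is not already applied:

        ! real server, with its static translation attached to the network object
        object network obj-WebInside
         host 192.168.1.10
         nat (inside,outside) static 203.0.113.10
        ! post-8.3, the ACL references the real address (or the object), not the mapped one
        access-list outside_access_in extended permit tcp any object obj-WebInside eq www
        access-group outside_access_in in interface outside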

    Read the article

  • I'm managing SQLite, MySQL and PostgreSQL databases and want a tool for this locally

    - by littlejim84
    I develop web applications on Mac OS X against SQLite, MySQL and PostgreSQL databases, which are then put on the web server. I want to avoid having to work entirely in the terminal locally when dealing with these databases - is there any software available (free or otherwise) that can handle all three of these database technologies in a GUI for the Mac, and that is actually decent and worth it?

    Read the article

  • Centos server email delay [closed]

    - by sisko
    I am hosting a website on a CentOS server, and all was well until I tried sending emails from my website. I realized it takes unusually long for the email to send and the web page to refresh - I actually timed it at just over 6 minutes to send one email to 3 addresses. I have been able to determine that the server is using sendmail, but I don't know much else about dealing with server issues on a CentOS server. Can anyone please help me out?
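    A multi-minute stall with sendmail is very often a name-resolution timeout rather than a mail problem - typically the server's own hostname not resolving, or slow DNS for the recipient domains. Some quick checks (a sketch; adjust hostnames and paths to your box):

        # does the server's own hostname resolve quickly?
        time host "$(hostname)"
        # sendmail is much happier when the machine's FQDN is in /etc/hosts
        grep -i "$(hostname)" /etc/hosts
        # watch what sendmail is actually doing while the page hangs
        tail -f /var/log/maillog

    If the delay disappears when the site is pointed at an external SMTP relay instead of local sendmail, that also narrows the problem down.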

    Read the article

  • OpenVPN (HideMyAss) client on Ubuntu: Route only HTTP traffic

    - by Andersmith
    I want to use HideMyAss VPN (hidemyass.com) on Ubuntu Linux to route only HTTP (ports 80 & 443) traffic to the HideMyAss VPN server, and leave all the other traffic (MySQL, SSH, etc.) alone. I'm running Ubuntu on AWS EC2 instances. The problem is that when I try and run the default HMA script, I suddenly can't SSH into the Ubuntu instance anymore and have to reboot it from the AWS console. I suspect the Ubuntu instance will also have trouble connecting to the RDS MySQL database, but haven't confirmed it. HMA uses OpenVPN like this: sudo openvpn client.cfg The client configuration file (client.cfg) looks like this: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client auth-user-pass #management-query-passwords #management-hold # Disable management port for debugging port issues #management 127.0.0.1 13010 ping 5 ping-exit 30 # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. #;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. proto tcp ;proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. # All VPN Servers are added at the very end ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. # We order the hosts according to number of connections. # So no need to randomize the list # remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) ;user nobody ;group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca ./keys/ca.crt cert ./keys/hmauser.crt key ./keys/hmauser.key # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". 
This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ;ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. #comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 # Detect proxy auto matically #auto-proxy # Need this for Vista connection issue route-metric 1 # Get rid of the cached password warning #auth-nocache #show-net-up #dhcp-renew #dhcp-release #route-delay 0 120 # added to prevent MITM attack ns-cert-type server # # Remote servers added dynamically by the master server # DO NOT CHANGE below this line # remote-random remote 173.242.116.200 443 # 0 remote 38.121.77.74 443 # 0 # etc... remote 67.23.177.5 443 # 0 remote 46.19.136.130 443 # 0 remote 173.254.207.2 443 # 0 # END
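    The SSH lockout is consistent with the server pushing redirect-gateway, which replaces the instance's default route so that return traffic for existing SSH sessions leaves via the tunnel. One way to route only web traffic through the VPN is to stop accepting pushed routes and use policy routing instead. The sketch below is untested and makes assumptions (the mark value 1 and table 100 are arbitrary; tun0 is the tunnel interface from the config above):

        # in client.cfg: ignore routes pushed by the HMA server
        #   route-nopull
        # once the tunnel is up, send only ports 80/443 out through tun0
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,443 -j MARK --set-mark 1
        iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
        ip rule add fwmark 1 table 100
        ip route add default dev tun0 table 100
        ip route flush cache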

    Read the article

  • Is there a unix command to output time elapsed during a command?

    - by Olivier Lacan
    I love using time to find out how long a command took to execute, but when dealing with commands that execute sub-commands internally (and produce output that lets you tell when each of those sub-commands starts running), it would be really great to be able to tell after how many seconds (or milliseconds) a specific sub-command started running. When I say sub-command, really the only way to distinguish these from the outside is by what gets printed to standard out. This seems like it should be an option to time.
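    As far as I know, time itself has no such flag, but you can get the same information by filtering the command's output. A small shell sketch that prefixes every output line with the number of seconds since the command started (if moreutils is available, ts -s does much the same thing):

        # "some_long_command" is a placeholder for the command you are timing
        start=$(date +%s)
        some_long_command 2>&1 | while IFS= read -r line; do
            printf '[%5ds] %s\n' "$(( $(date +%s) - start ))" "$line"
        done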

    Read the article

  • php curl http 405 error mind body online api

    - by K_G
    I am trying to connect to the Mind Body Online API (their forum is down currently, so I came here). Below is the code I am using. I am receiving this: SERVER ERROR 405 - HTTP verb used to access this page is not allowed. The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt to access. code: //Data, connection, auth $soapUrl = "http://clients.mindbodyonline.com/api/0_5"; // xml post structure $xml_post_string = '<?xml version="1.0" encoding="utf-8"?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="http://clients.mindbodyonline.com/api/0_5"> <soapenv:Header/> <soapenv:Body> <GetClasses> <Request> <SourceCredentials> <SourceName>username</SourceName> <Password>password</Password> <SiteIDs> <int>site id</int> </SiteIDs> </SourceCredentials> <UserCredentials> <Username>username</Username> <Password>password</Password> <SiteIDs> <int></int> </SiteIDs> </UserCredentials> <Fields> <string>Classes.Resource</string> </Fields> <XMLDetail>Basic</XMLDetail> <PageSize>10</PageSize> <CurrentPageIndex>0</CurrentPageIndex> </Request> </GetClasses> </soapenv:Body> </soapenv:Envelope>'; $headers = array( "Content-type: text/xml;charset=\"utf-8\"", "Accept: text/xml", "Cache-Control: no-cache", "Pragma: no-cache", "SOAPAction: http://clients.mindbodyonline.com/api/0_5", "Content-length: ".strlen($xml_post_string), ); //SOAPAction: your op URL $url = $soapUrl; // PHP cURL for https connection with auth $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY); curl_setopt($ch, CURLOPT_TIMEOUT, 10); curl_setopt($ch, CURLOPT_POST, true); curl_setopt($ch, CURLOPT_POSTFIELDS, $xml_post_string); // the SOAP request curl_setopt($ch, CURLOPT_HTTPHEADER, $headers); // converting $response = curl_exec($ch); curl_close($ch); var_dump($response); // converting //$response1 = str_replace("<soap:Body>","",$response); //$response2 = str_replace("</soap:Body>","",$response1); // convertingc to XML //$parser = simplexml_load_string($response2); // user $parser to get your data out of XML response and to display it. Any help would be great and if anyone has any experience working with their API, my first time ;) Here is the post on stack I am going off of: SOAP request in PHP
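    A 405 from a SOAP endpoint usually means the POST is landing on a URL that only serves GET - often the API root rather than the specific service page - or that the SOAPAction header does not match what the service expects. Before touching the PHP, it can help to replay the same envelope from the command line and watch the status code; the endpoint path and SOAPAction below are illustrative placeholders, so check them against the MindBody API documentation:

        # request.xml contains the same SOAP envelope as $xml_post_string above
        curl -s -o /dev/null -w '%{http_code}\n' \
          -H 'Content-Type: text/xml; charset=utf-8' \
          -H 'SOAPAction: "http://clients.mindbodyonline.com/api/0_5/GetClasses"' \
          --data-binary @request.xml \
          'http://clients.mindbodyonline.com/api/0_5/ClassService.asmx'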

    Read the article

  • Use vmconnect from another AD domain

    - by user1459015
    I am trying to connect remotely to the KVM (console) of a Hyper-V virtual machine using vmconnect.exe, but I'm dealing with some kind of problem: when I connect from a computer within the same AD domain as my Hyper-V host, everything works fine, but when I try to connect from a computer that is not in the same AD domain, vmconnect says that the RPC service is not running on the host. The problem is that it doesn't ask me for any credentials, so I can't authenticate against the AD domain. Does anyone have any clues?

    Read the article

  • CakePHP access indirectly related model - beginner's question

    - by user325077
    Hi everyone, I am writing a CakePHP application to log the work I do for various clients, but after trying for days I seem unable to get it to do what I want. I have read most of the book CakePHP's website. and googled for all I'm worth, so I presume I am missing something obvious! Every 'log item' belongs to a 'sub-project, which in turn belongs to a 'project', which in turn belongs to a 'sub-client' which finally belongs to a client. These are the 5 MySQL tables I am using: mysql> DESCRIBE log_items; +-----------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-----------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | date | date | NO | | NULL | | | time | time | NO | | NULL | | | time_spent | int(11) | NO | | NULL | | | sub_projects_id | int(11) | NO | MUL | NULL | | | title | varchar(100) | NO | | NULL | | | description | text | YES | | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +-----------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE sub_projects; +-------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | projects_id | int(11) | NO | MUL | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +-------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE projects; +----------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +----------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | sub_clients_id | int(11) | NO | MUL | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +----------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE sub_clients; +------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | clients_id | int(11) | NO | MUL | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +------------+--------------+------+-----+---------+----------------+ mysql> DESCRIBE clients; +----------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +----------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | name | varchar(100) | NO | | NULL | | | created | datetime | YES | | NULL | | | modified | datetime | YES | | NULL | | +----------+--------------+------+-----+---------+----------------+ I have set up the following associations in CakePHP: LogItem belongsTo SubProjects SubProject belongsTo Projects Project belongsTo SubClients SubClient belongsTo Clients Client hasMany SubClients SubClient hasMany Projects Project hasMany SubProjects SubProject hasMany LogItems Using 'cake bake' I have created the models, controllers (index, view add, edit and delete) and views, and things seem to function - as in 
I am able to perform simple CRUD operations successfully. The Question When editing a 'log item' at www.mydomain/log_items/edit I am presented with the view you would all suspect; namely the columns of the log_items table with the appropriate textfields/select boxes etc. I would also like to incorporate select boxes to choose the client, sub-client, project and sub-project in the 'log_items' edit view. Ideally the 'sub-client' select box should populate itself depending upon the 'client' chosen, the 'project' select box should also populate itself depending on the 'sub-client' selected etc, etc. I guess the way to go about populating the select boxes with relevant options is Ajax, but I am unsure of how to go about actually accessing a model from the child view of a indirectly related model, for example how to create a 'sub-client' select box in the 'log_items' edit view. I have have found this example: http://forum.phpsitesolutions.com/php-frameworks/cakephp/ajax-cakephp-dynamically-populate-html-select-dropdown-box-t29.html where someone achieves something similar for US states, counties and cities. However, I noticed in the database schema - which is downloadable from the site above link - that the database tables don't have any foreign keys, so now I'm wondering if I'm going about things in the correct manner. Any pointers and advice would be very much appreciated. Kind regards, Chris

    Read the article

  • Using cd to go up multiple directory levels

    - by Tossrock
    I'm dealing with Java projects, which often result in deeply nested folders (/path/to/project/com/java/lang/whatever, etc.), and I sometimes want to be able to jump, say, 4 directory levels upwards. Typing cd ../../../.. is a pain, and I don't want to symlink. Is there some flag to cd that lets you go up multiple directory levels (in my head, it would be something like cd -u 4)? Unfortunately I can't find any man page for cd specifically, instead just getting the useless "builtins" page.
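    In bash there is no such flag to cd (and help cd, rather than man, is where bash documents its builtins), but a tiny shell function dropped into ~/.bashrc gives the cd -u 4 behaviour:

        up() {
            local levels=${1:-1} path="" i
            for (( i = 0; i < levels; i++ )); do
                path+="../"
            done
            cd "$path" || return
        }
        # usage: up 4    # same as cd ../../../..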

    Read the article

  • SignalR Auto Disconnect when Page Changed in AngularJS

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/05/30/signalr-auto-disconnect-when-page-changed-in-angularjs.aspxIf we are using SignalR, the connection lifecycle was handled by itself very well. For example when we connect to SignalR service from browser through SignalR JavaScript Client the connection will be established. And if we refresh the page, close the tab or browser, or navigate to another URL then the connection will be closed automatically. This information had been well documented here. In a browser, SignalR client code that maintains a SignalR connection runs in the JavaScript context of a web page. That's why the SignalR connection has to end when you navigate from one page to another, and that's why you have multiple connections with multiple connection IDs if you connect from multiple browser windows or tabs. When the user closes a browser window or tab, or navigates to a new page or refreshes the page, the SignalR connection immediately ends because SignalR client code handles that browser event for you and calls the "Stop" method. But unfortunately this behavior doesn't work if we are using SignalR with AngularJS. AngularJS is a single page application (SPA) framework created by Google. It hijacks browser's address change event, based on the route table user defined, launch proper view and controller. Hence in AngularJS we address was changed but the web page still there. All changes of the page content are triggered by Ajax. So there's no page unload and load events. This is the reason why SignalR cannot handle disconnect correctly when works with AngularJS. If we dig into the source code of SignalR JavaScript Client source code we will find something below. It monitors the browser page "unload" and "beforeunload" event and send the "stop" message to server to terminate connection. But in AngularJS page change events were hijacked, so SignalR will not receive them and will not stop the connection. 1: // wire the stop handler for when the user leaves the page 2: _pageWindow.bind("unload", function () { 3: connection.log("Window unloading, stopping the connection."); 4:  5: connection.stop(asyncAbort); 6: }); 7:  8: if (isFirefox11OrGreater) { 9: // Firefox does not fire cross-domain XHRs in the normal unload handler on tab close. 10: // #2400 11: _pageWindow.bind("beforeunload", function () { 12: // If connection.stop() runs runs in beforeunload and fails, it will also fail 13: // in unload unless connection.stop() runs after a timeout. 14: window.setTimeout(function () { 15: connection.stop(asyncAbort); 16: }, 0); 17: }); 18: }   Problem Reproduce In the codes below I created a very simple example to demonstrate this issue. Here is the SignalR server side code. 1: public class GreetingHub : Hub 2: { 3: public override Task OnConnected() 4: { 5: Debug.WriteLine(string.Format("Connected: {0}", Context.ConnectionId)); 6: return base.OnConnected(); 7: } 8:  9: public override Task OnDisconnected() 10: { 11: Debug.WriteLine(string.Format("Disconnected: {0}", Context.ConnectionId)); 12: return base.OnDisconnected(); 13: } 14:  15: public void Hello(string user) 16: { 17: Clients.All.hello(string.Format("Hello, {0}!", user)); 18: } 19: } Below is the configuration code which hosts SignalR hub in an ASP.NET WebAPI project with IIS Express. 
1: public class Startup 2: { 3: public void Configuration(IAppBuilder app) 4: { 5: app.Map("/signalr", map => 6: { 7: map.UseCors(CorsOptions.AllowAll); 8: map.RunSignalR(new HubConfiguration() 9: { 10: EnableJavaScriptProxies = false 11: }); 12: }); 13: } 14: } Since we will host AngularJS application in Node.js in another process and port, the SignalR connection will be cross domain. So I need to enable CORS above. In client side I have a Node.js file to host AngularJS application as a web server. You can use any web server you like such as IIS, Apache, etc.. Below is the "index.html" page which contains a navigation bar so that I can change the page/state. As you can see I added jQuery, AngularJS, SignalR JavaScript Client Library as well as my AngularJS entry source file "app.js". 1: <html data-ng-app="demo"> 2: <head> 3: <script type="text/javascript" src="jquery-2.1.0.js"></script> 1:  2: <script type="text/javascript" src="angular.js"> 1: </script> 2: <script type="text/javascript" src="angular-ui-router.js"> 1: </script> 2: <script type="text/javascript" src="jquery.signalR-2.0.3.js"> 1: </script> 2: <script type="text/javascript" src="app.js"></script> 4: </head> 5: <body> 6: <h1>SignalR Auto Disconnect with AngularJS by Shaun</h1> 7: <div> 8: <a href="javascript:void(0)" data-ui-sref="view1">View 1</a> | 9: <a href="javascript:void(0)" data-ui-sref="view2">View 2</a> 10: </div> 11: <div data-ui-view></div> 12: </body> 13: </html> Below is the "app.js". My SignalR logic was in the "View1" page and it will connect to server once the controller was executed. User can specify a user name and send to server, all clients that located in this page will receive the server side greeting message through SignalR. 1: 'use strict'; 2:  3: var app = angular.module('demo', ['ui.router']); 4:  5: app.config(['$stateProvider', '$locationProvider', function ($stateProvider, $locationProvider) { 6: $stateProvider.state('view1', { 7: url: '/view1', 8: templateUrl: 'view1.html', 9: controller: 'View1Ctrl' }); 10:  11: $stateProvider.state('view2', { 12: url: '/view2', 13: templateUrl: 'view2.html', 14: controller: 'View2Ctrl' }); 15:  16: $locationProvider.html5Mode(true); 17: }]); 18:  19: app.value('$', $); 20: app.value('endpoint', 'http://localhost:60448'); 21: app.value('hub', 'GreetingHub'); 22:  23: app.controller('View1Ctrl', function ($scope, $, endpoint, hub) { 24: $scope.user = ''; 25: $scope.response = ''; 26:  27: $scope.greeting = function () { 28: proxy.invoke('Hello', $scope.user) 29: .done(function () {}) 30: .fail(function (error) { 31: console.log(error); 32: }); 33: }; 34:  35: var connection = $.hubConnection(endpoint); 36: var proxy = connection.createHubProxy(hub); 37: proxy.on('hello', function (response) { 38: $scope.$apply(function () { 39: $scope.response = response; 40: }); 41: }); 42: connection.start() 43: .done(function () { 44: console.log('signlar connection established'); 45: }) 46: .fail(function (error) { 47: console.log(error); 48: }); 49: }); 50:  51: app.controller('View2Ctrl', function ($scope, $) { 52: }); When we went to View1 the server side "OnConnect" method will be invoked as below. And in any page we send the message to server, all clients will got the response. If we close one of the client, the server side "OnDisconnect" method will be invoked which is correct. But is we click "View 2" link in the page "OnDisconnect" method will not be invoked even though the content and browser address had been changed. 
This might cause many SignalR connections remain between the client and server. Below is what happened after I clicked "View 1" and "View 2" links four times. As you can see there are 4 live connections.   Solution Since the reason of this issue is because, AngularJS hijacks the page event that SignalR need to stop the connection, we can handle AngularJS route or state change event and stop SignalR connect manually. In the code below I moved the "connection" variant to global scope, added a handler to "$stateChangeStart" and invoked "stop" method of "connection" if its state was not "disconnected". 1: var connection; 2: app.run(['$rootScope', function ($rootScope) { 3: $rootScope.$on('$stateChangeStart', function () { 4: if (connection && connection.state && connection.state !== 4 /* disconnected */) { 5: console.log('signlar connection abort'); 6: connection.stop(); 7: } 8: }); 9: }]); Now if we refresh the page and navigated to View 1, the connection will be opened. At this state if we clicked "View 2" link the content will be changed and the SignalR connection will be closed automatically.   Summary In this post I demonstrated an issue when we are using SignalR with AngularJS. The connection cannot be closed automatically when we navigate to other page/state in AngularJS. And the solution I mentioned below is to move the SignalR connection as a global variant and close it manually when AngularJS route/state changed. You can download the full sample code here. Moving the SignalR connection as a global variant might not be a best solution. It's just for easy to demo here. In production code I suggest wrapping all SignalR operations into an AngularJS factory. Since AngularJS factory is a singleton object, we can safely put the connection variant in the factory function scope.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Ten - oh, wait, eleven - Eleven things you should know about the ASP.NET Fall 2012 Update

    - by Jon Galloway
Today, just a little over two months after the big ASP.NET 4.5 / ASP.NET MVC 4 / ASP.NET Web API / Visual Studio 2012 / Web Matrix 2 release, the first preview of the ASP.NET Fall 2012 Update is out. Here's what you need to know: There are no new framework bits in this release - there's no change or update to core ASP.NET, ASP.NET MVC, or Web Forms features. This means that you can start using it without any updates to your server, upgrade concerns, etc. This update is really an update to the project templates and Visual Studio tooling, conceptually similar to the ASP.NET MVC 3 Tools Update. It's a relatively lightweight install. It's a 41MB download. I've installed it many times and it usually takes 5-7 minutes; it's never required a reboot. It adds some new project templates to ASP.NET MVC: Facebook Application and Single Page Application templates. It adds a lot of cool enhancements to ASP.NET Web API. It adds some tooling that makes it easy to take advantage of features like SignalR, Friendly URLs, and Windows Azure Authentication. Most of the new features are installed via NuGet packages. Since ASP.NET is open source, nightly NuGet packages are available, and the roadmap is published, most of this has really been publicly available for a while. The official name of this drop is the ASP.NET Fall 2012 Update BUILD Prerelease. Please do not attempt to say that ten times fast. While the EULA doesn't prohibit it, it WILL legally change your first name to Scott. As with all new releases, you can find out everything you need to know about the Fall Update at http://asp.net/vnext (especially the release notes!) I'm going to be showing all of this off, assisted by special guest code monkey Scott Hanselman, this Friday at BUILD: Bleeding edge ASP.NET: See what is next for MVC, Web API, SignalR and more… (and I've heard it will be livestreamed). Let's look at some of those things in more detail. No new bits ASP.NET 4.5, MVC 4 and Web API have a lot of great core features. I see the goal of this update release as making it easier to put those features to use to solve some useful scenarios by taking advantage of NuGet packages and template code. If you create a new ASP.NET MVC application using one of the new templates, you'll see that it's using the ASP.NET MVC 4 RTM NuGet package (4.0.20710.0): This means you can install and use the Fall Update without any impact on your existing projects and no worries about upgrading or compatibility. New Facebook Application Template ASP.NET MVC 4 (and ASP.NET 4.5 Web Forms) included the ability to authenticate your users via OAuth and OpenID, so you could let users log in to your site using a Facebook account. One of the new changes in the Fall Update is a new template that makes it really easy to create full Facebook applications. You could create a Facebook application in ASP.NET already; you'd just need to go through a few steps: Search around to find a good Facebook NuGet package, like the Facebook C# SDK (written by my friend Nathan Totten and some other Facebook SDK brainiacs). Read the Facebook developer documentation to figure out how to authenticate and integrate with them. Write some code, debug it and repeat until you get something working. Get started with the application you originally wanted to write. What this template does for you: eliminate steps 1-3.
Erik Porter, Nathan and some other experts built out the Facebook Application template so it automatically pulls in and configures the Facebook NuGet package and makes it really easy to take advantage of it in an ASP.NET MVC application. One great example is the way you access a Facebook user's information. Take a look at the following code in a File / New / MVC / Facebook Application site. First, the Home Controller Index action:
[FacebookAuthorize(Permissions = "email")]
public ActionResult Index(MyAppUser user, FacebookObjectList<MyAppUserFriend> userFriends)
{
    ViewBag.Message = "Modify this template to jump-start your Facebook application using ASP.NET MVC.";
    ViewBag.User = user;
    ViewBag.Friends = userFriends.Take(5);
    return View();
}
First, notice that there's a FacebookAuthorize attribute which requires that the user is authenticated via Facebook and requires permissions to access their e-mail address. It binds to two things: a custom MyAppUser object and a list of friends. Let's look at the MyAppUser code:
using Microsoft.AspNet.Mvc.Facebook.Attributes;
using Microsoft.AspNet.Mvc.Facebook.Models;

// Add any fields you want to be saved for each user and specify the field name in the JSON coming back from Facebook
// https://developers.facebook.com/docs/reference/api/user/
namespace MvcApplication3.Models
{
    public class MyAppUser : FacebookUser
    {
        public string Name { get; set; }

        [FacebookField(FieldName = "picture", JsonField = "picture.data.url")]
        public string PictureUrl { get; set; }

        public string Email { get; set; }
    }
}
You can add in other custom fields if you want, but you can also just bind to a FacebookUser and it will automatically pull in the available fields. You can even just bind directly to a FacebookUser and check for what's available in debug mode, which makes it really easy to explore. For more information and some walkthroughs on creating Facebook applications, see:
Deploying your first Facebook App on Azure using ASP.NET MVC Facebook Template (Yao Huang Lin)
Facebook Application Template Tutorial (Erik Porter)
Single Page Application template Early releases of ASP.NET MVC 4 included a Single Page Application template, but it was removed for the official release. There was a lot of interest in it, but it was kind of complex, as it handled features for things like data management. The new Single Page Application template that ships with the Fall Update is more lightweight. It uses Knockout.js on the client and ASP.NET Web API on the server, and it includes a sample application that shows how they all work together. I think the real benefit of this application is that it shows a good pattern for using ASP.NET Web API and Knockout.js. For instance, it's easy to end up with a mess of JavaScript when you're building out a client-side application. This template uses three separate JavaScript files (delivered via a Bundle, of course):
todoList.js - this is where the main client-side logic lives
todoList.dataAccess.js - this defines how the client-side application interacts with the back-end services
todoList.bindings.js - this is where you set up events and overrides for the Knockout bindings - for instance, hooking up jQuery validation and defining some client-side events
This is a fun one to play with, because you can just create a new Single Page Application and hit F5. Quick, easy install (with one gotcha) One of the cool engineering changes for this release is a big update to the installer to make it more lightweight and efficient.
I've been running nightly builds of this for a few weeks to prep for my BUILD demos, and the install has been really quick and easy to use. The install takes about 5 minutes, has never required a reboot for me, and the uninstall is just as simple. There's one gotcha, though. In this preview release, you may hit an issue that will require you to uninstall and re-install the NuGet VSIX package. The problem comes up when you create a new MVC application and see this dialog: The solution, as explained in the release notes, is to uninstall and re-install the NuGet VSIX package:
Start Visual Studio 2012 as an Administrator.
Go to Tools -> Extensions and Updates and uninstall NuGet.
Close Visual Studio.
Navigate to the ASP.NET Fall 2012 Update installation folder: for Visual Studio 2012, Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio 2012; for Visual Studio 2012 Express for Web, Program Files\Microsoft ASP.NET\ASP.NET Web Stack\Visual Studio Express 2012 for Web.
Double click on the NuGet.Tools.vsix to reinstall NuGet.
This took me under a minute to do, and I was up and running. ASP.NET Web API Update Extravaganza! Uh, the Web API team is out of hand. They added a ton of new stuff: OData support, Tracing, and API Help Page generation. OData support Some people like OData. Some people start twitching when you mention it. If you're in the first group, this is for you. You can add a [Queryable] attribute to an API that returns an IQueryable<Whatever> and you get OData query support from your clients. Then, without any extra changes to your client or server code, your clients can send filters like this: /Suppliers?$filter=Name eq ‘Microsoft’ For more information about OData support in ASP.NET Web API, see Alex James' mega-post about it: OData support in ASP.NET Web API (a short sketch of a queryable action appears further below). ASP.NET Web API Tracing Tracing makes it really easy to leverage the .NET Tracing system from within your ASP.NET Web APIs. If you look at the \App_Start\WebApiConfig.cs file in a new ASP.NET Web API project, you'll see a call to TraceConfig.Register(config). That calls into some code in the new \App_Start\TraceConfig.cs file:
public static void Register(HttpConfiguration configuration)
{
    if (configuration == null)
    {
        throw new ArgumentNullException("configuration");
    }

    SystemDiagnosticsTraceWriter traceWriter = new SystemDiagnosticsTraceWriter()
    {
        MinimumLevel = TraceLevel.Info,
        IsVerbose = false
    };

    configuration.Services.Replace(typeof(ITraceWriter), traceWriter);
}
As you can see, this is using the standard trace system, so you can extend it to any other trace listeners you'd like.
To see how it works with the built-in diagnostics trace writer, just run the application, call some APIs, and look at the Visual Studio Output window:
iisexpress.exe Information: 0 : Request, Method=GET, Url=http://localhost:11147/api/Values, Message='http://localhost:11147/api/Values'
iisexpress.exe Information: 0 : Message='Values', Operation=DefaultHttpControllerSelector.SelectController
iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=DefaultHttpControllerActivator.Create
iisexpress.exe Information: 0 : Message='WebAPI.Controllers.ValuesController', Operation=HttpControllerDescriptor.CreateController
iisexpress.exe Information: 0 : Message='Selected action 'Get()'', Operation=ApiControllerActionSelector.SelectAction
iisexpress.exe Information: 0 : Operation=HttpActionBinding.ExecuteBindingAsync
iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuting
iisexpress.exe Information: 0 : Message='Action returned 'System.String[]'', Operation=ReflectedHttpActionDescriptor.ExecuteAsync
iisexpress.exe Information: 0 : Message='Will use same 'JsonMediaTypeFormatter' formatter', Operation=JsonMediaTypeFormatter.GetPerRequestFormatterInstance
iisexpress.exe Information: 0 : Message='Selected formatter='JsonMediaTypeFormatter', content-type='application/json; charset=utf-8'', Operation=DefaultContentNegotiator.Negotiate
iisexpress.exe Information: 0 : Operation=ApiControllerActionInvoker.InvokeActionAsync, Status=200 (OK)
iisexpress.exe Information: 0 : Operation=QueryableAttribute.ActionExecuted, Status=200 (OK)
iisexpress.exe Information: 0 : Operation=ValuesController.ExecuteAsync, Status=200 (OK)
iisexpress.exe Information: 0 : Response, Status=200 (OK), Method=GET, Url=http://localhost:11147/api/Values, Message='Content-type='application/json; charset=utf-8', content-length=unknown'
iisexpress.exe Information: 0 : Operation=JsonMediaTypeFormatter.WriteToStreamAsync
iisexpress.exe Information: 0 : Operation=ValuesController.Dispose
API Help Page When you create a new ASP.NET Web API project, you'll see an API link in the header: Clicking the API link shows generated help documentation for your ASP.NET Web API controllers: And clicking on any of those APIs shows specific information: What's great is that this information is dynamically generated, so if you add your own new APIs it will automatically show useful and up to date help. This system is also completely extensible, so you can generate documentation in other formats or customize the HTML help as much as you'd like. The Help generation code is all included in an ASP.NET MVC Area: SignalR SignalR is a really slick open source project that was started by some ASP.NET team members in their spare time to add real-time communications capabilities to ASP.NET - and .NET applications in general. It allows you to handle long running communications channels between your server and multiple connected clients using the best communications channel they can both support - websockets if available, falling back all the way to old technologies like long polling if necessary for old browsers. SignalR remains an open source project, but now it's being included in ASP.NET (also open source, hooray!). That means there's real, official ASP.NET engineering work being put into SignalR, and it's even easier to use in an ASP.NET application. Now in any ASP.NET project type, you can right-click / Add / New Item... SignalR Hub or Persistent Connection. And much more... There's quite a bit more.
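Before wrapping up, here is a concrete illustration of the [Queryable] support described earlier: a minimal sketch, assuming the default /api route and the preview's Web API OData package. The Supplier type and its in-memory data are made up for illustration, not part of the release.
using System.Linq;
using System.Web.Http;

// Illustrative only: Supplier and its sample data are invented for this sketch.
// [Queryable] ships via the ASP.NET Web API OData package in the Fall Update preview.
public class Supplier
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class SuppliersController : ApiController
{
    private static readonly Supplier[] suppliers =
    {
        new Supplier { Id = 1, Name = "Microsoft" },
        new Supplier { Id = 2, Name = "Contoso" }
    };

    // With the default route this is reachable at
    // /api/Suppliers?$filter=Name eq 'Microsoft'
    [Queryable]
    public IQueryable<Supplier> Get()
    {
        return suppliers.AsQueryable();
    }
}
With that attribute in place, clients can append OData query options such as $filter, $orderby, $top, and $skip to the URL without any further server-side changes.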
You can find more info at http://asp.net/vnext, and we'll be adding more content as fast as we can. Watch my BUILD talk to watch as I demonstrate these and other features in the ASP.NET Fall 2012 Update, as well as some other even futurey-er stuff!

    Read the article

  • SQL – NuoDB and Third Party Explorer – SQuirreL SQL Client, SQL Workbench/J and DbVisualizer

    - by Pinal Dave
I recently wrote a four-part series on how I started to learn about and begin my journey with NuoDB. Big Data is indeed a big world, and learning Big Data is like spaghetti – in reality no one knows where to start, so I decided to learn it with the help of NuoDB. You can download NuoDB and continue your journey with me as well.
Part 1 – Install NuoDB in 90 Seconds
Part 2 – Manage NuoDB Installation
Part 3 – Explore NuoDB Database
Part 4 – Migrate from SQL Server to NuoDB
…and in this blog post we will try to answer the most asked question about NuoDB. “I like the NuoDB Explorer but can I connect to NuoDB from my preferred Graphical User Interface?” Honestly, I did not expect this question to be asked of me so many times, but from the question it is clear that we developers absolutely want to learn new things, and along with that we do want to continue to use our most efficient developer tools. Now here is the answer to the question: “Absolutely, you can continue to use any of the following most popular SQL clients.” NuoDB supports the three most popular 3rd-party SQL clients. In all the leading development environments there is always more than one database installed, and managing each of them with a different tool is often a very difficult task. Developers like to use one tool, which can control most of the databases. Once developers are familiar with one database tool it is very difficult for them to switch to another tool. This is particularly difficult when we developers find that tool to be the key reason for our efficiency. Let us see how to install each of the NuoDB supported 3rd party tools along with a quick tutorial on how to go about using them. SQuirreL SQL Client First download the SQuirreL Universal SQL client. On the Windows platform you can double-click on the file and it will install the SQuirreL client. Once it is installed, open the application and it will bring up the following screen. Now go to the Drivers tab on the left side and scroll down. You will find NuoDB mentioned there. Now right click over it and click on Modify Driver. Now here is where you need to make sure that you make proper entries or your client will not work with the database. Enter the following values:
Name: NuoDB
Example URL: jdbc:com.nuodb://localhost:48004/test
Website URL: http://www.nuodb.com
Now click on the Extra Class Path tab and Add the location of the nuodbjdbc.jar file. If you are following my blog posts and have installed NuoDB in the default location, you will find the default path as C:\Program Files\NuoDB\jar\nuodbjdbc.jar. The class name of the driver is automatically populated. Once you click OK you will see that there is a small icon displayed to the left of NuoDB, which shows that you have successfully configured and installed the NuoDB driver. Now click on the Alias tab and you will notice that it is empty. Now click on the big Plus icon and it will open the screen for adding an alias. “Alias” means nothing more than adding a database to your system. The database name of the original installation can be anything and, if you wish, you can register the database with any other alternative name. Here are the details you should fill into the Alias screen below.
Name: Test (or your preferred alias)
Driver: NuoDB
URL: jdbc:com.nuodb://localhost:48004/test (This is for the test database)
User Name: dba (This is the username which I entered for the test database)
Password: goalie (This is the password which I entered for the test database)
Check Auto Logon and Connect at Startup and click on OK. That’s it! You are done. On the right side you will see a table name and on the left side you will see various tabs with all the relevant details from the respective table. You can see various metadata, schemas, data types and other information in the table. In addition, you can also generate scripts and do various important tasks related to the database. You can see how easy it is to configure NuoDB with the SQuirreL Client and get going with it immediately. SQL Workbench/J This is another wonderful client tool, which works very well with NuoDB. The best part is that in the Driver dropdown you will see NuoDB being mentioned there. Click here to download the SQL Workbench/J Universal SQL client. The download process is straightforward and the installation is a very easy process for SQL Workbench/J. As soon as you open the client, you will notice on the following screen the NuoDB driver when selecting a New Connection Profile. Select NuoDB from the drop down and click on OK. In the driver information, enter the following details:
Driver: NuoDB (com.nuodb.jdbc.Driver)
URL: jdbc:com.nuodb://localhost/test
Username: dba
Password: goalie
When you click OK, it will bring up the following pop-up. Click Yes to edit the driver information. Click on OK and it will bring you to the following screen. This is the screen where you can perform various tasks. You can write any SQL query you want and it will instantly show you the results. Now click on the database icon, which you see right on the left side of the word User=dba. Once you click on Database Explorer, you can perform various database related tasks. As a developer, one of my favorite tasks is to look at the source of the table as it gives me a proper view of the structure of the database. I find SQL Workbench/J very efficient in doing the same. DbVisualizer DbVisualizer is another great tool, which helps you to connect to NuoDB and retrieve database information in your desired format. A developer who is familiar with DbVisualizer will find this client to be very easy to work with. The installation of DbVisualizer is pretty straightforward. When we open the client, it will bring us to the following screen. As a first step we need to set up the driver. Go to Tools >> Driver Manager. It will bring up the following screen where we set up the driver. Click on Create Driver and it will open up the driver settings on the right side. On the right side of the area where it displays Driver Settings please enter the following values:
Name: NuoDB
URL Format: jdbc:com.nuodb://localhost:48004/test
Now under the driver path, click on the folder icon and it will ask for the location of the jar file. Provide the path as C:\Program Files\NuoDB\jar\nuodbjdbc.jar and click OK. You will notice there is a green button displayed at the bottom right corner. This means the driver is configured properly. Once the driver is configured properly, we can go to Create Database Connection and create a database connection. If the pop-up shows up for the Wizard, click on No Wizard and continue to enter the settings manually. Here is the Database Connection screen. This screen can be a bit tricky. Here are the settings you need to remember to enter.
Name: NuoDB
Database Type: Generic
Driver: NuoDB
Database URL: jdbc:com.nuodb://localhost:48004/test
Database Userid: dba
Database Password: goalie
Once you enter the values, click on Connect. Once Connect is pressed, it will change the button value to Reconnect if the connection is successfully established, and it will show the connection details on the left side. When we further explore NuoDB, we can see the various tables created in our test application. We can further click on the right side screen and see various details on the table. If you click on the Data Tab, it will display the entire data of the table. The Tools menu also has some very interesting and cool features like Driver Manager, Data Monitor and SQL History. Summary Well, this was a relatively long post, but I find it essential to cover all three of the important clients which we developers use in our daily database development. Here is my question to you: Which one of the following is your favorite NuoDB 3rd-Party Database Client? (Pick one) SQuirreL SQL Client, SQL Workbench/J, or DbVisualizer. I will be very eager to read about your experience with NuoDB. You can download NuoDB from here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

    Read the article

  • Skanska Builds Global Workforce Insight with Cloud-Based HCM System

    - by HCM-Oracle
By David Baum - Originally posted on Profit. Peter Bjork grew up building things. He started his work life learning all sorts of trades at his father’s construction company in the northern part of Sweden. So in college, it was natural for him to pursue a bachelor’s degree in construction engineering—but he broke new ground when he added a master’s degree in finance to his curriculum vitae. Written on a traditional résumé, Bjork’s current title (vice president of information systems strategies) doesn’t reveal the diversity of his experience—that he’s adept with hammer and nails as well as rows and columns. But a big part of his current job is to work with his counterparts in human resources (HR) designing, building, and deploying the systems needed to get a complete view of the skills and potential of Skanska’s 22,000-strong white-collar workforce. And Bjork believes that complete view is essential to Skanska’s success. “Our business is really all about people,” says Bjork, who has worked with Skanska for 16 years. “You can have equipment and financial resources, but to truly succeed in a business like ours you need to have the right people in the right places. That’s what this system is helping us accomplish.” In a global HR environment that suffers from a paradox of high unemployment and a scarcity of skilled labor, managers need to have a complete understanding of workforce capabilities to develop management skills, recruit for open positions, ensure that staff is getting the training they need, and reduce attrition. Skanska’s human capital management (HCM) systems, based on Oracle Talent Management Cloud, play a critical role delivering that understanding. “Skanska’s philosophy of having great people, encouraging their development, and giving them the chance to move across business units has nurtured a culture of collaboration, but managing a diverse workforce spread across the globe is a monumental challenge,” says Annika Lindholm, global human resources system owner in the HR department at Skanska’s headquarters just outside of Stockholm, Sweden. “We depend heavily on Oracle’s cloud technology to support our HCM function.” Construction, Workers For Skanska’s more than 60,000 employees and contractors, managing huge construction projects is an everyday job. Beyond erecting signature buildings, management’s goal is to build a corporate culture where valuable talent can be sought out and developed, bringing in the right mix of people to support and grow the business. “Of all the companies in our space, Skanska is probably one of the strongest ones, with a laser focus on people and people development,” notes Tom Crane, chief HR and communications officer for Skanska in the United States. “Our business looks like equipment and material, but all we really have at the end of the day are people and their intellectual capital. Without them, second only to clients, of course, you really can’t achieve great things in the high-profile environment in which we work.” During the 1990s, Skanska entered an expansive growth phase. A string of successful acquisitions paved the way for the company’s transformation into a global enterprise. “Today the company’s focus is on profitable growth,” continues Crane.
“But you can’t really achieve growth unless you are doing a very good job of developing your people and having the right people in the right places and driving a culture of growth.” In the United States alone, Skanska has more than 8,000 employees in four distinct business units: Skanska USA Building, also known as the Construction Manager, builds everything at ground level and above—hospitals, educational facilities, stadiums, airport terminals, and other massive projects. Skanska USA Civil does everything at ground level and below, such as light rail, water treatment facilities, power plants or power industry facilities, highways, and bridges. Skanska Infrastructure Development develops public-private partnerships—projects in which Skanska adds equity and also arranges for outside financing. Skanska Commercial Development acts like a commercial real estate developer, acquiring land and building offices on spec or build-to-suit for its clients. Skanska's international portfolio includes construction of the new Meadowlands Stadium. Getting the various units to operate collaboratively helps Skanska deliver high value to clients and shareholders. “When we have this collaboration among units, it allows us to enrich each of the business units and, at the same time, develop our future leaders to be more facile in operating across business units—more accepting of a ‘one Skanska’ approach,” explains Crane. Workforce Worldwide But HR needs processes and tools to support managers who face such business dynamics. Oracle Talent Management Cloud is helping Skanska implement world-class recruiting strategies and generate the insights needed to drive quality hiring practices, internal mobility, and a proactive approach to building talent pipelines. With their new cloud system in place, Skanska HR leaders can manage everything from recruiting, compensation, and goal and performance management to employee learning and talent review—all as part of a single, cohesive software-as-a-service (SaaS) environment. Skanska has successfully implemented two modules from Oracle Talent Management Cloud—the recruiting and performance management modules—and is in the process of implementing the learn module. Internally, they call the systems Skanska Recruit, Skanska Talent, and Skanska Learn. The timing is apropos. With high rates of unemployment in recent years, there have been many job candidates on the market. However, talent scarcity continues to frustrate recruiters. Oracle Taleo Recruiting Cloud Service, one of the applications in the Oracle Talent Management cloud portfolio, enables Skanska managers to create more-intelligent recruiting strategies, pulling high-performer profile statistics to create new candidate profiles and using multitiered screening and assessments to ensure that only the best-suited candidate applications make it to the recruiter’s desk. Tools such as applicant tracking, interview management, and requisition management help recruiters and hiring managers streamline the hiring process. Oracle’s cloud-based software system automates and streamlines many other HR processes for Skanska’s multinational organization and delivers insight into the success of recruiting and talent-management efforts. “The Oracle system is definitely helping us to construct global HR processes,” adds Bjork. “It is really important that we have a business model that is decentralized, so we can effectively serve our local markets, and interact with our global ERP [enterprise resource planning] systems as well.
We would not be able to do this without a really good, well-integrated HCM system that could support these efforts.” A key piece of this effort is something Skanska has developed internally called the Skanska Leadership Profile. Core competencies, on which all employees are measured, are used in performance reviews to determine weak areas but also to discover talent, such as those who will be promoted or need succession plans. This global profiling system brings consistency to the way HR professionals evaluate and review talent across the company, with a consistent set of ratings and a consistent definition of competencies. All salaried employees in Skanska are tied to a talent management process that gives opportunity for midyear and year-end reviews. Using the performance management module, managers can align individual goals with corporate goals; provide clear visibility into how each employee contributes to the success of the organization; and drive a strategic, end-to-end talent management strategy with a single, integrated system for all talent-related activities. This is critical to a company that is highly focused on ensuring that every employee has a development plan linked to his or her succession potential. “Our approach all along has been to deploy software applications that are seamless to end users,” says Crane. “The beauty of a cloud-based system is that much of the functionality takes place behind the scenes so we can focus on making sure users can access the data when they need it. This model greatly improves their efficiency.” The employee profile not only sets a competency baseline for new employees but is also integrated with Skanska’s other back-office Oracle systems to ensure consistency in the way information is used to support other business functions. “Since we have about a dozen different HR systems that are providing us with information, we built a master database that collects all the information,” explains Lindholm. “That data is sent not only to Oracle Talent Management Cloud, but also to other systems that are dependent on this information.” Collaboration to Scale Skanska is poised to launch a new Oracle module to link employee learning plans to the review process and recruitment assessments. According to Crane, connecting these processes allows Skanska managers to see employees’ progress and produce an updated learning program. For example, as employees take classes, supervisors can consult the Oracle Talent Management Cloud portal to monitor progress and align it to each individual’s training and development plan. “That’s a pretty compelling solution for an organization that wants to manage its talent on a real-time basis and see how the training is working,” Crane says. Rolling out Oracle Talent Management Cloud was a joint effort among HR, IT, and a global group that oversaw the worldwide implementation. Skanska deployed the solution quickly across all markets at once. In the United States, for example, more than 35 offices quickly got up to speed on the new system via webinars for employees and face-to-face training for the HR group. “With any migration, there are moments when you hold your breath, but in this case, we had very few problems getting the system up and running,” says Crane. Lindholm adds, “There has been very little resistance to the system as users recognize its potential. Customizations are easy, and a lasting partnership has developed between Skanska and Oracle when help is needed. 
They listen to us.” Bjork elaborates on the implementation process from an IT perspective. “Deploying a SaaS system removes a lot of the complexity,” he says. “You can downsize the IT part and focus on the business part, which increases the probability of a successful implementation. If you want to scale the system, you make a quick phone call. That’s all it took recently when we added 4,000 users. We didn’t have to think about resizing the servers or hiring more IT people. Oracle does that for us, and they have provided very good support.” As a result, Skanska has been able to implement a single, cost-effective talent management solution across the organization to support its strategy to recruit and develop a world-class staff. Stakeholders are confident that they are providing the most efficient recruitment system possible for competent personnel at all levels within the company—from skilled workers at construction sites to top management at headquarters. And Skanska can retain skilled employees and ensure that they receive the development opportunities they need to grow and advance.

    Read the article

  • Oracle Customer Reference Forum – Apex IT – Oracle Sales Cloud

    - by Richard Lefebvre
Apex IT, an Oracle Platinum Partner, wins Nucleus Research's ROI Award with a 724% return. Learn how you can improve your ROI with Oracle Sales and Marketing Cloud. We are pleased to invite you to a discussion with Apex IT on industry trends, why sales automation is important, the decision making process for choosing Oracle Sales Cloud, and benefits achieved since going live. Apex IT works with clients large and small, assisting them at all stages in the process: organizing ideas and developing strategies, selecting the most appropriate package, implementing it for best results, and keeping systems optimized with long-term support. Please plan to register at least three hours prior to the event taking place in order to participate and get the dial-in information associated in due time. Speakers: Bryan Hinz, Vice President of Business Development, Apex IT (Speaker) Chris Haven, Senior Director Product Management, Oracle (Moderator) Organization Profile: Since 1997, Apex IT has helped public sector, corporate and higher education clients use technology to streamline their processes and increase productivity and profitability. Based on products and best practices from Oracle our experts provide a full range of enterprise solutions including CX/CRM and related applications that support marketing, sales, and service; HR and HR Helpdesk; and Business Intelligence. Our project approach is results-driven and our attitude is people-focused. Industry: Professional Services Products/Services: Oracle Sales Cloud Organization Website: http://apexit.com/ Event Description: In this informal reference call, you will have the opportunity to hear Apex IT discuss industry trends, why sales automation is important, the decision making process for choosing Oracle Sales Cloud, and benefits achieved since going live. The call will open with a brief overview, followed by discussion, and an open question and answer session. Please allow one hour for the call. Why Oracle: Apex IT needed a mobile-enabled sales force automation tool that could promote account collaboration and integrate with Microsoft Outlook. Oracle Sales Cloud met these needs and Apex IT’s requirements for: Improved collaborative selling Improved quality of customer engagement and information Improved business development Improved pipeline management Please plan to register at least three hours prior to the event taking place in order to participate and get the dial-in information associated in due time. After you register your information will be forwarded through an Approval Process. Once your registration request has been validated against the invitation database, you will receive an email confirmation with your registration details as long as there is availability. Please be advised that Apex IT will revise the registrants list and may dismiss registrations as they see fit. Note: To access more information at the corporate site you would need an Oracle.com account. If you do not already have an account, getting one is easy and free. Click on the link and you will be prompted to create an account. After you have created your account, you will be automatically returned to the full page description of this event. Register Now!

    Read the article

  • Oracle Virtualization at Oracle OpenWorld 2012

    - by Chris Kawalek
    Mini-Series Entry 1 of 3: Hands-On Virtualization This is the first entry of a 3 part mini-series aimed at highlighting server and desktop virtualization at this year’s Oracle OpenWorld.  Oracle OpenWorld 2012 is fast approaching! If you are as excited as we are about the fascinating new Oracle virtualization content featured at Oracle OpenWorld 2012, you won’t want to miss this blog mini-series. We will be highlighting sessions that cover advances and innovations in our products, our product strategy and roadmap, and hands on labs for step-by-step instructions from our field and product experts. In the blog mini-series you will learn about: The Oracle Virtualization general keynote session Hands-on labs  Key Oracle server and desktop virtualization sessions In this entry, we will cover the Oracle Virtualization keynote session and the hands-on labs you won't want to miss. General Session: Oracle Virtualization Strategy and Roadmap Session ID: GEN8725 Oracle offers the industry’s most complete and integrated virtualization portfolio enabling organizations to realize benefits beyond simple consolidation as they transform their data centers into flexible cloud-based infrastructures. Join Oracle executives and experts to learn about Oracle’s desktop-to-data-center virtualization solutions, such as the OS, with built-in management integration at all layers that can help you virtualize and manage the complete computing environment, from physical servers to virtual servers and applications. This “don’t-miss” session offers details of the latest product updates and strategy; product roadmaps; integration with enterprise applications; and real-world examples of how Oracle server, desktop, and storage virtualization is benefiting customers. Here are our top picks for Hands-On Labs for Oracle OpenWorld 2012: Oracle Virtual Desktop Infrastructure Performance and Tablet Mobility Session ID: HOL9907 This hands-on lab demonstrates the performance (using an industry-standard load tester) and roaming capabilities of Oracle Virtual Desktop Infrastructure with Oracle’s Sun Ray Clients, Apple iPad and other clients. Deploying an IaaS Environment with Oracle VM: Hands-On Lab  Session ID: HOL9558 This hands-on lab takes you through the planning and deployment of an infrastructure as a service (IaaS) environment with Oracle VM as the foundation. It covers a range of topics, from planning storage capacity, LUN creation, network bandwidth planning, and best practices to designing and streamlining the environment for ease of management. Learn from deeply experienced field engineers and product experts. Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-On Lab Session ID: HOL9559 This hands-on lab is for application architects or system administrators who will need to deploy and manage Oracle Applications. You’ll learn how Oracle VM Templates can turn you into a power user who can virtualize and deploy complex Oracle Applications in minutes. Longtime field-experienced engineers and product experts will show you, step by step, how to download and import templates and deploy the applications. x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance Session ID: HOL9870 The purpose of this hands-on lab is to demonstrate the functionality and usage of Oracle’s enterprise cloud infrastructure for x86 with Oracle VM 3.x. 
It covers:
Creation of VMs
Migration of VMs
Quick and easy deployment of Oracle applications with Oracle VM Templates
Usage of the Storage Connect plug-in for the Sun ZFS Storage Appliance
You can find these and other great sessions on the Oracle OpenWorld 2012 Content Catalogue. Start checking now to better plan and organize your week at the conference. Then you’ll be ready to sign up for all of your sessions in mid-July when the scheduling tool goes live. While the hands-on labs allow you to directly interact with Oracle virtualization products, the conference sessions allow you to hear from a wide variety of industry experts on how they're using the technology in real world deployments, solving specific challenges, and more. In tomorrow's entry, we'll start talking about the many conference sessions related to Oracle server and desktop virtualization you can attend during the show. See you then! - The Oracle Virtualization marketing team

    Read the article

  • Fast Fashion Freshness

    - by David Dorf
Fashion retailers such as H&M, Zara, and Wet Seal have perfected the fast fashion retailing model. The concept requires no replenishment in order to maintain assortment freshness and to create a sense of urgency for the consumer to purchase now. However, maintaining assortment freshness results in high product turnover, making markdown optimization a necessity. Wet Seal, for instance, needed to move from ad-hoc markdowns and dealing with surplus inventory to handling markdowns methodically across 8,000 SKUs with only a 12-15 week lifecycle (from DC receipt to exit). By optimizing and automating markdowns, Wet Seal is reaching their goal of assortment freshness, which in turn increases sales. If you're interested in learning more, register for a free webinar occurring on May 13th featuring Daniel Ryu, Vice President of Planning and Allocation at Wet Seal. He'll be discussing how the fast fashion retailer maintains their goal of assortment freshness.

    Read the article

  • ASP.NET Web API - Screencast series with downloadable sample code - Part 1

    - by Jon Galloway
There's a lot of great ASP.NET Web API content on the ASP.NET website at http://asp.net/web-api. I mentioned my screencast series in the original announcement post, but we've since added the sample code so I thought it was worth pointing the series out specifically. This is an introductory screencast series that walks through everything from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. So - let's watch them together! Grab some popcorn and pay attention, because these are short. After each video, I'll talk about what I thought was important. I'm embedding the videos using HTML5 (MP4) with Silverlight fallback, but if something goes wrong or your browser / device / whatever doesn't support them, I'll include the link to where the videos are more professionally hosted on the ASP.NET site. Note also if you're following along with the samples that, since Part 1 just looks at the File / New Project step, the screencast part numbers are one ahead of the sample part numbers - so screencast 4 matches with sample code demo 3. Note: I started this as one long post for all 6 parts, but as it grew over 2000 words I figured it'd be better to break it up. Part 1: Your First Web API [Video and code on the ASP.NET site] This screencast starts with an overview of why you'd want to use ASP.NET Web API:
Reach more clients (thinking beyond the browser to mobile clients, other applications, etc.)
Scale (who doesn't love the cloud?!)
Embrace HTTP (a focus on HTTP both on client and server really simplifies and focuses service interactions)
Next, I start a new ASP.NET Web API application and show some of the basics of the ApiController. We don't write any new code in this first step, just look at the example controller that's created by File / New Project.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Web.Http;

namespace NewProject_Mvc4BetaWebApi.Controllers
{
    public class ValuesController : ApiController
    {
        // GET /api/values
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }

        // GET /api/values/5
        public string Get(int id)
        {
            return "value";
        }

        // POST /api/values
        public void Post(string value)
        {
        }

        // PUT /api/values/5
        public void Put(int id, string value)
        {
        }

        // DELETE /api/values/5
        public void Delete(int id)
        {
        }
    }
}
Finally, we walk through testing the output of this API controller using browser tools. There are several ways you can test API output, including Fiddler (as described by Scott Hanselman in this post) and built-in developer tools available in all modern browsers. For simplicity I used the Internet Explorer 9 F12 developer tools, but you're of course welcome to use whatever you'd like. A few important things to note: This class derives from an ApiController base class, not the standard ASP.NET MVC Controller base class. They're similar in places where API and HTML-returning controller uses are similar, and different where API and HTML use differ. A good example of where those things are different is in the routing conventions. In an HTTP controller, there's no need for an "action" to be specified, since the HTTP verbs are the actions.
We don't need to do anything to map verbs to actions; when a request comes in to /api/values/5 with the DELETE HTTP verb, it'll automatically be handled by the Delete method in an ApiController. The comments above the API methods show sample URLs and HTTP verbs, so we can test out the first two GET methods by browsing to the site in IE9, hitting F12 to bring up the tools, and entering /api/values in the URL: That sample action returns a list of values. To get just one value back, we'd browse to /api/values/5: That's it for Part 1. In Part 2 we'll look at getting data (beyond hardcoded strings) and start building out a sample application. In the meantime, the short sketch below shows how the same endpoints can be called from code instead of the browser.
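As a side note beyond the screencast, the same endpoints can also be exercised from code rather than a browser. Here is a minimal console sketch using HttpClient from .NET 4.5; the port number is hypothetical, so substitute whatever IIS Express assigned to your project.
using System;
using System.Net.Http;

class ApiSmokeTest
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Hypothetical port - replace with the one IIS Express assigned to your project.
            client.BaseAddress = new Uri("http://localhost:12345/");

            // GET /api/values is routed to ValuesController.Get() by convention.
            string all = client.GetStringAsync("api/values").Result;
            Console.WriteLine(all);   // ["value1","value2"]

            // GET /api/values/5 is routed to ValuesController.Get(int id).
            string one = client.GetStringAsync("api/values/5").Result;
            Console.WriteLine(one);   // "value"
        }
    }
}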

    Read the article
