Search Results

Search found 9816 results on 393 pages for 'blade servers'.

  • CentOS 5.4 NFS v4 client file permissions differ from original files & NFS Share file contents

    - by p4guru
    I'm having a strange problem with an NFS share and file permissions on one of my two NFS clients: web1 has file permission issues, but web2 is fine. web1 and web2 are load-balanced web servers. My questions are:

    - How do I ensure the NFS share contents keep the same user/group permissions as the original files on the web1 server, the way they do on the web2 server?
    - How do I reverse what I did on web1? I tried the unmount command and got "command not found".

    Information: I'm using a 3-dedicated-server setup, all CentOS 5.4 64-bit:

    - web1 - NFS client with file permission issues
    - web2 - NFS client whose file permissions are OK
    - db1  - NFS share at /nfsroot

    The web2 NFS client was set up by my web host, while web1 was set up by me. I ran the following commands on web1, and they did update the db1 share at /nfsroot/site_css with the latest files from web1, but the file permissions don't stick, even though I used tar with -p to preserve them:

        cd /home/username/public_html/forums/script/
        tar -zcp site_css/ > site_css.tar.gz
        mount -t nfs4 nfsshareipaddress:/site_css /home/username/public_html/forums/scripts/site_css/ -o rw,soft
        cd /home/username/public_html/forums/script/
        tar -zxf site_css.tar.gz

    But on web1 the files are no longer owned by the username user/group - they are owned by nobody - while on web2 the permissions are correct. This is only a problem on web1. It looks like the numeric IDs aren't the same, and I'm not sure how to correct this.

    web1, with the incorrect user/group of nobody:

        ls -alh /home/username/public_html/forums/scripts/site_css
        total 48K
        drwxrwxrwx 2 nobody   nobody   4.0K Feb 22 02:37 ./
        drwxr-xr-x 3 username username 4.0K Feb 22 02:43 ../
        -rw-r--r-- 1 nobody   nobody      1 Nov 30  2006 index.html
        -rw-r--r-- 1 nobody   nobody   5.8K Feb 22 02:37 style-057c3df0-00011.css
        -rw-r--r-- 1 nobody   nobody   5.8K Feb 22 02:37 style-95001864-00002.css
        -rw-r--r-- 1 nobody   nobody   5.8K Feb 18 05:37 style-b1879ba7-00002.css
        -rw-r--r-- 1 nobody   nobody   5.8K Feb 18 05:37 style-cc2f96c9-00011.css

    web1 numeric IDs:

        ls -n /home/username/public_html/forums/scripts/site_css
        total 48
        drwxrwxrwx 2  99  99 4096 Feb 22 02:37 ./
        drwxr-xr-x 3 503 500 4096 Feb 22 02:43 ../
        -rw-r--r-- 1  99  99    1 Nov 30  2006 index.html
        -rw-r--r-- 1  99  99 5876 Feb 22 02:37 style-057c3df0-00011.css
        -rw-r--r-- 1  99  99 5877 Feb 22 02:37 style-95001864-00002.css
        -rw-r--r-- 1  99  99 5877 Feb 18 05:37 style-b1879ba7-00002.css
        -rw-r--r-- 1  99  99 5876 Feb 18 05:37 style-cc2f96c9-00011.css

    web2, with the correct username user/group permissions:

        ls -alh /home/username/public_html/forums/scripts/site_css
        total 48K
        drwxrwxrwx 2 root     root     4.0K Feb 22 02:37 ./
        drwxr-xr-x 3 username username 4.0K Dec  2 14:51 ../
        -rw-r--r-- 1 username username    1 Nov 30  2006 index.html
        -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-057c3df0-00011.css
        -rw-r--r-- 1 username username 5.8K Feb 22 02:37 style-95001864-00002.css
        -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-b1879ba7-00002.css
        -rw-r--r-- 1 username username 5.8K Feb 18 05:37 style-cc2f96c9-00011.css

    web2 numeric IDs:

        ls -n /home/username/public_html/forums/scripts/site_css
        total 48
        drwxrwxrwx 2 503 500 4096 Feb 22 02:37 ./
        drwxr-xr-x 3 503 500 4096 Dec  2 14:51 ../
        -rw-r--r-- 1 503 500    1 Nov 30  2006 index.html
        -rw-r--r-- 1 503 500 5876 Feb 22 02:37 style-057c3df0-00011.css
        -rw-r--r-- 1 503 500 5877 Feb 22 02:37 style-95001864-00002.css
        -rw-r--r-- 1 503 500 5877 Feb 18 05:37 style-b1879ba7-00002.css
        -rw-r--r-- 1 503 500 5876 Feb 18 05:37 style-cc2f96c9-00011.css

    I checked /nfsroot/site_css on db1, and the user/group ownership was incorrect for the newer files dated Feb 22 - they were owned by root rather than username:

        ls -alh /nfsroot/site_css
        total 44K
        drwxrwxrwx  2 root     root 4.0K Feb 22 02:37 .
        drwxr-xr-x 17 root     root 4.0K Feb 17 12:06 ..
        -rw-r--r--  1 root     root    1 Nov 30  2006 index.html
        -rw-r--r--  1 root     root 5.8K Feb 22 02:37 style-057c3df0-00011.css
        -rw-r--r--  1 root     root 5.8K Feb 22 02:37 style-95001864-00002.css
        -rw-------  1 username nfs  5.8K Feb 18 05:37 style-b1879ba7-00002.css
        -rw-------  1 username nfs  5.8K Feb 18 05:37 style-cc2f96c9-00011.css

    I then ran chmod and chown on db1 to set the right ownership, so the newer Feb 22 files now look like this on db1:

        ls -alh /nfsroot/site_css
        total 44K
        drwxrwxrwx  2 root     root     4.0K Feb 22 02:37 .
        drwxr-xr-x 17 root     root     4.0K Feb 17 12:06 ..
        -rw-r--r--  1 username username    1 Nov 30  2006 index.html
        -rw-r--r--  1 username username 5.8K Feb 22 02:37 style-057c3df0-00011.css
        -rw-r--r--  1 username username 5.8K Feb 22 02:37 style-95001864-00002.css
        -rw-r--r--  1 username username 5.8K Feb 18 05:37 style-b1879ba7-00002.css
        -rw-r--r--  1 username username 5.8K Feb 18 05:37 style-cc2f96c9-00011.css

    But web1 still shows the files owned by nobody, not matching what web2 and db1 are set to, while web2 shows the correct permissions:

        ls -alh /home/username/public_html/forums/scripts/site_css
        total 48K
        drwxrwxrwx 2 nobody   nobody   4.0K Feb 22 02:37 ./
        drwxr-xr-x 3 username username 4.0K Feb 22 02:43 ../
        -rw-r--r-- 1 nobody   nobody      1 Nov 30  2006 index.html
        -rw-r--r-- 1 nobody   nobody   5.8K Feb 22 02:37 style-057c3df0-00011.css
        -rw-r--r-- 1 nobody   nobody   5.8K Feb 22 02:37 style-95001864-00002.css
        -rw-r--r-- 1 nobody   nobody   5.8K Feb 18 05:37 style-b1879ba7-00002.css
        -rw-r--r-- 1 nobody   nobody   5.8K Feb 18 05:37 style-cc2f96c9-00011.css

    It's all very confusing, so any help is very much appreciated. Thanks!
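    A minimal sketch (not from the original post) of the usual checks when an NFSv4 client shows owners as nobody/99: NFSv4 maps owners by name through rpc.idmapd, so a mismatched idmapd domain between client and server produces exactly this symptom. The domain value below is a placeholder; the paths are taken from the question, and the command to undo the mount is umount, not unmount:

        # undo the mount made on web1
        umount /home/username/public_html/forums/scripts/site_css

        # on web1, web2 and db1, make sure /etc/idmapd.conf carries the same domain, e.g.
        #   [General]
        #   Domain = example.com      <- placeholder, use your real domain on all 3 servers
        service rpcidmapd restart

        # remount and re-check ownership
        mount -t nfs4 nfsshareipaddress:/site_css \
            /home/username/public_html/forums/scripts/site_css/ -o rw,soft
        ls -alh /home/username/public_html/forums/scripts/site_css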

  • Windows DNS Server 2008 R2 erroneously returns SERVFAIL

    - by Easter Sunshine
    I have a Windows 2008 R2 domain controller which is also a DNS server. When resolving certain TLDs, it returns a SERVFAIL: $ dig bogus. ; <<>> DiG 9.8.1 <<>> bogus. ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 31919 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;bogus. IN A I get the same result for a real TLD like com. when querying the DC as shown above. Compare to a BIND server that is working as expected: $ dig bogus. @128.59.59.70 ; <<>> DiG 9.8.1 <<>> bogus. @128.59.59.70 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 30141 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;bogus. IN A ;; AUTHORITY SECTION: . 10800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2012012501 1800 900 604800 86400 ;; Query time: 18 msec ;; SERVER: 128.59.59.70#53(128.59.59.70) ;; WHEN: Wed Jan 25 14:09:14 2012 ;; MSG SIZE rcvd: 98 Similarly, when I query my Windows DNS server with dig . any, I get a SERVFAIL but the BIND servers return the root zone as expected. This sounds similar to the issue described in http://support.microsoft.com/kb/968372 except I am using two forwarders (128.59.59.70 from above as well as 128.59.62.10) and falling back to root hints so the preconditions to expose the issue are not the same. Nevertheless, I also applied the MaxCacheTTL registry fix as described and restarted DNS and the whole server as well but the problem persists. The problem occurs on all domain controllers in this domain and has occurred since half a year ago, even though the servers are getting automatic Windows updates. EDIT Here is a debug log. The client is 160.39.114.110, which is my workstation. 1/25/2012 2:16:01 PM 0E08 PACKET 000000001EA6BFD0 UDP Rcv 160.39.114.110 2e94 Q [0001 D NOERROR] A (5)bogus(0) UDP question info at 000000001EA6BFD0 Socket = 508 Remote addr 160.39.114.110, port 49710 Time Query=1077016, Queued=0, Expire=0 Buf length = 0x0fa0 (4000) Msg length = 0x0017 (23) Message: XID 0x2e94 Flags 0x0100 QR 0 (QUESTION) OPCODE 0 (QUERY) AA 0 TC 0 RD 1 RA 0 Z 0 CD 0 AD 0 RCODE 0 (NOERROR) QCOUNT 1 ACOUNT 0 NSCOUNT 0 ARCOUNT 0 QUESTION SECTION: Offset = 0x000c, RR count = 0 Name "(5)bogus(0)" QTYPE A (1) QCLASS 1 ANSWER SECTION: empty AUTHORITY SECTION: empty ADDITIONAL SECTION: empty 1/25/2012 2:16:01 PM 0E08 PACKET 000000001EA6BFD0 UDP Snd 160.39.114.110 2e94 R Q [8281 DR SERVFAIL] A (5)bogus(0) UDP response info at 000000001EA6BFD0 Socket = 508 Remote addr 160.39.114.110, port 49710 Time Query=1077016, Queued=0, Expire=0 Buf length = 0x0fa0 (4000) Msg length = 0x0017 (23) Message: XID 0x2e94 Flags 0x8182 QR 1 (RESPONSE) OPCODE 0 (QUERY) AA 0 TC 0 RD 1 RA 1 Z 0 CD 0 AD 0 RCODE 2 (SERVFAIL) QCOUNT 1 ACOUNT 0 NSCOUNT 0 ARCOUNT 0 QUESTION SECTION: Offset = 0x000c, RR count = 0 Name "(5)bogus(0)" QTYPE A (1) QCLASS 1 ANSWER SECTION: empty AUTHORITY SECTION: empty ADDITIONAL SECTION: empty Every option in the debug log box was checked except "filter by IP". By contrast, when I query, say, accounts.google.com, I can see the DNS server go out to its forwarder (128.59.59.70, for example). In this case, I didn't see any packets going out from my DNS server even though bogus. was not in the cache (the debug log was already running and this is the first time I queried this server for bogus. or any TLD). It just returned SERVFAIL without consulting any other DNS server, as in the Microsoft KB article linked above.
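    For reference, a hedged sketch of commands that can help narrow this down on the DC. They assume only the standard dnscmd tool on Server 2008 R2; disabling EDNS probes is a known workaround for some forwarder/firewall combinations, not a confirmed fix for this case:

        REM flush the server-side cache so a stale SERVFAIL is not simply replayed
        dnscmd /clearcache

        REM dump the server configuration (forwarders, recursion, MaxCacheTTL, ...)
        dnscmd /info

        REM known workaround when an upstream device mishandles EDNS0 queries
        dnscmd /config /EnableEDnsProbes 0

        REM re-test against the local server
        nslookup bogus. 127.0.0.1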

  • SSH connection times out

    - by mark
    Given: vm - a WinXPsp3 virtual machine hosted by a Win7sp1 physical machine alice is the user on vm srv - a Win2008R2sp1 server bob is the user on srv quake - a linux server mark is the user on quake Both vm and srv have the same new installation of cygwin (1.7.9) and openssh. Firewall service is disabled on vm (and its host) and on srv All the machines can be pinged from all the machines. ssh mark@quake works OK from both vm and srv. ssh bob@srv works OK from both quake and vm. ssh alice@vm works on the vm itself only, but it fails on the other two machines: alice@vm ~ $ ssh alice@vm alice@vm's password: Last login: Tue Oct 25 23:42:09 2011 from vm.shunra.net [mark@Quake ~]$ ssh -vvv alice@vm OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to vm [172.30.2.60] port 22. debug1: connect to address 172.30.2.60 port 22: Connection timed out ssh: connect to host vm port 22: Connection timed out bob@Srv ~ $ ssh -vvv alice@vm OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011 debug1: Reading configuration data /etc/ssh_config debug2: ssh_connect: needpriv 0 debug1: Connecting to vm [172.30.2.60] port 22. debug1: connect to address 172.30.2.60 port 22: Connection timed out ssh: connect to host vm port 22: Connection timed out I used ssh-host-config both on vm and srv to configure the ssh to run as a windows service. Besides that I did nothing else. Can anyone help me troubleshoot this issue? Thank you very much. EDIT The virtual machine software is VMWare Workstation 7.1.4. I think the problem is in its settings, but I have no idea where exactly. The Network Adapter is set to Bridged. EDIT2 All the machines are located in the company lab, I think all of them are on the same segment, but I may be wrong. Below is the ipconfig /all output for each machine (skipping the linux server). I have deleted the Tunnel adapters to keep the output minimal. If anyone thinks they matter, do tell so and I will post them as well. In addition ping output is given to show that DNS is correct. Something else, may be relevant, may be not. Doing psexec to srv works OK, whereas to vm failes with Access Denied. srv: C:\Windows\system32>ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : srv Primary Dns Suffix . . . . . . . : shunra.net Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : shunra.net Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Broadcom BCM5709C NetXtreme II GigE (NDIS VBD Client) Physical Address. . . . . . . . . : E4-1F-13-6D-F3-00 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes IPv4 Address. . . . . . . . . . . : 172.30.6.9(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.248.0 Default Gateway . . . . . . . . . : 172.30.0.254 DNS Servers . . . . . . . . . . . : 172.30.1.1 172.30.1.2 NetBIOS over Tcpip. . . . . . . . 
: Enabled C:\Windows\system32>ping vm Pinging vm.shunra.net [172.30.2.60] with 32 bytes of data: Reply from 172.30.2.60: bytes=32 time=1ms TTL=128 Reply from 172.30.2.60: bytes=32 time=4ms TTL=128 Reply from 172.30.2.60: bytes=32 time<1ms TTL=128 Reply from 172.30.2.60: bytes=32 time<1ms TTL=128 Ping statistics for 172.30.2.60: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 4ms, Average = 1ms C:\Windows\system32> vm: C:\>ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : vm Primary Dns Suffix . . . . . . . : shunra.net Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : shunra.net shunranet Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : shunranet Description . . . . . . . . . . . : VMware Accelerated AMD PCNet Adapter Physical Address. . . . . . . . . : 00-0C-29-8F-A0-0B Dhcp Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IP Address. . . . . . . . . . . . : 172.30.2.60 Subnet Mask . . . . . . . . . . . : 255.255.248.0 Default Gateway . . . . . . . . . : 172.30.0.254 DHCP Server . . . . . . . . . . . : 172.30.1.1 DNS Servers . . . . . . . . . . . : 172.30.1.1 172.30.1.2 Lease Obtained. . . . . . . . . . : Tuesday, October 25, 2011 18:16:34 Lease Expires . . . . . . . . . . : Wednesday, November 02, 2011 18:16:34 C:\>ping srv Pinging srv.shunra.net [172.30.6.9] with 32 bytes of data: Reply from 172.30.6.9: bytes=32 time=1ms TTL=128 Reply from 172.30.6.9: bytes=32 time<1ms TTL=128 Reply from 172.30.6.9: bytes=32 time<1ms TTL=128 Reply from 172.30.6.9: bytes=32 time<1ms TTL=128 Ping statistics for 172.30.6.9: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 1ms, Average = 0ms C:\> vm-host (the host machine of the vm): C:\>ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : vm-host Primary Dns Suffix . . . . . . . : shunra.net Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : shunra.net Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Realtek RTL8168D/8111D Family PCI-E Gigabit Ethernet NIC (NDIS 6.20) Physical Address. . . . . . . . . : 6C-F0-49-E7-E9-30 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::f59d:7f6e:1510:6f%10(Preferred) IPv4 Address. . . . . . . . . . . : 172.30.6.7(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.248.0 Default Gateway . . . . . . . . . : 172.30.0.254 DHCPv6 IAID . . . . . . . . . . . : 242020425 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-13-CC-39-80-6C-F0-49-E7-E9-30 DNS Servers . . . . . . . . . . . : 172.30.1.1 194.90.1.5 NetBIOS over Tcpip. . . . . . . . : Enabled Ethernet adapter VMware Network Adapter VMnet1: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : VMware Virtual Ethernet Adapter for VMnet1 Physical Address. . . . . . . . . : 00-50-56-C0-00-01 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::cd92:38c0:9a6d:c008%16(Preferred) Autoconfiguration IPv4 Address. . : 169.254.192.8(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . 
. . . . . . : DHCPv6 IAID . . . . . . . . . . . : 352342102 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-13-CC-39-80-6C-F0-49-E7-E9-30 DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1 fec0:0:0:ffff::2%1 fec0:0:0:ffff::3%1 NetBIOS over Tcpip. . . . . . . . : Enabled Ethernet adapter VMware Network Adapter VMnet8: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : VMware Virtual Ethernet Adapter for VMnet8 Physical Address. . . . . . . . . : 00-50-56-C0-00-08 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::edb9:b78c:a504:593b%17(Preferred) IPv4 Address. . . . . . . . . . . : 192.168.5.1(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : DHCPv6 IAID . . . . . . . . . . . : 369119318 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-13-CC-39-80-6C-F0-49-E7-E9-30 DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1 fec0:0:0:ffff::2%1 fec0:0:0:ffff::3%1 NetBIOS over Tcpip. . . . . . . . : Enabled C:\>ping srv Pinging srv.shunra.net [172.30.6.9] with 32 bytes of data: Reply from 172.30.6.9: bytes=32 time<1ms TTL=128 Reply from 172.30.6.9: bytes=32 time<1ms TTL=128 Reply from 172.30.6.9: bytes=32 time<1ms TTL=128 Reply from 172.30.6.9: bytes=32 time<1ms TTL=128 Ping statistics for 172.30.6.9: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms C:\>ping vm Pinging vm.shunra.net [172.30.2.60] with 32 bytes of data: Reply from 172.30.2.60: bytes=32 time<1ms TTL=128 Reply from 172.30.2.60: bytes=32 time<1ms TTL=128 Reply from 172.30.2.60: bytes=32 time<1ms TTL=128 Reply from 172.30.2.60: bytes=32 time<1ms TTL=128 Ping statistics for 172.30.2.60: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms C:\> EDIT3 I have just checked - the vm-host is able to ssh to the vm machine! I still do not know how to leverage this discovery to solve the problem.
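    As a starting point (not from the original post), a few checks on the XP guest and from srv that usually separate a listener problem from a path or filtering problem; all of these are stock Windows/Cygwin commands:

        REM on vm: is sshd actually listening on 0.0.0.0:22, and is the service running?
        netstat -ano | findstr :22
        cygrunsrv -Q sshd

        REM on vm: confirm the XP firewall really is off
        netsh firewall show state

        REM from srv (or quake): test the TCP path to port 22 directly
        telnet vm 22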

  • arp problems with transparent bridge on linux

    - by Mink
    I've been trying to secure my virtual machines on my esx server by putting them behind a transparent bridge with 2 interfaces, one in front, one at the back. My intention is to put all the firewall rules in one place (instead of on each virtual server). I've been using as bridge a blank new virtual machine based on arch linux (but I suspect it doesn't matter which brand of linux it is). What I have is 2 virtual switchs (thus two Virtual Network, VN_front and VN_back), each with 2 types of ports (switched/separated or promiscious/where the machine can see all packets). On my bridge machine, I've set up 2 virtual NIC, one on VN_front, one on VN_back, both in promisc mode. I've created a bridge br0 with both NIC in it: brctl addbr br0 brctl stp br0 off brctl addif br0 front_if brctl addif br0 back_if Then brought them up: ifconfig front_if 0.0.0.0 promisc ifconfig back_if 0.0.0.0 promisc ifconfig br0 0.0.0.0 (I use promisc mode, because I'm not sure I can do without, thinking that maybe the packets don't reach the NICs) Then I took one of my virtual server sitting on VN_front, and plugged it to VN_back instead (that's the nifty use case I'm thinking about, being able to move my servers around just by changing the VN they are plugged into, without changing anything in the configuration). Then I looked into the macs "seen" by my addressless bridge using brctl showmacs br0 and it did show my server from both sides: I get something that looks like this : port no mac addr is local? ageing timer 2 00:0c:29:e1:54:75 no 9.27 1 00:0c:29:fd:86:0c no 9.27 2 00:50:56:90:05:86 no 73.38 1 00:50:56:90:05:88 no 0.10 2 00:50:56:90:05:8b yes 0.00 << FRONT VN 1 00:50:56:90:05:8c yes 0.00 << BACK VN 2 00:50:56:90:19:18 no 13.55 2 00:50:56:90:3c:cf no 13.57 the thing is that the server that are plugged in front/back are not shown on the correct port. I suspect some horrible thing happening in the ARP-world... :-/ If I ping from a front virtual server to a back virtual server, I can only see the back machine if that back machine pings something in the front. As soon as I stop the ping from the back machine, the ping from the front machine stops getting through... I've noticed that if the back machine pings, then its port on the bridge is the correct one... I've tried to play with the arp_ switch of /proc/sys, but with no clear effect on the end result... /proc/sys/net/ipv4/ip_forward doesn't seem to be of any use when using a bridge (seems it's all taken care of by brctl) /proc/sys/net/ipv4/conf//arp_ don't seem to change much either... (tried arp_announce to 2 or 8 - like suggested elsewhere - and arp_ignore to 0 or 1 ) All the examples I've seen have a different subnet on either side like 10.0.1.0/24 and 10.0.2.0/24... In my case I want 10.0.1.0/24 on both side (just like a transparent switch - except it's a hidden fw ). Turning stp on/off doesn't seem to have any impact on my issue. It's as if the arp packets where getting through the bridge, corrupting the other side with false data... I've tried to use the -arp on each interface, br0, front, back... it breaks the thing altogether... I suspect it has something to do with both side being on the same subnet... I've thought about putting all my machine behind the fw, so as to have all the same subnet at the back... but I'm stuck with my provider's gateway standing at the front with part of my subnet (in fact 3 appliance to route the whole subnet), so I'll always have ips from the same subnet on both side, whatever I do... 
    (I'm using fixed front IPs on my delegated subnet.) I'm at a loss, so thanks for any help. Has anyone tried something like this from within ESXi? It's not just a stunt: the idea is to have something like fail2ban running on some of the servers, sending the IPs they ban to the bridge/firewall so that it can ban them too, protecting all the other servers from the same attacker in one go, and to allow for a honeypot that would trigger the firewall on any suitable response, and so on. I'm aware I could use something like snort, but it addresses a completely different kind of problem in a completely different way.
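    For what it's worth, a short checklist sketch for a Linux bridge inside ESX/ESXi: the usual culprits are the vSwitch security policy (the bridge forwards frames with other machines' source MACs, so Promiscuous Mode, MAC Address Changes and Forged Transmits generally all need to be set to Accept on both port groups) and bridge-netfilter quietly passing bridged frames to iptables. The interface names are the front_if/back_if from the question:

        # on the bridge VM: confirm both ports are forwarding and watch the MAC table
        brctl showstp br0
        watch -n1 brctl showmacs br0

        # if the br_netfilter hooks are active, disable them while testing
        sysctl net.bridge.bridge-nf-call-iptables=0
        sysctl net.bridge.bridge-nf-call-arptables=0

        # watch ARP on both legs while pinging front<->back to see where replies stop
        tcpdump -ni front_if arp &
        tcpdump -ni back_if arp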

  • Secondary DHCP server won't start on Centos 6.2

    - by Slowjoe
    I'm trying to create a backup DHCP server. Server times are in sync. Primary server starts fine. Secondary server won't start. Error from /var/log/messages is: Sep 15 14:47:45 stream dhcpd: Copyright 2004-2010 Internet Systems Consortium. Sep 15 14:47:45 stream dhcpd: All rights reserved. Sep 15 14:47:45 stream dhcpd: For info, please visit https://www.isc.org/software/dhcp/ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 25: invalid statement in peer declaration Sep 15 14:47:45 stream dhcpd: #011max-response-default Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 41: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 49: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: WARNING: Host declarations are global. They are not limited to the scope you declared them in. Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 70: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 78: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: Configuration file errors encountered -- exiting Sep 15 14:47:45 stream dhcpd: Sep 15 14:47:45 stream dhcpd: This version of ISC DHCP is based on the release available Sep 15 14:47:45 stream dhcpd: on ftp.isc.org. Features have been added and other changes Sep 15 14:47:45 stream dhcpd: have been made to the base software release in order to make Sep 15 14:47:45 stream dhcpd: it work better with this distribution. Sep 15 14:47:45 stream dhcpd: Sep 15 14:47:45 stream dhcpd: Please report for this software via the CentOS Bugs Database: Sep 15 14:47:45 stream dhcpd: http://bugs.centos.org/ Sep 15 14:47:45 stream dhcpd: Sep 15 14:47:45 stream dhcpd: exiting. Config file contents: # DHCP Server Configuration file. 
# see /usr/share/doc/dhcp*/dhcpd.conf.sample # see 'man 5 dhcpd.conf' # option domain-name "eng.foo.com"; option domain-name-servers ns0.eng.foo.com, ns1.eng.foo.com; option ntp-servers ntp.eng.foo.com; #option time-servers ntp.eng.foo.com; default-lease-time 3600; max-lease-time 7200; authoritative; log-facility local7; failover peer "dhcp-failover" { secondary; address 10.0.1.70; port 647; peer address 10.0.1.11; peer port 647; max-response-default 30; max-unacked-updates 10; load balance max seconds 3; } # # Management subnet # subnet 10.0.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option broadcast-address 10.0.0.255; option routers 10.0.0.1; option domain-search "eng.foo.com", "foo.com"; # Unknown clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 300; range 10.0.0.240 10.0.0.249; allow unknown-clients; } # Known clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 28800; range 10.0.0.150 10.0.0.199; deny unknown-clients; } include "/etc/dhcp/dhcpd.conf-engmgmt"; } # # Data subnet # subnet 10.0.1.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option broadcast-address 10.0.1.255; option routers 10.0.1.1; option domain-search "eng.foo.com", "foo.com"; # Unknown clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 300; range 10.0.1.240 10.0.1.249; allow unknown-clients; } # Known clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 28800; range 10.0.1.150 10.0.1.199; deny unknown-clients; } # For centos network installs if substring (option vendor-class-identifier, 0, 8) = "anaconda" { filename "/autohome/distro/ks/"; next-server eng-data.eng.foo.com; } # For PXE network installs if substring (option vendor-class-identifier, 0, 9) = "PXEClient" { filename "pxelinux.0"; next-server eng-data.eng.foo.com; } # For KVM PXE network installs if substring (option vendor-class-identifier, 0, 9) = "Etherboot" { filename "pxelinux.0"; next-server eng-data.eng.foo.com; } include "/etc/dhcp/dhcpd.conf-engdata"; }
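    Judging from the parse error, dhcpd is rejecting max-response-default, which is not an ISC dhcpd statement; the failover parameter is max-response-delay, and once the peer declaration parses, the later "failover peer dhcp-failover: not found" errors should disappear as well. A corrected sketch of the secondary's peer block (addresses unchanged from the question), plus a syntax check before restarting the service:

        failover peer "dhcp-failover" {
            secondary;
            address 10.0.1.70;
            port 647;
            peer address 10.0.1.11;
            peer port 647;
            max-response-delay 30;       # was: max-response-default
            max-unacked-updates 10;
            load balance max seconds 3;
        }

        # validate the whole file without starting the daemon
        dhcpd -t -cf /etc/dhcp/dhcpd.conf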

  • Network outside internal not reaching TMG Forefront 2010 (Hyper-V environment)

    - by Pascal
    Below is my environment: I have 1 physical machine running Windows 2008 R2, with the Hyper-V role. This machine has 3 physical NICs: One for Internet One for Internal Network One for Wireless Network All 3 have their respective Virtual Networks in Hyper-V, and I have an extra Private virutal machine network for a DMZ Network. In one of the virtual machines, I have TMG Forefront 2010 SP1 installed, with all 4 networks available to it. Below is the IPCONFIG /ALL at the firewall: Windows IP Configuration Host Name . . . . . . . . . . . . : FRW-EXP1-02 Primary Dns Suffix . . . . . . . : exp1.eti.br Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : Yes WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : exp1.eti.br Ethernet adapter Internet: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Microsoft Virtual Machine Bus Network Adapter #4 Physical Address. . . . . . . . . : 00-15-5D-01-06-0E DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::6d05:6033:4cfc:bdf5%15(Preferred) IPv4 Address. . . . . . . . . . . : 189.100.110.xxx(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.240.0 Lease Obtained. . . . . . . . . . : quarta-feira, 5 de janeiro de 2011 11:17:24 Lease Expires . . . . . . . . . . : quarta-feira, 5 de janeiro de 2011 16:07:02 Default Gateway . . . . . . . . . : 189.100.96.xxx DHCP Server . . . . . . . . . . . : 201.6.2.43 DHCPv6 IAID . . . . . . . . . . . : 436213085 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-14-6D-75-6F-00-15-5D-01-06-0B DNS Servers . . . . . . . . . . . : 201.6.2.163 201.6.2.43 NetBIOS over Tcpip. . . . . . . . : Enabled Ethernet adapter Rede Interna: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Microsoft Virtual Machine Bus Network Adapter #3 Physical Address. . . . . . . . . : 00-15-5D-01-06-0C DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::51ff:4723:ce4c:bbc3%14(Preferred) IPv4 Address. . . . . . . . . . . : 10.50.75.10(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : DHCPv6 IAID . . . . . . . . . . . : 352327005 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-14-6D-75-6F-00-15-5D-01-06-0B DNS Servers . . . . . . . . . . . : 10.50.75.1 10.50.75.2 NetBIOS over Tcpip. . . . . . . . : Enabled Ethernet adapter DMZ: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Microsoft Virtual Machine Bus Network Adapter #2 Physical Address. . . . . . . . . : 00-15-5D-01-06-0A DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::d4c5:75cf:e9aa:73e1%13(Preferred) IPv4 Address. . . . . . . . . . . : 192.168.10.1(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : DHCPv6 IAID . . . . . . . . . . . : 301995357 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-14-6D-75-6F-00-15-5D-01-06-0B DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1 fec0:0:0:ffff::2%1 fec0:0:0:ffff::3%1 NetBIOS over Tcpip. . . . . . . . : Enabled Ethernet adapter Wireless: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Microsoft Virtual Machine Bus Network Adapter Physical Address. . . . . . . . . : 00-15-5D-01-06-0B DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . 
: fe80::459:8ca6:d02:8da1%11(Preferred) IPv4 Address. . . . . . . . . . . : 192.168.1.10(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : DHCPv6 IAID . . . . . . . . . . . : 234886493 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-14-6D-75-6F-00-15-5D-01-06-0B DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1 fec0:0:0:ffff::2%1 fec0:0:0:ffff::3%1 NetBIOS over Tcpip. . . . . . . . : Enabled I have the Networks below at Forefront: External: IP addresses external to the Forefront TMG Networks Internal: 10.50.75.0 - 10.50.75.255 Local Host: Perimiter: 192.168.10.0 - 192.168.10.255 Wireless: 192.168.1.0 - 192.168.1.255 In the Networks Rules, I have: 1 => Route => Local Host => All Networks 2 => Route => Quarantined; VPN => Internal 3 => NAT => Internal; VPN => Perimiter 4 => NAT => Internal; Perimiter; Quarantined; VPN; Wireless => External My problem is that I can only communicate with the Internal and External networks. If a ping www.google.com or 10.50.75.21 from the Forefront VM, I get answer backs without a problem. If I try to ping a machine at the Perimiter network or the Wireless network, it doesn't get routed back to Forefront, and it's the default gateway on all Networks. Here as ping samples: PS C:\Users\Administrator.TPB1> ping www.google.com Pinging www.l.google.com [64.233.163.104] with 32 bytes of data: Reply from 64.233.163.104: bytes=32 time=11ms TTL=58 Reply from 64.233.163.104: bytes=32 time=8ms TTL=58 Ping statistics for 64.233.163.104: Packets: Sent = 2, Received = 2, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 8ms, Maximum = 11ms, Average = 9ms Control-C PS C:\Users\Administrator.TPB1> ping 10.50.75.21 Pinging 10.50.75.21 with 32 bytes of data: Reply from 10.50.75.21: bytes=32 time=1ms TTL=128 Reply from 10.50.75.21: bytes=32 time=1ms TTL=128 Reply from 10.50.75.21: bytes=32 time=1ms TTL=128 Reply from 10.50.75.21: bytes=32 time=1ms TTL=128 Ping statistics for 10.50.75.21: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 1ms, Maximum = 1ms, Average = 1ms PS C:\Users\Administrator.TPB1> ping 192.168.10.3 Pinging 192.168.10.3 with 32 bytes of data: Reply from 192.168.10.1: Destination host unreachable. Request timed out. Request timed out. Request timed out. Ping statistics for 192.168.10.3: Packets: Sent = 4, Received = 1, Lost = 3 (75% loss), PS C:\Users\Administrator.TPB1> The ping to the 192.168.10.3 gets the Destination host unreachable. Below is the ipconfig for the perimiter VM: PS C:\Users\Administrator.Administrator> ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : app-exp1-02 Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Unkown IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Microsoft Virtual Machine Bus Network Adapter Physical Address. . . . . . . . . : 00-15-5D-01-06-08 DHCP Enabled. . . . . . . . . . . : No IPv4 Address. . . . . . . . . . . : 192.168.10.3 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.10.1 DNS Servers . . . . . . . . . . . : 201.6.2.163 201.6.2.43 Trying to ping 192.168.10.1 ( the gateway ) from the DMZ machine also does not work. 
    When I use Logs & Reports to monitor packets from the Wireless and Perimiter networks, I don't see any of the packets (like PING or HTTP) that I try to send, but I do get a lot of spoofing messages for NetBIOS broadcasts. It's as if Forefront thinks the traffic is coming from a different network, but I don't know why. Please help! Thanks.

  • slow DNS resolution

    - by Ehsan
    I have a DNS server that resolves all queries for an internal group of servers. It is a bind on CentOS 5.5 (same as RHEL5) and I have set it up to allow recursion and resolve direction without any forwarders. The problem I am facing is that it takes a freakishly long amount of time to resolve a name for the first time. (in the magnitudes of 20 sec) This causes clients to give timeout. When I set it to forward all to Google's public DNS, i.e. 8.8.8.8+8.8.4.4, it works very nicely (within a second). I tried monitoring the traffic on the net to see why it is doing this: [root@ns1 ~]# tcpdump -nnvvvA -s0 udp tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 23:06:36.137797 IP (tos 0x0, ttl 64, id 35903, offset 0, flags [none], proto: UDP (17), length: 60) 172.17.1.10.36942 > 172.17.1.4.53: [udp sum ok] 19773+ A? www.paypal.com. (32) E..<[email protected]... .....N.5.(6.M=...........www.paypal.com..... 23:06:36.140594 IP (tos 0x0, ttl 64, id 56477, offset 0, flags [none], proto: UDP (17), length: 71) 172.17.1.4.6128 > 192.35.51.30.53: [udp sum ok] 10105 [1au] A? www.paypal.com. ar: . OPT UDPsize=4096 (43) E..G....@........#3....5.3fR'y...........www.paypal.com.......)........ 23:06:38.149756 IP (tos 0x0, ttl 64, id 13078, offset 0, flags [none], proto: UDP (17), length: 71) 172.17.1.4.52425 > 192.54.112.30.53: [udp sum ok] 54892 [1au] A? www.paypal.com. ar: . OPT UDPsize=4096 (43) [email protected]&.....6p....5.3.q.l...........www.paypal.com.......)........ 23:06:40.159725 IP (tos 0x0, ttl 64, id 43016, offset 0, flags [none], proto: UDP (17), length: 71) 172.17.1.4.24059 > 192.42.93.30.53: [udp sum ok] 11205 [1au] A? www.paypal.com. ar: . OPT UDPsize=4096 (43) E..G....@..@.....*].]..5.3..+............www.paypal.com.......)........ 23:06:41.141403 IP (tos 0x0, ttl 64, id 35904, offset 0, flags [none], proto: UDP (17), length: 60) 172.17.1.10.36942 > 172.17.1.4.53: [udp sum ok] 19773+ A? www.paypal.com. (32) E..<.@..@..@... .....N.5.(6.M=...........www.paypal.com..... 23:06:42.169652 IP (tos 0x0, ttl 64, id 44001, offset 0, flags [none], proto: UDP (17), length: 60) 172.17.1.4.9141 > 192.55.83.30.53: [udp sum ok] 1184 A? www.paypal.com. (32) E..<[email protected].#..5.(...............www.paypal.com..... 23:06:42.207295 IP (tos 0x0, ttl 54, id 38004, offset 0, flags [none], proto: UDP (17), length: 205) 192.55.83.30.53 > 172.17.1.4.9141: [udp sum ok] 1184- q: A? www.paypal.com. 0/3/3 ns: paypal.com. NS ns1.isc-sns.net., paypal.com. NS ns2.isc-sns.com., paypal.com. NS ns3.isc-sns.info. ar: ns1.isc-sns.net. AAAA 2001:470:1a::1, ns1.isc-sns.net. A 72.52.71.1, ns2.isc-sns.com. A 38.103.2.1 (177) E....t..6./A.7S......5#..................www.paypal.com..................ns1.isc-sns.net..............ns2.isc-sns...............ns3.isc-sns.info..,.......... ..p.............,..........H4G..I..........&g.. (this goes on for a few more seconds) If you look carefully, you will see that the first 3-4 root servers did not respond at all. This wastes 7-8 seconds, until one of them responded. Do you think I have setup something wrong here? Interestingly, when I dig directly from the root servers that did not respond, the always respond very fast (showing the firewall/nat is not the issue here). E.g. dig www.paypal.com @192.35.51.30 works perfectly, consistently, and very fast. What do you think about this mystery?
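    As a point of comparison (and only a workaround, not a root-cause fix): recursion that stalls for ~20 s on the first lookup usually means the first few iterative queries, or their replies, are being dropped somewhere between this server and the root/TLD servers. dig +trace run on the box shows which hop stalls, and a forwarders block makes BIND behave the way it already does when pointed at 8.8.8.8 by hand. The ACL name below is a placeholder:

        # see which delegation step is slow when named iterates from this host
        dig +trace www.paypal.com

        # workaround in named.conf: forward first, fall back to full recursion
        options {
            recursion yes;
            allow-recursion { internal-servers; };   # placeholder ACL
            forwarders { 8.8.8.8; 8.8.4.4; };
            forward first;
        };

        # reload the configuration
        rndc reload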

  • OpenVPN (HideMyAss) client on Ubuntu: Route only HTTP traffic

    - by Andersmith
    I want to use HideMyAss VPN (hidemyass.com) on Ubuntu Linux to route only HTTP (ports 80 & 443) traffic to the HideMyAss VPN server, and leave all the other traffic (MySQL, SSH, etc.) alone. I'm running Ubuntu on AWS EC2 instances. The problem is that when I try and run the default HMA script, I suddenly can't SSH into the Ubuntu instance anymore and have to reboot it from the AWS console. I suspect the Ubuntu instance will also have trouble connecting to the RDS MySQL database, but haven't confirmed it. HMA uses OpenVPN like this: sudo openvpn client.cfg The client configuration file (client.cfg) looks like this: ############################################## # Sample client-side OpenVPN 2.0 config file # # for connecting to multi-client server. # # # # This configuration can be used by multiple # # clients, however each client should have # # its own cert and key files. # # # # On Windows, you might want to rename this # # file so it has a .ovpn extension # ############################################## # Specify that we are a client and that we # will be pulling certain config file directives # from the server. client auth-user-pass #management-query-passwords #management-hold # Disable management port for debugging port issues #management 127.0.0.1 13010 ping 5 ping-exit 30 # Use the same setting as you are using on # the server. # On most systems, the VPN will not function # unless you partially or fully disable # the firewall for the TUN/TAP interface. #;dev tap dev tun # Windows needs the TAP-Win32 adapter name # from the Network Connections panel # if you have more than one. On XP SP2, # you may need to disable the firewall # for the TAP adapter. ;dev-node MyTap # Are we connecting to a TCP or # UDP server? Use the same setting as # on the server. proto tcp ;proto udp # The hostname/IP and port of the server. # You can have multiple remote entries # to load balance between the servers. # All VPN Servers are added at the very end ;remote my-server-2 1194 # Choose a random host from the remote # list for load-balancing. Otherwise # try hosts in the order specified. # We order the hosts according to number of connections. # So no need to randomize the list # remote-random # Keep trying indefinitely to resolve the # host name of the OpenVPN server. Very useful # on machines which are not permanently connected # to the internet such as laptops. resolv-retry infinite # Most clients don't need to bind to # a specific local port number. nobind # Downgrade privileges after initialization (non-Windows only) ;user nobody ;group nobody # Try to preserve some state across restarts. persist-key persist-tun # If you are connecting through an # HTTP proxy to reach the actual OpenVPN # server, put the proxy server/IP and # port number here. See the man page # if your proxy server requires # authentication. ;http-proxy-retry # retry on connection failures ;http-proxy [proxy server] [proxy port #] # Wireless networks often produce a lot # of duplicate packets. Set this flag # to silence duplicate packet warnings. ;mute-replay-warnings # SSL/TLS parms. # See the server config file for more # description. It's best to use # a separate .crt/.key file pair # for each client. A single ca # file can be used for all clients. ca ./keys/ca.crt cert ./keys/hmauser.crt key ./keys/hmauser.key # Verify server certificate by checking # that the certicate has the nsCertType # field set to "server". 
This is an # important precaution to protect against # a potential attack discussed here: # http://openvpn.net/howto.html#mitm # # To use this feature, you will need to generate # your server certificates with the nsCertType # field set to "server". The build-key-server # script in the easy-rsa folder will do this. ;ns-cert-type server # If a tls-auth key is used on the server # then every client must also have the key. ;tls-auth ta.key 1 # Select a cryptographic cipher. # If the cipher option is used on the server # then you must also specify it here. ;cipher x # Enable compression on the VPN link. # Don't enable this unless it is also # enabled in the server config file. #comp-lzo # Set log file verbosity. verb 3 # Silence repeating messages ;mute 20 # Detect proxy auto matically #auto-proxy # Need this for Vista connection issue route-metric 1 # Get rid of the cached password warning #auth-nocache #show-net-up #dhcp-renew #dhcp-release #route-delay 0 120 # added to prevent MITM attack ns-cert-type server # # Remote servers added dynamically by the master server # DO NOT CHANGE below this line # remote-random remote 173.242.116.200 443 # 0 remote 38.121.77.74 443 # 0 # etc... remote 67.23.177.5 443 # 0 remote 46.19.136.130 443 # 0 remote 173.254.207.2 443 # 0 # END
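    One way to approach the "only ports 80/443 over the VPN" requirement, sketched under assumptions (tun0 is the HMA tunnel interface, routing table 100 is unused, and iptables/iproute2 are available on the EC2 instance): stop OpenVPN from replacing the default route, then policy-route only web traffic through the tunnel so SSH and MySQL stay on the normal interface:

        # in client.cfg: ignore routes pushed by the HMA server
        route-nopull

        # mark outgoing HTTP/HTTPS and send marked traffic via the tunnel
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,443 -j MARK --set-mark 1
        ip rule add fwmark 1 table 100
        ip route add default dev tun0 table 100
        ip route flush cache

        # source-NAT the marked traffic out of the tunnel address
        iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE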

  • CUDA: Memory copy to GPU 1 is slower in multi-GPU

    - by zenna
    My company has a setup of two GTX 295 cards, so a total of 4 GPUs in a server, and we have several servers. GPU 1 specifically was slow in comparison to GPU 0, 2 and 3, so I wrote a little speed test to help find the cause of the problem.

        //#include <stdio.h>
        //#include <stdlib.h>
        //#include <cuda_runtime.h>
        #include <iostream>
        #include <fstream>
        #include <sstream>
        #include <string>
        #include <cutil.h>

        __global__ void test_kernel(float *d_data)
        {
            int tid = blockDim.x*blockIdx.x + threadIdx.x;
            for (int i=0;i<10000;++i)
            {
                d_data[tid] = float(i*2.2);
                d_data[tid] += 3.3;
            }
        }

        int main(int argc, char* argv[])
        {
            int deviceCount;
            cudaGetDeviceCount(&deviceCount);
            int device = 0; //SELECT GPU HERE
            cudaSetDevice(device);

            cudaEvent_t start, stop;
            unsigned int num_vals = 200000000;
            float *h_data = new float[num_vals];
            for (int i=0;i<num_vals;++i)
            {
                h_data[i] = float(i);
            }

            float *d_data = NULL;
            float malloc_timer;
            cudaEventCreate(&start);
            cudaEventCreate(&stop);
            cudaEventRecord( start, 0 );
            cudaMemcpy(d_data, h_data, sizeof(float)*num_vals,cudaMemcpyHostToDevice);
            cudaMalloc((void**)&d_data, sizeof(float)*num_vals);
            cudaEventRecord( stop, 0 );
            cudaEventSynchronize( stop );
            cudaEventElapsedTime( &malloc_timer, start, stop );
            cudaEventDestroy( start );
            cudaEventDestroy( stop );

            float mem_timer;
            cudaEventCreate(&start);
            cudaEventCreate(&stop);
            cudaEventRecord( start, 0 );
            cudaMemcpy(d_data, h_data, sizeof(float)*num_vals,cudaMemcpyHostToDevice);
            cudaEventRecord( stop, 0 );
            cudaEventSynchronize( stop );
            cudaEventElapsedTime( &mem_timer, start, stop );
            cudaEventDestroy( start );
            cudaEventDestroy( stop );

            float kernel_timer;
            cudaEventCreate(&start);
            cudaEventCreate(&stop);
            cudaEventRecord( start, 0 );
            test_kernel<<<1000,256>>>(d_data);
            cudaEventRecord( stop, 0 );
            cudaEventSynchronize( stop );
            cudaEventElapsedTime( &kernel_timer, start, stop );
            cudaEventDestroy( start );
            cudaEventDestroy( stop );

            printf("cudaMalloc took %f ms\n",malloc_timer);
            printf("Copy to the GPU took %f ms\n",mem_timer);
            printf("Test Kernel took %f ms\n",kernel_timer);

            cudaMemcpy(h_data,d_data, sizeof(float)*num_vals,cudaMemcpyDeviceToHost);
            delete[] h_data;
            return 0;
        }

    The results are:

        GPU0
        cudaMalloc took 0.908640 ms
        Copy to the GPU took 296.058777 ms
        Test Kernel took 326.721283 ms

        GPU1
        cudaMalloc took 0.913568 ms
        Copy to the GPU took 663.182251 ms
        Test Kernel took 326.710785 ms

        GPU2
        cudaMalloc took 0.925600 ms
        Copy to the GPU took 296.915039 ms
        Test Kernel took 327.127930 ms

        GPU3
        cudaMalloc took 0.920416 ms
        Copy to the GPU took 296.968384 ms
        Test Kernel took 327.038696 ms

    As you can see, the cudaMemcpy to the GPU takes well over double the time for GPU1. This is consistent across all our servers; it is always GPU1 that is slow. Any ideas why this may be? All servers are running Windows XP.
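    One experiment that often helps localize this kind of per-GPU asymmetry (a sketch, not part of the original test) is to repeat the host-to-device timing from a pinned, page-locked buffer: pinned transfers take pageable-memory copies and host-side memory placement out of the picture, so if GPU 1 is still twice as slow, the PCIe path to that half of the card becomes the more likely suspect. The fragment assumes the same num_vals, h_data, d_data and device variables as in the question:

        float *h_pinned = NULL;
        cudaMallocHost((void**)&h_pinned, sizeof(float) * num_vals);   // page-locked host buffer
        memcpy(h_pinned, h_data, sizeof(float) * num_vals);

        float pinned_timer;
        cudaEvent_t pstart, pstop;
        cudaEventCreate(&pstart);
        cudaEventCreate(&pstop);
        cudaEventRecord(pstart, 0);
        cudaMemcpy(d_data, h_pinned, sizeof(float) * num_vals, cudaMemcpyHostToDevice);
        cudaEventRecord(pstop, 0);
        cudaEventSynchronize(pstop);
        cudaEventElapsedTime(&pinned_timer, pstart, pstop);
        printf("Pinned copy to GPU %d took %f ms\n", device, pinned_timer);

        cudaEventDestroy(pstart);
        cudaEventDestroy(pstop);
        cudaFreeHost(h_pinned);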

  • SqlBulkCopy is slow, doesn't utilize full network speed

    - by Alex
    Hi, for that past couple of weeks I have been creating generic script that is able to copy databases. The goal is to be able to specify any database on some server and copy it to some other location, and it should only copy the specified content. The exact content to be copied over is specified in a configuration file. This script is going to be used on some 10 different databases and run weekly. And in the end we are copying only about 3%-20% of databases which are as large as 500GB. I have been using the SMO assemblies to achieve this. This is my first time working with SMO and it took a while to create generic way to copy the schema objects, filegroups ...etc. (Actually helped find some bad stored procs). Overall I have a working script which is lacking on performance (and at times times out) and was hoping you guys would be able to help. When executing the WriteToServer command to copy large amount of data ( 6GB) it reaches my timeout period of 1hr. Here is the core code for copying table data. The script is written in PowerShell. $query = ("SELECT * FROM $selectedTable " + $global:selectiveTables.Get_Item($selectedTable)).Trim() Write-LogOutput "Copying $selectedTable : '$query'" $cmd = New-Object Data.SqlClient.SqlCommand -argumentList $query, $source $cmd.CommandTimeout = 120; $bulkData = ([Data.SqlClient.SqlBulkCopy]$destination) $bulkData.DestinationTableName = $selectedTable; $bulkData.BulkCopyTimeout = $global:tableCopyDataTimeout # = 3600 $reader = $cmd.ExecuteReader(); $bulkData.WriteToServer($reader); # Takes forever here on large tables The source and target databases are located on different servers so I kept track of the network speed as well. The network utilization never went over 1% which was quite surprising to me. But when I just transfer some large files between the servers, the network utilization spikes up to 10%. I have tried setting the $bulkData.BatchSize to 5000 but nothing really changed. Increasing the BulkCopyTimeout to an even greater amount would only solve the timeout. I really would like to know why the network is not being used fully. Anyone else had this problem? Any suggestions on networking or bulk copy will be appreciated. And please let me know if you need more information. Thanks. UPDATE I have tweaked several options that increase the performance of SqlBulkCopy, such as setting the transaction logging to simple and providing a table lock to SqlBulkCopy instead of the default row lock. Also some tables are better optimized for certain batch sizes. Overall, the duration of the copy was decreased by some 15%. And what we will do is execute the copy of each database simultaneously on different servers. But I am still having a timeout issue when copying one of the databases. When copying one of the larger databases, there is a table for which I consistently get the following exception: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. It is thrown about 16 after it starts copying the table which is no where near my BulkCopyTimeout. Even though I get the exception that table is fully copied in the end. Also, if I truncate that table and restart my process for that table only, the tables is copied over without any issues. But going through the process of copying that entire database fails always for that one table. I have tried executing the entire process and reseting the connection before copying that faulty table, but it still errored out. 
    My SqlBulkCopy and reader are closed after each table. Any suggestions as to what else could be causing the script to fail at that same point each time?
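    For reference, a hedged PowerShell sketch of the SqlBulkCopy options that usually matter here; $destinationConnectionString, $selectedTable and $reader are assumed to be set up the same way as in the original script, and a BulkCopyTimeout of 0 means "wait indefinitely":

        $options  = [Data.SqlClient.SqlBulkCopyOptions]::TableLock
        $bulkData = New-Object Data.SqlClient.SqlBulkCopy($destinationConnectionString, $options)
        $bulkData.DestinationTableName = $selectedTable
        $bulkData.BulkCopyTimeout = 0          # no timeout instead of dying on the one big table
        $bulkData.BatchSize       = 5000       # commit in chunks; tune per table
        $bulkData.WriteToServer($reader)
        $bulkData.Close()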

  • "Local transaction already has 1 non-XA Resource: cannot add more resources" error

    - by jthg
    After reading previous questions about this error, it seems like all of them conclude that you need to enable XA on all of the data sources. But: What if I don't want a distributed transaction? What would I do if I want to start transactions on two different databases at the same time, but commit the transaction on one database and roll back the transaction on the other? I'm wondering how my code actually initiated a distributed transaction. It looks to me like I'm starting completely separate transactions on each of the databases. Info about the application: The application is an EJB running on a Sun Java Application Server 9.1 I use something like the following spring context to set up the hibernate session factories: <bean id="dbADatasource" class="org.springframework.jndi.JndiObjectFactoryBean"> <property name="jndiName" value="jdbc/dbA"/> </bean> <bean id="dbASessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource" ref="dbADatasource" /> <property name="hibernateProperties"> [hibernate properties...] </property> <property name="mappingResources"> [mapping resources...] </property> </bean> <bean id="dbBDatasource" class="org.springframework.jndi.JndiObjectFactoryBean"> <property name="jndiName" value="jdbc/dbB"/> </bean> <bean id="dbBSessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource" ref="dbBDatasource" /> <property name="hibernateProperties"> [hibernate properties...] </property> <property name="mappingResources"> [mapping resources...] </property> </bean> Both of the JNDI resources are javax.sql.ConnectionPoolDatasoure's. They actually both point to the same connection pool, but we have two different JNDI resources because there's the possibility that the two groups of tables will move to different databases in the future. Then in code, I do: sessionA = dbASessionFactory.openSession(); sessionB = dbBSessionFactory.openSession(); sessionA.beginTransaction(); sessionB.beginTransaction(); The sessionB.beginTransaction() line produces the error in the title of this post - sometimes. I ran the app on two different sun application servers. On one runs it fine, the other throws the error. I don't see any difference in how the two servers are configured although they do connect to different, but equivalent databases. So the question is Why doesn't the above code start completely independent transactions? How can I force it to start independent transactions rather than a distributed transaction? What configuration could cause the difference in behavior between the two application servers? Thanks.
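    If independent local transactions are the goal, one hedged sketch is to give each session factory its own Hibernate transaction manager instead of letting the server enlist both connections in a single transaction; whether the container does that in the first place can depend on the EJB's transaction attributes, so that is worth checking too. The bean ids below are invented, while the sessionFactory refs come from the configuration in the question:

        <bean id="dbATxManager"
              class="org.springframework.orm.hibernate3.HibernateTransactionManager">
            <property name="sessionFactory" ref="dbASessionFactory" />
        </bean>

        <bean id="dbBTxManager"
              class="org.springframework.orm.hibernate3.HibernateTransactionManager">
            <property name="sessionFactory" ref="dbBSessionFactory" />
        </bean>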

  • Perl IO modules possibly causing issues in Net::DNS module

    - by Rich
    Hi! I’m porting some software that I wrote for a White Russian OpenWRT system to a new Kamikaze 8.09.1 OpenWRT system but I am having some serious issues that I’m hoping you can help me with. Old system Linux kernel 2.4.34 MIPSEL arch Perl 5.8.7 Net::DNS 0.48 IO 1.21 IO::Socket 1.28 IO::Socket::INET 1.28 New system Linux kernel 2.6.26.8 MIPS arch Perl 5.10.0 Net::DNS 0.66 IO 1.23_01 IO::Socket 1.30_01 IO::Socket::INET 1.31 First, let me provide some background information… I am trying to resolve my server (clearprobe.winbeam.com) from within my Perl program and see the following if I enable debugging in Net::DNS: resolve: Server 'clearprobe-ddns.winbeam.com' ;; query(clearprobe-ddns.winbeam.com) ;; setting up an AF_INET() family type UDP socket ;; send_udp(192.168.88.1:53) ;; send_udp(4.2.2.2:53) ;; send_udp(192.168.88.1:53) ;; send_udp(4.2.2.2:53) resolve: res->errorstring: query timed out Both of these servers resolve clearprobe.winbeam.com fine from the command line: root@cwb-2-11:~# echo “nameserver 192.168.88.1” > /etc/resolv.conf root@cwb-2-11:~# nslookup clearprobe-ddns.winbeam.com Server: 192.168.88.1 Address 1: 192.168.88.1 router Name: clearprobe-ddns.winbeam.com Address 1: 64.13.48.40 64-13-48-40.war.clearwire-dns.net root@cwb-2-11:~# echo “nameserver 4.2.2.2” > /etc/resolv.conf root@cwb-2-11:~# nslookup clearprobe-ddns.winbeam.com Server: 4.2.2.2 Address 1: 4.2.2.2 vnsc-bak.sys.gtei.net Name: clearprobe-ddns.winbeam.com Address 1: 64.13.48.40 64-13-48-40.war.clearwire-dns.net Using Perl’s call to the C gethostbyaddr() function works fine, but I need to do another lookup later in the software which requires that I specify the nameserver (clearprobe-ddns.winbeam.com is the authority for my internal DNS zone), hence my Net::DNS requirement. Now, here is the IO module-specific information: What I am seeing is that the reply is coming back from the nameserver (confirmed via tcpdump – I can send the captures if you’d like), but the UDP packets are sitting in the process’s UDP receive queue pending reception by Net::DNS (the approx 1752 bytes per response stay queued waiting for $sel-can_read()): root@cwb-2-11:~# netstat -una Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State udp 1752 0 0.0.0.0:52680 0.0.0.0:* root@cwb-2-11:~# netstat -una Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State udp 5256 0 0.0.0.0:52680 0.0.0.0:* If I force $sock[AF_INET]-recv($buf, $self-_packetsz) around line 803 of /usr/lib/perl5/5.10/Net/DNS/Resolver/Base.pm, instead of waiting for IO::Select’s can_read() function ( @ready = $sel-can_read($timeout)) to populate @ready, the response is received and processed. Any idea what could be causing this issue? In a possibly related matter, I noticed in another script that the following code fails in the same manner (network responses stay in the process’s TCP receive queue) with the new system: $sock = new IO::Socket::INET( PeerAddr => "$server", PeerPort => 37, Proto => 'tcp', Timeout => 5 ); Whereas the following code works: $sock = new IO::Socket::INET( PeerAddr => "$server", PeerPort => 37, Proto => 'tcp' ); I have looked through the NET::DNS code and don’t see a timeout passed for the UDP sockets, so I am not sure if that this is related or not. Please let me know if I can provide you with any further information in order to help diagnose this issue. Thanks! -Rich
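    A small self-contained test (a sketch, not taken from the ported software) that exercises IO::Select::can_read against a plain UDP DNS query can help decide whether the problem lives in Net::DNS or in select() on this Perl/OpenWRT build; the name and server are the ones from the question:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use IO::Socket::INET;
        use IO::Select;
        use Net::DNS::Packet;

        # build a query by hand and send it over a raw UDP socket
        my $query = Net::DNS::Packet->new('clearprobe-ddns.winbeam.com', 'A');
        my $sock  = IO::Socket::INET->new(
            PeerAddr => '4.2.2.2',
            PeerPort => 53,
            Proto    => 'udp',
        ) or die "socket: $!";
        $sock->send($query->data) or die "send: $!";

        # if data queues up but can_read(5) never fires, select() itself is the suspect
        my $sel = IO::Select->new($sock);
        if ($sel->can_read(5)) {
            my $buf = '';
            $sock->recv($buf, 4096);
            print 'got ', length($buf), " bytes via can_read\n";
        } else {
            print "can_read timed out; check the UDP receive queue with netstat -una\n";
        }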

  • Upgrade Your Existing BI Publisher 11g (11.1.1.3) to 11.1.1.5

    - by Kan Nishida
    It’s already more than a month now since BI Publisher 11.1.1.5 was released at beginning of May. Have you already tried out many of the great new features? If you are already running on the first version of BI Publisher 11g (11.1.1.3) you might wonder how to upgrade the existing BI Publisher to the 11.1.1.5 version. There are two ways to do this, one is ‘Out-Place’ and another is ‘In-Place’. The ‘Out-Place’ would be quite simple. Basically you will need to install the whole BI or just BI Publisher standalone R11.1.1.5 at a different location then you can switch the catalog to the existing one so that all the reports will be there in the new 11.1.1.5 environment. But sometimes things are not that simple, you might have some custom applications or configuration on the original environment and you want to keep all of them with the upgraded environment. For such scenarios, there is the ‘In-Place’ upgrade, which overrides on top of the original environment only the parts relevant for BI and BI Publisher, and that’s what I’m going to talk about today. Here is the basic steps of the ‘In-Place’ upgrade. Upgrade WebLogic Server to 10.3.5 Upgrade BI System to 11.1.1.5 Upgrade Database Schema Re-register BI Components Upgrade FMW (Fusion Middleware) Configuration Upgrade BI Catalog There is a section that talks about this upgrade from 11.1.1.3 to 11.1.1.5 as part of the overall upgrade document. But I hope my blog post summarized it and made it simple for you to cover only what’s necessary. Upgrade Document: http://download.oracle.com/docs/cd/E21764_01/bi.1111/e16452/bi_plan.htm#BABECJJH Before You Start Stop BI System and Backup I can’t emphasize enough, but before you start PLEASE make sure you take a backup of the existing environments first. You want to stop all WebLogic Servers, Node Manager, OPMN, and OPMN-managed system components that are part of your Oracle BI domains. If you’re on Windows you can do this by simply selecting ‘Stop BI Services’ menu. Then backup the whole system. Upgrade WebLogic Server to 10.3.5 Download WebLogic Server 10.3.5 Upgrade Installer With BI 11.1.1.3 installation your WebLogic Server (WLS) is 10.3.3 and you need to upgrade this to 10.3.5 before upgrading the BI part. In order to upgrade you will need this 10.3.5 upgrade version of WLS, which you can download from our support web site (https://support.oracle.com) You can find the detail information about the installation and the patch numbers for the WLS upgrade installer on this document. Just for your short cut, if you are running on Windows or Linux (x86) here is the patch number for your platform. Windows 32 bit: 12395517: Linux: 12395517 Upgrade WebLogic Server 1. After unzip the downloaded file, launch wls1035_upgrade_win32.exe if you’re on Windows. 2. Accept all the default values and keep ‘Next’ till end, and start the upgrade. Once the upgrade process completes you’ll see the following window. Now let’s move to the BI upgrade. Upgrade BI Platform to 11.1.1.5 with Software Only Install Download BI 11.1.1.5 You can download the 11.1.1.5 version from our OTN page for your evaluation or development. For the production use it’s recommended to download from eDelivery. 1. Launch the installer by double click ‘setup.exe’ (for Windows) 2. Select ‘Software Only Install’ option 3. Select your original Oracle Home where you installed BI 11.1.1.3. 4. Click ‘Install’ button to start the installation. And now the software part of the BI has been upgraded to 11.1.1.5. Now let’s move to the database schema upgrade. 
Upgrade Database Schema with Patch Assistant You need to upgrade the BIPLATFORM and MDS Schemas. You can use the Patch Assistant utility to do this, and here is an example assuming you’ve created the schemas with a ‘DEV’ prefix; otherwise change it to yours accordingly. Upgrade BIPLATFORM schema (if you created this schema with the DEV_ prefix) psa.bat -dbConnectString localhost:1521:orcl -dbaUserName sys -schemaUserName DEV_BIPLATFORM Upgrade MDS schema (if you created this schema with the DEV_ prefix) psa.bat -dbConnectString localhost:1521:orcl -dbaUserName sys -schemaUserName DEV_MDS Re-register BI System components Now you need to re-register your BI system components such as BI Server, BI Presentation Server, etc., with the Fusion Middleware system. You can do this by running the ‘upgradenonj2eeapp.bat (or .sh)’ command, which can be found at %ORACLE_HOME%/opmn/bin. Before you run it, you need to start the WLS Server and make sure your WLS environment is not locked. If it’s locked then you need to release the system from the Fusion Middleware console before you run the following command. Here is the syntax for the ‘upgradenonj2eeapp.bat (or .sh)’ command.  upgradenonj2eeapp.bat    -oracleInstance Instance_Home_Location    -adminHost WebLogic_Server_Host_Name    -adminPort administration_server_port_number    -adminUsername administration_server_user And here is an example: cd %BI_HOME%\opmn\bin upgradenonj2eeapp.bat -oracleInstance C:\biee11\instances\instance1 -adminHost localhost -adminPort 7001 -adminUsername weblogic Upgrade Fusion Middleware Configuration There are a couple of things on the Fusion Middleware side that need to be upgraded for the BI system to work. Here is a list of the components to upgrade. Upgrade Shared Library (JRF) Upgrade Fusion Middleware Security (OPSS) Upgrade Code Grants Upgrade OWSM Policy Repository Before moving forward, you need to stop the WebLogic Server. Here is an example. cd %MW_HOME%\user_projects\domains\bifoundation_domain\bin stopWebLogic.cmd And, let’s start with ‘Upgrade Shared Library (JRF)’. Upgrade Shared Library (JRF) You can use the upgradeJRF() WLST command to upgrade the shared libraries in your domain. Before you do this, you need to stop all running instances, Managed Servers, Administration Server, and Node Manager in the domain. Here is an example of the ‘upgradeJRF()’ command: cd %MW_HOME%\oracle_common\common\bin wlst.cmd upgradeJRF('C:/biee11/user_projects/domains/bifoundation_domain') Upgrade Fusion Middleware Security (OPSS) This step is to upgrade the Fusion Middleware security piece. You can use the ‘upgradeOpss()’ WLST command. Here is the syntax for the command. upgradeOpss(jpsConfig="existing_jps_config_file", jaznData="system_jazn_data_file") The existing ‘jps-config.xml’ file can be found under %DOMAIN_HOME%/config/fmwconfig/jps-config.xml and the ‘system_jazn_data_file’ can be found under %MW_HOME%/oracle_common/modules/oracle.jps_11.1.1/domain_config/system-jazn-data.xml. And here is an example: cd %MW_HOME%\oracle_common\common\bin wlst.cmd upgradeOpss(jpsConfig="c:/biee11/user_projects/domains/bifoundation_domain/config/fmwconfig/jps-config.xml", jaznData="c:/biee11/oracle_common/modules/oracle.jps_11.1.1/domain_config/system-jazn-data.xml") exit() Upgrade Code Grants for Oracle BI Domain And this is the last step of the Fusion Middleware platform upgrade task. You need to run the python script ‘bi-upgrade.py’ to configure the code grants necessary to ensure that SSL works correctly for Oracle BI. 
However, even if you don’t use SSL, you still need to run this script. And if you have multiple BI domains (Enterprise deployment) then you need to run this on each domain. Here is an example: cd %MW_HOME%\oracle_common\common\bin wlst c:\biee11\Oracle_BI1\bin\bi-upgrade.py --bioraclehome c:\biee11\Oracle_BI1 --domainhome c:\biee11\user_projects\domains\bifoundation_domain Upgrade OWSM Policy Repository This is to upgrade the OWSM (Oracle Web Service Manager) policy repository; you can use the WLST command ‘upgradeWSMPolicyRepository()’. In order to run this command you need to have your WebLogic Server up-and-running. Here is an example. cd %MW_HOME%\user_projects\domains\bifoundation_domain\bin startWebLogic.cmd cd %MW_HOME%\oracle_common\common\bin wlst.cmd connect ('weblogic','welcome1','t3://localhost:7001') upgradeWSMPolicyRepository() exit() Upgrade BI Catalogs This step is required only when you have your BI Publisher integrated with BIEE. If your BI Publisher is deployed standalone then you don’t need to follow this step. Now, finally, you can upgrade the BI catalog. This won’t upgrade your BI Publisher reports themselves; it just upgrades some attribute information inside the catalog. Before you do this upgrade, make sure the BI system components are not running. You can check the status with the command below. opmnctl status You can do the upgrade by updating the configuration file ‘instanceconfig.xml’, which can be found at %BI_HOME%\instances\instance1\config\coreapplication_obips1, and changing the value of ‘UpgradeAndExit’ to ‘true’. Here is an example: <ps:Catalog xmlns:ps="oracle.bi.presentation.services/config/v1.1"> <ps:UpgradeAndExit>true</ps:UpgradeAndExit> </ps:Catalog> After you make the change and save the file, you need to start the BI Presentation Server. This time you want to start only the BI Presentation Server instead of starting all the servers. You can use ‘opmnctl’ to do so, and here is an example. cd %ORACLE_INSTANCE%\bin opmnctl startproc ias-component=coreapplication_obips1 This upgrades your BI Catalog to 11.1.1.5. After the catalog is updated, you can stop the BI Presentation Server so that you can modify the instanceconfig.xml file again to revert the UpgradeAndExit value back to ‘false’. Start Exploring BI Publisher 11.1.1.5 After all the above steps, you can start all the BI Services and access the same URL; now you have BI Publisher and/or BI 11.1.1.5 in your hands. Have fun exploring all the new features of R11.1.1.5!
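Not part of the original post, but as a minimal sketch of that final restart step on Windows (paths assume the default locations used in the examples above; on Linux the equivalents are startWebLogic.sh and ./opmnctl startall):
cd %MW_HOME%\user_projects\domains\bifoundation_domain\bin
startWebLogic.cmd
cd %ORACLE_INSTANCE%\bin
opmnctl startall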

    Read the article

  • top tweets WebLogic Partner Community – November 2011

    - by JuergenKress
    Send us your tweets @wlscommunity #WebLogicCommunity and follow us on twitter http://twitter.com/wlscommunity glassfish GlassFish Marek’s JAX-RS 2.0 content from Devoxx 2011 – bit.ly/sp2NJO chriscmuir chriscmuir New blog post: ADF bug: missing af:column borders in af:table for IE7 – t.co/81np2jug chriscmuir chriscmuir Reading: Oracle’s ADF Rich Client User Interface (RCUI) Guidelines – oracle.com/webfolder/ux/m… netbeans NetBeans Team Bottlenecks be gone! #Java Performance Tuning workshop in Munich w Kirk Pepperdine, Nov 29-Dec 2: ow.ly/7Akh5 OracleBlogs OracleBlogs Creating ADF Faces Comamnd Button at Runtime ow.ly/1fM9dE alexismp Alexis MP blogged "GlassFish Back from Devoxx 2011, Mature Java EE 6 and EE 7 well on its way" – bit.ly/rP8LV0 JDeveloper JDeveloper & ADF Usage of jQuery in ADF dlvr.it/x3t84 20 hours ago Favorite Retweet Reply OTNArchBeat OTNArchBeat Webcast: Introducing Oracle WebLogic Server 12c: Developer Deep Dive – Dec 1 – 11am PT / 2pm ET bit.ly/t61W4G oraclepartners ORCL PartnerNetwork Brand new Oracle WebLogic 12c will launch on December 1, 10AM PT with a global Webcast highlighting salient… t.co/aflQQ3IX OracleBlogs OracleBlogs JDeveloper and ADF at UKOUG t.co/2CQTiB9n fnimphiu Frank Nimphius Attending UKOUG? All ADF sessions at a glance: t.co/TcMNTMXp 21 Nov Favorite Retweet Reply JDeveloper JDeveloper & ADF Free Webinar ‘ADF Task Flows for Beginners’, information and registration t.co/66jXnGgo via javafx4you javafx4you Java Developer Workshop #2 – Dec 1, 2011 @ Oracle Aoyama center in Tokyo t.co/8p9q3W2B AMIS_Services AMIS Services #vacature #Oracle #ADF ontwikkelaars. bit.ly/AMISADF Gun jezelf een nieuwe uitdaging? Meer op: dld.bz/azZ5N OracleBlogs OracleBlogs Launch Invitation: Introducing Oracle WebLogic Server 12c t.co/bRxCKwAk fnimphiu Frank Nimphius The brand new WebLogic 12c will be released on December 1st 2011 !!! Register for online launch event t.co/pPScg4Xh glassfish GlassFish Announcing Oracle WebLogic 12c – t.co/qh8TdFEl AdamBien Adam Bien Sun Coding Conventions–The Only Standard (Stop Inventing): Code written according to the Sun Coding Conventions… t.co/qaUWp5Mz wlscommunity WebLogic Community Launch Invitation: Introducing Oracle WebLogic Server 12c wp.me/p1LMIb-4y andrejusb Andrejus Baranovskis Andrejus Baranovskis’s Blog: Custom Exception Registration for ADF BC EO Attribute fb.me/1m6nXQD52 MNEMONIC01 Michel Schildmeijer Blog by Michel Schildmeijer: "Oracle WebLogic 12c has been announced" bit.ly/vk6WQL glassfish GlassFish Tab Sweep – Coherence, SBT for GlassFish, OSGi in question, Java EE plugins, … t.co/tVIL95lj OracleBlogs OracleBlogs JavaFX 2.0 at Devoxx 2011 ow.ly/1fJ5iT JDeveloper JDeveloper & ADF Experimenting with ADF BC Application Module Pool Tuning dlvr.it/wjLC1 OracleWebLogic Oracle WebLogic Brand New #WebLogic 12c Launch Event, Dec 1 10am PT. Hasan Rizvi, SVP Fusion Middleware. Developer session. bit.ly/weblogic12clau… JDeveloper JDeveloper & ADF PopUp and Esc/Cancel operations. ADF 11g dlvr.it/whrmC JDeveloper JDeveloper & ADF BPM Workspace: issue loading ADF task flows t.co/vk1gKPx5 OpenJDK OpenJDK Kelly O’Hair — OpenJDK B24 Available : t.co/1bFws6Nw JDeveloper JDeveloper & ADF Oracle ADF setting Task flow to use same page definition file of caller page t.co/9k6UIoYZ JDeveloper JDeveloper & ADF Master Detail Data presentation and CRUD Operations. Detail records in an Editable Popup. 
ADF 11g t.co/H8uudR0Y JDeveloper JDeveloper & ADF Entity Attribute Validation Rule (Business Rule) based on Master View Object Attribute Example ADF 11g t.co/1agxEQcZ oracletechnet Justin Kestelyn Webcast: Oracle WebLogic Server 12c Launch/Developer Deep-Dive (Dec. 1) t.co/OVBdGKzC JDeveloper JDeveloper & ADF How to render different node icons for different tree levels dlvr.it/wY2jL JDeveloper JDeveloper & ADF Query Component with ‘dynamic’ view criteria dlvr.it/wXlF1 JDeveloper JDeveloper & ADF How to play Flash .swf file in Oracle ADF application t.co/zaSONWAH Devoxx Devoxx Duke at the #Devoxx 2011 Noxx Party! pic.twitter.com/bVJWyu1Z brhubart Bob Rhubart Adam Leftik: JavaEE adoption continues to increase, reaching 40+ million downloads this year. #qconsf11 JDeveloper JDeveloper & ADF Free #ODTUG Seminar – #ADF Task Flows for Beginners – sign up today. www3.gotomeeting.com/register/13372… java Java New Project: OpenJFX j.mp/tI4k3s #javafx #openjdk #devoxx << JavaFX is open source! /via frankmunz Frank Munz WebLogic 12c launch event Dec 1st. t.co/jQKinBqN brhubart Bob Rhubart Spring to Java EE Migration | David Heffelfinger feedly.com/k/td8ccG odtug ODTUG Mark your calendars and register for our upcoming webinars: bit.ly/dWKG1C ADF Task Flows & Measuring Scalability & Performance w/TCP myfear Markus Eisele Anybody willing to take this question? Using #JavaMail with #Weblogic Server bit.ly/stJOET AMIS_Services AMIS Services 20-22 december #training #Oracle JHeadstart #11g, productief ontwikkelen met ADF. Schrijf je in op: amis.nl/trainingen/ora… AdamBien Adam Bien Stress Testing Java EE 6 Applications – Free Article In Free Java Magazine: In the November / December 2011 issu… bit.ly/vmzKkc java Java New Tech Article: Spring to #JavaEE Migration t.co/0EvdHNxb OracleBlogs OracleBlogs WebLogic Java record SPARC T4-4 Servers Set World Record on SPECjEnterprise2010 t.co/Eu1b6ZE0 OracleBlogs OracleBlogs What Is JavaFX? ow.ly/1frb6I OTNArchBeat OTNArchBeat The openJDK Windows Binary Download | Adam Bien ow.ly/7fRiG wlscommunity WebLogic Community WebLogic – Java record – SPARC T4-4 Servers Set World Record on SPECjEnterprise2010 glassfish GlassFish "youtube.com/java" blogs.oracle.com/theaquarium/en… OTNArchBeat OTNArchBeat Beta Testing Concludes: 1Z1-102 – "Oracle WebLogic Server 11g: System Administration I" (Oracle Certification) ow.ly/7fJCl wlscommunity WebLogic Community A deep dive in Oracle WebLogic! @ Contribute – November 29th, 2011 Kontich Belgium wp.me/p1LMIb-4u glassfish GlassFish Gartner’s Latest Enterprise Application Server Magic Quadrant – Oracle’s leadership t.co/aYDqipD8 OpenJDK OpenJDK Terrence Barr – Open sourcing of JavaFX: OpenJFX Project proposed – bit.ly/uKVnEl OpenJDK OpenJDK Maurizio Cimadamore – Testing overload resolution: bit.ly/vgXAbQ java Java Java User Groups Roundup, November 2011 : t.co/hea6vVnk /via @robilad << in German JavaSpotlight The Java Spotlight Java Spotlight Episode 54: Stuart Marks on the Coinification of JDK7 goo.gl/fb/3UXoM OTNArchBeat OTNArchBeat Article Series: Migrating Spring to Java EE 6 | Arun Gupta bit.ly/twUJtz glassfish GlassFish New Java EE 6 Hands-On lab, Devoxx-approved! bit.ly/vup5uE java Java Brian Goetz’s enthusiasm for Java is palpable! #devoxx interview adf_emg ADF EMG "ADF testing with a mock framework" – what is a mock framework? Visit the forum and see: groups.google.com/forum/#!topic/… java Java Taping a bunch of interviews today with Java experts at #devoxx. View on Parleys.com tomorrow. 
glassfish GlassFish New screencast to configure and run a cross-machine cluster using GlassFish 3.1.1 in < 7 mins faissalb.blogspot.com/2011/11/glassf… (via @bfaissal) glassfish GlassFish Oracle Contributor Agreements – New Home! bit.ly/tD2eLo OTNArchBeat OTNArchBeat Java Magazine – by and for the Java Community- inaugural issue bit.ly/tTv8UD OTNArchBeat OTNArchBeat The Heroes of Java: Michael Hüttermann | @MyFear bit.ly/rYYOFe javafx4you javafx4you Development with #JavaFX on #Linux j.mp/uOpe69 #not_for_the_faint_of_heart java Java Contribute Technical Questions for Java Experts at #devoxx bit.ly/up2cN0 netbeans NetBeans Team A simple REST service using #NetBeans 7, #Java Servlet, and #JAXB: t.co/pKkufsD8 AdamBien Adam Bien The most beautiful, and portable slide of the whole #jaxcon for "Die Hard Java EE 6"session checked-in: kenai.com/projects/javae… jaxlondon JAX London Mark Little’s (@nmcl) excellent keynote from #jaxlondon ‘Middleware Everywhere…’ is available in full – t.co/8vBmtDJ1 AdamBien Adam Bien Calculator sample from "Die Hard Java EE 6" #jaxcon session checked-in: t.co/0UqaULfg OTNArchBeat OTNArchBeat ADF Faces – a logic bomb in the order of bean instantiations | @ChrisCMuir bit.ly/vjqRaZ OracleBlogs OracleBlogs ODI 11g y JMS Queue de Weblogic ow.ly/1fzfQJ frankmunz Frank Munz Which WebLogic book do you recommend? Review of S. Alapati’s WebLogic 11g Administration Handbook. bit.ly/rP0RtW JDeveloper JDeveloper & ADF PageFlowScope with Unbounded Task Flows: the magic sauce for multi-browser-tab support in JDeveloper ADF applications dlvr.it/vNFgn OracleBlogs OracleBlogs 3 New ADF Insider Essential training videos published. ow.ly/1fz94q OracleBlogs OracleBlogs Weblogic Server 11gR1 PS2: Administration Essentials book and eBook t.co/ykzwIaqs OracleBlogs OracleBlogs Specialized Partners Only! New Service to Promote Your Events t.co/qTgyEpY4 wlscommunity WebLogic Community Oracle Weblogic Server 11gR1 PS2: Administration Essentials book and eBook andrejusb Andrejus Baranovskis Andrejus Baranovskis’s Blog: Stress Testing Oracle ADF BC Applications – Intern… andrejusb.blogspot.com/2011/11/stress… OracleBlogs OracleBlogs Frank Nimphius presenting a full day of Oracle ADF in Switzerland ow.ly/1fxU78 java Java #JavaEE and #GlassFish: #JavaOne11 Slides, Demos, Replays, Hands-on Labs t.co/tLM0ehrD OracleBlogs OracleBlogs weblogic.security.SecurityInitializationException: Authentication for user weblogic denied ow.ly/1fxmiu glassfish GlassFish The Last Migration – GlassFish Wiki : t.co/Dc5FT1SJ OTNArchBeat OTNArchBeat A Successful Year of @MiddlewareMagic t.co/amcGGTTk OracleWebLogic Oracle WebLogic Unbeatable Performance for your Cloud Applications with Exalogic, #OracleCoherence and #WebLogic. ow.ly/7lYKm OTNArchBeat OTNArchBeat Stress Testing Oracle ADF BC Applications – Passivation and Activation | @AndrejusB bit.ly/sASssL OTNArchBeat OTNArchBeat Review: "Oracle Weblogic Server 11gR1 PS2: Administration Essentials" by Michel Schildmeijer | @MyFear t.co/ll6ra0J9 OTNArchBeat OTNArchBeat GlassFish 3.1.2 themes and features | The Aquarium bit.ly/vVqr9r Andre_van_Dalen Andre van Dalen Masterclass: Advanced Oracle ADF 11g lnkd.in/M_45Pi AdamBien Adam Bien The "lunch" edition of RentACar is pushed into: kenai.com/projects/javae… #wjax AdamBien Adam Bien In munich, room munich at #wjax. Welcome to #javaee workshop. Gather your questions. 
15 minutes to go lucasjellema Lucas Jellema Review by Markus of Michel’s book: t.co/41U9wvOb In short: valuable for novice WLS users, maybe not so much for die-hard WLS admin. biemond Edwin Biemond “@myfear: [blog] #Review: "#Weblogic Server 11gR1 PS2: Administration Essentials" t.co/LsODcb3e” got the same conclusion on amazon glassfish GlassFish Practical advice for deploying Lift apps to GlassFish: bit.ly/t3KUml glassfish GlassFish The unbearable lightness of GlassFish t.co/v9307SEJ javafx4you javafx4you Building Java EE applications in JavaFX: JavaFX 2.0, FXML and Spring j.mp/tiMDUh andrejusb Andrejus Baranovskis Andrejus Baranovskis’s Blog: Stress Testing Oracle ADF BC Applications – Passiv… andrejusb.blogspot.com/2011/11/stress… wlscommunity WebLogic Community “@AMIS_Services: Follow @amis_services To Win a copy of SOA Suite 11g Handbook by @lucasjellema dld.bz/axD22 pls RT” excellent book! glassfish GlassFish GlassFish 3.1.2 themes and features bit.ly/uEc6uZ biemond Edwin Biemond Weblogic pre-sales exam was hard, you really need to know the versions , upgrade path and have a score above 80% monkchips James Governor The Rise and Fall and Rise of Java. JAX 2011 london keynote. how big data and the web are floating the boat. slidesha.re/u3Kzlo glassfish GlassFish Tab Sweep – Jersey, Hudson, GlassFish Hosting, GC’s compared, Spring to JavaEE, Modularity, … bit.ly/u9Cc30 oracletechnet Justin Kestelyn Oracle Tuxedo: A renewed acquaintance t.co/gp0mmf20 OTNArchBeat OTNArchBeat Oracle Enterprise Pack for Eclipse, OEPE 11.1.1.8 bit.ly/tC3eKp OracleBlogs OracleBlogs NetBeans HTML Editor and Groovy Editor in a Multiview Component (Part 2) ow.ly/1ftCeI myfear Markus Eisele [blog] #Oracle 2008 – 2011 in Gartners Magic Quadrant for Enterprise Application Servers t.co/2Bs1vgMZ myfear Markus Eisele [blog] #EclipseCon Europe – Java 7 in the Enterprise goo.gl/fb/r80df #ece2011 #java7 javafx4you javafx4you JavaFX 2.0 for Mac build b07 (developer preview) is available for download j.mp/vSwmBP Enjoy! #JavaFX #Mac OracleBlogs OracleBlogs A deep dive in Oracle WebLogic! @ Contribute November 29th, 2011 Kontich Belgium ow.ly/1fsEZs arungupta Arun Gupta #JavaEE7 slides from #jaxlondon and #jfall11 now available: slidesha.re/sh4iFq AdamBien Adam Bien Just checked-in the results of the #jaxlondon community night (somehow beer related): kenai.com/projects/javae… glassfish GlassFish GlassFish Podcast Episode #080 – User Stories, Part 3: Adam Bien and Sean Comerford (ESPN) blogs.oracle.com/glassfishpodca… glassfish GlassFish Story: t.co/jQPqihJb using GlassFish blogs.oracle.com/stories/entry/… "3000+ requests/sec" and more enterprisejava Java EE Mentions New blog post WebLogic deployment status checks for CI wp.me/pOOSs-F #weblogic #continuousintegration /vi… bit.ly/uZz0fk The become a member in the WebLogic Partner Community please first login at http://partner.oracle.com and then visit: http://www.oracle.com/partners/goto/wls-emea Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: twitter,WebLogic,WebLogic Community,OPN,Oracle,Jürgen Kress

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #033

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my favorite articles and have listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2007 Spatial Database Definition and Research Documents Here is the definition from Wikipedia about spatial databases: A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. Select Only Date Part From DateTime – Best Practice A very common question which I receive is how to get only the Date or Time part from a datetime value. In this blog post I explain the same in very simple words. T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table I have received a few emails and comments about my post SQL SERVER – T-SQL Paging Query Technique Comparison – SQL 2000 vs SQL 2005. The main question was whether this can be done using a CTE. Absolutely! What about performance? It is identical! Please refer to the above-mentioned article for the history of paging. SQL SERVER – Cannot resolve collation conflict for equal to operation One of the very first errors I ever encountered in my career was this conflict. I have blogged about it and have realized that many others like me are facing this error. LEN and DATALENGTH of NULL Simple Example Here is the question for you: what is the LEN of a NULL value? Well, it is very easy – just read the blog. Recovery Models and Selection A very simple and easy explanation of the Database Backup Recovery Model and how to select the best option for you. Explanation of SQL SERVER Hash Join A hash join gives the best performance when two or more tables are joined and at least one of them has no index or is not sorted. It is also expected that the smaller of the tables can be read into memory completely (though this is not necessary). Easy Sequence of SELECT FROM JOIN WHERE GROUP BY HAVING ORDER BY SELECT yourcolumns FROM tablenames JOIN tablenames WHERE condition GROUP BY yourcolumns HAVING aggregatecolumn condition ORDER BY yourcolumns NorthWind Database or AdventureWorks Database – Sample Databases In this blog post we learn how to install the Northwind database. I also shared the source where one can download this database, as it is used in many examples in the MSDN help files. sp_HelpText for sp_HelpText – Puzzle A simple quick puzzle – do you know the answer? If not, go ahead and read the blog. 2008 SQL SERVER – 2008 – Step By Step Installation Guide With Images When SQL Server 2008 was newly introduced lots of people had no clue how to install SQL Server 2008, and the number of questions I used to receive was huge. I wrote this blog post with the spirit that it would help all the newbies install SQL Server 2008 with the help of images. Even today this blog post has been the bible for people who are confused by SQL Server installation. Inline Variable Assignment I loved this feature. I have always wanted this feature to be present in SQL Server. The last time I met developers from the Microsoft SQL Server team, I talked about this feature. I think this feature saves some time and makes the code more readable. 
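A minimal T-SQL illustration of the inline variable assignment feature mentioned above (my own example, not from the original article; works on SQL Server 2008 and later):
DECLARE @StartDate DATETIME = GETDATE();   -- declare and assign in a single statement
DECLARE @Counter INT = 1;
SET @Counter += 10;                        -- compound assignment, also new in SQL Server 2008
SELECT @StartDate AS StartDate, @Counter AS Counter;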
Introduction to Policy Management – Enforcing Rules on SQL Server If our company policy is to create all Stored Procedures with the prefix ‘usp’, then developers should simply be prevented from creating Stored Procedures with any other prefix. Let us see a small tutorial on how to create conditions and a policy which will prevent any future SP from being created with any other prefix. 2009 Performance Counters from System Views – By Kevin Mckenna Many of you are not aware of the fact that access to performance information is readily available in SQL Server, without querying performance counters using a custom application or via perfmon. Till now, this fact has remained undisclosed, but through this post I would like to explain how you can easily access SQL Server performance counter information. Without putting in much effort you will come across the system view sys.dm_os_performance_counters. As the name suggests, this provides easy access to the SQL Server performance counter information that is passed on to perfmon, but you can get at it via T-SQL (a minimal example follows at the end of this section). Customize Toolbar – Remove Debug Button from Toolbar I was fond of the SQL Server Debugger feature in SQL Server 2000. To my utter disappointment, this feature was withdrawn from SQL Server 2005. The debugger button looks like a play button and is used to run Visual Studio debugging commands. For this reason, it becomes quite infuriating for developers who work in both Visual Studio and SSMS. Let us now see how we can remove the debugging button from SQL Server Management Studio. Effect of Normalization on Index and Performance A very interesting conversation which started on twitter. If you want to read just one link, this is the one I encourage you to read. SSMS Feature – Multi-server Queries Using SQL Server Management Studio (SSMS), DBAs can now query multiple servers from one window. It is quite common for DBAs with a large number of servers to maintain and gather information from multiple SQL Servers and create reports. This feature is a blessing for DBAs, as they can now assemble all the information instantaneously without going anywhere. Query Optimizer Hint ROBUST PLAN – Question to You “ROBUST PLAN” is a kind of query hint which works quite differently from other hints. It does not improve joins or force any indexes to be used; it just makes sure that a query does not crash due to an over-the-limit row size. Let me elaborate upon it in the blog post. 2010 Do you really know the difference between the various date functions available in SQL Server 2012? Here is a three part story where we explored the same with examples: Fastest Way to Restore the Database Difference Between DATETIME and DATETIME2 Difference Between DATETIME and DATETIME2 – WITH GETDATE Shrinking NDF and MDF Files – Readers’ Opinion Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, Imran’s comment was written while keeping in mind only the process of how the Shrinking Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran’s comments, as there are a few corrections. 
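As promised above, a minimal example (mine, not from the original posts) of querying the performance counter view via T-SQL:
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Buffer cache hit ratio';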
2011 Solution – Puzzle – SELECT * vs SELECT COUNT(*) This is very interesting question and I am very confident that not every one knows the answer to this question. Let me ask you again – Which will be faster SELECT* or SELECT COUNT (*) or do you think this is apples and oranges comparison. 2012 Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage In SQL Server 2012 there are a few enhancements with regards to SQL Server Resource Governor. One of the enhancement is how the resources are allocated. Let me explain you with examples. Let us understand the entire discussion with the help of three different examples. Finding Size of a Columnstore Index Using DMVs One of the very common question I often see is need of the list of columnstore index along with their size and corresponding table name. I quickly re-wrote a script using DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there will be advanced script to retrieve details related to components associated with the columnstore index. However, I believe following script is sufficient to start getting an idea of columnstore index size. Developer Training Resources and Summary Roundup Developer Training - Importance and Significance - Part 1 In this part we discussed the importance of training in the real world. The most important and valuable resource any company is its employee. Employees who have been well-trained will be better at their jobs and produce a better product.  An employee who is well trained obviously knows more about their job and all the technical aspects. I have a very high opinion about training employees and it is the most important task. Developer Training – Employee Morals and Ethics – Part 2 In this part we discussed the most crucial components of training. Often employees are expecting the company to pay for their training and the company expresses no interest in training the employee. Quite often training expenses are the real issue for both the employee and employer. Developer Training – Difficult Questions and Alternative Perspective - Part 3 This part was the most difficult to write as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers when not receiving chance for training think about leaving the organization. Developer Training – Various Options for Developer Training – Part 4 In this part I tried to explore a few methods and options for training. The generic feedback I received on this blog post was short and I should have explored each of the subject of the training in details. I believe there are two big buckets of training 1) Instructor Lead Training and 2) Self Lead Training. Developer Training – A Conclusive Summary- Part 5 There is no better motivation than a personal desire to learn new technology. Honestly there is nothing more personal learning. That “change is the only constant” and “adapt & overcome” are the essential lessons of life. One cannot stop the learning and resist the change. In the IT industry “ego of knowing all” and the “resistance to change” are the most challenging issues. A Quick Look at Logging and Ideas around Logging Question: What is the first thing comes to your mind when you hear the word “Logging”? Strange enough I got a different answer every single time. Let me just list what answer I got from my friends. Let us go over them one by one. 
Beginning Performance Tuning with SQL Server Execution Plan Solution of Puzzle – Swap Value of Column Without Case Statement Earlier this week I asked a question where I asked how to Swap Values of the column without using CASE Statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I have proposed 3 different solutions in the blog posts itself. I had requested the help of the community to come up with alternate solutions and honestly I am stunned and amazed by the qualified entries. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data..) an English setter could be seen as the link to the right data. Data, Data, Data: we are living in a world where data technology, based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on, has a predominant role in our lives. More and more technologies are used to analyze/track our behavior and try to detect patterns, to offer us "the best/right user experience", from the Google Ad services to Telco companies or large consumer sites (like Amazon:) ). The more we use all these technologies, the more we generate data, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat/analyze/understand the trends and offer new services to the users. Some of these "data feeds" are raw, unstructured data, and cannot be processed effectively by normal SQL queries. Large scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on larger clusters of rather inexpensive servers. Several Oracle products use the distributed / aggregation pattern for data calculation (Coherence, NoSQL, TimesTen), so once you are familiar with one of these technologies, let's say with Coherence aggregators, you will find the whole Hadoop, MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters. In this paper, a "lab like" implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single node Apache hadoop-1.2.1 HDFS cluster, using the SQL connector for HDFS. The whole setup is fairly simple: Install on a Linux x64 server (or VirtualBox appliance) an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1. Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen. Check the java version of your Linux server with the command: java -version java version "1.7.0_40" Java(TM) SE Runtime Environment (build 1.7.0_40-b43) Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode) Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1 Modify your .bash_profile export HADOOP_HOME=/u01/hadoop-1.2.1 export PATH=$PATH:$HADOOP_HOME/bin export HIVE_HOME=/u01/hive-0.11.0 export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin (also see my sample .bash_profile) Set up ssh trust for the Hadoop process; this is a mandatory step, and in our case we have to establish a "local trust" as we are using a single node configuration: copy the new public keys to the list of authorized keys, then connect and test the ssh setup to your localhost (a sketch follows below). We will run a "pseudo-Hadoop cluster" in what is called "local standalone mode": all the Hadoop Java components run in one Java process, which is enough for our demo purposes. 
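The original post relies on screenshots for the ssh trust, configuration, format and start steps; here is a rough sketch of what they typically look like for a single-node Hadoop 1.x setup (the exact commands, paths and the localhost:19000 port are assumptions inferred from the rest of the post, so adjust them to your environment):
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh localhost    # should now connect without a password prompt
A matching conf/core-site.xml fragment, declaring the HDFS name node and the local directory used as the "mount point":
<configuration>
  <property><name>fs.default.name</name><value>hdfs://localhost:19000</value></property>
  <property><name>hadoop.tmp.dir</name><value>/u01/hadoop-1.2.1/data</value></property>
</configuration>
Formatting and starting HDFS then boils down to:
hadoop namenode -format
start-dfs.sh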
We need to "fine tune" some Hadoop configuration files: go to $HADOOP_HOME/conf and modify the files core-site.xml, hdfs-site.xml and mapred-site.xml. Check that the hadoop binaries are referenced correctly from the command line by executing: hadoop -version As Hadoop is managing our "clustered HDFS" file system we have to create "the mount point" and format it; the mount point is declared in core-site.xml (see the sketch above). The layout under /u01/hadoop-1.2.1/data will be created and used by the other hadoop components (MapReduce = /mapred/..., HDFS is using the /dfs/... layout structure). Format the HDFS hadoop file system, then start the Java components for the HDFS system. As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configuration. Once our HDFS Hadoop setup is done you can use the HDFS file system to store data (big data : )), and plug it back and forth to Oracle Databases by means of the Big Data Connectors (which is the next configuration step). You can create / use a Hive db, but in our case we will make a simple integration of "raw data", through the creation of an External Table to a local Oracle instance (on the same Linux box we run the Hadoop HDFS one node cluster and one Oracle DB). Download some public "big data"; I use the site: http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :). Here is the data layout of my example file: Download the Big Data Connector from OTN (oraosch-2.2.0.zip) and unzip it to your local file system. Modify your environment in order to access the connector libraries, and make the following test: [oracle@dg1 bin]$./hdfs_stream Usage: hdfs_stream locationFile [oracle@dg1 bin]$ Load the data to the Hadoop hdfs file system: hadoop fs -mkdir bgtest_data hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$ hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt Check the content of the HDFS with the browser UI. Start the Oracle database, and run the following script in order to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own db id; replace it with yours accordingly): #!/bin/bash export ORAENV_ASK=NO export ORACLE_SID=dg1 . 
oraenv sqlplus /nolog <<EOF CONNECT / AS sysdba; CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin'; CREATE USER BGUSER IDENTIFIED BY oracle; GRANT CREATE SESSION, CREATE TABLE TO BGUSER; GRANT EXECUTE ON sys.utl_file TO BGUSER; GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER; CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs'; GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER; CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data'; GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER; EOF Put the following in a file named t3.sh and make it executable, hadoop jar $OSCH_HOME/jlib/orahdfs.jar \ oracle.hadoop.exttab.ExternalTable \ -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \ -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \ -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \ -D oracle.hadoop.exttab.columnCount=7 \ -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \ -D oracle.hadoop.connection.user=BGUSER \ -D oracle.hadoop.exttab.printStackTrace=true \ -createTable --noexecute then test the creation fo the external table with it: [oracle@dg1 samples]$ ./t3.sh ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory Oracle SQL Connector for HDFS Release 2.2.0 - Production Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved. Enter Database Password:] The create table command was not executed. The following table would be created. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081035-74-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files would be created. osch-20131022081035-74-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results: The create table command succeeded. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081719-3239-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files were created. 
osch-20131022081719-3239-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt This is the view from the SQL Developer: and finally the number of lines in the oracle table, imported from our Hadoop HDFS cluster SQL select count(*) from "BGUSER"."BGTEST_DP_XTAB"; COUNT(*) ---------- 1151 In a next post we will integrate data from a Hive database, and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a step to show you how these unstructured data world can be integrated to Oracle infrastructure. Hadoop, BigData, NoSql are great technologies, they are widely used and Oracle is offering a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the Oracle related technologies: NoSQL: Introduction to Oracle NoSQL Database Using Oracle NoSQL Database Big Data: Introduction to Big Data Oracle Big Data Essentials Oracle Big Data Overview Oracle Data Integrator: Oracle Data Integrator 12c: New Features Oracle Data Integrator 11g: Integration and Administration Oracle Data Integrator: Administration and Development Oracle Data Integrator 11g: Advanced Integration and Development Oracle Coherence 12c: Oracle Coherence 12c: New Features Oracle Coherence 12c: Share and Manage Data in Clusters Oracle Coherence 12c: Oracle GoldenGate 11g: Fundamentals for Oracle Oracle GoldenGate 11g: Fundamentals for SQL Server Oracle GoldenGate 11g Fundamentals for Oracle Oracle GoldenGate 11g Fundamentals for DB2 Oracle GoldenGate 11g Fundamentals for Teradata Oracle GoldenGate 11g Fundamentals for HP NonStop Oracle GoldenGate 11g Management Pack: Overview Oracle GoldenGate 11g Troubleshooting and Tuning Oracle GoldenGate 11g: Advanced Configuration for Oracle Other Resources: Apache Hadoop : http://hadoop.apache.org/ is the homepage for these technologies. "Hadoop Definitive Guide 3rdEdition" by Tom White is a classical lecture for people who want to know more about Hadoop , and some active "googling " will also give you some more references. About the author: Eugene Simos is based in France and joined Oracle through the BEA-Weblogic Acquisition, where he worked for the Professional Service, Support, end Education for major accounts across the EMEA Region. He worked in the banking sector, ATT, Telco companies giving him extensive experience on production environments. Eugen currently specializes in Oracle Fusion Middleware teaching an array of courses on Weblogic/Webcenter, Content,BPM /SOA/Identity-Security/GoldenGate/Virtualisation/Unified Comm Suite) throughout the EMEA region.

    Read the article

  • Making a Case For The Command Line

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/06/30/making-a-case-for-the-command-line.aspx I have had an idea percolating in the back of my mind for over a year now that I’ve just recently started to implement. This idea relates to building out “internal tools” to ease the maintenance and on-going support of a software system. The system that I currently work on is (mostly) web-based, so traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don’t want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc). When we try to build new tools like this we often struggle with the level of effort required to build them. Effort Required Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it will have a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present, you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don’t need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don’t even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we’ve written to accomplish a given task. All we really need is a simple console application! Plumbing Problems A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John’s idea has a lot of merit, and we tried building out some internal tools as simple Console applications. Unfortunately, this was often easier said than done. Doing a “File –> New Project” to build out a tool for a mature system can be pretty daunting because that new project is totally empty. In our case, the web application code had a lot of “plumbing” built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, and it managed all of the context that needs to follow a user around the application, such as their timezone and regional/language settings. In addition, the configuration file for the web application (a web.config in our case because this is an ASP .NET application) is large and would need to be reproduced in a similar configuration file for a Console application. While most of these problems could be solved pretty easily with some refactoring of the codebase, building Console applications for internal tools still potentially suffers from one pretty big drawback: you’d have to execute them on a machine with network access to all of the needed resources. 
Obviously, our web servers can easily communicate with the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that’s a bit cumbersome and certainly a lot less convenient than having the tools built into the web application that is so easily accessible. Mix and Match So we need a way to build tools that are easily accessible via the web application but also don’t require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it’s part of the web application we get all of the plumbing that comes along with that code, and we’re executing everything on the web servers, which means we’ll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it. To be clear, I’m not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I’m only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the “back end” to a highly polished and eye-pleasing public face. Implementation As I mentioned at the beginning of this post, this is an idea that I’ve had for a while but have only recently started building out. I’ve outlined some general guidelines and design goals for this effort as follows: Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be “wrapped” within the context of HTTP requests and responses, but I don’t want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user. Task-oriented code only: After building the initial “harness” for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible. Built-in documentation: One of the great things about most command line utilities is the ‘help’ switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves. I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line. 
Here’s a quick example of the code that would be needed to create a new tool to do something within your system: public class CustomerTools { [Verb] public void UpdateName(int customerId, string firstName, string lastName) { //invoke internal services/domain objects/whatever to perform update } } This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the ‘Verb’ attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code: Parser.Run(args, new CustomerTools()); Note that ‘args’ is just a string[] that would normally be passed in from the static Main method of a Console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods so you can opt to organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this: SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber ‘SomeExe’ in this example just represents the name of the .exe that would be created from our Console application. CLAP then interprets the arguments passed in order to find the method that should be invoked and automatically parses out the parameters that need to be passed in. After a quick spike, I’ve found that invoking the ‘Parser’ class can be done from within the context of a web application just as easily as it can from within the ‘Main’ method entry point of a Console application. There are, however, a few sticking points that I’m working around: Splitting arguments into the ‘args’ array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the ‘args’ parameter referenced above). Generally speaking they get split by whitespace, but it’s also clever enough to handle things like ignoring whitespace in a phrase that is surrounded by quotes. We’ll need to re-create this logic within our web application so that we can give the ‘args’ value to CLAP just like a console application would (a rough sketch of one approach follows below). Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can’t use Console.WriteLine within a web application, so I’ll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a Console application and a web application, so some kind of strategy pattern will likely emerge from this effort. Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort. Mimicking the console experience: This isn’t really a requirement so much as a “nice to have”. To start out, the command-line interface in the web application will probably be a single ‘textarea’ control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some javascript and CSS trickery to change that page into something with more of a “shell” interface look and feel. 
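As a rough sketch of the argument-splitting sticking point above (my own illustration, not code from the original post; the only CLAP API assumed is the Parser.Run call already shown):
using System.Collections.Generic;
using System.Text;

public static class CommandTextSplitter
{
    // Splits raw command text on whitespace, keeping double-quoted phrases together.
    public static string[] Split(string commandText)
    {
        var args = new List<string>();
        var current = new StringBuilder();
        var inQuotes = false;
        foreach (var c in commandText)
        {
            if (c == '"') { inQuotes = !inQuotes; continue; }   // toggle quoted phrase, drop the quote itself
            if (char.IsWhiteSpace(c) && !inQuotes)
            {
                if (current.Length > 0) { args.Add(current.ToString()); current.Clear(); }
                continue;
            }
            current.Append(c);
        }
        if (current.Length > 0) args.Add(current.ToString());
        return args.ToArray();
    }
}
// Usage: hand the result to CLAP exactly like a console app's Main would:
// Parser.Run(CommandTextSplitter.Split(commandText), new CustomerTools());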
I’ll be blogging more about this effort in the future and will include some code snippets (or maybe even a full blown example app) as I progress. I also think that I’ll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open sourcing that.
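To make the “text in, text out” goal concrete, here is one possible shape for the web entry point; again this is my own hedged sketch (it assumes an ASP.NET MVC-style controller, a CLAP namespace/using, and the CommandTextSplitter helper from the previous sketch, none of which is prescribed by the original post):
using System;
using System.Web.Mvc;
using CLAP;

public class CommandLineController : Controller
{
    // Receives the raw text submitted from the single textarea described above.
    [HttpPost]
    public ActionResult Run(string commandText)
    {
        var args = CommandTextSplitter.Split(commandText);
        try
        {
            Parser.Run(args, new CustomerTools());   // same CLAP call as in a console app
            return Content("OK", "text/plain");      // plain text response: "text in, text out"
        }
        catch (Exception ex)
        {
            return Content("ERROR: " + ex.Message, "text/plain");
        }
    }
}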

    Read the article

  • ADDS: 1 - Introducing and designing

    - by marc dekeyser
    What is ADDS? Every Microsoft-oriented infrastructure in today's enterprises will depend largely on the Active Directory version built by Microsoft. It is the foundation stone on which all other products (Exchange, update services, Office Communicator, the System Center family, etc.) rely to get their information. And that is just looking at it from an infrastructure perspective. A well designed and implemented Active Directory makes life for IT personnel and users alike a lot easier. Centralised management and the abilities opened up by having it in place are ample. But what is Active Directory Domain Services? We can look at ADDS as a centralised directory containing all objects your infrastructure runs on in one way or another. Since it is a Microsoft product you'll obviously not be seeing Linux or Mac clients listed in here (exceptions exist) but in general we can say it contains everything your company has in place in one form or another. The domain name services. The domain name service (or DNS for short) is a service which translates IP addresses (the identifiers for each computer in your domain) into readable and easy to understand names. This service is a prerequisite for ADDS to work, and having wrong records in a DNS server will make ADDS services fail. Generally speaking a DNS service will be run on the same server as the ADDS service, but it is worthwhile to remember that this is not necessary. You could, for example, run your DNS services on a Linux box (which would need special preparation to host an ADDS-integrated DNS zone) and run the ADDS service off another box… Where to start? If the aim is to put in place a first-time implementation of ADDS in your enterprise there are plenty of things to consider depending on what you are going to do in the long run. Great care has to be taken when first designing and implementing, as having it set up wrong will cause a headache down the line. It is for that reason that I like to start building from the bottom up and start with a generic installation of ADDS (which will still differ for every client) and make it adaptable for future services which can hook into the existing environment. Adapting existing environments is out of scope for this document (and series), although it is possible to take the pointers and change your existing environment to run in a smoother manner. Take great care when changing things, as one small slip of the hand can give you a forest-wide failure… Whenever starting with an ADDS deployment I ask the client the following questions: What are your long term plans and goals? How flexible do you want it? Are you currently Linux heavy and want to keep this, or can we go for an all-Microsoft design? 
Those three questions should give some sort of indication of what direction can be taken and whether the client has thought about some things themselves :).  The technical side of things  What is next to consider is what kind of infrastructure is already in place. For this series I'll keep it simple and introduce some general concepts without going into depth on integrating ADDS with other DNS services.  Building from the ground up means we need to consider the layers on which our infrastructure will rely. In my view that goes as follows:  Network (WAN/LAN links and physical sites) DNS Namespacing All in one domain or split up in different domains/forests? Security (both for ADDS and physical sites) The network side of things  Looking at how the network is currently set up can potentially teach us a great deal about the client. Do they have multiple physical sites? What network speeds exist between these sites, etc… Depending on this information we will design our site links (which control replication) in future stages. DNS Namespacing Maybe the single most interesting thing to know is what the domain will be named (ADDS will need a DNS domain with the same name) and where this will be hosted. Note that Active Directory can be set up with a single-label name (aka contoso instead of contoso.com), but it is highly recommended to never do this. If you do end up with a domain like that for some reason, there will be a lot of services that are going to give you good grief in the future (Exchange being one of them). So one of the best practices is to always use a two-part name (contoso.com or contoso.lan for example). Internal namespace An internal namespace is just what it sounds like. You have a DNS domain which is different internally from what the client has as an external namespace, e.g. contoso.com as an external name (out on the internet) and contoso.lan on the internal network. This setup has its advantages in that you have more obscurity from the internet on the DNS side of things, but it will require additional work to publish services to the web. External namespace Quite like the internal namespace, only here you do not differentiate the internal namespace of the company from what is known on the internet. In this implementation you would host your own DNS servers for the external domain inside the network. Or in other words, any external computer doing a DNS lookup would contact your internal DNS server for the resolution. Generally speaking this setup is a bad idea from the security side of things. Split DNS Whilst using an external namespace design is fairly easy, it involves a lot of security risks. Opening up your ADDS DNS servers for lookups exposes your entire network to the internet and should be avoided at any cost. And that is where the "split DNS" design comes in. In this setup you would still have the same namespace internally and externally, but you would be using different DNS servers for lookups on the external network, which have no records of your internal resources unless you explicitly publish them. All in one or not? In determining your Active Directory design you can look at the following possibilities:  Single forest, single domain Single forest, multiple domains Multiple forests, multiple domains I've listed the possibilities for design in increasing order of administrative magnitude. Microsoft recommends trying to use a single forest, single domain in as many situations as possible. 
It is, however, always possible that you require your services to be separated from your users in a resource forest, with trusts set up between the different forests. To start out I would go with the single forest design to avoid complexity, unless there are strict requirements to have multiple forests. Security What kind of security is required on the domain, and does this reflect the physical security on the sites? Not every client can afford to have a domain controller in a secluded server room on every site, and it is exactly for that reason that Microsoft introduced the RODC (read-only domain controller). An RODC is a domain controller that has been limited in functionality; in essence it will only cache the data you explicitly tell it to cache, and in the case of a DC compromise (it being stolen) only a limited number of accounts will be affected. Th- Th- Th- That’s all folks! Well at least for now! In future editions of this series we’ll be walking through the different tasks that need to be done and the thought which needs to be put into them. But for all editions we’ll be going from the concept of running a single forest, single domain with a split DNS setup… See you next time!
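As a small addendum to the split DNS discussion above, here is a minimal lookup sketch (my own illustration, not part of the original article). It assumes the dnspython library (2.x, which exposes Resolver.resolve()), and uses contoso.com together with placeholder nameserver addresses purely as hypothetical examples; in a split DNS design the internal and external servers are expected to return different answers for the same name.

import dns.resolver  # pip install dnspython (2.x assumed)

def lookup(name, nameserver):
    # Build a resolver that ignores the local resolver configuration and asks one specific server.
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    try:
        return [rr.to_text() for rr in resolver.resolve(name, "A")]
    except dns.resolver.NXDOMAIN:
        return []  # this server has no record for the name

# Hypothetical addresses: an internal ADDS DNS server and a public external resolver.
print("internal view:", lookup("www.contoso.com", "10.0.0.10"))
print("external view:", lookup("www.contoso.com", "8.8.8.8"))

Records that exist only on the internal servers (domain controllers, internal applications) simply do not resolve from the outside, which is the point of the design.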

    Read the article

  • How to Audit and Monitor BI Publisher Reports Access?

    - by kanichiro.nishida
Do you know who is accessing which report, and at what time, in your reporting environment? As you deliver BI Publisher reports to the production environment and your users start using them as part of their daily business operations, you might start asking such questions. With compliance becoming an integral part of any business requirement, auditing your reporting environment is also becoming one of the most critical and hottest agenda items in today’s enterprise reporting deployments. Also, I believe that auditing the reporting environment is not just for compliance, but also a way to understand how your users are using the reports and to improve the user reporting experience. BI Publisher has introduced an Enterprise Level Auditing feature with its 11G release, with an integration of the Oracle Fusion Middleware Audit Framework, which comes out of the box with the installation. Yes, this is another great example of the benefit of its tight integration with Fusion Middleware introduced with the BI Publisher 11g release. What Information Can I Know about our Reporting Environment? With this new Auditing feature you can now gain the following insights. When a particular user logs in or out What report is accessed by whom, when, and how How long it takes to process a particular report Yes, it’s all there. This is great news for 10G users, right? I used to be one of them, working with many different IT organizations and craving for this, but it’s here now with 11G! How Can I Access the Auditing Information? With the Fusion Middleware Auditing Framework, BI Publisher feeds such information either to a log file or to a database. If you decide to get the data into the database then, of course, you can use BI Publisher to report and publish, or visualize the data to gain more insights. One thing though: in order to feed the data it requires a few extra steps, which I’ll cover later.  Regardless of whether it’s the log file or the database that stores the Auditing data, first you need to enable the Auditing feature, which is not enabled by default. So, let’s take a look at how to enable it. How to Enable Auditing Feature? Here is a quick list of the steps: Enable Auditing related properties in the BI Publisher configuration file Copy the component_events.xml file to the Fusion Middleware Audit Framework’s location Enable the Auditing Policy with Fusion Middleware Control (Enterprise Manager) Restart WebLogic Server Enable Auditing related properties in BI Publisher configuration file Open the xmlp-server-config.xml file, which is located under the $BI_HOME/user_projects/domains/bifoundation_domain/config/bipublisher/repository/Admin/Configuration directory. Set the following three property values to ‘true’: AUDIT_ENABLED MONITORING_ENABLED AUDIT_JPS_INTEGRATION The ‘AUDIT_JPS_INTEGRATION’ property is not in the file by default, so you need to add it. Here is an example of how the xmlp-server-config.xml file looks after the modification. 
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<xmlpConfig xmlns="http://xmlns.oracle.com/oxp/xmlp">
 <property name="SAW_SERVER" value="adc6160510"/>
 <property name="SAW_SESSION_TIMEOUT" value="90"/>
 <property name="DEBUG_LEVEL" value="exception"/>
 <property name="SAW_PORT" value="7001"/>
 <property name="SAW_PASSWORD" value=""/>
 <property name="SAW_PROTOCOL" value="http"/>
 <property name="SAW_VERSION" value="v6"/>
 <property name="SAW_USERNAME" value=""/>
 <property name="SAW_URL_SUFFIX" value="analytics/saw.dll"/>
 <property name="MONITORING_ENABLED" value="true"/>
 <property name="MONITORING_DEFAULT_HISTORY_SIZE" value="30"/>
 <property name="AUDIT_ENABLED" value="true"/>
 <property name="JSESSION_RESET_DISABLED" value="true"/>
 <property name="SECURITY_MODEL" value="ORACLE_AS_JPS"/>
 <property name="AUDIT_JPS_INTEGRATION" value="true"/>
</xmlpConfig>
Copy component_events.xml file to Audit Framework’s location There is an Audit-related configuration file provided by BI Publisher that needs to be copied to the Audit Framework location. 1. Go to the following directory: $BI_HOME/oracle_common/modules/oracle.iau_11.1.1/components 2. Create a directory called ‘xmlpserver’ 3. Copy the component_events.xml file from /user_projects/domains/bifoundation_domain/config/bipublisher/repository/Admin/Audit to the newly created ‘xmlpserver’ directory. Enable Auditing Policy with Fusion Middleware Control (EM) Now you can set a level of auditing for each of BI Publisher’s auditing types by using Fusion Middleware Control (a.k.a. Enterprise Manager). 1. Log in to the Fusion Middleware Control UI http://hostname:port/em (e.g. reporting.oracle.com:7001/em) 2. Access the Audit Policy configuration UI from the menu: under WebLogic Domain, right-click bifoundation_domain, select Security and then click Audit Policy. 3. Set the Audit Level for BI Publisher. While you can select ‘Custom’ to set a customized level of auditing for each component, I’m selecting ‘Medium’ for this exercise. Restart WebLogic Server After all the above settings, you now need to restart the WebLogic Server instance in order for those changes to take effect. If you’re on Windows you can simply do this by selecting ‘Stop BI Servers’ and ‘Start BI Servers’ from the Start menu. If you’re on Linux you can run ‘stopWebLogic.sh’ and ‘startWebLogic.sh’, which can be found under $BI_HOME/user_projects/domains/bifoundation_domain/bin Start Auditing! 
Now, assuming that you have completed the above steps successfully, from this point on any reporting activity should be audited and stored in the auditing log file, which can be found at $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/auditlogs/xmlpserver/audit.log And here is a sample of the log file:
2011-02-18 02:25:49.928 "" "ReportRendering" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "200" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" "pdf" "RTF Corp Styles" "en_US" - - - - - - - - - - - - - - 86608512 486989824 24517 169 - - -
2011-02-18 02:25:49.929 "steve.jobs" "ReportRequest" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportAccess" "200" "" "" "pdf" "RTF Corp Styles" - - - true - - - - - - - - - - - - - - - - - -
2011-02-18 03:25:49.554 "" "ReportDataProcess" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "260" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" - - - - - - - - - - - - - - - - - 34980200 554033152 - 134 - - -
2011-02-18 03:25:50.282 "" "ReportRendering" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "263" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" "pdf" "RTF Corp Styles" "en_US" - - - - - - - - - - - - - - 16158944 554033152 24517 503 - - -
2011-02-18 03:25:50.282 "steve.jobs" "ReportRequest" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportAccess" "263" "" "" "pdf" "RTF Corp Styles" - - - true - - - - - - - - - - - - - - - - - -
2011-02-18 03:30:00.448 "barack.obama" "UserLogin" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000406,0" - - - - "bipublisher(11.1.1)" "UserSession" "26" "" - - - - - - - - - - - - - - - - - - - - - - - - -
From the above log file you can tell that a user ‘steve.jobs’ was running some reports like ‘Balance Letter’ on 2/18, and another user ‘barack.obama’ logged into the system at 3:30 on the same day. Yes, every login and logout will be recorded, and every report access will be recorded in this log file. Now, looking at this text file to understand what’s going on is pretty overwhelming. And accessing this log file, which is located on the file system of the server where BI Publisher/WebLogic Server are running, is another challenge in typical deployment scenarios. And that’s where the database storage option for the Auditing data comes into the picture. I’ll talk about this tomorrow, so stay tuned!  
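Since the raw audit.log is hard to read, here is a minimal parsing sketch (my own illustration, not from the original post). It assumes the whitespace-separated, double-quoted field layout shown in the sample above, and that the first two tokens form the timestamp, the third is the user name and the fourth the event type; those positions are inferred from the sample, not documented behaviour.

import shlex
from collections import Counter

def parse_audit_log(path):
    # Yield (timestamp, user, event) tuples from a BI Publisher audit.log-style file.
    with open(path) as fh:
        for line in fh:
            tokens = shlex.split(line)   # honours the double-quoted fields
            if len(tokens) < 4:
                continue                 # skip blank or malformed lines
            timestamp = tokens[0] + " " + tokens[1]
            user, event = tokens[2], tokens[3]
            yield timestamp, user, event

# Example: count report requests per user and list logins.
events = list(parse_audit_log("audit.log"))
report_users = Counter(u for _, u, e in events if e == "ReportRequest" and u)
logins = [(t, u) for t, u, e in events if e == "UserLogin"]
print(report_users)
print(logins)

This is just a stop-gap for the log-file option; as the post says, feeding the audit data into a database and reporting on it with BI Publisher itself is the more comfortable route.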

    Read the article

  • Oracle Solaris Zones Physical to virtual (P2V)

    - by user939057
Introduction This document describes the process of creating a Solaris 10 image built from a physical system and migrating it into a virtualized operating system environment using the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability. Using an example and various scenarios, this paper describes how to take advantage of the Oracle Solaris 10 Zones Physical-to-Virtual (P2V) capability with other Oracle Solaris features to optimize performance using Solaris 10 resource management, advanced storage management using Solaris ZFS, plus improving operating system visibility with Solaris DTrace. The most common use for this tool is when performing consolidation of existing systems onto virtualization-enabled platforms. In addition, we can use the Physical-to-Virtual (P2V) capability for other tasks, for example backing up your physical systems and moving them into a virtualized operating system environment hosted on the Disaster Recovery (DR) site; another option can be building an Oracle Solaris 10 image repository with various configurations and different software packages in order to reduce provisioning time. Oracle Solaris Zones Oracle Solaris Zones is a virtualization and partitioning technology supported on Oracle Sun servers powered by SPARC and Intel processors. This technology provides an isolated and secure environment for running applications. A zone is a virtualized operating system environment created within a single instance of the Solaris 10 Operating System. Each virtual system is called a zone and runs a unique and distinct copy of the Solaris 10 operating system. Oracle Solaris Zones Physical-to-Virtual (P2V) A new feature for Solaris 10 9/10. This feature provides the ability to build a Solaris 10 image from a physical system and migrate it into a virtualized operating system environment. There are three main steps to using this tool: 1. Image creation on the source system; this image includes the operating system and optionally the software which we want to include within the image. 2. Preparing the target system by configuring a new zone that will host the new image. 3. Image installation on the target system using the image we created in step 1. The host where the image is built is referred to as the source system, and the host where the image is installed is referred to as the target system. Benefits of Oracle Solaris Zones Physical-to-Virtual (P2V) Here are some benefits of this new feature:  Simple - easy build process using Oracle Solaris 10 built-in commands.  Robust - based on Oracle Solaris Zones, a robust and well known virtualization technology.  Flexible - supports migration from V-series servers to T- or M-series systems. For the latest server information, refer to the Sun Servers web page. Prerequisites The target Oracle Solaris system should be running the latest patch cluster, and the minimum Solaris version on the target system should be Solaris 10 9/10. Refer to the latest Administration Guide for Oracle Solaris for a complete procedure on how to download and install Oracle Solaris. NOTE: If the source system used to build the image runs an older version than the target system, then during the process the operating system will be upgraded to Solaris 10 9/10 (update on attach). Creating the Image The image is used to distribute the software. We will create an image on the source machine. 
We can create the image on the local file system and then transfer it to the target machine, or build it on NFS shared storage and mount the NFS file system from the target machine. Optionally, before creating the image we need to complete the installation of the software that we want to include with the Solaris 10 image. An image is created by using the flarcreate command:
Source # flarcreate -S -n s10-system -L cpio /var/tmp/solaris_10_up9.flar
The command does the following:  -S specifies that we skip the disk space check and do not write archive size data to the archive (faster).  -n specifies the image name.  -L specifies the archive format (i.e. cpio). Optionally, we can add descriptions to the archive identification section, which can help to identify the archive later:
Source # flarcreate -S -n s10-system -e "Oracle Solaris with Oracle DB 10.2.0.4" -a "oracle" -L cpio /var/tmp/solaris_10_up9.flar
You can see an example of the archive identification section in Appendix A: archive identification section. We can compress the flar image using the gzip command, or by adding the -c option to the flarcreate command:
Source # gzip /var/tmp/solaris_10_up9.flar
An md5 checksum can be created for the image in order to ensure no data tampering:
Source # digest -v -a md5 /var/tmp/solaris_10_up9.flar
Moving the image onto the target system If we created the image on the local file system, we need to transfer the flar archive from the source machine to the target machine:
Source # scp /var/tmp/solaris_10_up9.flar target:/var/tmp
Configuring the Zone on the target system After copying the software to the target machine, we need to configure a new zone in order to host the new image on that zone. To install the new zone on the target machine, first we need to configure the zone (for the full zone creation options see the following link: http://docs.oracle.com/cd/E18752_01/html/817-1592/index.html). ZFS integration A flash archive can be created on a system that is running a UFS or a ZFS root file system. NOTE: If you create a Solaris Flash archive of a Solaris 10 system that has a ZFS root, then by default the flar will actually be a ZFS send stream, which can be used to recreate the root pool. This image cannot be used to install a zone. You must create the flar with an explicit cpio or pax archive when the system has a ZFS root. Use the flarcreate command with the -L archiver option, specifying cpio or pax as the method to archive the files. 
(For example, see Step 1 in the previous section.) Optionally, on the target system you can create the zone root folder on a ZFS file system in order to benefit from the ZFS features (clones, snapshots, etc...):
Target # zpool create zones c2t2d0
Create the zone root folder:
Target # chmod 700 /zones
Target # zonecfg -z solaris10-up9-zone
solaris10-up9-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:solaris10-up9-zone> create
zonecfg:solaris10-up9-zone> set zonepath=/zones
zonecfg:solaris10-up9-zone> set autoboot=true
zonecfg:solaris10-up9-zone> add net
zonecfg:solaris10-up9-zone:net> set address=192.168.0.1
zonecfg:solaris10-up9-zone:net> set physical=nxge0
zonecfg:solaris10-up9-zone:net> end
zonecfg:solaris10-up9-zone> verify
zonecfg:solaris10-up9-zone> commit
zonecfg:solaris10-up9-zone> exit
Installing the Zone on the target system using the image Install the configured zone solaris10-up9-zone by using the zoneadm command with the install -a option and the path to the archive. The following example shows how to install the image and sys-unconfig the zone:
Target # zoneadm -z solaris10-up9-zone install -u -a /var/tmp/solaris_10_up9.flar
Log File: /var/tmp/solaris10-up9-zone.install_log.AJaGve
Installing: This may take several minutes...
The following example shows how we can preserve system identity:
Target # zoneadm -z solaris10-up9-zone install -p -a /var/tmp/solaris_10_up9.flar
Resource management Some applications are sensitive to the number of CPUs on the target Zone. You need to match the number of CPUs on the Zone using the zonecfg command:
zonecfg:solaris10-up9-zone> add dedicated-cpu
zonecfg:solaris10-up9-zone> set ncpus=16
DTrace integration Some applications might need to be analyzed using DTrace on the target zone; you can add DTrace support on the zone using the zonecfg command:
zonecfg:solaris10-up9-zone> set limitpriv="default,dtrace_proc,dtrace_user"
Exclusive IP stack An Oracle Solaris Container running in Oracle Solaris 10 can have a shared IP stack with the global zone, or it can have an exclusive IP stack (which was released in Oracle Solaris 10 8/07). An exclusive IP stack provides a complete, tunable, manageable and independent networking stack to each zone. A zone with an exclusive IP stack can configure Scalable TCP (STCP), IP routing, IP multipathing, or IPsec. For an example of how to configure an Oracle Solaris zone with an exclusive IP stack, see the following:
zonecfg:solaris10-up9-zone> set ip-type=exclusive
zonecfg:solaris10-up9-zone> add net
zonecfg:solaris10-up9-zone> set physical=nxge0
When the installation completes, use the zoneadm list -i -v options to list the installed zones and verify the status:
Target # zoneadm list -i -v
See that the new Zone status is installed:
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- solaris10-up9-zone installed /zones native shared
Now boot the Zone:
Target # zoneadm -z solaris10-up9-zone boot
We need to log in to the Zone in order to complete the zone setup, or insert a sysidcfg file before booting the zone for the first time; see an example sysidcfg file in Appendix B: sysidcfg file section.
Target # zlogin -C solaris10-up9-zone
Troubleshooting If an installation fails, review the log file. On success, the log file is in /var/log inside the zone. On failure, the log file is in /var/tmp in the global zone. If a zone installation is interrupted or fails, the zone is left in the incomplete state. 
Use uninstall -F to reset the zone to the configured state:
Target # zoneadm -z solaris10-up9-zone uninstall -F
Target # zonecfg -z solaris10-up9-zone delete -F
Conclusion The Oracle Solaris Zones P2V tool provides the flexibility to build pre-configured images with different software configurations for faster deployment and server consolidation. In this document, I demonstrated how to build and install images and how to integrate the images with other Oracle Solaris features like ZFS and DTrace.
Appendix A: archive identification section We can use the head -n 20 /var/tmp/solaris_10_up9.flar command in order to access the identification section that contains the detailed description:
Target # head -n 20 /var/tmp/solaris_10_up9.flar
FlAsH-aRcHiVe-2.0
section_begin=identification
archive_id=e4469ee97c3f30699d608b20a36011be
files_archived_method=cpio
creation_date=20100901160827
creation_master=mdet5140-1
content_name=s10-system
creation_node=mdet5140-1
creation_hardware_class=sun4v
creation_platform=SUNW,T5140
creation_processor=sparc
creation_release=5.10
creation_os_name=SunOS
creation_os_version=Generic_142909-16
files_compressed_method=none
content_architectures=sun4v
type=FULL
section_end=identification
section_begin=predeployment
begin 755 predeployment.cpio.Z
Appendix B: sysidcfg file section
Target # cat sysidcfg
system_locale=C
timezone=US/Pacific
terminal=xterms
security_policy=NONE
root_password=HsABA7Dt/0sXX
timeserver=localhost
name_service=NONE
network_interface=primary {
hostname=solaris10-up9-zone
netmask=255.255.255.0
protocol_ipv6=no
default_route=192.168.0.1
}
name_service=NONE
nfs4_domain=dynamic
We need to copy this file before booting the zone:
Target # cp sysidcfg /zones/solaris10-up9-zone/root/etc/
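As a small companion to the digest command shown above, here is a hedged Python sketch (my addition, not part of the article) for verifying on the target system that the transferred flar matches the md5 checksum computed on the source. The expected hash value is a placeholder you would paste from the digest output on the source system.

import hashlib

def md5_of(path, chunk_size=1024 * 1024):
    # Compute the md5 digest of a (potentially large) flar archive in chunks.
    h = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder value: paste the digest produced by 'digest -v -a md5' on the source system.
expected = "0123456789abcdef0123456789abcdef"
actual = md5_of("/var/tmp/solaris_10_up9.flar")
print("checksum OK" if actual == expected else "checksum mismatch: " + actual)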

    Read the article

  • Introducing Oracle Multitenant

    - by OracleMultitenant
The First Database Designed for the Cloud Today Oracle announced the general availability (GA) of Oracle Database 12c, the first database designed for the Cloud. Oracle Multitenant, new with Oracle Database 12c, is a key component of this – a new architecture for consolidating databases and simplifying operations in the Cloud. With this, the inaugural post in the Multitenant blog, my goal is to start the conversation about Oracle Multitenant. We are very proud of this new architecture, which we view as a major advance for Oracle. Customers, partners and analysts who have had previews are very excited about its capabilities and its flexibility. This high-level review of Oracle Multitenant will touch on our design considerations and how we re-architected our database for the cloud. I’ll briefly describe our new multitenant architecture and explain its key benefits. Finally I’ll mention some of the major use cases we see for Oracle Multitenant. Industry Trends We always start by talking to our customers about the pressures and challenges they’re facing and what trends they’re seeing in the industry. Some things don’t change. They face the same pressures and the same requirements as ever: Pressure to do more with less; be faster, leaner, cheaper, and deliver services 24/7. Big companies have achieved scale. Now they want to realize economies of scale. As ever, DBAs are faced with the challenges of patching and upgrading large numbers of databases, and provisioning new ones.  Requirements are familiar: Performance, scalability, reliability and high availability are non-negotiable. They need ever more security in this threatening climate. There’s no time to stop and retool with new applications. What’s new are the trends. These are the techniques to use to respond to these pressures within the constraints of the requirements. With the advent of cloud computing and the availability of massively powerful servers – even engineered systems such as Exadata – our customers want to consolidate many applications into fewer larger servers. There’s a move to standardized services – even self-service. Consolidation Consolidation is not new; companies have tried various different approaches to consolidation of databases in the cloud. One approach is to partition a powerful server between several virtual machines, one per application. A downside of this is that you have the resource and management overheads of OS and RDBMS per VM – that is, per application. Another is that you have replaced physical sprawl with virtual sprawl, and virtual sprawl is still expensive to manage. In the dedicated database model, we have a single physical server supporting multiple databases, one per application. So there’s a shared OS overhead, but RDBMS process and memory overheads are replicated per application. Let's think about our traditional Oracle Database architecture. Every time we create a database, be it a production database, a development or a test database, what do we do? 
We create a set of files, we allocate a bunch of memory for managing the data, and we kick off a series of background processes. This is replicated for every one of the databases that we create. As more and more databases are fired up, these replicated overheads quickly consume the available server resources and this limits the number of applications we can run on any given server. In Oracle Database 11g and earlier the highest degree of consolidation could be achieved by what we call schema consolidation. In this model we have one big server with one big database. Individual applications are installed in separate schemas or table-owners. Database overheads are shared between all applications, which affords maximum consolidation. The shortcomings are that application changes are often required. There is no tenant isolation. One bad apple can spoil the whole batch. New Architecture & Benefits In Oracle Database 12c, we have a new multitenant architecture, featuring pluggable databases. This delivers all the resource utilization advantages of schema consolidation with none of the downsides. There are two parts to the term “pluggable database”: "pluggable", which is new, and "database", which is familiar.  Before we get to the exciting new stuff let’s discuss what hasn’t changed. A pluggable database is a fully functional Oracle database. It’s not watered down in any way. From the perspective of an application or an end user it hasn’t changed at all. This is very important because it means that no application changes are required to adopt this new architecture. There are many thousands of applications built on Oracle databases and they are all ready to run on Oracle Multitenant. So we have these self-contained pluggable databases (PDBs), and as their name suggests, they are plugged into a multitenant container database (CDB). The CDB behaves as a single database from the operations point of view. Very much as we had with the schema consolidation model, we only have a single set of Oracle background processes and a single, shared database memory requirement. This gives us very high consolidation density, which affords maximum reduction in capital expenses (CapEx). By performing management operations at the CDB level – “managing many as one” – we can achieve great reductions in operating expenses (OpEx) as well, but we retain granular control where appropriate. Furthermore, the “pluggability” capability gives us portability and this adds a tremendous amount of agility. We can simply unplug a PDB from one CDB and plug it into another CDB, for example to move it from one SLA tier to another. I'll explore all these new capabilities in much more detail in a future posting.  Use Cases We can identify a number of use cases for Oracle Multitenant. Here are a few of the major ones. 
Development / Testing: individual engineers need rapid provisioning and recycling of private copies of a few "master test databases"
Consolidation of disparate applications using fewer, more powerful servers
Software as a Service: deploying separate copies of identical applications to individual tenants
Database as a Service: typically self-service provisioning of databases on the private cloud
Application Distribution from ISV / Installation by Customer: eliminating many typical installation steps (create schema, import seed data, import application code PL/SQL…) - just plug in a PDB!
High volume data distribution: literally via disk drives in envelopes distributed by truck! - distribution of things like GIS or MDM master databases
…various others!
Benefits Previous approaches to consolidation have involved a trade-off between reductions in Capital Expenses (CapEx) and Operating Expenses (OpEx), and they’ve usually come at the expense of agility. With Oracle Multitenant you can have your cake and eat it:
Minimize CapEx
 More Applications per server
Minimize OpEx
 Manage many as one
 Standardized procedures and services
 Rapid provisioning
Maximize Agility
 Cloning for development and testing
 Portability through pluggability
 Scalability with RAC
Ease of Adoption
 Applications run unchanged
 It’s a pure deployment choice. Neither the database backend nor the application needs to be changed.
In future postings I’ll explore various aspects in more detail. However, if you feel compelled to devour everything you can about Oracle Multitenant this very minute, have no fear. Visit the Multitenant page on OTN and explore the various resources we have available there. Among these, Oracle Distinguished Product Manager Bryn Llewellyn has written an excellent, thorough, and exhaustively detailed White Paper about Oracle Multitenant, which is available here.  Follow me  I tweet @OraclePDB #OracleMultitenant
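To illustrate the "applications run unchanged" point above, here is a minimal connection sketch (my own, not from the post). It assumes the cx_Oracle driver with an Oracle client configured, and a hypothetical PDB registered with the listener under the service name salespdb.example.com; host, credentials and service name are all placeholders.

import cx_Oracle  # assumes the cx_Oracle driver and an Oracle client are installed

# An application connects to a pluggable database exactly as it would to any other
# Oracle database: via a service name. Nothing in the application itself changes.
dsn = cx_Oracle.makedsn("dbhost.example.com", 1521, service_name="salespdb.example.com")
conn = cx_Oracle.connect(user="app_user", password="app_password", dsn=dsn)

cur = conn.cursor()
cur.execute("SELECT sysdate FROM dual")
print(cur.fetchone())
conn.close()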

    Read the article

  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012 News Facts During his opening keynote address at Oracle OpenWorld, Oracle CEO, Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications. Next-Generation Technologies Deliver Dramatic Performance Improvements Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database Workloads. Oracle Exadata X3 Database In-Memory Machine systems leverage next-generation technologies to deliver significant performance enhancements, including: Four times the Flash memory capacity of the previous generation; with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata’s unique Hybrid Columnar Compression capabilities, hundreds of Terabytes of user data can now be managed entirely within Flash; 20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous generation Exadata systems, increasing their capacity for writes tenfold; 33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors; Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2 provides 40 10Gb network ports per rack for connecting users and moving data; Up to 30 percent reduction in power and cooling. Configured for Your Business, Available Today Oracle Exadata X3-2 Database In-Memory Machine systems are available in a Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configuration to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations and existing systems can also be upgraded with Oracle Exadata X3-2 servers. 
Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle’s PeopleSoft, Oracle’s Siebel CRM, the Oracle E-Business Suite, and thousands of other applications. Supporting Quotes “Forward-looking enterprises are moving towards Cloud Computing architectures,” said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. “Oracle Exadata’s unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today.” Supporting Resources Oracle Press Release Oracle Exadata Database Machine Oracle Exadata X3-2 Database In-Memory Machine Oracle Exadata X3-8 Database In-Memory Machine Oracle Database 11g Follow Oracle Database via Blog, Facebook and Twitter Oracle OpenWorld 2012 Oracle OpenWorld 2012 Keynotes Like Oracle OpenWorld on Facebook Follow Oracle OpenWorld on Twitter Oracle OpenWorld Blog Oracle OpenWorld on LinkedIn Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza - - watch for the replay to be available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html

    Read the article

  • Oracle Virtualization at Oracle OpenWorld 2012

    - by Chris Kawalek
    Mini-Series Entry 1 of 3: Hands-On Virtualization This is the first entry of a 3 part mini-series aimed at highlighting server and desktop virtualization at this year’s Oracle OpenWorld.  Oracle OpenWorld 2012 is fast approaching! If you are as excited as we are about the fascinating new Oracle virtualization content featured at Oracle OpenWorld 2012, you won’t want to miss this blog mini-series. We will be highlighting sessions that cover advances and innovations in our products, our product strategy and roadmap, and hands on labs for step-by-step instructions from our field and product experts. In the blog mini-series you will learn about: The Oracle Virtualization general keynote session Hands-on labs  Key Oracle server and desktop virtualization sessions In this entry, we will cover the Oracle Virtualization keynote session and the hands-on labs you won't want to miss. General Session: Oracle Virtualization Strategy and Roadmap Session ID: GEN8725 Oracle offers the industry’s most complete and integrated virtualization portfolio enabling organizations to realize benefits beyond simple consolidation as they transform their data centers into flexible cloud-based infrastructures. Join Oracle executives and experts to learn about Oracle’s desktop-to-data-center virtualization solutions, such as the OS, with built-in management integration at all layers that can help you virtualize and manage the complete computing environment, from physical servers to virtual servers and applications. This “don’t-miss” session offers details of the latest product updates and strategy; product roadmaps; integration with enterprise applications; and real-world examples of how Oracle server, desktop, and storage virtualization is benefiting customers. Here are our top picks for Hands-On Labs for Oracle OpenWorld 2012: Oracle Virtual Desktop Infrastructure Performance and Tablet Mobility Session ID: HOL9907 This hands-on lab demonstrates the performance (using an industry-standard load tester) and roaming capabilities of Oracle Virtual Desktop Infrastructure with Oracle’s Sun Ray Clients, Apple iPad and other clients. Deploying an IaaS Environment with Oracle VM: Hands-On Lab  Session ID: HOL9558 This hands-on lab takes you through the planning and deployment of an infrastructure as a service (IaaS) environment with Oracle VM as the foundation. It covers a range of topics, from planning storage capacity, LUN creation, network bandwidth planning, and best practices to designing and streamlining the environment for ease of management. Learn from deeply experienced field engineers and product experts. Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-On Lab Session ID: HOL9559 This hands-on lab is for application architects or system administrators who will need to deploy and manage Oracle Applications. You’ll learn how Oracle VM Templates can turn you into a power user who can virtualize and deploy complex Oracle Applications in minutes. Longtime field-experienced engineers and product experts will show you, step by step, how to download and import templates and deploy the applications. x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance Session ID: HOL9870 The purpose of this hands-on lab is to demonstrate the functionality and usage of Oracle’s enterprise cloud infrastructure for x86 with Oracle VM 3.x. 
It covers:  Creation of VMs Migration of VMs  Quick and easy deployment of Oracle applications with Oracle VM Templates  Usage of the Storage Connect plug-in for the Sun ZFS Storage Appliance You can find these and other great sessions on the Oracle OpenWorld 2012 Content Catalogue. Start checking now to better plan and organize your week at the conference. Then you’ll be ready to sign up for all of your sessions in mid-July when the scheduling tool goes live. While the hands-on labs allow you to directly interact with Oracle virtualization products, the conference sessions allow you to hear from a wide variety of industry experts on how they're using the technology in real-world deployments, solving specific challenges, and more. In tomorrow's entry, we'll start talking about the many conference sessions related to Oracle server and desktop virtualization you can attend during the show. See you then! - The Oracle Virtualization marketing team

    Read the article

  • BizTalk 2009 - BizTalk Benchmark Wizard: Installation

    - by StuartBrierley
As previously detailed, I have completed a single server installation of BizTalk Server 2009 standard on my development laptop; a MacBook Pro Core2Duo running at 2.16GHz with 2GB of RAM.  Following this I also posted on my use of the BizTalk Server Best Practices Analyser and how to configure the BizTalk SQL Server Jobs.  All of which means that I should have some confidence that I have a decent working BizTalk Server 2009 environment. Next I thought that it would be a good idea to try and get some idea of how this setup performs by carrying out some baseline tests that can then be replicated on the test and live servers. The aim of this would be to allow confident predictions to be made of how any solutions developed on a single "server" installation may be expected to perform when deployed to these multi-server BizTalk Server 2009 standard installations. The BizTalk Benchmark Wizard would seem to be the perfect tool for the job. The BizTalk Benchmark Wizard is a utility that can be used to gain some validation of a BizTalk installation, giving a level of guidance on whether it is performing as might be expected. This utility should be used after BizTalk Server has been installed and before any solutions are deployed to the environment.  This will ensure that you are getting consistent and clean results from the BizTalk Benchmark Wizard. The BizTalk Benchmark Wizard applies load to the BizTalk Server environment under a choice of specific scenarios. During these scenarios performance counter information is collected and assessed against statistics that are appropriate to the BizTalk Server environment: "The executed scenarios may or may not be relative to any realistic scenario, and is only intended for testing. The BizTalk Benchmark Wizard has been developed in relation to the BizTalk Server 2009 Scale Out Testing Study. More information about the study can be found here: http://msdn.microsoft.com/en-us/library/ee377068(BTS.10).aspx" After downloading and installing the wizard you will need to set up the Hosts, Instances and Adapter handlers.  This is done by running a script file using "cscript", as detailed below.  To do this you will need to open a command prompt window and navigate to the script folder; assuming the default installation location this would be C:\Program Files\Blogical\BizTalk Benchmark Wizard\Artefacts\BizTalk. In this folder you should find an InstallHosts.vbs file which can be executed using the following parameters: NTGroupName - The name of the Windows NT group. UserName – The name of the user account running the service instances. Password – The password of the user account running the service instances. Receive Host – The name of the server where you want to run the receive host instance.  Send Host - The name of the server where you want to run the send host instance. Processing Host - The name of the server where you want to run the processing host instance. 
By default the script is set up for 64 bit hosts, so if you are running in a 32 bit environment make sure that you change the following line in the script before continuing: from:   objHS.IsHost32BitOnly = False to:    objHS.IsHost32BitOnly = True If you have a single box installation, your script command might look like this: cscript InstallHosts.vbs "BizTalk Application Users" "\MyUser" "MyPassword" "BtsServer1" "BtsServer1" "BtsServer1" If you have a multi server installation, your script command might look like this: cscript InstallHosts.vbs "MyDomain\BizTalk Application Users" "MyDomain\MyUser" "MyPassword" "BtsServer1" "BtsServer2" "BtsServer2" Running this script will create: Three hosts (BBW_RxHost, BBW_TxHost and BBW_PxHost) Three host instances One send and one receive adapter handler for the WCF NetTcp adapter. You will then need to import the BizTalk MSI via the BizTalk Administration Console.  Open the BizTalk Administration Console, point to the "Applications" node and import the BizTalk Benchmark Wizard.msi found in the same folder as the script above. This will create a "BizTalk Benchmark Wizard" application along with all ports and orchestrations needed. To finish the installation you will need to run the BizTalk Benchmark Wizard.msi on all BizTalk servers to add the assemblies to the Global Assembly Cache (GAC). Next I will look at running the BizTalk Benchmark Wizard.

    Read the article

  • Big Data – Buzz Words: What is MapReduce – Day 7 of 21

    - by Pinal Dave
In yesterday’s blog post we learned what Hadoop is. In this article we will take a quick look at one of the four most important buzz words which go around Big Data – MapReduce. What is MapReduce? MapReduce was designed by Google as a programming model for processing large data sets with a parallel, distributed algorithm on a cluster. Though MapReduce was originally Google proprietary technology, it has become quite a generalized term in recent times. MapReduce comprises a Map() and a Reduce() procedure. Procedure Map() performs filtering and sorting operations on data, whereas procedure Reduce() performs a summary operation on the data. This model is based on modified concepts of the map and reduce functions commonly available in functional programming. Libraries providing the Map() and Reduce() procedures have been written in many different languages. The most popular free implementation of MapReduce is Apache Hadoop, which we will explore tomorrow. Advantages of MapReduce Procedures The MapReduce Framework usually contains distributed servers and it runs various tasks in parallel to each other. There are various components which manage the communications between the various nodes of the data and provide high availability and fault tolerance. Programs written in MapReduce functional styles are automatically parallelized and executed on commodity machines. The MapReduce Framework takes care of the details of partitioning the data and executing the processes on the distributed servers at run time. During this process, if there is any disaster, the framework provides high availability and other available nodes take over the responsibility of the failed node. As you can clearly see, this entire MapReduce Framework provides much more than just Map() and Reduce() procedures; it provides scalability and fault tolerance as well. A typical implementation of the MapReduce Framework processes many petabytes of data across thousands of processing machines. How does the MapReduce Framework Work? A typical MapReduce Framework contains petabytes of data and thousands of nodes. Here is a basic explanation of the MapReduce procedures which use this massive number of commodity servers. Map() Procedure There is always a master node in this infrastructure which takes an input. Right after taking the input, the master node divides it into smaller sub-inputs or sub-problems. These sub-problems are distributed to worker nodes. A worker node later processes them and does the necessary analysis. Once the worker node completes the process with this sub-problem it returns it back to the master node. Reduce() Procedure All the worker nodes return the answers to the sub-problems assigned to them to the master node. The master node collects the answers and once again aggregates them into the answer to the original big problem which was assigned to the master node. The MapReduce Framework performs the above Map() and Reduce() procedures in parallel and independently of each other. All the Map() procedures can run parallel to each other, and once each worker node has completed its task it can send the result back to the master node to compile into a single answer. This particular procedure can be very effective when it is implemented on a very large amount of data (Big Data); a minimal word-count sketch of the idea follows below. 
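The following is a minimal, single-machine sketch of the Map()/Reduce() idea described above, written as a word-count example (my own illustration; a real MapReduce framework such as Hadoop distributes these phases across many nodes and handles the failures for you).

from collections import defaultdict

def map_phase(document):
    # Map(): emit intermediate (key, value) pairs - here (word, 1) for every word.
    for word in document.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Group intermediate values by key, as the framework does between Map and Reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Reduce(): summarize all values for one key - here a simple sum.
    return key, sum(values)

documents = ["big data is big", "map reduce processes big data"]
intermediate = [pair for doc in documents for pair in map_phase(doc)]
results = dict(reduce_phase(k, v) for k, v in shuffle(intermediate).items())
print(results)  # {'big': 3, 'data': 2, 'is': 1, 'map': 1, 'reduce': 1, 'processes': 1}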
The MapReduce Framework has five different steps:
Preparing the Map() Input
Executing the User-Provided Map() Code
Shuffling the Map Output to the Reduce Processor
Executing the User-Provided Reduce Code
Producing the Final Output
Here is the dataflow of the MapReduce Framework:
Input Reader
Map Function
Partition Function
Compare Function
Reduce Function
Output Writer
In a future blog post of this series we will explore the various components of MapReduce in detail. MapReduce in a Single Statement MapReduce is equivalent to the SELECT and GROUP BY of a relational database, applied to a very large database. Tomorrow In tomorrow’s blog post we will discuss the Buzz Word – HDFS. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

< Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >