Search Results

Search found 2343 results on 94 pages for 'masterchris 99'.


  • MySQL greatest value in row?

    - by noryb009
    Hi! I'm using MySQL with PHP. My table looks like this (I'm showing 3 value columns, but there are more):

      id | 1  | 2  | 3
      1  | 3  | 12 | -29
      2  | 5  | 8  | 8
      3  | 99 | 7  | NULL

    I need to get the greatest value in a certain row, and the output should be the name of the column holding that value, i.e. (id - output): (1 - 2), (2 - 2), (3 - 1). Is there a query for this? I've been trying, but I can't get it to work right.
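    One approach (an untested sketch, assuming MySQL 5.x and a table named mytable): MySQL's GREATEST() returns NULL if any argument is NULL, so COALESCE() each column first, then map the winning value back to its column name with CASE.

      SELECT id,
             CASE GREATEST(COALESCE(`1`, -2147483648),
                           COALESCE(`2`, -2147483648),
                           COALESCE(`3`, -2147483648))
                 WHEN `1` THEN '1'
                 WHEN `2` THEN '2'
                 WHEN `3` THEN '3'
             END AS greatest_column
      FROM mytable;

    On ties (as in row 2), CASE returns the first matching column, which matches the expected (2 - 2).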

    Read the article

  • Merging some columns of two mysql tables where id = fileid

    - by garg
    There are two tables:

      TableA
      filedata_id | user_id | filename
      1           | 1       | file.txt
      2           | 1       | file2.txt

      TableB
      a_id | date    | filedataid | counter | state | cat_id | subcat_id | med_id
      99   | 1242144 | 1          | 2       | v     | 55     | 56        | 90
      100  | 1231232 | 2          | 3       | i     | 44     | 55        | 110

    I want to move the columns cat_id, subcat_id and med_id to TableA where TableA.filedata_id = TableB.filedataid. The result should be:

      TableA
      filedata_id | user_id | filename  | cat_id | subcat_id | med_id
      1           | 1       | file.txt  | 55     | 56        | 90
      2           | 1       | file2.txt | 44     | 55        | 110

    and so on. Is there a way to do this easily?
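    A possible approach (a sketch, assuming MySQL; the column types are assumptions): add the destination columns to TableA, then copy the values across with a multi-table UPDATE.

      -- add the destination columns
      ALTER TABLE TableA
          ADD COLUMN cat_id INT,
          ADD COLUMN subcat_id INT,
          ADD COLUMN med_id INT;

      -- copy values across via the shared file id
      UPDATE TableA a
      JOIN TableB b ON a.filedata_id = b.filedataid
      SET a.cat_id    = b.cat_id,
          a.subcat_id = b.subcat_id,
          a.med_id    = b.med_id;

    After verifying the result, the now-redundant columns could be dropped from TableB.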

    Read the article

  • How can I build pyv8 from source on FreeBSD against the v8 port?

    - by Utkonos
    I am unable to build pyv8 from source on FreeBSD. I have installed the /usr/ports/lang/v8 port, and I'm running into the following error. It seems that pyv8 wants to build v8 itself even though v8 is already built and installed. How can I point pyv8 to the already installed location of v8?

      # python setup.py build
      Found Google v8 base on V8_HOME, update it to the latest SVN trunk at
      running build
      ====================
      INFO: Installing or updating GYP...
      --------------------
      INFO: Check out GYP from SVN ...
      DEBUG: make dependencies
      ERROR: Check out GYP from SVN failed: code=2
      DEBUG: "Makefile", line 43: Missing dependency operator
             "Makefile", line 45: Need an operator
             "Makefile", line 46: Need an operator
             [... similar "Missing dependency operator" / "Need an operator" errors continue through "Makefile", line 117 ...]
             Error expanding embedded variable.
      ====================
      INFO: Patching the GYP scripts
      INFO: patch the Google v8 build/standalone.gypi file to enable RTTI and C++ Exceptions
      ====================
      INFO: building Google v8 with GYP for x64 platform with release mode
      --------------------
      INFO: build v8 from SVN ...
      DEBUG: make verifyheap=off component=shared_library visibility=on gdbjit=off liveobjectlist=off regexp=native disassembler=off objectprint=off debuggersupport=on extrachecks=off snapshot=on werror=on x64.release
      ERROR: build v8 from SVN failed: code=2
      DEBUG: [the identical run of "Makefile" errors from above repeats here, again ending with "Error expanding embedded variable."]

    The files installed by the v8 port are the following (in /usr/local):

      bin/d8
      include/v8.h
      include/v8-debug.h
      include/v8-preparser.h
      include/v8-profiler.h
      include/v8-testing.h
      include/v8stdint.h
      lib/libv8.so
      lib/libv8.so.1
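    One plausible diagnosis (an assumption, not from the original post): "Missing dependency operator" and "Need an operator" are the errors FreeBSD's BSD make prints when fed a GNU makefile, so pyv8's build is most likely invoking make where it needs gmake. Since the log shows setup.py honouring a V8_HOME variable, something along these lines may help (csh syntax; the paths are assumptions):

      pkg_add -r gmake           # install GNU make
      setenv V8_HOME /usr/local  # point pyv8 at the port's v8
      python setup.py build

    If setup.py still shells out to plain make internally, patching its build commands to call gmake (or putting a wrapper named "make" that execs gmake earlier in PATH) would be the next thing to try.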

    Read the article

  • Getting attacked, please what do I do?

    - by E3pO
    I'm getting millions of these requests! How can I stop them?

      Gecko/20100101 Firefox/12.0"
      173.59.227.11 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416620414 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
      173.72.197.39 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416641552 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
      2.222.7.143 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416647004 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
      62.83.154.11 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416572373 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
      65.35.221.207 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416453921 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; BOIE8;ENUS)"
      68.40.182.244 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338415880184 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 5.1; rv:12.0) Gecko/20100101 Firefox/12.0"
      99.244.26.33 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338384208421 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:12.0) Gecko/20100101 Firefox/12.0"
      65.12.234.229 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338415812217 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
      173.59.227.11 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416620415 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
      68.40.182.244 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338415881181 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 5.1; rv:12.0) Gecko/20100101 Firefox/12.0"
      188.82.242.197 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338414398872 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:12.0) Gecko/20100101 Firefox/12.0"
      99.244.26.33 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338384208454 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:12.0) Gecko/20100101 Firefox/12.0"
      173.59.227.11 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416620424 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
      68.40.182.244 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338415882180 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 5.1; rv:12.0) Gecko/20100101 Firefox/12.0"
      65.12.234.229 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338415812229 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
      95.34.134.51 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416367865 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.52 Safari/536.5"
      65.35.221.207 - - [30/May/2012:18:23:45 -0400] "GET /?id=1338416453937 HTTP/1.1" 200 28 "http://108.166.97.22/" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; BOIE8;ENUS)"

    How can I filter out GET requests whose referer is "http://108.166.97.22/"?
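    Every hit carries the attacking page as its Referer header, so blocking on that header is one option (a sketch, assuming Apache with mod_rewrite enabled; on nginx an equivalent check against $http_referer would do the same job):

      RewriteEngine On
      RewriteCond %{HTTP_REFERER} ^http://108\.166\.97\.22/ [NC]
      RewriteRule .* - [F,L]

    Returning 403 this way is much cheaper than serving the page, though for traffic at this volume dropping the sources earlier (firewall or upstream provider) would reduce load further.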

    Read the article

  • How to configure DNS Server on Fedora

    - by user863873
    I want to learn how to configure my home PC into a web server with its own domain and host. My IP is 109.99.141.133, and it currently points to a phpinfo page hosted on my home server. My registered domain is anunta-anunturi.ro. From the tutorials I've read, I have to configure /etc/named.conf plus a zone file for the new zone that I create. So far my /etc/named.conf looks like this:

      //
      // named.conf
      //
      // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
      // server as a caching only nameserver (as a localhost DNS resolver only).
      //
      // See /usr/share/doc/bind*/sample/ for example named configuration files.
      //
      options {
          listen-on port 53 { 127.0.0.1; };
          listen-on-v6 port 53 { ::1; };
          directory "/var/named";
          dump-file "/var/named/data/cache_dump.db";
          statistics-file "/var/named/data/named_stats.txt";
          memstatistics-file "/var/named/data/named_mem_stats.txt";
          allow-query { localhost; };
          recursion yes;
          dnssec-enable yes;
          dnssec-validation yes;
          dnssec-lookaside auto;
          /* Path to ISC DLV key */
          bindkeys-file "/etc/named.iscdlv.key";
          managed-keys-directory "/var/named/dynamic";
      };
      logging {
          channel default_debug {
              file "data/named.run";
              severity dynamic;
          };
      };
      zone "anunta-anunturi.ro" IN {
          type master;
          file "/etc/anunta-anunturi.db";
      };
      zone "." IN {
          type hint;
          file "named.ca";
      };
      include "/etc/named.rfc1912.zones";
      include "/etc/named.root.key";

    My /etc/anunta-anunturi.db file looks like this (I'm not sure I got it right):

      $TTL 86400
      anunta-anunturi.ro. IN SOA serveur.anunta-anunturi.ro. root.serveur.anunta-anunturi.ro. (
                          1997022700 ; Serial
                          28800      ; Refresh
                          14400      ; Retry
                          3600000    ; Expire
                          86400 )    ; Minimum
                          IN NS serveur.anunta-anunturi.ro.
                          IN MX 10 mail.anunta-anunturi.ro.
      serveur.anunta-anunturi.ro. IN A 192.168.1.37
      www.anunta-anunturi.ro.     IN A 192.168.1.37
      mail.anunta-anunturi.ro.    IN A 192.168.1.37

    Extra info: at home I receive internet from my ISP through a router. My home PC and server receive their IPs automatically from the router when I start/restart. In my local home network, the server receives the IP 192.168.1.37 from the router. When I enter 109.99.141.133 in my browser, it hits the router, which forwards port 80 to the local IP 192.168.1.37 (my home server).

    Questions: Are my two files correct? What/where is the nameserver entry that I need to copy/paste at my top-level domain registrar (rotld.ro, where I registered the domain)?
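    One thing that stands out (a hedged observation, not from the original question): with listen-on limited to 127.0.0.1 and allow-query { localhost; }, this named will never answer the outside world. A public-facing authoritative server would need something along these lines in the options block, plus UDP/TCP port 53 forwarded on the router the same way port 80 already is:

      options {
          listen-on port 53 { any; };   // accept queries on all interfaces
          allow-query { any; };         // let external resolvers query the zone
          recursion no;                 // but don't recurse for strangers
          ...
      };

    At the registrar you would then register serveur.anunta-anunturi.ro as the nameserver with 109.99.141.133 as its glue address, and the A records in the zone would need to carry the public IP 109.99.141.133 rather than the internal 192.168.1.37.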

    Read the article

  • How to make bridge networking with KVM work in Fedora 19

    - by netllama
    I'm attempting to set up several virtual machines on a Fedora 19 host system, with the traditional bridge network devices (br0, br1, etc.). I've done this many times before with older versions of Fedora (16, 14, etc.), and it just worked. However, for reasons that I cannot figure out, the bridge doesn't seem to be working in Fedora 19. While I can successfully connect to the outside world (local network + internet) from inside a VM, nothing can communicate with the VM from outside (local network). I'm referring to something as trivial as pinging. From inside the VM, I can ping anything successfully (0% packet loss). However, from outside the VM (on the host, or any other system on the same network), I see 100% packet loss when pinging the IP address of the VM. My first question is simply: does anyone else have this working successfully in F19? And if so, what steps did you need to follow? I'm not using NetworkManager at all, it's all the network service. There are no firewalls involved anywhere (the iptables and firewalld services are currently disabled). Here's the current host configuration:

      # brctl show
      bridge name     bridge id               STP enabled     interfaces
      br0             8000.38eaa792efe5       no              em2
                                                              vnet1
      br1             8000.38eaa792efe6       no              em3
      br2             8000.38eaa792efe7       no              em4
                                                              vnet0
      virbr0          8000.525400db3ebf       yes             virbr0-nic

      # more /etc/sysconfig/network-scripts/ifcfg-em2
      TYPE=Ethernet
      BRIDGE="br0"
      NAME=em2
      DEVICE="em2"
      UUID=aeaa839e-c89c-4d6e-9daa-79b6a1b919bd
      ONBOOT=yes
      HWADDR=38:EA:A7:92:EF:E5
      NM_CONTROLLED="no"

      # more /etc/sysconfig/network-scripts/ifcfg-br0
      TYPE=Bridge
      NM_CONTROLLED="no"
      BOOTPROTO=dhcp
      NAME=br0
      DEVICE="br0"
      ONBOOT=yes

      # ifconfig em2; ifconfig br0
      em2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
          inet6 fe80::3aea:a7ff:fe92:efe5 prefixlen 64 scopeid 0x20<link>
          ether 38:ea:a7:92:ef:e5 txqueuelen 1000 (Ethernet)
          RX packets 100093 bytes 52354831 (49.9 MiB)
          RX errors 0 dropped 0 overruns 0 frame 0
          TX packets 25321 bytes 15791341 (15.0 MiB)
          TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
          device memory 0xf7d00000-f7e00000
      br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
          inet 10.31.99.226 netmask 255.255.252.0 broadcast 10.31.99.255
          inet6 fe80::3aea:a7ff:fe92:efe5 prefixlen 64 scopeid 0x20<link>
          ether 38:ea:a7:92:ef:e5 txqueuelen 0 (Ethernet)
          RX packets 19619 bytes 1963328 (1.8 MiB)
          RX errors 0 dropped 0 overruns 0 frame 0
          TX packets 11 bytes 1074 (1.0 KiB)
          TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

    Relevant section from /etc/libvirt/qemu/foo.xml (one of the VMs with this problem):

      <interface type='bridge'>
        <mac address='52:54:00:26:22:9d'/>
        <source bridge='br0'/>
        <model type='virtio'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
      </interface>

      # ps -ef | grep qemu
      qemu 1491 1 82 13:25 ? 00:42:09 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name cuda-linux64-build5 -S -machine pc-0.13,accel=kvm,usb=off -cpu SandyBridge,+pdpe1gb,+osxsave,+dca,+pcid,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 16384 -smp 6,sockets=6,cores=1,threads=1 -uuid 6e930234-bdfd-044d-2787-22d4bbbe30b1 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/cuda-linux64-build5.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/cuda-linux64-build5.img,if=none,id=drive-virtio-disk0,format=raw,cache=writeback -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:26:22:9d,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:1 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

    I can provide additional information if requested. Thanks!
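    One way to narrow this down (a debugging sketch, not from the original question; it assumes vnet1 is the tap of the affected VM): ping the VM from another machine and watch whether the traffic actually reaches the bridge and the tap device.

      tcpdump -n -i br0 icmp or arp      # does the request reach the bridge at all?
      tcpdump -n -i vnet1 icmp or arp    # is it forwarded to the VM's tap?
      brctl showstp br0                  # ports should be in "forwarding" state
      sysctl net.bridge.bridge-nf-call-iptables  # 1 means bridged frames pass through iptables

    If frames appear on br0 but never on vnet1, the bridge (or netfilter's bridge hooks) is eating them; if they reach vnet1 too, the problem is inside the guest.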

    Read the article

  • How to understand the memory usage and load average in linux server

    - by Tim
    Hi, I am using a Linux server which has 128GB of memory and 24 cores. I use top to see how much of it is used; its output is pasted at the end of this post. Here are two questions:

    (1) Each running process occupies a very small percentage of memory (%MEM is no more than 0.2%, and mostly just 0.0%), so why is the total memory almost entirely used, as in the fourth line of the output ("Mem: 130766620k total, 130161072k used, 605548k free, 919300k buffers")? The sum of the memory percentages over all processes seems unlikely to reach almost 100%, doesn't it?

    (2) How should I understand the load average on the first line ("load average: 14.04, 14.02, 14.00")?

    Thanks and regards!

    Edit: Thanks! I would also really like to hear some rough numbers, based on the used percentage of memory, for judging whether a server is heavily loaded, since I once became the one who crammed the server full without understanding the current load. Is swap regarded as almost the same as memory? For example, when memory and swap are almost the same size, if memory is almost running out but swap is still largely free, may I treat it as if the used percentage of memory + swap is still not high, and start other new processes? How would you weigh CPU and memory (or memory + swap) usage together? Do you become worried if either of them gets too high, or only if both do?

    Output of top:

      $ top
      top - 12:45:33 up 19 days, 23:11, 18 users, load average: 14.04, 14.02, 14.00
      Tasks: 484 total, 12 running, 472 sleeping, 0 stopped, 0 zombie
      Cpu(s): 36.7%us, 19.7%sy, 0.0%ni, 43.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 130766620k total, 130161072k used, 605548k free, 919300k buffers
      Swap: 63111312k total, 500556k used, 62610756k free, 124437752k cached

      PID   USER    PR NI VIRT  RES  SHR  S %CPU %MEM TIME+    COMMAND
      6529  sanchez 18 -2 1075m 219m 13m  S 100  0.2  13760:23 MATLAB
      13210 timothy 18 -2 48336 37m  1216 R 100  0.0  3:56.75  absurdity
      13888 timothy 18 -2 48336 37m  1204 R 100  0.0  2:04.89  absurdity
      14542 timothy 18 -2 48336 37m  1196 R 100  0.0  1:08.34  absurdity
      14544 timothy 18 -2 2888  2076 400  R 100  0.0  1:06.14  gatherData
      6183  sanchez 18 -2 1133m 195m 13m  S 100  0.2  13676:04 MATLAB
      6795  sanchez 18 -2 1079m 210m 13m  S 100  0.2  13734:26 MATLAB
      10178 timothy 18 -2 48336 37m  1204 R 100  0.0  11:33.93 absurdity
      12438 timothy 18 -2 48336 37m  1216 R 100  0.0  5:38.17  absurdity
      13661 timothy 18 -2 48336 37m  1216 R 100  0.0  2:44.13  absurdity
      14098 timothy 18 -2 48336 37m  1204 R 100  0.0  1:58.31  absurdity
      14335 timothy 18 -2 48336 37m  1196 R 100  0.0  1:08.93  absurdity
      14765 timothy 18 -2 48336 37m  1196 R 99   0.0  0:32.57  absurdity
      13445 timothy 18 -2 48336 37m  1216 R 99   0.0  3:01.37  absurdity
      28990 root    20 0  0     0    0    S 2    0.0  65:50.21 pdflush
      12141 tim     18 -2 19380 1660 1024 R 1    0.0  0:04.04  top
      1240  root    15 -5 0     0    0    S 0    0.0  16:07.11 kjournald
      9019  root    20 0  296m  4460 2616 S 0    0.0  82:19.51 kdm_greet
      1     root    20 0  4028  728  592  S 0    0.0  0:03.11  init
      2     root    15 -5 0     0    0    S 0    0.0  0:00.00  kthreadd
      3     root    RT -5 0     0    0    S 0    0.0  0:01.01  migration/0
      4     root    15 -5 0     0    0    S 0    0.0  0:08.13  ksoftirqd/0
      5     root    RT -5 0     0    0    S 0    0.0  0:00.00  watchdog/0
      6     root    RT -5 0     0    0    S 0    0.0  17:27.31 migration/1
      7     root    15 -5 0     0    0    S 0    0.0  0:01.21  ksoftirqd/1
      8     root    RT -5 0     0    0    S 0    0.0  0:00.00  watchdog/1
      9     root    RT -5 0     0    0    S 0    0.0  10:02.56 migration/2
      10    root    15 -5 0     0    0    S 0    0.0  0:00.34  ksoftirqd/2
      11    root    RT -5 0     0    0    S 0    0.0  0:00.00  watchdog/2
      12    root    RT -5 0     0    0    S 0    0.0  4:29.53  migration/3
      13    root    15 -5 0     0    0    S 0    0.0  0:00.34  ksoftirqd/3
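    For question (1), the arithmetic below is the usual explanation (note that in top's layout the cached figure sits on the Swap: line): most of the "used" memory is the kernel's page cache, which is reclaimed on demand and so behaves like free memory.

      used               130161072 kB
      - buffers             919300 kB
      - cached           124437752 kB
      = process memory  ~  4804020 kB   (about 4.6 GB of 128 GB)

    free(1) reports the same figure on its "-/+ buffers/cache" line. For question (2), a load average of 14 on a 24-core machine means that on average about 14 tasks were runnable (or in uninterruptible I/O wait), so this box is still comfortably below CPU saturation.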

    Read the article

  • fail2ban custom action to permanent ban IPs from China

    - by John Magnolia
    When an IP address gets banned, how can I check whether the banned IP address is from China and, if it is, add it to a permanent ban list? I have found this nice guide which writes banned IPs to a file. Reason: I am getting a lot of brute-force attacks from China daily. Thankfully fail2ban is helping restrict this, although the attacks appear to be getting worse and the attackers just keep changing their IP addresses. Even better would be a maintained database of known hacker IP addresses.

    Example 1:

      Hi,
      The IP 60.169.78.77 has just been banned by Fail2Ban after 4 attempts against vsftpd.
      Here are more information about 60.169.78.77:

      % [whois.apnic.net node-7]
      % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html
      inetnum: 60.166.0.0 - 60.175.255.255
      netname: CHINANET-AH
      descr: CHINANET anhui province network
      descr: China Telecom
      descr: A12,Xin-Jie-Kou-Wai Street
      descr: Beijing 100088
      country: CN
      admin-c: CH93-AP
      tech-c: JW89-AP
      mnt-by: APNIC-HM
      mnt-routes: MAINT-CHINANET-AH
      mnt-lower: MAINT-CHINANET-AH
      status: ALLOCATED PORTABLE
      changed: [email protected] 20040721
      source: APNIC
      person: Chinanet Hostmaster
      nic-hdl: CH93-AP
      e-mail: [email protected]
      address: No.31 ,jingrong street,beijing
      address: 100032
      phone: +86-10-58501724
      fax-no: +86-10-58501724
      country: CN
      changed: [email protected] 20070416
      mnt-by: MAINT-CHINANET
      source: APNIC
      person: Jinneng Wang
      address: 17/F, Postal Building No.120 Changjiang
      address: Middle Road, Hefei, Anhui, China
      country: CN
      phone: +86-551-2659073
      fax-no: +86-551-2659287
      e-mail: [email protected]
      nic-hdl: JW89-AP
      mnt-by: MAINT-NEW
      changed: [email protected] 19990818
      source: APNIC

      Regards,
      Fail2Ban

    Example 2: the same notification for the IP 60.169.78.81; the whois output is identical to Example 1 (same CHINANET-AH netblock, 60.166.0.0 - 60.175.255.255).

    Example 3:

      Hi,
      The IP 222.133.244.99 has just been banned by Fail2Ban after 4 attempts against vsftpd.
      Here are more information about 222.133.244.99:

      % [whois.apnic.net node-6]
      % Whois data copyright terms http://www.apnic.net/db/dbcopyright.html
      inetnum: 222.133.244.96 - 222.133.244.127
      netname: LCZFFHQ
      country: CN
      descr: liaochenggovermentfanghuoqiang
      admin-c: DS95-AP
      tech-c: DS95-AP
      status: ASSIGNED NON-PORTABLE
      changed: [email protected] 20060122
      mnt-by: MAINT-CNCGROUP-SD
      source: APNIC
      route: 222.132.0.0/14
      descr: CNC Group CHINA169 Shandong Province Network
      country: CN
      origin: AS4837
      mnt-by: MAINT-CNCGROUP-RR
      changed: [email protected] 20060118
      source: APNIC
      person: Data Communication Bureau Shandong
      nic-hdl: DS95-AP
      e-mail: [email protected]
      address: No.77 Jingsan Road,Jinan,Shandong,P.R.China
      phone: +86-531-6052611
      fax-no: +86-531-6052414
      country: CN
      changed: [email protected] 20050330
      mnt-by: MAINT-CNCGROUP-SD
      source: APNIC

      Regards,
      Fail2Ban
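    A sketch of one way to do this (it assumes the geoiplookup utility from the GeoIP package is installed; the script path and blacklist file name are made-up examples): have a custom fail2ban action hand each banned IP to a small script that checks the country and, for CN, records the address permanently.

      #!/bin/sh
      # /usr/local/bin/permban-cn.sh <ip>  - called from a custom fail2ban action
      IP="$1"
      if geoiplookup "$IP" | grep -q ': CN, China'; then
          echo "$IP" >> /etc/fail2ban/ip.blacklist   # permanent list, re-applied at startup
          iptables -I INPUT -s "$IP" -j DROP         # ban immediately
      fi

    The guide linked above covers the other half of the job: re-applying the blacklist from the action's actionstart so the bans survive fail2ban restarts.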

    Read the article

  • Server currently under DDOS, not sure what to do

    - by Volex
    My web server is currently under what I believe is a DDoS attack. The messages log is full of entries like these:

      May 13 15:51:19 kernel: nf_conntrack: table full, dropping packet.
      May 13 15:51:19 last message repeated 9 times
      May 13 15:51:24 kernel: __ratelimit: 78 callbacks suppressed
      May 13 15:51:24 kernel: nf_conntrack: table full, dropping packet.
      May 13 15:52:06 kernel: possible SYN flooding on port 80. Sending cookies.

    and netstat shows a huge number of connections like the following:

      tcp 0 0 my.host.com:http bb176da0.virtua.com.br:4998 SYN_RECV
      tcp 0 0 my.host.com:http 187.0.43.109:2694 SYN_RECV
      tcp 0 0 my.host.com:http 109.229.4.145:1722 SYN_RECV
      tcp 0 0 my.host.com:http 189-84-163-244.sodobr:63267 SYN_RECV
      tcp 0 0 my.host.com:http bd66839d.virtua.com.br:3469 SYN_RECV
      tcp 0 0 my.host.com:http 69.101.56.190.dsl.int:52552 SYN_RECV
      tcp 0 0 my.host.com:http pc-62-230-47-190.cm.vt:2262 SYN_RECV
      tcp 0 0 my.host.com:http 189-84-163-244.sodobr:63418 SYN_RECV
      tcp 0 0 my.host.com:http pc-62-230-47-190.cm.vt:1741 SYN_RECV
      tcp 0 0 my.host.com:http zaq3d739320.zaq.ne.jp:2141 SYN_RECV
      tcp 0 0 my.host.com:http netacc-gpn-4-80-73.po:52676 SYN_RECV

    tcpdump shows:

      17:11:08.564510 IP 187-4-1xx-4.xxx.ipd.brasiltelecom.net.br.54821 > my.host.com.http: S 999692166:999692166(0) win 65535 <mss 1452,nop,nop,sackOK>
      17:11:08.566347 IP 114-44-171-67.dynamic.hinet.net.1129 > my.host.com.http: S 605369055:605369055(0) win 65535 <mss 1440,nop,nop,sackOK>
      17:11:08.570210 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5590 > my.host.com.http: S 2813379182:2813379182(0) win 16384 <mss 1460,nop,nop,sackOK>
      17:11:08.571290 IP dsl-189-143-30-99-dyn.prod-infinitum.com.mx.1615 > my.host.com.http: S 281542700:281542700(0) win 65535 <mss 1452,nop,nop,sackOK>
      17:11:08.583847 IP dsl-189-143-30-99-dyn.prod-infinitum.com.mx.1617 > my.host.com.http: S 499413892:499413892(0) win 65535 <mss 1452,nop,nop,sackOK>
      17:11:08.588680 IP 170.51.229.112.2569 > my.host.com.http: S 2195084898:2195084898(0) win 65535 <mss 1460,nop,nop,sackOK>
      17:11:08.588773 IP gw2-1.211.ru.3180 > my.host.com.http: F 2315901786:2315901786(0) ack 2620913033 win 64240
      17:11:08.590656 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5614 > my.host.com.http: S 2813715032:2813715032(0) win 16384 <mss 1460,nop,nop,sackOK>
      17:11:08.591212 IP 203.82.82.54.15848 > my.host.com.http: S 4070423507:4070423507(0) win 16384 <mss 1400,nop,nop,sackOK>
      17:11:08.591254 IP 203.82.82.54.2545 > my.host.com.http: S 1790910784:1790910784(0) win 16384 <mss 1400,nop,nop,sackOK>
      17:11:08.591289 IP 203.82.82.54.28306 > my.host.com.http: S 578615626:578615626(0) win 16384 <mss 1400,nop,nop,sackOK>
      17:11:08.591591 IP gw2-1.211.ru.3191 > my.host.com.http: F 2316435991:2316435991(0) ack 2634205972 win 64240
      17:11:08.591790 IP 200-101-13-130.pvoce300.ipd.brasiltelecom.net.br.5593 > my.host.com.http: S 2813659017:2813659017(0) win 16384 <mss 1460,nop,nop,sackOK>
      17:11:08.593691 IP gw2-1.211.ru.3203 > my.host.com.http: F 2316834420:2316834420(0) ack 2629074987 win 64240

    I'm not sure what I can do to limit or mitigate this; currently no web pages are being served. Any help gratefully appreciated.
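    The kernel is already sending SYN cookies; the more immediate problem is the "nf_conntrack: table full" condition, since dropped conntrack entries take legitimate clients down too. Some common first-aid steps (a sketch; the exact sysctl names vary slightly between kernel versions, and the values are illustrative):

      sysctl -w net.ipv4.tcp_syncookies=1              # make sure SYN cookies stay on
      sysctl -w net.netfilter.nf_conntrack_max=262144  # enlarge the conntrack table
      # or stop tracking web traffic entirely, so the flood cannot fill the table:
      iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
      iptables -t raw -A OUTPUT -p tcp --sport 80 -j NOTRACK

    Rate-limiting new connections per source (the iptables connlimit or hashlimit modules) and asking the upstream provider to filter are the usual next steps for a flood of this size.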

    Read the article

  • CPU Utilization LAMP stack

    - by Max
    We've got an EC2 m2.4xlarge running Magento (CentOS 5.6, httpd 2.2, PHP 5.2.17 with eAccelerator 0.9.5.3, MySQL 5.1.52). Right now we're getting a large traffic spike, and our top looks like this:

      top - 09:41:29 up 31 days, 1:12, 1 user, load average: 120.01, 129.03, 113.23
      Tasks: 1190 total, 18 running, 1172 sleeping, 0 stopped, 0 zombie
      Cpu(s): 97.3%us, 1.8%sy, 0.0%ni, 0.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.4%st
      Mem: 71687720k total, 36898928k used, 34788792k free, 49692k buffers
      Swap: 880737784k total, 0k used, 880737784k free, 1586524k cached

      PID   USER   PR NI VIRT  RES  SHR  S %CPU  %MEM TIME+    COMMAND
      2433  mysql  15 0  23.6g 4.5g 7112 S 564.7 6.6  33607:34 mysqld
      24046 apache 16 0  411m  65m  28m  S 26.4  0.1  0:09.05  httpd
      24360 apache 15 0  410m  60m  25m  S 26.4  0.1  0:03.65  httpd
      24993 apache 16 0  410m  57m  21m  S 26.1  0.1  0:01.41  httpd
      24838 apache 16 0  428m  74m  20m  S 24.8  0.1  0:02.37  httpd
      24359 apache 16 0  411m  62m  26m  R 22.3  0.1  0:08.12  httpd
      23850 apache 15 0  411m  64m  27m  S 16.8  0.1  0:14.54  httpd
      25229 apache 16 0  404m  46m  17m  R 10.2  0.1  0:00.71  httpd
      14594 apache 15 0  404m  63m  34m  S 8.4   0.1  1:10.26  httpd
      24955 apache 16 0  404m  50m  21m  R 8.4   0.1  0:01.66  httpd
      24313 apache 16 0  399m  46m  22m  R 8.1   0.1  0:02.30  httpd
      25119 apache 16 0  411m  59m  23m  S 6.8   0.1  0:01.45  httpd

    Questions: Would giving mysqld more memory help it cache queries and react faster? If so, how? Other than splitting MySQL and PHP onto separate servers (which we're about to do), is there anything else we could or should be doing? Thanks!

    UPDATE: Here's our my.cnf along with the output of mysqltuner. It looks like a cache problem. Thanks again!

      # cat /etc/my.cnf
      [client]
      port = ****
      socket = /var/lib/mysql/mysql.sock

      [mysqld]
      datadir=/mnt/persistent/mysql
      port=****
      socket=/var/lib/mysql/mysql.sock
      key_buffer = 512M
      max_allowed_packet = 64M
      table_cache = 1024
      sort_buffer_size = 8M
      read_buffer_size = 4M
      read_rnd_buffer_size = 2M
      myisam_sort_buffer_size = 64M
      thread_cache_size = 128M
      tmp_table_size = 128M
      join_buffer_size = 1M
      query_cache_limit = 2M
      query_cache_size= 64M
      query_cache_type = 1
      max_connections = 1000
      thread_stack = 128K
      thread_concurrency = 48
      log-bin=mysql-bin
      server-id = 1
      wait_timeout = 300
      innodb_data_home_dir = /mnt/persistent/mysql/
      innodb_data_file_path = ibdata1:10M:autoextend
      innodb_buffer_pool_size = 20G
      innodb_additional_mem_pool_size = 20M
      innodb_log_file_size = 64M
      innodb_log_buffer_size = 8M
      innodb_flush_log_at_trx_commit = 1
      innodb_lock_wait_timeout = 50
      innodb_thread_concurrency = 48
      ft_min_word_len=3

      [myisamchk]
      ft_min_word_len=3
      key_buffer = 128M
      sort_buffer_size = 128M
      read_buffer = 2M
      write_buffer = 2M

      # ./mysqltuner.pl
      >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
      >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
      >> Run with '--help' for additional options and output filtering

      -------- General Statistics --------------------------------------------------
      [--] Skipped version check for MySQLTuner script
      [OK] Currently running supported MySQL version 5.1.52-log
      [OK] Operating on 64-bit architecture

      -------- Storage Engine Statistics -------------------------------------------
      [--] Status: +Archive -BDB +Federated +InnoDB -ISAM -NDBCluster
      [--] Data in MyISAM tables: 2G (Tables: 26)
      [--] Data in InnoDB tables: 749M (Tables: 250)
      [!!] Total fragmented tables: 262

      -------- Security Recommendations -------------------------------------------

      -------- Performance Metrics -------------------------------------------------
      [--] Up for: 31d 2h 30m 38s (680M q [253.371 qps], 2M conn, TX: 4825B, RX: 236B)
      [--] Reads / Writes: 89% / 11%
      [--] Total buffers: 20.6G global + 15.1M per thread (1000 max threads)
      [OK] Maximum possible memory usage: 35.4G (51% of installed RAM)
      [OK] Slow queries: 0% (35K/680M)
      [OK] Highest usage of available connections: 53% (537/1000)
      [OK] Key buffer size / total MyISAM indexes: 512.0M/457.2M
      [OK] Key buffer hit rate: 100.0% (9B cached / 264K reads)
      [OK] Query cache efficiency: 42.3% (260M cached / 615M selects)
      [!!] Query cache prunes per day: 4384652
      [OK] Sorts requiring temporary tables: 0% (1K temp sorts / 38M sorts)
      [!!] Joins performed without indexes: 100404
      [OK] Temporary tables created on disk: 17% (7M on disk / 45M total)
      [OK] Thread cache hit rate: 99% (537 created / 2M connections)
      [!!] Table cache hit rate: 0% (1K open / 946K opened)
      [OK] Open file limit used: 9% (453/5K)
      [OK] Table locks acquired immediately: 99% (758M immediate / 758M locks)
      [OK] InnoDB data size / buffer pool: 749.3M/20.0G

      -------- Recommendations -----------------------------------------------------
      General recommendations:
          Run OPTIMIZE TABLE to defragment tables for better performance
          Enable the slow query log to troubleshoot bad queries
          Adjust your join queries to always utilize indexes
          Increase table_cache gradually to avoid file descriptor limits
      Variables to adjust:
          query_cache_size (> 64M)
          join_buffer_size (> 1.0M, or always use indexes with joins)
          table_cache (> 1024)
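    Following mysqltuner's own recommendations, the cache-related settings are the first candidates to raise in my.cnf (a sketch; the values are illustrative guesses, best changed one at a time while watching the prune and table-cache counters):

      query_cache_size = 256M    # prunes are running at ~4.4M/day on 64M
      table_cache      = 4096    # the table cache hit rate is at 0%
      join_buffer_size = 4M      # or better: index the 100k index-less joins

    The bigger win, though, is usually on the query side: enable the slow query log and add the missing indexes so the "joins performed without indexes" counter stops growing, rather than buying them back with buffer memory.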

    Read the article

  • Where's my memory?! Nginx + PHP-FPM front end webserver slows to a crawl...

    - by incredimike
    I'm not sure if I have a problem with a memory leak (as my hosting company suggests), or if we both need to read http://linuxatemyram.com. Maybe you clever people can help us out? This is a front-end webserver VM running essentially only nginx & php-fpm on RHEL 5.5. This server is powering Magento, a PHP eCommerce thinggy. The server is running in a shared environment, but we're changing that soon. Anyway, after a reboot the server runs just fine, but within a day it will grind itself into nothingness. Pages will take literally 2 minutes to load, CPU spikes like crazy, etc. The console is even sluggish when I SSH in. It's like my whole server is being brought to its knees. I've also been monitoring the DB server via top and tcpdumping incoming traffic. The DB stays idle for a good portion of that "slow" load time. When I start seeing queries coming from the front-end server, the page loads soon afterward. Here are some stats after logging in during a slow-down, after restarting php-fpm:

      [mike@front01 ~]$ free -m
                   total       used       free     shared    buffers     cached
      Mem:          5963       5217        745          0        192        314
      -/+ buffers/cache:       4711       1252
      Swap:         4047          4       4042

      [mike@front01 ~]$ top
      top - 11:38:55 up 2 days, 1:01, 3 users, load average: 0.06, 0.17, 0.21
      Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
      Cpu0: 0.0%us, 0.3%sy, 0.0%ni, 99.3%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu1: 0.3%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu2: 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Cpu3: 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
      Mem: 6106800k total, 5361288k used, 745512k free, 199960k buffers
      Swap: 4144728k total, 4976k used, 4139752k free, 328480k cached

      PID   USER   PR NI  VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
      31806 apache 15 0   601m  120m 37m  S 0.0  2.0  0:22.23 php-fpm
      31805 apache 15 0   549m  66m  31m  S 0.0  1.1  0:14.54 php-fpm
      31809 apache 16 0   547m  65m  32m  S 0.0  1.1  0:12.84 php-fpm
      32285 apache 15 0   546m  63m  33m  S 0.0  1.1  0:09.22 php-fpm
      32373 apache 15 0   546m  62m  32m  S 0.0  1.1  0:09.66 php-fpm
      31808 apache 16 0   543m  60m  35m  S 0.0  1.0  0:18.93 php-fpm
      31807 apache 16 0   533m  49m  30m  S 0.0  0.8  0:08.93 php-fpm
      32092 apache 15 0   535m  48m  27m  S 0.0  0.8  0:06.67 php-fpm
      4392  root   18 0   194m  10m  7184 S 0.0  0.2  0:06.96 cvd
      4064  root   15 0   154m  8304 4220 S 0.0  0.1  3:55.57 snmpd
      4394  root   15 0   119m  5660 2944 S 0.0  0.1  0:02.84 EvMgrC
      31804 root   15 0   519m  5180 932  S 0.0  0.1  0:00.46 php-fpm
      4138  ntp    15 0   23396 5032 3904 S 0.0  0.1  0:02.38 ntpd
      643   nginx  15 0   95276 4408 1524 S 0.0  0.1  0:01.15 nginx
      5131  root   16 0   90128 3340 2600 S 0.0  0.1  0:01.41 sshd
      28467 root   15 0   90128 3340 2600 S 0.0  0.1  0:00.35 sshd
      32602 root   16 0   90128 3332 2600 S 0.0  0.1  0:00.36 sshd
      1614  root   16 0   90128 3308 2588 S 0.0  0.1  0:00.02 sshd
      2817  root   5  -10 7216  3140 1724 S 0.0  0.1  0:03.80 iscsid
      4161  root   15 0   66948 2340 800  S 0.0  0.0  0:10.35 sendmail
      1617  nicole 17 0   53876 2000 1516 S 0.0  0.0  0:00.02 sftp-server
      ...

    Is there anything else I should be looking at, or any more information that might be useful? I'm just a developer, but the slowdowns on this system worry me and make it hard to do my work. Help me out, ServerFault!
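    One observation from the pasted numbers: the "-/+ buffers/cache" line shows 4.7 GB genuinely used by processes out of 6 GB, so this is not just linuxatemyram's page-cache illusion. With the php-fpm workers sitting at 50-120 MB resident each, capping worker lifetime is a cheap experiment (a sketch; the directives are standard php-fpm pool settings, the values are guesses to be sized so max_children times average worker RSS fits in RAM):

      pm = dynamic
      pm.max_children = 20
      pm.max_requests = 500    ; recycle workers so per-process leaks cannot accumulate

    If memory still climbs over the day, graphing per-process RSS would show whether php-fpm, nginx, or something else entirely is the leaker.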

    Read the article

  • Why does load increase and SSD IOPS increase, but CPU iowait decrease?

    - by mq44944
    There is a strange thing happening on my server, which runs MySQL. The QPS is more than 4000 but the TPS is less than 20. The server load is more than 80 and CPU usr is more than 86%, yet iowait is less than 8%. The disks do more than 16000 IOPS and their utilization is more than 99%. When the QPS decreases, the load decreases and the CPU iowait increases. I can't explain this!

      root@mypc # dmidecode | grep "Product Name"
      Product Name: PowerEdge R510
      Product Name: 084YMW

      root@mypc # megacli -PDList -aALL | grep "Inquiry Data"
      Inquiry Data: SEAGATE ST3600057SS ES656SL316PT
      Inquiry Data: SEAGATE ST3600057SS ES656SL30THV
      Inquiry Data: ATA INTEL SSDSA2CW300362CVPR201602A6300EGN
      Inquiry Data: ATA INTEL SSDSA2CW300362CVPR2044037K300EGN
      Inquiry Data: ATA INTEL SSDSA2CW300362CVPR204402PX300EGN
      Inquiry Data: ATA INTEL SSDSA2CW300362CVPR204403WN300EGN
      Inquiry Data: ATA INTEL SSDSA2CW300362CVPR202000HU300EGN
      Inquiry Data: ATA INTEL SSDSA2CW300362CVPR202001E7300EGN
      Inquiry Data: ATA INTEL SSDSA2CW300362CVPR204402WE300EGN
      Inquiry Data: ATA INTEL SSDSA2CW300362CVPR204404E5300EGN
      Inquiry Data: ATA INTEL SSDSA2CW300362CVPR204401QF300EGN
      Inquiry Data: ATA INTEL SSDSA2CW300362CVPR20450001300EGN

    The MySQL data files lie on the SSD disks, which are organized as RAID 10.

      root@mypc # megacli -LDInfo -L1 -a0
      Adapter 0 -- Virtual Drive Information:
      Virtual Disk: 1 (Target Id: 1)
      Name:
      RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
      Size: 1427840MB
      State: Optimal
      Stripe Size: 64kB
      Number Of Drives: 2
      Span Depth: 5
      Default Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
      Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
      Access Policy: Read/Write
      Disk Cache Policy: Disk's Default
      Exit Code: 0x00

    Monitoring output (one row per second):

      -------- -----load-avg---- ---cpu-usage--- ---swap--- -------------------------io-usage----------------------- -QPS- -TPS- -Hit%-
      time | 1m 5m 15m |usr sys idl iow| si so| r/s w/s rkB/s wkB/s queue await svctm %util| ins upd del sel iud| lor hit|
      09:05:29|79.80 64.49 42.00| 82 7 6 5| 0 0|16421.1 10.6 262705.9 85.2 8.3 0.5 0.1 99.5| 0 0 0 3968 0| 495482 96.58|
      09:05:30|79.80 64.49 42.00| 79 7 8 6| 0 0|15907.4 230.6 254409.7 6357.5 8.4 0.5 0.1 98.5| 0 0 0 4195 0| 496434 96.68|
      09:05:31|81.34 65.07 42.31| 81 7 7 5| 0 0|16198.7 8.6 259029.2 99.8 8.1 0.5 0.1 99.3| 0 0 0 4220 0| 508983 96.70|
      09:05:32|81.34 65.07 42.31| 82 7 5 5| 0 0|16746.6 8.7 267853.3 92.4 8.5 0.5 0.1 99.4| 0 0 0 4084 0| 503834 96.54|
      09:05:33|81.34 65.07 42.31| 81 7 6 5| 0 0|16498.7 9.6 263856.8 92.3 8.0 0.5 0.1 99.3| 0 0 0 4030 0| 507051 96.60|
      09:05:34|81.34 65.07 42.31| 80 8 7 6| 0 0|16328.4 11.5 261101.6 95.8 8.1 0.5 0.1 98.3| 0 0 0 4119 0| 504409 96.63|
      09:05:35|81.31 65.33 42.52| 82 7 6 5| 0 0|16374.0 8.7 261921.9 92.5 8.1 0.5 0.1 99.7| 0 0 0 4127 0| 507279 96.66|
      09:05:36|81.31 65.33 42.52| 81 8 6 5| 0 0|16496.2 8.6 263832.0 84.5 8.5 0.5 0.1 99.2| 0 0 0 4100 0| 505054 96.59|
      09:05:37|81.31 65.33 42.52| 82 8 6 4| 0 0|16239.4 9.6 259768.8 84.3 8.0 0.5 0.1 99.1| 0 0 0 4273 0| 510621 96.72|
      09:05:38|81.31 65.33 42.52| 81 7 6 5| 0 0|16349.6 8.7 261439.2 81.4 8.2 0.5 0.1 98.9| 0 0 0 4171 0| 510145 96.67|
      09:05:39|81.31 65.33 42.52| 82 7 6 5| 0 0|16116.8 8.7 257667.6 96.5 8.0 0.5 0.1 99.1| 0 0 0 4348 0| 513093 96.74|
      09:05:40|79.60 65.24 42.61| 79 7 7 7| 0 0|16154.2 242.9 258390.4 6388.4 8.5 0.5 0.1 99.0| 0 0 0 4033 0| 507244 96.70|
      09:05:41|79.60 65.24 42.61| 79 7 8 6| 0 0|16583.1 21.2 265129.6 173.5 8.2 0.5 0.1 99.1| 0 0 0 3995 0| 501474 96.57|
      09:05:42|79.60 65.24 42.61| 81 8 6 5| 0 0|16281.0 9.7 260372.2 69.5 8.3 0.5 0.1 98.7| 0 0 0 4221 0| 509322 96.70|
      09:05:43|79.60 65.24 42.61| 80 7 7 6| 0 0|16355.3 8.7 261515.5 104.3 8.2 0.5 0.1 99.6| 0 0 0 4087 0| 502052 96.62|
      -------- -----load-avg---- ---cpu-usage--- ---swap--- -------------------------io-usage----------------------- -QPS- -TPS- -Hit%-
      time | 1m 5m 15m |usr sys idl iow| si so| r/s w/s rkB/s wkB/s queue await svctm %util| ins upd del sel iud| lor hit|
      09:05:44|79.60 65.24 42.61| 83 7 5 4| 0 0|16469.4 11.6 263387.0 138.8 8.2 0.5 0.1 98.7| 0 0 0 4292 0| 509979 96.65|
      09:05:45|79.07 65.37 42.77| 80 7 6 6| 0 0|16659.5 9.7 266478.7 85.0 8.4 0.5 0.1 98.5| 0 0 0 3899 0| 496234 96.54|
      09:05:46|79.07 65.37 42.77| 78 7 7 8| 0 0|16752.9 8.7 267921.8 97.1 8.4 0.5 0.1 98.9| 0 0 0 4126 0| 508300 96.57|
      09:05:47|79.07 65.37 42.77| 82 7 6 5| 0 0|16657.2 9.6 266439.3 84.3 8.3 0.5 0.1 98.9| 0 0 0 4086 0| 502171 96.57|
      09:05:48|79.07 65.37 42.77| 79 8 6 6| 0 0|16814.5 8.7 268924.1 77.6 8.5 0.5 0.1 99.0| 0 0 0 4059 0| 499645 96.52|
      09:05:49|79.07 65.37 42.77| 81 7 6 5| 0 0|16553.0 6.8 264708.6 42.5 8.3 0.5 0.1 99.4| 0 0 0 4249 0| 501623 96.60|
      09:05:50|79.63 65.71 43.01| 79 7 7 7| 0 0|16295.1 246.9 260475.0 6442.4 8.7 0.5 0.1 99.1| 0 0 0 4231 0| 511032 96.70|
      09:05:51|79.63 65.71 43.01| 80 7 6 6| 0 0|16568.9 8.7 264919.7 104.7 8.3 0.5 0.1 99.7| 0 0 0 4272 0| 517177 96.68|
      09:05:53|79.63 65.71 43.01| 79 7 7 6| 0 0|16539.0 8.6 264502.9 87.6 8.4 0.5 0.1 98.9| 0 0 0 3992 0| 496728 96.52|
      09:05:54|79.63 65.71 43.01| 79 7 7 7| 0 0|16527.5 11.6 264363.6 92.6 8.5 0.5 0.1 98.8| 0 0 0 4045 0| 502944 96.59|
      09:05:55|79.63 65.71 43.01| 80 7 7 6| 0 0|16374.7 12.5 261687.2 134.9 8.6 0.5 0.1 99.2| 0 0 0 4143 0| 507006 96.66|
      09:05:56|76.05 65.20 42.96| 77 8 8 8| 0 0|16464.9 9.6 263314.3 111.9 8.5 0.5 0.1 98.9| 0 0 0 4250 0| 505417 96.64|
      09:05:57|76.05 65.20 42.96| 79 7 6 7| 0 0|16460.1 8.8 263283.2 93.4 8.3 0.5 0.1 98.8| 0 0 0 4294 0| 508168 96.66|
      09:05:58|76.05 65.20 42.96| 80 7 7 7| 0 0|16176.5 9.6 258762.1 127.3 8.3 0.5 0.1 98.9| 0 0 0 4160 0| 509349 96.72|
      09:05:59|76.05 65.20 42.96| 75 7 9 10| 0 0|16522.0 10.7 264274.6 93.1 8.6 0.5 0.1 97.5| 0 0 0 4034 0| 492623 96.51|
      -------- -----load-avg---- ---cpu-usage--- ---swap--- -------------------------io-usage----------------------- -QPS- -TPS- -Hit%-
      time | 1m 5m 15m |usr sys idl iow| si so| r/s w/s rkB/s wkB/s queue await svctm %util| ins upd del sel iud| lor hit|
      09:06:00|76.05 65.20 42.96| 79 7 7 7| 0 0|16369.6 21.2 261867.3 262.5 8.4 0.5 0.1 98.9| 0 0 0 4305 0| 494509 96.59|
      09:06:01|75.33 65.23 43.09| 73 6 9 12| 0 0|15864.0 209.3 253685.4 6238.0 10.0 0.6 0.1 98.7| 0 0 0 3913 0| 483480 96.62|
      09:06:02|75.33 65.23 43.09| 73 7 8 12| 0 0|15854.7 12.7 253613.2 93.6 11.0 0.7 0.1 99.0| 0 0 0 4271 0| 483771 96.64|
      09:06:03|75.33 65.23 43.09| 75 7 9 9| 0 0|16074.8 8.7 257104.3 81.7 8.1 0.5 0.1 98.5| 0 0 0 4060 0| 480701 96.55|
      09:06:04|75.33 65.23 43.09| 76 7 8 9| 0 0|16221.7 9.7 259500.1 139.4 8.1 0.5 0.1 97.6| 0 0 0 3953 0| 486774 96.56|
      09:06:05|74.98 65.33 43.24| 78 7 8 8| 0 0|16330.7 8.7 261166.5 85.3 8.2 0.5 0.1 98.5| 0 0 0 3957 0| 481775 96.53|
      09:06:06|74.98 65.33 43.24| 75 7 9 9| 0 0|16093.7 11.7 257436.1 93.7 8.2 0.5 0.1 99.2| 0 0 0 3938 0| 489251 96.60|
      09:06:07|74.98 65.33 43.24| 75 7 5 13| 0 0|15758.9 19.2 251989.4 188.2 14.7 0.9 0.1 99.7| 0 0 0 4140 0| 494738 96.70|
      09:06:08|74.98 65.33 43.24| 69 7 10 15| 0 0|16166.3 8.7 258474.9 81.2 8.9 0.5 0.1 98.7| 0 0 0 3993 0| 487162 96.58|
      09:06:09|74.98 65.33 43.24| 74 7 9 10| 0 0|16071.0 8.7 257010.9 93.3 8.2 0.5 0.1 99.2| 0 0 0 4098 0| 491557 96.61|
      09:06:10|70.98 64.66 43.14| 71 7 9 12| 0 0|15549.6 216.1 248701.1 6188.7 8.3 0.5 0.1 97.8| 0 0 0 3879 0| 480832 96.66|
      09:06:11|70.98 64.66 43.14| 71 7 10 13| 0 0|16233.7 22.4 259568.1 257.1 8.2 0.5 0.1 99.2| 0 0 0 4088 0| 493200 96.62|
      09:06:12|70.98 64.66 43.14| 78 7 8 7| 0 0|15932.4 10.6 254779.5 108.1 8.1 0.5 0.1 98.6| 0 0 0 4168 0| 489838 96.63|
      09:06:13|70.98 64.66 43.14| 71 8 9 12| 0 0|16255.9 11.5 259902.3 103.9 8.3 0.5 0.1 98.0| 0 0 0 3874 0| 481246 96.52|
      09:06:14|70.98 64.66 43.14| 60 6 16 18| 0 0|15621.0 9.7 249826.1 81.9 8.0 0.5 0.1 99.3| 0 0 0 3956 0| 480278 96.65|
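    A hedged explanation plus a way to verify it (not from the original question): iowait is only accounted while a CPU is otherwise idle with I/O outstanding. When QPS rises and user-space saturates the cores, that waiting time is absorbed into usr time instead, so iowait shrinks even though the disks are just as busy; the load average grows because threads now queue for CPU as well as for I/O. Per-core figures make the relationship visible:

      mpstat -P ALL 1    # from the sysstat package; watch %usr vs %iowait per core as QPS changes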

    Read the article

  • Building services with the .NET framework Cont’d

    - by Allan Rwakatungu
    In my previous blog I wrote an introductory post on services and how you can build them using the .NET framework's Windows Communication Foundation (WCF). In this post I will show how to develop a real-world application using WCF.

    The problem

    During the last meeting we realized developers in Uganda are not so cool - they don't use Twitter, so they may not get the latest news and updates from the technology world. We also noticed they mostly use kabiriti phones (jokes). With their kabiriti phones they are unable to access the Twitter web client or alternative Twitter mobile clients like TweetDeck, Twirl or Tweetie. However, the kabiriti phones support SMS (Yeeeeeeei). So what we are going to do to make these developers cool and keep them updated is enable them to receive tweets via SMS. We shall also enable them to develop their own applications that can extend this functionality.

    Analysis

    Thanks to services and open APIs, solving our problem is going to be easy.

    1. To get tweets we can use the Twitter service for FREE.
    2. To send SMS we shall use www.clickatell.com, as they can send SMS to any country in the world. Besides, we could not find any local service that offers APIs for sending SMS :(.
    3. To enable developers to integrate with our application so that they can extend it and build even cooler applications, we use WCF. In addition, because connectivity might be an issue, WCF's inbuilt queuing features help. (We also chose WCF because this is a post about .NET and WCF :).)

    The Code

    Accessing the tweets. To consume Twitter's REST API we shall use the WCF REST Starter Kit. As its name indicates, the REST Starter Kit is a set of .NET framework classes that enable developers to create and access REST-style services (like the Twitter service).

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using Microsoft.Http;
      using System.Net;
      using System.Xml.Linq;

      namespace UG.Demo
      {
          public class TwitterService
          {
              public IList<TwitterStatus> SomeMethodName()
              {
                  // Connect to the twitter service (HttpClient is part of the REST starter kit classes)
                  HttpClient cl = new HttpClient("http://api.twitter.com/1/statuses/friends_timeline.xml");
                  // Supply your basic authentication credentials
                  cl.TransportSettings.Credentials = new NetworkCredential("ourusername", "ourpassword");
                  // Issue an HTTP GET request
                  HttpResponseMessage resp = cl.Get();
                  // Ensure we got response 200
                  resp.EnsureStatusIsSuccessful();
                  // Use XLinq to parse the REST XML
                  var statuses = from r in resp.Content.ReadAsXElement().Descendants("status")
                                 select new TwitterStatus
                                 {
                                     User = r.Element("user").Element("screen_name").Value,
                                     Status = r.Element("text").Value
                                 };
                  return statuses.ToList();
              }
          }

          public class TwitterStatus
          {
              public string User { get; set; }
              public string Status { get; set; }
          }
      }

    Sending SMS.

      public class SMSService
      {
          public void Send(string phone, string message)
          {
              HttpClient cl1 = new HttpClient();
              // The clickatell XML format for sending SMS
              string xml = String.Format("<clickAPI><sendMsg><api_id>3239621</api_id><user>ourusername</user><password>ourpassword</password><to>{0}</to><text>{1}</text></sendMsg></clickAPI>", phone, message);
              // Post form data
              HttpUrlEncodedForm form = new HttpUrlEncodedForm();
              form.Add("data", xml);
              System.Net.ServicePointManager.Expect100Continue = false;
              string uri = @"http://api.clickatell.com/xml/xml";
              HttpResponseMessage resp = cl1.Post(uri, form.CreateHttpContent());
              resp.EnsureStatusIsSuccessful();
          }
      }

    Read the article

  • How Oracle Data Integration Customers Differentiate Their Business in Competitive Markets

    - by Irem Radzik
    With data being a central force in driving innovation and competing effectively, data integration has become a key IT approach for removing silos and ensuring work with consistent and trusted data. Especially with the release of the 12c versions, Oracle Data Integrator and Oracle GoldenGate offer easy-to-use and high-performance solutions that help companies with their critical data initiatives, including big data analytics, moving to cloud architectures, modernizing and connecting transactional systems, and more. In a recent press release we announced the great momentum and analyst recognition Oracle Data Integration products have achieved in the data integration and replication market. In that press release we described some of the key new features of Oracle Data Integrator 12c and Oracle GoldenGate 12c. In addition, a few of our 4500+ customers explained how Oracle's data integration platform helped them achieve their business goals. In this blog post I would like to go over what these customers shared about their experience.

    Land O'Lakes is one of America's premier member-owned cooperatives, offering an extensive line of agricultural supplies as well as production and business services. Rich Bellefeuille, manager, ETL & data warehouse for Land O'Lakes, told us how GoldenGate helped them modernize their critical ERP system without impacting service, and how they are moving to new projects with Oracle Data Integrator 12c: "With Oracle GoldenGate 11g, we've been able to migrate our enterprise-wide implementation of Oracle's JD Edwards EnterpriseOne ERP system to a new database and application server platform with minimal downtime to our business. Using Oracle GoldenGate 11g we reduced database migration time from nearly 30 hours to less than 30 minutes. Given our quick success, we are considering expansion of our Oracle GoldenGate 12c footprint. We are also in the midst of deploying a solution leveraging Oracle Data Integrator 12c to manage our pricing data to handle orders more effectively and provide a better relationship with our clients. We feel we are gaining higher productivity and flexibility with Oracle's data integration products."

    ICON, a global provider of outsourced development services to the pharmaceutical, biotechnology and medical device industries, highlighted the competitive advantage that a solid data integration foundation brings. Diarmaid O'Reilly, enterprise data warehouse manager, ICON plc, said: "Oracle Data Integrator enables us to align clinical trials intelligence with the information needs of our sponsors. It helps differentiate ICON's services in an increasingly competitive drug-development industry." You can find more info on ICON's implementation here.

    A popular use case for Oracle GoldenGate's real-time data integration is offloading operational reporting from critical transaction processing systems. SolarWorld, one of the world's largest solar-technology producers and the largest U.S. solar panel manufacturer, implemented Oracle GoldenGate for real-time integration of manufacturing data for fast analysis. Russ Toyama, U.S. senior database administrator for SolarWorld, told us real-time data helps their operations, and that GoldenGate's solution supports the high performance of their manufacturing systems: "We use Oracle GoldenGate for real-time data integration into our decision support system, which performs real-time analysis for manufacturing operations to continuously improve product quality, yield and efficiency. With reliable and low-impact data movement capabilities, Oracle GoldenGate also helps ensure that our critical manufacturing systems are stable and operate with high performance." You can watch the full interview with SolarWorld's Russ Toyama here.

    Starwood Hotels and Resorts is one of the many customers that found out how well Oracle Data Integration products work with Oracle Exadata. Gordon Light, senior director of information technology for Starwood Hotels, says they saw a notable performance gain in loading their Oracle Exadata reporting environment: "We leverage Oracle GoldenGate to replicate data from our central reservations systems and other OLTP databases - significantly decreasing the overall ETL duration. Moving forward, we plan to use Oracle GoldenGate to help the company achieve near-real-time reporting." You can listen to the Starwood Hotels story here.

    Many companies combine the power of Oracle GoldenGate with Oracle Data Integrator to have a single, integrated data integration platform for a variety of use cases across the enterprise. Ufone is another good example. The leading mobile communications service provider of Pakistan has improved customer service by using timely customer data in its data warehouse. Atif Aslam, head of management information systems for Ufone, says: "Oracle Data Integrator and Oracle GoldenGate help us integrate information from various systems and provide up-to-date and real-time CRM data updates hourly, rather than daily. The applications have simplified data warehouse operations and allowed business users to make faster and better-informed decisions to protect revenue in the fast-moving Pakistani telecommunications market." You can read more about Ufone's use case here.

    In our Oracle Data Integration 12c launch webcast back in November we also heard from BT's CTO Surren Parthab about their use of GoldenGate for moving to a private cloud architecture. Surren also shared his perspectives on the Oracle Data Integrator 12c and Oracle GoldenGate 12c releases. You can watch the video here.

    These are only a few examples of leading companies that have made data integration and real-time data access a key part of their data governance and IT modernization initiatives. They have seen real improvements in how their businesses operate and differentiate in today's competitive markets. You can read about other customer examples in our eBook, The Path to the Future, and access resources including white papers, data sheets, podcasts and more via our Oracle Data Integration resource kit.

    Read the article

  • Mocking the Unmockable: Using Microsoft Moles with Gallio

    - by Thomas Weller
    Usual opensource mocking frameworks (like e.g. Moq or Rhino.Mocks) can mock only interfaces and virtual methods. In contrary to that, Microsoft’s Moles framework can ‘mock’ virtually anything, in that it uses runtime instrumentation to inject callbacks in the method MSIL bodies of the moled methods. Therefore, it is possible to detour any .NET method, including non-virtual/static methods in sealed types. This can be extremely helpful when dealing e.g. with code that calls into the .NET framework, some third-party or legacy stuff etc… Some useful collected resources (links to website, documentation material and some videos) can be found in my toolbox on Delicious under this link: http://delicious.com/thomasweller/toolbox+moles A Gallio extension for Moles Originally, Moles is a part of Microsoft’s Pex framework and thus integrates best with Visual Studio Unit Tests (MSTest). However, the Moles sample download contains some additional assemblies to also support other unit test frameworks. They provide a Moled attribute to ease the usage of mole types with the respective framework (there are extensions for NUnit, xUnit.net and MbUnit v2 included with the samples). As there is no such extension for the Gallio platform, I did the few required lines myself – the resulting Gallio.Moles.dll is included with the sample download. With this little assembly in place, it is possible to use Moles with Gallio like that: [Test, Moled] public void SomeTest() {     ... What you can do with it Moles can be very helpful, if you need to ‘mock’ something other than a virtual or interface-implementing method. This might be the case when dealing with some third-party component, legacy code, or if you want to ‘mock’ the .NET framework itself. Generally, you need to announce each moled type that you want to use in a test with the MoledType attribute on assembly level. For example: [assembly: MoledType(typeof(System.IO.File))] Below are some typical use cases for Moles. For a more detailed overview (incl. naming conventions and an instruction on how to create the required moles assemblies), please refer to the reference material above.  Detouring the .NET framework Imagine that you want to test a method similar to the one below, which internally calls some framework method:   public void ReadFileContent(string fileName) {     this.FileContent = System.IO.File.ReadAllText(fileName); } Using a mole, you would replace the call to the File.ReadAllText(string) method with a runtime delegate like so: [Test, Moled] [Description("This 'mocks' the System.IO.File class with a custom delegate.")] public void ReadFileContentWithMoles() {     // arrange ('mock' the FileSystem with a delegate)     System.IO.Moles.MFile.ReadAllTextString = (fname => fname == FileName ? FileContent : "WrongFileName");       // act     var testTarget = new TestTarget.TestTarget();     testTarget.ReadFileContent(FileName);       // assert     Assert.AreEqual(FileContent, testTarget.FileContent); } Detouring static methods and/or classes A static method like the below… public static string StaticMethod(int x, int y) {     return string.Format("{0}{1}", x, y); } … can be ‘mocked’ with the following: [Test, Moled] public void StaticMethodWithMoles() {     MStaticClass.StaticMethodInt32Int32 = ((x, y) => "uups");       var result = StaticClass.StaticMethod(1, 2);       Assert.AreEqual("uups", result); } Detouring constructors You can do this delegate thing even with a class’ constructor. 
The syntax for this is not all too intuitive, because you have to set up the internal state of the mole, but generally it works like a charm. For example, to replace this c’tor… public class ClassWithCtor { public int Value { get; private set; } public ClassWithCtor(int someValue) { this.Value = someValue; } } … you would do the following: [Test, Moled] public void ConstructorTestWithMoles() { MClassWithCtor.ConstructorInt32 = ((@class, @value) => new MClassWithCtor(@class) {ValueGet = () => 99}); var classWithCtor = new ClassWithCtor(3); Assert.AreEqual(99, classWithCtor.Value); } Detouring abstract base classes You can also use this approach to ‘mock’ abstract base classes of a class that you call in your test. Assume that you have something like this: public abstract class AbstractBaseClass { public virtual string SaySomething() { return "Hello from base."; } } public class ChildClass : AbstractBaseClass { public override string SaySomething() { return string.Format("Hello from child. Base says: '{0}'", base.SaySomething()); } } Then you would set up the child’s underlying base class like this: [Test, Moled] public void AbstractBaseClassTestWithMoles() { ChildClass child = new ChildClass(); new MAbstractBaseClass(child) { SaySomething = () => "Leave me alone!" }.InstanceBehavior = MoleBehaviors.Fallthrough; var hello = child.SaySomething(); Assert.AreEqual("Hello from child. Base says: 'Leave me alone!'", hello); } Setting the mole’s behavior to MoleBehaviors.Fallthrough causes the ‘original’ method to be called if a respective delegate is not provided explicitly – here it causes the ChildClass’ override of the SaySomething() method to be called. There are more possible scenarios where the Moles framework could be of much help (e.g. it’s also possible to detour interface implementations like IEnumerable<T> and such…). One other possibility that comes to my mind (because I’m currently dealing with it) is to replace calls from repository classes to the ADO.NET Entity Framework O/R mapper with delegates, to isolate the repository classes from the underlying database – which otherwise would not be possible. Usage Since Moles relies on runtime instrumentation, moled tests must be run under the Pex profiler. This only works from inside Visual Studio if you write your tests with MSTest (Visual Studio Unit Test). While other unit test frameworks generally can be used with Moles, they require the respective tests to be run via the command line, executed through the moles.runner.exe tool. A typical test execution would be similar to this: moles.runner.exe <mytests.dll> /runner:<myframework.console.exe> /args:/<myargs> So, the moled tests can be run through tools like NCover or a scripting tool like MSBuild (which makes them easy to run in a Continuous Integration environment), but they are somewhat unhandy to run in the usual TDD workflow (which I described in some detail here). To make this a bit more fluent, I wrote a ReSharper live template to generate the respective command line for the test (it is also included in the sample download – moled_cmd.xml). This is just a quick-and-dirty ‘solution’; maybe it makes sense to write an extra Gallio adapter plugin (similar to the many others that are already provided) and include it with the Gallio download package, if there’s sufficient demand for it. 
As of now, the only way to run tests with the Moles framework from within Visual Studio is by using them with MSTest. From the command line, anything with a managed console runner can be used (provided that the appropriate extension is in place)… A typical Gallio/Moles command line (as generated by the mentioned R#-template) looks like this: "%ProgramFiles%\Microsoft Moles\bin\moles.runner.exe" /runner:"%ProgramFiles%\Gallio\bin\Gallio.Echo.exe" "Gallio.Moles.Demo.dll" /args:/r:IsolatedAppDomain /args:/filter:"ExactType:TestFixture and Member:ReadFileContentWithMoles" -- Note: When using the command line with Echo (Gallio’s console runner), be sure to always include the IsolatedAppDomain option, otherwise the tests won’t use the instrumentation callbacks! -- License issues As I already said, the free mocking frameworks can mock only interfaces and virtual methods. If you want to mock other things, you need the Typemock Isolator tool, which comes with license costs (although these ‘costs’ are ridiculously low compared to the value that such a tool can bring to a software project, spending money often is a considerable gateway hurdle in real life...). The Moles framework also is not totally free, but comes with the same license conditions as the (closely related) Pex framework: it is free for academic/non-commercial use only; using it in a ‘real’ software project requires an MSDN subscription (from VS2010 Professional on). The demo solution The sample solution (VS 2008) can be downloaded from here. It contains the Gallio.Moles.dll which provides the here-described Moled attribute, the above-mentioned R#-template (moled_cmd.xml) and a test fixture containing the use case scenarios described above. To run it, you need the Gallio framework (download) and Microsoft Moles (download) installed in the default locations. Happy testing…

    Read the article

  • Beyond Chatting: What ‘Social’ Means for CRM

    - by Natalia Rachelson
A guest post by Steve Diamond, Senior Director, Outbound Product Management, Oracle In a recent post on this blog, my colleague Steve Boese asked three questions related to the widespread popularity and incredibly rapid growth of Facebook, Pinterest, and LinkedIn. Steve then addressed the many applications for collaborative solutions in the area of Human Capital Management. So, in turning to a conversation about Customer Relationship Management (CRM) and Sales Force Automation (SFA), let me ask you one simple question. How many sales people, particularly at business-to-business companies, consistently meet or beat their quotas by working alone, with no collaboration among fellow sales people, sales executives, employees in product groups, in service, in Legal, third-party partners, etc.? Hello? Is anybody out there? What’s that cricket noise I hear? That’s correct. Nobody! When it comes to Sales, introverts arguably have a distinct disadvantage. While it’s certainly a truism that “success” in most professional endeavors requires working with people, it’s a mandatory success factor in Sales. This fact became abundantly clear to me one early morning in the late 1990s when I joined the former Hyperion Solutions (now part of Oracle) and attended a Sales Award Ceremony. The Head of Sales at that time gave out dozens of awards – none of them to individuals and all of them to TEAMS of individuals. That’s how it works in Sales. Your colleagues help provide you with product intelligence and competitive intelligence. They help you build the best presentations, pitches, and proposals. They help you develop the most killer RFPs. They align you with the best product people to ensure you’re matching the best products for the opportunity and join you in critical meetings. They help knock the socks off your prospects in “bake-off” demos. They bring in the best partners to either add complementary products to your opportunity or help you implement a solution. They work with you as a collective team. And so how is all this collaboration STILL typically done today? Through email. And yet we all silently or not so silently grimace about email. It’s relatively siloed. It’s painful to search. It’s difficult to align by topic. And it’s nearly impossible to re-trace meaningful and helpful conversations that occurred among a group or a team at some point in history. This is where social networking for Sales comes into play. It’s about PURPOSEFUL social networking versus chattering. What is purposeful social networking? It’s collaboration that’s built around opportunities, accounts, and contacts. It’s collaboration that delivers valuable context – on the target company, and on key competitors – just to name two examples. 
It’s collaboration that can scale to provide coaching for larger numbers of sales representatives, both for general purposes and, as we’ve largely discussed here, for specific ‘deals.’ And it’s collaboration that allows a team of people to collectively edit and iterate on a document like an RFP or a soon-to-be killer presentation that is maintained in a central repository, with no time wasted searching for it or worrying about version control. But lest we get carried away, let’s remember that collaboration “happens” among sales people whether there is specialized software to support it or not. The human practice of sales has not changed much in the last 80 to 90 years. Collaboration has been a mainstay during this entire time. But what social networking in general, and Oracle Social Network in particular, delivers is the opportunity for sales teams to dramatically increase their effectiveness and efficiency – to identify and close more high-quality and lucrative opportunities more quickly. For most sales organizations, this is how the game is won. To learn more please visit Oracle Social Network and Oracle Fusion Customer Relationship Management on oracle.com

    Read the article

  • How-to tell the ViewCriteria a user chose in an af:query component

    - by frank.nimphius
The af:query component defines a search form for application users to enter search conditions for a selected View Criteria. A View Criteria is a named where clause that you can create declaratively on the ADF Business Components View Object. A default View Criteria that allows users to search in all attributes exists by default and is exposed in the Data Controls panel. To create an ADF Faces search form, expand the View Object node that contains the View Criteria definition in the Data Controls panel. Drag the View Criteria that should be displayed as the default criteria onto the page and choose Query in the opened context menu. One of the options within the Query option is to create an ADF Query Panel with Table, which displays the result set in a table view that can have additional column filters defined. To intercept the user query for modification, or just to know about the selected View Criteria, you override the QueryListener property on the af:query component or the af:table component. Overriding the QueryListener on the table makes sense if the table allows users to further filter the result set using column filters. To override the default QueryListener, copy the existing string referencing the binding layer to the clipboard and then select Edit from the field context menu (press the arrow icon to open it) to select or create a new managed bean and method to handle the query event. The code below is from a managed bean with custom query listener handlers defined for the af:query component and the af:table component. 
The default listener entry copied to the clipboard was "#{bindings.ImplicitViewCriteriaQuery.processQuery}"

public void onQueryList(QueryEvent queryEvent) {
  // The generated QueryListener replaced by this method:
  // #{bindings.ImplicitViewCriteriaQuery.processQuery}
  QueryDescriptor qdes = queryEvent.getDescriptor();
  // print or log the selected View Criteria
  System.out.println("NAME " + qdes.getName());
  // call the default query event
  invokeQueryEventMethodExpression(
    "#{bindings.ImplicitViewCriteriaQuery.processQuery}", queryEvent);
}

public void onQueryTable(QueryEvent queryEvent) {
  // The generated QueryListener replaced by this method:
  // #{bindings.ImplicitViewCriteriaQuery.processQuery}
  QueryDescriptor qdes = queryEvent.getDescriptor();
  // print or log the selected View Criteria
  System.out.println("NAME " + qdes.getName());
  invokeQueryEventMethodExpression(
    "#{bindings.ImplicitViewCriteriaQuery.processQuery}", queryEvent);
}

private void invokeQueryEventMethodExpression(String expression, QueryEvent queryEvent) {
  FacesContext fctx = FacesContext.getCurrentInstance();
  ELContext elctx = fctx.getELContext();
  ExpressionFactory efactory = fctx.getApplication().getExpressionFactory();
  MethodExpression me =
    efactory.createMethodExpression(elctx, expression, Object.class,
                                    new Class[]{QueryEvent.class});
  me.invoke(elctx, new Object[]{queryEvent});
}

Of course, this code can also be used as a starting point for other query manipulations, and it works with saved custom criteria as well. To read more about the af:query component, see: http://download.oracle.com/docs/cd/E15523_01/apirefs.1111/e12419/tagdoc/af_query.html
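If you need more than just the name of the selected View Criteria – for example, the individual filter values the user entered – the QueryDescriptor also exposes the criterion tree. Below is a minimal sketch of such a helper; the class name is made up for illustration, and while the oracle.adf.view.rich.model types shown are part of the ADF Faces API, you should verify the exact signatures against your JDeveloper version:

import java.util.List;
import oracle.adf.view.rich.model.AttributeCriterion;
import oracle.adf.view.rich.model.ConjunctionCriterion;
import oracle.adf.view.rich.model.Criterion;
import oracle.adf.view.rich.model.QueryDescriptor;

public class QueryDescriptorLogger {

  // Logs every attribute-level filter the user entered in the af:query form.
  public static void logCriteria(QueryDescriptor qdes) {
    ConjunctionCriterion root = qdes.getConjunctionCriterion();
    if (root == null) {
      return;
    }
    List<Criterion> criterionList = root.getCriterionList();
    for (Criterion criterion : criterionList) {
      if (criterion instanceof AttributeCriterion) {
        AttributeCriterion ac = (AttributeCriterion) criterion;
        // print the attribute label and the value(s) the user entered
        System.out.println(ac.getAttribute().getLabel() + " = " + ac.getValues());
      }
    }
  }
}

Calling QueryDescriptorLogger.logCriteria(queryEvent.getDescriptor()) from either of the listeners above would print the user's filter values before the default query processing runs.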

    Read the article

  • Who is Jeremiah Owyang?

    - by Michael Hylton
Q: What’s your current role and what career path brought you here? J.O.: I'm currently a partner and one of the founding team members at Altimeter Group. I'm the Research Director, and I also wear the hat of Industry Analyst. Prior to joining Altimeter, I was an Industry Analyst at Forrester covering Social Computing, and before that, deployed and managed the social media program at Hitachi Data Systems in Santa Clara. Around that time, I started a career blog called Web Strategy which focused on how companies were using the web to connect with customers --and never looked back. Q: As an industry analyst, what are you focused on these days? J.O.: There are three trends that I'm focusing my research on at this time: 1) The Dynamic Customer Journey: Individuals (both b2c and b2b) are given so many options in their sources of data, channels to choose from and screens to consume them on that we've found that at each given touchpoint there are 75 potential permutations. Companies that can map this, then deliver information to individuals when they need it, will have a competitive advantage, and we want to find out who's doing this. 2) One of the sub-themes that supports this trend is Social Performance. Yesterday's social web was disparate engagement of humans, but the next phase will be data driven, and soon new technologies will emerge to help all those that are consuming, publishing, and engaging on the social web to be more efficient with their time through forms of automation. As you might expect, this comes with upsides and downsides. 3) The Sentient World is our research theme that looks out the furthest, as the world around us (even inanimate objects) becomes 'self-aware' and able to talk back to us via digital devices and beyond. Big data, the internet of things, and mobile devices will all be part of this next set. Q: People cite that the line between work and life is getting more and more blurred. Do you see your personal life influencing your professional work? J.O.: The lines between our work and personal lives are dissolving, and this leads to a greater upside of being always connected and having deeper relationships with those that are not. It also means a downside of societal expectations that we're always around and available for colleagues, customers, and beyond. In the future, a balance will be sought as we seek to achieve the goals of family, friends, work, and our own personal desires. All of this is, ironically, being written at 4:30 am on a Sunday. Q: How can people keep up with what you’re working on? J.O.: A great question, thanks. There are a few sources of information to find out. I'll lead with the first, which is my blog at web-strategist.com. A few times a week I'll publish my industry insights (hires, trends, forces, funding, M&A, business needs) as well as on twitter, where I'll point to all the news that's fit to print @jowyang. 
As my research reports go live (we publish them for all to read --called Open Research-- at no cost) they'll emerge on my blog, or check out the research tab to find out more now: http://www.web-strategist.com/blog/research/ Q: Recently, you’ve been working with us here at Oracle on something exciting coming up later this week. What’s on the horizon? J.O.: Absolutely! This coming Thursday, September 13th, I’m doing a webcast with Oracle on “Managing Social Relationships for the Enterprise”. This is going to be a great discussion with Reggie Bradford, Senior Vice President of Product Development at Oracle, and Christian Finn, Senior Director of Product Management for Oracle WebCenter. I’m looking forward to a great discussion around all those issues that so many companies are struggling with these days as they realize how much social media is impacting their business. It’s changing the way your customers and employees interact with your brand. Today it’s no longer a matter of when to become a social-enabled enterprise, but how to become a successful one. Q: You’ve been very actively pursued for media interviews and conference and company speaking engagements – anything you’d like to share to give us a sneak peek of what to expect on Thursday’s webcast? J.O.: Below is a 15-minute video which encapsulates Altimeter’s themes on the Dynamic Customer Journey and the Sentient World. I’m really proud to have taken an active role in the first ever LeWeb outside of Paris. This one, which was featured in downtown London across the street from Westminster Abbey, was sold out. If you’ve not heard of LeWeb, it is a global Internet conference hosted by Loic and Geraldine Le Meur, a power couple that stem from Paris but are also living in Silicon Valley; it is one of my favorite conferences to connect with brands, technology innovators, investors and friends. Altimeter was able to play a minor role in suggesting the theme for the event, “Faster Than Real Time”, which stems off previous LeWebs that focused on the “Real Time Web”. In this radical state, companies are able to anticipate the needs of their customers by using data, technology, and devices, and deliver meaningful experiences before customers even know they need it. I explore two of three of Altimeter’s research themes, the Dynamic Customer Journey and the Sentient World, in my speech, but due to time, did not focus on Adaptive Organization.

    Read the article

  • Java Cloud Service Integration using Web Service Data Control

    - by Jani Rautiainen
Java Cloud Service (JCS) provides a platform to develop and deploy business applications in the cloud. In Fusion Applications Cloud deployments, customers do not have the option to deploy custom applications developed with JDeveloper, to ensure the integrity and supportability of the hosted application service. Instead, custom applications can be deployed to JCS and integrated with the Fusion Applications Cloud instance. This series of articles will go through the features of JCS, provide end-to-end examples on how to develop and deploy applications on JCS, and show how to integrate them with the Fusion Applications instance. In this article, a custom application that integrates with Fusion Applications using a Web Service Data Control will be implemented. Pre-requisites Access to Cloud instance In order to deploy the application, access to a JCS instance is needed; a free trial JCS instance can be obtained from the Oracle Cloud site. To register you will need a credit card, even though the credit card will not be charged. To register, simply click "Try it" and choose the "Java" option. The confirmation email will contain the connection details. See this video for an example of the registration. Once the request is processed you will be assigned two service instances: Java and Database. Applications deployed to JCS must use Oracle Database Cloud Service as their underlying database, so when a JCS instance is created, a database instance is associated with it using a JDBC data source. The cloud services can be monitored and managed through the web UI. For details refer to Getting Started with Oracle Cloud. JDeveloper JDeveloper contains Cloud-specific features related to e.g. connection and deployment. To use these features, download JDeveloper from the JDeveloper download site by clicking the “Download JDeveloper 11.1.1.7.1 for ADF deployment on Oracle Cloud” link; this version of JDeveloper has the JCS integration features that will be used in this article. For versions that do not include the Cloud integration features, the Oracle Java Cloud Service SDK or the JCS Java Console can be used for deployment. For details on installing and configuring JDeveloper refer to the installation guide. For details on the SDK refer to Using the Command-Line Interface to Monitor Oracle Java Cloud Service and Using the Command-Line Interface to Manage Oracle Java Cloud Service. Create Application In this example the “JcsWsDemo” application created in the “Java Cloud Service Integration using Web Service Proxy” article is used as the base. Create Web Service Data Control In this example we will use a Web Service Data Control to integrate with the Credit Rule Service in Fusion Applications. The data control will be used to query data from Fusion Applications using a web service call and present the data in a table. 
To generate the data control, choose the “Model” project and navigate to "New -> All Technologies -> Business Tier -> Data Controls -> Web Service Data Control" and enter the following: Name: CreditRuleServiceDC URL: https://ic-[POD].oracleoutsourcing.com/icCnSetupCreditRulesPublicService/CreditRuleService?WSDL Service: {{http://xmlns.oracle.com/apps/incentiveCompensation/cn/creditSetup/creditRule/creditRuleService/}CreditRuleService On step 2 select the “findRule” operation. Skip step 3, and on step 4 define the credentials to access the service. Do note that in this example these credentials are only used if testing locally; for JCS deployment the credentials need to be manually updated on the EAR file. Click “Finish” and the data control generation is done. Creating UI In order to use the data control we will need to populate the complex objects FindCriteria and FindControl. For simplicity, in this example we will create logic in a managed bean that populates the objects. Open “JcsWsDemoBean.java” and add the following logic:

Map findCriteria;
Map findControl;

public void setFindCriteria(Map findCriteria) {
  this.findCriteria = findCriteria;
}

public Map getFindCriteria() {
  findCriteria = new HashMap();
  findCriteria.put("fetchSize", 10);
  findCriteria.put("fetchStart", 0);
  return findCriteria;
}

public void setFindControl(Map findControl) {
  this.findControl = findControl;
}

public Map getFindControl() {
  findControl = new HashMap();
  return findControl;
}

Open “JcsWsDemo.jspx”, navigate to "Data Controls -> CreditRuleServiceDC -> findRule(Object, Object) -> result" and drag and drop the “result” node into the “af:form” element on the page. In the “Edit Table Columns” dialog remove all columns except “RuleId” and “Name”. On the “Edit Action Binding” window displayed, enter the reference to the java class created above by selecting “#{JcsWsDemoBean.findCriteria}”. Also define the value for the “findControl” by selecting “#{JcsWsDemoBean.findControl}”. Deploy to JCS For a WS DC the authentication details need to be updated in the connection details before deploying. Open “connections.xml” by navigating "Application Resources -> Descriptors -> ADF META-INF -> connections.xml". Change the user name and password entry from: <soap username="transportUserName" password="transportPassword" to match the access details for the target environment. Follow the same steps as documented in the previous article “Java Cloud Service ADF Web Application”. Once deployed, the application can be accessed with the URL: https://java-[identity domain].java.[data center].oraclecloudapps.com/JcsWsDemo-ViewController-context-root/faces/JcsWsDemo.jspx When accessed, the first 10 rules in the system are displayed. Summary In this article we learned how to integrate with Fusion Applications using a Web Service Data Control in JCS. In future articles various other integration techniques will be covered.
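If you later need to run the same query from Java code – for example from an action method – the findRule method binding in the page definition can be executed programmatically through the standard ADF binding APIs. The sketch below makes a few assumptions: the binding name "findRule" and the parameter names must match your page definition, and the wrapper class itself is purely illustrative:

import oracle.adf.model.BindingContext;
import oracle.binding.BindingContainer;
import oracle.binding.OperationBinding;

public class FindRuleInvoker {

  // Executes the 'findRule' method action declared in the current page definition.
  public Object invokeFindRule(Object findCriteria, Object findControl) {
    BindingContainer bindings =
      BindingContext.getCurrent().getCurrentBindingsEntry();
    OperationBinding op = bindings.getOperationBinding("findRule");
    // parameter names must match the ones declared in the page definition
    op.getParamsMap().put("findCriteria", findCriteria);
    op.getParamsMap().put("findControl", findControl);
    Object result = op.execute();
    if (!op.getErrors().isEmpty()) {
      // surface the first binding error; adapt the handling as needed
      throw new RuntimeException(op.getErrors().get(0).toString());
    }
    return result;
  }
}

This keeps the UI declarative while still allowing the query to be triggered, for instance, from a button's action listener.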

    Read the article

  • Big Data Matters with ODI12c

    - by Madhu Nair
contributed by Mike Eisterer. On October 17th, 2013, Oracle announced the release of Oracle Data Integrator 12c (ODI12c). This release signifies improvements to Oracle’s Data Integration portfolio of solutions, particularly Big Data integration. Why Big Data = Big Business Organizations are gaining greater insights and actionability through the increased storage, processing and analytical benefits offered by Big Data solutions. New technologies and frameworks like HDFS, NoSQL, Hive and MapReduce support these benefits now. As more data is collected, analytical requirements increase, the complexity of managing transformations and aggregations of data compounds, and organizations find themselves in need of scalable Data Integration solutions. ODI12c provides enterprise solutions for the movement, translation and transformation of information and data, heterogeneously and in Big Data environments, through: The ability for existing ODI and SQL developers to leverage new Big Data technologies. A metadata-focused approach for cataloging, defining and reusing Big Data technologies, mappings and process executions. Integration between many heterogeneous environments and technologies such as HDFS and Hive. Generation of Hive Query Language. Working with Big Data using Knowledge Modules ODI12c provides developers with the ability to define sources and targets and visually develop mappings to effect the movement and transformation of data. As the mappings are created, ODI12c leverages a rich library of prebuilt integrations, known as Knowledge Modules (KMs). These KMs are contextual to the technologies and platforms to be integrated. Steps and actions needed to manage the data integration are pre-built and configured within the KMs. The Oracle Data Integrator Application Adapter for Hadoop provides a series of KMs specifically designed to integrate with Big Data technologies. The Big Data KMs include: Check Knowledge Module, Reverse Engineer Knowledge Module, Hive Transform Knowledge Module, Hive Control Append Knowledge Module, File to Hive (LOAD DATA) Knowledge Module, and File-Hive to Oracle (OLH-OSCH) Knowledge Module. Nothing beats an example: To demonstrate the use of the KMs which are part of the ODI Application Adapter for Hadoop, a mapping may be defined to move data between files and Hive targets. The mapping is defined by dragging the source and target into the mapping, performing the attribute (column) mapping (see Figure 1) and then selecting the KM which will govern the process. In this mapping example, movie data is being moved from an HDFS source into a Hive table. Some of the attributes, such as “CUSTID to custid”, have been mapped over. Figure 1 Defining the Mapping Before the proper KM can be assigned to define the technology for the mapping, it needs to be added to the ODI project. The Big Data KMs have been made available to the project through the KM import process. Generally, this is done prior to defining the mapping. Figure 2 Importing the Big Data Knowledge Modules Following the import, the KMs are available in the Designer Navigator. 
Figure 3 The Project View in Designer, Showing Installed IKMs Once the KM is imported, it may be assigned to the mapping target. This is done by selecting the Physical View of the mapping and examining the Properties of the Target. In this case MOVIAPP_LOG_STAGE is the target of our mapping. Figure 4 Physical View of the Mapping and Assigning the Big Data Knowledge Module to the Target Alternative KMs may be selected as well, providing flexibility and abstracting the logical mapping from the physical implementation. Our mapping may be applied to other technologies as well. The mapping is now complete and is ready to run. We will see more in a future blog about running a mapping to load Hive. To complete this quick ODI for Big Data overview, let us take a closer look at what the IKM File to Hive is doing for us. ODI provides differentiated capabilities by building into the KM the process and steps which normally would have to be manually developed, tested and implemented. As shown in Figure 5, the KM is preparing the Hive session, managing the Hive tables, performing the initial load from HDFS and then performing the insert into Hive. HDFS and Hive options are selected graphically, as shown in the properties in Figure 4. Figure 5 Process and Steps Managed by the KM What’s Next Big Data, being the shape-shifting business challenge it is, is fast evolving into the deciding factor between market leaders and the rest. Now that an introduction to ODI and Big Data has been provided, look for additional blogs coming soon on the Knowledge Modules which make up the Oracle Data Integrator Application Adapter for Hadoop: Importing Big Data Metadata into ODI, Testing Data Stores and Loading Hive Targets; Generating Transformations using Hive Query Language; Loading Oracle from Hadoop Sources. For more information now, please visit the Oracle Data Integrator Application Adapter for Hadoop web site, http://www.oracle.com/us/products/middleware/data-integration/hadoop/overview/index.html Do not forget to tune in to the ODI12c Executive Launch webcast on the 12th to hear more about ODI12c and GG12c.

    Read the article

  • I have 6 updates that won't install on Ubuntu 12.04?

    - by Taylor
I'm an Ubuntu novice, so any help here is greatly appreciated! I'm running Ubuntu 12.04, and I have six updates that just won't install. I've tried Update Manager, sudo apt-get upgrade, and sudo apt-get update. Nothing has worked so far. Here are the details I get from Update Manager: installArchives() failed: Setting up linux-image-3.2.0-24-generic-pae (3.2.0-24.37) ... Running depmod. sh: 1: /usr/sbin/update-initramfs: not found Failed to create initrd image. dpkg: error processing linux-image-3.2.0-24-generic-pae (--configure): subprocess installed post-installation script returned error exit status 2 Setting up linux-image-3.2.0-27-generic-pae (3.2.0-27.43) ... No apport report written because MaxReports is reached already Running depmod. sh: 1: /usr/sbin/update-initramfs: not found Failed to create initrd image. dpkg: error processing linux-image-3.2.0-27-generic-pae (--configure): subprocess installed post-installation script returned error exit status 2 No apport report written because MaxReports is reached already Setting up linux-image-3.2.0-29-generic-pae (3.2.0-29.46) ... Running depmod. sh: 1: /usr/sbin/update-initramfs: not found Failed to create initrd image. dpkg: error processing linux-image-3.2.0-29-generic-pae (--configure): subprocess installed post-installation script returned error exit status 2 No apport report written because MaxReports is reached already Setting up udev (175-0ubuntu9.1) ... udev stop/waiting udev start/running, process 3685 /var/lib/dpkg/info/udev.postinst: 87: /var/lib/dpkg/info/udev.postinst: update-initramfs: not found dpkg: error processing udev (--configure): subprocess installed post-installation script returned error exit status 127 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of xserver-xorg-core: xserver-xorg-core depends on udev (= 149); however: Package udev is not configured yet. dpkg: error processing xserver-xorg-core (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of fglrx: fglrx depends on xserver-xorg-core; however: Package xserver-xorg-core is not configured yet. dpkg: error processing fglrx (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of fglrx-amdcccle: fglrx-amdcccle depends on fglrx; however: Package fglrx is not configured yet. dpkg: error processing fglrx-amdcccle (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of linux-image-generic-pae: linux-image-generic-pae depends on linux-image-3.2.0-24-generic-pae; however: Package linux-image-3.2.0-24-generic-pae is not configured yet. dpkg: error processing linux-image-generic-pae (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of linux-generic-pae: linux-generic-pae depends on linux-image-generic-pae (= 3.2.0.24.26); however: Package linux-image-generic-pae is not configured yet. 
dpkg: error processing linux-generic-pae (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of xserver-xorg-video-intel: xserver-xorg-video-intel depends on xorg-video-abi-11; however: Package xorg-video-abi-11 is not installed. Package xserver-xorg-core which provides xorg-video-abi-11 is not configured yet. xserver-xorg-video-intel depends on xserver-xorg-core (= 2:1.10.99.901); however: Package xserver-xorg-core is not configured yet. dpkg: error processing xserver-xorg-video-intel (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of fglrx-dev:No apport report written because MaxReports is reached already fglrx-dev depends on fglrx; however: Package fglrx is not configured yet. dpkg: error processing fglrx-dev (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: linux-image-3.2.0-24-generic-pae linux-image-3.2.0-27-generic-pae linux-image-3.2.0-29-generic-pae udev xserver-xorg-core fglrx fglrx-amdcccle linux-image-generic-pae linux-generic-pae xserver-xorg-video-intel fglrx-dev Error in function: Setting up linux-image-3.2.0-24-generic-pae (3.2.0-24.37) ... Running depmod. sh: 1: /usr/sbin/update-initramfs: not found Failed to create initrd image. dpkg: error processing linux-image-3.2.0-24-generic-pae (--configure): subprocess installed post-installation script returned error exit status 2 Setting up linux-image-3.2.0-29-generic-pae (3.2.0-29.46) ... Running depmod. sh: 1: /usr/sbin/update-initramfs: not found Failed to create initrd image. dpkg: error processing linux-image-3.2.0-29-generic-pae (--configure): subprocess installed post-installation script returned error exit status 2 Setting up linux-image-3.2.0-27-generic-pae (3.2.0-27.43) ... Running depmod. sh: 1: /usr/sbin/update-initramfs: not found Failed to create initrd image. dpkg: error processing linux-image-3.2.0-27-generic-pae (--configure): subprocess installed post-installation script returned error exit status 2 Setting up udev (175-0ubuntu9.1) ... udev stop/waiting udev start/running, process 3782 /var/lib/dpkg/info/udev.postinst: 87: /var/lib/dpkg/info/udev.postinst: update-initramfs: not found dpkg: error processing udev (--configure): subprocess installed post-installation script returned error exit status 127 dpkg: dependency problems prevent configuration of linux-image-generic-pae: linux-image-generic-pae depends on linux-image-3.2.0-24-generic-pae; however: Package linux-image-3.2.0-24-generic-pae is not configured yet. dpkg: error processing linux-image-generic-pae (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of xserver-xorg-core: xserver-xorg-core depends on udev (= 149); however: Package udev is not configured yet. dpkg: error processing xserver-xorg-core (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of fglrx: fglrx depends on xserver-xorg-core; however: Package xserver-xorg-core is not configured yet. dpkg: error processing fglrx (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of linux-generic-pae: linux-generic-pae depends on linux-image-generic-pae (= 3.2.0.24.26); however: Package linux-image-generic-pae is not configured yet. 
dpkg: error processing linux-generic-pae (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of xserver-xorg-video-intel: xserver-xorg-video-intel depends on xorg-video-abi-11; however: Package xorg-video-abi-11 is not installed. Package xserver-xorg-core which provides xorg-video-abi-11 is not configured yet. xserver-xorg-video-intel depends on xserver-xorg-core (= 2:1.10.99.901); however: Package xserver-xorg-core is not configured yet. dpkg: error processing xserver-xorg-video-intel (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of fglrx-amdcccle: fglrx-amdcccle depends on fglrx; however: Package fglrx is not configured yet. dpkg: error processing fglrx-amdcccle (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of fglrx-dev: fglrx-dev depends on fglrx; however: Package fglrx is not configured yet. dpkg: error processing fglrx-dev (--configure): dependency problems - leaving unconfigured

    Read the article

  • People, Process & Engagement: WebCenter Partner Keste

    - by Michael Snow
Within the WebCenter group here at Oracle, discussions about people, process and engagement cross over many vertical industries and products. Amidst our growing partner ecosystem, the community provides us insight into great customer use cases every day. Such is the case with our partner, Keste, who provides us a guest post on our blog today with an overview of their innovative solution for a customer in the transportation industry. Keste is an Oracle software solutions and development company headquartered in Dallas, Texas. As a Platinum member of the Oracle® PartnerNetwork, Keste designs, develops and deploys custom solutions that automate complex business processes. Seamless Customer Self-Service Experience in the Trucking Industry with Oracle WebCenter Portal Keste, Oracle Platinum Partner Customer Overview Omnitracs, Inc., a Qualcomm company, provides mobility solutions for trucking fleets to companies in the transportation industry. Omnitracs’ mobility services include basic communications such as text as well as advanced monitoring services such as GPS tracking, temperature tracking of perishable goods, load tracking and weight distribution, and many others. Customer Business Needs Already the leading provider of mobility solutions for large trucking fleets, Omnitracs chose to target smaller trucking fleets as new customers. However, their existing high-touch customer support model would not be cost-effective or scalable for managing and servicing these smaller customers. Omnitracs needed to provide several self-service features to make customer support more scalable while keeping customer satisfaction levels high and the costs manageable. The solution also had to be very intuitive and easy to use. The systems that Omnitracs sells to these trucking customers require professional installation, and smaller customers need to track and schedule the installation. Information captured in Oracle eBusiness Suite needed to be readily available for new customers to track these purchases and delivery details. Omnitracs wanted a high-impact user interface to significantly improve the customer experience, with the ability to integrate with EBS, provisioning systems, as well as CRM systems that were already implemented. Omnitracs also wanted to build an architecture platform that could potentially be extended to other portals. Omnitracs’ stated goal was to deliver an “eBay-like” or “Amazon-like” experience for all of their customers so that they could reach a much broader market beyond their large company customer base. Solution Overview In order to manage the increased complexity, the growing support needs of global customers and improve overall product time-to-market in a cost-effective manner, IT began to deliver a self-service model. This self-service model not only transformed numerous business processes but is also allowing the business to keep up with the growing demands of the (internal and external) customers. This solution was a customer service portal that provided self-service capabilities for large and small customers alike for activation of mobility products, managing add-on applications for the devices (much like the Apple App Store), transferring services when trucks are sold to other companies, as well as deactivation – all without the involvement of a call service agent or sending multiple emails to different Omnitracs contacts. 
This is a conceptual view of the Customer Portal showing the details of the components that make up the solution. The portal application for transactions was entirely built using ADF 11g R2. Omnitracs’ business had a pressing requirement to have a portal available 24/7 for its customers. Since there were interactions with EBS in the back-end, downtime on EBS would negate this availability. Omnitracs devised a decoupling strategy at the database side for the EBS data. The decoupling of the database was done using Oracle Data Guard and completely insulated the solution from any eBusiness Suite downtime. The customer has no knowledge of whether EBS is running or not. Here are two sample screenshots of the portal application built in Oracle ADF. Customer Benefits The Customer Portal not only provided the scalability to grow the business but also provided seamless integration with other disparate applications. Some of the key benefits are: Improved Customer Experience: With a modern look and feel and a portal that has the aspects of an App Store, the customer experience was significantly improved. Page response times went from several seconds to sub-second for all of the pages. Enabled new product launches: After successfully dominating the large fleet market, Omnitracs now has a scalable solution to sell to and manage smaller fleet customers, giving them a huge advantage over their nearest competitors. Dozens of new customers have been acquired via this portal through an onboarding process that now takes minutes. Seamless Integrations Improve Customer Support: ADF 11g R2 allowed Omnitracs to bring a diverse list of applications into one integrated solution. This provided a seamless experience for customers, routing them from a marketing-focused application to a customer-oriented portal. Internally, it also allowed Sales Representatives to have an integrated flow for taking a prospect through the various steps to onboard them as a customer. Key integrations included: Unity Core, Salesforce.com, Merchant e-Solution for credit card, custom Omnitracs applications like CUPS and AUTO, security utilizing OID and OVD, and back-end integration with EBS (Data Guard) and the iQ Database. Business Impact Significant business impacts were realized through the launch of the customer portal. It not only allows the business to push through in underserved segments, but also reduces the time it needs to spend on customer support – allowing the business to focus more on sales and identifying the market for new products. Some of the immediate benefits are: The entire onboarding process is now completely automated and completes in minutes. This represents an 85% productivity improvement over their previous processes. And it was 160 times faster! With the success of this self-service solution, the business is now targeting about 3X customer growth in the next five years. This represents a tripling of their overall customer base and significant downstream revenue for the ongoing services. 90%+ improvement of the customer onboarding and management process by utilizing single sign-on integration using the OID/OAM solution, performance improvements and new self-service functionality. Unified login for all customers, partners and internal users enables login to a common portal and seamless access to all other integrated applications targeted at the respective audience. Significantly improved customer experience with a better look and feel and more user-experience-focused portal screens. 
Helped drive sales of the new product by providing an easy way to order and activate it.

Data Guard helped increase portal availability to 99%+ and made it independent of EBS downtime, giving customers the experience of a highly available portal application. (A minimal sketch of this read-only decoupling pattern follows below.)

Some of the anticipated longer-term benefits:

A platform that can be leveraged for any new product introduction, enabling all product teams to reach new customers and new markets.

Easy integration with content management, giving business owners more control over the product catalog.

Reduced overall TCO through standardization on the Oracle platform.

IT support cost savings through optimization of the technology skills needed to support and modify the solution.
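To make the decoupling idea concrete, here is a minimal, hypothetical sketch of portal code reading EBS-originated data from a read-only Data Guard standby rather than from the EBS primary. The JDBC URL, credentials, and table name are illustrative assumptions, not details of the Omnitracs implementation.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class StandbyOrderLookup {

        // Hypothetical JDBC URL for the read-only Data Guard standby. Reads
        // served here keep working even when the EBS primary is down.
        private static final String STANDBY_URL =
                "jdbc:oracle:thin:@//ebs-standby.example.com:1521/EBSSTBY";

        // Looks up an order's delivery status from replicated EBS data.
        // PORTAL_ORDERS is an assumed name for the replicated table.
        public static String findOrderStatus(String orderNumber) throws SQLException {
            String sql = "SELECT status FROM portal_orders WHERE order_number = ?";
            try (Connection conn = DriverManager.getConnection(STANDBY_URL, "portal_ro", "changeit");
                 PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, orderNumber);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("status") : null;
                }
            }
        }
    }

The design point is that the portal's read path never touches the EBS primary, so EBS maintenance windows do not translate into portal outages.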

    Read the article

  • jQuery Mobile Code Snippets 1

    - by Yousef_Jadallah
    I want to share some useful snippets that you may need during jQuery Mobile development. These snippets were tested against the Alpha 4 release. Beta 1 was released two days ago, so I will test them in my current project and let you know if anything changes.

Show and hide the back button throughout your application:

    $(document).bind("mobileinit", function () {
        $.mobile.page.prototype.options.addBackBtn = true;
    });

Customize the back button text:

    $(document).bind("mobileinit", function () {
        $.mobile.page.prototype.options.backBtnText = "previous";
    });

Hide the close button of a dialog programmatically:

    $('[data-role=dialog]div[id="YourDialogDivID"]').live('pagecreate', function (event) {
        $("a[data-icon='delete']").hide();
    });

Change the selected index of a select element:

    var myselect = $("select#foo");
    myselect[0].selectedIndex = 0; // the new index
    myselect.selectmenu("refresh"); // call refresh after any update to the select element

Change the text of a select element's button:

    $("select#foo").parent().contents().children('.ui-btn-text').text('Your Text Here');

Refresh a checkbox (the original post repeated the previous snippet here; the usual jQuery Mobile pattern is shown instead):

    $("input[type='checkbox']").attr("checked", true).checkboxradio("refresh");

Hide a select element:

    $('#foo').parent().hide();

Hide and show the page loading message:

    $.mobile.pageLoading();     // show
    $.mobile.pageLoading(true); // hide

Override $.mobile.loadingMessage:

    $(document).bind("mobileinit", function () {
        $.mobile.loadingMessage = 'My Loading Message';
    });

Hide and show the jQuery-Mobile-themed datepicker:

    $(".ui-datepicker").hide();
    $(".ui-datepicker").show();

Build your own custom loading message:

    $('#CustomLoadingMessage').hide(); // hide the message div initially
    $('#CustomLoadingMessage').ajaxStart(function () {
        $(this).show();
    });
    $('#CustomLoadingMessage').ajaxStop(function () {
        $(this).hide();
    });

I will publish other important snippets soon. Hope that helps.

    Read the article

  • Securing an ADF Application using OES11g: Part 1

    - by user12587121
    Future releases of the Oracle stack should allow ADF applications to be secured natively with Oracle Entitlements Server (OES). In a sequence of postings here I explore one way to achieve this with the current technology, namely OES 11.1.1.5 and ADF 11.1.1.6.

ADF Security Basics

ADF Basics
The Application Development Framework (ADF) is Oracle's preferred technology for developing GUI-based Java applications. It can be used to develop a UI for Swing applications or, more typically in the Oracle stack, for web and J2EE applications. ADF is based on and extends the JavaServer Faces (JSF) technology. To get an idea, Oracle provides an online demo showcasing the ADF components.

ADF can be used to develop just the UI part of an application where, for example, the data access layer is implemented using custom Java beans or EJBs. However, ADF also has its own data access layer, ADF Business Components (ADF BC), which allows rapid integration of data from databases and web service interfaces into the ADF UI components. In this way ADF supports an MVC approach to building applications with UI and data components.

The canonical ADF tutorial is to open JDeveloper, define a connection to a database, drag and drop a table from the database view onto a UI page, then build and deploy. One has an application up and running very quickly, with the ability to rapidly absorb changes to, for example, the DB schema. ADF allows web pages to be created graphically, and components like tables, forms, text fields, and graphs can easily be added to a page. On top of JSF, Oracle has added drag-and-drop tooling in JDeveloper and declarative binding of the UI to the data layer, be it a database, a web service, or Java beans. An important addition is the bounded task flow, a reusable set of pages and transitions. ADF adds steps to the page lifecycle defined by JSF and adds extra widgets, including powerful visualizations. It is worth pointing out that the Oracle WebCenter product (portal, content management, and so on) is based on and extends ADF.

ADF Security
ADF comes with its own security mechanism that is exposed by JDeveloper at development time and by the WLS Console and Enterprise Manager (EM) at run time. The security elements that need to be addressed in an ADF application are: authentication, authorization of access to web pages, task flows, components within pages, and data returned from the model layer.

One typically relies on WLS to handle authentication, and because of this, users and groups are also handled by WLS. In a dev environment, users and groups are typically stored in the WLS embedded LDAP server. When enabling ADF security (Application->Secure->Configure ADF Security) one has a choice of whether or not to turn on ADF authorization checking.

Where ADF authorization is enabled, one defines a set of roles, places users in those roles, and then grants the roles access to the various ADF elements (pages, task flows, or elements within a page). An important notion here is the difference between enterprise roles and application roles. An enterprise role is defined in terms of users and LDAP groups from the WLS identity store; "enterprise" in the sense that these roles are available to all applications that use the store. An application role, by contrast, is built by a given application from enterprise roles and users for its own use.
These application roles are available only to that application. The general idea is that enterprise roles are relatively static (for example, an Employees group in the LDAP directory) while application roles are more dynamic, possibly depending on time, location, the accessed resource, and so on. One of the things OES adds is that these dynamic membership conditions can be defined in Role Mapping Policies.

To make this concrete: at design time one assigns these rights in JDeveloper, which records them in a file called jazn-data.xml. When the ADF app is deployed to a WLS, this JAZN security data is pushed to the system-jazn-data.xml file of the WLS deployment for the policies and application roles, and to the WLS backing LDAP for the users and enterprise roles. Note the difference here: after deploying the application we will see the users and enterprise roles show up in the WLS LDAP server, but the policies and application roles are defined in the system-jazn-data.xml file. To manage users and enterprise roles, consult the embedded WLS LDAP server via the domain console under Security Realms->myrealm->Users and Groups.

For production environments (or, in future, to share this data with OES) one would then "reassociate" this security policy and application role data to a DB schema (or an LDAP). This is done in the EM console by reassociating the Security Provider. This blog posting has more explanation and references on the reassociation process.

If ADF authentication and authorization are enabled, then the security policies for a deployed application can be managed in EM. Our goal, however, is to manage the application's security policies via OES and its console instead.

Security Requirements for an ADF Application
With this quick tour of ADF security behind us, we can see that to secure an ADF application we would expect to take care of at least the following items (a minimal sketch of a programmatic check follows this list):

Authentication, including a user and user-group store

Authorization for page access

Authorization for bounded task flow access: a bounded task flow has only one point of entry, so if we protect that entry point with a call to OES, all the pages in the flow are protected

Authorization for viewing data coming from the data access layer

In the next posting we will describe a sample ADF application and the required security policies.
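For orientation, here is a minimal sketch of what a programmatic authorization check looks like in ADF, say inside a managed bean that decides whether a navigation item is rendered. ADFContext and SecurityContext are the standard ADF API; the role name "portal-admin" is a hypothetical application role, and this illustrates plain ADF security rather than the OES integration explored in this series.

    import oracle.adf.share.ADFContext;
    import oracle.adf.share.security.SecurityContext;

    public class AccessChecks {

        // True when the current user is authenticated and holds the
        // hypothetical "portal-admin" application role from jazn-data.xml.
        public boolean isAdminFlowVisible() {
            SecurityContext sec = ADFContext.getCurrent().getSecurityContext();
            return sec.isAuthenticated() && sec.isUserInRole("portal-admin");
        }
    }

The declarative equivalent in a page is the EL expression #{securityContext.userInRole['portal-admin']} bound to a component's rendered attribute.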
References

ADF Dev Guide: Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework, "Enabling ADF Security in a Fusion Web Application"

Oracle tutorial on securing a sample ADF application (appears to require ADF 11.1.2)

    Read the article

< Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >