Search Results

Search found 8021 results on 321 pages for 'resource monitoring'.

Page 125/321

  • Redirect root path on root domain to subdomain

    - by maknz
    Say there's a web application that runs on example.com. Would there be a penalty for 301-redirecting the root of the domain (example.com/) to subdomain.example.com in order to host the marketing website for the application? Obviously we would expect subdomain.example.com, not example.com, to be what is ranked in the search engine. We would want other paths on example.com, like example.com/path/to/resource, to be indexed normally and to be unaffected by the 301 on the root path.

    Read the article

  • An In-depth Look at Gentoo Linux

    Kernel News: "Imagine an Operating System that only includes the features that you actually want and use. An Operating System that is finely tuned to your computer hardware. One that doesn't include any resource-hogging applications that you don't need, such as "Desktop Search" or huge bloated software..."

    Read the article

  • How many views can be bound to a 2D texture at a time?

    - by Recker
    I am a newbie trying to learn DX11.x. While reading about resources and views on MSDN, I thought of this question: for a given 2D texture created with the ID3D11Texture2D interface (or, for that matter, any kind of resource), how many of the following views can be bound to it? 1) DepthStencilView 2) RenderTargetView 3) ShaderResourceView 4) UnorderedAccessView. Thanks in advance. PS: I know the answer may be app-specific, but any insight into this would still be helpful.

    Read the article

  • Examples of different architecture methodologies

    - by Lane
    Is there a resource or site which illustrates building the same application (desktop or web) using several different, contrasting architectures, such as MVP versus MVVM versus MVC? It would be very helpful to see how they look side by side using real-world code instead of comparing written theory to written theory. I've often found that something can be described well in a book, but when you go to implement it, the subtleties and weaknesses of the theory become readily apparent.

    Read the article

  • Where might a newbie programmer begin with game development? [closed]

    - by Ginnjii
    I just started picking up programming and I'd love to learn the ins and outs of game development, so if anyone could tell me where to begin I'd really appreciate it a lot. I'm interested in Flash games in particular for now. I have googled it, but I'm honestly lost, what with so much written on the subject, so a pointer in the right direction would be immensely helpful. Any site or resource on the subject would be great.

    Read the article

  • Venture Capital SEO (Search Engine Optimization)

    Back in the day, venture capitalists relied on tips, networking and brute force to find companies to invest in. Typically, more of their time was spent actively searching for leads than marketing their services. The internet changed all of that. Because it gives start-up companies a quick way to find firms that will invest in them, today's venture capitalists have been forced to change the way they do business - now they must spend more of their time advertising.

    Read the article

  • Where should you store variables for a search program in Java?

    - by Bored915
    I'm wondering which is the more effective storage method in Java. Would it be better to save variables that will not change in a class or in a resource file? They're going to be variables that contain a set amount of values, so that later a search program can go through the list of variables to recognize possible options. Also, if there is a more efficient method of storing them, please say so, or let me know if it doesn't matter where I store them.
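    A minimal sketch of the two options mentioned above, with hypothetical names (SearchOptions, options.properties): values that will never change can live as constants in a class, while values you may want to edit without recompiling can be loaded from a resource file on the classpath.
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.Arrays;
        import java.util.Collections;
        import java.util.LinkedHashSet;
        import java.util.Properties;
        import java.util.Set;

        public final class SearchOptions {

            // Option 1: fixed values compiled into the class; no I/O, simplest to search against.
            public static final Set<String> KNOWN_OPTIONS = Collections.unmodifiableSet(
                    new LinkedHashSet<String>(Arrays.asList("optionA", "optionB", "optionC")));

            private SearchOptions() {
                // utility class, not instantiable
            }

            // Option 2: values loaded from a classpath resource, e.g. loadFromResource("/options.properties"),
            // where the hypothetical file contains a line like: options=optionA,optionB,optionC
            public static Set<String> loadFromResource(String resourceName) throws IOException {
                Properties props = new Properties();
                InputStream in = SearchOptions.class.getResourceAsStream(resourceName);
                if (in == null) {
                    throw new IOException("Resource not found: " + resourceName);
                }
                try {
                    props.load(in);
                } finally {
                    in.close();
                }
                return new LinkedHashSet<String>(
                        Arrays.asList(props.getProperty("options", "").split(",")));
            }
        }
    Either way the search code just iterates over the returned set; the class constant avoids file I/O, while the resource file keeps the data out of the code, so for a small fixed list the difference in efficiency is negligible.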

    Read the article

  • Weird vps server issue

    - by anon-user0
    I have an unmanaged linux vps Ubuntu 11.10 (Oneiric Ocelot). I have LNMP installed. Also php-fpm php-apc, varnish, memcache. I have (or rather had) several live sites on it. under normal load the server uses ~700 mb memory. But since last night its using only 20mb~ memory and a lot of the services seems to be down (according to htop) I only see nginx working and mysql starts up and goes does every few minutes on a loop. Here are some information on the server that might help you help me: root@server:~# uname -a Linux server 2.6.18-308.el5.028stab099.3 #1 SMP Wed Mar 7 15:56:00 MSK 2012 i686 i686 i386 GNU/Linux - root@server:~# ifconfig -a lo Link encap:Local Loopback LOOPBACK MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:127.0.0.2 P-t-P:127.0.0.2 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 RX packets:12515 errors:0 dropped:0 overruns:0 frame:0 TX packets:9541 errors:0 dropped:1 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:7191214 (7.1 MB) TX bytes:536726 (536.7 KB) venet0:0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00 inet addr:176.31.158.78 P-t-P:176.31.158.78 Bcast:0.0.0.0 Mask:255.255.255.255 UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1 - root@server:~# netstat -l Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 *:http-alt *:* LISTEN tcp 0 0 *:ssh *:* LISTEN tcp6 0 0 [::]:http-alt [::]:* LISTEN tcp6 0 0 [::]:ssh [::]:* LISTEN Active UNIX domain sockets (only servers) Proto RefCnt Flags Type State I-Node Path unix 2 [ ACC ] STREAM LISTENING 9307368 @/com/ubuntu/upstart - htop: http://i.stack.imgur.com/NHKYX.png EDIT: Stressed. mind was not working adding log: root@server:~# less /var/log/syslog Jun 27 05:27:42 server syslogd 1.5.0#6ubuntu1: restart. 
Jun 27 05:39:01 server CRON[9298]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 05:40:01 server CRON[9463]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 05:46:21 server sm-msp-queue[9480]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=00:19:14, xdelay=00:06:18, mailer=relay, pri=122407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 05:52:39 server sm-msp-queue[9480]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=03:06:32, xdelay=00:06:18, mailer=relay, pri=842407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:00:01 server CRON[15671]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 06:06:22 server sm-msp-queue[15690]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=00:39:15, xdelay=00:06:18, mailer=relay, pri=212407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:09:01 server CRON[18114]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 06:12:40 server sm-msp-queue[15690]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=03:26:33, xdelay=00:06:18, mailer=relay, pri=932407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:20:02 server CRON[21888]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 06:26:22 server sm-msp-queue[21907]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=00:59:15, xdelay=00:06:18, mailer=relay, pri=302407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:27:02 server CRON[24021]: (root) CMD (cd / && run-parts --report /etc/cron.hourly) Jun 27 06:32:40 server sm-msp-queue[21907]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=03:46:33, xdelay=00:06:18, mailer=relay, pri=1022407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:39:01 server CRON[27941]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 06:40:02 server CRON[28110]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 06:46:22 server sm-msp-queue[28125]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=01:19:15, xdelay=00:06:18, mailer=relay, pri=392407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:52:40 server sm-msp-queue[28125]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=04:06:33, xdelay=00:06:18, mailer=relay, pri=1112407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 06:52:40 server sm-msp-queue[28125]: q5QMk7S9009582: q5R2e4uo028125: sender notify: Warning: could not send message for past 4 hours Jun 27 06:52:44 server sm-msp-queue[28125]: q5R2e4uo028125: to=root, delay=00:00:04, xdelay=00:00:04, mailer=relay, pri=33690, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:00:02 server CRON[1543]: 
(smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 07:06:21 server sm-msp-queue[1560]: q5R2e4uo028125: to=root, delay=00:13:41, xdelay=00:06:18, mailer=relay, pri=123690, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:09:01 server CRON[3986]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete) Jun 27 07:12:39 server sm-msp-queue[1560]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=01:45:32, xdelay=00:06:18, mailer=relay, pri=482407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:18:57 server sm-msp-queue[1560]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=04:32:50, xdelay=00:06:18, mailer=relay, pri=1202407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:20:02 server CRON[7760]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp) Jun 27 07:26:22 server sm-msp-queue[7775]: q5R2e4uo028125: to=root, delay=00:33:42, xdelay=00:06:18, mailer=relay, pri=213690, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:27:01 server CRON[9887]: (root) CMD (cd / && run-parts --report /etc/cron.hourly) Jun 27 07:32:40 server sm-msp-queue[7775]: q5R1R7Ue004056: to=root, ctladdr=root (0/0), delay=02:05:33, xdelay=00:06:18, mailer=relay, pri=572407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:38:58 server sm-msp-queue[7775]: q5QMk7S9009582: to=root, ctladdr=root (0/0), delay=04:52:51, xdelay=00:06:18, mailer=relay, pri=1292407, relay=[127.0.0.1] [127.0.0.1], dsn=4.0.0, stat=Deferred: Connection timed out with [127.0.0.1] Jun 27 07:39:01 server CRON[13813]: (root) CMD ([ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth : root@server:~# df -h Filesystem Size Used Avail Use% Mounted on /dev/simfs 20G 2.3G 18G 12% / - Jun 26 16:22:41 server varnishd[1413]: Child (32425) died signal=3 Jun 26 16:22:41 server varnishd[1413]: child (21687) Started Jun 26 16:22:41 server varnishd[1413]: Child (21687) said Child starts Jun 26 16:22:41 server varnishd[1413]: Child (21687) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824 Jun 26 16:34:28 server -- MARK -- Jun 26 16:54:29 server -- MARK -- Jun 26 17:14:29 server -- MARK -- Jun 26 17:34:29 server -- MARK -- Jun 26 17:54:29 server -- MARK -- Jun 26 18:14:29 server -- MARK -- Jun 26 18:34:29 server -- MARK -- Jun 26 18:54:29 server -- MARK -- Jun 26 19:14:29 server -- MARK -- Jun 26 19:34:29 server -- MARK -- Jun 26 19:54:29 server -- MARK -- Jun 26 20:14:29 server -- MARK -- Jun 26 20:34:29 server -- MARK -- Jun 26 20:48:12 server exiting on signal 15 Jun 26 20:51:58 server syslogd 1.5.0#6ubuntu1: restart. 
Jun 26 20:52:01 server varnishd[1324]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit Jun 26 21:11:58 server -- MARK -- Jun 26 21:31:58 server -- MARK -- Jun 26 21:51:58 server -- MARK -- Jun 26 22:11:58 server -- MARK -- Jun 26 22:31:58 server -- MARK -- Jun 26 22:51:58 server -- MARK -- Jun 26 23:11:58 server -- MARK -- Jun 26 23:31:58 server -- MARK -- Jun 26 23:51:58 server -- MARK -- Jun 27 00:11:58 server -- MARK -- Jun 27 00:23:42 server exiting on signal 15 Jun 27 02:21:10 server syslogd 1.5.0#6ubuntu1: restart. Jun 27 02:21:12 server varnishd[1341]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit Jun 27 02:41:10 server -- MARK -- Jun 27 02:46:41 server syslogd 1.5.0#6ubuntu1: restart. Jun 27 03:20:44 server syslogd 1.5.0#6ubuntu1: restart. Jun 27 03:20:46 server varnishd[1238]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit Jun 27 03:20:46 server varnishd[1238]: child (1239) Started Jun 27 03:20:46 server varnishd[1238]: Child (1239) said Child starts Jun 27 03:20:46 server varnishd[1238]: Child (1239) said SMF.s0 mmap'ed 1073741824 bytes of 1073741824 Jun 27 03:32:52 server exiting on signal 15 Jun 27 03:33:16 server syslogd 1.5.0#6ubuntu1: restart. Jun 27 03:33:31 server varnishd[1372]: Platform: Linux,2.6.18-308.el5.028stab099.3,i686,-sfile,-smalloc,-hcritbit Jun 27 03:53:16 server -- MARK -- Jun 27 04:13:16 server -- MARK -- Jun 27 04:33:16 server -- MARK -- Jun 27 04:53:16 server -- MARK -- Jun 27 05:13:16 server -- MARK -- Jun 27 05:27:42 server syslogd 1.5.0#6ubuntu1: restart. Jun 27 05:53:17 server -- MARK -- Jun 27 06:13:17 server -- MARK -- Jun 27 06:33:17 server -- MARK -- Jun 27 06:53:17 server -- MARK -- Jun 27 07:13:17 server -- MARK -- Jun 27 07:33:17 server -- MARK -- Jun 27 07:53:17 server -- MARK -- Jun 27 08:13:17 server -- MARK -- Jun 27 08:33:17 server -- MARK -- Jun 27 08:53:17 server -- MARK -- Jun 27 09:13:17 server -- MARK -- Jun 27 09:33:17 server -- MARK -- Jun 27 09:53:17 server -- MARK -- Jun 27 10:13:17 server -- MARK -- Jun 27 10:33:17 server -- MARK -- Jun 27 10:53:17 server -- MARK -- Jun 27 11:13:17 server -- MARK -- Jun 27 11:33:17 server -- MARK -- Jun 27 11:53:18 server -- MARK -- Jun 27 12:13:18 server -- MARK -- Jun 27 12:33:18 server -- MARK -- Jun 27 12:53:18 server -- MARK -- Jun 27 13:13:18 server -- MARK -- Jun 27 13:33:18 server -- MARK -- Jun 27 13:53:18 server -- MARK -- Jun 27 14:13:18 server -- MARK -- Jun 27 14:33:18 server -- MARK -- Jun 27 14:53:18 server -- MARK -- -- root@server:~# cat /var/log/nginx/error.log 2012/06/27 03:32:54 [alert] 1199#0: worker process 1203 exited on signal 9 2012/06/27 03:32:54 [alert] 1199#0: worker process 1200 exited on signal 9 2012/06/27 03:32:54 [alert] 1199#0: worker process 1201 exited on signal 9 2012/06/27 03:32:54 [alert] 1199#0: worker process 1202 exited on signal 9 root@server:~# cat /var/log/nginx/access.log 31.210.99.87 - - [27/Jun/2012:09:09:08 +0400] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 172 "-" "-" 88.191.138.103 - - [27/Jun/2012:13:27:08 +0400] "GET /cms/cmx.jsp HTTP/1.1" 301 184 "-" "-" 88.191.138.103 - - [27/Jun/2012:13:27:08 +0400] "GET /iesvc/iesvc.jsp HTTP/1.1" 301 184 "-" "-" 88.191.138.103 - - [27/Jun/2012:13:27:08 +0400] "GET /cmd2/index.jsp HTTP/1.1" 301 184 "-" "-" 88.191.138.103 - - [27/Jun/2012:13:27:09 +0400] "GET /cmd/index.jsp HTTP/1.1" 301 184 "-" "-" 58.97.147.197 - - [27/Jun/2012:17:17:19 +0400] "GET / HTTP/1.1" 301 184 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) 
AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5" 58.97.147.197 - - [27/Jun/2012:17:17:37 +0400] "GET / HTTP/1.1" 301 184 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.56 Safari/536.5" 58.97.147.197 - - [27/Jun/2012:17:17:38 +0400] "-" 400 0 "-" "-" 58.97.147.197 - - [27/Jun/2012:17:17:38 +0400] "-" 400 0 "-" "-" 58.97.147.197 - - [27/Jun/2012:17:17:48 +0400] "-" 400 0 "-" "-" - root@server:~# cat /var/log/daemon.log Jun 26 20:48:10 server xinetd[1177]: Exiting... Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/daytime [file=/etc/xinetd.d/daytime] [line=28] Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/discard [file=/etc/xinetd.d/discard] [line=26] Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/echo [file=/etc/xinetd.d/echo] [line=25] Jun 26 20:51:58 server xinetd[1174]: Reading included configuration file: /etc/xinetd.d/time [file=/etc/xinetd.d/time] [line=26] Jun 26 20:51:58 server xinetd[1174]: removing chargen Jun 26 20:51:58 server xinetd[1174]: removing chargen Jun 26 20:51:58 server xinetd[1174]: removing daytime Jun 26 20:51:58 server xinetd[1174]: removing daytime Jun 26 20:51:58 server xinetd[1174]: removing discard Jun 26 20:51:58 server xinetd[1174]: removing discard Jun 26 20:51:58 server xinetd[1174]: removing echo Jun 26 20:51:58 server xinetd[1174]: removing echo Jun 26 20:51:58 server xinetd[1174]: removing time Jun 26 20:51:58 server xinetd[1174]: removing time Jun 26 20:51:58 server xinetd[1174]: xinetd Version 2.3.14 started with libwrap loadavg options compiled in. Jun 26 20:51:58 server xinetd[1174]: Started working: 0 available services Jun 26 20:52:01 server vnstatd[1330]: vnStat daemon 1.11 started. Jun 26 20:52:01 server vnstatd[1330]: Monitoring: venet0 Jun 27 00:23:41 server xinetd[1174]: Exiting... Jun 27 02:21:12 server vnstatd[1349]: vnStat daemon 1.11 started. Jun 27 02:21:12 server vnstatd[1349]: Monitoring: venet0 Jun 27 03:20:44 server xinetd[1166]: attribute: disable should not be in default section [file=/etc/xinetd.conf] [line=12] Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/chargen [file=/etc/xinetd.conf] [line=15] Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/daytime [file=/etc/xinetd.d/daytime] [line=28] Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/discard [file=/etc/xinetd.d/discard] [line=26] Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/echo [file=/etc/xinetd.d/echo] [line=25] Jun 27 03:20:44 server xinetd[1166]: Reading included configuration file: /etc/xinetd.d/time [file=/etc/xinetd.d/time] [line=26] Jun 27 03:20:44 server xinetd[1166]: removing chargen Jun 27 03:20:44 server xinetd[1166]: removing chargen Jun 27 03:20:44 server xinetd[1166]: removing daytime Jun 27 03:20:44 server xinetd[1166]: removing daytime Jun 27 03:20:44 server xinetd[1166]: removing discard Jun 27 03:20:44 server xinetd[1166]: removing discard Jun 27 03:20:44 server xinetd[1166]: removing echo Jun 27 03:20:44 server xinetd[1166]: removing echo Jun 27 03:20:44 server xinetd[1166]: removing time Jun 27 03:20:44 server xinetd[1166]: removing time Jun 27 03:20:44 server xinetd[1166]: xinetd Version 2.3.14 started with libwrap loadavg options compiled in. 
Jun 27 03:20:44 server xinetd[1166]: Started working: 0 available services Jun 27 03:20:46 server vnstatd[1249]: vnStat daemon 1.11 started. Jun 27 03:20:46 server vnstatd[1249]: Monitoring: venet0 Jun 27 03:32:41 server xinetd[1166]: Exiting... Jun 27 03:33:32 server vnstatd[1380]: vnStat daemon 1.11 started. Jun 27 03:33:32 server vnstatd[1380]: Monitoring: venet0 root@server:~# - Anything else you need let me know

    Read the article

  • Proactive Support Sessions at OUG London and OUG Ireland

    - by THE
    Oracle Proactive Support Technology is proud to announce that two members of its team will be speaking at the UK and Ireland User Group Conferences this year. Maurice and Greg plan to run the following sessions (may be subject to change):
    Maurice Bauhahn: OUG Ireland BI & EPM and Technology Joint SIG Meeting, 20 November 2012, BI&EPM SIG event in Ireland (09:00-17:00), and OUG London EPM & Hyperion Conference 2012, Tuesday 23rd to Wednesday 24th Oct 2012.
    Session 1: Profit from Oracle Diagnostic Tools Embedded in EPM. Oracle bundles in many of its software suites valuable toolsets to collect logs and settings, slice/dice error messages, track performance, and trace activities across services. Become familiar with several enterprise-level diagnostic tools embedded in Enterprise Performance Management (Enterprise Manager Fusion Middleware Control, Remote Diagnostic Agent, Dynamic Monitoring Service, and Oracle Diagnostic Framework). Expedite resolution of Service Requests as you learn to upload output from these tools to My Oracle Support. Who will benefit from attending the session? Geeks will find this most beneficial, but anyone who raises Oracle technical service requests will learn valuable pointers that may speed resolution. The focus is on the EPM stack, but this session will benefit almost everyone who needs to drill deeper into Oracle software environments. What will delegates learn from the session? How to access and run Remote Diagnostic Agent, Enterprise Manager Fusion Middleware Control, Dynamic Monitoring Service, and Oracle Diagnostic Framework; how to exploit the strengths of each tool; how to pass the outputs to My Oracle Support; and how to restrict exposure of sensitive information.
    Session 2: Using EPM-Specific Troubleshooting Tools. EPM developers have created a number of EPM-specific tools to collect logs and configuration files, centralize configuration information, and validate a configured installation (Ziplogs, EPM Registry Editor, Deployment Report, Registry Cleanup Utility, Reset Configuration Tool, EPMSYS Hostname Check, and Validate [EPM System Diagnostic]). Learn how to use these tools on your own or to expedite Service Request resolution. Who will benefit from attending the session? Anyone who monitors Oracle EPM environments or raises service requests will learn valuable lessons that could speed resolution of those requests. Anyone from novices to experts will benefit from this review of custom EPM troubleshooting tools. What will delegates learn from the session? Where to locate and start the EPM troubleshooting tools created by EPM developers; how to collect and upload their outputs; how these tools have changed across time and version; and how to make critical changes in configurations.
    Grzegorz Reizer: OUG London EPM & Hyperion Conference 2012, Tuesday 23rd to Wednesday 24th Oct 2012. Session: EPM 11.1.2.2: Detailed overview of new features and improvements in Financial Management products.
    This presentation is a detailed overview of the new features and improvements introduced in Enterprise Performance Management 11.1.2.2 for Financial Management products (Hyperion Financial Management, Hyperion Planning, Financial Close Management). The presentation will cover a number of new product features, from the recently introduced configurable dimensionality in HFM to new functionality enhancements in Planning. We'll close the session with an overview of upgrade options from earlier product releases.

    Read the article

  • Oracle Social Network and the Flying Monkey Smart Target

    - by kellsey.ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. I teased this before OpenWorld, and for those of you who didn’t make it to the show or didn’t come by the Office Hours to take the Oracle Social Network Technical Tour Noel (@noelportugal) ran, I give you the Flying Monkey Smart Target. In brief, Noel built a target, about two feet tall, which when struck, played monkey sounds and posted a comment to an Oracle Social Network Conversation, all controlled by a Raspberry Pi. He also connected a Dropcam to record the winner just prior to the strike. I’m not sure how it all works, but maybe Noel can post the technical specifics. Here’s Noel describing the Challenge, the Target and a few other tidbits in an interview with Friend of the ‘Lab, Bob Rhubart (@brhubart). The monkey target bits are 2:12-2:54 if you’re into brevity, but watch the whole thing.
    Here are some screen grabs from the Oracle Social Network Conversation, including the Conversation itself, where you can see all the strikes documented, the picture captured, and the annotation capabilities. That’s Diego in one shot, looking very focused, and Ernst in the other, who kindly annotated himself, two of the development team members. You might have seen them in the Oracle Social Network Hands-On Lab during the show.
    There’s a trend here. Not by accident, fun stuff like this has become our calling card, e.g. the Kscope 12 WebCenter Rock ‘em Sock ‘em Robots. Not only are these entertaining demonstrations, but they showcase what’s possible with RESTful APIs and get developers noodling on how easy it is to connect real objects to cloud services to fix pain points. I spoke to some great folks from the City of Atlanta about extending the concepts of the flying monkey target to physical asset monitoring. Just take an internet-connected camera with REST APIs like the Dropcam, wire it up to Oracle Social Network, and you can hack together a monitoring device for a datacenter or a warehouse. Sure, it’s easier said than done, but we’re a lot closer to that reality than we were even two years ago.
    Another noteworthy bit from Noel’s interview, beginning at 2:55, is the evolution of the social developer. Speaking of which, make sure to check out the Oracle Social Developer Community. Look for more on the social developer in the coming months.
    Noel has become quite the Raspberry Pi evangelist, and why not; it’s a great tool, a low-power Linux machine, cheap ($35!) and highly extensible, perfect for makers and students alike. He attended a meetup on Saturday before OpenWorld, and during the show, I heard him evangelizing the Pi and its capabilities to many people. There is some fantastic innovation forming in that ecosystem, much of it with Java. The OTN gang raffled off five Pis, and I expect to see lots of great stuff in the very near future.
    Stay tuned this week for posts on all our Challenge entrants. There’s some great innovation you won’t want to miss. Find the comments.
    Update: I forgot to mention that Noel used Twilio, one of his favorite services, during the show to send out Challenge updates and information to all the contestants.

    Read the article

  • Monitor System Resources from the Windows 7 Taskbar

    - by Asian Angel
    The problem with most system monitoring apps is that they get covered up with all of your open windows, but you can solve that problem by adding monitoring apps to the Taskbar.
    Setting Up & Using SuperbarMonitor. All of the individual monitors and the .dll files necessary to run them come in a single zip file for your convenience. Simply unzip the contents, add them to an appropriate “Program Files Folder”, and create shortcuts for the monitors that you would like to use on your system. For our example we created shortcuts for all five monitors and set the shortcuts up in their own “Start Menu Folder”. You can see what the five monitors (Battery, CPU, Disk, Memory, & Volume) look like when running…they are visual in appearance without text to clutter up the looks. The monitors use colors (red, green, & yellow) to indicate the amount of resources being used for a particular category. Note: Our system is desktop-based but the “Battery Monitor” was shown for the purposes of demonstration…thus the red color seen here.
    Hovering the mouse over the “Battery, CPU, Disk, & Memory Monitors” on our system displayed a small blank thumbnail. Note: The “Battery Monitor” may or may not display more when used on your laptop. Going one step further and hovering the mouse over the thumbnails displayed a small blank window. There really is nothing that you will need to worry with outside of watching the color for each individual monitor. Nice and simple! The one monitor with extra features on the thumbnail was the “Volume Monitor”. You can turn the volume down, up, on, or off from here…definitely useful if you have been wanting to hide the “Volume Icon” in the “System Tray”.
    You can also pin the monitors to your “Taskbar” if desired. Keep in mind that if you do close any of the monitors they will “temporarily” disappear from the “Taskbar” until the next time they are started. Note: If you want the monitors to start with your system each time you will need to add the appropriate shortcuts to the “Startup Sub-menu” in your “Start Menu”.
    Conclusion: If you have been wanting a nice visual way to monitor your system’s resources then SuperbarMonitor is definitely worth trying out.
    Links: Download SuperbarMonitor

    Read the article

  • First Day of Data Integration Track at Oracle OpenWorld 2012

    - by Irem Radzik
    OpenWorld started at full speed for us today with a great set of sessions in the Data Integration track. After the exciting keynote session on Oracle Database 12c in the morning, Brad Adelberg, VP of Development for Data Integration products, presented Oracle’s data integration product strategy. His session highlighted the new requirements for data integration to achieve pervasive and continuous access to trusted data. The new requirements and product focus areas presented in this session are: provide access to any data at any source, on premise or in the cloud; enable zero-downtime operations and maximum performance; leverage real-time data for accurate business insights; and ensure high-quality data is used across the enterprise. During the session Brad walked through how Oracle’s data integration products - Oracle Data Integrator, Oracle GoldenGate, Oracle Enterprise Data Quality, and Oracle Data Service Integrator - deliver on these requirements and how recent product releases build on this strategy.
    Soon after Brad’s session we heard from a panel of Oracle GoldenGate customers - St. Jude Medical, Equifax, and Bank of America - on how they achieved zero-downtime operations using Oracle GoldenGate. The panel presented different use cases of GoldenGate, from Active-Active replication to offloading reporting. St. Jude Medical’s implementation in particular, which involves the alert management system for patients who use their pacemakers, reminded me that in some cases downtime of mission-critical systems can be a matter of life or death. It is very comforting to hear that GoldenGate delivers highly reliable continuous availability for life-saving medical systems. In the afternoon, Nick Wagner from the Product Management team and I followed the customer panel with a review of Oracle GoldenGate 11gR2’s new features. Many questions we received from the audience were about GoldenGate’s new Integrated Capture for Oracle Database and the enhanced Conflict Management features, as well as how GoldenGate compares to Oracle Streams. In addition to giving details on GoldenGate’s unique capability to capture changed data with a direct integration to the Oracle DBMS engine, we reminded the audience that enhancements to Oracle GoldenGate will continue, while Streams will be primarily maintained.
    Last but not least, Tim Garrod and Ryan Fonnett from Raymond James presented a unified real-time data integration solution using Oracle Data Integrator and GoldenGate for their operational data store (ODS). The ODS supports application services across the enterprise, and providing timely data is a critical requirement. In this solution, Oracle GoldenGate does the log-based change data capture for Oracle Data Integrator’s near real-time data integration between heterogeneous systems. As Raymond James’ ODS supports mission-critical services for their advisors, the project team had to set up this integration environment to be highly available. During the session, Ryan and Tim explained how they use ODI to enable automated process execution and “always-on” integration processes. Their presentation included two demonstrations that focused on CDC patterns deployed with ODI and the automated multi-instance execution and monitoring. We are very grateful to Tim and Ryan for their very well prepared presentation at OpenWorld this year. Day 2 (Tuesday) will also be a busy day in our track.
    In addition to the Fusion Middleware Innovation Awards ceremony at 11:45am at Moscone West 3001, we have the following DI sessions:
    Real-World Operational Reporting Customer Panel - 11:45am, Moscone West 3005
    Oracle Data Integrator Product Update and Future Strategy - 1:15pm, Moscone West 3005
    High-volume OLTP with Oracle GoldenGate: Best Practices from Comcast - 1:15pm, Moscone West 3005
    Everything You Need to Know about Monitoring Oracle GoldenGate - 5pm, Moscone West 3005
    If you are at OpenWorld, please join us in these sessions. For a full review of the data integration track at OpenWorld, please see our Focus-On document.

    Read the article

  • How to create (via installer script) a task that will install my bash script so it runs on DE startup?

    - by MountainX
    I've been reading for the last couple of hours about Upstart, .xinitrc, .xsessions, rc.local, /etc/init.d/, /etc/xdg/autostart, @reboot in crontab and so many other things that I'm totally confused! Here is my bash script. It should start/run after the desktop environment is started and it should continue to run at all times until logout/shutdown. It should start again on reboot. Any time the DE is running, it should run.
    #!/bin/bash
    while true; do
        if [[ -s ~/.updateNotification.txt ]]; then
            read MSG < ~/.updateNotification.txt
            kdialog --title 'The software has been updated' --msgbox "$MSG"
            cat /dev/null > ~/.updateNotification.txt
        fi
        sleep 3600
    done
    exit 0
    I know zero about using Upstart, but I understand that Upstart is one way to handle this. I'll consider other approaches, but most of the things I've been reading about are too complex for me. Furthermore, I can't figure out which approach will meet my requirements (which I'll detail below). There are two steps in my question: 1. How to automatically start the script above, as described above. 2. How to "install" that Upstart task via a bash script (i.e., my "installer"). I assume (or hope) that step 2 is almost trivial once I understand step 1. I have to support all flavors of Ubuntu desktops. Therefore, the kdialog call above will be replaced. I'm considering easybashgui for this. (Or I could use zenity on gnome DE's.)
    My requirements are: The setup process (installation) must be done via a bash script. I cannot use the GUI method described in the Ubuntu doc AddingProgramToSessionStartup, for example. I must be able to script/automate the setup (installing) process using bash. Currently, it is as simple as having the bash installer script copy the above script into /home/$USER/.kde/Autostart/. The setup process must be universal across Ubuntu derivatives, including Unity, KDE and gnome desktops. The same setup script (installer) should run on Linux Mint, Kubuntu, Xubuntu (basically any flavor of Ubuntu and major derivatives such as Linux Mint). For example, we cannot continue to put a script file in /home/$USER/.kde/Autostart/ because that exists only on KDE. The above script should work for each of the limited flavors we use; hence our interest in using easybashgui instead of kdialog or zenity. See below. The installed monitoring script should only be started after the desktop is started, since it will display a GUI message to the user if an update is found. The monitoring script (above) should run without root privileges, of course, but the installer (bash script) can be run as root. I'm not a real developer or a sysadmin. This is a part-time volunteer thing for me, so it needs to be easy/simple. I can write bash scripts and I can program a little, but I know nothing about Upstart or systemd, for example. And, unfortunately, my job doesn't give me time to become an expert on init systems or much of anything else related to development and sysadmin. So I have to stick with simple solutions. The easybashgui version of the script might look like this:
    #!/bin/bash
    source easybashgui
    while true; do
        if [[ -s ~/.updateNotification.txt ]]; then
            read MSG < ~/.updateNotification.txt
            message "$MSG"
            cat /dev/null > ~/.updateNotification.txt
        fi
        sleep 3600
    done
    exit 0

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) Now Available!

    - by Javier Puerta
    Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is now available on OTN on ALL platforms. This is the first major release since the launch of Enterprise Manager 12c in October of 2011 and the first ever Enterprise Manager release available on all platforms simultaneously. This is primarily a stability release, which incorporates fixes for many of the issues and much of the feedback reported by early adopters. In addition, this release contains many new features and enhancements in areas across the board.
    New Capabilities and Features:
    Enhanced management capabilities for enterprise private clouds: introduces new capabilities to allow customers to build and manage a Java Platform-as-a-Service (PaaS) cloud based on Oracle WebLogic Server. The new capabilities include guided set-up of the PaaS cloud, self-service provisioning, automatic scale-out, and metering and chargeback.
    Enhanced lifecycle management capabilities for Oracle WebLogic Server environments: combining in-context multiple-domain management, patching and configuration file synchronizations.
    Integrated hardware-software management for Oracle Exalogic Elastic Cloud through features such as rack schematics visualization and integrated monitoring of all hardware and software components.
    The latest management capabilities for business-critical applications include: Business Application Management: a new Business Application (BA) target type and dashboard with flexible definitions provides a logical view of an application’s business transactions, end-user experiences and the cloud infrastructure the monitored application is running on. Enhanced User Experience Reporting: Oracle Real User Experience Insight has been enhanced to provide reporting capabilities on client-side issues for applications running in the cloud and has been more tightly coupled with Oracle Business Transaction Management to help ensure that real-time user experience and transaction tracing data is provided to users in context.
    Several key improvements address ease of administration, reporting and extensibility for massively scalable cloud environments, including dynamic groups, self-updateable monitoring templates, bulk operations against many events, etc.
    New and Revised Plug-Ins: Several plug-ins have been updated as a part of this release, resulting in either new versions or revisions. Revised plug-ins contain only bug fixes, while new plug-ins incorporate both bug fixes and new functionality.
    Enterprise Manager for Oracle Database - 12.1.0.2 (revision)
    Enterprise Manager for Oracle Fusion Middleware - 12.1.0.3 (new)
    Enterprise Manager for Chargeback and Capacity Planning - 12.1.0.3 (new)
    Enterprise Manager for Oracle Fusion Applications - 12.1.0.3 (new)
    Enterprise Manager for Oracle Virtualization - 12.1.0.3 (new)
    Enterprise Manager for Oracle Exadata - 12.1.0.3 (new)
    Enterprise Manager for Oracle Cloud - 12.1.0.4 (new)
    Installation and Upgrade: All major platforms have been released simultaneously (Linux 32/64-bit, Solaris (SPARC), Solaris x86-64, IBM AIX 64-bit, and Windows x86-64 (64-bit)). Enterprise Manager 12.1.0.2 is a complete release that includes both the EM OMS and Agent versions of 12.1.0.2. Installation options available with EM 12.1.0.2: users can do a fresh install or an upgrade from EM 10.2.0.5, 11.1, or 12.1.0.1 (Bundle Patch 1 not mandatory). Upgrading to EM 12.1.0.2 from EM 12.1.0.1 is not a patch application (similar to Bundle Patch 1) but is achieved through a 1-system upgrade.
    Documentation: the Oracle Enterprise Manager Cloud Control Introduction Document provides a broad overview of capabilities and highlights "What's New" in EM 12.1.0.2. All updated Oracle Enterprise Manager documentation can be found on OTN.
    Customer Webcast - EM 12c Installation and Upgrade: this webcast is for customers who are interested in learning how to successfully deploy or upgrade to EM 12.1.0.2. Customer Webcast - Installation and Upgrade - September 21 (registration and info on OTN starting September 12).
    Enterprise Manager 12c R2 Resources: OTN Download Page, Upgrade Guide.

    Read the article

  • Taking Your Business Scorecard Golfing

    - by tobyehatch
    Our workplace world is definitely changing. Not only are we taking work home, but we are working during odd hours in some very strange places.  I had the pleasure of interviewing Jacques Vigeant, Product Strategy Manager for Oracle Business Intelligence and Enterprise Performance Management, on a Podcast, and he enlightened me about how our mobile devices and business scorecards are enabling us to be more accountable and keep a watchful eye on business – even while on the golf course.Business scorecards have been around for many years - so I asked Jacques if he felt they had changed significantly due to technology. His answer was, “Yes, and no.”  Jacques agreed that scorecard enthusiasts are still passionate about executing the company strategy and monitoring Key Performance Indicators (KPIs), but scorecards and Business Intelligence (BI) as a whole have changed.  He explained that five to six years ago, people did BI work at the office and, for the most part, disconnected from their computer and workplace when they went home – with the exception of checking email and making a phone call or two. But now, that is no longer the case. People are virtually always connected with work and, more importantly, expect their BI and scorecards to be ‘always on,’ regardless of whether they are at their desk or somewhere else.Basically, the BI paradigm has changed from a 'pull' model, where employees are at their desks querying or pulling information from the system, to a 'push' model where employees expect their BI and scorecard systems to reach out (or push information) to them when there is something of note to learn or something on which they need to take action. I found this very interesting. However mobile devices do have their limitations with respect to screen sizes – does it really make sense to look at your strategy/scorecard on tiny devices? What kind of scorecard activities can you really expect to be able to do? Jacques’ answer was very logical. “When you think of a scorecard, it is really comprised of an organization of KPIs that are aligned with the strategic objectives of your company. KPIs are the heart of how you will execute your strategy. So, if you decompose that a little more, each KPI is well defined with the thresholds that you should keep an eye on and who is responsible for them. When we talk about scorecarding on a phone, we aren’t talking about surfing the strategy and exploring the strategy map like we do on the desktop. In a scorecarding context, we use the phone more as an alerting mechanism or simple monitoring device for your KPIs.”Jacques gave a great example of an inventory manager who took part of an afternoon off to go golfing before winter finally hit, and while on the front nine holes, his phone vibrated. His scorecard was alerting him that the inventory levels for one of the products was below some threshold that he had set.  From his phone, he had set up three options within Oracle Scorecard and Strategy Management (OSSM) for this type of situation:  1. Contact the warehouse manager directly by phone and work it out (standard phone function)  2. Tap/hold the KPI and add an annotation to the KPI in OSSM using the dictation capabilities of the phone and deal with it more fully when he gets back to the office  3. 
Tap/hold the KPI and invoke a business process from OSSM to transfer product from another warehouse with higher stock levels to the one that needs it  Being on a phone should still give you options to quickly deal with situations as needed, but mobile phones are not designed for nor should try to replicate the full desktop experience. We covered other interesting subjects in the interview, including how Oracle is keeping pace with mobile innovation and new devices such as Google Glasses, Galaxy Gear, Pebble Watches and more, and how Oracle is handling mobile security– which is great news for our mobile workforce. To listen to the entire Podcast, click here.To learn more about Oracle Scorecard and Strategy Management, click here.

    Read the article

  • Oracle OpenWorld Update: Demo Pods and Hands-on Labs

    - by Doug Reid
    Less than one week away until the start of Oracle OpenWorld 2012 and the Data Integration Solutions team is ready to go! We have an exciting line-up for you this year, which we have summarized in the Oracle OpenWorld Focus on Data Integration Solutions document. In past posts we have discussed session themes and our customer panel, but today I would like to summarize the Hands-on Labs and Demo Pods that we have available for attendees. For Oracle GoldenGate Hands-On Labs we have two labs that we are running this year.
    Deep Dive into Oracle GoldenGate - Thursday October 4th at 11:15AM in the Marriott Marquis Salon 1/2. Oracle GoldenGate provides real-time log-based change data capture and delivery between heterogeneous systems. It enables cost-effective, low-impact, real-time data integration and continuous availability solutions. This session covers Oracle GoldenGate 11g’s internal product architecture and includes a hands-on lab that covers configuration examples for target database instantiation and real-time change data capture and delivery. The participants will configure Oracle GoldenGate to instantiate a secondary database that can be used for disaster recovery or a reporting instance. Come learn how easy it is to use and how this can be a very valuable and easy technology solution for your organization.
    Introduction to Oracle GoldenGate Veridata - Wednesday October 3rd at 10:15AM in the Marriott Marquis Salon 1/2. Oracle GoldenGate Veridata compares one set of data with another and identifies data that is out of synchronization. In this hands-on lab, you will be introduced to the key features of this product. Using the Oracle GoldenGate Veridata Web client, you will have the opportunity to configure comparison objects and rules, initiate a comparison, review the status and output of a comparison, and review out-of-sync data. As a bonus this year, we have recorded the labs and made them available on youtube.com/oraclegoldengate. These will be available the day of the labs.
    Our demo pods are an opportunity for attendees to see our products, but more so to meet the product management and development teams. I would like to point out that we have two Oracle GoldenGate 11gR2 demo pods, one in the database camp and the other in the middleware camp. The one in the middleware camp will be focused on all platforms, while the one in the database camp will have a focus on the Oracle platform. The other two I would like to point out are the Monitoring Oracle GoldenGate and the Oracle Enterprise Manager demo pods; both of these pods will focus on methods to monitor GoldenGate, but the OEM demo pod will have a specific focus on the Oracle GoldenGate Management Pack plug-in for OEM. Below is a list of our demo pods and their locations.
    Monitoring Oracle GoldenGate for End-to-End Visibility - Moscone South, Right - S-241
    Oracle Data Integrator and Oracle GoldenGate for Oracle Applications - Moscone South, Right - S-240
    Oracle GoldenGate 11gR2 New Features - Moscone South, Right - S-239
    Oracle GoldenGate 11gR2: Real-Time, Transactional Database Replication - Moscone South, Left - S-027
    Oracle GoldenGate Veridata and Adapters - Moscone South, Right - S-242
    Oracle Enterprise Manager - Moscone South, Left - S-040
    Keep tuned to our blog during the show for news and highlights from the Data Integration Solutions team. See you there.

    Read the article

  • Musings on the launch of SQL Monitor

    - by Phil Factor
    For several years, I was responsible for the smooth running of a large number of enterprise database servers. We ran a network monitoring tool that was primitive by today’s standards but which performed the useful function of polling every system, including all the Servers in my charge. It ran a configurable script for each service that you needed to monitor that was merely required to return one of a number of integer values. These integer values represented the pain level of the service, from 10 (“hurtin’ real bad”) to 1 (“Things is great”). Not only could you program the visual appearance of each server on the network diagram according to the value of the integer, but you could even opt to run a sound file. Very soon, we had a large TFT Screen, high on the wall of the server room, with every server represented by an icon, and a speaker next to it that would give out a series of grunts, groans, snores, shrieks and funeral marches, depending on the problem. One glance at the display, and you could dive in with iSQL/QA/SSMS and check what was going on with your favourite diagnostic tools. If you saw a server icon burst into flames on the screen or droop like a jelly, you dropped your mug of coffee to do it.  It was real fun, but I remember it more for the huge difference it made to have that real-time visibility into how your servers are performing. The management soon stopped making jokes about the real reason we wanted the TFT screen. (It rendered DVDs beautifully they said; particularly flesh-tints). If you are instantly alerted when things start to go wrong, then there was a good chance you could fix it before being alerted to the problem by the users of the system.  There is a world of difference between this sort of tool, one that gives whoever is ‘on watch’ in the server room the first warning of a potential problem on one of any number of servers, and the breed of tool that attempts to provide some sort of prosthetic DBA Brain. I like to get the early warning, to get the right information to help to diagnose a problem: No auto-fix, but just the information. I prefer to leave the task of ascertaining the exact cause of a problem to my own routines, custom code, intuition and forensic instincts. A simulated aircraft cockpit doesn’t do anything for me, especially before I know where I should be flying.  Time has moved on, and that TFT screen is now, with SQL Monitor, an iPad or any other mobile or static device that can support a browser. Rather than trying to reproduce the conceptual topology of the servers, it lists them in their groups so as to give a display that scales with the increasing number of databases you monitor.  It gives the history of the major events and trends for the servers. It gives the icons and colours that you can spot out of the corner of your eye, but goes on to give you just enough information in drill-down to give you a much clearer idea of where to look with your DBA tools and routines. It doesn't swamp you with information.  Whereas a few server and database-level problems are pretty easily fixed, others depend on judgement and experience to sort out.  Although the idea of an application that automates the bulk of a DBA’s skills is attractive to many, I can’t see it happening soon. SQL Server’s complexity increases faster than the panaceas can be created. 
In the meantime, I believe that the best way of helping  DBAs  is to make the monitoring process as simple and effective as possible,  and provide the right sort of detail and ‘evidence’ to allow them to decide on the fix. In the end, it is still down to the skill of the DBA.

    Read the article

  • Spring 3 AOP annotated advice

    - by Art79
    Trying to figure out how to proxy my beans with AOP advice in an annotated way. I have a simple class:
    @Service
    public class RestSampleDao {
        @MonitorTimer
        public Collection<User> getUsers(){
            ....
            return users;
        }
    }
    I have created a custom annotation for monitoring execution time:
    @Target({ ElementType.METHOD, ElementType.TYPE })
    @Retention(RetentionPolicy.RUNTIME)
    public @interface MonitorTimer { }
    and an advice to do some fake monitoring:
    public class MonitorTimerAdvice implements MethodInterceptor {
        public Object invoke(MethodInvocation invocation) throws Throwable {
            try {
                long start = System.currentTimeMillis();
                Object retVal = invocation.proceed();
                long end = System.currentTimeMillis();
                long differenceMs = end - start;
                System.out.println("\ncall took " + differenceMs + " ms ");
                return retVal;
            } catch (Throwable t) {
                System.out.println("\nerror occurred");
                throw t;
            }
        }
    }
    Now I can use it if I manually proxy the instance of the DAO like this:
    AnnotationMatchingPointcut pc = new AnnotationMatchingPointcut(null, MonitorTimer.class);
    Advisor advisor = new DefaultPointcutAdvisor(pc, new MonitorTimerAdvice());
    ProxyFactory pf = new ProxyFactory();
    pf.setTarget( sampleDao );
    pf.addAdvisor(advisor);
    RestSampleDao proxy = (RestSampleDao) pf.getProxy();
    mv.addObject( proxy.getUsers() );
    But how do I set it up in Spring so that my custom annotated methods get proxied by this interceptor automatically? I would like to inject the proxied sampleDao instead of the real one. Can that be done without XML configuration? I think it should be possible to just annotate the methods I want to intercept, and Spring DI would proxy what is necessary. Or do I have to use AspectJ for that? I would prefer the simplest solution :-) thanks a lot for help!
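    One way that may do this without any XML, sketched under the assumption that Java-based @Configuration is available (Spring 3.0+) and that MonitorTimer and MonitorTimerAdvice are the classes shown above: register the existing pointcut/advice pair as an Advisor bean and add a DefaultAdvisorAutoProxyCreator, a BeanPostProcessor that wraps every bean whose methods match a registered Advisor in a proxy.
        import org.springframework.aop.Advisor;
        import org.springframework.aop.framework.autoproxy.DefaultAdvisorAutoProxyCreator;
        import org.springframework.aop.support.DefaultPointcutAdvisor;
        import org.springframework.aop.support.annotation.AnnotationMatchingPointcut;
        import org.springframework.context.annotation.Bean;
        import org.springframework.context.annotation.Configuration;

        @Configuration
        public class MonitoringConfig {

            // Advisor = pointcut (any method annotated with @MonitorTimer) + the existing advice.
            @Bean
            public Advisor monitorTimerAdvisor() {
                AnnotationMatchingPointcut pointcut =
                        new AnnotationMatchingPointcut(null, MonitorTimer.class);
                return new DefaultPointcutAdvisor(pointcut, new MonitorTimerAdvice());
            }

            // Post-processes all beans in the context and proxies those matched by an Advisor,
            // so the @Service DAO above is wrapped automatically.
            @Bean
            public static DefaultAdvisorAutoProxyCreator advisorAutoProxyCreator() {
                return new DefaultAdvisorAutoProxyCreator();
            }
        }
    With this in place, injecting RestSampleDao (for example via @Autowired) yields the proxied instance rather than the raw one, and the hand-built ProxyFactory code is no longer needed. The alternative route is an @Aspect with an @Around("@annotation(...)") pointcut plus AspectJ auto-proxying, but the auto-proxy creator reuses the MethodInterceptor already written.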

    Read the article

  • How can I detect message boxes popping up in another process?

    - by Frerich Raabe
    I'd like to execute some code whenever a (any!) message box (as spawned by the MessageBox function) is shown in another process. I didn't start the process I'm monitoring. I can think of three approaches:
    1. Install a global CBT hook procedure which tells me whenever a window is created on the desktop. Then, check whether the window belongs to the process I'm monitoring and whether the class name is #32770 (which is the class name of dialogs according to the About Window Classes page at the MSDN). This would probably work, but it would pull the DLL which contains the hook procedure into virtually every process on the desktop, and the hook procedure gets called a lot. It smells like a potential performance problem.
    2. Try to subclass the #32770 system window class (is this possible at all?) and look for WM_CREATE messages in my custom window procedure.
    3. Intercept the MessageBox function API call (even though the remote process is running already!) and call my code from the hook function.
    So far, I only know that the first idea is feasible, but it seems really inefficient. Can anybody think of a simpler solution to this problem?

    Read the article

  • Problem with JMX query of Coherence node MBeans visible in JConsole

    - by Quinn Taylor
    I'm using JMX to build a custom tool for monitoring remote Coherence clusters at work. I'm able to connect just fine and query MBeans directly, and I've acquired nearly all the information I need. However, I've run into a snag when trying to query MBeans for specific caches within a cluster, which is where I can find stats about total number of gets/puts, average time for each, etc. The MBeans I'm trying to access programatically are visible when I connect to the remote process using JConsole, and have names like this: Coherence:type=Cache,service=SequenceQueue,name=SEQ%GENERATOR,nodeId=1,tier=back It would make it more flexible if I can dynamically grab all type=Cache MBeans for a particular node ID without specifying all the caches. I'm trying to query them like this: QueryExp specifiedNodeId = Query.eq(Query.attr("nodeId"), Query.value(nodeId)); QueryExp typeIsCache = Query.eq(Query.attr("type"), Query.value("Cache")); QueryExp cacheNodes = Query.and(specifiedNodeId, typeIsCache); ObjectName coherence = new ObjectName("Coherence:*"); Set<ObjectName> cacheMBeans = mBeanServer.queryMBeans(coherence, cacheNodes); However, regardless of whether I use queryMBeans() or queryNames(), the query returns a Set containing... ...0 objects if I pass the arguments shown above ...0 objects if I pass null for the first argument ...all MBeans in the Coherence:* domain (112) if I pass null for the second argument ...every single MBean (128) if I pass null for both arguments The first two results are the unexpected ones, and suggest a problem in the QueryExp I'm passing, but I can't figure out what the problem is. I even tried just passing typeIsCache or specifiedNodeId for the second parameter (with either coherence or null as the first parameter) and I always get 0 results. I'm pretty green with JMX — any insight on what the problem is? (FYI, the monitoring tool will be run on Java 5, so things like JMX 2.0 won't help me at this point.)
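    One detail that may explain the empty results: Query.attr() matches against attributes exposed by the MBean itself, whereas type and nodeId in a name like Coherence:type=Cache,nodeId=1 are key properties of the ObjectName, so an attribute query finds nothing. A sketch of filtering on the key properties with an ObjectName pattern instead (the connection parameter is assumed to be the MBeanServerConnection already obtained):
        import java.util.Set;
        import javax.management.MBeanServerConnection;
        import javax.management.ObjectName;

        public class CacheMBeanQuery {

            // Returns all Cache MBeans for one node by matching ObjectName key properties
            // with a pattern rather than with Query.attr() expressions.
            public static Set<ObjectName> cacheMBeansForNode(MBeanServerConnection connection,
                                                             int nodeId) throws Exception {
                ObjectName pattern = new ObjectName("Coherence:type=Cache,nodeId=" + nodeId + ",*");
                // Second argument (QueryExp) is null: the pattern alone does the filtering.
                return connection.queryNames(pattern, null);
            }
        }
    Query.attr() remains useful for filtering on real MBean attributes (cache size, total gets and so on); the key properties in the name are matched through the pattern.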

    Read the article

  • What programming technique, not done by you, was ahead of its time?

    - by Ferds
    There are developers who understand a technology and produce a solution months or years ahead of its time. I worked with a guy who designed a system, using a C# beta, which would monitor different system components on several servers. He used SQL Server to register these system monitoring components, which an NT Service would then pick up (via reflection) and instantiate. The simplicity in this design was that another developer (me), who understood part of a system, would produce a component that could monitor it, as he had the specialized knowledge of that part of the system. I would derive from the monitor base class (to start, stop and log info), install it in the GAC on the monitoring server and then add an entry to the components table in SQL Server. Then the main engine would pick up this component and do its magic. I understood parts of it, but couldn't work out why I had to derive from a base class, why I had to add to the GAC, etc. This was 6 years ago and it took me months to realize what he had achieved. What programming technique not done by you was ahead of its time? EDIT: I mean techniques that you have seen at work or in journals/blogs - I don't mean historically over years.
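    A rough Java sketch of the plugin pattern described above, with hypothetical names (Monitor, loadMonitor): the engine only needs the class name stored in the components table, loads it by reflection, checks that it derives from the shared base type, and instantiates it, which is why each component had to extend the base class and be installed where the engine could load it.
        // Shared base type every monitoring component derives from.
        public interface Monitor {
            void start();
            void stop();
        }

        class MonitorLoader {

            // className comes from the components table, e.g. "com.example.DiskSpaceMonitor"
            // (hypothetical); the class must be deployed where the engine can load it, the role
            // the GAC played in the original C# design.
            static Monitor loadMonitor(String className) throws ReflectiveOperationException {
                Class<?> raw = Class.forName(className);
                if (!Monitor.class.isAssignableFrom(raw)) {
                    throw new IllegalArgumentException(className + " does not implement Monitor");
                }
                return raw.asSubclass(Monitor.class).getDeclaredConstructor().newInstance();
            }
        }
    The payoff of the design is the same as in the original system: adding a new monitor means writing one component and registering it, with no change to the engine.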

    Read the article

  • How can I initialise a server on startup?

    - by djerry
    Hey all, I need to make some connections on startup of a server. I'm using WCF for this client-server application. The problem is that the constructor of the server isn't called at any time, so for the moment I initialize the connections when the first client makes a connection. But this causes problems further on. This is my server setup:
    private static ServiceHost _svc;
    static void Main(string[] args)
    {
        NetTcpBinding binding = new NetTcpBinding(SecurityMode.Message);
        Uri address = new Uri("net.tcp://localhost:8000");
        _svc = new ServiceHost(typeof(MonitoringSystemService), address);
        publishMetaData(_svc, "http://localhost:8001");
        _svc.AddServiceEndpoint(typeof(IMonitoringSystemService), binding, "Monitoring Server");
        _svc.Open();
        Console.WriteLine("Listener service started on net.tcp://localhost:8000/Monitoring");
        Console.ReadLine();
    }
    private static void publishMetaData(ServiceHost svc, string sEndpointAddress)
    {
        ServiceMetadataBehavior smb = svc.Description.Behaviors.Find<ServiceMetadataBehavior>();
        if (smb != null)
        {
            smb.HttpGetEnabled = true;
            smb.HttpGetUrl = new Uri(sEndpointAddress);
        }
        else
        {
            smb = new ServiceMetadataBehavior();
            smb.HttpGetEnabled = true;
            smb.HttpGetUrl = new Uri(sEndpointAddress);
            svc.Description.Behaviors.Add(smb);
        }
    }
    How can I start the server, without waiting for a client to log on, so I can initialize it? Thanks in advance.

    Read the article
