Search Results

Search found 2156 results on 87 pages for 'weighted average'.


  • Lag spikes at full CPU usage, maybe video card

    - by Roberts
    I am posting this in a hurry, so a few things may be missing (I will update tomorrow). My PC specs: Motherboard - Gigabyte GA-945PL-S3; CPU - Dual-Core Intel Core 2 Duo E4300, 1800 MHz (9 x 200); OS - Microsoft Windows 7 Ultimate, 32-bit, version 6.1.7601.

    I bought a new video card a month ago, a GeForce 210, and didn't have any problems with it. I wanted to overclock it, in other words "play with it", so I installed Gigabyte EasyBoost from the CD and overclocked the GPU by +110 MHz from 590 MHz, and the memory from 800 MHz to its maximum of 960 MHz. Benchmarks showed a slightly higher score. Then I overclocked the shader clock from 1405 MHz to some value I don't really remember. I was playing Modern Warfare 2 when, all of a sudden, the computer froze as I went to select a team (I had been AFK just before). I had to reset the CMOS. After that I had problems with Skype: unread messages and no sound. Then I figured out that whenever I open EasyBoost, Skype starts to glitch again, so now I use EVGA Precision X.

    Now, a month later, I cleaned the computer and closed the case (it had been open all that time). I started overclocking the GPU clock only, and just a bit, because no problems had appeared that would stop me. Sometimes, under heavy CPU load, the graphics start to lag; dragging a window is painful to watch, too. Sometimes the screen freezes for 5 to 10 seconds (I can see that hard disk activity is at its maximum then). You might say the CPU is at fault, but sometimes the lag spikes start randomly while CPU load is at its maximum. All three benchmark programs (PerformanceTest, NovaBench and MSI Kombustor) show that my video card's performance has dropped about 25% - BUT the CPU score is lower too. I ignored these problems, but when I refreshed the Windows Experience Index I was shocked. A month before (in Latvian, but not so hard to understand): [screenshot]. Now (after upgrading the RAM): [screenshot]. This happened when I tried to capture Minecraft with Fraps with the GPU underclocked to 580 MHz (default: 590 MHz): [screenshot].

    All drivers are up to date. Average CPU temperature runs from 55°C to 75°C (at around 70°C the lag spikes sometimes start). The video card's temperatures run from 45°C to 60°C (it very rarely reaches 60°C). So my hope is that the video card is fine, since it is very new, and I want to upgrade the CPU anyway. Apologies for my mistakes in vocabulary (I am typing this as fast as I can).

    Read the article

  • DHCP and router load testing

    - by John H
    I manage a campground wifi network with an average of 10-60 active users. I have encountered issues where the router starts acting flaky (failing to assign DHCP leases or failing to pass traffic) without any clear warning (low CPU utilization, etc.). I upgraded the router a couple of times and ended up with a Netgear ProSafe VPN router that seems to be handling the traffic. The interesting thing is that the Netgear has lower specs than the Buffalo router it replaced, suggesting the issue is with the DD-WRT firmware. While I'll be pursuing this issue on the DD-WRT forums, I need a way to test routers. My vision is having 1-2 computers connected on the LAN side and 1-2 computers connected on the WAN side. I want the LAN computers to be generating various types of traffic and connections, as well as requesting DHCP addresses. A few notes: The wireless aspect should be a non-issue; most clients would connect to a wireless bridge and come into the router through a network cable. I had a monitoring server with Nagios running check_dhcp against the router; this server was connected directly by a network cable, eliminating wifi bridges and other devices from the equation. This question is somewhat related, but not exactly: Load testing wireless LANs. I am going to look at IxChariot. While I'd ideally like to use one computer on each side, running Linux and preferably free software, I can entertain running Windows, multiple computers, or non-free software. Total bandwidth doesn't seem to be the issue: I can transfer large files all day, and even on the busiest days the users seemed to only pull ~5Mbps. There is very little LAN-to-LAN traffic, and most of it might never have reached the main router. The issue I need to test for seems to be tied to active users or, more appropriately, active sessions. I know "active users" or "active clients" is a meaningless term from a router's standpoint, and I wouldn't mind having more appropriate terms to use. Summary: I need a way to test a router's ability to handle traffic from a large number of clients. My current strategy is to purchase a router, deploy it, and see how it fails in the live environment.
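
    As a starting point for the DHCP side of such a test rig, here is a minimal lease-churn sketch, assuming the Python scapy library, an interface named eth0, and a lab network where flooding DHCP is acceptable; it only broadcasts DISCOVERs from random MACs and does not complete the full handshake:

        #!/usr/bin/env python
        # Hedged sketch: hammer a router's DHCP server with DISCOVERs from
        # random MACs to simulate client churn. Interface name and count are
        # assumptions; run as root on a test network only.
        import binascii
        import time
        from scapy.all import BOOTP, DHCP, IP, UDP, Ether, RandMAC, sendp

        IFACE = "eth0"   # assumption: your LAN-side test interface

        def discover(mac):
            raw_mac = binascii.unhexlify(mac.replace(":", ""))
            return (Ether(src=mac, dst="ff:ff:ff:ff:ff:ff") /
                    IP(src="0.0.0.0", dst="255.255.255.255") /
                    UDP(sport=68, dport=67) /
                    BOOTP(chaddr=raw_mac) /
                    DHCP(options=[("message-type", "discover"), "end"]))

        for _ in range(100):          # simulate 100 "new clients"
            sendp(discover(str(RandMAC())), iface=IFACE, verbose=False)
            time.sleep(0.1)

    Watching the router's lease table (or the check_dhcp results) while this runs gives a crude measure of how the router copes with session churn, separately from raw bandwidth.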

    Read the article

  • Fetch a specific tag from Rally in order to compute a value in another field

    - by 4jas
    I'm extremely new to Rally development, so my question may sound dumb (I couldn't find the answer in Rally's help or in previous posts here) :) I've started from the Rally freeform grid example; my purpose is to implement a business-value calculator: I fill the Score field with a 5-digit figure where each digit is a score in the 1-5 range. I then compute a business value as the result of a calculation in which each digit is weighted by a preset weight. I can sort my stories by business value to help me prioritize my backlog: that's the first step, and it works. Now what I want to do is make my freeform grid editable: I am extracting each of my digits as a separate column, but those columns are display-only. How can I turn them into something editable? What I want, of course, is to write the Score field back based on the values input in each custom column. Here's an example: I have a record with score "15254", which means business-value criterion 1 scores 1 out of 5, business-value criterion 2 scores 5 out of 5, and so on. In the end my business value is computed as 1*1 + 5*2 + 2*3 + 5*4 + 4*5 = 57. So far this is the part that works. Now let's say I found that the third criterion should score not 2 but 3: I want to be able to edit the value in the corresponding column and have my Score field updated to "15354", and my business value display 60 instead of 57. Here is my current code; I'll be really grateful if you can help me turn that grid into something editable :)

        <!--Include SDK-->
        <script type="text/javascript" src="https://rally1.rallydev.com/apps/2.0p2/sdk-debug.js"></script>

        <!--App code-->
        <script type="text/javascript">
        Rally.onReady(function() {
            Ext.define('BVApp', {
                extend: 'Rally.app.App',
                componentCls: 'app',

                launch: function() {
                    Ext.create('Rally.data.WsapiDataStore', {
                        model: 'UserStory',
                        autoLoad: true,
                        listeners: {
                            load: this._onDataLoaded,
                            scope: this
                        }
                    });
                },

                _onDataLoaded: function(store, data) {
                    var records = [];
                    var li_score;
                    var li_bv1, li_bv2, li_bv3, li_bv4, li_bv5, li_bvtotal;
                    var weights = [1, 2, 3, 4, 5];
                    Ext.Array.each(data, function(record) {
                        // Fetch the score and compute the business values.
                        li_score = record.get('Score');
                        if (li_score) {
                            li_bv1 = li_score.toString().substring(0, 1);
                            li_bv2 = li_score.toString().substring(1, 2);
                            li_bv3 = li_score.toString().substring(2, 3);
                            li_bv4 = li_score.toString().substring(3, 4);
                            li_bv5 = li_score.toString().substring(4, 5);
                            li_bvtotal = li_bv1 * weights[0] + li_bv2 * weights[1] +
                                         li_bv3 * weights[2] + li_bv4 * weights[3] +
                                         li_bv5 * weights[4];
                        }
                        records.push({
                            FormattedID: record.get('FormattedID'),
                            ref: record.get('_ref'),
                            Name: record.get('Name'),
                            Score: record.get('Score'),
                            Bv1: li_bv1,
                            Bv2: li_bv2,
                            Bv3: li_bv3,
                            Bv4: li_bv4,
                            Bv5: li_bv5,
                            BvTotal: li_bvtotal
                        });
                    });
                    this.add({
                        xtype: 'rallygrid',
                        store: Ext.create('Rally.data.custom.Store', {
                            data: records,
                            pageSize: 5
                        }),
                        columnCfgs: [
                            { text: 'FormattedID', dataIndex: 'FormattedID' },
                            { text: 'ref', dataIndex: 'ref' },
                            { text: 'Name', dataIndex: 'Name', flex: 1 },
                            { text: 'Score', dataIndex: 'Score' },
                            { text: 'BusVal 1', dataIndex: 'Bv1' },
                            { text: 'BusVal 2', dataIndex: 'Bv2' },
                            { text: 'BusVal 3', dataIndex: 'Bv3' },
                            { text: 'BusVal 4', dataIndex: 'Bv4' },
                            { text: 'BusVal 5', dataIndex: 'Bv5' },
                            { text: 'BusVal Total', dataIndex: 'BvTotal' }
                        ]
                    });
                }
            });

            Rally.launchApp('BVApp', { name: 'Business Values App' });

            var exampleHtml = '<div id="example-intro"><h1>Business Values App</h1>' +
                '<div>Own sample app for Business Values</div>' +
                '</div>';

            // Default app viewport uses layout: 'fit',
            // so we need to insert a container into the viewport.
            var viewport = Ext.ComponentQuery.query('viewport')[0];
            var appComponent = viewport.items.getAt(0);
            var viewportContainerItems = [{ html: exampleHtml, border: 0 }];

            // Hide advanced cardboard live previews in examples for now.
            viewportContainerItems.push({ xtype: 'container', items: [appComponent] });
            viewport.remove(appComponent, false);
            viewport.add({
                xtype: 'container',
                layout: 'vbox',
                items: viewportContainerItems
            });
        });
        </script>

        <!--App styles-->
        <style type="text/css">
        .app {
            /* Add app styles here */
        }
        </style>
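
    For readers just after the scoring rule itself, here is a tiny standalone sketch in Python of the digit-encoded scheme the post describes, reproducing its 57 and 60 examples:

        # Minimal sketch of the digit-encoded business-value scheme above:
        # each digit of the 5-digit score is weighted by its position.
        WEIGHTS = [1, 2, 3, 4, 5]

        def business_value(score):
            digits = [int(c) for c in str(score)]
            return sum(d * w for d, w in zip(digits, WEIGHTS))

        assert business_value(15254) == 57   # 1*1 + 5*2 + 2*3 + 5*4 + 4*5
        assert business_value(15354) == 60   # third criterion edited from 2 to 3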

    Read the article

  • Start a software company offshore

    - by Mascarpone
    Hello everybody, I own a small, very young, EU-based (Italy) company, and among other things we sell IT solutions. I have a degree in applied mathematics, and I mainly deal with user interfaces, embedded systems, automation and web applications. You could say I'm an enlightened entrepreneur because I work only with open source software (OS, IDE; I release under BSD... everything is free as in freedom), I give high importance to after-sales service and customer satisfaction, and I think I'm the best boss someone could desire (LOL), as I have Google in mind when I think about IT workers' rights. But the most beautiful thing is that, although everybody advised us not to use open source, we are quite profitable!!! (for the sixth quarter in a row). Right now I offshore most of the work to an Indian company: I divide the work into modules and outsource the longer or more trivial ones. I spend a lot of time defining the specifications and leave the hard work to them. Using productivity bonuses, a lot of prototypes and third-party audits, I think my software has reached a very good quality level. I would like to start my own software development company, in order to improve control over the process and cut costs. Obviously I can't afford the cost of labor in the EU, so I thought about opening a company in Asia. What I need is:
    1) Cheap labor - I can afford to give productivity bonuses and higher-than-average wages and stay profitable only because labor is cheap.
    2) Many talents - I need a good level of tertiary education and a good number of graduates, so I can hire junior developers and train and teach them according to my needs and philosophies (e.g., an open-source mindset).
    3) Good infrastructure - buildings, transport, internet... everything a company might need.
    I thought about three possible candidates:
    1) India - I already work with Indian people; I know they are reliable and speak good English. The big cities are too expensive, but maybe a small city like Lucknow (http://en.wikipedia.org/wiki/Lucknow) could suit my needs.
    2) China - They say it's cheaper than India, but every time I have worked with a Chinese company the language was a big barrier. They work hard and are somewhat skilled and cheap, but maybe it's a risky path. Plus, I feel a little uncomfortable with the lack of human rights there.
    3) Philippines - Same as China: cheaper than India, but maybe less educated.
    Where do you think is the best place to start a software company? Any reading or books to recommend? Thank you very much.

    Read the article

  • New CentOS/cPanel servers showing high load averages at idle

    - by Jax
    I have taken delivery of two identically specced CentOS/cPanel servers, showing the same behaviour of a resting load average of 1.30, 1.21, 1.16 and yet the CPU is sitting 100% idle. Hardware: Xeon(R) CPU E3-1270 4GB RAM Behavior:- top shows CPU 99.9% idle virtually no disk IO Some command output :- uname -a Linux server.myserver.com 2.6.18-308.4.1.el5PAE #1 SMP Tue Apr 17 17:47:38 EDT 2012 i686 i686 i386 GNU/Linux top top - 10:37:50 up 1:47, 1 user, load average: 1.28, 1.20, 1.17 Tasks: 199 total, 1 running, 198 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4125104k total, 438764k used, 3686340k free, 25788k buffers Swap: 2096440k total, 0k used, 2096440k free, 291080k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1 root 15 0 2160 640 552 S 0.0 0.0 0:00.89 init 2 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/1 7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 8 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/2 9 root 35 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/2 10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2 11 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/3 12 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/3 13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3 14 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/4 15 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/4 16 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/4 17 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/5 18 root 38 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/5 19 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/5 20 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/6 21 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/6 22 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/6 23 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/7 24 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/7 25 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/7 26 root 10 -5 0 0 0 S 0.0 0.0 0:06.42 events/0 27 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/1 28 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/2 29 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/3 30 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/4 31 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/5 32 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/6 33 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 events/7 34 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 khelper 35 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kthread 45 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/0 46 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/1 47 root 14 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/2 48 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/3 49 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/4 50 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/5 51 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/6 52 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kblockd/7 53 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 kacpid 189 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/0 190 root 11 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/1 191 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/2 192 root 12 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/3 193 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/4 194 root 13 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/5 195 root 14 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/6 196 root 14 -5 0 0 0 S 0.0 0.0 0:00.00 cqueue/7 199 root 10 -5 0 0 0 S 0.0 0.0 0:00.00 khubd ps axf PID TTY STAT TIME COMMAND 1 ? Ss 0:00 init [3] 2 ? S< 0:00 [migration/0] 3 ? SN 0:00 [ksoftirqd/0] 4 ? S< 0:00 [watchdog/0] 5 ? S< 0:00 [migration/1] 6 ? SN 0:00 [ksoftirqd/1] 7 ? 
S< 0:00 [watchdog/1] 8 ? S< 0:00 [migration/2] 9 ? SN 0:00 [ksoftirqd/2] 10 ? S< 0:00 [watchdog/2] 11 ? S< 0:00 [migration/3] 12 ? SN 0:00 [ksoftirqd/3] 13 ? S< 0:00 [watchdog/3] 14 ? S< 0:00 [migration/4] 15 ? SN 0:00 [ksoftirqd/4] 16 ? S< 0:00 [watchdog/4] 17 ? S< 0:00 [migration/5] 18 ? SN 0:00 [ksoftirqd/5] 19 ? S< 0:00 [watchdog/5] 20 ? S< 0:00 [migration/6] 21 ? SN 0:00 [ksoftirqd/6] 22 ? S< 0:00 [watchdog/6] 23 ? S< 0:00 [migration/7] 24 ? SN 0:00 [ksoftirqd/7] 25 ? S< 0:00 [watchdog/7] 26 ? S< 0:06 [events/0] 27 ? S< 0:00 [events/1] 28 ? S< 0:00 [events/2] 29 ? S< 0:00 [events/3] 30 ? S< 0:00 [events/4] 31 ? S< 0:00 [events/5] 32 ? S< 0:00 [events/6] 33 ? S< 0:00 [events/7] 34 ? S< 0:00 [khelper] 35 ? S< 0:00 [kthread] 45 ? S< 0:00 \_ [kblockd/0] 46 ? S< 0:00 \_ [kblockd/1] 47 ? S< 0:00 \_ [kblockd/2] 48 ? S< 0:00 \_ [kblockd/3] 49 ? S< 0:00 \_ [kblockd/4] 50 ? S< 0:00 \_ [kblockd/5] 51 ? S< 0:00 \_ [kblockd/6] 52 ? S< 0:00 \_ [kblockd/7] 53 ? S< 0:00 \_ [kacpid] 189 ? S< 0:00 \_ [cqueue/0] 190 ? S< 0:00 \_ [cqueue/1] 191 ? S< 0:00 \_ [cqueue/2] 192 ? S< 0:00 \_ [cqueue/3] 193 ? S< 0:00 \_ [cqueue/4] 194 ? S< 0:00 \_ [cqueue/5] 195 ? S< 0:00 \_ [cqueue/6] 196 ? S< 0:00 \_ [cqueue/7] 199 ? S< 0:00 \_ [khubd] 201 ? S< 0:00 \_ [kseriod] 301 ? S 0:00 \_ [khungtaskd] 302 ? S 0:00 \_ [pdflush] 303 ? S 0:00 \_ [pdflush] 304 ? S< 0:00 \_ [kswapd0] 305 ? S< 0:00 \_ [aio/0] 306 ? S< 0:00 \_ [aio/1] 307 ? S< 0:00 \_ [aio/2] 308 ? S< 0:00 \_ [aio/3] 309 ? S< 0:00 \_ [aio/4] 310 ? S< 0:00 \_ [aio/5] 311 ? S< 0:00 \_ [aio/6] 312 ? S< 0:00 \_ [aio/7] 472 ? S< 0:00 \_ [kpsmoused] 551 ? S< 0:00 \_ [ata/0] 552 ? S< 0:00 \_ [ata/1] 553 ? S< 0:00 \_ [ata/2] 554 ? S< 0:00 \_ [ata/3] 555 ? S< 0:00 \_ [ata/4] 556 ? S< 0:00 \_ [ata/5] 557 ? S< 0:00 \_ [ata/6] 558 ? S< 0:00 \_ [ata/7] 559 ? S< 0:00 \_ [ata_aux] 569 ? S< 0:00 \_ [scsi_eh_0] 570 ? S< 0:00 \_ [scsi_eh_1] 571 ? S< 0:00 \_ [scsi_eh_2] 572 ? S< 0:00 \_ [scsi_eh_3] 573 ? S< 0:00 \_ [scsi_eh_4] 574 ? S< 0:00 \_ [scsi_eh_5] 593 ? S< 0:00 \_ [kstriped] 630 ? S< 0:00 \_ [kjournald] 655 ? S< 0:00 \_ [kauditd] 1860 ? S< 0:00 \_ [kmpathd/0] 1861 ? S< 0:00 \_ [kmpathd/1] 1862 ? S< 0:00 \_ [kmpathd/2] 1863 ? S< 0:00 \_ [kmpathd/3] 1864 ? S< 0:00 \_ [kmpathd/4] 1865 ? S< 0:00 \_ [kmpathd/5] 1866 ? S< 0:00 \_ [kmpathd/6] 1867 ? S< 0:00 \_ [kmpathd/7] 1868 ? S< 0:00 \_ [kmpath_handlerd] 1902 ? S< 0:00 \_ [kjournald] 1904 ? S< 0:00 \_ [kjournald] 1906 ? S< 0:00 \_ [kjournald] 1908 ? S< 0:00 \_ [kjournald] 1910 ? S< 0:00 \_ [kjournald] 2184 ? S< 0:00 \_ [iscsi_eh] 2288 ? S< 0:00 \_ [cnic_wq] 2298 ? S< 0:00 \_ [bnx2i_thread/0] 2299 ? S< 0:00 \_ [bnx2i_thread/1] 2300 ? S< 0:00 \_ [bnx2i_thread/2] 2301 ? S< 0:00 \_ [bnx2i_thread/3] 2302 ? S< 0:00 \_ [bnx2i_thread/4] 2303 ? S< 0:00 \_ [bnx2i_thread/5] 2304 ? S< 0:00 \_ [bnx2i_thread/6] 2305 ? S< 0:00 \_ [bnx2i_thread/7] 2330 ? S< 0:00 \_ [ib_addr] 2359 ? S< 0:00 \_ [ib_mcast] 2360 ? S< 0:00 \_ [ib_inform] 2361 ? S< 0:00 \_ [local_sa] 2371 ? S< 0:00 \_ [iw_cm_wq] 2381 ? S< 0:00 \_ [ib_cm/0] 2382 ? S< 0:00 \_ [ib_cm/1] 2383 ? S< 0:00 \_ [ib_cm/2] 2384 ? S< 0:00 \_ [ib_cm/3] 2385 ? S< 0:00 \_ [ib_cm/4] 2386 ? S< 0:00 \_ [ib_cm/5] 2387 ? S< 0:00 \_ [ib_cm/6] 2388 ? S< 0:00 \_ [ib_cm/7] 2398 ? S< 0:00 \_ [rdma_cm] 2684 ? S< 0:00 \_ [bond0] 2882 ? S< 0:00 \_ [bond1] 3195 ? S< 0:00 \_ [kondemand/0] 3197 ? S< 0:00 \_ [kondemand/1] 3198 ? S< 0:00 \_ [kondemand/2] 3199 ? S< 0:00 \_ [kondemand/3] 3200 ? S< 0:00 \_ [kondemand/4] 3201 ? S< 0:00 \_ [kondemand/5] 3202 ? S< 0:00 \_ [kondemand/6] 3203 ? 
S< 0:00 \_ [kondemand/7] 688 ? S<s 0:00 /sbin/udevd -d 2425 ? S<Lsl 0:00 iscsiuio 2432 ? Ss 0:00 iscsid 2434 ? S<Ls 0:00 iscsid 3061 ? S<sl 0:00 auditd 3063 ? S<sl 0:00 \_ /sbin/audispd 3121 ? Ss 0:00 syslogd -m 0 3124 ? Ss 0:00 klogd -x 3220 ? Ss 0:00 irqbalance 3278 ? Ss 0:00 dbus-daemon --system 3324 ? Ss 0:00 /usr/sbin/acpid 3337 ? Ss 0:00 hald 3338 ? S 0:00 \_ hald-runner 3345 ? S 0:00 \_ hald-addon-acpi: listening on acpid socket /var/run/acpid.socket 3349 ? S 0:00 \_ hald-addon-keyboard: listening on /dev/input/event1 3360 ? S 0:00 \_ hald-addon-storage: polling /dev/sr0 3413 ? Ssl 0:00 automount 3435 ? Ssl 0:00 /usr/sbin/named -u named 3466 ? Ss 0:00 /usr/sbin/sshd 4072 ? Ss 0:00 \_ sshd: root@pts/0 4078 pts/0 Ss 0:00 \_ -bash 5436 pts/0 R+ 0:00 \_ ps axf 3484 ? Ss 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid 3500 ? SLs 0:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g 3514 ? S 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/server.myserver.com.pid 3575 ? Sl 0:00 \_ /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --log-error=/var/lib/mysql/server.myserver.com.err --pid-fil 3687 ? Ss 0:00 /usr/sbin/exim -bd -q1h 3709 ? Ss 0:00 /usr/sbin/dovecot 3710 ? S 0:00 \_ dovecot-auth 3725 ? S 0:00 \_ pop3-login 3726 ? S 0:00 \_ pop3-login 3727 ? S 0:00 \_ imap-login 3728 ? S 0:00 \_ imap-login 3729 ? Ss 0:00 /usr/local/apache/bin/httpd -k start -DSSL 4326 ? S 0:00 \_ /usr/bin/perl /usr/local/cpanel/bin/leechprotect 4332 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4333 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4334 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4335 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4336 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4337 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4382 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4383 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 4384 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 5389 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 5390 ? S 0:00 \_ /usr/local/apache/bin/httpd -k start -DSSL 3741 ? Ss 0:00 pure-ftpd (SERVER) 3746 ? S 0:00 /usr/sbin/pure-authd -s /var/run/ftpd.sock -r /usr/sbin/pureauth 3759 ? Ss 0:00 crond 3772 ? Ss 0:00 /usr/sbin/atd 3909 ? S 0:00 cpsrvd (SSL) - waiting for connections 5435 ? Z 0:00 \_ [cpsrvd-ssl] <defunct> 3931 ? S 0:00 queueprocd - wait to process a task 3948 ? S 0:00 tailwatchd 3954 ? SN 0:00 cpanellogd - sleeping for logs 4003 ? Ss 0:00 ./nimbus /opt/nimsoft 4016 ? S 0:00 \_ nimbus(controller) 4053 ? Sl 0:00 \_ nimbus(spooler) 4066 ? S 0:00 \_ nimbus(hdb) 4069 ? S 0:00 \_ nimbus(cdm) 4070 ? S 0:00 \_ nimbus(processes) 4023 ? 
S 0:00 /usr/sbin/smartd -q never 4027 tty1 Ss+ 0:00 /sbin/mingetty tty1 4028 tty2 Ss+ 0:00 /sbin/mingetty tty2 4029 tty3 Ss+ 0:00 /sbin/mingetty tty3 4030 tty4 Ss+ 0:00 /sbin/mingetty tty4 4031 tty5 Ss+ 0:00 /sbin/mingetty tty5 4033 tty6 Ss+ 0:00 /sbin/mingetty tty6 4035 ttyS1 Ss+ 0:00 /sbin/agetty -h -L ttyS1 19200 vt100 vmstat 10 6 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 0 3718136 25684 257424 0 0 8 3 127 189 0 0 100 0 0 0 0 0 3718136 25700 257420 0 0 0 7 1013 1500 0 0 100 0 0 0 0 0 3718136 25700 257424 0 0 0 1 1013 1551 0 0 100 0 0 0 0 0 3718136 25700 257424 0 0 0 0 1012 1469 0 0 100 0 0 1 0 0 3712680 25716 257424 0 0 0 2 1013 1542 0 0 100 0 0 0 0 0 3718376 25740 257424 0 0 0 46 1017 1534 0 0 100 0 0 Can anyone advise me as to what is the cause of and how I may resolve this behaviour? A kernel/driver conflict perhaps? I don't see any processes in R or D state that might inflate the load averages artificially, I realise it may be considered low in an 8 thread system but its higher at idle than any normal behaviour I've previously come across. Thanks in advance for your time. Edit: iotop Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND 26 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.29 % [events/0] 3205 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.10 % [kondemand/2] 3208 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kondemand/5] 3209 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kondemand/6] 3207 be/3 root 0.00 B/s 0.00 B/s 0.10 % 0.00 % [kondemand/4] 3210 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kondemand/7] 3227 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % irqbalance 3288 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rpciod/1] 3287 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rpciod/0] 3206 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kondemand/3] 3069 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % auditd 3070 be/2 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % audispd 655 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kauditd] 3619 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % automount 3 be/7 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/0] 3068 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % auditd 29 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [events/3] 4 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/0] 7 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/1] 10 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/2] 13 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/3] 16 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/4] 19 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/5] 22 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/6] 25 rt/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [watchdog/7] 27 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [events/1] 28 be/3 root 0.00 B/s 0.00 B/s 0.29 % 0.00 % [events/2] 30 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [events/4] 31 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [events/5] 32 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [events/6] 33 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [events/7] 34 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [khelper] 35 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthread] 45 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kblockd/0]
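
    One way to chase a load average that top cannot explain is to sample run states faster than top refreshes, since the load calculation also counts short-lived runnable (R) and uninterruptible (D) tasks that may vanish between screen updates. A rough sketch in plain Python reading /proc (the comm parsing is simplified):

        #!/usr/bin/env python
        # Rough sketch: sample process states from /proc at high frequency to
        # catch short-lived R/D tasks that inflate the load average.
        import glob
        import time

        for _ in range(600):               # ~60 seconds at 10 samples/sec
            for path in glob.glob('/proc/[0-9]*/stat'):
                try:
                    with open(path) as f:
                        data = f.read()
                except IOError:            # process exited mid-scan
                    continue
                # Layout: pid (comm) state ...; comm may contain spaces,
                # so split on the last ')' to find the state field.
                state = data.rsplit(')', 1)[1].split()[0]
                if state in ('R', 'D'):
                    pid = data.split()[0]
                    comm = data[data.find('(') + 1:data.rfind(')')]
                    print(time.time(), pid, comm, state)
            time.sleep(0.1)

    If some kernel thread or driver task shows up repeatedly in D state here but never in top, that would point toward the suspected kernel/driver conflict.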

    Read the article

  • Set up Gmail with Google apps for own domain

    - by erdomester
    I rent a server from a German company. I have remote access to it, as well as WHM and cPanel. I decided to use Google's mail servers for obvious reasons. I am not an admin, just an average guy trying to set up what needs to be set up. The problem is that I am unable to make the necessary settings. I have watched YouTube tutorials and followed written ones as well as Google's help, but there is (at least) one serious problem with my domain settings. The domain console always says "Your MX records are incorrect". When I check dappwall.com on mxtoolbox.com it says: Pref 10, Hostname mail.dappwall.com, IP Address 46.4.88.247, TTL 24 hrs. But this is not the hostname. I checked WHM and my hostname is server1.dappwall.com; I can confirm it by typing the hostname command in PuTTY. However, if I do an MX lookup at mxtoolbox.com on server1.dappwall.com or mail.dappwall.com, I get "Lookup failed after 1 name servers timed out or responded non-authoritatively". I ran checks in the Google Apps Toolbox on dappwall.com and two problems emerged: 1. "No Google mail exchangers found. Relayhost configuration? 10 mail.dappwall.com". In Google Apps > Settings for Gmail > Advanced settings it also says that my current MX record for dappwall.com is: Priority 10, points to MAIL.DAPPWALL.COM. So, mail.dappwall.com again. I also have access to a robot provided by the company I rent the server from. There I see this mail host in two places, but how should I (if it's necessary) modify it? I set Email routing to Automatically Detect Configuration. 2. "There SHOULD be a valid SPF record: 'v=spf1 include:_spf.google.com ~all'". In the DNS Zone Editor I added this SPF record: Name dappwall.com., TTL 1440, Class IN, Type TXT, Record "v=spf1 include:_spf.google.com ~all". On the cPanel Email Authentication page it says: SPF: Status: Enabled. Warning: cPanel is unable to verify that this server is an authoritative nameserver for dappwall.com. Your current raw SPF record is: v=spf1 include:_spf.google.com ~all. How can I confirm that my server is an authoritative nameserver for dappwall.com? In WHM > Service Configuration > Mailserver Selection, Dovecot was set, but I disabled it (I don't know if that's OK). What am I missing here? Where is that mail.dappwall.com coming from?
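
    When debugging a setup like this, a short lookup script run from the server itself avoids guessing at what external tools see. A minimal sketch, assuming the dnspython package, version 2.x (older versions use dns.resolver.query instead of resolve); the output is simply whatever your zone actually serves:

        # Minimal MX/TXT sanity check, assuming the dnspython package (2.x).
        # For Google Apps, the MX answers should be the ASPMX.L.GOOGLE.COM
        # family of exchangers, not mail.dappwall.com.
        import dns.resolver

        domain = 'dappwall.com'

        for rdata in dns.resolver.resolve(domain, 'MX'):
            print('MX ', rdata.preference, rdata.exchange)

        for rdata in dns.resolver.resolve(domain, 'TXT'):
            print('TXT', rdata.strings)

    Comparing this output against the MX values Google expects narrows the question to whether the zone file itself is wrong or a stale record is being served by the old nameserver.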

    Read the article

  • Wifi randomly drops on Windows 8 laptop

    - by JosiahS
    First of all, I did a lot of research on this problem and wasn't able to come to any helpful conclusion. I've finally decided that I need advice from those who might know where to look, so don't let me down. :P I used to have an older Windows 7 laptop, which worked great for basic office work and web browsing. However, I wanted something that would play actual modern games, so I recently bought a Sager NP8235 with the Intel Wireless-AC 7260 wifi card and installed Windows 8 Pro on it. And ever since, I've been having problems with the wifi. Generally, if I leave the laptop on but inactive for an extended amount of time (I've estimated around an hour or two), the wifi will start dropping randomly. If I happen to have a download going at the time, the drop usually causes it to fail. Or, if I put the laptop to sleep overnight, the next morning I usually have to restart the computer because the wifi device apparently stops working (it literally won't turn on). Also, and most frustrating, whenever I'm on a video chat (like Skype), after about ten minutes the connection starts lagging like crazy, until it forces Skype to end the call. After that, I usually have to disable and re-enable the wifi to get it working again. I know it isn't our internet, because all the other computers in our house (~8) don't have any issues. Even the old Windows 7 laptop (also connected over wifi) works just fine, scoring the normal ~3Mbps average on speedtest.net (yes, I know our internet is slow; we live out in the country). Additionally, when I connect the Sager directly to the router via ethernet, the internet instantly works just fine. Like I said, I've done a lot of googling to figure out what's going on, and I haven't been able to find anything that worked for me. Is it Windows 8 conflicting with the wifi drivers? As of this writing, I have the Intel drivers v16.1.5.2 installed (without the extra Intel software). Or is it our router? It's a TP-Link TL-WR841ND, set to the default settings. The Sager is currently assigned a static IP, if that makes any difference. And yet the old Windows 7 laptop has a much more stable connection than the Sager. Anyone have any ideas? At this point, I'd appreciate even knowing what the problem is.

    Read the article

  • How to allow unprivileged apache/PHP to do a root task (CentOS)

    - by Chris
    I am setting up a sort of personal dropbox for our customers on a CentOS 6.3 machine. The server will be accessible through SFTP and a proprietary HTTP service based on PHP. This machine will be in our DMZ, so it has to be secure. Because of this I have Apache running as an unprivileged user, have hardened the security on Apache, the OS and PHP, applied a lot of filtering in iptables, and applied some restrictive TCP wrappers. Now, as you might have suspected was coming, SELinux is also set to enforcing. I'm setting up PAM to use MySQL so my users in the web application can log in. These users will all be in a group that can use SSH only for SFTP, and users will be chrooted to their own 'home' folder. To allow this, SELinux wants the folders to have the user_home_t tag, and the parent directory needs to be writable by root only. If these restrictions are not met, SELinux kills the SSH pipe immediately. The files need to be accessible through both HTTP and SFTP, so I have made a SELinux module to allow Apache to search/attr/read/write etc. on directories with the user_home_dir_t tag. As SFTP users are stored in MySQL, I want to set up their home dirs upon user creation. This is a problem, since Apache has no write access to /home; it is writable only by root, which is required to keep SELinux and OpenSSH happy. Basically, I need to let Apache do only a few tasks as root, and only within /home, so I need to somehow elevate the privileges temporarily or have root do these tasks for Apache instead. What I need Apache to do with root privileges is the following:

        mkdir /home/userdir/
        mkdir /home/userdir/userdir
        chmod -R 0755 /home/userdir
        umask 011 /home/userdir/userdir
        chcon -R -t user_home_t /home/userdir
        chown -R user:sftp_admin /home/userdir/userdir
        chmod 2770 /home/userdir/userdir

    This would create a home for the user. I have one idea that might work: cron. But that would mean the server checks every minute for users that have no home, and when creating a user the interface would freeze for an average of 30 seconds before the account creation could be confirmed, which I would prefer to avoid. Does anybody know if something can be done with sudoers? Any other ideas are welcome. Thanks for your time!
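
    To make the cron idea above concrete, one variant is a root-owned worker that drains a spool directory the web app writes usernames into. This is a hedged sketch only: the spool path, the sftp_admin group and the username policy are assumptions taken from the post, and error handling is minimal:

        #!/usr/bin/env python
        # Hedged sketch of the cron idea from the post: a root-run worker
        # creates homes queued by the web app, so Apache never needs root.
        import os
        import re
        import subprocess

        SPOOL = '/var/spool/homedir-requests'   # assumed: writable by apache
        VALID = re.compile(r'^[a-z][a-z0-9_-]{0,31}$')

        for name in os.listdir(SPOOL):
            marker = os.path.join(SPOOL, name)
            home = os.path.join('/home', name)
            if VALID.match(name) and not os.path.exists(home):
                inner = os.path.join(home, name)
                os.makedirs(inner)
                os.chmod(home, 0o755)           # parent: root-owned, 0755
                subprocess.check_call(['chcon', '-R', '-t', 'user_home_t', home])
                subprocess.check_call(['chown', '-R', '%s:sftp_admin' % name, inner])
                os.chmod(inner, 0o2770)         # setgid dir, group-writable
            os.unlink(marker)                   # consume the request either way

    Run from root's crontab every minute, this keeps the privileged surface to one short, auditable script that strictly validates its input; a sudoers entry pointing at an equally narrow wrapper command would achieve the same isolation.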

    Read the article

  • Apache error log interpretation

    - by HTF
    It looks like someone gained access to my server. How can I find out which Apache vhost this log relates to? And how are these commands from the log being invoked, and how/why are they printed to the log file - is this some remote shell or a PHP script?

    /var/log/httpd/error_log:

        mkdir: cannot create directory `/tmp/.kdso': File exists
        --2014-06-13 13:29:17-- http://updates.dyndn-web.com/abc.txt
        Resolving updates.dyndn-web.com... 94.23.49.91
        Connecting to updates.dyndn-web.com|94.23.49.91|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 5055 (4.9K) [text/plain]
        Saving to: `abc.txt'
        0K .... 100% 303K=0.02s
        2014-06-13 13:29:17 (303 KB/s) - `abc.txt' saved [5055/5055]
        % Total % Received % Xferd Average Speed Time Time Time Current
        Dload Upload Total Spent Left Speed
        ^M 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0^M101 5055 101 5055 0 0 79686 0 --:--:-- --:--:-- --:--:-- 154k
        minerd64: no process killed
        minerd32: no process killed
        named: no process killed
        kernelupdates: no process killed
        kernelcfg: no process killed
        kernelorg: no process killed
        ls: cannot access /tmp/.ICE-unix: No such file or directory
        mkdir: cannot create directory `/tmp': File exists
        --2014-06-13 13:29:18-- http://updates.dyndn-web.com/64.tar.gz
        Resolving updates.dyndn-web.com... 94.23.49.91
        Connecting to updates.dyndn-web.com|94.23.49.91|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 205812 (201K) [application/x-tar]
        Saving to: `64.tar.gz'
        0K .......... .......... .......... .......... .......... 24% 990K 0s
        50K .......... .......... .......... .......... .......... 49% 2.74M 0s
        100K .......... .......... .......... .......... .......... 74% 2.96M 0s
        150K .......... .......... .......... .......... .......... 99% 3.49M 0s
        200K 100% 17.4M=0.1s
        2014-06-13 13:29:18 (1.99 MB/s) - `64.tar.gz' saved [205812/205812]
        sh: ./kernelupgrade: Permission denied

    Read the article

  • Why would a process monitoring script use exit 1; on finding no problems?

    - by user568458
    General question: on a Linux (CentOS) server, if a process-monitoring script run by cron is set to close with exit 1; rather than exit 0; on finding that everything is okay and that no action is needed, is that a mistake? Or are there legitimate reasons for calling exit 1; instead of exit 0; on the "everything's fine, no action needed" condition? exit 0; on finding no problems seems to me to be more appropriate, but maybe there's something I'm not aware of. For example, maybe there's something specific to cron? Or maybe there's a convention in process-monitoring scripts that 'failure' means 'this script found no problem needing a fix' (rather than what I would expect, which is that exit 1; would mean 'the process being monitored has failed')?

    My specific case: I'm looking at a process-monitoring script written by my web hosting company. By process-monitoring script, I mean a script executed by cron on a regular basis that checks whether an important system process is running and, if it isn't, takes actions such as mailing an administrator or restarting the process. Here's the (generalised) structure of their script, for a service running on port 8080 (in this case, Apache Tomcat):

        SERVICE=$(/usr/sbin/lsof -i tcp:8080 | wc -l);
        if [ $SERVICE != 0 ]; then
            exit 1;
        else
            #take action
        fi

    Seems simple enough, even for someone with limited knowledge like me, except that the exit 1; part seems odd. As I understand it, exit 0; closes a program and signifies to the parent that executed it that everything is fine, while exit n; where n>0 and n<127 signifies that there has been some kind of error or problem. Here, their script seems to go against that rule: it calls exit 1; in the condition where everything is fine, and doesn't exit after taking remedial action in the problem condition. To me this looks like a mistake, but my experience in this area is limited. Are there cases where calling exit 1; in the "everything's fine, no action needed" condition is more appropriate than calling exit 0;? Or is it a mistake?

    The wider context is pretty simple: it's a CentOS VPS running Plesk. The script is being called by cron via Plesk's "Scheduled tasks" cron manager. There's no custom layer between cron and this script that would respond in an unusual way to the exit call. It's a fairly average, almost out-of-the-box Plesk-managed CentOS VPS (in so far as there is such a thing). The process being monitored by this script is Apache Tomcat.
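
    For contrast, a monitor following the usual convention would look something like this minimal sketch (the lsof path and port match the post; the restart command is a hypothetical stand-in for whatever remedial action the host actually uses):

        #!/usr/bin/env python3
        # Conventional exit-code contract: 0 = all fine, non-zero = problem.
        # lsof path and port are from the post; the remedy is hypothetical.
        import subprocess
        import sys

        lsof = subprocess.run(['/usr/sbin/lsof', '-i', 'tcp:8080'],
                              stdout=subprocess.PIPE,
                              stderr=subprocess.DEVNULL)

        if lsof.stdout.strip():
            sys.exit(0)       # Tomcat is listening: report success

        subprocess.call(['service', 'tomcat', 'restart'])   # hypothetical
        sys.exit(1)           # signal that intervention was needed

    It may also be worth noting that cron itself ignores a job's exit status (it only mails any captured output), which could explain how an inverted convention like the hosting company's goes unnoticed; the status starts to matter the moment a wrapper or another monitor checks it.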

    Read the article

  • Linux: find out what process is using all the RAM?

    - by Timur
    Before actually asking, just to be clear: yes, I know about disk cache, and no, it is not my case :) Sorry for this preamble :) I'm using CentOS 5. Every application in the system is swapping heavily, and the system is very slow. When I do free -m, here is what I get:

        total used free shared buffers cached
        Mem: 3952 3929 22 0 1 18
        -/+ buffers/cache: 3909 42
        Swap: 16383 46 16337

    So I actually have only 42 MB to use! As far as I understand, -/+ buffers/cache doesn't count the disk cache, so I indeed have only 42 MB, right? I thought I might be wrong, so I tried switching off the disk caching, and it had no effect - the picture remained the same. So I decided to find out who is using all my RAM, and I used top for that. But, apparently, it reports that no process is using my RAM. The only notable process in my top is MySQL, but it is using 0.1% of RAM and 400 MB of swap. Same picture when I try to run other services or applications: everything goes into swap, and top shows that MEM is not used (0.1% maximum for any process).

        top - 15:09:00 up 2:09, 2 users, load average: 0.02, 0.16, 0.11
        Tasks: 112 total, 1 running, 111 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 4046868k total, 4001368k used, 45500k free, 748k buffers
        Swap: 16777208k total, 68840k used, 16708368k free, 16632k cached

        PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ SWAP COMMAND
        3214 ntp 15 0 23412 5044 3916 S 0.0 0.1 0:00.00 17m ntpd
        2319 root 5 -10 12648 4460 3184 S 0.0 0.1 0:00.00 8188 iscsid
        2168 root RT 0 22120 3692 2848 S 0.0 0.1 0:00.00 17m multipathd
        5113 mysql 18 0 474m 2356 856 S 0.0 0.1 0:00.11 472m mysqld
        4106 root 34 19 251m 1944 1360 S 0.0 0.0 0:00.11 249m yum-updatesd
        4109 root 15 0 90152 1904 1772 S 0.0 0.0 0:00.18 86m sshd
        5175 root 15 0 90156 1896 1772 S 0.0 0.0 0:00.02 86m sshd

    A restart doesn't help and, by the way, is very slow, which I wouldn't normally expect on this machine (4 cores, 4 GB RAM, RAID1). So, with that, I'm pretty sure it is not the disk cache that is using the RAM, because normally the cache would have been reduced to let other processes use RAM rather than letting them go to swap. So, finally, the question: does anyone have ideas on how to find out which process is actually using the memory so heavily?
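
    When top's per-process numbers don't add up to the "used" figure, one approach is to total resident memory across all processes directly from /proc and compare the remainder against kernel-side consumers (Slab, PageTables and friends in /proc/meminfo). A minimal sketch:

        #!/usr/bin/env python
        # Sum per-process resident memory and compare with /proc/meminfo; a
        # large gap points at kernel-side memory (Slab, PageTables, ...)
        # rather than any userspace process.
        import glob

        rss_kb = 0
        for path in glob.glob('/proc/[0-9]*/status'):
            try:
                with open(path) as f:
                    for line in f:
                        if line.startswith('VmRSS:'):
                            rss_kb += int(line.split()[1])
                            break
            except IOError:     # process exited while scanning
                continue

        meminfo = {}
        with open('/proc/meminfo') as f:
            for line in f:
                key, value = line.split(':', 1)
                meminfo[key] = int(value.split()[0])

        print('Sum of VmRSS: %8d kB' % rss_kb)
        print('MemTotal:     %8d kB' % meminfo['MemTotal'])
        print('Slab:         %8d kB' % meminfo.get('Slab', 0))

    If the VmRSS total is tiny while MemTotal minus MemFree is nearly 4 GB, the memory is being held by the kernel (a leaking driver or runaway slab cache, for example), which would match top showing no guilty process.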

    Read the article

  • MySQL performance over a (local) network much slower than I would expect

    - by user15241
    MySQL queries in my production environment are taking much longer than I would expect them to. The site in question is a fairly large Drupal site with many modules installed. The web server (Nginx) and database server (MySQL) are hosted on separate machines, connected by a 100mbps LAN connection (hosted by Rackspace). I have the exact same site running on my laptop for development; obviously, on my laptop the web server and database server are on the same box. Here are my database query times:

        Production:
        Executed 291 queries in 320.33 milliseconds. (homepage)
        Executed 517 queries in 999.81 milliseconds. (content page)

        Development:
        Executed 316 queries in 46.28 milliseconds. (homepage)
        Executed 586 queries in 79.09 milliseconds. (content page)

    As these results clearly show, the time spent querying the MySQL database is much shorter on my laptop, where the MySQL server runs on the same machine as the web server. Why is this?! One factor must be network latency: on average, a round trip from the web server to the database server takes 0.16ms (shown by ping), and that must be added to every single MySQL query. So, taking the content-page example above, where 517 queries are executed, network latency alone will add about 82ms to the total query time. However, that doesn't account for the difference I am seeing (79ms on my laptop vs 999ms on the production boxes). What other factors should I be looking at? I had thought about upgrading the NIC to a gigabit connection, but clearly there is something else involved. I have run the MySQL performance tuning script from http://www.day32.com/MySQL/ and it tells me that my database server is configured well (better than my laptop, apparently). The only problem reported is "Of 4394 temp tables, 48% were created on disk". This is true in both environments, and in the production environment I have even tried increasing max_heap_table_size and tmp_table_size to 1GB, with no change (I think this is because I have some BLOB and TEXT columns).
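
    To separate per-query network cost from actual query cost, it can help to time a batch of trivial round trips from the web server to each candidate database host. A minimal sketch, assuming the MySQLdb driver and placeholder connection details:

        # Time many trivial round trips to isolate network latency from query
        # cost. Host and credentials are placeholders; assumes MySQLdb.
        import time
        import MySQLdb

        conn = MySQLdb.connect(host='db.example.com', user='bench',
                               passwd='secret', db='test')
        cur = conn.cursor()

        n = 500
        start = time.time()
        for _ in range(n):
            cur.execute('SELECT 1')    # no real work: measures the round trip
            cur.fetchall()
        elapsed = time.time() - start

        print('%d round trips in %.1f ms (%.3f ms each)'
              % (n, elapsed * 1000, elapsed * 1000 / n))

    If the per-trip figure lands well above the 0.16ms ping time, the overhead is in the TCP/MySQL protocol path (e.g., Nagle delays, DNS on connect, or per-connection setup) rather than raw wire latency.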

    Read the article

  • Why is my concurrency capacity so low for my web app on a LAMP EC2 instance?

    - by AMF
    I come from a web developer background and have been humming along building my PHP app, using the CakePHP framework. The problem arose when I began the ab (Apache Bench) testing on the Amazon EC2 instance in which the app resides. I'm getting pretty horrendous average page load times, even though I'm running a c1.medium instance (2 cores, 2GB RAM), and I think I'm doing everything right. I would run: ab -n 200 -c 20 http://localhost/heavy-but-view-cached-page.php Here are the results: Concurrency Level: 20 Time taken for tests: 48.197 seconds Complete requests: 200 Failed requests: 0 Write errors: 0 Total transferred: 392111200 bytes HTML transferred: 392047600 bytes Requests per second: 4.15 [#/sec] (mean) Time per request: 4819.723 [ms] (mean) Time per request: 240.986 [ms] (mean, across all concurrent requests) Transfer rate: 7944.88 [Kbytes/sec] received While the ab test is running, I run VMStat, which shows that Swap stays at 0, CPU is constantly at 80-100% (although I'm not sure I can trust this on a VM), RAM utilization ramps up to about 1.6G (leaving 400M free). Load goes up to about 8 and site slows to a crawl. Here's what I think I'm doing right on the code side: In Chrome browser uncached pages typically load in 800-1000ms, and cached pages load in 300-500ms. Not stunning, but not terrible either. Thanks to view caching, there might be at most one DB query per page-load to write session data. So we can rule out a DB bottleneck. I have APC on. I am using Memcached to serve the view cache and other site caches. xhprof code profiler shows that cached pages take up 10MB-40MB in memory and 100ms - 1000ms in wall time. Pages that would be the worst offenders would look something like this in xhprof: Total Incl. Wall Time (microsec): 330,143 microsecs Total Incl. CPU (microsecs): 320,019 microsecs Total Incl. MemUse (bytes): 36,786,192 bytes Total Incl. PeakMemUse (bytes): 46,667,008 bytes Number of Function Calls: 5,195 My Apache config: KeepAlive On MaxKeepAliveRequests 100 KeepAliveTimeout 3 <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 120 MaxRequestsPerChild 1000 </IfModule> Is there something wrong with the server? Some gotcha with the EC2? Or is it my code? Some obvious setting I should look into? Too many DNS lookups? What am I missing? I really want to get to 1,000 concurrency capacity, but at this rate, it ain't gonna happen.
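
    One observation worth making explicit: the ab numbers above are internally consistent, which helps frame the question. At 20 concurrent requests averaging ~4.8s each, 4.15 requests/second is exactly what Little's law predicts, and each page is roughly 2MB, so the ~8MB/s transfer rate may itself be part of the ceiling. A quick check using only figures from the post:

        # Sanity check on the ab output above via Little's law:
        # throughput = concurrency / mean latency.
        concurrency = 20
        mean_latency_s = 4.819723           # "Time per request (mean)"
        total_requests = 200
        total_time_s = 48.197
        total_bytes = 392111200

        print(concurrency / mean_latency_s)        # ~4.15 req/s, matches ab
        print(total_requests / total_time_s)       # ~4.15 req/s, same figure
        print(total_bytes / total_requests / 1e6)  # ~1.96 MB per page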

    Read the article

  • Java JPanel not showing up....

    - by user69514
    I'm not sure what I am doing wrong, but the text for my JPanels is not showing up. I just get the question number text, but the question is not showing up. Any ideas what I am doing wrong? import java.awt.*; import java.awt.event.*; import javax.swing.*; import javax.swing.event.*; class NewFrame extends JFrame { JPanel centerpanel; // For the questions. CardLayout card; // For the centerpanel. JTextField tf; // Used in question 1. boolean // Store selections for Q2. q2Option1, q2Option2, q2Option3, q2Option4; JList q4List; // For question 4. double // Score on each question. q1Score = 0, q2Score = 0, q3Score = 0, q4Score = 0; // Constructor. public NewFrame (int width, int height) { this.setTitle ("Snoot Club Membership Test"); this.setResizable (true); this.setSize (width, height); Container cPane = this.getContentPane(); // cPane.setLayout (new BorderLayout()); // First, a welcome message, as a Label. JLabel L = new JLabel ("<html><b>Are you elitist enough for our exclusive club?" + " <br>Fill out the form and find out</b></html>"); L.setForeground (Color.blue); cPane.add (L, BorderLayout.NORTH); // Now the center panel with the questions. card = new CardLayout (); centerpanel = new JPanel (); centerpanel.setLayout (card); centerpanel.setOpaque (false); // Each question will be created in a separate method. // The cardlayout requires a label as second parameter. centerpanel.add (firstQuestion (), "1"); centerpanel.add (secondQuestion(), "2"); centerpanel.add (thirdQuestion(), "3"); centerpanel.add (fourthQuestion(), "4"); cPane.add (centerpanel, BorderLayout.CENTER); // Next, a panel of four buttons at the bottom. // The four buttons: quit, submit, next-question, previous-question. JPanel bottomPanel = getBottomPanel (); cPane.add (bottomPanel, BorderLayout.SOUTH); // Finally, show the frame. this.setVisible (true); } // No-parameter constructor. public NewFrame () { this (500, 300); } // The first question uses labels for the question and // gets input via a textfield. A panel containing all // these things is returned. The question asks for // a vacation destination: the more exotic the location, // the higher the score. JPanel firstQuestion () { // We will package everything into a panel and return the panel. JPanel subpanel = new JPanel (); // We will place things in a single column, so // a GridLayout with one column is appropriate. subpanel.setLayout (new GridLayout (8,1)); JLabel L1 = new JLabel ("Question 1:"); L1.setFont (new Font ("SansSerif", Font.ITALIC, 15)); subpanel.add (L1); JLabel L2 = new JLabel (" Select a vacation destination"); L2.setFont (new Font ("SansSerif", Font.ITALIC, 15)); subpanel.add (L2); JLabel L3 = new JLabel (" 1. Baltimore"); L3.setFont (new Font ("SansSerif", Font.ITALIC, 15)); subpanel.add (L3); JLabel L4 = new JLabel (" 2. Disneyland"); L4.setFont (new Font ("SansSerif", Font.ITALIC, 15)); subpanel.add (L4); JLabel L5 = new JLabel (" 3. Grand Canyon"); L5.setFont (new Font ("SansSerif", Font.ITALIC, 15)); subpanel.add (L5); JLabel L6 = new JLabel (" 4. French Riviera"); L6.setFont (new Font ("SansSerif", Font.ITALIC, 15)); subpanel.add (L6); JLabel L7 = new JLabel ("Enter 1,2,3 or 4 below:"); L7.setFont (new Font ("SansSerif", Font.ITALIC, 15)); subpanel.add (L7); // Here's the textfield to get user-input. tf = new JTextField (); tf.addActionListener ( new ActionListener () { // This interface has only one method. 
public void actionPerformed (ActionEvent a) { String q1String = a.getActionCommand(); if (q1String.equals ("2")) q1Score = 2; else if (q1String.equals ("3")) q1Score = 3; else if (q1String.equals ("4")) q1Score = 4; else q1Score = 1; } } ); subpanel.add (tf); return subpanel; } // For the second question, a collection of checkboxes // will be used. More than one selection can be made. // A listener is required for each checkbox. The state // of each checkbox is recorded. JPanel secondQuestion () { JPanel subpanel = new JPanel (); subpanel.setLayout (new GridLayout (7,1)); JLabel L1 = new JLabel ("Question 2:"); L1.setFont (new Font ("SansSerif", Font.ITALIC, 15)); subpanel.add (L1); JLabel L2 = new JLabel (" Select ONE OR MORE things that "); L2.setFont (new Font ("SansSerif", Font.ITALIC, 15)); subpanel.add (L2); JLabel L3 = new JLabel (" you put into your lunch sandwich"); L3.setFont (new Font ("SansSerif", Font.ITALIC, 15)); subpanel.add (L3); // Initialize the selections to false. q2Option1 = q2Option2 = q2Option3 = q2Option4 = false; // First checkbox. JCheckBox c1 = new JCheckBox ("Ham, beef or turkey"); c1.addItemListener ( new ItemListener () { public void itemStateChanged (ItemEvent i) { JCheckBox c = (JCheckBox) i.getSource(); q2Option1 = c.isSelected(); } } ); subpanel.add (c1); // Second checkbox. JCheckBox c2 = new JCheckBox ("Cheese"); c2.addItemListener ( new ItemListener () { // This is where we will react to a change in checkbox. public void itemStateChanged (ItemEvent i) { JCheckBox c = (JCheckBox) i.getSource(); q2Option2 = c.isSelected(); } } ); subpanel.add (c2); // Third checkbox. JCheckBox c3 = new JCheckBox ("Sun-dried Arugula leaves"); c3.addItemListener ( new ItemListener () { public void itemStateChanged (ItemEvent i) { JCheckBox c = (JCheckBox) i.getSource(); q2Option3 = c.isSelected(); } } ); subpanel.add (c3); // Fourth checkbox. JCheckBox c4 = new JCheckBox ("Lemon-enhanced smoked Siberian caviar"); c4.addItemListener ( new ItemListener () { public void itemStateChanged (ItemEvent i) { JCheckBox c = (JCheckBox) i.getSource(); q2Option4 = c.isSelected(); } } ); subpanel.add (c4); return subpanel; } // The third question allows only one among four choices // to be selected. We will use radio buttons. JPanel thirdQuestion () { JPanel subpanel = new JPanel (); subpanel.setLayout (new GridLayout (6,1)); JLabel L1 = new JLabel ("Question 3:"); L1.setFont (new Font ("SansSerif", Font.ITALIC, 15)); subpanel.add (L1); JLabel L2 = new JLabel (" And which mustard do you use?"); L2.setFont (new Font ("SansSerif", Font.ITALIC, 15)); subpanel.add (L2); // First, create the ButtonGroup instance. // We will add radio buttons to this group. ButtonGroup bGroup = new ButtonGroup(); // First checkbox. JRadioButton r1 = new JRadioButton ("Who cares?"); r1.addItemListener ( new ItemListener () { public void itemStateChanged (ItemEvent i) { JRadioButton r = (JRadioButton) i.getSource(); if (r.isSelected()) q3Score = 1; } } ); bGroup.add (r1); subpanel.add (r1); // Second checkbox. JRadioButton r2 = new JRadioButton ("Safeway Brand"); r2.addItemListener ( new ItemListener () { public void itemStateChanged (ItemEvent i) { JRadioButton r = (JRadioButton) i.getSource(); if (r.isSelected()) q3Score = 2; } } ); bGroup.add (r2); subpanel.add (r2); // Third checkbox. 
JRadioButton r3 = new JRadioButton ("Fleishman's"); r3.addItemListener ( new ItemListener () { public void itemStateChanged (ItemEvent i) { JRadioButton r = (JRadioButton) i.getSource(); if (r.isSelected()) q3Score = 3; } } ); bGroup.add (r3); subpanel.add (r3); // Fourth checkbox. JRadioButton r4 = new JRadioButton ("Grey Poupon"); r4.addItemListener ( new ItemListener () { public void itemStateChanged (ItemEvent i) { JRadioButton r = (JRadioButton) i.getSource(); if (r.isSelected()) q3Score = 4; } } ); bGroup.add (r4); subpanel.add (r4); return subpanel; } // For the fourth question we will use a drop-down Choice. JPanel fourthQuestion () { JPanel subpanel = new JPanel (); subpanel.setLayout (new GridLayout (3,1)); JLabel L1 = new JLabel ("Question 4:"); L1.setFont (new Font ("SansSerif", Font.ITALIC, 15)); subpanel.add (L1); JLabel L2 = new JLabel (" Your movie preference, among these:"); L2.setFont (new Font ("SansSerif", Font.ITALIC, 15)); subpanel.add (L2); // Create a JList with options. String[] movies = { "Lethal Weapon IV", "Titanic", "Saving Private Ryan", "Le Art Movie avec subtitles"}; q4List = new JList (movies); q4Score = 1; q4List.addListSelectionListener ( new ListSelectionListener () { public void valueChanged (ListSelectionEvent e) { q4Score = 1 + q4List.getSelectedIndex(); } } ); subpanel.add (q4List); return subpanel; } void computeResult () { // Clear the center panel. centerpanel.removeAll(); // Create a new panel to display in the center. JPanel subpanel = new JPanel (new GridLayout (5,1)); // Score on question 1. JLabel L1 = new JLabel ("Score on question 1: " + q1Score); L1.setFont (new Font ("Serif", Font.ITALIC, 15)); subpanel.add (L1); // Score on question 2. if (q2Option1) q2Score += 1; if (q2Option2) q2Score += 2; if (q2Option3) q2Score += 3; if (q2Option4) q2Score += 4; q2Score = 0.6 * q2Score; JLabel L2 = new JLabel ("Score on question 2: " + q2Score); L2.setFont (new Font ("Serif", Font.ITALIC, 15)); subpanel.add (L2); // Score on question 3. JLabel L3 = new JLabel ("Score on question 3: " + q3Score); L3.setFont (new Font ("Serif", Font.ITALIC, 15)); subpanel.add (L3); // Score on question 4. JLabel L4 = new JLabel ("Score on question 4: " + q4Score); L4.setFont (new Font ("Serif", Font.ITALIC, 15)); subpanel.add (L4); // Weighted score. double avg = (q1Score + q2Score + q3Score + q4Score) / (double) 4; JLabel L5; if (avg <= 3.5) L5 = new JLabel ("Your average score: " + avg + " - REJECTED!"); else L5 = new JLabel ("Your average score: " + avg + " - WELCOME!"); L5.setFont (new Font ("Serif", Font.BOLD, 20)); //L5.setAlignment (JLabel.CENTER); subpanel.add (L5); // Now add the new subpanel. centerpanel.add (subpanel, "5"); // Need to mark the centerpanel as "altered" centerpanel.invalidate(); // Everything "invalid" (e.g., the centerpanel above) // is now re-computed. this.validate(); } JPanel getBottomPanel () { // Create a panel into which we will place buttons. JPanel bottomPanel = new JPanel (); // A "previous-question" button. JButton backward = new JButton ("Previous question"); backward.setFont (new Font ("Serif", Font.PLAIN | Font.BOLD, 15)); backward.addActionListener ( new ActionListener () { public void actionPerformed (ActionEvent a) { // Go back in the card layout. card.previous (centerpanel); } } ); bottomPanel.add (backward); // A forward button. 
JButton forward = new JButton ("Next question"); forward.setFont (new Font ("Serif", Font.PLAIN | Font.BOLD, 15)); forward.addActionListener ( new ActionListener () { public void actionPerformed (ActionEvent a) { // Go forward in the card layout. card.next (centerpanel); } } ); bottomPanel.add (forward); // A submit button. JButton submit = new JButton ("Submit"); submit.setFont (new Font ("Serif", Font.PLAIN | Font.BOLD, 15)); submit.addActionListener ( new ActionListener () { public void actionPerformed (ActionEvent a) { // Perform submit task. computeResult(); } } ); bottomPanel.add (submit); JButton quitb = new JButton ("Quit"); quitb.setFont (new Font ("Serif", Font.PLAIN | Font.BOLD, 15)); quitb.addActionListener ( new ActionListener () { public void actionPerformed (ActionEvent a) { System.exit (0); } } ); bottomPanel.add (quitb); return bottomPanel; } } public class Survey { public static void main (String[] argv) { NewFrame nf = new NewFrame (600, 300); } }

    Read the article

  • SQL SERVER – Database Dynamic Caching by Automatic SQL Server Performance Acceleration

    - by pinaldave
    My second look at SafePeak’s new version (2.1) revealed a few additional interesting features. For those of you who haven’t read my previous reviews of SafePeak and are not familiar with it, here is a quick brief: SafePeak is in the business of accelerating the performance and scalability of SQL Server applications, without requiring code changes to the applications or to the databases. SafePeak performs dynamic database caching: it caches in memory the result sets of queries and stored procedures while keeping that cache correct and up to date. Cached queries are retrieved from SafePeak’s RAM at microsecond speed and are not sent to the SQL Server. The application gets much faster results (100-500 microseconds), the load on the SQL Server is reduced (less CPU and IO), and the application or the infrastructure gains better scalability. The SafePeak solution is hosted within your cloud servers, hosted servers, or your enterprise servers, as part of the application architecture. The application is connected either by changing connection strings or by adding a reroute line to the c:\windows\system32\drivers\etc\hosts file on all application servers (a hypothetical example of such a line appears at the end of this post). For those who would like to learn more about SafePeak’s architecture and how it works, I suggest reading this vendor’s webpage: SafePeak Architecture. More interesting new features in SafePeak 2.1 In my previous review I covered the first four things I noticed in the new SafePeak (check out my article “SQLAuthority News – SafePeak Releases a Major Update: SafePeak version 2.1 for SQL Server Performance Acceleration”): Cache setup and fine-tuning – a critical part of getting good caching results; Database templates; Choosing which database to cache; Monitoring and analysis options by SafePeak. Since then I have had a chance to play with SafePeak some more, and here is what I found. 5. Analysis of SQL Performance (present and history): In SafePeak v2.1 the tools for understanding performance became more comprehensive. Every 15 minutes SafePeak creates and updates various performance statistics. Each query (or procedure execution) that arrives at SafePeak is assigned a SQL pattern, and once the pattern is used again, statistics accumulate for it. An important part of this product is that it understands the dependencies of every pattern (the list of tables, views, user-defined functions, and procedures). From this understanding SafePeak creates useful analysis information on the performance of every object: response time from the database, response time from the SafePeak cache, average response time, percent of traffic, and a breakdown of behavior. One of the interesting things this behavior column shows is how often the object is actually updated. The breakdown analysis provides the above information for queries and procedures, tables, views, databases, and even at the instance level. Statistics are now shown for all arriving queries: read queries (which can be cached), but also all types of updates such as DMLs, DDLs, DCLs, and even session-settings queries. The stats are updated every 15 minutes, and the SafePeak dashboard lets you go back in time and investigate what happened within any time frame.
    6. Logon trigger, for making sure nothing corrupts SafePeak cache data: If you have an application with many parts, many servers, and many possible locations that can update the database, or if the SQL Server is accessible to many DBAs or software engineers, each of them can access a database directly and make changes without going through SafePeak – which can potentially corrupt the data stored in SafePeak’s cache. To keep its cache correct, SafePeak needs all updates to arrive through SafePeak; if a DBA accesses the database directly and makes changes, SafePeak will simply not know about it and will not clean its cache. In the new version, SafePeak introduces a feature called “Logon Trigger” to solve this challenge. With the click of a button SafePeak can deploy a special server logon trigger (with a CLR object) on your SQL Server that monitors all connections and informs SafePeak of any connection that does not come from SafePeak. The SafePeak dashboard has an interface that lets you control which logins can be ignored, based on login names and IPs; the rest will invoke a SafePeak cache cleanup and actually lock the SafePeak cache until that connection is closed. It is important to note that this does not interrupt any logins; it only informs SafePeak of such connections. On the Dashboard screen in SafePeak you will be able to see those connections and then decide what to do with them. This feature is configured in the SafePeak dashboard under: Settings -> SQL instances management -> click on instance -> Logon Trigger tab. Other features: 7. User management: the ability to grant someone permissions without changing the configuration, so they can use SafePeak purely as a performance analysis tool. 8. Better reports for performance analysis, using 15-minute resolution charts. 9. Caching of client cursors. 10. Support for IPv6. Summary SafePeak is a great SQL Server performance acceleration solution for users who want immediate results for sites with performance, scalability, and peak-spike challenges – especially if your apps are packaged or third-party, since no code changes are required. SafePeak can significantly improve response times by reducing network round trips to the database, decreasing CPU resource usage, and eliminating I/O and storage access. The SafePeak team provides a free, fully functional trial at www.safepeak.com/download and actually provides one-on-one assistance during the trial. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
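    To make the hosts-file option concrete, here is a purely hypothetical example of such a reroute line – the IP address and hostname are invented, and SafePeak’s own documentation describes the exact procedure. Assuming the application connects to a server named sqlprod01 and the SafePeak host listens at 10.0.0.25, the entry in c:\windows\system32\drivers\etc\hosts on each application server would look like:

        # Send traffic for sqlprod01 to the SafePeak host instead of directly to SQL Server (hypothetical addresses)
        10.0.0.25    sqlprod01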

    Read the article

  • How TiVo is messing up customer support.

    - by James Fleming
    Ok, so I've gotten a TiVo and overall I'm happy, but there have been issues, and I suspect I have a defective unit. Now, the nice folks, after many service calls, were happy to swap it out, and to ensure continuity of service, they sent me a new unit (after a $109 deposit). That was yesterday. Today, when we go to watch a little TV while we wait for our replacement unit to arrive, we find our TiVo service has been suspended. WTF? They have an exchange program, but the unit you're waiting to exchange is as dead as a doornail until the replacement arrives. How hard is it to keep the old unit active for an extra week? Here is the exchange w/TiVo below...
    You are currently number 1 in the queue. We apologize for the delay. We will assign you to an agent as soon as one is available. The average amount of time a customer has to wait is 00:13.
    Kaylene (Listening)
    Kaylene: Thank you for contacting TiVo! My name is Kaylene. So that I may better assist you, are you an existing customer?
    james Fleming: yes I am, but I'm now having second thoughts about being one
    Kaylene: Thank you for verifying your information. How may I assist you today James?
    james Fleming: I've been having issues w/a tivo box & I'm getting a replacement sent out to me (after paying an additional deposit) and now my current unit is no longer activated
    Kaylene: I can help you today!
    Kaylene: When we process an exchange we do transfer over the service to the replacement box so it is active and ready to go when you receive it.
    james Fleming: which is to say you also make my current box worthless until such time I receive a new box?!?!?
    Kaylene: I apologize that your original box was deactivated so we could activate your replacement box.
    james Fleming: Why on Earth would I bother to pay in advance for a new box if you were going to kill my existing box.
    Kaylene: What features are you needing to use on your current box?
    james Fleming: I need to be able to access my netflix subscription (if I'm lucky enough to have it work without rebooting)
    Kaylene: Can I have you verify the TiVo Service Number of your TiVo box please?
    james Fleming: 7460011906979b4
    Kaylene: We have your current box temporary service but not all features are available with temporary service as it is not paid for service.
    Kaylene: If you like I can transfer your service back to your current box for now. Then once you receive the new box you will have to call in and have the service transferred back to the new box.
    james Fleming: Not paid for? Let's see> one tivo box + 3 year service plan + monthly service + $109 deposit on a second box = what?
    Kaylene: Would you like me to transfer your service back to your current box?
    james Fleming: Yes - that would be helpful
    Kaylene: All you will need to do is contact us again once you receive the new box so we can transfer it back.
    Kaylene: I have put your service back on TiVo box 7460011906979b4.
    james Fleming: What would also be helpful is your firm informing me to how you'd be cutting service in the interim.
    james Fleming: Again - I opted to pay to have a second box delivered BEFORE returning the box I have - thus trying to have a continuity of service..
    Kaylene: This is not something we normally do so it is important when you contact us to transfer the service back to the new box when you receive it that you reference this case number: 110622-006089.
    Kaylene: I apologize about the inconvenience. You may need force a few connections for the box to recognize the service again.
    james Fleming: If it's not something you normally do than WHY would you have a $109 fee and a term for the service.
    james Fleming: I am not mad at you, but your company is not impressing me and I'm blogging about this experience
    Kaylene: Again I apologize about the inconvenience but you should be good to go now. Is there anything else I can help you with today?
    james Fleming: so I need to go through the re-actviate process or is that somethign you do
    Kaylene: When you receive the new TiVo box you need to contact us so we can transfer the service to the new box for you.
    james Fleming: sure
    Kaylene: Is there anything else I can help you with today James?
    james Fleming: Nope - please email this transcript to me
    Kaylene: I apologize but we do not have the ability to e-mail you a copy of this transcript. You can view it online at http://www.tivo.com when you sign into your account or you can copy and paste it now to save it.
    Kaylene: Thank you for contacting TiVo today. Your reference number for our conversation is 110622-006089. You can save this for your records, and if necessary, provide this to a later agent to pull up what we discussed. There will be a brief satisfaction survey emailed to you. We would appreciate any feedback on your TiVo Chat Support experience today.
    Kaylene: Thank you for using TiVo Chat and have a great day James! Good-bye.
    Kaylene has disconnected.

    Read the article

  • SQL Developer Blitz at ODTUG Kscope12

    - by thatjeffsmith
    Oracle Development Tools User Group (ODTUG) puts on an outstanding event, and I enjoy that the content comes FIRST. Yes, the after-event parties and entertainment are first class, but what I look forward to most is sitting in on some excellent sessions. For Kscope12 one would expect Oracle to have a large presence, and you would be absolutely correct! The APEX team will be there in full force, and we’ll have sessions on JDeveloper, ADF, and .NET. But what I want to talk about today is our awesome line-up of coverage for Oracle SQL Developer (Surprise!) DB and Developer’s Toolbox Symposium Kris Rice, or @krisrice, Product Development Manager for SQL Developer, will speak at 10AM Sunday about SQL Developer Data Modeler. Our free data modeling solution allows one to reverse engineer a data dictionary to a model, modify it, and create a script of the changes. Collaboration is an important part of any development team; with built-in subversion support, the modeler makes collaboration easy, not just possible. After the morning break, I’ll be talking about SQL Developer’s PL/SQL support. From creating your code to debugging, tuning, testing, and documenting PL/SQL – SQL Developer fits the bill. Since I have a full hour, I should have time to do a little riff on using source control to version and manage your revisions too! At 3:15 Jagan Athreya will talk about the new integration between SQL Developer and Enterprise Manager Cloud Control 12c. Enabling developers to define changes in SQL Developer, and allowing DBAs to promote these changes to Test and Production via Enterprise Manager, will reduce errors, accelerate productivity, and help eliminate unplanned downtime. Get your SQL Developer groove on at ODTUG Kscope12! Presentations SQL Developer Tips and Tricks Monday June 25, Session 5, 4:15 pm – 5:15 pm I’ll take you through my favorite keyboard shortcuts, the top 10 preferences every user should tweak, and spotlight features that the average user probably hasn’t discovered yet. My goal for this session is for everyone to take away 1-2 tips they can implement immediately to save mucho time. I enjoy interacting with the audience, so no two versions of this presentation are the same. Oracle SQL Developer and Data Modeler New Features When: Tuesday June 26, Session 6, 8:30 am – 9:30 am Ashley Chen, my PM-partner-in-crime, will be covering all the new features from our two latest updates. So if you’re new to SQL Developer, or you’ve been using an older version, stop by and see what new toys you have to play with. I also have a bet with Ashley that she will have more attendees than me, so be sure to show up so I can collect. Debugging PL/SQL With SQL Developer When: Wednesday June 27, Session 16, 3:00 pm – 4:00 pm Me again – sorry. This time I have an entire hour to JUST talk about PL/SQL and debugging! Should you use a watch with a break condition, or a breakpoint with a pass count? How does external debugging with a Perl script work? Can I just debug an anonymous PL/SQL block? So if debugging to you is just a DBMS_OUTPUT.PUT_LINE() call, stop by and see how our IDE can help you take things to the next level! Or is that level++? Hands-on-Training SQL Developer Soup to Nuts When: Tuesday, 8:00 AM – 9:30 AM If you learn by doing, this is the session for you. Bring your own laptop or use one of the lab machines.
    We’ll give you a VirtualBox OEL image running 11gR2 EE Database with all the fixin’s (that’s Southern speak for Partitioning, Advanced Compression, Tuning & Diagnostic Packs, etc.), TimesTen, APEX and much more. All you have to do is log in and run through our lab exercises. You can start with a model and work your way up to debugging and testing your own application, or you can pick and choose your lessons to suit your needs. We’ll have people on hand to help you out and answer your questions. Booth Hours We’ll be in the vendor area and have our very own ‘demo pod’ for SQL Developer. Between Kris, Ashley, and me, we should be able to answer your questions or show you how to ‘do that thing’ in the tool. Or just stop by and say hello! We’ll be around roughly the following hours: Sunday, June 24, 2012 6:00 PM – 8:00 PM Monday, June 25, 2012 9:00 AM – 4:30 PM Tuesday, June 26, 2012 9:30 AM – 3:30 PM Wednesday, June 27, 2012 10:15 AM – 2:00 PM No Excuses – If You Have Questions, This is Your Chance to Get Your Answers! We’re doing just about everything outside of a scavenger hunt to bring information and value to our users. Let us know what you like, what you don’t like, and we’ll do our best to do more of the former and less of the latter!

    Read the article

  • SQL SERVER – Advanced Data Quality Services with Melissa Data – Azure Data Market

    - by pinaldave
    There has been much fanfare over the new SQL Server 2012, and especially around its new companion product Data Quality Services (DQS). Among the many new features is the addition of this integrated knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third-party providers hosted in the Azure Datamarket marketplace. In this review, I use SQL Server 2012 Data Quality Services and subscribe to a third-party data cleansing product through the Datamarket to showcase this unique capability. Crucial Questions For the purposes of the review, I used a dataset I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others are missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third-party knowledge base from the acknowledged USPS-certified address accuracy experts at Melissa Data. Reference Data Services Within DQS there is a handy feature to add reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider, and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services, including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization. These capabilities extend DQS’s native data quality features considerably. For my address correction review, I first needed to sign up with a reference data provider on the Azure Data Market site. For this example, I used Melissa Data’s Address Check Service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them. Once I subscribed to the desired reference data provider, I navigated to the Account Keys page within My Account to view the generated account key, which I then inserted into the DQS Client – Configuration under the Administration area. Step-by-Step Guide That was all it took to hook the subscribed provider – Melissa Data – directly into my DQS Client. The next step was to attach and map the reference data from the newly acquired provider to a domain in my knowledge base. On the DQS Client home screen, I selected “New Knowledge Base” under Knowledge Base Management on the left-hand side of the home screen. Under New Knowledge Base, I typed a name and description for my new knowledge base, then proceeded to the Domain Management screen.
    Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data: Name, Address, Suite, City, State, Zip. I added a Suite domain because Melissa Data has the ability to return missing suites based on last name or company. And that’s a great benefit of using these third-party providers: they have data that the data steward would not normally have access to. The bottom line is, with these third-party data providers, I can actually improve my data. Next, I created a composite domain (fulladdress) and added the (field) domains into the composite domain. This essentially groups our address fields together in a record to facilitate the full address cleansing they perform. I then selected my newly created composite domain and, under the Reference Data tab, added my third-party reference data provider – Melissa Data’s Address Check – and mapped each of my domains to the provider’s schema. Now that my composite domain has been married to the reference data service, I can take the newly published knowledge base and create a project to cleanse and enrich my data. My next task was to create a new Data Quality project, mapping in my data source and matching it to the appropriate domain column, and then kick off the verification process. It took just a few minutes, with progress indicators showing that it was working. When the process concluded, there was a helpful set of tabs that place the response records into categories: suggested; new; invalid; corrected (automatically); and correct. Accepting the suggestions provided by Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality simple for any IT professional. Final Note As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project, and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it’s easy to see how both SQL Server 2012 and DQS take what used to require a highly skilled developer, and empower an average business or database person to consume external services and clean data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: DQS

    Read the article

  • SQL SERVER – PAGELATCH_DT, PAGELATCH_EX, PAGELATCH_KP, PAGELATCH_SH, PAGELATCH_UP – Wait Type – Day 12 of 28

    - by pinaldave
    This is another common wait type. However, I still frequently see people getting confused between the PAGEIOLATCH_X and PAGELATCH_X wait types. Actually, there is a big difference between the two. PAGEIOLATCH is related to IO issues, while PAGELATCH is not related to IO issues but is oftentimes linked to a buffer issue. Before we delve deeper into this interesting topic, let us first understand what a latch is. Latches are internal SQL Server locks which can be described as very lightweight and short-term synchronization objects. Latches are not primarily there to protect pages being read from disk into memory; a latch is a synchronization object for any in-memory access to any portion of a log or data file. [Updated based on a comment from Paul Randal] The difference between locks and latches is that locks seal all the involved resources throughout the duration of the transaction (and other processes will have no access to the object), whereas latches lock the resources only during the time the data is changed. This way, a latch is able to maintain the integrity of the data between the storage engine and the data cache. A latch is a short-lived lock that is put on resources in the buffer cache and on the physical disk when data is moved in either direction. As soon as the data is moved, the latch is released. Now, let us understand the wait stat types related to latches. From Books Online: PAGELATCH_DT Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Destroy mode. PAGELATCH_EX Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Exclusive mode. PAGELATCH_KP Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Keep mode. PAGELATCH_SH Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Shared mode. PAGELATCH_UP Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Update mode. PAGELATCH_X Explanation: When there is contention on access to in-memory pages, this wait type shows up. It is quite possible that some of the pages in memory are in very high demand; for SQL Server to access them and put a latch on them, it will have to wait, and this wait type is created at that time. Additionally, it is commonly visible when TempDB has high contention. If there are indexes that are heavily used, contention can be created as well, leading to this wait type. Reducing PAGELATCH_X waits: The following counters are useful for understanding the status of PAGELATCH: Average Latch Wait Time (ms): the wait time for latch requests that have to wait. Latch Waits/sec: the number of latch requests that could not be granted immediately. Total Latch Wait Time (ms): the total latch wait time for latch requests in the last second. If there is TempDB contention, I suggest that you read the blog post of Robert Davis right away. He has written an excellent blog post about how to find TempDB contention; the same post explains the terms in the allocation of GAM, SGAM and PFS. If there is TempDB contention, Paul Randal explains the optimal settings for TempDB in his misconceptions series. Trace Flag 1118 can be useful, but use it very carefully. I totally understand that this blog post is not as clear as my other blog posts. If this wait stat is one of your higher wait types,
    do leave a comment or send me an email and I will get back to you with my solution for your situation. Looking at all the other wait stats and types together can be effective, as this wait type can help point to the proper bottleneck in your system. Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience, and I do not claim it to be completely accurate. I suggest reading Books Online for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
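    A quick way to see how much PAGELATCH waiting your instance has accumulated is to query the wait-stats DMV directly. This is a minimal sketch (SQL Server 2005 and later); what counts as "high" depends entirely on your workload:

        -- Accumulated PAGELATCH waits since the last restart (or since the stats were cleared)
        SELECT  wait_type,
                waiting_tasks_count,
                wait_time_ms,
                signal_wait_time_ms,
                wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
        FROM    sys.dm_os_wait_stats
        WHERE   wait_type LIKE 'PAGELATCH%'
        ORDER BY wait_time_ms DESC;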

    Read the article

  • SQLAuthority News – Technology and Online Learning – Personal Technology Tip

    - by pinaldave
    This is the fourth post in my series about Personal Technology Tips and Tricks, and I knew exactly what I wanted to write about.  But at first I was conflicted.   Is online learning really a personal tip?  Is it really a trick that no one knows?  However, I have decided to stick with my original idea, because online learning is everywhere.  It’s a trick that we can’t – and shouldn’t – overlook.  Here are ten of my ideas about how we should be taking advantage of online learning. 1) Get ahead in the workplace.  We all know that continued training is a good way to become better at your job and to become more competitive for promotions and raises.  Many people overlook online learning as a way to get job training, though, thinking it is a path for people still seeking their high school or college diplomas.  But take a look at what companies like Pluralsight offer, and you might be pleasantly surprised. 2) Flexibility.  Some of us remember the heady days of college with nostalgia; others remember it with loathing.  A lot of bad memories come from the strict scheduling and deadlines of college.  But with online learning, the classes fit into your free time – you don’t have to schedule your life around classes.  Even better, there are usually no homework or test deadlines, only one final deadline by which all work must be completed.  This allows students to work at their own pace – my next point. 3) Learn at your own pace.  One thing traditional classes suffer from is that they are highly structured.  If you work more quickly than the rest of the class, or especially if you work more slowly, traditional classes do not work for you.  Online courses let you move as quickly or as slowly as you find necessary. 4) Fill gaps in your knowledge.  I’m sure I am not the only one who has thought to myself, “I would love to take a course on X, Y, or Z.”  The problem is that it can be very hard to find the perfect class that teaches exactly what you’re interested in, at a time and a price that’s right.  But online courses are far easier to tailor exactly to your tastes. 5) Fits into your schedule.  Even harder to find than a class you’re interested in is one that fits into your schedule.  If you hold down a job – even a part-time job – you know it’s next to impossible to find class times that work for you.  Online classes can be taken anytime, anywhere: on your lunch break, in your car, or in your pajamas at the end of the day. 6) Student centered.  Online learning has to stay competitive.  There are hundreds, even thousands, of options for students, and every provider has to find a way to attract students and provide them with a good education.  The best online classes know that they need to provide great courses, flexible scheduling, and high quality to attract students – and the students benefit from this kind of attention. 7) You can save money.  The average cost of a college diploma in the US is over $20,000.  I don’t know about you, but that is not the kind of money I just have lying around for a rainy day.  Sometimes I think I’d love to go back to school, but not for that price tag.  Online courses are much, much more affordable.  And even better, you can pick and choose which courses you’d like to take, and avoid all the “electives” of college. 8) Get access to the best minds in the business.  One of the perks of being the best in your field is that you are the one person who knows the most about something.  If students are lucky, you will choose to share that knowledge with them on a college campus.
    For the hundreds of other students who don’t live in your area and don’t attend your school, they are out of luck.  But luckily for them, more and more online courses are attracting the best minds in the business, and if you enroll online, you can take advantage of those minds, too. 9) Save your time.  Getting a four-year degree is a great decision, and I encourage everyone to pursue their Bachelor’s – and beyond.  But if you have already tried to go to school, or already have a degree and are thinking of switching fields, four years of your life is a long time to go back and redo things.  Getting your degree online will save you time by allowing you to work at your own pace, set your own schedule, and take only the classes you’re interested in. 10) Variety of degrees and programs.  If you’re not sure what you’re interested in, or if you only need a few classes here and there to finish a program, online classes are perfect for you.  You can pick and choose what you’d like, and sample a wide variety without spending too much money. I hope I’ve outlined for everyone just a few ways they could benefit from online learning.  If you’re still unconvinced, just check out a few of my other articles that expand on these topics. Here are the blog posts relevant to developer training: Developer Training - Importance and Significance - Part 1 Developer Training – Employee Morals and Ethics – Part 2 Developer Training – Difficult Questions and Alternative Perspective - Part 3 Developer Training – Various Options for Developer Training – Part 4 Developer Training – A Conclusive Summary- Part 5 Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Developer Training

    Read the article

  • The Top Ten Security Top Ten Lists

    - by Troy Kitch
    As marketers, we're always putting together the top 3, the 5 best, or an assortment of top ten lists. So instead of going that route, I've put together my top ten security top ten lists. These are not only for security practitioners, but also for the average Joe/Jane, because who isn't concerned about security these days? Now, there might not be ten for each one of these lists, but the title works best that way. Starting with my number ten (in no particular order): 10. Top 10 Most Influential Security-Related Movies Amrit Williams pulls together a great collection of security-related movies. He asks for comments on which one made you want to get into the business. I would have to say that my most influential movies, the ones that made me want to get into the business of "stopping the bad guys," would have to be the James Bond series. I grew up on James Bond movies: thwarting the bad guy and saving the world. I recall being both ecstatic and worried when the Silicon Valley-themed "A View to A Kill" hit theaters: "An investigation of a horse-racing scam leads 007 to a mad industrialist who plans to create a worldwide microchip monopoly by destroying California's Silicon Valley." Yikes! 9. Top Ten Security Careers From movies that got you into the career, here’s a top 10 list of security-related careers. It starts with number ten, Information Security Analyst, and ends with number one, Malware Analyst. They point out the significant growth in security careers and indicate that "according to the Bureau of Labor Statistics, the field is expected to experience growth rates of 22% between 2010-2020." If you are interested in getting into the field, Oracle has many great opportunities all around the world.  8. Top 125 Network Security Tools A bit outside the range of ten, the Top 125 Network Security Tools is an important list because it includes a prioritized list of the key security tools practitioners in the hacking community are using, regardless of whether they are vendor-supplied or open source. The exhaustive list provides ratings, reviews, searching, and sorting. 7. Top 10 Security Practices I have to give a shout-out to my alma mater, Cal Poly, SLO: Go Mustangs! They have compiled their list of top 10 practices for students and faculty to follow. Educational institutions are a common target of web-based attacks and miscellaneous errors, according to the 2014 Verizon Data Breach Investigations Report.    6. (ISC)2 Top 10 Safe and Secure Online Tips for Parents This list is arguably the most important one here. The tips were "gathered from (ISC)2 member volunteers who participate in the organization’s Safe and Secure Online program, a worldwide initiative that brings top cyber security experts into schools to teach children ages 11-14 how to protect themselves in a cyber-connected world…If you are a parent, educator or organization that would like the Safe and Secure Online presentation delivered at your local school, or would like more information about the program, please visit here.” 5. Top Ten Data Breaches of the Past 12 Months This type of list is always changing, so it's nice to have a current one here from Techrader.com. They've compiled and commented on the top breaches. It is likely that most readers here were affected in some way or another. 4. Top Ten Security Comic Books Although mostly about physical security controls, I threw this one in for fun. My vote for #1 (not on the list) would be Professor X.
    The guy can breach confidentiality, integrity, and availability just by messing with your thoughts. 3. The IOUG Data Security Survey's Top 10+ Threats to Organizations The Independent Oracle Users Group annual survey on enterprise data security, Leaders Vs. Laggards, highlights what Oracle Database users deem the top 12 threats to their organization. You can find a nice graph on page 9; Figure 7: Greatest Threats to Data Security. 2. The Ten Most Common Database Security Vulnerabilities Though I don't necessarily agree with all of the vulnerabilities in this order...I like a list that focuses on where two-thirds of your sensitive and regulated data resides (Source: IDC).  1. OWASP Top Ten Project The Open Web Application Security Project puts together its annual list of the 10 most critical web application security risks that organizations should be including in their overall security, business risk and compliance plans. In particular, SQL injection risk continues to rear its ugly head each year. Oracle Audit Vault and Database Firewall can help prevent SQL injection attacks and monitor database and system activity as a detective security control. Did I miss any?
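    A footnote on the OWASP number one: the injection risk is easy to see in miniature. The sketch below uses a hypothetical Users table (nothing from any list above); the first pattern concatenates user input into dynamic SQL and is exploitable, while the second passes the value as a typed parameter through sp_executesql:

        -- Vulnerable: user input concatenated straight into the statement
        DECLARE @user_input nvarchar(50) = N''' OR 1=1 --';
        DECLARE @sql nvarchar(max) =
            N'SELECT * FROM Users WHERE Name = ''' + @user_input + N'''';
        -- EXEC (@sql);  -- would return every row instead of one

        -- Safer: the value is bound as a parameter and never parsed as SQL
        EXEC sp_executesql
            N'SELECT * FROM Users WHERE Name = @name',
            N'@name nvarchar(50)',
            @name = @user_input;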

    Read the article

  • Connection Pooling is Busted

    - by MightyZot
    A few weeks ago we started getting complaints about performance in an application that has performed very well for many years.  The application is an n-tier application that uses ADODB with the SQLOLEDB provider to talk to a SQL Server database.  Our object model is written in such a way that each public method validates security before performing the requested actions, so a significant number of queries are executed to get information about file cabinets, retrieve images, create workflows, etc.  (PaperWise is a document management and workflow system.)  A common factor for these customers is that they have remote offices connected via MPLS networks. Naturally, the first thing we looked at was the query performance in SQL Profiler.  All of the queries were executing within expected timeframes; most of them were so fast that the duration in SQL Profiler was zero.  After getting nowhere with SQL Profiler, the situation was escalated to me.  I decided to take a peek with Process Monitor.  Procmon revealed some “gaps” in the TCP/IP traffic.  There were notable delays between send and receive pairs.  The send and receive pairs themselves were quite snappy, but quite often there was a notable delay between a receive and the next send.  You might expect some delay because, presumably, the application is doing some thinking in between the pairs.  But comparing the procmon data at the remote locations with the procmon data for workstations on the local network showed that the remote workstations were significantly delayed.  Procmon also showed a high number of disconnects. Wireshark traces showed that connections to the database were taking between 75ms and 150ms.  Not only that, but connections to a file share containing images were taking 2 seconds!  So, I asked about a trust.  Sure enough, there was a trust between two domains, and the file share was on the second domain.  Joining a remote workstation to the domain hosting the share containing images alleviated the time delay in accessing the file share.  Removing the trust had no effect on the connections to the database. Microsoft Network Monitor includes filters that parse TDS packets.  TDS is the protocol that SQL Server uses to communicate.  There is a certificate exchange and some SSL that occurs during authentication.  All of this was evident in the network traffic.  After staring at the network traffic for a while, and examining packets, I decided to call it a night.  On the way home that night, something about the traffic kept nagging at me.  Then it dawned on me…at the beginning of the dance of packets between the client and the server, all was well.  Connection pooling was working, and I could see multiple queries getting executed on the same connection and ephemeral port.  After a particular query, connecting to two different servers, I noticed that ADODB and SQLOLEDB started making repeated connections to the database on different ephemeral ports.  SQL Server would execute a single query and respond on a port, then open a new port and execute the next query.  Connection pooling appeared to be broken. The next morning I wrote a test to confirm my hypothesis.  It turns out that the sequence causing the connection nastiness goes something like this:
    1. Make a connection to the database.
    2. Open a result set that returns enough records to require multiple round trips to the server.
    3. For each result, query for some other data in the database (this will open a new implicit connection).
    4. Close the inner result set and repeat for every item in the original result set.
    5. Close the original connection.
    Provided that the first result set returns enough data to require multiple round trips to the server, ADODB and SQLOLEDB will start making new connections to the database for each query executed in the loop.  Originally, I thought this might be due to Microsoft’s denial-of-service attack protection.  After turning those features off to no avail, I eventually thought to switch my queries to client-side cursors instead of server-side cursors.  (Server-side cursors are the default, by the way.)  Voila!  After switching to client-side cursors, the disconnects were gone and the above sequence yielded two connections, as expected. While the real problem is the amount of time it takes to make connections over these MPLS networks (100ms on average), switching to client-side cursors made the problem go away.  Believe it or not, this is actually documented by Microsoft, and rather difficult to find.  (At least it was while we were trying to troubleshoot the problem!)  So, if you’re noticing performance issues on slower networks, or networks with slower switching, take a look at the traffic in a tool like Microsoft Network Monitor.  If you notice a high number of disconnects, and you’re using fire-hose or server-side cursors, then try switching to client-side cursors and you may see the problem go away. Most likely, Microsoft believes this to be appropriate behavior, because ADODB can’t guarantee that all of the data has been retrieved when you execute the inner queries.  I’m not convinced, though, because the problem remains even after replacing all of the implicit connections with explicit connections and closing those connections in between each of the inner queries.  In that case, there doesn’t seem to be a reason why ADODB can’t use a single connection from the connection pool to make the additional queries, bringing the total number of connections to two.  Instead, ADO appears to make an assumption about the state of the connection. I’ve reported the behavior to Microsoft and am waiting to hear from the appropriate team, so that I can demonstrate the problem.  Maybe they can explain to us why this is appropriate behavior.  :)
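    If you want to watch the server-side symptoms while reproducing something like the loop above, a query of this sort against the connection and session DMVs (SQL Server 2005 and later) counts open connections per client. This is a sketch I am adding for illustration, not part of the original investigation:

        -- Open connections grouped by client address and application
        SELECT  c.client_net_address,
                s.program_name,
                COUNT(*) AS connection_count
        FROM    sys.dm_exec_connections AS c
        JOIN    sys.dm_exec_sessions   AS s
                ON s.session_id = c.session_id
        GROUP BY c.client_net_address, s.program_name
        ORDER BY connection_count DESC;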

    Read the article

  • SQLAuthority News – Storing Data and Files in Cloud – Dropbox – Personal Technology Tip

    - by pinaldave
    I thought long and hard about doing a Personal Technology Tips series for this blog.  I have so many tips I’d like to share.  I am on my computer almost all day, every day, so I have a treasure trove of interesting tidbits I like to share if given the chance.  The only thing holding me back – which tip to share first?  The first tip obviously carries the weight of seeming like the most important.  But this would mean choosing among my favorite tricks and shortcuts.  This is a hard task. Source: Dropbox.com My Dropbox I have finally decided, though, and have determined that the first Personal Technology Tip may not be the most secret or the trickiest to master – in fact, it is probably the easiest.  Today’s Personal Technology Tip is Dropbox. I hope that all of you are nodding along in recognition right now.  If you do not use Dropbox, or have not even heard of it before, get on the internet and find their site.  You won’t be disappointed.  A quick recap for those in the dark: Dropbox is an online storage site with a lot of additional syncing and cloud-computing capabilities.  Now that we’ve covered the basics, let’s explore some of my favorite options in Dropbox. Collaborate with All The first thing I love about Dropbox is the ability it gives you to collaborate with others.  You can share files easily with other Dropbox users, and they can alter them and share them with you, all while keeping track of different versions in one easy place.  I’d like to see anyone try to accomplish that key idea – “easily” – using e-mailed versions and multiple computers.  It’s difficult to accomplish even using a shared network. Afraid that this kind of ease looks too good to be true?  Afraid that maybe there isn’t enough storage space, or that the user interface is confusing?  Think again.  There is plenty of space – you can get 2 GB with just a free account, and upgrades are inexpensive and go up to 100 GB of storage.  And the user interface is so easy that anyone can learn to use it. What I use Dropbox for I love Dropbox because I give a lot of presentations, and often they are far from home.  I can keep my presentations on Dropbox and have easy access to them anywhere, without needing to have my whole computer with me.  This is just one small way that you can use Dropbox. You can sync your entire hard drive, or hard drives if you have multiple computers (home, work, office, shared), and you can set Dropbox to automatically sync files on a certain schedule, or whenever Dropbox notices that they’ve been changed. Why I love Dropbox Dropbox has plenty of storage, but 2 GB still has a hard time competing with the average desktop’s storage space.  So what if you want to sync most of your files – but only the ones you use the most and share between work and home, and not all your files (especially large files like pictures and videos)?  You can use selective sync to choose which files to sync. Above all, my favorite feature is LanSync.  Dropbox will search your Local Area Network (LAN) for new files and sync them to Dropbox, as well as download the new version of shared files across the network.  That means that if you move around on different computers at work or at home, you will have the same version of the file every time.  Or, other users on the LAN will have access to the new version, which makes collaboration extremely easy. Ref: rzfeeser.com Dropbox has so many other features that I feel like I could create a Personal Technology Tips series devoted entirely to Dropbox.
    I’m going to use a bullet list here to keep things short, but I strongly encourage you to look further into these options if any of them sound like something you would use. Theft Recovery Home Security File Hosting and Sharing Portable Dropbox Sync your iCal calendar Password Storage What is your favorite tool and why? I could go on and on, but I will end here.  In summary – I strongly encourage everyone to investigate Dropbox to see if it’s something they would find useful.  If you use Dropbox and know of a great feature I failed to mention, please share it with me; I’d love to hear how everyone uses this program. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Personal Technology

    Read the article

  • SQL SERVER – Auditing and Profiling Database Made Easy with SQL Audit and Comply

    - by Pinal Dave
    Do you like auditing your database, or can you think of about a million other things you’d rather do?  Unfortunately, auditing is incredibly important.  As with tax audits, it is important to audit databases to ensure they are following all the rules, but audits are also important for troubleshooting and security. There are several ways to audit SQL Server.  There is manual auditing – going through your database “by hand” – which obviously takes a long time and is quite inefficient.  SQL Server also provides programs to help you audit your systems.  Different administrators will have different opinions about best practices and which tools to use, and each tool will be best suited to certain systems and certain users. Today, though, I would like to talk about ApexSQL Audit.  It is an auditing tool that acts like “track changes” in a word-processing document.  It will log what has changed in the database, who made the changes, and what effects those changes have had (i.e., what objects were affected down the line).  All this information is logged and can be easily viewed or printed for easy access. One of the best features of Apex is that it is so customizable (and easy to use!).  First, start Apex.  Then you can connect to the database you would like to monitor. Once you select your database, you can select which table you want to audit. You can customize right down to the field you’d like to audit, and then select which types of actions you’d like tracked – insert, delete, or update.  Repeat these steps for every database you want monitored. To create the logs, choose “Create triggers” in the menu.  The script written here is what logs each insert, delete, and update.  Press F5 to execute.  All this tracking information will be stored in the AUDIT_LOG_DATA and AUDIT_LOG_TRANSACTIONS tables.  View these tables using ApexSQL Audit reports. These transaction logs can be extremely detailed – especially on very busy servers, where every move is traced – and reading them can be overwhelming, to say the least.  Apex has tried to make things easier for the average DBA, though. You can read these tracking logs in Apex, and it will display data and objects that affect your server – even things that were happening on your server before you installed Apex! To read these logs, open Apex and connect to the database you want to audit. Go to the Transaction Logs tab, and add the logs you want to read. To narrow down the results, you can use the Filter tab to choose time, operation type, name, users, and more. Click Open, and you can see the results in a grid (as shown below).  You can export these results to CSV, HTML, XML or SQL files and save them to the hard disk. One of the advantages is that since there are no triggers here, there are no extra processes to affect SQL Server performance.  Using this method is also how you view history from your database that occurred before Apex was installed.  This type of tracking does require storage space for the data sources: the database must be fully running, and the transaction logs must exist (things not stored in the transaction logs will not be recoverable). Apex can also replace SQL Server Profiler and SQL Server Traces – which are much more complex and error-prone – with its ApexSQL Comply.  It can do fault-tolerant auditing, centralized reporting, and “who saw what” reporting in an easy-to-use interface.
    The tracking settings can be altered by the user, or the default options will provide solutions to the most common auditing problems. To get started, open ApexSQL Comply and select Database Filter Settings to choose which database you’d like to audit.  You can select which tracking you’d like in Operation Types – DML, DDL, queries executed, execute statements, and more.  To begin, click Start Auditing. After this, every action will be stored in the central repository database (ApexSQLCrd).  You can view the audit and create a report (or view the standard default report) using a wizard. You can see how easy it is to use ApexSQL Comply.  You can easily set up audits, including the type and time, and create customized reports.  Remote users can easily access the reports through the user interface (available online as well), and security concerns are all taken care of by the program.  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
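    The trigger script that ApexSQL Audit generates is product-specific, but the underlying "track changes" pattern is worth seeing in miniature. Below is a simplified, hand-written sketch of a DML audit trigger writing to a log table; the table and column names here are hypothetical and are not Apex's actual schema:

        -- Hypothetical audit log table
        CREATE TABLE dbo.MyAuditLog
        (
            log_id     int IDENTITY(1,1) PRIMARY KEY,
            table_name sysname     NOT NULL,
            operation  varchar(10) NOT NULL,
            changed_by sysname     NOT NULL DEFAULT SUSER_SNAME(),
            changed_at datetime    NOT NULL DEFAULT GETDATE()
        );
        GO

        -- Log every insert, update, and delete against a (hypothetical) Customers table
        CREATE TRIGGER dbo.trg_Customers_Audit
        ON dbo.Customers
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO dbo.MyAuditLog (table_name, operation)
            SELECT 'Customers',
                   CASE WHEN EXISTS (SELECT * FROM inserted)
                         AND EXISTS (SELECT * FROM deleted) THEN 'UPDATE'
                        WHEN EXISTS (SELECT * FROM inserted) THEN 'INSERT'
                        ELSE 'DELETE'
                   END;
        END;
        GO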

    Read the article

  • MDX using EXISTING, AGGREGATE, CROSSJOIN and WHERE

    - by James Rogers
    Using the EXISTING function to decode AGGREGATE members and nested sub-query filters is a well-published approach.  Mosha wrote a good blog on it here and a more recent one here.  The use of EXISTING in these scenarios is very useful, and sometimes the only option when dealing with multi-select filters.  However, there are some limitations I have run across when using the EXISTING function against an AGGREGATE member:
    1. The AGGREGATE member must be assigned to the Dimension.Hierarchy being detected by the EXISTING function in the calculated measure.
    2. The AGGREGATE member cannot contain a crossjoin from any other dimension or hierarchy, or EXISTING will not be able to detect the members in the AGGREGATE member.
    Take the following query (from Adventure Works DW 2008):
    With
      member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
      member [Date].[Fiscal Weeks].[CM] as 'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]})'
    select
      {[Week Count]} on columns
    from
      [Adventure Works]
    where
      [Date].[Fiscal Weeks].[CM]
    Here we are attempting to count the existing fiscal weeks in the slicer.  This is useful for getting a per-week average for another member.  Many applications generate queries in this manner (such as Oracle OBIEE).  This query returns the correct result of (4) weeks.  Now let's put a twist in it.  What if the querying application submits the query in the following manner:
    With
      member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
      member [Customer].[Customer Geography].[CM] as 'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]})'
    select
      {[Week Count]} on columns
    from
      [Adventure Works]
    where
      [Customer].[Customer Geography].[CM]
    Again we are attempting to count the existing fiscal weeks in the slicer.  However, the AGGREGATE member is built on a different dimension (in name) than the one EXISTING is trying to detect.  In this case the query returns (174), which is the total number of [Date].[Fiscal Weeks].[Fiscal Week].members defined in the dimension.
    Now another twist: the AGGREGATE member will be named appropriately and will contain the hierarchy we are trying to detect with EXISTING, but it will be cross-joined with another hierarchy:
    With
      member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
      member [Date].[Fiscal Weeks].[CM] as 'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]}*
        {[Customer].[Customer Geography].[Country].&[Australia],[Customer].[Customer Geography].[Country].&[United States]})'
    select
      {[Week Count]} on columns
    from
      [Adventure Works]
    where
      [Date].[Fiscal Weeks].[CM]
    Once again, we are attempting to count the existing fiscal weeks in the slicer.  Again, in this case the query returns (174), which is the total number of [Date].[Fiscal Weeks].[Fiscal Week].members defined in the dimension.
    However, in 2008 R2 this query returns the correct result of 4, and additionally, the following will return the count of existing countries as well (2):
    With
      member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
      member [Country Count] as 'count(existing([Customer].[Customer Geography].[Country].members))'
      member [Date].[Fiscal Weeks].[CM] as 'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]}*
        {[Customer].[Customer Geography].[Country].&[Australia],[Customer].[Customer Geography].[Country].&[United States]})'
    select
      {[Week Count]} on columns
    from
      [Adventure Works]
    where
      [Date].[Fiscal Weeks].[CM]
    2008 R2 seems to work as long as the AGGREGATE member is on at least one of the hierarchies attempting to be detected (i.e. [Date].[Fiscal Weeks] or [Customer].[Customer Geography]).  If not, it seems that the engine cannot find a "point of entry" into the AGGREGATE member and ignores it for calculated members.
    One way around this would be to put the sets from the AGGREGATE member explicitly in the WHERE clause (slicer).  I realize this is only supported in SSAS 2005 and 2008.  However, after talking with Chris Webb (his blog is here, and I highly recommend following his efforts and musings), it is a far more efficient way to filter/slice a query:
    With
      member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
    select
      {[Week Count]} on columns
    from
      [Adventure Works]
    where
      ({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]}
      ,{[Customer].[Customer Geography].[Country].&[Australia],[Customer].[Customer Geography].[Country].&[United States]})
    This query returns the correct result of (4) weeks.  Additionally, we can count the cross-joined members of the two hierarchies in the slicer:
    With
      member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members)*existing([Customer].[Customer Geography].[Country].members))'
    select
      {[Week Count]} on columns
    from
      [Adventure Works]
    where
      ({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]}
      ,{[Customer].[Customer Geography].[Country].&[Australia],[Customer].[Customer Geography].[Country].&[United States]})
    We get the correct number of (8) here.

    Read the article

< Previous Page | 72 73 74 75 76 77 78 79 80 81 82 83  | Next Page >