Search Results

Search found 10536 results on 422 pages for 'cpu usage'.


  • Why would VMware go defunct? How to recover from/prevent it?

    - by Josh
    I am running VMware Server 2.0.2 (Build 203138) on a dual-core Intel i5 with an Ubuntu Server 10.04 LTS system (kernel 2.6.32-22-server #33-Ubuntu SMP). The disk subsystem is a software RAID5 array. The system has been set up for a little over a week. For the past 5 days I have been running at least 3 VMs (Linux and a variety of Windows OSes) with no issues whatsoever. But while I was installing Linux onto a new VM, suddenly all VMs became unresponsive, including the one I was installing to. I could not log in to the VMware Management Interface, and the system was somewhat unresponsive via SSH. When I looked at top, I saw:
    top - 16:14:51 up 6 days, 1:49, 8 users, load average: 24.29, 24.33, 17.54
    Tasks: 203 total, 7 running, 195 sleeping, 0 stopped, 1 zombie
    Cpu(s): 0.2%us, 25.6%sy, 0.0%ni, 74.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 8056656k total, 5927580k used, 2129076k free, 20320k buffers
    Swap: 7811064k total, 240216k used, 7570848k free, 5045884k cached
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    21549 root 39 19 0 0 0 Z 100 0.0 15:02.44 [vmware-vmx] <defunct>
    2115 root 20 0 0 0 0 S 1 0.0 170:32.08 [vmware-rtc]
    2231 root 21 1 1494m 126m 100m S 1 1.6 892:58.05 /usr/lib/vmware/bin/vmware-vmx -# product=2;
    2280 jnet 20 0 19320 1164 800 R 0 0.0 30:04.55 top
    12236 root 20 0 833m 41m 34m S 0 0.5 88:34.24 /usr/lib/vmware/bin/vmware-vmx -# product=2;
    1 root 20 0 23704 1476 920 S 0 0.0 0:00.80 /sbin/init
    2 root 20 0 0 0 0 S 0 0.0 0:00.01 [kthreadd]
    3 root RT 0 0 0 0 S 0 0.0 0:00.00 [migration/0]
    4 root 20 0 0 0 0 S 0 0.0 0:00.84 [ksoftirqd/0]
    5 root RT 0 0 0 0 S 0 0.0 0:00.00 [watchdog/0]
    6 root RT 0 0 0 0 S 0 0.0 0:00.00 [migration/1]
    The VMware process for the virtual machine I was installing into became a zombie. Yet it was still consuming 100% of the CPU time on one of the cores, and I couldn't reach it or any other virtual machine. (I was logged in to one virtual machine over SSH, another via X11, and a third via VNC. All three connections died.) When I ran ps -ef and similar commands, I found that the defunct vmware-vmx process had its parent PID set to init (1). I also used lsof -p 21549 and found that the defunct process had no open files. Yet it was using 100% of CPU time... I was unable to kill any vmware-vmx processes, including the defunct one, even with kill -9. As a last resort I tried to reboot the box; however, shutdown, halt, reboot, and init 6 all failed to reboot or shut down, even when given the appropriate --force settings. Ctrl-Alt-Del produced a message about rebooting on the console, but the system would not reboot. I had to hard power-cycle the box to resolve the situation. (See my other question, Should I worry about the integrity of my Linux software RAID5 after a crash or kernel panic?) What would cause a scenario like this? What else could I have done to resolve it besides a hard reboot? What can I do to prevent such a situation in the future?
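    A hang like this, where an unkillable process is stuck and shutdown/init no longer respond, can sometimes be ended more cleanly with the kernel's magic SysRq interface than with the power button. A minimal sketch, assuming SysRq is enabled on the host and you still have a console or a responsive SSH session; this is only a cleaner emergency exit, not a fix for the underlying VMware problem:
      ## Check that the magic SysRq key is enabled (non-zero)
      # cat /proc/sys/kernel/sysrq
      ## Dump blocked (D-state) tasks to the kernel log so there is something to analyse later
      # echo w > /proc/sysrq-trigger
      ## Sync filesystems, remount them read-only, then force a reboot;
      ## this is gentler on the RAID5 array than pulling the power
      # echo s > /proc/sysrq-trigger
      # echo u > /proc/sysrq-trigger
      # echo b > /proc/sysrq-trigger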

    Read the article

  • Linux software RAID6: rebuild slow

    - by Ole Tange
    I am trying to find the bottleneck in the rebuilding of a software raid6. ## Pause rebuilding when measuring raw I/O performance # echo 1 > /proc/sys/dev/raid/speed_limit_min # echo 1 > /proc/sys/dev/raid/speed_limit_max ## Drop caches so that does not interfere with measuring # sync ; echo 3 | tee /proc/sys/vm/drop_caches >/dev/null # time parallel -j0 "dd if=/dev/{} bs=256k count=4000 | cat >/dev/null" ::: sdbd sdbc sdbf sdbm sdbl sdbk sdbe sdbj sdbh sdbg 4000+0 records in 4000+0 records out 1048576000 bytes (1.0 GB) copied, 7.30336 s, 144 MB/s [... similar for each disk ...] # time parallel -j0 "dd if=/dev/{} skip=15000000 bs=256k count=4000 | cat >/dev/null" ::: sdbd sdbc sdbf sdbm sdbl sdbk sdbe sdbj sdbh sdbg 4000+0 records in 4000+0 records out 1048576000 bytes (1.0 GB) copied, 12.7991 s, 81.9 MB/s [... similar for each disk ...] So we can read sequentially at 140 MB/s in the outer tracks and 82 MB/s in the inner tracks on all the drives simultaneously. Sequential write performance is similar. This would lead me to expect a rebuild speed of 82 MB/s or more. # echo 800000 > /proc/sys/dev/raid/speed_limit_min # echo 800000 > /proc/sys/dev/raid/speed_limit_max # cat /proc/mdstat md2 : active raid6 sdbd[10](S) sdbc[9] sdbf[0] sdbm[8] sdbl[7] sdbk[6] sdbe[11] sdbj[4] sdbi[3](F) sdbh[2] sdbg[1] 27349121408 blocks super 1.2 level 6, 128k chunk, algorithm 2 [9/8] [UUU_UUUUU] [=========>...........] recovery = 47.3% (1849905884/3907017344) finish=855.9min speed=40054K/sec But we only get 40 MB/s. And often this drops to 30 MB/s. # iostat -dkx 1 sdbc 0.00 8023.00 0.00 329.00 0.00 33408.00 203.09 0.70 2.12 1.06 34.80 sdbd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdbe 13.00 0.00 8334.00 0.00 33388.00 0.00 8.01 0.65 0.08 0.06 47.20 sdbf 0.00 0.00 8348.00 0.00 33388.00 0.00 8.00 0.58 0.07 0.06 48.00 sdbg 16.00 0.00 8331.00 0.00 33388.00 0.00 8.02 0.71 0.09 0.06 48.80 sdbh 961.00 0.00 8314.00 0.00 37100.00 0.00 8.92 0.93 0.11 0.07 54.80 sdbj 70.00 0.00 8276.00 0.00 33384.00 0.00 8.07 0.78 0.10 0.06 48.40 sdbk 124.00 0.00 8221.00 0.00 33380.00 0.00 8.12 0.88 0.11 0.06 47.20 sdbl 83.00 0.00 8262.00 0.00 33380.00 0.00 8.08 0.96 0.12 0.06 47.60 sdbm 0.00 0.00 8344.00 0.00 33376.00 0.00 8.00 0.56 0.07 0.06 47.60 iostat says the disks are not 100% busy (but only 40-50%). This fits with the hypothesis that the max is around 80 MB/s. Since this is software raid the limiting factor could be CPU. top says: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 38520 root 20 0 0 0 0 R 64 0.0 2947:50 md2_raid6 6117 root 20 0 0 0 0 D 53 0.0 473:25.96 md2_resync So md2_raid6 and md2_resync are clearly busy taking up 64% and 53% of a CPU respectively, but not near 100%. The chunk size (128k) of the RAID was chosen after measuring which chunksize gave the least CPU penalty. If this speed is normal: What is the limiting factor? Can I measure that? If this speed is not normal: How can I find the limiting factor? Can I change that?
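    One md-specific knob that is easy to overlook when resync speed sits well below raw disk speed is the stripe cache. A quick check worth trying, sketched here on the assumption that the array is md2; the value is in pages per device, so a larger cache costs RAM:
      ## Default is 256; values in the 4096-8192 range often improve RAID5/6 resync throughput
      # cat /sys/block/md2/md/stripe_cache_size
      # echo 4096 > /sys/block/md2/md/stripe_cache_size
      ## Watch the resync speed the kernel reports while experimenting
      # watch -n1 cat /proc/mdstat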

    Read the article

  • My computer freezes irregularly

    - by Manhim
    My computer has been freezing at irregular times for 3 weeks now.
    What happens
    My computer freezes and the video stops (no graphic glitches, it just stops). Sound keeps playing for a while (usually 10-30 seconds), then stops too. Sometimes, randomly, the screen on my G-15 keyboard flickers and I see characters in the wrong places; that usually lasts about 1-2 seconds, a bit before the computer freezes. I have to hold the power button for 4 seconds to shut the computer down; I can still hear the hard drives and fans working. Sometimes it runs with no problems for a full day, other times it keeps freezing every time I restart it and I have to leave it for the rest of the day. Sometimes my mouse freezes for a fraction of a second (0.01 to 0.2 seconds) quite randomly, usually shortly before a freeze. No errors are spotted by the Action Center, unlike when I had driver problems with my last video card on this system. My G-15 LCD screen also freezes.
    What I did so far
    I have had similar problems in the past and had changed my hard drive (it was faulty), so I tested my software RAID-0 array; it was faulty too, so I replaced it (I reinstalled Windows 7 at the same time). I also tested with my secondary hard drive unplugged. My CPU was running at about 100 degrees Celsius; I removed the dust between the fans and the heatsink and it's now between 50-60. I ran a CPU stress test (Prime95 on all cores) and it didn't freeze. I ran a memory test (memtest86+) for a single pass and there were no errors. I ran GPU stress tests with ati-tools and FurMark and it didn't freeze (no artefacts either). I had trouble with my graphics card when I got it, but I think a driver update fixed that. I checked the voltages in my BIOS setup and they all seemed OK (within ±0.2 I think). I have run the computer without problems from Fedora 15 on an external hard drive (apart from it not loading Gnome 3 and reverting to Gnome 2; I didn't want to install drivers since I use that drive on multiple computers). I used it to back up my files from the RAID array to my 1TB hard drive for the Windows reinstallation, so the crashes only happened on Windows. (The external hard drive is plugged directly into a SATA port.) I contacted EVGA (my graphics card vendor) and pointed them at this question; I'm waiting for an answer. I ran sensors on Fedora 15 and got this output: http://pastebin.com/0BHJnAvu
    When it happens
    When I play video games (mostly), when I play Flash games (second most), and when I'm looking at my desktop background (it rarely happens when I have a window open, but it does, sometimes).
    Specs
    Windows Seven x64 Home Premium. Motherboard: M2N-SLI Deluxe. CPU: AMD Phenom 9950 x2 @ 2.6GHz. Memory: Kingston 4x2GB dual channel (pretty basic memory sticks). Hard drives: was 2x250GB (Western Digital Caviar) in RAID-0 + 1TB (WD Caviar Black); I replaced the RAID array with a 750GB (WD Caviar Black) [yes, I removed the array from the RAID configuration]. 750W power supply. No overclocking. Ever. There were some power-downs 4-5 weeks ago, but the problem didn't start immediately after (I wasn't home, so my computer got shut down).
    My current to-try list
    Change the thermal paste on my CPU. Swap in a temporary graphics card and stress the computer. Change my power supply.
    In this situation, how can I successfully pinpoint the hardware problem (if it is a hardware problem)? I don't really have the budget to just replace everything, and I don't have spare hardware to swap in for testing.

    Read the article

  • Where is my VMware-ws FreeNAS CIFS(ZFS) bottle-neck?

    - by maka
    Background: I'm building a quiet HTPC + NAS that is also supposed to be used for general computer usage. I'm so far generally happy with things, it was just that I was expecting a little better IO performance. I have no clue if my expectations are unreal. The NAS is there as a general purpose file storage and as a media server for XBMC and other devices. ZFS is a requirement. Question: Where is my bottle-neck, and is there anything I can do config wise, to improve my performance? I'm thinking VM-disk settings could be something but I really have no idea where to go since I'm neither experienced with FreeNAS nor VMware-WS. Tests: When I'm on the host OS and copy files (from the SSD) to the CIFS share, I get around 30 Mbytes/sec read and write. When I'm on my laptop laptop, wired to the network, I get about the same specs. The test I've done are with a 16 GB ISO, and with about 200 MB of RARs and I've tried avoiding the RAM-cache by reading different files than the ones I'm writing ( 10 GB). It feels like having less CPU cores is a lot more efficient, since the resource manager in Windows reports less CPU-usage. With 4 cores in VMware, CPU usage was 50-80%, with 1 core it was 25-60%. EDIT: HD ActiveTime was quite high on SSD so I moved the page file, disabled hibernate and enabled Win DiskCache both on SSD and RAID. This resulted in no real performance difference for one file, but if i transferred 2 files the total speed went up to 50 Mbytes/s vs ~40. The ActiveTime avg also went down a lot (to ~20%) but has now higher bursts. DiskIO is on ~ 30-35 Mbytes/s avgs, with ~100Mb bursts. Network is on 200-250Mbits/s with ~45 active TCP connections. Hardware Asus F2A85-M Pro A10-5700 16GB DDR3 1600 OCZ Vertex 2 128GB SSD 2x Generic 1tb 7200 RPM drives as RAID0 (in win7) Intel Gigabit Desktop CT Software Host OS: Win7 (SSD) VMware Worksation 9 (SSD) FreeNAS 8.3 VM (20GB VDisk on SSD) CPU: I've tried 1, 2 and 4 cores. Virtualisation engine, Preferred mode: Automatic 10,24Gb ram 50Gb SCSI VDisk on the RAID0, VDisk is formatted as ZFS and exposed through CIFS through FreeNAS. NIC Bridge, Replicate physical network state Below are two typical process print-outs while I'm transfering one file to the CIFS share. last pid: 2707; load averages: 0.60, 0.43, 0.24 up 0+00:07:05 00:34:26 32 processes: 2 running, 30 sleeping Mem: 101M Active, 53M Inact, 1620M Wired, 2188K Cache, 149M Buf, 8117M Free Swap: 4096M Total, 4096M Free PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND 2640 root 1 102 0 50164K 10364K RUN 0:25 25.98% smbd 1897 root 6 44 0 168M 74808K uwait 0:02 0.00% python last pid: 2746; load averages: 0.93, 0.60, 0.33 up 0+00:08:53 00:36:14 33 processes: 2 running, 31 sleeping Mem: 101M Active, 53M Inact, 4722M Wired, 2188K Cache, 152M Buf, 5015M Free Swap: 4096M Total, 4096M Free PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND 2640 root 1 76 0 50164K 10364K RUN 0:52 16.99% smbd 1897 root 6 44 0 168M 74816K uwait 0:02 0.00% python I'm sorry if my question isn't phrased right, I'm really bad at these kind of things, and it is the first time I post here at SU. I also appreciate any other suggestions to something, I could have missed.
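    To find out whether the ~30 Mbytes/s ceiling comes from the virtual disk, the virtual NIC, or Samba/ZFS itself, it can help to measure each layer separately from the FreeNAS shell. A rough sketch, assuming the pool is mounted at /mnt/tank (substitute your pool name) and that iperf is available on the FreeNAS VM and on a client; the test file should be larger than the VM's RAM so ZFS caching doesn't flatter the numbers:
      ## Raw ZFS write and read on the VDisk, bypassing CIFS
      # dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=16384
      # dd if=/mnt/tank/ddtest of=/dev/null bs=1m
      ## Network path between client and VM, bypassing the disks entirely
      # iperf -s                      (on the FreeNAS VM)
      # iperf -c <freenas-ip> -t 30   (on the Windows host or laptop)
    If dd and iperf are both fast but CIFS still tops out around 30 Mbytes/s, the bottleneck is likely Samba/CPU inside the single-vCPU VM; if dd itself is slow, that points at the VMware disk layer or the host RAID0.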

    Read the article

  • Monitor randomly shutting down, computer accepting no input, need to restart to get working

    - by Sebastian Lamerichs
    First off, spec list: OS: Windows 7 Ultimate 64-bit SP1 CPU: i7-4820k @ 3.7GHz (stock) GPU: Two 3GB Radeon HD 7970s @ 1.05GHz Mobo: AsRock X79 Extreme6 HDD: 2TB Seagate Barracuda 7200rpm RAM: 16GB quad-channel Kingston 1600MHz PSU: Antec HCG 900W Monitors: Acer S220HQL 1920x1080 + ViewSonic VA2251 1920x1080. Plugged into different GPUs. My problem is that, on a daily-ish basis, my monitors will turn off and not turn back on. My computer will still be running, GPU/CPU/case fans all still going, but the monitors will not turn back on. Additionally, it seems to cease all network activity. It doesn't seem to log any errors at all. I've verified that this is not a monitor issue, as when I press the num/caps/scroll lock buttons on my keyboard, the lights don't change, so the computer is clearly not accepting input. I have noticed a few other people on the internet with this problem, and some have claimed that it was solved by disabling PCI-Express Link State Power Management, but the issue still occurs for me after this. Whilst my CPU and GPUs both run at 100% 24/7, the temperatures are certainly not at dangerous levels, with the CPU averaging 65°C and the GPUs at 70°C and 78°C average. All components are brand new. I have tried forcing MSI Afterburner to start when Windows starts and to force a constant voltage, as this fixed the issue for a few days for another user, but he reported back saying that it had stopped working properly again, so I'm not putting too much faith in this working. Many people have said to adjust display sleep mode settings, but this will clearly not work, as the keyboard lights would still work if the monitors were the issue. The closest I can get to a log file for this issue is the following Folding@Home logs: 14:45:21:WU01:FS00:0x17:Completed 1120000 out of 2000000 steps (56%) 14:46:43:WU00:FS01:0x17:Completed 480000 out of 2000000 steps (24%) 14:46:49:WU01:FS00:0x17:Completed 1140000 out of 2000000 steps (57%) 14:48:30:WU01:FS00:0x17:Completed 1160000 out of 2000000 steps (58%) 14:49:55:WU01:FS00:0x17:Completed 1180000 out of 2000000 steps (59%) As you can see, the second GPU (FS01) stops computation approximately three and a half minutes before the issue occurs (it should be completing 1% every 80-120 seconds), and the first GPU (FS00) continues for a few minutes more before the logs just end. As far as I can tell, the computer has a network failure at the time the first GPU stops working, the latest IRC message I received from this time was at 14:47:58. That being said, there could have just not been any messages between then and 14:50:00, so I'm going to be connecting a laptop to the same bouncer to double-check if it happens again. The GPUs functioned perfectly well in another computer for a significant period of time, so I'm fairly confident that they aren't the issue, which means that this is being caused by either software or the motherboard, or possibly RAM. I really hope it's software. I heard from a forum board that there was a patch from Microsoft that fixed this problem, but "I've forgot which KB it was or the google search terms I used to find the patch, LOL.", so that's not much help. Haven't seen it mentioned by anyone else on about a dozen threads about this issue either. The computer is plugged in via a surge-protected power board, and I've run several other computers and pieces of hardware through it with no issues, so that is not the cause. I have just set the hard disk to never turn off, although I don't believe that that will solve the issue. 
Strangely, this has only happened when I'm not at the computer (which is actually a minority of the time). Until today it had only happened when I had not been actively using the computer for 6 hours, but today it happened within 10-30 minutes of me last using the computer actively. I have enabled file logging from MSI Afterburner, so hopefully this will shed some light on the issue, but I'm not too optimistic. I've heard that it could be a motherboard problem, but I figured I should ask around before RMAing it. Any help?

    Read the article

  • Seeking on a Heap, and Two Useful DMVs

    - by Paul White
    So far in this mini-series on seeks and scans, we have seen that a simple ‘seek’ operation can be much more complex than it first appears.  A seek can contain one or more seek predicates – each of which can either identify at most one row in a unique index (a singleton lookup) or a range of values (a range scan).  When looking at a query plan, we will often need to look at the details of the seek operator in the Properties window to see how many operations it is performing, and what type of operation each one is.  As you saw in the first post in this series, the number of hidden seeking operations can have an appreciable impact on performance. Measuring Seeks and Scans I mentioned in my last post that there is no way to tell from a graphical query plan whether you are seeing a singleton lookup or a range scan.  You can work it out – if you happen to know that the index is defined as unique and the seek predicate is an equality comparison, but there’s no separate property that says ‘singleton lookup’ or ‘range scan’.  This is a shame, and if I had my way, the query plan would show different icons for range scans and singleton lookups – perhaps also indicating whether the operation was one or more of those operations underneath the covers. In light of all that, you might be wondering if there is another way to measure how many seeks of either type are occurring in your system, or for a particular query.  As is often the case, the answer is yes – we can use a couple of dynamic management views (DMVs): sys.dm_db_index_usage_stats and sys.dm_db_index_operational_stats. Index Usage Stats The index usage stats DMV contains counts of index operations from the perspective of the Query Executor (QE) – the SQL Server component that is responsible for executing the query plan.  It has three columns that are of particular interest to us: user_seeks – the number of times an Index Seek operator appears in an executed plan user_scans – the number of times a Table Scan or Index Scan operator appears in an executed plan user_lookups – the number of times an RID or Key Lookup operator appears in an executed plan An operator is counted once per execution (generating an estimated plan does not affect the totals), so an Index Seek that executes 10,000 times in a single plan execution adds 1 to the count of user seeks.  Even less intuitively, an operator is also counted once per execution even if it is not executed at all.  I will show you a demonstration of each of these things later in this post. Index Operational Stats The index operational stats DMV contains counts of index and table operations from the perspective of the Storage Engine (SE).  It contains a wealth of interesting information, but the two columns of interest to us right now are: range_scan_count – the number of range scans (including unrestricted full scans) on a heap or index structure singleton_lookup_count – the number of singleton lookups in a heap or index structure This DMV counts each SE operation, so 10,000 singleton lookups will add 10,000 to the singleton lookup count column, and a table scan that is executed 5 times will add 5 to the range scan count. The Test Rig To explore the behaviour of seeks and scans in detail, we will need to create a test environment.  The scripts presented here are best run on SQL Server 2008 Developer Edition, but the majority of the tests will work just fine on SQL Server 2005.  A couple of tests use partitioning, but these will be skipped if you are not running an Enterprise-equivalent SKU.  
Ok, first up we need a database: USE master; GO IF DB_ID('ScansAndSeeks') IS NOT NULL DROP DATABASE ScansAndSeeks; GO CREATE DATABASE ScansAndSeeks; GO USE ScansAndSeeks; GO ALTER DATABASE ScansAndSeeks SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE ScansAndSeeks SET AUTO_CLOSE OFF, AUTO_SHRINK OFF, AUTO_CREATE_STATISTICS OFF, AUTO_UPDATE_STATISTICS OFF, PARAMETERIZATION SIMPLE, READ_COMMITTED_SNAPSHOT OFF, RESTRICTED_USER ; Notice that several database options are set in particular ways to ensure we get meaningful and reproducible results from the DMVs.  In particular, the options to auto-create and update statistics are disabled.  There are also three stored procedures, the first of which creates a test table (which may or may not be partitioned).  The table is pretty much the same one we used yesterday: The table has 100 rows, and both the key_col and data columns contain the same values – the integers from 1 to 100 inclusive.  The table is a heap, with a non-clustered primary key on key_col, and a non-clustered non-unique index on the data column.  The only reason I have used a heap here, rather than a clustered table, is so I can demonstrate a seek on a heap later on.  The table has an extra column (not shown because I am too lazy to update the diagram from yesterday) called padding – a CHAR(100) column that just contains 100 spaces in every row.  It’s just there to discourage SQL Server from choosing table scan over an index + RID lookup in one of the tests. The first stored procedure is called ResetTest: CREATE PROCEDURE dbo.ResetTest @Partitioned BIT = 'false' AS BEGIN SET NOCOUNT ON ; IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ; -- Test table is a heap -- Non-clustered primary key on 'key_col' CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ; IF @Partitioned = 'true' BEGIN -- Enterprise, Trial, or Developer -- required for partitioning tests IF SERVERPROPERTY('EngineEdition') = 3 BEGIN EXECUTE (' DROP TABLE dbo.Example ; IF EXISTS ( SELECT 1 FROM sys.partition_schemes WHERE name = N''PS'' ) DROP PARTITION SCHEME PS ; IF EXISTS ( SELECT 1 FROM sys.partition_functions WHERE name = N''PF'' ) DROP PARTITION FUNCTION PF ; CREATE PARTITION FUNCTION PF (INTEGER) AS RANGE RIGHT FOR VALUES (20, 40, 60, 80, 100) ; CREATE PARTITION SCHEME PS AS PARTITION PF ALL TO ([PRIMARY]) ; CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, padding CHAR(100) NOT NULL DEFAULT SPACE(100), CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ON PS (key_col); '); END ELSE BEGIN RAISERROR('Invalid SKU for partition test', 16, 1); RETURN; END; END ; -- Non-unique non-clustered index on the 'data' column CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ; -- Add 100 rows INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ; END; GO The second stored procedure, ShowStats, displays information from the Index Usage Stats and Index Operational Stats DMVs: CREATE PROCEDURE dbo.ShowStats @Partitioned BIT = 'false' AS BEGIN -- Index Usage Stats DMV (QE) SELECT index_name = ISNULL(I.name, I.type_desc), scans = IUS.user_scans, seeks = IUS.user_seeks, lookups = IUS.user_lookups FROM sys.dm_db_index_usage_stats AS IUS JOIN sys.indexes AS I ON 
I.object_id = IUS.object_id AND I.index_id = IUS.index_id WHERE IUS.database_id = DB_ID(N'ScansAndSeeks') AND IUS.object_id = OBJECT_ID(N'dbo.Example', N'U') ORDER BY I.index_id ; -- Index Operational Stats DMV (SE) IF @Partitioned = 'true' SELECT index_name = ISNULL(I.name, I.type_desc), partitions = COUNT(IOS.partition_number), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; ELSE SELECT index_name = ISNULL(I.name, I.type_desc), range_scans = SUM(IOS.range_scan_count), single_lookups = SUM(IOS.singleton_lookup_count) FROM sys.dm_db_index_operational_stats ( DB_ID(N'ScansAndSeeks'), OBJECT_ID(N'dbo.Example', N'U'), NULL, NULL ) AS IOS JOIN sys.indexes AS I ON I.object_id = IOS.object_id AND I.index_id = IOS.index_id GROUP BY I.index_id, -- Key I.name, I.type_desc ORDER BY I.index_id; END; The final stored procedure, RunTest, executes a query written against the example table: CREATE PROCEDURE dbo.RunTest @SQL VARCHAR(8000), @Partitioned BIT = 'false' AS BEGIN -- No execution plan yet SET STATISTICS XML OFF ; -- Reset the test environment EXECUTE dbo.ResetTest @Partitioned ; -- Previous call will throw an error if a partitioned -- test was requested, but SKU does not support it IF @@ERROR = 0 BEGIN -- IO statistics and plan on SET STATISTICS XML, IO ON ; -- Test statement EXECUTE (@SQL) ; -- Plan and IO statistics off SET STATISTICS XML, IO OFF ; EXECUTE dbo.ShowStats @Partitioned; END; END; The Tests The first test is a simple scan of the heap table: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example'; The top result set comes from the Index Usage Stats DMV, so it is the Query Executor’s (QE) view.  The lower result is from Index Operational Stats, which shows statistics derived from the actions taken by the Storage Engine (SE).  We see that QE performed 1 scan operation on the heap, and SE performed a single range scan.  Let’s try a single-value equality seek on a unique index next: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 32'; This time we see a single seek on the non-clustered primary key from QE, and one singleton lookup on the same index by the SE.  Now for a single-value seek on the non-unique non-clustered index: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32'; QE shows a single seek on the non-clustered non-unique index, but SE shows a single range scan on that index – not the singleton lookup we saw in the previous test.  That makes sense because we know that only a single-value seek into a unique index is a singleton seek.  A single-value seek into a non-unique index might retrieve any number of rows, if you think about it.  The next query is equivalent to the IN list example seen in the first post in this series, but it is written using OR (just for variety, you understand): EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data = 32 OR data = 33'; The plan looks the same, and there’s no difference in the stats recorded by QE, but the SE shows two range scans.  Again, these are range scans because we are looking for two values in the data column, which is covered by a non-unique index.  I’ve added a snippet from the Properties window to show that the query plan does show two seek predicates, not just one.  
Now let’s rewrite the query using BETWEEN: EXECUTE dbo.RunTest @SQL = 'SELECT data FROM Example WHERE data BETWEEN 32 AND 33'; Notice the seek operator only has one predicate now – it’s just a single range scan from 32 to 33 in the index – as the SE output shows.  For the next test, we will look up four values in the key_col column: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col IN (2,4,6,8)'; Just a single seek on the PK from the Query Executor, but four singleton lookups reported by the Storage Engine – and four seek predicates in the Properties window.  On to a more complex example: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WITH (INDEX([PK dbo.Example key_col])) WHERE key_col BETWEEN 1 AND 8'; This time we are forcing use of the non-clustered primary key to return eight rows.  The index is not covering for this query, so the query plan includes an RID lookup into the heap to fetch the data and padding columns.  The QE reports a seek on the PK and a lookup on the heap.  The SE reports a single range scan on the PK (to find key_col values between 1 and 8), and eight singleton lookups on the heap.  Remember that a bookmark lookup (RID or Key) is a seek to a single value in a ‘unique index’ – it finds a row in the heap or cluster from a unique RID or clustering key – so that’s why lookups are always singleton lookups, not range scans. Our next example shows what happens when a query plan operator is not executed at all: EXECUTE dbo.RunTest @SQL = 'SELECT key_col FROM Example WHERE key_col = 8 AND @@TRANCOUNT < 0'; The Filter has a start-up predicate which is always false (if your @@TRANCOUNT is less than zero, call CSS immediately).  The index seek is never executed, but QE still records a single seek against the PK because the operator appears once in an executed plan.  The SE output shows no activity at all.  This next example is 2008 and above only, I’m afraid: EXECUTE dbo.RunTest @SQL = 'SELECT * FROM Example WHERE key_col BETWEEN 1 AND 30', @Partitioned = 'true'; This is the first example to use a partitioned table.  QE reports a single seek on the heap (yes – a seek on a heap), and the SE reports two range scans on the heap.  SQL Server knows (from the partitioning definition) that it only needs to look at partitions 1 and 2 to find all the rows where key_col is between 1 and 30 – the engine seeks to find the two partitions, and performs a range scan seek on each partition. The final example for today is another seek on a heap – try to work out the output of the query before running it! EXECUTE dbo.RunTest @SQL = 'SELECT TOP (2) WITH TIES * FROM Example WHERE key_col BETWEEN 1 AND 50 ORDER BY $PARTITION.PF(key_col) DESC', @Partitioned = 'true'; Notice the lack of an explicit Sort operator in the query plan to enforce the ORDER BY clause, and the backward range scan. © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • SELinux process killed while trying to set a boolean

    - by Antonio
    I've got a strange problem. I can not allow apache to connect to database at my CentOC 6.4 box: [root@centos6 ~]# setsebool -P httpd_can_network_connect on Killed [root@centos6 ~]# sestatus -b | grep httpd_can_network_connect httpd_can_network_connect off httpd_can_network_connect_cobbler off httpd_can_network_connect_db off I watched log file, but there was no log messages: tail -f /var/log/audit/audit.log UPDATE: There are some information in /var/log/messages: Nov 9 19:07:16 vs302 kernel: setsebool invoked oom-killer: gfp_mask=0x280da, order=0, oom_adj=0, oom_score_adj=0 Nov 9 19:07:16 vs302 kernel: setsebool cpuset=/ mems_allowed=0 Nov 9 19:07:16 vs302 kernel: Pid: 1660, comm: setsebool Not tainted 2.6.32-358.23.2.el6.x86_64 #1 Nov 9 19:07:16 vs302 kernel: Call Trace: Nov 9 19:07:16 vs302 kernel: [<ffffffff810cb641>] ? cpuset_print_task_mems_allowed+0x91/0xb0 Nov 9 19:07:16 vs302 kernel: [<ffffffff8111ce40>] ? dump_header+0x90/0x1b0 Nov 9 19:07:16 vs302 kernel: [<ffffffff8111d2c2>] ? oom_kill_process+0x82/0x2a0 Nov 9 19:07:16 vs302 kernel: [<ffffffff8111d201>] ? select_bad_process+0xe1/0x120 Nov 9 19:07:16 vs302 kernel: [<ffffffff8111d700>] ? out_of_memory+0x220/0x3c0 Nov 9 19:07:16 vs302 kernel: [<ffffffff8112c3dc>] ? __alloc_pages_nodemask+0x8ac/0x8d0 Nov 9 19:07:16 vs302 kernel: [<ffffffff81160d6a>] ? alloc_pages_vma+0x9a/0x150 Nov 9 19:07:16 vs302 kernel: [<ffffffff81143f0b>] ? handle_pte_fault+0x76b/0xb50 Nov 9 19:07:16 vs302 kernel: [<ffffffff81228664>] ? task_has_capability+0xb4/0x110 Nov 9 19:07:16 vs302 kernel: [<ffffffff81004a49>] ? __raw_callee_save_xen_pmd_val+0x11/0x1e Nov 9 19:07:16 vs302 kernel: [<ffffffff8114452a>] ? handle_mm_fault+0x23a/0x310 Nov 9 19:07:16 vs302 kernel: [<ffffffff811485b6>] ? vma_adjust+0x556/0x5e0 Nov 9 19:07:16 vs302 kernel: [<ffffffff810474e9>] ? __do_page_fault+0x139/0x480 Nov 9 19:07:16 vs302 kernel: [<ffffffff81148b8a>] ? vma_merge+0x29a/0x3e0 Nov 9 19:07:16 vs302 kernel: [<ffffffff81149fdc>] ? do_brk+0x26c/0x350 Nov 9 19:07:16 vs302 kernel: [<ffffffff8100ba1d>] ? retint_restore_args+0x5/0x6 Nov 9 19:07:16 vs302 kernel: [<ffffffff81513bfe>] ? do_page_fault+0x3e/0xa0 Nov 9 19:07:16 vs302 kernel: [<ffffffff81510fb5>] ? page_fault+0x25/0x30 Nov 9 19:07:16 vs302 kernel: Mem-Info: Nov 9 19:07:16 vs302 kernel: Node 0 DMA per-cpu: Nov 9 19:07:16 vs302 kernel: CPU 0: hi: 0, btch: 1 usd: 0 Nov 9 19:07:16 vs302 kernel: Node 0 DMA32 per-cpu: Nov 9 19:07:16 vs302 kernel: CPU 0: hi: 186, btch: 31 usd: 30 Nov 9 19:07:16 vs302 kernel: active_anon:132249 inactive_anon:46 isolated_anon:0 Nov 9 19:07:16 vs302 kernel: active_file:56 inactive_file:59 isolated_file:0 Nov 9 19:07:16 vs302 kernel: unevictable:0 dirty:2 writeback:0 unstable:0 Nov 9 19:07:16 vs302 kernel: free:1369 slab_reclaimable:1774 slab_unreclaimable:11588 Nov 9 19:07:16 vs302 kernel: mapped:54 shmem:48 pagetables:1211 bounce:0 Nov 9 19:07:16 vs302 kernel: Node 0 DMA free:2440kB min:72kB low:88kB high:108kB active_anon:12156kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:14648kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:24kB slab_unreclaimable:8kB kernel_stack:0kB pagetables:16kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
yes Nov 9 19:07:16 vs302 kernel: lowmem_reserve[]: 0 590 590 590 Nov 9 19:07:16 vs302 kernel: Node 0 DMA32 free:3036kB min:3072kB low:3840kB high:4608kB active_anon:516840kB inactive_anon:184kB active_file:224kB inactive_file:236kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:604988kB mlocked:0kB dirty:8kB writeback:0kB mapped:216kB shmem:192kB slab_reclaimable:7072kB slab_unreclaimable:46344kB kernel_stack:880kB pagetables:4828kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:128 all_unreclaimable? no Nov 9 19:07:16 vs302 kernel: lowmem_reserve[]: 0 0 0 0 Nov 9 19:07:16 vs302 kernel: Node 0 DMA: 0*4kB 1*8kB 0*16kB 0*32kB 0*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 2440kB Nov 9 19:07:16 vs302 kernel: Node 0 DMA32: 129*4kB 67*8kB 30*16kB 19*32kB 6*64kB 2*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3036kB Nov 9 19:07:16 vs302 kernel: 182 total pagecache pages Nov 9 19:07:16 vs302 kernel: 0 pages in swap cache Nov 9 19:07:16 vs302 kernel: Swap cache stats: add 0, delete 0, find 0/0 Nov 9 19:07:16 vs302 kernel: Free swap = 0kB Nov 9 19:07:16 vs302 kernel: Total swap = 0kB Nov 9 19:07:16 vs302 kernel: 157439 pages RAM Nov 9 19:07:16 vs302 kernel: 6271 pages reserved Nov 9 19:07:16 vs302 kernel: 2686 pages shared Nov 9 19:07:16 vs302 kernel: 146395 pages non-shared Nov 9 19:07:16 vs302 kernel: [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name Nov 9 19:07:16 vs302 kernel: [ 271] 0 271 2798 231 0 -17 -1000 udevd Nov 9 19:07:16 vs302 kernel: [ 476] 0 476 2797 230 0 -17 -1000 udevd Nov 9 19:07:16 vs302 kernel: [ 718] 0 718 2279 122 0 0 0 dhclient Nov 9 19:07:16 vs302 kernel: [ 762] 0 762 6909 58 0 -17 -1000 auditd Nov 9 19:07:16 vs302 kernel: [ 787] 0 787 62270 147 0 0 0 rsyslogd Nov 9 19:07:16 vs302 kernel: [ 801] 25 801 40326 2655 0 0 0 named Nov 9 19:07:16 vs302 kernel: [ 850] 0 850 16563 172 0 -17 -1000 sshd Nov 9 19:07:16 vs302 kernel: [ 875] 0 875 23451 240 0 0 0 sshd Nov 9 19:07:16 vs302 kernel: [ 966] 498 966 4780 44 0 0 0 wrapper Nov 9 19:07:16 vs302 kernel: [ 968] 498 968 497404 40812 0 0 0 java Nov 9 19:07:16 vs302 kernel: [ 1057] 0 1057 20216 225 0 0 0 master Nov 9 19:07:16 vs302 kernel: [ 1064] 89 1064 20278 209 0 0 0 qmgr Nov 9 19:07:16 vs302 kernel: [ 1071] 0 1071 27075 121 0 0 0 bash Nov 9 19:07:16 vs302 kernel: [ 1111] 0 1111 24880 350 0 0 0 httpd Nov 9 19:07:16 vs302 kernel: [ 1117] 48 1117 24913 351 0 0 0 httpd Nov 9 19:07:16 vs302 kernel: [ 1118] 48 1118 24880 337 0 0 0 httpd Nov 9 19:07:16 vs302 kernel: [ 1119] 48 1119 24880 337 0 0 0 httpd Nov 9 19:07:16 vs302 kernel: [ 1120] 48 1120 24880 337 0 0 0 httpd Nov 9 19:07:16 vs302 kernel: [ 1121] 48 1121 24880 337 0 0 0 httpd Nov 9 19:07:16 vs302 kernel: [ 1122] 48 1122 24880 337 0 0 0 httpd Nov 9 19:07:16 vs302 kernel: [ 1124] 48 1124 24880 337 0 0 0 httpd Nov 9 19:07:16 vs302 kernel: [ 1125] 48 1125 24880 337 0 0 0 httpd Nov 9 19:07:16 vs302 kernel: [ 1129] 0 1129 29313 151 0 0 0 crond Nov 9 19:07:16 vs302 kernel: [ 1143] 0 1143 1018 22 0 0 0 agetty Nov 9 19:07:16 vs302 kernel: [ 1146] 0 1146 1015 22 0 0 0 mingetty Nov 9 19:07:16 vs302 kernel: [ 1514] 0 1514 23451 237 0 0 0 sshd Nov 9 19:07:16 vs302 kernel: [ 1517] 0 1517 27075 113 0 0 0 bash Nov 9 19:07:16 vs302 kernel: [ 1641] 89 1641 20236 218 0 0 0 pickup Nov 9 19:07:16 vs302 kernel: [ 1659] 0 1659 25234 39 0 0 0 tail Nov 9 19:07:16 vs302 kernel: [ 1660] 0 1660 89903 85712 0 0 0 setsebool Nov 9 19:07:16 vs302 kernel: Out of memory: Kill process 1660 (setsebool) score 568 or sacrifice child Nov 9 19:07:16 vs302 kernel: 
Killed process 1660, UID 0, (setsebool) total-vm:359612kB, anon-rss:342708kB, file-rss:140kB
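    The /var/log/messages excerpt explains the "Killed": setsebool -P rebuilds the whole policy in memory (about 340 MB RSS here), the box has roughly 600 MB of RAM and, per the dump, no swap at all, so the OOM killer picks it off. A minimal sketch of the usual workaround: add a temporary swap file, rerun the command, then remove it (the path and size are arbitrary):
      ## Create and enable a temporary 1 GB swap file
      # dd if=/dev/zero of=/swapfile bs=1M count=1024
      # chmod 600 /swapfile
      # mkswap /swapfile
      # swapon /swapfile
      ## Retry the boolean change
      # setsebool -P httpd_can_network_connect on
      ## Clean up afterwards (or keep the swap file and add it to /etc/fstab)
      # swapoff /swapfile && rm -f /swapfile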

    Read the article

  • Puppet's automatically generated certificates failing

    - by gparent
    I am running a default configuration of Puppet on Debian Squeeze 6.0.4. The server's FQDN is master.example.com. The client's FQDN is client.example.com. I am able to contact the puppet master and send a CSR. I sign it using puppetca -sa but the client will still not connect. Date of both machines is within 2 seconds of Tue Apr 3 20:59:00 UTC 2012 as I wrote this sentence. This is what appears in /var/log/syslog: Apr 3 17:03:52 localhost puppet-agent[18653]: Reopening log files Apr 3 17:03:52 localhost puppet-agent[18653]: Starting Puppet client version 2.6.2 Apr 3 17:03:53 localhost puppet-agent[18653]: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed Apr 3 17:03:53 localhost puppet-agent[18653]: Using cached catalog Apr 3 17:03:53 localhost puppet-agent[18653]: Could not retrieve catalog; skipping run Here is some interesting output: OpenSSL client test: client:~# openssl s_client -host master.example.com -port 8140 -cert /var/lib/puppet/ssl/certs/client.example.com.pem -key /var/lib/puppet/ssl/private_keys/client.example.com.pem -CAfile /var/lib/puppet/ssl/certs/ca.pem CONNECTED(00000003) depth=1 /CN=Puppet CA: master.example.com verify return:1 depth=0 /CN=master.example.com verify error:num=7:certificate signature failure verify return:1 depth=0 /CN=master.example.com verify return:1 18509:error:1409441B:SSL routines:SSL3_READ_BYTES:tlsv1 alert decrypt error:s3_pkt.c:1102:SSL alert number 51 18509:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:188: client:~# master's certificate: root@master:/etc/puppet# openssl x509 -text -noout -in /etc/puppet/ssl/certs/master.example.com.pem Certificate: Data: Version: 3 (0x2) Serial Number: 2 (0x2) Signature Algorithm: sha1WithRSAEncryption Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:28 2012 GMT Not After : Apr 2 20:01:28 2017 GMT Subject: CN=master.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:a9:c1:f9:4c:cd:0f:68:84:7b:f4:93:16:20:44: 7a:2b:05:8e:57:31:05:8e:9c:c8:08:68:73:71:39: c1:86:6a:59:93:6e:53:aa:43:11:83:5b:2d:8c:7d: 54:05:65:c1:e1:0e:94:4a:f0:86:58:c3:3d:4f:f3: 7d:bd:8e:29:58:a6:36:f4:3e:b2:61:ec:53:b5:38: 8e:84:ac:5f:a3:e3:8c:39:bd:cf:4f:3c:ff:a9:65: 09:66:3c:ba:10:14:69:d5:07:57:06:28:02:37:be: 03:82:fb:90:8b:7d:b3:a5:33:7b:9b:3a:42:51:12: b3:ac:dd:d5:58:69:a9:8a:ed Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C X509v3 Extended Key Usage: critical TLS Web Server Authentication, TLS Web Client Authentication Signature Algorithm: sha1WithRSAEncryption 7b:2c:4f:c2:76:38:ab:03:7f:c6:54:d9:78:1d:ab:6c:45:ab: 47:02:c7:fd:45:4e:ab:b5:b6:d9:a7:df:44:72:55:0c:a5:d0: 86:58:14:ae:5f:6f:ea:87:4d:78:e4:39:4d:20:7e:3d:6d:e9: e2:5e:d7:c9:3c:27:43:a4:29:44:85:a1:63:df:2f:55:a9:6a: 72:46:d8:fb:c7:cc:ca:43:e7:e1:2c:fe:55:2a:0d:17:76:d4: e5:49:8b:85:9f:fa:0e:f6:cc:e8:28:3e:8b:47:b0:e1:02:f0: 3d:73:3e:99:65:3b:91:32:c5:ce:e4:86:21:b2:e0:b4:15:b5: 22:63 root@master:/etc/puppet# CA's certificate: root@master:/etc/puppet# openssl x509 -text -noout -in /etc/puppet/ssl/certs/ca.pem Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) Signature Algorithm: sha1WithRSAEncryption 
Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:05 2012 GMT Not After : Apr 2 20:01:05 2017 GMT Subject: CN=Puppet CA: master.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:b5:2c:3e:26:a3:ae:43:b8:ed:1e:ef:4d:a1:1e: 82:77:78:c2:98:3f:e2:e0:05:57:f0:8d:80:09:36: 62:be:6c:1a:21:43:59:1d:e9:b9:4d:e0:9c:fa:09: aa:12:a1:82:58:fc:47:31:ed:ad:ad:73:01:26:97: ef:d2:d6:41:6b:85:3b:af:70:00:b9:63:e9:1b:c3: ce:57:6d:95:0e:a6:d2:64:bd:1f:2c:1f:5c:26:8e: 02:fd:d3:28:9e:e9:8f:bc:46:bb:dd:25:db:39:57: 81:ed:e5:c8:1f:3d:ca:39:cf:e7:f3:63:75:f6:15: 1f:d4:71:56:ed:84:50:fb:5d Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:TRUE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Certificate Sign, CRL Sign X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C Signature Algorithm: sha1WithRSAEncryption 1d:cd:c6:65:32:42:a5:01:62:46:87:10:da:74:7e:8b:c8:c9: 86:32:9e:c2:2e:c1:fd:00:79:f0:ef:d8:73:dd:7e:1b:1a:3f: cc:64:da:a3:38:ad:49:4e:c8:4d:e3:09:ba:bc:66:f2:6f:63: 9a:48:19:2d:27:5b:1d:2a:69:bf:4f:f4:e0:67:5e:66:84:30: e5:85:f4:49:6e:d0:92:ae:66:77:50:cf:45:c0:29:b2:64:87: 12:09:d3:10:4d:91:b6:f3:63:c4:26:b3:fa:94:2b:96:18:1f: 9b:a9:53:74:de:9c:73:a4:3a:8d:bf:fa:9c:c0:42:9d:78:49: 4d:70 root@master:/etc/puppet# Client's certificate: client:~# openssl x509 -text -noout -in /var/lib/puppet/ssl/certs/client.example.com.pem Certificate: Data: Version: 3 (0x2) Serial Number: 3 (0x3) Signature Algorithm: sha1WithRSAEncryption Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:36 2012 GMT Not After : Apr 2 20:01:36 2017 GMT Subject: CN=client.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:ae:88:6d:9b:e3:b1:fc:47:07:d6:bf:ea:53:d1: 14:14:9b:35:e6:70:43:e0:58:35:76:ac:c5:9d:86: 02:fd:77:28:fc:93:34:65:9d:dd:0b:ea:21:14:4d: 8a:95:2e:28:c9:a5:8d:a2:2c:0e:1c:a0:4c:fa:03: e5:aa:d3:97:98:05:59:3c:82:a9:7c:0e:e9:df:fd: 48:81:dc:33:dc:88:e9:09:e4:19:d6:e4:7b:92:33: 31:73:e4:f2:9c:42:75:b2:e1:9f:d9:49:8c:a7:eb: fa:7d:cb:62:22:90:1c:37:3a:40:95:a7:a0:3b:ad: 8e:12:7c:6e:ad:04:94:ed:47 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C X509v3 Extended Key Usage: critical TLS Web Server Authentication, TLS Web Client Authentication Signature Algorithm: sha1WithRSAEncryption 33:1f:ec:3c:91:5a:eb:c6:03:5f:a1:58:60:c3:41:ed:1f:fe: cb:b2:40:11:63:4d:ba:18:8a:8b:62:ba:ab:61:f5:a0:6c:0e: 8a:20:56:7b:10:a1:f9:1d:51:49:af:70:3a:05:f9:27:4a:25: d4:e6:88:26:f7:26:e0:20:30:2a:20:1d:c4:d3:26:f1:99:cf: 47:2e:73:90:bd:9c:88:bf:67:9e:dd:7c:0e:3a:86:6b:0b:8d: 39:0f:db:66:c0:b6:20:c3:34:84:0e:d8:3b:fc:1c:a8:6c:6c: b1:19:76:65:e6:22:3c:bf:ff:1c:74:bb:62:a0:46:02:95:fa: 83:41 client:~#
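    The openssl s_client test is the telling part: the master's own certificate fails signature verification against the CA it presents (verify error 7 at depth 0), so re-signing the client will not help; the usual way out is to regenerate the certificates from scratch. A sketch of that reset for Puppet 2.6, using the ssldir paths shown above (/etc/puppet/ssl on the master, /var/lib/puppet/ssl on the client). Note that this invalidates any other agents already signed by this CA:
      ## On the master: stop the daemon and wipe its SSL state; a new CA and
      ## master certificate are generated on the next start
      # /etc/init.d/puppetmaster stop
      # rm -rf /etc/puppet/ssl
      # /etc/init.d/puppetmaster start
      ## On the client: wipe the old certificates and request a new one
      # rm -rf /var/lib/puppet/ssl
      # puppetd --test --server master.example.com --waitforcert 60
      ## Back on the master: sign the fresh request
      # puppetca --sign client.example.com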

    Read the article

  • My server suddenly crashes every 2 days or so. Programmer has no idea; please help find the cause, here is the top output

    - by Alex
    Every couple of days my server suddenly crashes and I must request hardware reset at data center to get it back running. Today I came back to my shell and saw the server was dead and "top" was running on it, and see below for the "top" right before the crash. I opened /var/log/messages and scrolled to the reboot time and see nothing, no errors prior to the hard reboot. (I checked in /etc/syslog.conf and I see "*.info;mail.none;authpriv.none;cron.none /var/log/messages" , isn't this good enough to log all problems?) Usually when I look at the top, the swap is never used up like this! I also don't know why mysqld is at 323% cpu (server only runs drupal and its never slow or overloaded). Solver is my application. I don't know whats that 'sh' doing and 'dovecot' doing. Its driving me crazy over the last month, please help me solve this mystery and stop my downtimes. top - 01:10:06 up 6 days, 5 min, 3 users, load average: 34.87, 18.68, 9.03 Tasks: 500 total, 19 running, 481 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 96.6%sy, 0.0%ni, 1.7%id, 1.8%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 8165600k total, 8139764k used, 25836k free, 428k buffers Swap: 2104496k total, 2104496k used, 0k free, 8236k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 4421 mysql 15 0 571m 105m 976 S 323.5 1.3 9:08.00 mysqld 564 root 20 -5 0 0 0 R 99.5 0.0 2:49.16 kswapd1 25767 apache 19 0 399m 8060 888 D 79.3 0.1 0:06.64 httpd 25781 apache 19 0 398m 5648 492 R 79.0 0.1 0:08.21 httpd 25961 apache 25 0 398m 5700 560 R 76.7 0.1 0:17.81 httpd 25980 apache 25 0 10816 668 520 R 75.0 0.0 0:46.95 sh 563 root 20 -5 0 0 0 D 71.4 0.0 3:12.37 kswapd0 25766 apache 25 0 399m 7256 756 R 69.7 0.1 0:39.83 httpd 25911 apache 25 0 398m 5612 480 R 58.8 0.1 0:17.63 httpd 25782 apache 25 0 440m 38m 648 R 55.2 0.5 0:18.94 httpd 25966 apache 25 0 398m 5640 556 R 55.2 0.1 0:48.84 httpd 4588 root 25 0 74860 596 476 R 53.9 0.0 0:37.90 crond 25939 apache 25 0 2776 172 84 R 48.9 0.0 0:59.46 solver 4575 root 25 0 397m 6004 1144 R 48.6 0.1 1:00.43 httpd 25962 apache 25 0 398m 5628 492 R 47.9 0.1 0:14.58 httpd 25824 apache 25 0 440m 39m 680 D 47.3 0.5 0:57.85 httpd 25968 apache 25 0 398m 5612 528 R 46.6 0.1 0:42.73 httpd 4477 root 25 0 6084 396 280 R 46.3 0.0 0:59.53 dovecot 25982 root 25 0 397m 5108 240 R 45.9 0.1 0:18.01 httpd 25943 apache 25 0 2916 172 8 R 44.0 0.0 0:53.54 solver 30687 apache 25 0 468m 63m 1124 D 42.3 0.8 0:45.02 httpd 25978 apache 25 0 398m 5688 600 R 23.8 0.1 0:40.99 httpd 25983 root 25 0 397m 5272 384 D 14.9 0.1 0:18.99 httpd 935 root 10 -5 0 0 0 D 14.2 0.0 1:54.60 kjournald 25986 root 25 0 397m 5308 420 D 8.9 0.1 0:04.75 httpd 4011 haldaemo 25 0 31568 1476 716 S 5.6 0.0 0:24.36 hald 25956 apache 23 0 398m 5872 644 S 5.6 0.1 0:13.85 httpd 18336 root 18 0 13004 1332 724 R 0.3 0.0 1:46.66 top 1 root 18 0 10372 212 180 S 0.0 0.0 0:05.99 init 2 root RT -5 0 0 0 S 0.0 0.0 0:00.95 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.01 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 0:00.15 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:00 .06 ksoftirqd/1 here is a normal top, when server is working fine: top - 01:50:41 up 21 min, 1 user, load average: 2.98, 2.70, 1.68 Tasks: 271 total, 2 running, 269 sleeping, 0 stopped, 0 zombie Cpu(s): 15.0%us, 1.1%sy, 0.0%ni, 81.4%id, 2.4%wa, 0.1%hi, 0.0%si, 0.0%st Mem: 8165600k total, 2035856k used, 6129744k free, 60840k buffers Swap: 2104496k total, 0k used, 2104496k free, 283744k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 2204 apache 17 0 
466m 83m 19m S 25.9 1.0 0:22.16 httpd 11347 apache 15 0 466m 83m 19m S 25.9 1.0 0:26.10 httpd 18204 apache 18 0 481m 97m 19m D 25.2 1.2 0:13.99 httpd 4644 apache 18 0 481m 100m 19m D 24.6 1.3 1:17.12 httpd 4727 apache 17 0 481m 99m 19m S 24.3 1.2 1:10.77 httpd 4777 apache 17 0 482m 102m 21m S 23.6 1.3 1:38.27 httpd 8924 apache 15 0 483m 99m 19m S 22.3 1.3 1:13.41 httpd 9390 apache 18 0 483m 99m 19m S 18.9 1.2 1:05.35 httpd 4728 apache 16 0 481m 101m 19m S 14.3 1.3 1:12.50 httpd 4648 apache 15 0 481m 107m 27m S 12.6 1.4 1:18.62 httpd 24955 apache 15 0 467m 82m 19m S 3.3 1.0 0:21.80 httpd 4722 apache 15 0 503m 118m 19m R 1.7 1.5 1:17.79 httpd 4647 apache 15 0 484m 105m 20m S 1.3 1.3 1:40.73 httpd 4643 apache 16 0 481m 100m 20m S 0.7 1.3 1:11.80 httpd 1561 root 15 0 12900 1264 828 R 0.3 0.0 0:00.54 top 4434 mysql 15 0 496m 55m 4812 S 0.3 0.7 0:06.69 mysqld 4646 apache 15 0 481m 100m 19m S 0.3 1.3 1:25.51 httpd 1 root 18 0 10372 692 580 S 0.0 0.0 0:02.09 init 2 root RT -5 0 0 0 S 0.0 0.0 0:00.03 migration/0 3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 5 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/1 6 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/1 7 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/1 8 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/2 9 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/2 10 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/2 11 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/3 12 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/3 13 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/3 14 root RT -5 0 0 0 S 0.0 0.0 0:00.03 migration/4 15 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/4 16 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/4 17 root RT -5 0 0 0 S 0.0 0.0 0:00.02 migration/5 18 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/5 19 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/5 20 root RT -5 0 0 0 S 0.0 0.0 0:00.01 migration/6 21 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/6 22 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/6 23 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/7
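    The crash-time top is a classic memory-exhaustion picture: swap 100% used, almost no free RAM, kswapd pegged and a pile of httpd/solver processes runnable or in uninterruptible sleep, so the box is thrashing rather than hitting a hardware fault. Since /var/log/messages shows nothing before the reset, it helps to leave a lightweight recorder running so the next incident leaves a trail. A simple sketch using a cron entry (the log path is arbitrary):
      ## /etc/cron.d/memlog - snapshot memory and the biggest processes once a minute
      * * * * * root (date; free -m; ps aux --sort=-rss | head -20) >> /var/log/memlog 2>&1
    If the log shows Apache workers or mysqld growing until swap runs out, the usual remedies are lowering MaxClients (and any PHP memory_limit) so the worst case fits in RAM, and capping MySQL's buffer sizes.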

    Read the article

  • pre-commit hook in svn: could not be translated from the native locale to UTF-8

    - by Alexandre Moraes
    Hi everybody, I have a problem with my pre-commit hook. This hook test if a file is locked when the user commits. When a bad condition happens, it should output that the another user is locking this file or if nobody is locking, it should show "you are not locking this file message (file´s name)". The error happens when the file´s name has some latin character like "ç" and tortoise show me this in the output. Commit failed (details follow): Commit blocked by pre-commit hook (exit code 1) with output: [Erro output could not be translated from the native locale to UTF-8.] Do you know how can I solve this? Thanks, Alexandre My shell script is here: #!/bin/sh REPOS="$1" TXN="$2" export LANG="en_US.UTF-8" /app/svn/hooks/ensure-has-need-lock.pl "$REPOS" "$TXN" if [ $? -ne 0 ]; then exit 1; fi exit 0 And my perl is here: !/usr/bin/env perl #Turn on warnings the best way depending on the Perl version. BEGIN { if ( $] >= 5.006_000) { require warnings; import warnings; } else { $^W = 1; } } use strict; use Carp; &usage unless @ARGV == 2; my $repos = shift; my $txn = shift; my $svnlook = "/usr/local/bin/svnlook"; my $user; my $ok = 1; foreach my $program ($svnlook) { if (-e $program) { unless (-x $program) { warn "$0: required program $program' is not executable, ", "edit $0.\n"; $ok = 0; } } else { warn "$0: required program $program' does not exist, edit $0.\n"; $ok = 0; } } exit 1 unless $ok; unless (-e $repos){ &usage("$0: repository directory $repos' does not exist."); } unless (-d $repos){ &usage("$0: repository directory $repos' is not a directory."); } foreach my $user_tmp (&read_from_process($svnlook, 'author', $repos, '-t', $txn)) { $user = $user_tmp; } my @errors; foreach my $transaction (&read_from_process($svnlook, 'changed', $repos, '-t', $txn)){ if ($transaction =~ /^U. (.*[^\/])$/){ my $file = $1; my $err = 0; foreach my $locks (&read_from_process($svnlook, 'lock', $repos, $file)){ $err = 1; if($locks=~ /Owner: (.*)/){ if($1 != $user){ push @errors, "$file : You are not locking this file!"; } } } if($err==0){ push @errors, "$file : You are not locking this file!"; } } elsif($transaction =~ /^D. (.*[^\/])$/){ my $file = $1; my $tchan = &read_from_process($svnlook, 'lock', $repos, $file); foreach my $locks (&read_from_process($svnlook, 'lock', $repos, $file)){ push @errors, "$1 : cannot delete locked Files"; } } elsif($transaction =~ /^A. (.*[^\/])$/){ my $needs_lock; my $path = $1; foreach my $prop (&read_from_process($svnlook, 'proplist', $repos, '-t', $txn, '--verbose', $path)){ if ($prop =~ /^\s*svn:needs-lock : (\S+)/){ $needs_lock = $1; } } if (not $needs_lock){ push @errors, "$path : svn:needs-lock is not set. Pleas ask TCC for support."; } } } if (@errors) { warn "$0:\n\n", join("\n", @errors), "\n\n"; exit 1; } else { exit 0; } sub usage { warn "@_\n" if @_; die "usage: $0 REPOS TXN-NAME\n"; } sub safe_read_from_pipe { unless (@_) { croak "$0: safe_read_from_pipe passed no arguments.\n"; } print "Running @_\n"; my $pid = open(SAFE_READ, '-|'); unless (defined $pid) { die "$0: cannot fork: $!\n"; } unless ($pid) { open(STDERR, ">&STDOUT") or die "$0: cannot dup STDOUT: $!\n"; exec(@_) or die "$0: cannot exec @_': $!\n"; } my @output; while (<SAFE_READ>) { chomp; push(@output, $_); } close(SAFE_READ); my $result = $?; my $exit = $result >> 8; my $signal = $result & 127; my $cd = $result & 128 ? 
"with core dump" : ""; if ($signal or $cd) { warn "$0: pipe from @_' failed $cd: exit=$exit signal=$signal\n"; } if (wantarray) { return ($result, @output); } else { return $result; } } sub read_from_process { unless (@_) { croak "$0: read_from_process passed no arguments.\n"; } my ($status, @output) = &safe_read_from_pipe(@_); if ($status) { if (@output) { die "$0: @_' failed with this output:\n", join("\n", @output), "\n"; } else { die "$0: @_' failed with no output.\n"; } } else { return @output; } }

    Read the article

  • Upgrading Windows 8 boot to VHD to Windows 8.1 – Step by step guide

    - by Liam Westley
    Originally posted on: http://geekswithblogs.net/twickers/archive/2013/10/19/upgrading-windows-8-boot-to-vhd-to-windows-8.1ndashstep-by.aspxBoot to VHD – dual booting Windows 7 and Windows 8 became easy When Windows 8 arrived, quite a few people decided that they would still dual boot their machines, and instead of mucking about with resizing disk partitions to free up space for Windows 8 they decided to use the boot from VHD feature to create a huge hard disc image into which Windows 8 could be installed.  Scott Hanselman wrote this installation guide, while I myself used the installation guide from Ed Bott of ZD net fame. Boot to VHD is a great solution, it achieves a dual boot, can be backed up easily and had virtually no effect on the original Windows 7 partition. As a developer who has dual booted Windows operating systems for years, hacking boot.ini files, the boot to VHD was a much easier solution. Upgrade to Windows 8.1 – ah, you can’t do that on a virtual disk installation (boot to VHD) Last week the final version of Windows 8.1 arrived, and I went into the Windows Store to upgrade.  Luckily I’m on a fast download service, and use an SSD, because once the upgrade was downloaded and prepared Windows informed that This PC can’t run Windows 8.1, and provided the reason, You can’t install Windows on a virtual drive.  You can see an image of the message and discussion that sparked my search for a solution in this Microsoft Technet forum post. I was determined not to have to resize partitions yet again and fiddle with VHD to disk utilities and back again, and in the end I did succeed in upgrading to a Windows 8.1 boot to VHD partition.  It takes quite a bit of effort though … tldr; Simple steps of how you upgrade Boot into Windows 7 – make a copy of your Windows 8 VHD, to become Windows 8.1 Enable Hyper-V in your Windows 8 (the original boot to VHD partition) Create a new virtual machine, attaching the copy of your Windows 8 VHD Start the virtual machine, upgrade it via the Windows Store to Windows 8.1 Shutdown the virtual machine Boot into Windows 7 – use the bcedit tool to create a new Windows 8.1 boot to VHD option (pointing at the copy) Boot into the new Windows 8.1 option Reactivate Windows 8.1 (it will have become deactivated by running under Hyper-V) Remove the original Windows 8 VHD, and in Windows 7 use bcedit to remove it from the boot menu Things you’ll need A system that can run Hyper-V under Windows 8 (Intel i5, i7 class CPU) Enough space to have your original Windows 8 boot to VHD and a copy at the same time An ISO or DVD for Windows 8 to create a bootable Windows 8 partition Step by step guide Boot to your base o/s, the real one, Windows 7. Make a copy of the Windows 8 VHD file that you use to boot Windows 8 (via boot from VHD) – I copied it from a folder on C: called VHD-Win8 to VHD-Win8.1 on my N: drive. Reboot your system into Windows 8, and enable Hyper-V if not already present (this may require reboot) Use the Hyper-V manager , create a new Hyper-V machine, using half your system memory, and use the option to attach an existing VHD on the main IDE controller – this will be the new copy you made in Step 2. Start the virtual machine, use Connect to view it, and you’ll probably discover it cannot boot as there is no boot record If this is the case, go to Hyper-V manager, edit the Settings for the virtual machine to attach an ISO of a Windows 8 DVD to the second IDE controller. 
Start the virtual machine, use Connect to view it, and it should now attempt a fresh installation of Windows 8.  You should select Advanced Options and choose Repair - this will make VHD bootable When the setup reboots your virtual machine, turn off the virtual machine, and remove the ISO of the Windows 8 DVD from the virtual machine settings. Start virtual machine, use Connect to view it.  You will see the devices to be re-discovered (including your quad CPU becoming single CPU).  Eventually you should see the Windows Login screen. You may notice that your desktop background (Win+D) will have turned black as your Windows installation has become deactivate due to the hardware changes between your real PC and Hyper-V. Fortunately becoming deactivated, does not stop you using the Windows Store, where you can select the update to Windows 8.1. You can now watch the progress joy of the Windows 8 update; downloading, preparing to update, checking compatibility, gathering info, preparing to restart, and finally, confirm restart - remember that you are restarting your virtual machine sitting on the copy of the VHD, not the Windows 8 boot to VHD you are currently using to run Hyper-V (confused yet?) After the reboot you get the real upgrade messages; setting up x%, xx%, (quite slow) After a while, Getting ready Applying PC Settings x%, xx% (really slow) Updating your system (fast) Setting up a few more things x%, (quite slow) Getting ready, again Accept license terms Express settings Confirmed previous password Next, I had to set up a Microsoft account – which is possibly now required, and not optional Using the Microsoft account required a 2 factor authorization, via text message, a 7 digit code for me Finalising settings Blank screen, HI .. We're setting up things for you (similar to original Windows 8 install) 'You can get new apps from the Store', below which is ’Installing your apps’ - I had Windows Media Center which is counts as an app from the Store ‘Taking care of a few things’, below which is ‘Installing your apps’ ‘Taking care of a few things’, below ‘Don't turn off your PC’ ‘Getting your apps ready’, below ‘Don't turn off your PC’ ‘Almost ready’, below ‘Don't turn off your PC’ … finally, we get the Windows 8.1 start menu, and a quick Win+D to check the desktop confirmed all the application icons I expected, pinned items on the taskbar, and one app moaning about a missing drive At this point the upgrade is complete – you can shutdown the virtual machine Reboot from the original Windows 8 and return to Windows 7 to configure booting to the Windows 8.1 copy of the VHD In an administrator command prompt do following use the bcdedit tool (from an MSDN blog about configuring VHD to boot in Windows 7) Type bcedit to list the current boot options, so you can copy the GUID (complete with brackets/braces) for the original Windows 8 boot to VHD Create a new menu option, copy of the Windows 8 option; bcdedit /copy {originalguid} /d "Windows 8.1" Point the new Windows 8.1 option to the copy of the VHD; bcdedit /set {newguid} device vhd=[D:]\Image.vhd Point the new Windows 8.1 option to the copy of the VHD; bcdedit /set {newguid} osdevice vhd=[D:]\Image.vhd Set autodetection of the HAL (may already be set); bcdedit /set {newguid} detecthal on Reboot from Windows 7 and select the new option 'Windows 8.1' on the boot menu, and you’ll have some messages to look at, as your hardware is redetected (as you are back from 1 CPU to 4 CPUs) ‘Getting devices ready, blank then %xx, with occasional blank screen, 
for the graphics driver, (fast-ish) Getting Ready message (fast) You will have to suffer one final reboot, choose 'Windows 8.1' and you can now log in to a lovely Windows 8.1 start screen running on non-virtualized hardware via boot to VHD. After checking everything is running fine, you can now choose to Activate Windows, which for me was a toll-free phone call to the automated system where you type in lots of numbers to be given a whole bunch of new activation codes. Once you’re happy with your new Windows 8.1 boot to VHD, and no longer need the Windows 8 boot to VHD, feel free to delete the old one.  I do believe once you upgrade, you are no longer licensed to use it anyway. There, that was simple, wasn’t it? Looking at the huge list of steps it took to perform this upgrade, you may wonder whether I think this is worth it.  Well, I think it is worth booting to VHD.  It makes backups a snap (go to Windows 7, copy the VHD, and you have backed up the o/s) and helps with disk management – if you want to move the o/s, you can move the VHD and repoint the boot menu to the new location. The downside is that Microsoft has completely neglected to support boot to VHD as an upgradable option.  Quite a poor decision in my opinion, and if you read Twitter and the forums quite a few people agree with that view.  It’s a shame this got missed in the work on creating the upgrade packages for Windows 8.1.
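For reference, the bcdedit steps listed above can be run together from an elevated command prompt in Windows 7. This is only a sketch – the GUID placeholders must be replaced with the values bcdedit actually prints, and [D:]\Image.vhd with the real location of the copied VHD:

    REM list the current entries and note the GUID of the existing Windows 8 entry
    bcdedit /v
    REM clone that entry as a new "Windows 8.1" boot option
    bcdedit /copy {original-guid} /d "Windows 8.1"
    REM point the new entry (the GUID returned by /copy) at the copied VHD
    bcdedit /set {new-guid} device vhd=[D:]\Image.vhd
    bcdedit /set {new-guid} osdevice vhd=[D:]\Image.vhd
    REM let Windows redetect the HAL on first boot
    bcdedit /set {new-guid} detecthal on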

    Read the article

  • Soapi.CS : A fully relational fluent .NET Stack Exchange API client library

    - by Sky Sanders
    Soapi.CS for .Net / Silverlight / Windows Phone 7 / Mono as easy as breathing...: var context = new ApiContext(apiKey).Initialize(false); Question thisPost = context.Official .StackApps .Questions.ById(386) .WithComments(true) .First(); Console.WriteLine(thisPost.Title); thisPost .Owner .Questions .PageSize(5) .Sort(PostSort.Votes) .ToList() .ForEach(q=> { Console.WriteLine("\t" + q.Score + "\t" + q.Title); q.Timeline.ToList().ForEach(t=> Console.WriteLine("\t\t" + t.TimelineType + "\t" + t.Owner.DisplayName)); Console.WriteLine(); }); // if you can think it, you can get it. Output Soapi.CS : A fully relational fluent .NET Stack Exchange API client library 21 Soapi.CS : A fully relational fluent .NET Stack Exchange API client library Revision code poet Revision code poet Votes code poet Votes code poet Revision code poet Revision code poet Revision code poet Votes code poet Votes code poet Votes code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Votes code poet Comment code poet Revision code poet Votes code poet Revision code poet Revision code poet Revision code poet Answer code poet Revision code poet Revision code poet 14 SOAPI-WATCH: A realtime service that notifies subscribers via twitter when the API changes in any way. Votes code poet Revision code poet Votes code poet Comment code poet Comment code poet Comment code poet Votes lfoust Votes code poet Comment code poet Comment code poet Comment code poet Comment code poet Revision code poet Comment lfoust Votes code poet Revision code poet Votes code poet Votes lfoust Votes code poet Revision code poet Comment Dave DeLong Revision code poet Revision code poet Votes code poet Comment lfoust Comment Dave DeLong Comment lfoust Comment lfoust Comment Dave DeLong Revision code poet 11 SOAPI-EXPLORE: Self-updating single page JavaSript API test harness Votes code poet Votes code poet Votes code poet Votes code poet Votes code poet Comment code poet Revision code poet Votes code poet Revision code poet Revision code poet Revision code poet Comment code poet Revision code poet Votes code poet Comment code poet Question code poet Votes code poet 11 Soapi.JS V1.0: fluent JavaScript wrapper for the StackOverflow API Comment George Edison Comment George Edison Comment George Edison Comment George Edison Comment George Edison Comment George Edison Answer George Edison Votes code poet Votes code poet Votes code poet Votes code poet Revision code poet Revision code poet Answer code poet Comment code poet Revision code poet Comment code poet Comment code poet Comment code poet Revision code poet Revision code poet Votes code poet Votes code poet Votes code poet Votes code poet Comment code poet Comment code poet Comment code poet Comment code poet Comment code poet 9 SOAPI-DIFF: Your app broke? Check SOAPI-DIFF to find out what changed in the API Votes code poet Revision code poet Comment Dennis Williamson Answer Dennis Williamson Votes code poet Votes Dennis Williamson Comment code poet Question code poet Votes code poet About A robust, fully relational, easy to use, strongly typed, end-to-end StackOverflow API Client Library. Out of the box, Soapi provides you with a robust client library that abstracts away most all of the messy details of consuming the API and lets you concentrate on implementing your ideas. 
A few features include: A fully relational model of the API data set exposed via a fully 'dot navigable' IEnumerable (LINQ) implementation. Simply tell Soapi what you want and it will get it for you. e.g. "On my first question, from the author of the first comment, get the first page of comments by that person on any post" my.Questions.First().Comments.First().Owner.Comments.ToList(); (yes this is a real expression that returns the data as expressed!) Full coverage of the API, all routes and all parameters with an intuitive syntax. Strongly typed Domain Data Objects for all API data structures. Eager and Lazy Loading of 'stub' objects. Eager\Lazy loading may be disabled. When finer grained control of requests is desired, the core RouteMap objects may be leveraged to request data from any of the API paths using all available parameters as documented on the help pages. A rich Asynchronous implementation. A configurable request cache to reduce unnecessary network traffic and to simplify your usage logic. There is no need to go out of your way to be frugal. You may set a distinct cache duration for any particular route. A configurable request throttle to ensure compliance with the api terms of usage and to simplify your code in that you do not have to worry about and respond to 50X errors. The RequestCache and Throttled Queue are thread-safe, so can make as many requests as you like from as many threads as you like as fast as you like and not worry about abusing the api or having to write reams of management/compensation code. Configurable retry threshold that will, by default, make up to 3 attempts to retrieve a request before failing. Every request made by Soapi is properly formed and directed so most any http error will be the result of a timeout or other network infrastructure. A retry buffer provides a level of fault tolerance that you can rely on. An almost identical javascript library, Soapi.JS, and it's full figured big brother, Soapi.JS2, that will enable you to leverage your server cycles and bandwidth for only those tasks that require it and offload things like status updates to the client's browser. License Licensed GPL Version 2 license. Why is Soapi.CS GPL? Can I get an LGPL license for Soapi.CS? (hint: probably) Platforms .NET 3.5 .NET 4.0 Silverlight 3 Silverlight 4 Windows Phone 7 Mono Download Source code lives @ http://soapics.codeplex.com. Binary releases are forthcoming. codeplex is acting up again. get the source and binaries @ http://bitbucket.org/bitpusher/soapi.cs/downloads The source is C# 3.5. and includes projects and solutions for the following IDEs Visual Studio 2008 Visual Studio 2010 ModoDevelop 2.4 Documentation Full documentation is available at http://soapi.info/help/cs/index.aspx Sample Code / Usage Examples Sample code and usage examples will be added as answers to this question. Full API Coverage all API routes are covered Full Parameter Parity If the API exposes it, Soapi giftwraps it for you. Building a simple app with Soapi.CS - a simple app that gathers all traces of a user in the whole stackiverse. Fluent Configuration - Setting up a Soapi.ApiContext could not be easier Bulk Data Import - A tiny app that quickly loads a SQLite data file with all users in the stackiverse. Paged Results - Soapi.CS transparently handles multi-page operations. Asynchronous Requests - Soapi.CS provides a rich asynchronous model that is especially useful when writing api apps in Silverlight or Windows Phone 7. 
Caching and Throttling - how and why Apps that use Soapi.CS Soapi.FindUser - .net utility for locating a user anywhere in the stackiverse Soapi.Explore - The entire API at your command Soapi.LastSeen - List users by last access time Add your app/site here - I know you are out there ;-) if you are not comfortable editing this post, simply add a comment and I will add it. The CS/SL/WP7/MONO libraries all compile the same code and with the exception of environmental considerations of Silverlight, the code samples are valid for all libraries. You may also find guidance in the test suites. More information on the SOAPI eco-system. Contact This library is currently the effort of me, Sky Sanders (code poet) and can be reached at gmail - sky.sanders Any who are interested in improving this library are welcome. Support Soapi You can help support this project by voting for Soapi's Open Source Ad post For more information about the origins of Soapi.CS and the rest of the Soapi eco-system see What is Soapi and why should I care?

    Read the article

  • Prime Numbers Code Help

    - by andrew
    Hello Everybody, I am supposed to "write a Java program that reads a positive integer n from standard input, then prints out the first n prime numbers." It's divided into 3 parts. 1st: This function will return true or false according to whether m is prime or composite. The array argument P will contain a sufficient number of primes to do the testing. Specifically, at the time isPrime() is called, array P must contain (at least) all primes p in the range 2 ≤ p ≤ √m. For instance, to test m = 53 for primality, one must do successive trial divisions by 2, 3, 5, and 7. We go no further since 11 > √53. Thus a precondition for the function call isPrime(53, P) is that P[0] = 2, P[1] = 3, P[2] = 5, and P[3] = 7. The return value in this case would be true since all these divisions fail. Similarly to test m = 143, one must do trial divisions by 2, 3, 5, 7, and 11 (since 13 > √143). The precondition for the function call isPrime(143, P) is therefore P[0] = 2, P[1] = 3, P[2] = 5, P[3] = 7, and P[4] = 11. The return value in this case would be false since 11 divides 143. Function isPrime() should contain a loop that steps through array P, doing trial divisions. This loop should terminate when either a trial division succeeds, in which case false is returned, or the next prime in P is greater than √m, in which case true is returned. Then there is the "main function" • Check that the user supplied exactly one command line argument which can be interpreted as a positive integer n. If the command line argument is not a single positive integer, your program will print a usage message as specified in the examples below, then exit. • Allocate array Primes[] of length n and initialize Primes[0] = 2. • Enter a loop which will discover subsequent primes and store them as Primes[1], Primes[2], Primes[3], ……, Primes[n-1]. This loop should contain an inner loop which walks through successive integers and tests them for primality by calling function isPrime() with appropriate arguments. • Print the contents of array Primes[] to stdout, 10 to a line separated by single spaces. In other words Primes[0] through Primes[9] will go on line 1, Primes[10] through Primes[19] will go on line 2, and so on. Note that if n is not a multiple of 10, then the last line of output will contain fewer than 10 primes. The last function is called "usage", and I am not sure how to write it! Your program will include a function called Usage() having signature static void Usage() that prints this message to stderr, then exits. Thus your program will contain three functions in all: main(), isPrime(), and Usage(). Each should be preceded by a comment block giving its name, a short description of its operation, and any necessary preconditions (such as those for isPrime()). And here is my code, but I am having a bit of a problem and could you guys help me fix it? If I enter the number "5" it gives me the prime numbers which are "6,7,8,9" which doesn't make much sense.
import java.util.; import java.io.; import java.lang.*; public class PrimeNumber { static boolean isPrime(int m, int[] P){ int squarert = Math.round( (float)Math.sqrt(m) ); int i = 2; boolean ans=false; while ((i<=squarert) & (ans==false)) { int c= P[i]; if (m%c==0) ans= true; else ans= false; i++; } /* if(ans ==true) ans=false; else ans=true; return ans; } ///****main public static void main(String[] args ) { Scanner in= new Scanner(System.in); int input= in.nextInt(); int i, j; int squarert; boolean ans = false; int userNum; int remander = 0; System.out.println("input: " + input); int[] prime = new int[input]; prime[0]= 2; for(i=1; i ans = isPrime(j,prime); j++;} prime[i] = j; } //prnt prime System.out.println("The first " + input + " prime number(s) are: "); for(int r=0; r }//end of main } Thanks for the help
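Parts of the pasted code appear to have been swallowed by the forum formatting (the asterisks in the imports and everything after the '<' comparisons in the for loops), so below is a minimal, self-contained sketch of the approach the assignment describes – trial division against the primes found so far, a Usage() message on bad input, and ten primes per output line. The class name and argument handling are illustrative, not a reconstruction of the original code:

    // Minimal sketch of the described assignment; names are illustrative only
    public class Primes {

        // Usage(): print a usage message to stderr, then exit
        static void Usage() {
            System.err.println("Usage: java Primes <positive integer n>");
            System.exit(1);
        }

        // isPrime(): trial-divide m by the primes already stored in P,
        // stopping once the next prime exceeds sqrt(m)
        static boolean isPrime(int m, int[] P) {
            for (int i = 0; i < P.length && P[i] != 0; i++) {
                int p = P[i];
                if ((long) p * p > m) break;   // no prime <= sqrt(m) divides m
                if (m % p == 0) return false;  // a trial division succeeded
            }
            return true;
        }

        public static void main(String[] args) {
            if (args.length != 1) Usage();
            int n = 0;
            try { n = Integer.parseInt(args[0]); } catch (NumberFormatException e) { Usage(); }
            if (n <= 0) Usage();

            int[] primes = new int[n];
            primes[0] = 2;
            int found = 1;
            for (int candidate = 3; found < n; candidate++) {
                if (isPrime(candidate, primes)) primes[found++] = candidate;
            }

            // print 10 primes per line, separated by single spaces
            for (int i = 0; i < n; i++) {
                System.out.print(primes[i]);
                System.out.print((i % 10 == 9 || i == n - 1) ? "\n" : " ");
            }
        }
    }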

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
    Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed it is hard to find a better definition of these systems.  Allow me to summarize the idea. It is:  Build a compute platform optimized to run your technologies Develop application aware, intelligently caching storage components Take an impressively fast network technology interconnecting it with the compute nodes Tune the application to scale with the nodes to yet unseen performance Reduce the amount of data moving via compression Provide this all in a pre-integrated single product with a single-pane management interface All these ideas have been around in IT for quite some time now. The real Oracle advantage is adding the last one to put these all together. Oracle has built quite a portfolio of Engineered Systems, to run its technologies - and run those like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system.  As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate about some of the most interesting ones from a technological point of view.  I. It is the SPARC SuperCluster T4-4. That is, as compute nodes, it includes SPARC T4-4 servers that we learned to appreciate and respect for their features: The SPARC T4 CPUs: Each CPU has 8 cores, each core runs 8 threads. The SPARC T4-4 servers have 4 sockets. That is, a single compute node can in parallel, simultaneously  execute 256 threads. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword demigod.  While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar with single performance too - a humble 5x better one than their ancestors.  actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between these two on-the-fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!). Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.  A PCI controller right on the chip for customers who need I/O performance.  Built-in, no-cost Virtualization:  Oracle VM for SPARC (the former LDoms or Logical Domains) is not a server-emulation virtualization technology but rather a serverpartitioning one, the hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead.  This enables customers to run a number of Solaris 10 and Solaris 11 VMs separated, independent of each other within a physical server II. For Database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important? They provide DB backend storage for your Oracle Databases to run on the SPARC SuperCluster, that is what they are built and tuned for DB performance.  These storage cells are SQL-aware.  
That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw datablocks from the storage, filters the received data, and throws away most of it where the statement doesn't apply, but provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to prefilter and through that transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other - and that is waiting for I/O.  They don't only pre-filter, but also provide data preprocessing features - e.g. if a DB-node requests an aggregate of data, they can calculate it, and handover only the results, not the whole set. Again, less data to transfer.  They support the magical HCC, (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer.  Of course one can't simply rely on disks for performance, there is Flash Storage included there for caching.  III. The low latency, high-speed backbone network: InfiniBand, that interconnects all the members with: Real High Speed: 40 Gbit/s. Full Duplex, of course. Oh, and a really low latency.  RDMA. Remote Direct Memory Access. This technology allows the DB nodes to do exactly that. Remotely, directly placing SQL commands into the Memory of the storage cells. Dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.  You can also run IP over InfiniBand if you please - that's the way the compute nodes can communicate with each other.  IV. Including a general-purpose storage too: the ZFSSA, which is a unified storage, providing NAS and SAN access too, with the following features:  NFS over RDMA over InfiniBand. Nothing is faster network-filesystem-wise.  All the ZFS features onboard, hybrid storage pools, compression, deduplication, snapshot, replication, NFS and CIFS shares Storageheads in a HA-Cluster configuration providing availability of the data  DTrace Live Analytics in a web-based Administration UI Being a general purpose application data storage for your non-database applications running on the SPARC SuperCluster over whichever protocol they prefer, easily replicating, snapshotting, cloning data for them.  There's a lot of great technology included in Oracle's SPARC SuperCluster, we have talked its interior through. As for external scalability: you can start with a half- of full- rack SPARC SuperCluster, and scale out to several racks - that is, stacking not separate full-rack SPARC SuperClusters, but extending always one large instance of the size of several full-racks. Yes, over InfiniBand network. Add racks as you grow.  What technologies shall run on it? SPARC SuperCluster is a general purpose scaleout consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle Weblogic (end enjoy the SPARC T4's advantages to run Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle on Oracle advantage. But you can run other software environments such as SAP if you please too. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in Virtual Machines, or even Oracle Solaris Zones, monitor and manage those from a central UI. 
Here the key takeaways once again: The SPARC SuperCluster: Is a pre-integrated Engineered System Contains SPARC T4-4 servers with built-in virtualization, cryptography, dynamic threading Contains the Exadata storage cells that intelligently offload the burden of the DB-nodes  Contains a highly available ZFS Storage Appliance, that provides SAN/NAS storage in a unified way Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand Can grow from a single half-rack to several full-rack size Supports the consolidation of hundreds of applications To summarize: All these technologies are great by themselves, but the real value is like in every other Oracle Engineered System: Integration. All these technologies are tuned to perform together. Together they are way more than the sum of all - and a careful and actually very time consuming integration process is necessary to orchestrate all these for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and most importantly time and resource consuming part of the work - testing and evaluating - has been done.  Now go, provide services.   -- charlie  
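As a small illustration of the Solaris Zones consolidation mentioned above, carving out and booting a zone on Solaris 11 looks roughly like the following – the zone name and path are made up for the example, and a real SPARC SuperCluster deployment would normally be driven through its management tooling rather than by hand:

    # zonecfg -z appzone
    zonecfg:appzone> create
    zonecfg:appzone> set zonepath=/zones/appzone
    zonecfg:appzone> exit
    # zoneadm -z appzone install
    # zoneadm -z appzone boot
    # zlogin appzone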

    Read the article

  • HTG Explains: Should You Build Your Own PC?

    - by Chris Hoffman
    There was a time when every geek seemed to build their own PC. While the masses bought eMachines and Compaqs, geeks built their own more powerful and reliable desktop machines for cheaper. But does this still make sense? Building your own PC still offers as much flexibility in component choice as it ever did, but prebuilt computers are available at extremely competitive prices. Building your own PC will no longer save you money in most cases. The Rise of Laptops It’s impossible to look at the decline of geeks building their own PCs without considering the rise of laptops. There was a time when everyone seemed to use desktops — laptops were more expensive and significantly slower in day-to-day tasks. With the diminishing importance of computing power — nearly every modern computer has more than enough power to surf the web and use typical programs like Microsoft Office without any trouble — and the rise of laptop availability at nearly every price point, most people are buying laptops instead of desktops. And, if you’re buying a laptop, you can’t really build your own. You can’t just buy a laptop case and start plugging components into it — even if you could, you would end up with an extremely bulky device. Ultimately, to consider building your own desktop PC, you have to actually want a desktop PC. Most people are better served by laptops. Benefits to PC Building The two main reasons to build your own PC have been component choice and saving money. Building your own PC allows you to choose all the specific components you want rather than have them chosen for you. You get to choose everything, including the PC’s case and cooling system. Want a huge case with room for a fancy water-cooling system? You probably want to build your own PC. In the past, this often allowed you to save money — you could get better deals by buying the components yourself and combining them, avoiding the PC manufacturer markup. You’d often even end up with better components — you could pick up a more powerful CPU that was easier to overclock and choose more reliable components so you wouldn’t have to put up with an unstable eMachine that crashed every day. PCs you build yourself are also likely more upgradable — a prebuilt PC may have a sealed case and be constructed in such a way to discourage you from tampering with the insides, while swapping components in and out is generally easier with a computer you’ve built on your own. If you want to upgrade your CPU or replace your graphics card, it’s a definite benefit. Downsides to Building Your Own PC It’s important to remember there are downsides to building your own PC, too. For one thing, it’s just more work — sure, if you know what you’re doing, building your own PC isn’t that hard. Even for a geek, researching the best components, price-matching, waiting for them all to arrive, and building the PC just takes longer. Warranty is a more pernicious problem. If you buy a prebuilt PC and it starts malfunctioning, you can contact the computer’s manufacturer and have them deal with it. You don’t need to worry about what’s wrong. If you build your own PC and it starts malfunctioning, you have to diagnose the problem yourself. What’s malfunctioning, the motherboard, CPU, RAM, graphics card, or power supply? Each component has a separate warranty through its manufacturer, so you’ll have to determine which component is malfunctioning before you can send it off for replacement. Should You Still Build Your Own PC? 
Let’s say you do want a desktop and are willing to consider building your own PC. First, bear in mind that PC manufacturers are buying in bulk and getting a better deal on each component. They also have to pay much less for a Windows license than the $120 or so it would cost you to to buy your own Windows license. This is all going to wipe out the cost savings you’ll see — with everything all told, you’ll probably spend more money building your own average desktop PC than you would picking one up from Amazon or the local electronics store. If you’re an average PC user that uses your desktop for the typical things, there’s no money to be saved from building your own PC. But maybe you’re looking for something higher end. Perhaps you want a high-end gaming PC with the fastest graphics card and CPU available. Perhaps you want to pick out each individual component and choose the exact components for your gaming rig. In this case, building your own PC may be a good option. As you start to look at more expensive, high-end PCs, you may start to see a price gap — but you may not. Let’s say you wanted to blow thousands of dollars on a gaming PC. If you’re looking at spending this kind of money, it would be worth comparing the cost of individual components versus a prebuilt gaming system. Still, the actual prices may surprise you. For example, if you wanted to upgrade Dell’s $2293 Alienware Aurora to include a second NVIDIA GeForce GTX 780 graphics card, you’d pay an additional $600 on Alienware’s website. The same graphics card costs $650 on Amazon or Newegg, so you’d be spending more money building the system yourself. Why? Dell’s Alienware gets bulk discounts you can’t get — and this is Alienware, which was once regarded as selling ridiculously overpriced gaming PCs to people who wouldn’t build their own. Building your own PC still allows you to get the most freedom when choosing and combining components, but this is only valuable to a small niche of gamers and professional users — most people, even average gamers, would be fine going with a prebuilt system. If you’re an average person or even an average gamer, you’ll likely find that it’s cheaper to purchase a prebuilt PC rather than assemble your own. Even at the very high end, components may be more expensive separately than they are in a prebuilt PC. Enthusiasts who want to choose all the individual components for their dream gaming PC and want maximum flexibility may want to build their own PCs. Even then, building your own PC these days is more about flexibility and component choice than it is about saving money. In summary, you probably shouldn’t build your own PC. If you’re an enthusiast, you may want to — but only a small minority of people would actually benefit from building their own systems. Feel free to compare prices, but you may be surprised which is cheaper. Image Credit: Richard Jones on Flickr, elPadawan on Flickr, Richard Jones on Flickr     

    Read the article

  • Deterministic/Consistent Unique Masking

    - by Dinesh Rajasekharan-Oracle
    One of the key requirements while masking data in large databases or multi database environment is to consistently mask some columns, i.e. for a given input the output should always be the same. At the same time the masked output should not be predictable. Deterministic masking also eliminates the need to spend enormous amount of time spent in identifying data relationships, i.e. parent and child relationships among columns defined in the application tables. In this blog post I will explain different ways of consistently masking the data across databases using Oracle Data Masking and Subsetting The readers of post should have minimal knowledge on Oracle Enterprise Manager 12c, Application Data Modeling, Data Masking concepts. For more information on these concepts, please refer to Oracle Data Masking and Subsetting document Oracle Data Masking and Subsetting 12c provides four methods using which users can consistently yet irreversibly mask their inputs. 1. Substitute 2. SQL Expression 3. Encrypt 4. User Defined Function SUBSTITUTE The substitute masking format replaces the original value with a value from a pre-created database table. As the method uses a hash based algorithm in the back end the mappings are consistent. For example consider DEPARTMENT_ID in EMPLOYEES table is replaced with FAKE_DEPARTMENT_ID from FAKE_TABLE. The substitute masking transformation that all occurrences of DEPARTMENT_ID say ‘101’ will be replaced with ‘502’ provided same substitution table and column is used , i.e. FAKE_TABLE.FAKE_DEPARTMENT_ID. The following screen shot shows the usage of the Substitute masking format with in a masking definition: Note that the uniqueness of the masked value depends on the number of columns being used in the substitution table i.e. if the original table contains 50000 unique values, then for the masked output to be unique and deterministic the substitution column should also contain 50000 unique values without which only consistency is maintained but not uniqueness. SQL EXPRESSION SQL Expression replaces an existing value with the output of a specified SQL Expression. For example while masking an EMPLOYEES table the EMAIL_ID of an employee has to be in the format EMPLOYEE’s [email protected] while FIRST_NAME and LAST_NAME are the actual column names of the EMPLOYEES table then the corresponding SQL Expression will look like %FIRST_NAME%||’.’||%LAST_NAME%||’@COMPANY.COM’. The advantage of this technique is that if you are masking FIRST_NAME and LAST_NAME of the EMPLOYEES table than the corresponding EMAIL ID will be replaced accordingly by the masking scripts. One of the interesting aspect’s of a SQL Expressions is that you can use sub SQL expressions, which means that you can write a nested SQL and use it as SQL Expression to address a complex masking business use cases. SQL Expression can also be used to consistently replace value with hashed value using Oracle’s PL/SQL function ORA_HASH. The following SQL Expression will help in the previous example for replacing the DEPARTMENT_IDs with a hashed number ORA_HASH (%DEPARTMENT_ID%, 1000) The following screen shot shows the usage of encrypt masking format with in the masking definition: ORA_HASH takes three arguments: 1. Expression which can be of any data type except LONG, LOB, User Defined Type [nested table type is allowed]. In the above example I used the Original value as expression. 2. Number of hash buckets which can be number between 0 and 4294967295. The default value is 4294967295. 
You can also correlate the number of hash buckets to a range of numbers. In the example above the bucket value is specified as 1000, so the end result will be a hashed number between 0 and 1000. 3. Seed, which can be any number and decides the consistency, i.e. for a given seed value the output will always be the same. The default seed is 0. In the above SQL Expression a seed is not specified, so it defaults to 0. If you have to use a non-default seed then the function will look like ORA_HASH(%DEPARTMENT_ID%, 1000, 1234). The uniqueness depends on the input and the number of hash buckets used. However, as ORA_HASH uses a 32-bit algorithm, considering the birthday paradox or pigeonhole principle there is a 0.5 probability of collision after 2^32-1 unique values. ENCRYPT The Encrypt masking format uses a blend of the 3DES encryption algorithm, hashing, and a regular expression to produce a deterministic and unique masked output. The format of the masked output corresponds to the specified regular expression. As this technique uses a key [string] to encrypt the data, the same string can be used to decrypt the data. The key also acts as a seed to maintain consistent outputs for a given input. The following screen shot shows the usage of the encrypt masking format within the masking definition: Regular Expressions may look complex to first-time users but you will soon realize that it's a simple language. There are many resources on the internet, Oracle documentation, Oracle Learning Library, and My Oracle Support on writing Regular Expressions; out of all of them the following My Oracle Support document helped me to get started with Regular Expressions: Oracle SQL Support for Regular Expressions[Video](Doc ID 1369668.1) USER DEFINED FUNCTION [UDF] A User Defined Function or UDF provides flexibility for users to code their own masking logic in PL/SQL, which can be called from a masking definition. The standard format of a UDF in Oracle Data Masking and Subsetting is: Function udf_func (rowid varchar2, column_name varchar2, original_value varchar2) returns varchar2; Where • rowid is the row identifier of the column that needs to be masked • column_name is the name of the column that needs to be masked • original_value is the column value that needs to be masked You can achieve deterministic masking by using Oracle's built-in hash functions like ORA_HASH, DBMS_CRYPTO.MD4, DBMS_CRYPTO.MD5, and DBMS_UTILITY.GET_HASH_VALUE. Please refer to the Oracle Database Documentation for more information on the Oracle hash functions. For example the following masking UDF generates deterministic unique hexadecimal values for a given string input (note it hashes the orig_val parameter): CREATE OR REPLACE FUNCTION RD_DUX (rid varchar2, column_name varchar2, orig_val VARCHAR2) RETURN VARCHAR2 DETERMINISTIC PARALLEL_ENABLE IS stext varchar2 (26); no_of_characters number(2); BEGIN no_of_characters:=6; stext:=substr(RAWTOHEX(DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(orig_val),1)),0,no_of_characters); RETURN stext; END; The uniqueness depends on the input, the length of the string, and the number of bits used by the hash algorithm. In the above function the MD4 hash is used [denoted by argument 1 in the DBMS_CRYPTO.HASH function], which is a 128-bit algorithm producing 2^128-1 unique hashed values, however this is limited by the number of characters kept, which is 6, so only 6^6 unique values will be generated. Also do not forget about the birthday paradox/pigeonhole principle mentioned earlier in this post.
Another example is to consistently replace characters or numbers preserving the length and special characters as shown below: CREATE OR REPLACE FUNCTION RD_DUS(rid varchar2,column_name varchar2,orig_val VARCHAR2) RETURN VARCHAR2 DETERMINISTIC PARALLEL_ENABLE IS stext varchar2(26); BEGIN DBMS_RANDOM.SEED(orig_val); stext:=TRANSLATE(orig_val,'ABCDEFGHIJKLMNOPQRSTUVWXYZ',DBMS_RANDOM.STRING('U',26)); stext:=TRANSLATE(stext,'abcdefghijklmnopqrstuvwxyz',DBMS_RANDOM.STRING('L',26)); stext:=TRANSLATE(stext,'0123456789',to_char(DBMS_RANDOM.VALUE(1,9))); stext:=REPLACE(stext,'.','0'); RETURN stext; END; The following screen shot shows the usage of a UDF within a masking definition: To summarize, Oracle Data Masking and Subsetting helps you to consistently mask data across databases using one or all of the methods described in this post. It saves the hassle of identifying the parent-child relationships defined in the application tables. Happy Masking
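To see the deterministic behaviour of ORA_HASH outside a masking definition, a quick sanity check can be run directly in SQL – the EMPLOYEES/DEPARTMENT_ID columns are the ones used in the examples above, and the bucket count and seed are arbitrary:

    -- the same input, bucket count and seed always yield the same masked value
    SELECT department_id,
           ORA_HASH(department_id, 1000, 1234) AS masked_department_id
      FROM employees
     ORDER BY department_id;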

    Read the article

  • Metrics - A little knowledge can be a dangerous thing (or 'Why you're not clever enough to interpret metrics data')

    - by Jason Crease
    At RedGate Software, I work on a .NET obfuscator  called SmartAssembly.  Various features of it use a database to store various things (exception reports, name-mappings, etc.) The user is given the option of using either a SQL-Server database (which requires them to have Microsoft SQL Server), or a Microsoft Access MDB file (which requires nothing). MDB is the default option, but power-users soon switch to using a SQL Server database because it offers better performance and data-sharing. In the fashionable spirit of optimization and metrics, an obvious product-management question is 'Which is the most popular? SQL Server or MDB?' We've collected data about this fact, using our 'Feature-Usage-Reporting' technology (available as part of SmartAssembly) and more recently our 'Application Metrics' technology: Parameter Number of users % of total users Number of sessions Number of usages SQL Server 28 19.0 8115 8115 MDB 114 77.6 1449 1449 (As a disclaimer, please note than SmartAssembly has far more than 132 users . This data is just a selection of one build) So, it would appear that SQL-Server is used by fewer users, but more often. Great. But here's why these numbers are useless to me: Only the original developers understand the data What does a single 'usage' of 'MDB' mean? Does this happen once per run? Once per option change? On clicking the 'Obfuscate Now' button? When running the command-line version or just from the UI version? Each question could skew the data 10-fold either way, and the answers only known by the developer that instrumented the application in the first place. In other words, only the original developer can interpret the data - product-managers cannot interpret the data unaided. Most of the data is from uninterested users About half of people who download and run a free-trial from the internet quit it almost immediately. Only a small fraction use it sufficiently to make informed choices. Since the MDB option is the default one, we don't know how many of those 114 were people CHOOSING to use the MDB, or how many were JUST HAPPENING to use this MDB default for their 20-second trial. This is a problem we see across all our metrics: Are people are using X because it's the default or are they using X because they want to use X? We need to segment the data further - asking what percentage of each percentage meet our criteria for an 'established user' or 'informed user'. You end up spending hours writing sophisticated and dubious SQL queries to segment the data further. Not fun. You can't find out why they used this feature Metrics can answer the when and what, but not the why. Why did people use feature X? If you're anything like me, you often click on random buttons in unfamiliar applications just to explore the feature-set. If we listened uncritically to metrics at RedGate, we would eliminate the most-important and more-complex features which people actually buy the software for, leaving just big buttons on the main page and the About-Box. "Ah, that's interesting!" rather than "Ah, that's actionable!" People do love data. Did you know you eat 1201 chickens in a lifetime? But just 4 cows? Interesting, but useless. Often metrics give you a nice number: '5.8% of users have 3 or more monitors' . But unless the statistic is both SUPRISING and ACTIONABLE, it's useless. Most metrics are collected, reviewed with lots of cooing. and then forgotten. Unless a piece-of-data could change things, it's useless collecting it. 
People get obsessed with significance levels The first things that lots of people do with this data is do a t-test to get a significance level ("Hey! We know with 99.64% confidence that people prefer SQL Server to MDBs!") Believe me: other causes of error/misinterpretation in your data are FAR more significant than your t-test could ever comprehend. Confirmation bias prevents objectivity If the data appears to match our instinct, we feel satisfied and move on. If it doesn't, we suspect the data and dig deeper, plummeting down a rabbit-hole of segmentation and filtering until we give-up and move-on. Data is only useful if it can change our preconceptions. Do you trust this dodgy data more than your own understanding, knowledge and intelligence?  I don't. There's always multiple plausible ways to interpret/action any data Let's say we segment the above data, and get this data: Post-trial users (i.e. those using a paid version after the 14-day free-trial is over): Parameter Number of users % of total users Number of sessions Number of usages SQL Server 13 9.0 1115 1115 MDB 5 4.2 449 449 Trial users: Parameter Number of users % of total users Number of sessions Number of usages SQL Server 15 10.0 7000 7000 MDB 114 77.6 1000 1000 How do you interpret this data? It's one of: Mostly SQL Server users buy our software. People who can't afford SQL Server tend to be unable to afford or unwilling to buy our software. Therefore, ditch MDB-support. Our MDB support is so poor and buggy that our massive MDB user-base doesn't buy it.  Therefore, spend loads of money improving it, and think about ditching SQL-Server support. People 'graduate' naturally from MDB to SQL Server as they use the software more. Things are fine the way they are. We're marketing the tool wrong. The large number of MDB users represent uninformed downloaders. Tell marketing to aggressively target SQL Server users. To choose an interpretation you need to segment again. And again. And again, and again. Opting-out is correlated with feature-usage Metrics tends to be opt-in. This skews the data even further. Between 5% and 30% of people choose to opt-in to metrics (often called 'customer improvement program' or something like that). Casual trial-users who are uninterested in your product or company are less likely to opt-in. This group is probably also likely to be MDB users. How much does this skew your data by? Who knows? It's not all doom and gloom. There are some things metrics can answer well. Environment facts. How many people have 3 monitors? Have Windows 7? Have .NET 4 installed? Have Japanese Windows? Minor optimizations.  Is the text-box big enough for average user-input? Performance data. How long does our app take to start? How many databases does the average user have on their server? As you can see, questions about who-the-user-is rather than what-the-user-does are easier to answer and action. Conclusion Use SmartAssembly. If not for the metrics (called 'Feature-Usage-Reporting'), then at least for the obfuscation/error-reporting. Data raises more questions than it answers. Questions about environment are the easiest to answer.
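As an example of the 'sophisticated and dubious' segmentation queries mentioned above, filtering the raw data down to established users might look something like this – the table and column names are invented for illustration and will not match the real Feature-Usage-Reporting schema:

    -- count, per parameter, only users with 5+ sessions spread over more than a week
    SELECT Parameter, COUNT(*) AS EstablishedUsers
    FROM (
        SELECT Parameter, UserId
        FROM FeatureUsage
        GROUP BY Parameter, UserId
        HAVING COUNT(DISTINCT SessionId) >= 5
           AND DATEDIFF(day, MIN(UsageDate), MAX(UsageDate)) > 7
    ) AS established
    GROUP BY Parameter;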

    Read the article

  • SQL SERVER – Introduction to Wait Stats and Wait Types – Wait Type – Day 1 of 28

    - by pinaldave
    I have been working a lot on Wait Stats and Wait Types recently. Last Year, I requested blog readers to send me their respective server’s wait stats. I appreciate their kind response as I have received  Wait stats from my readers. I took each of the results and carefully analyzed them. I provided necessary feedback to the person who sent me his wait stats and wait types. Based on the feedbacks I got, many of the readers have tuned their server. After a while I got further feedbacks on my recommendations and again, I collected wait stats. I recorded the wait stats and my recommendations and did further research. At some point at time, there were more than 10 different round trips of the recommendations and suggestions. Finally, after six month of working my hands on performance tuning, I have collected some real world wisdom because of this. Now I plan to share my findings with all of you over here. Before anything else, please note that all of these are based on my personal observations and opinions. They may or may not match the theory available at other places. Some of the suggestions may not match your situation. Remember, every server is different and consequently, there is more than one solution to a particular problem. However, this series is written with kept wait stats in mind. While I was working on various performance tuning consultations, I did many more things than just tuning wait stats. Today we will discuss how to capture the wait stats. I use the script diagnostic script created by my friend and SQL Server Expert Glenn Berry to collect wait stats. Here is the script to collect the wait stats: -- Isolate top waits for server instance since last restart or statistics clear WITH Waits AS (SELECT wait_type, wait_time_ms / 1000. AS wait_time_s, 100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct, ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn FROM sys.dm_os_wait_stats WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE','SLEEP_TASK' ,'SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR', 'LOGMGR_QUEUE','CHECKPOINT_QUEUE' ,'REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH','BROKER_TASK_STOP','CLR_MANUAL_EVENT' ,'CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE', 'FT_IFTS_SCHEDULER_IDLE_WAIT' ,'XE_DISPATCHER_WAIT', 'XE_DISPATCHER_JOIN', 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')) SELECT W1.wait_type, CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s, CAST(W1.pct AS DECIMAL(12, 2)) AS pct, CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct FROM Waits AS W1 INNER JOIN Waits AS W2 ON W2.rn <= W1.rn GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct HAVING SUM(W2.pct) - W1.pct < 99 OPTION (RECOMPILE); -- percentage threshold GO This script uses Dynamic Management View sys.dm_os_wait_stats to collect the wait stats. It omits the system-related wait stats which are not useful to diagnose performance-related bottleneck. Additionally, not OPTION (RECOMPILE) at the end of the DMV will ensure that every time the query runs, it retrieves new data and not the cached data. This dynamic management view collects all the information since the time when the SQL Server services have been restarted. You can also manually clear the wait stats using the following command: DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR); Once the wait stats are collected, we can start analysis them and try to see what is causing any particular wait stats to achieve higher percentages than the others. Many waits stats are related to one another. 
When the CPU pressure is high, all the CPU-related wait stats show up on top. But when that is fixed, all the wait stats related to the CPU start showing reasonable percentages. It is difficult to have a sure solution, but there are good indications and good suggestions on how to solve this. I will keep this blog post updated and will post more details about wait stats and how I reduce them. The reference to Books Online is over here. Of course, I have selected February to run this Wait Stats series. I am already cheating by having the smallest month to run this series. :) Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
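Since sys.dm_os_wait_stats is cumulative from the last restart (or manual clear), it can also be useful to keep dated snapshots so that two points in time can be compared. A minimal way to do that, assuming a scratch history table of your own naming, is:

    -- first run creates the history table; later runs append another snapshot
    IF OBJECT_ID('dbo.WaitStatsHistory') IS NULL
        SELECT GETDATE() AS capture_time, wait_type, waiting_tasks_count,
               wait_time_ms, signal_wait_time_ms
        INTO dbo.WaitStatsHistory
        FROM sys.dm_os_wait_stats;
    ELSE
        INSERT INTO dbo.WaitStatsHistory
        SELECT GETDATE(), wait_type, waiting_tasks_count,
               wait_time_ms, signal_wait_time_ms
        FROM sys.dm_os_wait_stats;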

    Read the article

  • Airtel 3G in Chennai – User experience, Price & What’s the catch?

    - by Boonei
    Finally ! Here we are with Airtel 3G in India. Now Airtel customers can have a go at real 3G speed. Sources suggest that the delay in rolling out 3G was due to hardware problems. It was provided by Ericsson. Now first things first. Let me get to the point. I had subscribed to Airtel’s 3G pack Rs.100 for 100 MB. This is to check out how good it is, did not want to pay a hefty sum at the first instance. It was pretty smooth upgrading.. After the upgrade I did see the much awaited 3G signal bar on my phone. Ok! now its testing time. User experience First I did a bit of browsing, boy ! it was pretty quick, web pages loaded in a jiffy. I really did not time it because it loaded really quick. I loaded a YouTube Video, no buffering, watched the 4 min Video with no problems, it took around 6 MB of data usage Made a Skype call for about 6 min, voice clarity was really good and data usage was around 4-5 MB Tried Google Maps everything was so fast could not see the difference between computer and my phone, used it for about couple of minutes. Did listen to an Online Radio for about 5 min took about 8 MB of data usage Guess there is no need to say about Facebook or Twitter. It was good obviously. Video Call – Not yet tested Price – Do you get what you pay for ? 3G speed is fantastic, you have to really feel it to enjoy it. But currently in Airtel, 3G is available only in 3 places wiz. Bengaluru, Chennai, Coimbatore. ok ! Its not even there in all the metros? hmmm. 3G signal was not available in all parts of Chennai, often in many places it changed to 2G. Let alone all the places, even in my house when walking from one room to another sometimes its shows 2G. When it chaged from 3G to 2G there was lag in the application when it was loading data which often made me wonder if the application hanged. Currently prices not low. 2G plans in Airtel is Rs.98 for 2GB and for Rs.100 its only 100MB in 3G. Now you decide please, it’s quite a debate. The Catch – There is always a catch right ? If you have bought 3G connection and in places where 3G is not available (2G) and use any application that requires data connections (youtube, browse, chat etc) its changed with 3G!. Meaning if you have bought 100MB of 3G by paying Rs.100 like I did, suppose you used the connection for about 10MB using 2G, then it would reduce from the 100MB to 90 MB….That’s bad ! You cannot have 2G and 3G plans activated at the same point of time in your phone. You will pay 3G price for using 2G. This article titled,Airtel 3G in Chennai – User experience, Price & What’s the catch?, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Home Energy Management & Automation with Windows Phone 7

    A number of people at Clarity are personally interested in home energy conservation and home automation. We feel that a mobile device is a great fit for bringing this idea to fruition. While this project is merely a concept and not directly associated with Microsofts Hohm web service, it provides a great model for communicating the concept. I wanted to take the idea a step further and combine saving energy in your home with the ability to track water usage and control your home devices. I designed an application that focuses on total home control and not just energy usage. Application Overview By monitoring home consumption in real time and with yearly projections users can pinpoint vampire devices, times of high or low consumption, and wasteful patterns of energy use. Energy usage meters indicate total current consumption as well as individual device consumption. Users can then use the information to take action, make adjustments, and change their consumption behaviors. The app can be used to automate certain systems like lighting, temperature, or alarms. Other features can be turned on an off at the touch of a toggle switch on your phone, away from home. Forget to turn off the TV or shut the garage door? No problem, you can do it from your phone. Through settings you can enable and disable features of the phone that apply to your home making it a completely customized and convenient experience. To be clear, this equates to more security, big environmental impact, and even bigger savings.   Design and User Interface  Since this panorama application is designed for win phone 7 devices, it complies with the UI Design and Interaction Guide for wp7. I developed the frame and page hierarchy from existing examples. The interface takes advantage of the interactive nature of touch screens with slider controls, pivot control views, and toggle switches to turn on and off devices (not shown in mockup). I followed recommendations for text based elements and adapted the tile notifications to display the most recent user activity. For example, the mockup indicates upon launching the app that the last thing you did was program the thermostat. This model is great for quick launching common user actions. One last design feature to point out is the technical reasons for supplying both light and dark themes for the app. Since this application is targeting energy consumption it only makes sense to consider the effect of the apps background color or image on the phones energy use. When displaying darker colors like black the OLED display may use less power, extending battery life. Other Considerations For now I left out options of wind and solar powered energy options because they are not available to everyone. Renewable energy sources and new technologies associated with them are definitely ideas to keep in mind for a next iteration. Another idea to explore for such an application would be to include a savings model similar to mint.com. In addition to general energy-saving recommendations the application could recommend customized ways to save based on your current utility providers and available options in your area. If your television or refrigerator is guilty of sucking a lot of energy then you may see recommendations for energy star products that could save you even more money! Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Monitoring the Application alongside SQL Server

    - by Tony Davis
    Sometimes, on Simple-Talk, it takes a while to spot strange and unexpected patterns of user activity, or small bugs. For example, one morning we spotted that an article’s comment count had leapt to 1485, but that only four were displayed. With some rooting around in Google Analytics, and the endlessly annoying Community Server admin-interface, we were able to work out that a few days previously the article had been subject to a spam attack and that the comment count was for some reason including both accepted and unaccepted comments (which in turn uncovered a bug in the SQL). This sort of incident made us a lot keener on monitoring Simple-Talk website usage more effectively.

    However, the metrics we wanted are troublesome, because they are far too specific for Google Analytics to measure, and the SQL Server backend doesn’t keep sufficient information to enable us to plot trends. The latter could provide, for example, the total number of comments made on, or votes cast for, articles, over all time, but not the number that occur by hour over a set time. We lacked a baseline, in other words. We couldn’t alter the database, as it is a bought-in package. We had neither the resources nor the inclination to build in dedicated application monitoring. Possibly, we could investigate a third-party tool to do the job; but then it occurred to us that we were already using a monitoring tool (SQL Monitor) to keep an eye on the database. It stored data, made graphs and sent alerts. Could we get it to monitor some aspects of the application as well?

    Of course, SQL Monitor’s single purpose is to check and monitor SQL Server, over time, rather than to monitor applications that use SQL Server. However, how different is the business of gathering and plotting SQL Server Wait Stats from gathering and plotting various aspects of user activity on the site? Not a lot, it turns out. The latest version allows us to write our own custom monitoring scripts, meaning that we could now monitor any metric in the application that returns an integer. It took little time to write a simple SQL query that collects basic metrics of the total number of subscribers, votes cast, comments made, or views of articles, over time (a sketch of the sort of query involved follows at the end of this post). The SQL Monitor database polls Simple-Talk every second or so in order to get the latest totals, and can then store and plot this information, or even correlate SQL Server usage to application usage.

    You can see the live data by visiting monitor.red-gate.com. Click the "Analysis" tab, and select one of the "Simple-talk:" entries in the "Show" box and an appropriate date range (e.g. last 30 days). It’s nascent, and we’re still working on it, but it’s already given us more confidence that we’ll quickly spot trends, bugs, or bursts of ‘abnormal’ activity. If there is a sudden rise in comments, we get an alert, and if it’s due to a spam attack, we can moderate or ban the perpetrator very quickly. We’ve often argued that a tool should perform a single job well rather than turn into a Swiss-army knife, but ironically we’ve rather appreciated being able to make best use of what’s there anyway for a slightly different purpose. Is this a good or common practice? What do you think? Cheers, Tony.
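    For reference, the kind of single-value query SQL Monitor's custom metrics expect is nothing more elaborate than a count; the table and column names below are placeholders, not the real Community Server schema:

    -- Illustrative only: a custom metric must return a single integer,
    -- here a running total of approved comments (schema names are assumed).
    SELECT COUNT(*) AS TotalComments
    FROM dbo.Comments
    WHERE IsApproved = 1;

    SQL Monitor then stores and plots each sampled value just as it does for its built-in metrics.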

    Read the article

  • How do I install DB2 ODBC?

    - by Justin
    I have been trying, with no success, to install an IBM DB2 ODBC driver so that my PHP server can connect to a database. I've tried installing db2_connect and got all sorts of problems; I tried installing iSeries Access for Linux and the RPM did not install right, nor did using alien yield any useful results. I've also tried the DB2 Runtime v8.1, with no success. If I attempt to run the rpm it claims I need dependencies that I can't find in apt-get. Yum is also not very helpful, as it appears I don't have any repositories installed or lists... Running the simple RPM gives me this result in the terminal:

    # rpm -ivh iSeriesAccess-7.1.0-1.0.x86_64.rpm
    rpm: RPM should not be used directly install RPM packages, use Alien instead!
    rpm: However assuming you know what you are doing...
    error: Failed dependencies:
        /bin/ln is needed by iSeriesAccess-7.1.0-1.0.x86_64
        /sbin/ldconfig is needed by iSeriesAccess-7.1.0-1.0.x86_64
        /bin/rm is needed by iSeriesAccess-7.1.0-1.0.x86_64
        /bin/sh is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libc.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libc.so.6(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libc.so.6(GLIBC_2.3)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libdl.so.2()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libdl.so.2(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libgcc_s.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libm.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libm.so.6(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libodbcinst.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libodbc.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libpthread.so.0()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libpthread.so.0(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libpthread.so.0(GLIBC_2.3.2)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        librt.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        librt.so.1(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libstdc++.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libstdc++.so.6(CXXABI_1.3)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64
        libstdc++.so.6(GLIBCXX_3.4)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64

    Using alien and running dpkg gives me this headache:

    $ alien iSeriesAccess-7.1.0-1.0.x86_64.rpm --scripts
    # dpkg -i iseriesaccess_7.1.0-2_amd64.deb
    (Reading database ... 127664 files and directories currently installed.)
    Preparing to replace iseriesaccess 7.1.0-2 (using iseriesaccess_7.1.0-2_amd64.deb) ...
    Unpacking replacement iseriesaccess ...
    post uninstall processing for iSeriesAccess 1.0...upgrade
    /var/lib/dpkg/info/iseriesaccess.postrm: line 8: [: upgrade: integer expression expected
    Setting up iseriesaccess (7.1.0-2) ...
    post install processing for iSeriesAccess 1.0...configure
    iSeries Access ODBC Driver has been deleted (if it existed at all) because its usage count became zero
    odbcinst: Driver installed. Usage count increased to 1. Target directory is /etc
    odbcinst: Driver installed. Usage count increased to 3. Target directory is /etc
    Processing triggers for libc-bin ...
    ldconfig deferred processing now taking place

    So it seems the files installed right; my ODBC driver shows up, but db2cli.ini is nowhere to be found. So, several questions. Is there a better alternative to connect PHP to DB2, say an Ubuntu package I can just install? Can someone direct me to the steps that make my Ubuntu server work well with the RPM so I can build my DB2 instance? Also, remember I'm connecting to an iSeries remotely. I'm not using the DB2 Express-C thing, even if I did try it to get the DB2 PHP functions to work. And I don't have Zend, but I think I have every other package from the Ubuntu repositories. Help, thank you!
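    As a hedged aside, once a DB2 ODBC driver and DSN do register correctly, a quick sanity check, independent of PHP (for example run through unixODBC's isql tool), is a trivial query against DB2's one-row dummy table:

    -- Minimal connectivity test; SYSIBM.SYSDUMMY1 is DB2's standard one-row dummy table
    -- and exists on DB2 for i (iSeries) as well as DB2 LUW.
    SELECT 1 AS connected
    FROM SYSIBM.SYSDUMMY1;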

    Read the article

  • The new Auto Scaling Service in Windows Azure

    - by shiju
    One of the key features of the Cloud is on-demand scalability, which lets cloud application developers scale the number of compute resources hosted on the Cloud up or down. Auto Scaling provides the capability to dynamically scale your compute resources up and down based on user-defined policies, Key Performance Indicators (KPIs), health status checks, and schedules, without any manual intervention. Auto Scaling is an important feature to consider when designing and architecting cloud based solutions: it can unleash the real power of the Cloud for apps that need truly on-demand scalability, and it can also guard the organizational budget for cloud based application deployment. In the past, you had to leverage the Microsoft Enterprise Library Autoscaling Application Block (WASABi) or a service like MetricsHub to implement automatic scaling for your cloud apps hosted on Windows Azure. WASABi required you to host the auto scaling block in a Windows Azure Worker Role to apply auto scaling behaviour to your Windows Azure apps. The newly announced Auto Scaling service in Windows Azure lets you add automatic scaling capability to your Windows Azure Compute Services such as Cloud Services, Web Sites and Virtual Machines. Unlike WASABi hosted in a Worker Role, you don’t need to host any monitoring service to use the new Auto Scaling service; it is available to individual Windows Azure Compute Services as part of their Scale settings.

    Configure Auto Scaling for a Windows Azure Cloud Service

    Currently the Auto Scaling service supports Cloud Services, Web Sites and Virtual Machines. In this demo, I will be using a Cloud Services app with a Web Role and a Worker Role. To enable Auto Scaling, select your Windows Azure app in the Windows Azure management portal and choose the “SCALE” tab. The Scale tab shows all the information regarding Auto Scaling. The image below shows that we have currently disabled the AutoScale service. To enable Auto Scaling, you need to choose either CPU or QUEUE; the QUEUE option is not available for Web Sites. The image below demonstrates how to configure Auto Scaling for a Web Role based on CPU utilization. We have configured the web role app to run with 1 to 5 virtual machine instances based on CPU utilization, with a target range of 50 to 80%. If aggregate utilization rises above 80%, it will scale up instances, and when utilization falls below 50%, it will scale down instances. The image below demonstrates how to configure Auto Scaling for a Worker Role app based on the messages added to a Windows Azure storage Queue. We configured the worker role app to run with 1 to 3 virtual machine instances based on the messages added to the Windows Azure storage Queue, specifying a target of 2,000 messages per machine. The image below shows the summary of Auto Scaling for the Cloud Service after configuring the service.

    Summary

    Auto Scaling is an extremely important behaviour of Cloud applications, providing on-demand scalability without any manual intervention. Windows Azure provides great support for enabling Auto Scaling for apps deployed on the Windows Azure cloud platform. The new Auto Scaling service in Windows Azure lets you add automatic scaling capability to your Windows Azure Compute Services such as Cloud Services, Web Sites and Virtual Machines. With the new Auto Scaling service, you don’t have to host any monitoring service as you did with the WASABi block; the Auto Scaling service is an excellent alternative to manually hosting the WASABi block in a Worker Role app.

    Read the article

  • SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at another IO-related wait type.

    From Book On-Line: Occurs when a task is waiting for I/Os to finish.

    ASYNC_IO_COMPLETION Explanation: Tasks are waiting for I/O to finish. If, by any means, the application connected to SQL Server is processing the data very slowly, this type of wait can occur. Several long-running database operations like BACKUP, CREATE DATABASE, ALTER DATABASE and others can also create this wait type.

    Reducing ASYNC_IO_COMPLETION wait: When it is an issue related to IO, one should check the following things associated with the IO subsystem:
    Look at the programming and see if there is any application code which processes the data slowly (like an inefficient loop, etc.). It should be re-written to avoid this wait type.
    Proper placement of the files is very important. We should check the file system for proper placement of the files – LDF and MDF on separate drives, TempDB on another separate drive, hot spot tables on a separate filegroup (and on a separate disk), etc.
    Check the file statistics and see if there are high IO read and IO write stalls (see SQL SERVER – Get File Statistics Using fn_virtualfilestats; a sketch of such a query follows below).
    Check the event log and error log for any errors or warnings related to IO.
    If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly and the SAN administrator did not accept it. After some investigation, he agreed to change the HBA Queue Depth on the development (test environment) setup. As soon as we changed the HBA Queue Depth to a much higher value, there was a sudden big improvement in the performance.
    It is very likely that there are no proper indexes on the system and yet there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate cover index instead of the clustered index, it can effectively reduce lots of CPU, Memory and IO (since a cover index has fewer columns than the clustered table; it depends upon the situation). You can refer to the following two articles I wrote that talk about how to optimize indexes: Create Missing Indexes and Drop Unused Indexes.

    Checking Memory Related Perfmon Counters
    SQLServer: Memory Manager\Memory Grants Pending (Consistently higher than 0-2 is not good)
    SQLServer: Memory Manager\Memory Grants Outstanding (Consistently higher value; benchmark)
    SQLServer: Buffer Manager\Buffer Cache Hit Ratio (Higher is better; greater than 90% for a smoothly running system)
    SQLServer: Buffer Manager\Page Life Expectancy (Consistently lower than 300 seconds is not good)
    Memory: Available Mbytes (Information only)
    Memory: Page Faults/sec (Benchmark only)
    Memory: Pages/sec (Benchmark only)

    Checking Disk Related Perfmon Counters
    Average Disk sec/Read (Consistently higher than 4-8 milliseconds is not good)
    Average Disk sec/Write (Consistently higher than 4-8 milliseconds is not good)
    Average Disk Read/Write Queue Length (Consistently higher than benchmark is not good)

    Read all the posts in the Wait Types and Queue series.
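    A minimal sketch of the file-statistics check mentioned above, using the sys.dm_io_virtual_file_stats DMV (the newer equivalent of fn_virtualfilestats) to list the database files with the highest read and write stalls; this is an illustration, not a script from the original post:

    -- Illustrative only: rank database files by total IO stall time.
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.io_stall_read_ms,
           vfs.num_of_writes,
           vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id = vfs.file_id
    ORDER BY (vfs.io_stall_read_ms + vfs.io_stall_write_ms) DESC;

    Files whose stalls dominate this list are the first candidates for the file-placement and SAN checks described above.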
Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it to a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

    - by pinaldave
    For any good system three things are vital: CPU, Memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and Memory frequently. However, the disk is often upgraded to improve space, speed or throughput. Today we will look at an IO-related wait type.

    From Book On-Line: Occurs while waiting for I/O operations to complete. This wait type generally represents non-data page I/Os. Data page I/O completion waits appear as PAGEIOLATCH_* waits.

    IO_COMPLETION Explanation: Tasks are waiting for I/O to finish. This is a good indication that the IO subsystem needs to be looked at.

    Reducing IO_COMPLETION wait: When it is an issue concerning IO, one should look at the following things related to the IO subsystem:
    Proper placement of the files is very important. We should check the file system for proper placement of files – LDF and MDF on separate drives, TempDB on another separate drive, hot spot tables on a separate filegroup (and on a separate disk), etc.
    Check the file statistics and see if there are high IO read and IO write stalls (see SQL SERVER – Get File Statistics Using fn_virtualfilestats).
    Check the event log and error log for any errors or warnings related to IO.
    If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA Queue Depth. In one of my recent projects, the SAN was performing really badly and the SAN administrator did not accept it. After some investigation, he agreed to change the HBA Queue Depth on the development (test environment) setup, and as soon as we changed the HBA Queue Depth to a much higher value, there was a sudden big improvement in the performance.
    It is very possible that there are no proper indexes in the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate cover index instead of the clustered index, it can effectively reduce lots of CPU, Memory and IO (since a cover index has fewer columns than the clustered table; it depends upon the situation). You can refer to the two articles I wrote about how to optimize indexes: Create Missing Indexes and Drop Unused Indexes (a sketch of the missing-index check follows below).

    Checking Memory Related Perfmon Counters
    SQLServer: Memory Manager\Memory Grants Pending (Consistently higher than 0-2 is not good)
    SQLServer: Memory Manager\Memory Grants Outstanding (Consistently higher value; benchmark)
    SQLServer: Buffer Manager\Buffer Cache Hit Ratio (Higher is better; greater than 90% for a smoothly running system)
    SQLServer: Buffer Manager\Page Life Expectancy (Consistently lower than 300 seconds is not good)
    Memory: Available Mbytes (Information only)
    Memory: Page Faults/sec (Benchmark only)
    Memory: Pages/sec (Benchmark only)

    Checking Disk Related Perfmon Counters
    Average Disk sec/Read (Consistently higher than 4-8 milliseconds is not good)
    Average Disk sec/Write (Consistently higher than 4-8 milliseconds is not good)
    Average Disk Read/Write Queue Length (Consistently higher than benchmark is not good)

    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it to a production server.
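    A minimal sketch of the missing-index check referenced above, reading the suggestions SQL Server records in its missing-index DMVs; treat the output as hints to evaluate against your workload rather than indexes to create blindly:

    -- Illustrative only: list the most frequently wanted missing indexes.
    SELECT TOP (10)
           mid.statement AS table_name,
           mid.equality_columns,
           mid.inequality_columns,
           mid.included_columns,
           migs.user_seeks,
           migs.avg_user_impact
    FROM sys.dm_db_missing_index_details AS mid
    JOIN sys.dm_db_missing_index_groups AS mig
      ON mig.index_handle = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs
      ON migs.group_handle = mig.index_group_handle
    ORDER BY migs.avg_user_impact * migs.user_seeks DESC;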
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Types, SQL White Papers, T SQL, Technology

    Read the article

< Previous Page | 115 116 117 118 119 120 121 122 123 124 125 126  | Next Page >