Search Results

Search found 10759 results on 431 pages for 'auto detect'.


  • smartctl -t long isn't finishing

    - by xenoterracide
    I've been running smartctl -t long on a drive for about two days now and it seems to be stalled at 10%. The short and conveyance tests both passed. I have to send back one of the two drives I purchased because I found bad blocks with badblocks (none on this drive, and badblocks has already made more than one pass). I'm just wondering if I should be concerned about this. smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Device Model: WDC WD10EARS-00Y5B1 Serial Number: WD-WMAV51582123 Firmware Version: 80.00A80 User Capacity: 1,000,204,886,016 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 8 ATA Standard is: Exact ATA specification draft version not indicated Local Time is: Mon May 10 22:19:52 2010 EDT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 241) Self-test routine in progress... 10% of test remaining. Total time to complete Offline data collection: (20100) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 231) minutes. Conveyance self-test routine recommended polling time: ( 5) minutes. SCT capabilities: (0x3031) SCT Status supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 2 3 Spin_Up_Time 0x0027 131 131 021 Pre-fail Always - 6408 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 12 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0 7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 148 10 Spin_Retry_Count 0x0032 100 253 000 Old_age Always - 0 11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 10 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 7 193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 174 194 Temperature_Celsius 0x0022 106 102 000 Old_age Always - 41 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0 200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Conveyance offline Completed without error 00% 99 - # 2 Extended offline Interrupted (host reset) 10% 30 - # 3 Short offline Completed without error 00% 0 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay.
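
    A possible next step (a sketch, not part of the original post; it assumes the drive is /dev/sda, so adjust the device node): abort the stalled test, check the self-test log, then start a fresh extended test and poll its progress.

        # abort the self-test that appears to be stuck
        smartctl -X /dev/sda
        # review the results of past self-tests
        smartctl -l selftest /dev/sda
        # start a new extended (long) self-test
        smartctl -t long /dev/sda
        # poll the "Self-test execution status" section for the remaining percentage
        smartctl -c /dev/sda | grep -A 1 "Self-test execution status"

    If a fresh run stalls at the same point, that may be worth treating as suspicious even though the overall SMART assessment is PASSED.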

    Read the article

  • dns queries not using nscd for caching

    - by xenoterracide
    I'm trying to use nscd (Name Service Cache Daemon) to cache DNS locally so I can stop using BIND to do it. I've gotten it started and ntpd seems to attempt to use it, but everything else for hosts seems to ignore it. For example, if I dig apache.org three times, none of the queries hit the cache. I'm viewing the cache stats with nscd -g to determine whether it's being used. I've also turned the debug log level up to see whether anything hits it, and the queries don't even reach nscd. nsswitch.conf # Begin /etc/nsswitch.conf passwd: files group: files shadow: files publickey: files hosts: cache files dns networks: files protocols: files services: files ethers: files rpc: files netgroup: files # End /etc/nsswitch.conf nscd.conf # # /etc/nscd.conf # # An example Name Service Cache config file. This file is needed by nscd. # # Legal entries are: # # logfile <file> # debug-level <level> # threads <initial #threads to use> # max-threads <maximum #threads to use> # server-user <user to run server as instead of root> # server-user is ignored if nscd is started with -S parameters # stat-user <user who is allowed to request statistics> # reload-count unlimited|<number> # paranoia <yes|no> # restart-interval <time in seconds> # # enable-cache <service> <yes|no> # positive-time-to-live <service> <time in seconds> # negative-time-to-live <service> <time in seconds> # suggested-size <service> <prime number> # check-files <service> <yes|no> # persistent <service> <yes|no> # shared <service> <yes|no> # max-db-size <service> <number bytes> # auto-propagate <service> <yes|no> # # Currently supported cache names (services): passwd, group, hosts, services # logfile /var/log/nscd.log threads 4 max-threads 32 server-user nobody # stat-user somebody debug-level 9 # reload-count 5 paranoia no # restart-interval 3600 enable-cache passwd yes positive-time-to-live passwd 600 negative-time-to-live passwd 20 suggested-size passwd 211 check-files passwd yes persistent passwd yes shared passwd yes max-db-size passwd 33554432 auto-propagate passwd yes enable-cache group yes positive-time-to-live group 3600 negative-time-to-live group 60 suggested-size group 211 check-files group yes persistent group yes shared group yes max-db-size group 33554432 auto-propagate group yes enable-cache hosts yes positive-time-to-live hosts 3600 negative-time-to-live hosts 20 suggested-size hosts 211 check-files hosts yes persistent hosts yes shared hosts yes max-db-size hosts 33554432 enable-cache services yes positive-time-to-live services 28800 negative-time-to-live services 20 suggested-size services 211 check-files services yes persistent services yes shared services yes max-db-size services 33554432 resolv.conf # Generated by dhcpcd from eth0 nameserver 127.0.0.1 domain westell.com nameserver 192.168.1.1 nameserver 208.67.222.222 nameserver 208.67.220.220 As a side note, I'm using Arch Linux.
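
    One detail worth checking (an observation added here, not part of the original post): dig talks to the name servers in resolv.conf directly and bypasses the glibc resolver entirely, so it will never show up in nscd's statistics no matter what nsswitch.conf says. To exercise the hosts cache, go through the NSS layer instead:

        # this lookup goes through nsswitch.conf, so nscd can serve it
        getent hosts apache.org
        # repeat it, then see whether the hosts cache counters moved
        nscd -g | grep -A 10 "hosts cache"

    Applications that call getaddrinfo()/gethostbyname() behave like getent; tools such as dig, host and nslookup do not.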

    Read the article

  • Can't re-mount existing RAID10 on Ubuntu

    - by Zoran
    I saw similar questions, but didn't find a solution to my problem. After a power cut, one of my RAID10 arrays (it had 4 disks) appears to be malfunctioning. I can make the array active, but cannot mount it. Always the same error: mount: you must specify the filesystem type So, here is what I get when I type mdadm --detail /dev/md0 /dev/md0: Version : 00.90.03 Creation Time : Tue Sep 1 11:00:40 2009 Raid Level : raid10 Array Size : 1465148928 (1397.27 GiB 1500.31 GB) Used Dev Size : 732574464 (698.64 GiB 750.16 GB) Raid Devices : 4 Total Devices : 3 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Mon Jun 11 09:54:27 2012 State : clean, degraded Active Devices : 3 Working Devices : 3 Failed Devices : 0 Spare Devices : 0 Layout : near=2, far=1 Chunk Size : 64K UUID : 1a02e789:c34377a1:2e29483d:f114274d Events : 0.166 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 0 0 1 removed 2 8 48 2 active sync /dev/sdd 3 8 64 3 active sync /dev/sde In /etc/mdadm/mdadm.conf I have the defaults: by default, scan all partitions (/proc/partitions) for MD superblocks. alternatively, specify devices to scan, using wildcards if desired. DEVICE partitions auto-create devices with Debian standard permissions CREATE owner=root group=disk mode=0660 auto=yes automatically tag new arrays as belonging to the local system HOMEHOST <system> instruct the monitoring daemon where to send mail alerts MAILADDR root definitions of existing MD arrays ARRAY /dev/md0 level=raid10 num-devices=4 UUID=1a02e789:c34377a1:2e29483d:f114274d ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9b592be7:c6a2052f:2e29483d:f114274d This file was auto-generated... So, my question is: how can I mount the md0 array (md1 mounted without problems) in order to preserve the existing data? 
One more thing: the fdisk -l command gives the following result: Disk /dev/sdb: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x660a6799 Device Boot Start End Blocks Id System /dev/sdb1 * 1 88217 708603021 83 Linux /dev/sdb2 88218 91201 23968980 5 Extended /dev/sdb5 88218 91201 23968948+ 82 Linux swap / Solaris Disk /dev/sdc: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x0008f8ae Device Boot Start End Blocks Id System /dev/sdc1 1 88217 708603021 83 Linux /dev/sdc2 88218 91201 23968980 5 Extended /dev/sdc5 88218 91201 23968948+ 82 Linux swap / Solaris Disk /dev/sdd: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x4be1abdb Device Boot Start End Blocks Id System Disk /dev/sde: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xa4d5632e Device Boot Start End Blocks Id System Disk /dev/sdf: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xdacb141c Device Boot Start End Blocks Id System Disk /dev/sdg: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xdacb141c Device Boot Start End Blocks Id System Disk /dev/md1: 750.1 GB, 750156251136 bytes 2 heads, 4 sectors/track, 183143616 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk identifier: 0xdacb141c Device Boot Start End Blocks Id System Warning: ignoring extra data in partition table 5 Warning: ignoring extra data in partition table 5 Warning: ignoring extra data in partition table 5 Warning: invalid flag 0x7b6e of partition table 5 will be corrected by w(rite) Disk /dev/md0: 1500.3 GB, 1500312502272 bytes 255 heads, 63 sectors/track, 182402 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x660a6799 Device Boot Start End Blocks Id System /dev/md0p1 * 1 88217 708603021 83 Linux /dev/md0p2 88218 91201 23968980 5 Extended /dev/md0p5 ? 121767 155317 269488144 20 Unknown And one more thing: when using the mdadm --examine command, here is the result: mdadm -v --examine --scan /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sd ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9b592be7:c6a2052f:2e29483d:f114274d devices=/dev/sdf ARRAY /dev/md0 level=raid10 num-devices=4 UUID=1a02e789:c34377a1:2e29483d:f114274d devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde md0 has 3 devices which are active. Can someone instruct me how to solve this issue? If possible, I would like to avoid removing the faulty HDD. Please advise.
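
    One avenue worth trying (a sketch based on the fdisk output above, not part of the original post): fdisk reports a partition table inside /dev/md0 (/dev/md0p1, /dev/md0p2, ...), which suggests the filesystem lives on a partition of the array rather than on the raw md device, so mounting /dev/md0 itself would fail exactly as described. Check and mount read-only first:

        # identify what is actually on the array and on its first partition
        blkid /dev/md0 /dev/md0p1
        file -s /dev/md0 /dev/md0p1
        # if md0p1 carries the filesystem, mount it read-only to inspect the data
        mount -o ro /dev/md0p1 /mnt

    If the /dev/md0p* device nodes are missing, "partprobe /dev/md0" (or "kpartx -a /dev/md0") should create them.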

    Read the article

  • Setting up home DNS with Ubuntu Server

    - by Zeophlite
    I have a webserver (with static IP 192.168.1.5), and I want to have my machines on my local network to be able to access it without modifying /etc/hosts (or equivalent for Windows/OSX). My router has Primary DNS server 192.168.1.5 Secondary DNS server 8.8.8.8 (Google's public DNS). Nginx is set up to server websites externally as *.example.com Internally, I want *.example.local to point to the server. My webserver has BIND9 installed, but I'm unsure of the settings. I've been through various contradicting tutorials, and so most of my settings have been clobbered. I've stripped out the lines which I'm confused about. The tutorials I looked at are http://tech.surveypoint.com/blog/installing-a-local-dns-server-behind-a-hardware-router/ and http://ubuntuforums.org/showthread.php?t=236093 . They mostly differ on what should be put in /etc/bind/zones/db.example.local and /etc/bind/zones/db.192, so I've left the conflicting lines out below. Can someone suggest what the correct lines are to give my above behaviour (namely *.example.local pointing to 192.168.1.5)? /etc/network/interfaces auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 192.168.1.5 netmask 255.255.255.0 broadcast 192.168.1.255 gateway 192.168.1.254 /etc/hostname avalon /etc/resolv.conf # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN /etc/bind/named.conf.options options { directory "/var/cache/bind"; forwarders { 8.8.8.8; 8.8.4.4; }; dnssec-validation auto; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; /etc/bind/named.conf.local zone "example.local" { type master; file "/etc/bind/zones/db.example.local"; }; zone "1.168.192.in-addr.arpa" { type master; file "/etc/bind/zones/db.192"; }; /etc/bind/zones/db.example.local $TTL 604800 @ IN SOA avalon.example.local. webadmin.example.local. ( 5 ; Serial, increment each edit 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL /etc/bind/zones/db.192 $TTL 604800 @ IN SOA avalon.example.local. webadmin.example.local. ( 4 ; Serial, increment each edit 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; What do I need to add to the above files so that on a laptop on the internal network, I can type in webapp.example.local, and be served by my webserver? EDIT I made several changes to the above files on the webserver. /etc/network/interfaces (end of file) dns-nameservers 127.0.0.1 dns-search example.local /etc/bind/zones/db.example.local (end of file) @ IN NS avalon.example.local. @ IN A 192.168.1.5 avalon IN A 192.168.1.5 webapp IN A 192.168.1.5 www IN CNAME 192.168.1.5 /etc/bind/zones/db.192 (end of file) IN NS avalon.example.local. 73 IN PTR avalon.example.local. As a side note, my spare Win7 machine was able to connect directly to webapp.example.local, but for a Ubuntu 13.10 machine, I had to make the following changes as well (not on the webserver, but on a separate machine): /etc/nsswitch.conf before hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 after hosts: files dns /etc/NetworkManager/NetworkManager.conf before dns=dnsmasq after #dns=dnsmasq The issue remains that its not wildcard DNS, and so I have to add entries to /etc/bind/zones/db.example.local for webapp1, webapp2, ...
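
    A hedged sketch of the missing piece (not from the original post): BIND zone files accept wildcard records, so a single A record can cover every otherwise-unmatched name under example.local instead of listing webapp1, webapp2, and so on individually.

        # append a wildcard A record to the zone (remember to bump the serial)
        echo '*    IN    A    192.168.1.5' | sudo tee -a /etc/bind/zones/db.example.local
        # reload BIND so the change takes effect
        sudo service bind9 reload

    Also note that a CNAME must point at a hostname rather than an IP address, so "www IN CNAME 192.168.1.5" would be better written as an A record or as "www IN CNAME avalon.example.local.".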

    Read the article

  • No sound from external subwoofer "Sonic Master" on an Asus N76VM

    - by Willem
    A few weeks ago I bought an Asus N76VM notebook, looking forward to its 'superior sound'. This sound system comprises an external subwoofer which amplifies bass and is connected to a special output jack. Ubuntu 12.04, however, does not detect this subwoofer. How can this be solved? Any help would be gratefully appreciated. http://www.asus.com/Notebooks/Multimedia_Entertainment/N76VM/#specifications
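
    A hedged starting point (not from the original post; the exact control names vary by codec): check whether ALSA exposes a bass/LFE channel at all, and whether it is simply muted, before concluding that the jack is undetected.

        # list the playback devices the kernel sees
        aplay -l
        # open the mixer for the first card and look for a "Bass Speaker" or LFE control
        alsamixer -c 0
        # inspect the PulseAudio sink and its channel map
        pactl list sinks

    If no such control exists, the codec may need a model quirk in /etc/modprobe.d/alsa-base.conf; that is laptop-specific, so search for the exact codec name reported by "cat /proc/asound/card0/codec* | grep Codec".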

    Read the article

  • Azure News at TechEd 2010

    - by guybarrette
    The Azure team made a few announcements today at TechEd US 2010: the June 2010 Azure Tools SDK with support for VS 2010 RTM and the .NET Framework 4, the production launch of the Azure CDN, and the OS auto-upgrade feature. Read all about it here.

    Read the article

  • Collision in Tiled Map - LibGDX

    - by user43353
    I have collision code that deals with left or right or top or bottom. I am using Tiled Map with LibGDX. Question is: How do I detect collision with other cells by all 4 sides, and not specifically by left/right or top/bottom. Here is my top/bottom and left/right collision code: private boolean isCellBlocked(float x, float y) { Cell cell = collisionLayer.getCell((int) (x / collisionLayer.getTileWidth()), (int) (y / collisionLayer.getTileHeight())); return cell != null && cell.getTile() != null && cell.getTile().getProperties().containsKey(blockedKey); } public boolean collidesRight() { for(float step = 0; step < getHeight(); step += collisionLayer.getTileHeight() / 2) if(isCellBlocked(getX() + getWidth(), getY() + step)) return true; return false; } public boolean collidesLeft() { for(float step = 0; step < getHeight(); step += collisionLayer.getTileHeight() / 2) if(isCellBlocked(getX(), getY() + step)) return true; return false; } public boolean collidesTop() { for(float step = 0; step < getWidth(); step += collisionLayer.getTileWidth() / 2) if(isCellBlocked(getX() + step, getY() + getHeight())) return true; return false; } public boolean collidesBottom() { for(float step = 0; step < getWidth(); step += collisionLayer.getTileWidth() / 2) if(isCellBlocked(getX() + step, getY())) return true; return false; } What I'm trying to achieve is simple: I'm trying to make code that will detect by all 4 sides, collidesRight + collidesLeft + collidesTop + collidesBottom in one boolean. For some reason, I cant seem to figure it out. I tried to use Rectangles (the Java Class) on the specific tile I want to be detected, but was messy and I have multiple maps. Having a Rectangle (from Java's API) around the player is no problem. It's just the tiles I want to be detected are the main issues as they cause messy code when used with the Rectangle class. Im trying to minimize the amount of code....

    Read the article

  • Daily tech links for .net and related technologies - Mar 18-21, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - Mar 18-21, 2010 Web Development TDD kata for ASP.NET MVC controllers (part 2) -David Take Control Of Web Control ClientID Values in ASP.NET 4.0 - Scott Mitchell Inside the ASP.NET MVC Controller Factory - Dino Esposito Microsoft, jQuery, and Templating - stephen walther Cross Domain AJAX Request with YQL and jQuery - Jeffrey Way T4MVC Add-In to auto run template -Wayne Web Design Website Content Planning The Right Way - Kristin Wemmer Microsoft...(read more)

    Read the article

  • Webcam surveillance software recommendation

    - by Cedric H.
    I'm looking for simple "surveillance" or "security" software for Ubuntu. The main goal is to monitor my pets, so the features can be quite basic, and I'll use a few (two at the beginning) basic, old webcams. I would like it to detect motion and save the pictures/recordings to the local disk, in addition to sending email (and ideally posting to Facebook). The easier it is to use and configure, the better.
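
    One commonly suggested option (an assumption added here; the original question names no packages) is motion, a lightweight daemon that does webcam motion detection, saves snapshots, and can run a command on each event:

        # install and configure the motion daemon
        sudo apt-get install motion
        # per-camera settings: detection threshold, snapshot directory, and an
        # on_picture_save hook that could send mail or post the image elsewhere
        sudo nano /etc/motion/motion.conf
        sudo service motion restart

    ZoneMinder is a heavier alternative if more cameras or a web front end are ever needed.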

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity, INSERTs, UPDATEs and DELETEs cause fragmentation and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don’t hear about so many working on their log files. How can a log file get fragmented? I’m glad you asked. When you create a database there are at least two files created on the disk storage; an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage but that’s off topic for now). It is wholly possible to have more than one log file but in most cases there is little point in creating more than one as the log file is written to in a ‘wrap-around’ method (more on that later). When a log file is created at the time that a database is created the file is actually sub divided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file. Let’s get to see how many VLFs we have in a brand new database. USE master GO CREATE DATABASE VLF_Test ON ( NAME = VLF_Test, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf', SIZE = 100, MAXSIZE = 500, FILEGROWTH = 50 ) LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5MB, MAXSIZE = 250MB, FILEGROWTH = 5MB ); go USE VLF_Test go DBCC LOGINFO; The results of this are firstly a new database is created with specified files sizes and the the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them but lets first note there are 4 rows of information, this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes, you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB. Lets alter the CREATE DATABASE script to create a log file that’s a bit bigger and see what happens. Alter the code above so that the log file details are replaced by LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 1GB, MAXSIZE = 25GB, FILEGROWTH = 1GB ); With a bigger log file specified we get more VLFs What if we make it bigger again? LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5GB, MAXSIZE = 250GB, FILEGROWTH = 5GB ); This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has it’s size increased are pretty basic. 
If the file growth is lower than 64MB then 4 VLFs are created If the growth is between 64MB and 1GB then 8 VLFs are created If the growth is greater than 1GB then 16 VLFs are created. Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database log file gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6 MB, let’s add some data and see what happens. USE vlf_test go -- we need somewhere to put the data so, a table is in order IF OBJECT_ID('A_Table') IS NOT NULL DROP TABLE A_Table go CREATE TABLE A_Table ( Col_A int IDENTITY, Col_B CHAR(8000) ) GO -- Let's check the state of the log file -- 4 VLFs found EXECUTE ('DBCC LOGINFO'); go -- We can go ahead and insert some data and then check the state of the log file again INSERT A_Table (col_b) SELECT TOP 500 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO -- insert 500 rows and we get 22 VLFs EXECUTE ('DBCC LOGINFO'); go -- Let's insert more rows INSERT A_Table (col_b) SELECT TOP 2000 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO 10 -- insert 2000 rows, in 10 batches and we suddenly have 107 VLFs EXECUTE ('DBCC LOGINFO'); Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions, I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations so it’s well worth keeping a check on this property. How do we prevent excessive VLF creation? Creating the database with larger files and also with larger growth steps and actively choosing to grow your databases rather than leaving it to the Auto Grow event can make sure that the growths are made with a size that is optimal. How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don’t affect system users. The steps are: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice Shrink the log file to its smallest possible size DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) * Re-size the log file to the size you want it to, taking in to account your expected needs for the coming months or year. ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) * – If you don’t know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while This is already detailed far better than I can explain it by Kimberley Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above. Knowing when VLFs are being created By complete coincidence while I have been writing this blog (it’s been quite some time from it’s inception to going live) Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it as this is going to catch any sneaky auto grows that take place and let you know about them right away. 
Hassle free monitoring of VLFs If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that let’s you write your own custom metrics then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site and there are some others there are very useful so take a moment or two to look around while you are there. Resources MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/ Disclosure I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • An XEvent a Day (27 of 31) – The Future - Tracking Page Splits in SQL Server Denali CTP1

    - by Jonathan Kehayias
    Nearly two years ago Kalen Delaney blogged about Splitting a page into multiple pages , showing how page splits occur inside of SQL Server.  Following her blog post, Michael Zilberstein wrote a post, Monitoring Page Splits with Extended Events , that showed how to see the sqlserver.page_split Events using Extended Events.  Eladio Rincón also blogged about Using XEvents (Extended Events) in SQL Server 2008 to detect which queries are causing Page Splits , but not in relation to Kalen’s blog...(read more)

    Read the article

  • ORM Profiler v1.1 has been released!

    - by FransBouma
    We've released ORM Profiler v1.1, which has the following new features: Real time profiling A real time viewer (RTV) has been added, which gives insight in the activity as it is received by the client, in two views: a chronological connection overview and an activity graph overview. This RTV allows the user to directly record to a snapshot using record buttons, pause the view, mark a range to create a snapshot from that range, and view graphs about the # of connection open actions and # of commands per second. The RTV has a 'range' in which it keeps live data and auto-cleans data that's older than this range. Screenshot of the activity graphs part of the real-time viewer: Low-level activity tab A new tab has been added to the Application tabs: the Low-level activity tab. This tab shows the main activity as it has been received over the named pipe. It can help to get insight in the chronological activity without the grouping over connections, so multiple connections at the same time per thread are easier to spot. Clicking a command will sync the rest of the application tabs, clicking a row will show the details below the splitter bar, as it is done with the other application tabs as well. Default application name in interceptor When an empty string or null is passed for application name to the Initialize method of the interceptor, the AppDomain's friendly name is used instead. Copy call stack to clipboard A call stack viewed in a grid in various parts of the UI is now copyable to the clipboard by clicking a button. Enable/Disable interceptor from the config file It's now possible to enable/disable the interceptor Initialization from the application's config file, using: Code: <appSettings> <add key="ORMProfilerEnabled" value="true"/> </appSettings> if value is true, the interceptor's Initialize method will proceed. If the value is false, the interceptor's Initialize method will not proceed and initialization won't be performed, meaning no interception will take place. If the setting is absent, or misconfigured, the Initialize method will proceed as normal and perform the initialization. Stored procedure calls for select databases are now properly displayed as a call For the databases: SQL Server, Oracle, DB2, Sybase ASA, Sybase ASE and Informix a stored procedure call is displayed as an execute/call statement and copy to clipboard works as-is. I'm especially happy with the new real-time profiling feature in ORM Profiler, which is the flagship feature for this release: it offers a completely new way to use the profiler, namely directly during debugging: you can immediately see what's going on without the necessity of a snapshot. The activity graph feature combined with the auto-cleanup of older data, allows you to keep the profiler open for a long period of time and see any spike of activity on the profiled application.

    Read the article

  • Silverlight Cream for December 09, 2010 -- #1006

    - by Dave Campbell
    In this Issue: Adam Kinney, Jonathan van de Veen, René Schulte(-2-), Vikas, Chad Campbell, Chris Koenig, John Papa, and Martin Krüger. Above the Fold: Silverlight: "Silverlight TV #54: Introducing 11 Brand New Labs" John Papa WP7: "Gestures in Windows Phone 7" Chris Koenig Training: "New Windows Phone 7 tutorials for Designers on toolbox!" Adam Kinney Shoutouts: Jesse Liberty posted ways to get help when you get stuck: Top 10 Tips To Getting Help With Silverlight From SilverlightCream.com: New Windows Phone 7 tutorials for Designers on toolbox! Adam Kinney posted about some WP7 design goodness he's had the opportunity to take part in putting together that is now available for all of us on the Microsoft design Toolbox site.... detailed info about what's there. Adventures while building a Silverlight Enterprise application part #39 Jonathan van de Veen has his latest Silverlight coding adventure detailed on his blog... in the final throes of releasing, he came across some issues surrounding CRUD operations... Windows Phone Unplugged - How to Detect the Zune Software René Schulte has a post up about my two favorite devices: Zune and WP7 ... and how to detect if the Zune software is running when the device is connected to the PC. Issue with the WP7 PictureDecoder and Workaround René Schulte has a second post up today about strange behavior with the PictureDecoder DecodeJpeg method... he describes the problem and a workaround for it. Performance Wizard for Silverlight Vikas reports some Silverlight goodness in the VS2010 SP1 beta that's out ... a Performance Wizard... and he's ratted out it's use and sharing that info... Submitting an App to the Windows Phone Marketplace Chad Campbell details the user experience of getting an app through the marketplace to users... from the standpoint of someone that's been there. Gestures in Windows Phone 7 Chris Koenig is talking about Gestures in WP7, documenting how he used some XNA to get some side-to-side image scrolling going on... and gave us the source! Silverlight TV #54: Introducing 11 Brand New Labs John Papa has his latest Silverlight TV up and he's talking to two great guys: Dan Wahlin and Corey Schuman who have produced the labs you've seen referenced... awesome stuff guys... WP7: precisely position the text cursor when writing text Martin Krüger has a quick WP7 usage tip up for precisely positioning the text cursor in a textbox ... I could have used that today when "Nick's Frame Shop" came up as "Nix Frame Shop" in a search. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Network Solutions Gold VIP Program

    - by GGBlogger
    Today I received an email advising me “Congratulations! You have been selected for the Network Solutions Gold VIP Program” Now I get a bunch of messages from Network Solutions because a long time ago I chose them as my registrar. I usually just save these to my Outlook Network Solutions folder and move on. This time my wife was in the room and I chuckled and said something about “Network Solutions has made me a Gold VIP member.” She said “So what does that mean?” Prompted by that I scrolled down the page to look at my “perks". Let’s see – Special pricing, 1 year free Web Forwarding, 1 hour of free support… etc etc until I hit “Enrollment in SafeRenew* SafeRenew* simplifies your renewal process. It protects your domain name registrations and corresponding services in the event you forget or are unable to renew on time. I’m thinking “Now ain’t that neat” I don’t use auto renew anyway and never have. Then I continued reading: Beginning June 24th, 2012 your domains* (that’s 10 days away) will automatically renew. To ensure continuation of service, please be certain you have a valid credit card on file. If you do not wish for your services to be automatically renewed, please click here to log into account manager and opt out of the SafeRenew™ service. Whoa Network Solutions is going to start spending my money without so much as a by your leave sir!!! I don’t bloody think so and how truly magnanimous of them I can “click here” to opt out of what I didn’t opt in for. Talk about furious! I still am. Now the kicker is: if my wife hadn’t been curious and if I’d had a working credit card on file I wouldn’t have know this until one of my unwanted domain names auto renewed. Of course I was told “Oh – we’d have refunded your money” to which I say bull. In my view Networks Solutions is guilty of a crime of some sort. They DO NOT have a right to spend my money without asking!!!!! So watch out folks.

    Read the article

  • Daily tech links for .net and related technologies - June 14-16, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - June 14-16, 2010 Web Development ASP.Net MVC 2 Auto Complete Textbox With Custom View Model Attribute & EditorTemplate - Sean McAlinden Localization with ASP.NET MVC ModelMetadata - Kazi Manzur Rashid Securing Dynamic Data 4 (Replay) - Steve Adding Client-Side Script to an MVC Conditional Validator - Simon Ince jQuery: Storing and retrieving data related to elements - Rebecca Murphey Web Design 48 Examples of Excellent Layout in Web Design...(read more)

    Read the article

  • ASP.NET Localization: Enabling resource expressions with an external resource assembly

    - by Brian Schroer
    I have several related projects that need the same localized text, so my global resources files are in a shared assembly that’s referenced by each of those projects. It took an embarrassingly long time to figure out how to have my .resx files generate “public” properties instead of “internal” so I could have a shared resources assembly (apparently it was pretty tricky pre-VS2008, and my “googling” bogged me down some out-of-date instructions). It’s easy though – Just change the “Custom Tool” to “PublicResXFileCodeGenerator”:    …which can be done via the “Access Modifier” dropdown of the resource file designer window:   A reference to my shared resources DLL gives me the ability to use the resources in code, but by default, the ASP.NET resource expression syntax: <asp:Button ID="BeerButton" runat="server" Text="<%$ Resources:MyResources, Beer %>" />   …assumes that your resources are in your web site project.   To make resource expressions work with my shared resources assembly, I added two classes to the resources assembly: 1) a custom IResourceProvider implementation:   1: using System; 2: using System.Web.Compilation; 3: using System.Globalization; 4:   5: namespace DuffBeer 6: { 7: public class CustomResourceProvider : IResourceProvider 8: { 9: public object GetObject(string resourceKey, CultureInfo culture) 10: { 11: return MyResources.ResourceManager.GetObject(resourceKey, culture); 12: } 13:   14: public System.Resources.IResourceReader ResourceReader 15: { 16: get { throw new NotSupportedException(); } 17: } 18: } 19: }   2) and a custom factory class inheriting from the ResourceProviderFactory base class:   1: using System; 2: using System.Web.Compilation; 3:   4: namespace DuffBeer 5: { 6: public class CustomResourceProviderFactory : ResourceProviderFactory 7: { 8: public override IResourceProvider CreateGlobalResourceProvider(string classKey) 9: { 10: return new CustomResourceProvider(); 11: } 12:   13: public override IResourceProvider CreateLocalResourceProvider(string virtualPath) 14: { 15: throw new NotSupportedException(String.Format( 16: "{0} does not support local resources.", 17: this.GetType().Name)); 18: } 19: } 20: }   In the “system.web / globalization” section of my web.config file, I point the “resourceProviderFactoryType" property to my custom factory:   <system.web> <globalization culture="auto:en-US" uiCulture="auto:en-US" resourceProviderFactoryType="DuffBeer.CustomResourceProviderFactory, DuffBeer" />   This simple approach met my needs for these projects , but if you want to create reusable resource provider and factory classes that allow you to specify the assembly in the resource expression, the instructions are here.

    Read the article

  • ScrollyFox Provides Automated Page Scrolling in Firefox

    - by Asian Angel
    Do you read a high amount of content each day on the web but get tired of manually scrolling through everything? Now you can set up relaxed pace auto-scrolling in Firefox with the ScrollyFox extension. Note: You may occasionally encounter a website where the extension will not work. This may be due to the particular website’s coding. Using ScrollyFox Once you have the extension installed you may want to have a quick look at the preferences. The default scroll speed is set at “50” and the reverse scrolling setting is enabled. You can easily adjust the settings for speed to suit your needs. Note: For our examples we left the reverse scrolling setting enabled. By default the extension is disabled at first and the status bar button will have a faded coloration. You can see what the button looks like once activated…notice the small arrow type buttons on the right side. In our first example you can see the webpage auto-scrolling in a downward direction. Having reached the bottom it automatically started scrolling back towards the top. Visiting the How-To Geek website you can see that the extension was already working as the page was finishing loading. Going up! Conclusion While this extension may not be for everyone, it can be useful for those who have heavy reading and/or very long articles to read. Links Download the ScrollyFox extension (Mozilla Add-ons) Similar Articles Productive Geek Tips Fixing Firefox Scrolling Problems with Dell Synaptics TouchpadQuick Hits: 11 Firefox Tab How-TosDisable That Irritating AutoScroll Feature in FirefoxEnjoy Customizable Smooth Scrolling in Firefox with SmoothWheelQuick Tip: Disable Firefox Tab Scrolling TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips Xobni Plus for Outlook All My Movies 5.9 CloudBerry Online Backup 1.5 for Windows Home Server Snagit 10 Video preview of new Windows Live Essentials 21 Cursor Packs for XP, Vista & 7 Map the Stars with Stellarium Use ILovePDF To Split and Merge PDF Files TimeToMeet is a Simple Online Meeting Planning Tool Easily Create More Bookmark Toolbars in Firefox

    Read the article

  • Dell XPS L502X not detecting monitor-TV

    - by Guilherme Z. Santos
    I'm running Ubuntu 12.04 LTS on a Dell XPS L502X, and when I connect the HDMI cable from the TV to the computer, Ubuntu detects nothing at all; it works perfectly fine in Windows 7, though. I've already gone to the Displays settings, plugged and unplugged the TV, and clicked the Detect Displays button, and nothing happens. Do I have to activate the HDMI output or something? I ask because I used another computer with a VGA output and it worked perfectly.
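
    A quick check worth running (a sketch, not from the original post; output names such as HDMI1 and LVDS1 vary between drivers):

        # see which outputs the X driver exposes and whether HDMI is listed
        xrandr --query
        # if an HDMI output appears but is off, try forcing it on
        xrandr --output HDMI1 --auto --right-of LVDS1

    If xrandr shows no HDMI output at all, the problem is likely at the driver level; this model uses NVIDIA Optimus graphics, and on some Optimus laptops the HDMI port is wired to the discrete GPU, which the open-source stack of that era could not drive without extra setup.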

    Read the article

  • Window focus confusion in unity

    - by Bryan Agee
    I like having focus prevention set to high, so that I don't have some stupid auto-launched app steal my typing in the middle of something else. Unfortunately, Unity keeps focus on the right window while raising the new one. A number of times, this has caused me to close an application by accident that had control of the menu bar, even though it was underneath the new window. Is there a way to prevent raise without focus?
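
    A hedged pointer (not from the original post): in this Unity/Compiz generation the focus and raise behaviour is configured in the General Options plugin of CompizConfig Settings Manager, under Focus & Raise Behaviour, so the relevant settings can at least be inspected there.

        # install and launch the Compiz settings manager
        sudo apt-get install compizconfig-settings-manager
        ccsm

    Whether raise-without-focus can be suppressed independently of the Focus Prevention Level is worth testing there, since the two settings interact.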

    Read the article

  • Dual monitors: one monitor starts blinking and after a while turns black

    - by Anand
    I am using dual monitors on Ubuntu 11 with the classic interface (not Unity). The system detects and automatically configures both monitors correctly. The problem is that after some time, one monitor starts blinking and after a while it turns black. Does anyone know whether this is a problem with my hardware or with Ubuntu? Thank you very much in advance for your help! Anand
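
    A way to narrow it down (a sketch, not from the original question): watch the X log while the blinking happens; a cable or monitor fault tends to show up as the output disconnecting and reconnecting, while a driver problem usually leaves errors in the log.

        # follow the X server log while the monitor misbehaves
        tail -f /var/log/Xorg.0.log
        # check whether X still believes the output is connected once it goes black
        xrandr --query

    Swapping the cable, or the ports between the two monitors, is the quickest hardware-versus-software test.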

    Read the article

  • Keyboard Shortcuts in Oracle SQL Developer

    - by thatjeffsmith
    The CTRL key, which stands for ConTRoL…aw, the good ole days What keyboard shortcuts should EVERY Oracle SQL Developer user know? How do you find new shortcuts to master, and how do you change them to match ones you’ve already learned in other tools? These are the driving questions for today’s post. While some of us may be keyboard ninjas, and others are more driven to use the mouse – everyone has probably picked up a few strategic keyboard shortcuts over the years. For example, I’ve personally JUST memorized the Cmd-Shift-4 ‘trick’ in Mac OS X. And of course we all know what F1 does, right? Right?!? Here are a few more keyboard shortcuts to commit to memory. My Favorite SQL Developer Shortcuts ctrl-enter : executes the current statement(s) F5 : executes the current code as a script (think SQL*Plus) ctrl-space : invokes code insight on demand Code Editor – Completion Insight – Enable Completion Auto-Popup (Keyword being Auto) ctrl-Up/Dn : replaces worksheet with previous/next SQL from SQL History ctrl-shift+Up/Dn : same as above but appends instead of replaces shift+F4 : opens a Describe window for current object at cursor ctrl+F7 : format SQL ctrl+/ : toggles line commenting ctrl+e : incremental search Configuring Keyboard Shortcuts in SQL Developer Tools Preferences Shortcut Keys Search by command name OR the keystroke itself Some tips… Sort by category Pay special attention to the ‘Code Editor’ and ‘Other’ categories Mind the conflicts when you change the defaults Be nice – share! You can save your new mappings with your co-workers using the Export and Import buttons Click on ‘More Actions’ to expose the Import and Export buttons When I get ‘bored’ or if I think I might be missing something, I peruse the Code Editor and Other categories, again! I’ve picked up quite a few cool editor tricks here. Then I blog about them, like they’re ‘magic.’ #EvilLaugh But the main tip is this – don’t let your previously memorized keyboard shortcuts SHORTCUT your usage of SQL Developer. If your fingers have already memorized some keystrokes, just re-program SQL Developer to match! What’s your favorite shortcut? I’ll use the most popular shortcut mentioned in the comments to round out my Top 10 list above!

    Read the article

  • 12/12 Live Webcast: Introducing Next-Generation Enterprise Auditing and Database Firewall

    - by jgelhaus
    Join Oracle Security gurus to hear how Oracle products monitor Oracle and non-Oracle database traffic, detect unauthorized activity including SQL injection attacks, and block internal and external threats from reaching the database. Hear how organizations such as TransUnion Interactive and SquareTwo Financial rely on Oracle to monitor and secure their Oracle and non-Oracle database environments. Register for the webcast here.

    Read the article

  • Broadcom BCM4331 not working on new Mac Mini 5,1

    - by Jon
    I can't seem to get my wireless card working on my Mac Mini 5,1. Lspci returns: 03:00.0 Network controller: Broadcom Corporation BCM4331 802.11a/b/g/n (rev 02) But running "additional drivers" doesn't detect anything. The nm-applet menu reads "device not ready--firmware missing." What can I do to get this to work? Note, this is with 12.04.1, so many of the previous discussions (for 11.10, etc) probably don't apply here.
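
    A hedged sketch of the usual fix for the "firmware missing" message (package names assume the standard Ubuntu 12.04 repositories): the open-source b43 driver claims the BCM4331 but ships without firmware, so either install the extracted firmware or switch to Broadcom's proprietary wl driver.

        # option 1: firmware for the open-source b43 driver
        sudo apt-get install firmware-b43-installer
        # option 2 (BCM4331 support here depends on the driver version): the proprietary driver
        sudo apt-get install bcmwl-kernel-source
        # then reboot, or reload the module
        sudo modprobe -r b43 && sudo modprobe b43

    A temporary wired or USB-tethered connection is needed during the install, since the firmware package downloads its payload from the internet.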

    Read the article
