Search Results

Search found 1108 results on 45 pages for 'stats'.


  • July, the 31 Days of SQL Server DMO’s – Day 24 (sys.dm_db_index_operational_stats)

    - by Tamarick Hill
    The sys.dm_db_index_operational_stats Dynamic Management Function returns information about the IO, locking, and access methods for the indexes that you currently have on your SQL Server instance. This function takes four input parameters: (1) database_id, (2) object_id, (3) index_id, and (4) partition_number. Let's have a look at the results from this function against our AdventureWorks2012 database. This function returns a ton of columns, so not only will I not attempt to describe each of them, I won't even attempt to display all of them here. My query below will give you a subset of the columns returned from this function.

        SELECT database_id, object_id, index_id, partition_number,
               leaf_insert_count, leaf_delete_count, leaf_update_count, leaf_ghost_count,
               nonleaf_insert_count, nonleaf_delete_count, nonleaf_update_count,
               range_scan_count, forwarded_fetch_count,
               row_lock_count, row_lock_wait_count, page_lock_count, page_lock_wait_count,
               index_lock_promotion_attempt_count, index_lock_promotion_count,
               page_compression_attempt_count, page_compression_success_count
        FROM   sys.dm_db_index_operational_stats(DB_ID('AdventureWorks2012'), NULL, NULL, NULL)

    The first four columns in the result set represent the values that we passed in as our input parameters. If you use NULLs as I did, then you will see results for every index on your system. I specified a database_id, so my result set only shows the records pertaining to my AdventureWorks2012 database. The next columns in the result set provide you with information on how many inserts, deletes, or updates have taken place on your leaf and nonleaf index levels. The nonleaf levels refer to the intermediate and root index levels. In the middle of these you see a leaf_ghost_count column, which represents the number of records that have been logically deleted and marked as "ghosted" and are waiting on the background ghost cleanup process to physically remove them. The range_scan_count column represents the number of range or table scans that have been performed against an index. The forwarded_fetch_count column represents the number of rows that were returned from a forwarding row pointer. The row_lock_count and row_lock_wait_count represent the number of row locks that have been requested for an index and the number of times SQL has had to wait on a row lock, respectively. The page_lock_count and page_lock_wait_count represent the number of page locks that have been requested for an index and the number of times SQL has had to wait on a page lock, respectively. The index_lock_promotion_attempt_count represents the number of times the database engine has attempted to promote a lock to the index level, and the index_lock_promotion_count column displays how many times that index lock promotion was successful. Lastly, the page_compression_attempt_count and page_compression_success_count represent how many times a page compression was attempted and how many times it succeeded. As you can see, there is a ton of information returned from this DMF. The DMV we reviewed yesterday (sys.dm_db_index_usage_stats) provided you with good information on when and how indexes have been used, but this DMF takes an even deeper dive into these statistics. If you are interested in performing a very detailed analysis of the operational stats of your indexes, this is not only a good place to start, but more than likely the best place.
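    To make the counters easier to act on, you can resolve object_id and index_id into readable names. Here is a minimal sketch of that kind of query; the join to sys.indexes and the ordering by lock waits are illustrative additions rather than part of the original post, and it assumes you run it from within the AdventureWorks2012 database so that sys.indexes resolves against the right catalog.

        -- Indexes with the most row-lock contention first (illustrative query)
        SELECT   OBJECT_NAME(os.object_id, os.database_id) AS table_name,
                 i.name AS index_name,
                 os.range_scan_count,
                 os.row_lock_wait_count,
                 os.page_lock_wait_count
        FROM     sys.dm_db_index_operational_stats(DB_ID('AdventureWorks2012'), NULL, NULL, NULL) AS os
        JOIN     sys.indexes AS i
                 ON  i.object_id = os.object_id
                 AND i.index_id  = os.index_id
        ORDER BY os.row_lock_wait_count DESC;

    Note that heaps show up with index_id 0, and their name in sys.indexes is NULL, so the index_name column simply comes back empty for them.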
For more information on this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms174281.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • CodePlex Daily Summary for Saturday, October 29, 2011

    CodePlex Daily Summary for Saturday, October 29, 2011

    Popular Releases

    patterns & practices: Enterprise Library Contrib: Enterprise Library Contrib - 5.0 (Oct 2011): This release of Enterprise Library Contrib is based on the Microsoft patterns & practices Enterprise Library 5.0 core and contains the following. Common extensions: TypeConfigurationElement<T> - a polymorphic configuration element without having to be part of a PolymorphicConfigurationElementCollection; AnonymousConfigurationElement - a configuration element that can be uniquely identified without having to define its name explicitly. Data Access Application Block extensions: MySql Provider - ...

    Network Monitor Open Source Parsers: Network Monitor Parsers 3.4.2748: The Network Monitor Parsers packages contain parsers for more than 400 network protocols, including RFC-based public protocols and protocols for Microsoft products defined in the Microsoft Open Specifications for Windows and SQL Server. NetworkMonitor_Parsers.msi is the base parser package which defines parsers for commonly used public protocols and protocols for Microsoft Windows. In this release, NetworkMonitor_Parsers.msi continues to improve quality and fix bugs. It has included the fo...

    Duckworth Lewis Professional Edition Calculator: DLcalc 3.0: DLcalc 3.0 can perform Duckworth/Lewis Professional Edition calculations 100% accurately. It also produces over-by-over and ball-by-ball PAR score tables.

    Folder Bookmarks: Folder Bookmarks 2.2.0.1: In this version: Custom icons - now you can change the icons of the bookmarks. By default, whenever an image is added, the icon is automatically changed to a thumbnail of the picture. This can be turned off in the settings (Options... > Settings). Ability to remove items from the 'Recent' category. Bugfixes - the 'Choose' button in 'Edit Bookmark' now works, and another problem in the 'Edit Bookmark' window was fixed.

    Media Companion: MC 3.420b Weekly: Ensure .NET 4.0 Full Framework is installed (available from http://www.microsoft.com/download/en/details.aspx?id=17718). Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b (details here). Movies - fixed: fanart and poster scraping issues. TV Shows - (re)added: rebuild single show; fixed: issue when shows are moved from original location; ability to handle " for actor nicknames; crash when episode name contains "<" (does not scrape yet); clears fanart when switch...

    patterns & practices - Unity: Unity 3.0 for .NET 4.5 Preview: The Unity 3.0.1026.0 Preview enables Unity to work on .NET 4.5 with both the WinRT and desktop profiles. The major changes include: Unity projects updated to target .NET 4.5; dynamic build plans modified to use compiled lambda expressions instead of Reflection.Emit; reflection converted to use the new TypeInfo API; projects updated to work with the Microsoft Visual Studio 2011 Preview. Notes/known issues: the Microsoft.Practices.Unity.UnityServiceLocator class cannot be use...

    Managed Extensibility Framework: MEF 2 Preview 4: Detailed information on this release is available on the BCL team blog.

    Image Converter: Image Converter 0.3: New features: English and German support. Technical improvements: Microsoft All Rules using Code Analysis. Planned features for a future release: 1. unit testing, 2. a command line interface, 3. automatic updates.

    AcDown Anime & Comic Downloader: AcDown v3.6: AcDown is a downloader for video and comic sites including Acfun, Bilibili, Tucao.cc and SF, written in C# and requiring .NET Framework 2.0. It runs on 32- and 64-bit Windows XP/Vista/7; on Windows XP, .NET Framework 2.0 (x86 or x64) must be installed first.

    DotNetNuke® Events: 05.02.01: This release fixes any known bugs from any previous version. Events 05.02.01 will work for any DNN version 5.5.0 and up. Full details on the changes can be found at http://dnnevents.codeplex.com/workitem/list/basic. Please review and rate this release (stars are welcome). Bug fixes: added validation around the category cookie; the RSS feed was missing an explicit close of the file when writing - fixed; added extra security into detail view; .ICS files did not include correct line folding - fixed; cha...

    Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.33: Added the JSParser.ParseExpression method to parse JavaScript expressions rather than source-elements. Added a -strict switch (CodeSettings.StrictMode) to force input code to ECMA5 Strict-mode (extra error-checking, "use strict" at top). Fixed a bug when the MinifyCode setting was set to false but RemoveUnneededCode was left at its default value of true.

    Path Copy Copy: 8.0: New version that mostly adds lots of requested features: 11340 11339 11338 11337. This version also features a more elaborate Settings UI that has several tabs. I tried to add some notes to better explain the use and purpose of the various options. The Path Copy Copy documentation is also on the way, both to explain how to develop custom plugins and to explain how to pre-configure options if you're a network admin. Stay tuned.

    MVC Controls Toolkit: Mvc Controls Toolkit 1.5.0: Added: the new Client Blocks feature of Views; a new "move" js method for the TreeViews; the NewHtmlCreated js event for the DataGrid. Improved the ChoiceList structure, which now also allows the selection list of a dropdown to be chosen with a lambda expression. Improved the AcceptViewHintAttribute controller filter: now a client can specify not only the name of a View or Partial View it prefers, but also that it wants to receive just the rough data in Json format. Fixed: issue with partial trust cl...

    Free SharePoint Master Pages: Buried Alive (Halloween) Theme: Release notes: created for Halloween; you will find the theme file, a custom css file and images. Created by Al Roome @AlstarRoome. Features: custom styling for web parts, custom background. Screenshot: https://s3.amazonaws.com/kkhipple/post/sharepoint-showcase-halloween.png

    DevForce Application Framework: DevForce AF 2.0.3 RTW: Prerequisites: WPF 4.0, Silverlight 4.0, DevForce 2010 6.1.3.1. Download contents: debug and release assemblies, API documentation, source code, License.txt, Requirements.txt. Release highlights - new: EventAggregator event forwarding; EntityManagerInterceptor<T> to intercept EntityManager events; IHarnessAware to allow for ViewModel setup when executed inside of the Development Harness; improved design time stability; support for add-in development; CoroutineFns.To...

    NicAudio: NicAudio 2.0.5: Minor change to accept special DTS stereo modes (LtRt, AB, ...).

    NDepend TFS 2010 integration: version 0.5.0 beta 1: Only the activity and the VS plugin are available right now. They basically work. Data types that are logged into TFS reports are subject to change. This is no big deal since data is not yet sent into the warehouse.

    Windows Azure Toolkit for Windows Phone: Windows Azure Toolkit for Windows Phone v1.3.1: Upgraded Windows Azure projects to Windows Azure Tools for Microsoft Visual Studio 2010 1.5 – September 2011. Upgraded the tools to support the Windows Phone Developer Tools RTW. Updated SQL Azure-only scenarios to use ASP.NET Universal Providers (through the System.Web.Providers v1.0.1 NuGet package). Changed the Shared Access Signature service interface to support more operations. Refactored the Blobs API to have a similar interface and usage to that provided by the Windows Azure SDK Stor...

    DotNetNuke® FAQ: 05.00.00: FAQ (Frequently Asked Questions) 05.00.00 will work for any DNN version 5.6.1 and up. It is the first version which is rewritten in C#. The scope of this update is to fix all known issues and improve the user interface. Please review and rate this release (stars are welcome). Bug fixes: the Manage Categories button text was not localized; Edit/Add FAQ Entry button text was not localized. Enhancements: added an option to select the control for category display: listbox with checkboxes (flat category ...

    SiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.0.921.340): Added CodePlex and PayPal links. New icon.

    New Projects

    Asynk: Asynk is a framework/application that allows existing applications to easily be extended with an offloaded asynchronous worker layer. Asynk is developed using C#.

    Blob Tower Defense: 3D tower defense game for Windows Phone 7. School project for Brno University of Technology, computer graphics class.

    Booz: Booz is... an extended version of the boo shell (booish2, to be precise). Offers additional commands like cd, md, ls etc. I hope this shell can be used to take the position of/surpass the native Windows shell in the near future.

    CIMS: a sanction information system for science and technology of HUST.

    CrystalDot - Icon Collection / Pack (LGPL): A .NET/Mono-friendly variant of the Crystal icons by Everaldo. Icon collection/pack for .NET and Mono designed by Everaldo - KDE style. http://www.everaldo.com/crystal/

    dotetes: dotetes is a crossword puzzle tool developed in C#.

    Emoe': This project is a Windows Phone 7.1 application.

    Equation Inversion: Visual Studio 2008 add-in for equation inversions.

    Exploring VMR Features on WEC7: A sample application that helps you alpha-blend a bitmap onto the camera stream in Windows Embedded Compact 7 using the DirectShow Video Mixing Renderer (VMR). It is a VS2008-based smart device project developed in C++. I have explained the sample application at the following blog link: http://www.e-consystems.com/blog/windowsce/?p=759

    EzValidation: Custom validation extensions for ASP.NET MVC 3. Includes server and client side model-based validation attributes for: Equal To, Not Equal To, Greater Than, Greater Than or Equal To, Less Than, Less Than or Equal To. Supports validating against: another model field, a specific value, or the current date/yesterday/tomorrow (for dates and strings). Download & install via NuGet: "Install-Package EzValidation"

    Flu.net: Flu.net is a tool that helps you create your own fluent syntax for .NET Framework applications in a declarative fashion. It is aimed at infrastructure and other open-source projects.

    For Chess Endgames: King vs. King Opposition Calculator: You must input the locations of 2 kings on a chessboard, and whose turn it is to move. The calculator will display which king has the opposition, and how it can be used or maintained.

    GameTrakXNA: This project aims to create a simple library to use the unique GameTrak controller within XNA and Flash.

    Google Speech Recognition Example: Google Speech Recognition contains a working example of an application that uses the Google speech recognition API. The app contains all necessary dlls to record, decode and send your voice request to the Google service and receive a text representation of what you've said. It's developed in C#.

    Interval Mandelbrot Explorer: Explore the Mandelbrot set using interval arithmetic.

    ISD training tasks: ISD training examples and tasks.

    iTunesControlBar: The iTunesControlBar helps users control their iTunes application while it is minimized. iTunesControlBar resides at the top of the screen, invisible when not used, and allows playback and volume control, library searches and media information without the need to bring up iTunes.

    iTurtle: A bunch of PowerShell scripts to automate server management in an AD environment.

    M26WC - Mono 2.6 Wizard Control: A wizard which runs under Mono 2.6. A fork of http://aerowizard.codeplex.com/

    Microsoft Help Viewer 2: Help Viewer 2 is the help runtime for both Visual Studio 11 help and Windows 8 help. The code in this project will help you use and understand the HV2 runtime API.

    MONTRASEC: Monitoring Trafficking in human beings and Sexual Exploitation of Children: benchmarking for member state and EU reporting, turning the SIAMSECT templates into a user-friendly interface and reporting tool.

    MTF.NET Runtime: Managed Task Framework .NET Runtime. The MTF.NET runtime software and resulting assemblies are required to run applications built using the Managed Task Framework.NET Professional (Visual Studio 2010 extension) software design editor. The MTF.NET team is committed to continuously improving the core MTF.NET runtime and ensuring it is always available free and fully transparent.

    Pandoras Box: A greenfield inversion of control project utilising the power and flexibility of expressions and preferring convention over configuration.

    Pass the Puzzle: Pass the Puzzle is a frantic word-guessing party game. The game displays a few letters, and the players must come up with words containing those letters. But beware: if the timer goes off, you lose! It is based on the folk party game Pass the Parcel and is written in C#.

    PerCiGal: Percigal is a project for the development of applications for managing your personal media library. It consists of: a Windows application to use at home to catalog movies, TV series, cast and books, with Internet support for information retrieval; a web interface for viewing and cataloging your media from anywhere; and an application for smartphones.

    Project Flying Carpet: This game is a project for the Game Design: Game Engines course of the Digital Games program at Unisinos.

    proxy browser: sed leo Latin's Butterfly....

    Python Multiple Dispatch: Multiple dispatch (AKA multimethods) for Python 3 via a metaclass and type annotations.

    reDune: A game based on Dune 2000 by Westwood and Electronic Arts.

    Rereadable: Keep pages from the internet to read later.

    ServStop: ServStop is a .NET application that makes it easy to stop several system services at once. Now you don't have to change startup types or stop them one at a time. It has a simple list-based interface with the ability to save and load lists of user services to stop. Written in C#.

    SharePoint 2010 Audience Membership Workflow Activity (Full Trust): A simple SharePoint 2010 workflow activity / workflow condition to check whether the user initiating the workflow is a member of a specified audience. Farm-level .wsp solution, written in C#. Once installed, the workflow activity can be used in SharePoint Designer 2010 declarative workflows.

    SQL Server® to Firebird DB converter: Converts a Microsoft SQL Server® database into a Firebird database, including the entire structure and data.

    stegitest: test project.

    System.Threading.Joins: The Joins project provides asynchronous concurrency semantics based on join calculus and modeled after the Microsoft Research Cω (C Omega) project.

    TestAndroidGame: A trial project for developing an Android game.

    tetribricks: block game.

    Topographic Explorer: A project to import, convert, explore, manipulate, and save topographical maps. Looking to use C# and WPF.

    Trading: Under construction!!!

    Trombone: Trombone makes it easier for Windows Mobile Professional users to automate status replies through SMS. It's developed in Visual C# 2008.

    Tulsa SharePoint Interest Group: Repository for source code for the Tulsa SharePoint Interest Group's web site. The Tulsa SharePoint Interest Group is using the Community Kit for SharePoint. This project will house any modifications that are specific to our user group.

    World of Tanks RU tiny stats collection utility: Tiny utility to load player stats from the World of Tanks RU server. Results are saved to a comma-separated file.

    WS-Discovery Proxy: Attempt at creating a general-purpose WS-Discovery proxy.

    Yamaha Tu?n Tr?c: This application is used to manage information for Yamaha Tu?n Tr?c.

    Read the article

  • SQL Server 2008 Compression

    - by Peter Larsson
    Hi! Today I am going to talk about compression in SQL Server 2008. The data warehouse I currently design and develop holds historical data back to 1973. The data warehouse itself will get another blog post later, due to its complexity. However, the server has 60GB of memory (of which 48GB is dedicated to the SQL Server service), so not all data fit in memory, and the SAN is not the fastest one around. So I decided to give compression a go, since we use Enterprise Edition anyway. This is the code I use to generate the statements that compress all tables with PAGE compression:

        DECLARE @SQL VARCHAR(MAX)

        DECLARE curTables CURSOR FOR
            SELECT 'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                   + '.' + QUOTENAME(OBJECT_NAME(object_id))
                   + ' REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)'
            FROM   sys.tables

        OPEN curTables

        FETCH NEXT FROM curTables INTO @SQL

        WHILE @@FETCH_STATUS = 0
            BEGIN
                IF @SQL IS NOT NULL
                    RAISERROR(@SQL, 10, 1) WITH NOWAIT

                FETCH NEXT FROM curTables INTO @SQL
            END

        CLOSE      curTables
        DEALLOCATE curTables

    Copy and paste the result to a new code window and execute the statements. One thing I noticed when doing this is that the database grows by the same size as the table being rebuilt. If the database cannot grow by that much, the operation fails. For me, I first ended up with an orphaned connection. Not good. And this is the code I use to create the index compression statements:

        DECLARE @SQL VARCHAR(MAX)

        DECLARE curIndexes CURSOR FOR
            SELECT 'ALTER INDEX ' + QUOTENAME(name)
                   + ' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                   + '.' + QUOTENAME(OBJECT_NAME(object_id))
                   + ' REBUILD PARTITION = ALL WITH (FILLFACTOR = 100, DATA_COMPRESSION = PAGE)'
            FROM   sys.indexes
            WHERE  OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                   AND OBJECTPROPERTY(object_id, 'IsTable') = 1
            ORDER BY CASE type_desc
                         WHEN 'CLUSTERED' THEN 1
                         ELSE 2
                     END

        OPEN curIndexes

        FETCH NEXT FROM curIndexes INTO @SQL

        WHILE @@FETCH_STATUS = 0
            BEGIN
                IF @SQL IS NOT NULL
                    RAISERROR(@SQL, 10, 1) WITH NOWAIT

                FETCH NEXT FROM curIndexes INTO @SQL
            END

        CLOSE      curIndexes
        DEALLOCATE curIndexes

    When this was done, I noticed that the 90GB database was now only 17GB. And most important, the complete database could now reside in memory! After this I took care of the administrative tasks: backups. Here I copied the code from Management Studio because I didn't want to spend too much time on this. The code looks like this (notice the COMPRESSION option):

        BACKUP DATABASE [Yoda]
        TO   DISK = N'D:\Fileshare\Backup\Yoda.bak'
        WITH NOFORMAT,
             INIT,
             NAME = N'Yoda - Full Database Backup',
             SKIP,
             NOREWIND,
             NOUNLOAD,
             COMPRESSION,
             STATS = 10,
             CHECKSUM
        GO

        DECLARE @BackupSetID INT

        SELECT @BackupSetID = Position
        FROM   msdb..backupset
        WHERE  database_name = N'Yoda'
               AND backup_set_id = (SELECT MAX(backup_set_id) FROM msdb..backupset WHERE database_name = N'Yoda')

        IF @BackupSetID IS NULL
            RAISERROR(N'Verify failed. Backup information for database ''Yoda'' not found.', 16, 1)

        RESTORE VERIFYONLY
        FROM DISK = N'D:\Fileshare\Backup\Yoda.bak'
        WITH FILE = @BackupSetID,
             NOUNLOAD,
             NOREWIND
        GO

    After running the backup, the file size was reduced even further, thanks to the zip-like backup compression algorithm used in SQL Server 2008. The file size? Only 9GB. //Peso
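    If you want an estimate before committing to a full rebuild, SQL Server 2008 also ships with sp_estimate_data_compression_savings. Here is a minimal sketch, using an AdventureWorks table purely as a stand-in for one of your own:

        -- Estimate PAGE compression savings for one table (all indexes, all partitions)
        EXEC sp_estimate_data_compression_savings
             @schema_name      = 'Production',
             @object_name      = 'TransactionHistory',
             @index_id         = NULL,
             @partition_number = NULL,
             @data_compression = 'PAGE'

    The result set shows current and estimated sizes per index, which makes it easier to predict both the final database size and the free-space headroom the rebuilds above will need.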

    Read the article

  • SQLAuthority News – Updates on Contests, Books and SQL Server

    - by pinaldave
    There are lots of things happening on this blog, and I feel it is sometimes difficult to keep up. One suggestion I keep receiving is to have a single page one can visit to see all the updates. I did consider this at some point, but in the era of RSS feeds it is difficult to draw a proper audience to such a page. So here are a few updates on the various contests and book giveaways from recent weeks. Combo set of 5 Joes 2 Pros Books – 1 for YOU and 1 for a Friend – I have received so many entries for this contest. Many have sent me email asking if this contest can be extended by a couple of days. For that reason, the deadline for this contest is now Nov 10th, 7 AM. You can send your entries by that time. The prize is 2 combo sets of Joes 2 Pros, worth USD 444. If you have not taken part in the contest, please take part now. Guess What is in the Box? – There were many entries for this contest. We played this contest on the blog as well as on Facebook. The answer was announced 2 days later, in the blog post announcing my new book. The winner was Manas Dash from Bangalore. He answered "The box will contain SQL book authored by Vinod and Pinal". This was the closest answer we received. The Win 5 SQL Programming Books contest will have its winners announced by Nov 15th, and the winners will be notified by email. The Win 5 SQL Wait Stats Books contest is closed and winners have been sent their awards. My third book, SQL Server Interview Questions and Answers, ran out of stock in India within 36 hours of its launch. We are working very hard to make it available again. Thank you again for the excellent support! Without your participation, all the giveaways would have no significance. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQLAuthority News – 5th Anniversary Giveaways

    - by pinaldave
    Please read my 5th Anniversary post and my quick note on the history of the Database. I am sure that we all have friends, and we value friendship more than anything. In fact, the complete model of Facebook is built on friends. If you have lots of friends, you must be a lucky person. Having a lot of friends is indeed a good thing. I consider all you blog readers my friends, so now I want to do something for you. What is it? Well, send me details about how many of your friends like my page and you will have a chance to win lots of learning materials for yourself and your friends. Here are the exciting prizes awaiting the lucky winner: Combo set of 5 Joes 2 Pros Books – 1 for YOU and 1 for a Friend. This is a USD 444 gift (each set is worth USD 222). It contains all five Joes 2 Pros books (Vol1, Vol2, Vol3, Vol4, Vol5) + 1 Learning DVD. [Amazon] | [Flipkart] If you submitted an entry but didn't win the combo set of 5 Joes 2 Pros books, you could still win my SQL Server Wait Stats book as a consolation prize! I will pick the next 5 participants who have the highest number of friends who "liked" the Facebook page, http://facebook.com/SQLAuth. Instead of sending one copy, I will send you 2 copies so you can share one with a friend of yours. Well, it is important to share our learning and love with friends, isn't it? Note: Just take a screenshot of http://facebook.com/SQLAuth using the Print Screen function and send it by Nov 7th to pinal 'at' sqlauthority.com. There are no special freebies for early birds, so take your time and see if you can increase your friend like count by Nov 7th. Guess – What is in it? It is quite possible you are not a Facebook or Twitter user. In that case you can still win a surprise from me. You have 2 days to guess what is in this box. If you guess correctly and you are one of the first 5 persons with the correct answer – you will get what is in the box for free. Please note that you have only 48 hours to guess. Please give me your guess by commenting on this blog post. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Milestone, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Friday Fun: Snowball

    - by Asian Angel
    It is Christmas Eve and hopefully you are enjoying the start of an early weekend away from work. This week we have a snowball throwing game for you to try out, so bundle up and get ready to let those snowballs fly! Snowball: The object of the game is to use your snowball ammo to harass the drunk businessman and send him flying as far as you can, distance-wise. Simply use your mouse to aim and click the left button to throw snowballs. You can monitor your stats on the silver bar towards the top of the window. The sound can also be disabled if the music is bothering you, but keep in mind that all sound will be disabled if you use that option. Time to get those snowballs flying through the air!! Keep hitting the businessman with your snowballs as you chase after him. Make certain that your aim is good or you will quickly run out of snowballs! You can really get him moving along at a good rate, and he can even go high enough in the air to disappear off the screen for a few moments. There is also a chance that your aim will be so wicked with the snowballs that you will literally knock the drunk businessman's head off! Weird but possible… The game ends when one of these two events occurs: 1.) you run out of snowballs, or 2.) the businessman literally bounces back at you and then drops behind you, as seen in the screenshot here. The moment either happens, your score will pop up and you will have the opportunity to try again. Have fun! Note: The bounce back event can happen when encountering cars. Play Snowball

    Read the article

  • SQL SERVER – Get 2 of My Books FREE at Koenig Tech Day – Where Technologies Converge!

    - by pinaldave
    As a regular reader of my blog, you must be aware that I love to write books and talk about the various subjects they cover. The founders of Koenig Solutions are very old friends of mine; I have known them for many years, and they have been the biggest supporters of my books. This coming weekend they have a technology event at their Bangalore location. Every attendee of the technology event will get a set of two books worth Rs. 450 – 'SQL Server Interview Questions And Answers' and 'SQL Wait Stats Joes 2 Pros'. I am going to cover a couple of topics from the books and present as well. I am very confident that every attendee will have a great time. I will be covering the following subject: SQL Server Tricks and Tips for Blazing Fast Performance. Slow running queries are the most common problem developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how SQL Server has been set up. The session will focus on ways of identifying problems that slow down SQL Server, and tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Once the session is over, I will point to the exact location in the book where you can continue the learning. I am pretty excited; this is much like a book reading, but in an entirely different format. The one-day event will cover four technologies in four separate interactive sessions: Microsoft SQL Server, Security, VMware/Virtualization, and ASP.NET MVC. Date of the event: Dec 15, 2012, 9 AM to 6 PM. Location of the event: Koenig Solutions Ltd. # 47, 4th Block, 100 Feet Road, 3rd Floor, Opp. Shanthi Sagar, Koramangala, Bangalore - 560034. Mobile: 09008096122. Office: 080-41127140. The organizers have informed me that there are very limited seats for this event, and the technical session based on my book will start at 9 AM sharp. If you show up late there is a chance that you will not get a seat. Registration for the event is a MUST. Please visit this link for further information. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • 24HOP gets off to a good start

    - by Rob Farley
    Session 11 is on as I write this – Ami Levin presenting about Primary Keys. It’s a good session. But actually, they’ve all been excellent so far, not just Ami’s. I’ve heard only good things about the content. So if you’re reading this and 24HOP is still on, then tune in and take part. If it’s finished, get yourself over to http://sqlpass.org/24hours and see if the sessions have been made available on-demand. Yes – you should be able to watch the sessions when you want to for a year. Watching live is best, because you can ask questions and have them answered during the session, but if there are ones you just couldn’t make, then watching them on-demand is a good option. Numbers have been “not bad”. At the moment it’s still the middle of the night for most Americans – about 6:30am in New York, and yet we’ve had well over a hundred at all the sessions so far, getting up to well over 300 for some sessions. And when I look through the list of names, I see a bunch of names that suggest we’re reaching people from all around the world. I’m seriously looking forward to seeing the stats about which countries have been represented in the audiences. There have been a few comments about the platform. Everyone seems to consider IBTalk an improvement on LiveMeeting, but the closed captioning has met a mixed reception. Some people are loving it, whereas other people are finding the translations leave quite a bit of space for improvement. If you have feedback on this, please feel free to drop me an email (my name with an underscore at hotmail.com, or with a dot at sqlpass.org should reach me just fine, or Twitter, etc). I don’t know how many of the sessions I’ll get to watch overnight – but I’m looking forward to seeing how things go as the day progresses. Big thanks to everyone who’s involved – the sponsors, PASS HQ team and the IBTalk folk who have stayed up overnight to facilitate, plus the moderators, the people doing the live captioning, and of course the speakers and attendees. I love how the SQL Community gets behind things like this. Earlier, the Adelaide SQL Server User Group gathered and watched Denny Lee’s session on BigData, and everyone in the group agreed that it worked really well. I took a picture of our cinema room, although you could only see a small section of the audience. @rob_farley

    Read the article

  • Cannot install packages. "Warning: untrusted versions..." plus "method driver /usr/lib/apt/methods/http could not be found"

    - by Steve Tjoa
    Judging from Internet forums, these errors appear to be popular when attempting to install packages:

        steve:~$ sudo aptitude install examplepackage
        The following NEW packages will be installed:
          examplepackage examplepackage-common{a}
        0 packages upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
        Need to get 1,834 kB of archives. After unpacking 7,631 kB will be used.
        Do you want to continue? [Y/n/?]
        WARNING: untrusted versions of the following packages will be installed!

        Untrusted packages could compromise your system's security.
        You should only proceed with the installation if you are certain that
        this is what you want to do.

          examplepackage examplepackage-common

        Do you want to ignore this warning and proceed anyway?
        To continue, enter "Yes"; to abort, enter "No": Yes
        E: The method driver /usr/lib/apt/methods/http could not be found.
        E: The method driver /usr/lib/apt/methods/http could not be found.
        E: Internal error: couldn't generate list of packages to download

    I followed this post by uninstalling ubuntu-keyring. But I cannot reinstall ubuntu-keyring or ubuntu-minimal -- the above errors reappear. In fact, I don't even seem to have apt (I must have caused this along the way by trying a bad solution, or maybe a clean):

        steve:~$ sudo apt-get update
        sudo: apt-get: command not found

    Aptitude works, but I can't install apt:

        steve:~$ sudo aptitude install apt
        The following NEW packages will be installed:
          apt
        0 packages upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 1,046 kB of archives. After unpacking 3,441 kB will be used.
        E: The method driver /usr/lib/apt/methods/http could not be found.
        E: The method driver /usr/lib/apt/methods/http could not be found.
        E: Internal error: couldn't generate list of packages to download

    ...or update:

        steve:~$ sudo aptitude update
        E: The method driver /usr/lib/apt/methods/http could not be found.
        E: The method driver /usr/lib/apt/methods/http could not be found.
        E: The method driver /usr/lib/apt/methods/http could not be found.

    I tried this post. Didn't help. To summarize, the main problem is that I cannot install anything. While attempting to fix the problem, the other aforementioned errors occurred. Can you help me fix this error? Feel free to ask if you need more information. Stats:

        steve:~$ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 11.10
        Release:        11.10
        Codename:       oneiric

    Read the article

  • The Numbers of Customer Experience

    - by Christie Flanagan
    This week, we'll be continuing our conversations about Customer Experience (CX) on the Oracle WebCenter blog. While we all know that customer experience is critically important for acquiring new customers and engendering long-term brand loyalty, I thought we could kick this week off by taking a look at the numbers of customer experience. I'm sure you'll agree that nothing quite puts things into perspective like numbers and figures. A whopping 86% of consumers say that they are willing to pay more for a better customer experience. But many companies are failing to step up to the challenge. And when companies fail to deliver on customer experience expectations, they leave money on the table. A huge percentage of customers, 89%, begin doing business with a competitor following a poor customer experience. Breaking up isn't hard to do, and today's empowered customers have no qualms about taking their business elsewhere when their expectations for customer experience are not met. Over a quarter of consumers, 26%, posted a negative comment on a social networking site like Facebook or Twitter following a poor customer experience. Today, individual customer service failures have the ability to easily snowball. An unsatisfied customer can easily share their rancor with their entire social network and chip away at your brand's reputation. A large number of consumers, 79%, who shared complaints about poor customer experience online had their complaints ignored. Companies ignore customer complaints at their own peril, and unsatisfied customers, when handled effectively, have the potential to become advocates for your brand. Of the 21% of consumers who did get responses to complaints, more than half had positive reactions to the same company about which they were previously complaining. Half of consumers will give a brand only a week to respond to a question before they stop doing business with them. The clock is ticking when customers have questions about your brand, and a week is an eternity in the realm of customer experience. The source for these stats is the 2011 Customer Experience Impact (CEI) Report, which explores the relationship between consumers and brands. The report is based on a survey commissioned by RightNow (acquired by Oracle in 2012) and conducted by Harris Interactive. If you're interested in seeing more facts and figures about customer experience, download the full report.

    Read the article

  • Avoiding Duplicate Content Penalties on a Corporate/Franchise website

    - by heath
    My question is really an extension of a previous question that was ported from Stack Overflow and closed, so I cannot edit it. The basic gist is that a regional franchise company has decided to force all independent stores into one website look; they currently all have their own domains and completely different websites. After reading the helpful answers and looking over some links provided, I think my solution is to put a 301 on each franchise store site (acme-store1.com, acme-store2.com, etc.) back to the main corporate site (acme.com). All of the company history, product info, etc. (about 90% of the entire site) applies to all stores. However, each store should have some exclusive content such as staff, location pictures, exclusive events and promotions, etc. I originally thought that I would simply do something like acme.com/store1/staff, acme.com/store2/staff, etc. for the store-exclusive content, and then acme.com/our-company, for example, would cover all stores. However, I now see two issues that I don't know how to solve. 1. They want to see site stats based on which store site the visitor came from. If a user comes from acme-store1.com, is redirected to acme.com and hits several pages, don't I need to somehow keep that original site in the new URL to track each page in that user's session and show they originally came from acme-store1.com? 2. Each store is still independently owned and is essentially still in competition with the other stores, albeit in less competition than with other brands. This is important because each store would like THEIR contact info, links to their social media pages, their mailing list sign-up and customer requests on EVERY page. So if a user originally goes to acme-store1.com and is redirected to acme.com, it should still look to the user as if it's all about store 1, even though 90% of the content will be exactly the same as on the store 2, store 3 and corporate sites. For example, acme.com/our-company would have the same company history and the same header/footer/navigation, BUT depending on the original site the user came from, it would display contact info and links for THAT store. If someone came directly to the corporate site, it would display corporate contact info and links (they have their own as well). I was considering that all redirects would go to store1.acme.com, store2.acme.com, etc. (or acme.com/store1), and then I could dynamically add the contact info and appropriate links based on the subdomain or subfolder. But then I have to worry about duplicate content penalties because, again, about 90% of the text in these "subdomains" is the same. For reference, this is a PHP5 site. I've already written a compact framework utilizing templates and mod_rewrite that I've used for other sites. Is this an easy fix that I'm just not grasping? Any suggestions?

    Read the article

  • Wrong statistics in AUX_STATS$ might puzzle the optimizer

    - by Mike Dietrich
    We have been recommending the creation of System Statistics for quite a long time. Since Oracle 9i the optimizer works with a CPU and IO cost based model, and in order to give the optimizer some knowledge about the IO subsystem's performance and throughput, System Statistics - once collected - get stored in AUX_STATS$. For this purpose, back in the old Oracle 9i days, some default values were defined - and you'll still find those defaults in AUX_STATS$ in Oracle Database 11g Release 2. But these old values don't reflect the performance of modern IO systems. So it might be a good best practice, post upgrade, to create fresh System Statistics if you haven't done this before. You can collect System Statistics with:

        exec DBMS_STATS.GATHER_SYSTEM_STATS('start');

    and end it later by executing:

        exec DBMS_STATS.GATHER_SYSTEM_STATS('stop');

    You could also run DBMS_STATS.GATHER_SYSTEM_STATS('interval', interval=>N) instead, where N is the number of minutes after which statistics gathering is stopped automatically. Please make sure you do this during a real workload period; it won't make sense to gather these values while the database is in an idle state. You should ideally do this for several hours. It doesn't affect performance in a negative way, as the values are collected in V$SYSSTAT and V$SESSTAT anyway. And in case you'd like to delete the stats and revert to the old default values, you'd simply execute:

        exec DBMS_STATS.DELETE_SYSTEM_STATS;

    The tricky thing in Oracle Database 11.2 - and that's why I'm actually writing this blog post today - is bug 9842771. It leads to wrong values in AUX_STATS$ for SREADTIM and MREADTIM, off by a factor of 1000, sometimes guiding the optimizer in a totally wrong direction. The workaround is to overwrite these values manually and divide them by 1000, using the DBMS_STATS.SET_SYSTEM_STATS procedure. See MOS Note 9842771.8 on the above bug for further information. This issue is fixed in Oracle Database 11.2.0.3 and above. To get some background information about the statistics collected in AUX_STATS$, please read this section in the Oracle Database 11.2 Performance Tuning Guide. And gathering System Statistics might have some implications if you have mixed workloads - it interacts with DB_FILE_MULTIBLOCK_READ_COUNT. For more information please read section 13.4.1.2.
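    For illustration, here is a minimal PL/SQL sketch of the manual workaround described above - read the inflated values and write them back divided by 1000 via DBMS_STATS.SET_SYSTEM_STATS. The factor-1000 scaling assumes your AUX_STATS$ values really are affected by bug 9842771, so verify against AUX_STATS$ before and after:

        DECLARE
          v_status   VARCHAR2(30);
          v_dstart   DATE;
          v_dstop    DATE;
          v_sreadtim NUMBER;
          v_mreadtim NUMBER;
        BEGIN
          -- Read the current (inflated) single- and multi-block read times
          DBMS_STATS.GET_SYSTEM_STATS(v_status, v_dstart, v_dstop, 'SREADTIM', v_sreadtim);
          DBMS_STATS.GET_SYSTEM_STATS(v_status, v_dstart, v_dstop, 'MREADTIM', v_mreadtim);
          -- Write them back scaled down by factor 1000, per the workaround
          DBMS_STATS.SET_SYSTEM_STATS('SREADTIM', v_sreadtim / 1000);
          DBMS_STATS.SET_SYSTEM_STATS('MREADTIM', v_mreadtim / 1000);
        END;
        /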

    Read the article

  • How to wrap console utils in webserver

    - by Alex Brown
    I have a big dataset (100MBs/day) and a bunch of console and Tcl/Tk tools to view it - I want to turn it into a web app that I can build, and others can maintain. In long: my group runs simulations yielding 100s of MBs of data daily, in multiple (mostly but not only) text forms. We have a bunch of scripts and tools, mostly old school 1990's style stuff requiring a 5-button mouse, as well as lots of ad-hoc scripts that engineers build out of frustration every month or so. These produce UIs, graphs, spreadsheets (various sizes), logs, event histories, etc. I want to replace (or at least supplement) the xwindows/console style UI with a web-based one, so I need the following properties: pleasant to program; can wrap existing command-line tools in separate views (I don't need to scrape GUIs or anything); as I port logic from the existing scripts I can create a modularised and pleasant codebase to replace it; and I can attach a web UI to navigate between views - each view is likely to contain keys which might make sense to view in another. I am new to building systems that have logic on both the back-end and front-end of a web server. From that point of view, they do this: the backend wraps old-school executables, constructs calls into them, then takes the output, wraps it up, cleans it up and delivers it to the web client. For instance, a tool might generate a number of indexed images (per invocation) which I might deliver all at once or on demand. It may (probably will) need to do heavy stats on some sources. The frontend provides navigation connecting multiple views, performs requests from one view for data from another (or self to self), etc. It will probably have some views with a lot of interactivity. Can people please point me towards viable solutions for this? I know it's a bit of an open question, so as answers come in I hope to refine the spec until we have a good match. I guess I expect to see answers like "RoR!" "beans!" "Scala!", but please give an indication of why those are a good fit; I know nothing! I got bumped off SO for asking an open-ended question, so sorry if it's OT here too (let me know). My policy is to use the best/closest-matched language for a project, but most of my team are extremely low level (i.e. pipeline stages and CDyn), so I don't have the peer group to know where to start.

    Read the article

  • Driving Growth through Smarter Selling

    - by Samantha.Y. Ma
    With the proliferation of social media and mobile technologies, the world of selling and buying has drastically changed, as buyers now have access to more information than they did in the past. In fact, studies have shown that buyers complete 60 percent of the buying process before they even engage with a salesperson. The old models of selling no longer work effectively; and the new way of selling is driven by customer insights. To succeed, sales need to be proactive, not reactive. They need to engage with the customer early, sometimes even before the customer’s needs are fully understood. In fact, the best sales reps prescribe a solution that the customer doesn't even know they need, often by leveraging social media to listen, engage and collaborate with peers. And they fully tap into the power of analytics and data to drive results.  Let’s look at some stats regarding challenges facing sales today. According to recent studies, sales reps spend 78 percent of their time doing administrative things -- such as planning, searching for information, data entry -- and only 22 percent of the time actually selling. Furthermore, 40 percent of B2B sales reps miss their quota, and only 3 percent of companies can say with confidence that their forecasts are “always accurate.” How do you drive growth in this modern day and age? It's not just getting your sales teams to work harder; it's helping them work smarter and providing them with a solution they want to use, on the device(s) they already know, giving them critical insights and tools to be more productive, increase win rates, and close deals faster. Oracle Sales Cloud was designed to do exactly that. It enables smarter selling that allows reps to sell more, managers to know more, and companies to grow more.  Let’s face it—if all CRM solutions worked well, sales executives wouldn’t be having the same headaches as they had in the past. Join Oracle’s Thomas Kurian and Doug Clemmans on Tuesday, October 22 as they explain: • How today’s sales processes have rendered many CRM systems obsolete • The secrets to smarter selling, leveraging mobile, social, and big data • How Oracle Sales Cloud enables smarter selling—as proven by Oracle and its customers Take the first step down the path toward smarter selling. With Oracle Sales Cloud, reps sell more, managers know more, and companies grow more.

    Read the article

  • career planning advice [closed]

    - by JDB
    Possible Duplicate: Are certifications worth it? I am at the point in my career where people either start to veer off into management-type roles or focus on solidifying their technical skills to stay in the development game for the long haul. Here's my story: I've got a degree in economics, an MA in Political Science and an MBA in Finance and Management. In addition, I've done coursework in advanced math and software development (although no degree in math or software). All in all, I've got 13 years of post-secondary education under my belt. I, however, currently work as a software developer using C# for desktop, Silverlight, Flex and JavaScript for web, and Objective-C for mobile. I've been in software development for the past 3.3 years, and it seems to come pretty easily to me. I work in a field called "geospatial information systems," which involves customization and manipulation of geospatial data. Right now I am looking at one of several certifications. Given this background, which of these certifications has the highest ceiling? CFA; PMP; various development/technological certifications from Microsoft, etc.; other? My academic and work experience are all heavy on the analytical/development side, especially given the MBA and the B.S. in Econ. The political science degree was really a lot of stats. So it seems that I would do well pursuing the CFA/analytical role. This is a difficult path, however, because I have no work experience in the financial sector, and the developers in finance are all "quants" - which, again, I am OK with, but I haven't done much statistical modeling in the past 3.3 years. The PMP would require knowledge of best practices as they pertain explicitly to software development. I also don't enjoy a lot of business travel, a common theme for most PMP jobs I've seen. If certifications are the route, which would you recommend? Anything else? I've thought about going back to try to knock out a B.S. in C.S., but I wasn't sure how long that would take, or what would be involved. Thoughts or recommendations? Thanks in advance! I turn 32 this weekend, which is what has forced me to think about these issues.

    Read the article

  • Running a program on boot without login, using the screen

    - by configurator
    Preface: I have a server running on an old laptop. The screen is always on with a login prompt, but because its keyboard is in pretty bad shape, I use it exclusively via ssh. The screen is in a good position, though; I want to use it to display a clock and some stats about what my server is doing. I have scripts to display all those things, but I want to always show them on the monitor screen. My question is: how do I get my script (called HUD) to run on /dev/tty1, instead of the login prompt? Hopefully, it should be possible to accept keyboard input as well as display its output, so that it can use the keyboard to show more info where needed in a future version. I'd also like tty2 etc. to remain active as login screens; in fact, I do occasionally need to log in locally. For a start, I tried creating a script that I can run from ssh to start the HUD. It goes something like this:

        ( flock -n 9 watch --interval 0.2 --precise --color --notitle --exec /path/to/script & disown ) 9> /var/lock/hud > /dev/tty1 2> /dev/tty1 < /dev/tty1

    (I had to use & disown instead of nohup because nohup recognized the tty and redirected output to nohup.out instead.) This sort-of works. However, it has a few issues: It doesn't steal the terminal's keyboard input, so you can't Ctrl+C to get out of it (nor change the script to actually use the keyboard input), and if you press Enter it shows it and scrolls the display, never refreshing it correctly afterwards. Oddly, if I disconnect the ssh session which created it, it stops working and shows a message: exec: No such file or directory. If I reconnect via ssh, it resumes functioning properly. It feels hackish. Is there a better way to do this? How?

    Read the article

  • Homepage issue on Google [closed]

    - by nico
    We recently updated our website www.blinds4uk.co.uk with a new homepage containing additional features and more on-page content, but since then we have lost primary keyword positions and the homepage has disappeared completely. The only time it appears is for an exact search for 'blinds4uk'. Today I took snippets of 'unique content' from the homepage and put them into a Google search, but our homepage was nowhere to be found. When I did the same in Yahoo, the homepage came up. Are we missing something? We ranked in the top 4 for the primary keyword terms 'blinds uk' & 'uk blinds', but now we don't show anywhere for these terms. Our homepage has never ranked well for the primary keyword 'blinds', yet our internal pages rank very well, with many pages on page 1 of Google UK. We employed an SEO firm for 9 months to help us establish the issues with the homepage, but they never could, so we got rid of them. We have been trying to get to the root cause of why the homepage ranks so poorly for a number of years, and only yesterday we established that we had the meta tag directly below the tag while our title & meta description were further down the page; we corrected this today. I'm not sure what effect this would have on the way Google reads the homepage, but we are trying everything to get the homepage ranking for those primary keyword terms. Our current developers & ex-SEO guys are all part of the same company and cannot pinpoint anything, other than telling us to carry on with their SEO team because it will take time; it just comes across as a milking exercise. Another thing I have found very strange is the data from our 'traffic audience'. We are a UK-based website, yet our traffic stats were showing as: UK 36.6%, Denmark 35.8% and India 27.6% - this doesn't make sense to me! Is there anybody out there who could simply point us in the right direction to the problem(s), so we can fix this once and for all? Could there be anything within the code that is causing the homepage not to display within Google for our primary keyword terms such as blinds, window blinds, etc.? I would appreciate any advice at all that may help us in our quest to sort out this homepage issue.

    Read the article

  • Separating logic and data in browser game

    - by Tesserex
    I've been thinking this over for days and I'm still not sure what to do. I'm trying to refactor a combat system in PHP (...sorry.) Here's what exists so far: There are two (so far) types of entities that can participate in combat. Let's just call them players and NPCs. Their data is already written pretty well. When involved in combat, these entities are wrapped with another object in the DB called a Combatant, which gives them information about the particular fight. They can be involved in multiple combats at once. I'm trying to write the logic engine for combat by having combatants injected into it. I want to be able to mock everything for testing. In order to separate logic and data, I want to have two interfaces / base classes, one being ICombatantData and the other ICombatantLogic. The two implementers of data will be one for the real objects stored in the database, and the other for my mock objects. I'm now running into uncertainties with designing the logic side of things. I can have one implementer for each of players and NPCs, but then I have an issue. A combatant needs to be able to return the entity that it wraps. Should this getter method be part of logic or data? I feel strongly that it should be in data, because the logic part is used for executing combat, and won't be available if someone is just looking up information about an upcoming fight. But the data classes only separate mock from DB, not player from NPC. If I try having two child classes of the DB data implementer, one for each entity type, then how do I architect that while keeping my mocks in the loop? Do I need some third interface, like an IEntityProvider, that I inject into the data classes? Also, with some of the ideas I've been considering, I feel like I'll have to put checks in place to make sure you don't mismatch things, like making the logic for an NPC accidentally wrap the data for a player. Does that make any sense? Is that a situation that would even be possible if the architecture is correct, or would the right design prohibit it completely, so I don't need to check for it? If someone could help me just lay out a class diagram or something for this, it would help me a lot. Thanks. Edit: Also useful to note, the mock data class doesn't really need the Entity, since I'll just be specifying all the parameters like combat stats directly instead. So maybe that will affect the correct design.

    Read the article

  • Insanity – Day 1

    - by D'Arcy Lussier
    Some people do those posts about "Here's what I'm going to do to change my life". I don't really like those; I'm a "don't tell me what you're going to do, tell me what you're doing/have done" type of guy. So while I could say how I'm going to change my life and be healthier and happier and thinner in 60 days, I'm just going to tell you how I'm progressing through the Insanity workout. Insanity is a workout-in-a-box: a collection of DVDs that you work out to. Nothing new here, except that the program is intense. Its core tenet is intensity: you work out at an elevated heart rate for 3 minutes with a short break in between, as opposed to traditional interval training where you go hard for a short time and recover for a few minutes. The other aspect of it is commitment. There is a timetable to follow: you work out 6 days a week with one rest day. This isn't meant to be a "pick it up whenever you feel like it" type of program. The videos themselves are kind of… I don't want to say low quality, but not as polished as I expected. Maybe that's what they were going for; by that I mean they show shots of cameramen and the production equipment during the workout. Otherwise, it's the Insanity leader Shawn T. leading a group of pretty fit folks through this gruelling workout. And ultimately, those little production nit-picks are irrelevant compared to the actual workout. Holy crap. I haven't done an aerobics class in like… ever. And watching the video before my actual workout, I thought "That doesn't look too hard". Believe me, it is. No weights, no machines, just various exercises done in circuits and with increasing speed. By the end of the workout, I was drenched. So that was day one. Some stats, just so I can track it: I was 286 lbs at my last weigh-in, a week ago or so. Should be interesting to see what 60 days of this does! D

    Read the article

  • How did Google Analytics kill my site?

    - by user1813359
    Yesterday I created a Google Analytics profile for one of my sites and included the JS block in the layout template. What happened next was very strange: within about 2 minutes, the site had become unreachable. I had been checking the AWStats page for the site when I thought to set up GA. After that had been done, I clicked on the link for 404 stats, which opens in a new tab. It churned for a long while and then showed a nearly blank page, similar to what you see when Firefox chokes on a badly formatted XML page, except there was no error message. But I was logged into the server and could see that the page has an HTML 4.01 Transitional DTD. Strange! I tried viewing source, but it just churned endlessly. I then tried "inspect element" and was able to see an error message having to do with some internal Firefox lib; unfortunately, I neglected to copy that. :-( All further attempts to load anything on the site would time out. Firebug's Net panel showed no request being made. Chrome would time out. So, I deleted the GA profile, removed the JS block, and cleared the server cache. No joy. I then removed all Google cookies and disabled JS. Still nothing. No luck in any other browser. And now my client couldn't access the site. Terrific. I was able to use wget while logged into another server; the retrieved page was fine, and did not contain the GA JS block. However, the two servers are on the same network. (Perhaps a clue.) The server itself was fine. Ping and traceroute looked great. I could SSH in. I tailed the access log and tried a browser request. Nothing. But I forgot to quit, and a minute or so later I saw a request from someone else being logged. Later, I could see that requests had been served all day to some people. Now, 24 hours later, the site works once again, but is still unreachable by the client (who is in another city). So, does anyone have some insight into what's going on? Does this have something to do with Google's CDN? I don't know very much about how GA works, but what I'm seeing reminds me of DNS propagation issues. And why the initial XML error? And why the heck was the site just plain unreachable? What did Google do to my site?! Sorry for the length, but I wanted to cover everything.
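
    For what it's worth, symptoms like this are easier to pin down by probing each layer separately from a network outside the server's own; a rough sketch (the hostname is a stand-in, not the poster's site):

      import socket
      import urllib.request

      host = "www.example.com"  # stand-in for the affected site

      # 1. DNS: does the name resolve at all from this network?
      try:
          addr = socket.gethostbyname(host)
          print("DNS ok:", addr)
      except socket.gaierror as exc:
          raise SystemExit("DNS failure: %s" % exc)

      # 2. TCP: can we open port 80, independent of HTTP?
      try:
          with socket.create_connection((addr, 80), timeout=10):
              print("TCP connect ok")
      except OSError as exc:
          raise SystemExit("TCP failure: %s" % exc)

      # 3. HTTP: does the server answer a full request?
      try:
          with urllib.request.urlopen("http://%s/" % host, timeout=15) as resp:
              print("HTTP", resp.status, "-", len(resp.read()), "bytes")
      except Exception as exc:
          print("HTTP failure:", exc)

    The wget test from a same-network server, as the poster suspected, can mask exactly the DNS- or routing-level failures this separates out.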

    Read the article

  • Using dd-wrt Dynamic DNS client with CloudFlare

    - by Roman
    I'm trying to configure the Dynamic DNS client on my router with dd-wrt (v24-sp2) firmware so that it dynamically updates the IP address in one of my DNS records. Unfortunately, I encountered a problem. Here is an example request from their ddclient configuration: https://www.cloudflare.com/api.html?a=DIUP&u=<my_login>&tkn=<my_token>&ip=<my_ip>&hosts=<my_record> It works if I use it in a browser, but in dd-wrt I get this output:

      Tue Jan 24 00:36:47 2012: INADYN: Started 'INADYN Advanced version 1.96-ADV' - dynamic DNS updater.
      Tue Jan 24 00:36:47 2012: I:INADYN: IP address for alias '<my_record>' needs update to '<my_ip>'
      Tue Jan 24 00:36:48 2012: W:INADYN: Error validating DYNDNS svr answer. Check usr,pass,hostname! (HTTP/1.1 303 See Other
      Server: cloudflare-nginx
      Date: Mon, 23 Jan 2012 14:36:48 GMT
      Content-Type: text/plain
      Connection: close
      Expires: Sun, 25 Jan 1981 05:00:00 GMT
      Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Pragma: no-cache
      Location: https://www.cloudflare.com/api.html?a=DIUP&u=<my_login>&tkn=<my_token>&ip=<my_ip>&hosts=<my_record>
      Vary: Accept-Encoding
      Set-Cookie: __cfduid=<id>; expires=Mon, 23-Dec-2019 23:50:00 GMT; path=/; domain=.cloudflare.com
      Set-Cookie: __cfduid=<id>; expires=Mon, 23-Dec-2019 23:50:00 GMT; path=/; domain=.www.cloudflare.com
      You must include an `a' paramiter, with a value of DIUP|wl|chl|nul|ban|comm_news|devmode|sec_lvl|ipv46|ob|cache_lvl|fpurge_ts|async|pre_purge|minify|stats|direct|zone_check|zone_ips|zone_errors|zone_agg|zone_search|zone_time|zone_grab|app|rec_se

    The URL from the "Location" header works perfectly, and the "a" parameter is included. What's the problem?
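
    As a sanity check outside inadyn, the same update request can be issued from any machine. A sketch in Python, with the placeholders from the post left in; this is the legacy 2012-era CloudFlare endpoint quoted above, so treat it as historical. Unlike inadyn, urllib follows the 303 See Other redirect automatically, which is what a browser does too:

      import urllib.parse
      import urllib.request

      # Placeholders exactly as in the post; fill in real account values.
      params = {
          "a": "DIUP",           # dynamic IP update action
          "u": "<my_login>",
          "tkn": "<my_token>",
          "ip": "<my_ip>",
          "hosts": "<my_record>",
      }
      url = "https://www.cloudflare.com/api.html?" + urllib.parse.urlencode(params)

      # urlopen follows the 303 See Other transparently, which is why the
      # same URL works in a browser but trips up inadyn's answer validation.
      with urllib.request.urlopen(url) as resp:
          print(resp.status, resp.read().decode())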

    Read the article

  • HA Proxy Stick-table and tcp-connection configuration

    - by Vladimir
    I am using HAProxy (HA-Proxy version 1.4.18 2011/09/16). I am trying to insert the following into the /etc/haproxy/haproxy.cfg file:

      # Use General Purpose Counter (gpc) 0 in SC1 as a global abuse counter
      # Monitors the number of requests sent by an IP over a period of 10 seconds
      stick-table type ip size 1m expire 10s store gpc0,http_req_rate(10s)
      tcp-request connection track-sc1 src
      tcp-request connection reject if { src_get_gpc0 gt 0 }

      # Table definition
      stick-table type ip size 100k expire 30s store conn_cur(3s)

      # Allow clean known IPs to bypass the filter
      tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }

      # Shut the new connection as long as the client already has 10 opened
      tcp-request connection reject if { src_conn_cur ge 10 }
      tcp-request connection track-sc1 src

    I get the following error:

      [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:36] : stick-table: unknown argument 'store'.
      [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:37] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
      [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:38] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
      [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:41] : stick-table: unknown argument 'store'.
      [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:43] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
      [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:45] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
      [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:46] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
      [ALERT] 256/113143 (4627) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
      [WARNING] 256/113143 (4627) : Proxy 'http_proxy': in multi-process mode, stats will be limited to process assigned to the current request.
      [ALERT] 256/113143 (4627) : Fatal errors found in configuration.
      [fail]

    Could you please tell me what is wrong with the code? Thanks!

    Read the article

  • Email with extra '.com' behind sender email address

    - by CHT
    I currently have a situation where I sent an email to [email protected], but when I receive mail from [email protected], it shows as [email protected], with an extra '.com' behind the email address. This only started happening this week; before this, I didn't change any settings. I am currently using Outlook 2010. When I checked the email in webmail, it also showed as [email protected], so it seems to have nothing to do with Outlook. I also tried Thunderbird 16.0.1, but the problem is the same. Has anyone experienced this before? Is the problem caused by the sender or the receiver? The message header is below:

      Return-Path: [email protected]
      Received: from colo4.roaringpenguin.com (not-assigned.privatedns.com [174.142.115.36] (may be forged)) by pioneerpos.com (8.12.11/8.12.11) with ESMTP id q9V6OsKU032650 for [email protected]; Wed, 31 Oct 2012 01:24:55 -0500
      Received: from mail.pointsoft.com.tw (pointsoft.com.tw [59.124.242.126]) by colo4.roaringpenguin.com (8.14.3/8.14.3/Debian-9.4) with ESMTP id q9V6OmN0026374 for [email protected]; Wed, 31 Oct 2012 02:24:50 -0400
      X-MimeOLE: Produced By Microsoft Exchange V6.5
      Content-class: urn:content-classes:message
      MIME-Version: 1.0
      Content-Type: multipart/mixed; boundary="----_=_NextPart_001_01CDB730.6B3D5A51"
      Subject: =?big5?B?scTByrPmLblzpfM=?=
      Date: Wed, 31 Oct 2012 14:25:16 +0800
      Message-ID:
      X-MS-Has-Attach: yes
      X-MS-TNEF-Correlator:
      thread-topic: =?big5?B?scTByrPmLblzpfM=?=
      thread-index: Ac23MH3YpZuLx2ejTYqR5PfoZ+IoBw==
      X-Priority: 1
      Priority: Urgent
      Importance: high
      From: "Alice" [email protected]
      To: "Bob" [email protected]
      X-Spam-Score: undef - pointsoft.com.tw is whitelisted.
      X-CanIt-Geo: ip=59.124.242.126; country=TW; region=03; city=Taipei; latitude=25.0392; longitude=121.5250; http://maps.google.com/maps?q=25.0392,121.5250&z=6
      X-CanItPRO-Stream: pioneerpos-com:default (inherits from rp-customers:default,base:default)
      X-Canit-Stats-ID: 02IhGoMJb - 2e7fa924443e - 20121031
      X-CanIt-Archive-Cluster: irqpXI7aJGyo4Ewta7qVH399FOg
      X-Scanned-By: CanIt (www . roaringpenguin . com) on 174.142.115.36
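
    One way to settle sender-versus-receiver is to inspect the raw message source instead of any client's rendering of it. A small sketch; the addresses here are made-up stand-ins (the real ones are redacted above), and the raw text would come from webmail's "view source" pasted into a string or read from a file:

      from email.parser import Parser
      from email.policy import default

      # Stand-in raw source; paste the real headers from webmail here.
      raw = """\
      Return-Path: <alice@pointsoft.com.tw>
      From: "Alice" <alice@pointsoft.com.tw.com>
      To: "Bob" <bob@pioneerpos.com>
      Subject: test

      (body)
      """

      msg = Parser(policy=default).parsestr(raw)

      # If the stray '.com' already appears here, the sending side wrote it
      # into the message; if not, the receiving client is mangling the display.
      print(repr(msg["From"]))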

    Read the article

  • FreeBSD performance tuning. Sysctls, loader.conf, kernel.

    - by SaveTheRbtz
    I wanted to share some knowledge about tuning FreeBSD via sysctls, so I'm posting them with comments. This is based on Igor Sysoev's (the author of nginx) presentation about tuning FreeBSD for up to 100,000-200,000 active connections. The sysctls are for FreeBSD 7.x; since 7.2 amd64 some of them are tuned well by default, and prior to 7.0 some of them are boot-time only (set via /boot/loader.conf) or do not exist at all.

    Highload web server sysctls:

      # Max. backlog size
      kern.ipc.somaxconn=4096
      # Shared memory // 7.2+ can use shared memory > 2Gb
      kern.ipc.shmmax=2147483648
      # Sockets
      kern.ipc.maxsockets=204800
      # Do not use larger sockbufs on 8.0
      # ( http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 )
      kern.ipc.maxsockbuf=262144
      # Receive clusters (on amd64 7.2+ 65k is the default)
      # For such a high value vm.kmem_size must be increased to 3G
      #kern.ipc.nmbclusters=229376
      # Jumbo pagesize (4k/8k) clusters
      # Used as general packet storage for jumbo frames,
      # can be monitored via `netstat -m`
      #kern.ipc.nmbjumbop=192000
      # Jumbo 9k/16k clusters, if you are using them
      #kern.ipc.nmbjumbo9=24000
      #kern.ipc.nmbjumbo16=10240
      # Every socket is a file, so increase these
      kern.maxfiles=204800
      kern.maxfilesperproc=200000
      kern.maxvnodes=200000
      # Turn off receive autotuning
      #net.inet.tcp.recvbuf_auto=0
      # Small receive space, only usable on an http server; on a file server
      # this should be increased to 65535 or even more
      #net.inet.tcp.recvspace=8192
      # Small send space is useful for http servers that serve small files
      # (autotuned since 7.x)
      net.inet.tcp.sendspace=16384
      # This should be enabled if you are going to use big spaces (>64k)
      #net.inet.tcp.rfc1323=1
      # Turn this off on high-speed, lossless connections (LAN 1Gbit+)
      #net.inet.tcp.delayed_ack=0
      # This feature is useful if you are serving data over modems, Gigabit Ethernet,
      # or even high-speed WAN links (or any other link with a high bandwidth-delay
      # product), especially if you are also using window scaling or have configured
      # a large send window. You can try setting it to 0 on a file server with
      # 1GBit+ interfaces. Automatically disables on small RTT
      # ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 )
      #net.inet.tcp.inflight.enable=0
      # Disable randomizing of ports to avoid false RST.
      # Before use, check the SA here: www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf
      # (it also says that port randomization auto-disables at certain connection
      # rates, but I didn't test that)
      #net.inet.ip.portrange.randomized=0
      # Increase portrange; for outgoing connections only.
      # Good for seed-boxes and ftp servers.
      net.inet.ip.portrange.first=1024
      net.inet.ip.portrange.last=65535
      # Security
      net.inet.ip.redirect=0
      net.inet.ip.sourceroute=0
      net.inet.ip.accept_sourceroute=0
      net.inet.icmp.maskrepl=0
      net.inet.icmp.log_redirect=0
      net.inet.icmp.drop_redirect=1
      net.inet.tcp.drop_synfin=1
      # Security
      net.inet.udp.blackhole=1
      net.inet.tcp.blackhole=2
      # Increases the default TTL; sometimes useful (default is 64)
      net.inet.ip.ttl=128
      # Lessen max segment life to conserve resources
      # ACK waiting time in milliseconds (default: 30000 from the RFC)
      net.inet.tcp.msl=5000
      # Max number of timewait sockets
      net.inet.tcp.maxtcptw=40960
      # Don't use tw on local connections
      # As of 15 Apr 2009, Igor Sysoev says nolocaltimewait has a buggy
      # implementation, so disable it for now until it gets fixed
      #net.inet.tcp.nolocaltimewait=1
      # FIN_WAIT_2 state fast recycle
      net.inet.tcp.fast_finwait2_recycle=1
      # Time before a tcp keepalive probe is sent; default is 2 hours (7200000)
      #net.inet.tcp.keepidle=60000
      # Should be increased until net.inet.ip.intr_queue_drops is zero
      net.inet.ip.intr_queue_maxlen=4096
      # Interrupt handling via multiple CPUs, but with a context switch.
      # You can play with it. Default is 1.
      #net.isr.direct=0
      # This is for routers only
      #net.inet.ip.forwarding=1
      #net.inet.ip.fastforwarding=1
      # This speeds up dummynet when the channel isn't saturated
      net.inet.ip.dummynet.io_fast=1
      # Increase dummynet(4) hash
      #net.inet.ip.dummynet.hash_size=2048
      #net.inet.ip.dummynet.max_chain_len
      # Should be increased when you have A LOT of files on the server
      # (increase until vfs.ufs.dirhash_mem becomes lower)
      vfs.ufs.dirhash_maxmem=67108864
      # Explicit Congestion Notification
      # (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification)
      net.inet.tcp.ecn.enable=1
      # Flowtable - flow caching mechanism; useful for routers
      #net.inet.flowtable.enable=1
      #net.inet.flowtable.nmbflows=65535
      # Extreme polling tuning
      #kern.polling.burst_max=1000
      #kern.polling.each_burst=1000
      #kern.polling.reg_frac=100
      #kern.polling.user_frac=1
      #kern.polling.idle_poll=0
      # IPFW dynamic rules and timeout tuning
      # Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower
      net.inet.ip.fw.dyn_buckets=65536
      net.inet.ip.fw.dyn_max=65536
      net.inet.ip.fw.dyn_ack_lifetime=120
      net.inet.ip.fw.dyn_syn_lifetime=10
      net.inet.ip.fw.dyn_fin_lifetime=2
      net.inet.ip.fw.dyn_short_lifetime=10
      # Make packets pass the firewall only once when using dummynet,
      # i.e. packets going through a pipe are passed out of the firewall with accept
      #net.inet.ip.fw.one_pass=1
      # shm_use_phys wires all shared pages, making them unswappable.
      # Use this to lessen the Virtual Memory Manager's work when using
      # shared memory. Useful for databases.
      #kern.ipc.shm_use_phys=1

    /boot/loader.conf:

      # Accept filters for data, http and DNS requests
      # Useful when your software uses select() instead of kevent/kqueue,
      # or when you are under DDoS. The DNS accf is available on 8.0+.
      accf_data_load="YES"
      accf_http_load="YES"
      accf_dns_load="YES"
      # Async IO system calls
      aio_load="YES"
      # Adds NCQ support in FreeBSD
      # WARNING! All ad[0-9]+ devices will be renamed to ada[0-9]+
      # 8.0+ only
      #ahci_load=
      #siis_load=
      # Increase kernel memory size to 3G.
      #
      # Use ONLY if you have KVA_PAGES in your kernel configuration and more
      # than 3G RAM; otherwise a panic will happen on the next reboot!
      #
      # It's required for high buffer sizes: kern.ipc.nmbjumbop,
      # kern.ipc.nmbclusters, etc.
      # Useful on highload stateful firewalls, proxies or ZFS fileservers
      # (FreeBSD 7.2+ amd64 users: check that the current value is lower!)
      #vm.kmem_size="3G"
      # Older versions of FreeBSD can't tune maxfiles on the fly
      #kern.maxfiles="200000"
      # Useful for databases; sets maximum data size to 1G
      # (FreeBSD 7.2+ amd64 users: check that the current value is lower!)
      #kern.maxdsiz="1G"
      # Maximum buffer size (vfs.maxbufspace)
      # You can check the current one via vfs.bufspace.
      # Should be lowered/raised depending on the server's load type;
      # usually decreased to preserve kmem (default is 200M)
      #kern.maxbcache="512M"
      # Sendfile buffers; for i386 only
      #kern.ipc.nsfbufs=10240
      # syncache hash table tuning
      net.inet.tcp.syncache.hashsize=1024
      net.inet.tcp.syncache.bucketlimit=100
      # Increased hostcache
      net.inet.tcp.hostcache.hashsize="16384"
      net.inet.tcp.hostcache.bucketlimit="100"
      # TCP control-block hash table tuning
      net.inet.tcp.tcbhashsize=4096
      # Enable superpages, for 7.2+ only. Also read
      # http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html
      vm.pmap.pg_ps_enabled=1
      # Useful if you are using an Intel Gigabit NIC
      #hw.em.rxd=4096
      #hw.em.txd=4096
      #hw.em.rx_process_limit="-1"
      # Also, if you have A LOT of interrupts on the NIC, play with the
      # following parameters. NOTE: you should set them for every NIC.
      #dev.em.0.rx_int_delay: 250
      #dev.em.0.tx_int_delay: 250
      #dev.em.0.rx_abs_int_delay: 250
      #dev.em.0.tx_abs_int_delay: 250
      # There is also a multithreaded version of the em driver, which can be
      # found here: http://people.yandex-team.ru/~wawa/
      #
      # For additional em monitoring and statistics use
      # `sysctl dev.em.0.stats=1 ; dmesg`
      #
      # Same tunings for igb
      #hw.igb.rxd=4096
      #hw.igb.txd=4096
      #hw.igb.rx_process_limit=100
      # Some useful netisr tunables. See sysctl net.isr.
      #net.isr.defaultqlimit=4096
      #net.isr.maxqlimit: 10240
      # Bind netisr threads to CPUs
      #net.isr.bindthreads=1
      # Nicer boot logo =)
      loader_logo="beastie"

    And finally, here are my additions to the GENERIC kernel:

      # Just some of them; see also
      # cat /sys/{i386,amd64,}/conf/NOTES
      # This one is useful only on i386
      #options KVA_PAGES=512
      # You can play with HZ in environments with a high interrupt rate
      # (default is 1000; 100 is for my notebook, to prolong its battery life)
      #options HZ=100
      # Polling is good on network loads with high packet rates and low-end NICs.
      # NB! Do not enable it if you want more than one netisr thread.
      #options DEVICE_POLLING
      # Eliminate datacopy on socket read-write.
      # To take advantage of zero copy sockets you should have an MTU of
      # 8K (amd64) or 4k (i386). This requirement is only for receiving data.
      # Read more in man zero_copy_sockets.
      #options ZERO_COPY_SOCKETS
      # Support TCP signatures. Used for IPSec.
      options TCP_SIGNATURE
      options IPSEC
      # These can be loaded as modules; they are described in the
      # loader.conf section.
      #options ACCEPT_FILTER_DATA
      #options ACCEPT_FILTER_HTTP
      # Adding ipfw; can also be loaded as modules
      options IPFIREWALL
      options IPFIREWALL_VERBOSE
      options IPFIREWALL_VERBOSE_LIMIT=10
      options IPFIREWALL_DEFAULT_TO_ACCEPT
      options IPFIREWALL_FORWARD
      # Adding kernel NAT
      options IPFIREWALL_NAT
      options LIBALIAS
      # Traffic shaping
      options DUMMYNET
      # Divert, i.e. for userspace NAT
      options IPDIVERT
      # This is for OpenBSD's pf firewall
      device pf
      device pflog
      # pf's QoS - ALTQ
      options ALTQ
      options ALTQ_CBQ    # Class Based Queueing (CBQ)
      options ALTQ_RED    # Random Early Detection (RED)
      options ALTQ_RIO    # RED In/Out
      options ALTQ_HFSC   # Hierarchical Packet Scheduler (HFSC)
      options ALTQ_PRIQ   # Priority Queueing (PRIQ)
      options ALTQ_NOPCC  # Required for an SMP build
      # Pretty console
      # A manual can be found here: http://forums.freebsd.org/showthread.php?t=6134
      #options VESA
      #options SC_PIXEL_MODE
      # Disable reboot on Ctrl-Alt-Del
      #options SC_DISABLE_REBOOT
      # Change normal|kernel message colors
      options SC_NORM_ATTR=(FG_GREEN|BG_BLACK)
      options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK)
      # More scroll space
      options SC_HISTORY_SIZE=8192
      # Adding hardware crypto devices
      device crypto
      device cryptodev
      # Useful network interfaces
      device vlan
      device tap        # Virtual Ethernet driver
      device gre        # IP over IP tunneling
      device if_bridge  # Bridge interface
      device pfsync     # synchronization interface for PF
      device carp       # Common Address Redundancy Protocol
      device enc        # IPsec interface
      device lagg       # Link aggregation interface
      device stf        # IPv4-IPv6 port
      # Also for my notebook, but may be used with an Opteron
      #device amdtemp
      # Support for ECMP: more than one route for a destination.
      # Works even with the default route, so one can use it as LB for two ISPs.
      # For now the code is unstable and panics (panic: rtfree 2) on route deletions.
      #options RADIX_MPATH
      # Multicast routing
      #options MROUTING
      #options PIM
      # DTrace
      options KDTRACE_HOOKS  # all architectures - enable general DTrace hooks
      options DDB_CTF        # all architectures - kernel ELF linker loads CTF data
      #options KDTRACE_FRAME # amd64-only
      # Adaptive spinning in lockmgr (8.x+)
      # See http://www.mail-archive.com/[email protected]/msg10782.html
      options ADAPTIVE_LOCKMGRS
      # UTF-8 in console (9.x+)
      #options TEKEN_UTF8
      #options TEKEN_XTERM
      # NCQ support
      # WARNING! All ad[0-9]+ devices will be renamed to ada[0-9]+
      #options ATA_CAM
      # FreeBSD 9+: deadlock resolver thread. For additional information see
      # http://www.mail-archive.com/[email protected]/msg18124.html
      #options DEADLKRES

    PS. Most of FreeBSD's limits can be monitored via '# vmstat -z' and '# limits'.

    PPS. A variety of network counters can be monitored via '# netstat -s'. In FreeBSD 9, netstat's -Q option appeared; try '# netstat -Q' to display netisr stats.

    PPPS. Also see '# man 7 tuning'.

    PPPPS. I want to thank the FreeBSD community, especially Igor Sysoev (the author of nginx) and the nginx-ru@ and FreeBSD-performance@ mailing lists, for providing useful information about FreeBSD tuning.

    So here is the question: what tunings are you using on your FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc., with a description of their meaning (do not copy-paste from sysctl -d). Don't forget to specify the server type (web, smb, gateway, etc.). Let's share experience!
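
    A list this long is easy to drift from after upgrades, so here is a small convenience sketch, not from the post: it parses a sysctl.conf-style file (the path is an assumption) and diffs each entry against the running system via the sysctl command, so you can see which tunables actually took effect:

      import subprocess

      CONF = "/etc/sysctl.conf"  # assumed path; any key=value file with '#' comments works

      def desired(path):
          """Parse key=value pairs, skipping comments and blank lines."""
          out = {}
          with open(path) as fh:
              for line in fh:
                  line = line.split("#", 1)[0].strip()
                  if "=" in line:
                      key, val = line.split("=", 1)
                      out[key.strip()] = val.strip().strip('"')
          return out

      for key, want in sorted(desired(CONF).items()):
          try:
              have = subprocess.check_output(
                  ["sysctl", "-n", key], text=True, stderr=subprocess.DEVNULL
              ).strip()
          except subprocess.CalledProcessError:
              print("%-40s MISSING (not in this kernel?)" % key)
              continue
          status = "ok" if have == want else "DIFFERS (running: %s)" % have
          print("%-40s %s" % (key, status))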

    Read the article

  • Excel techniques for perfmon csv log file analysis

    - by Aszurom
    I have perfmon running against several servers, where I'm outputting data like CPU % time, memory bytes free, and hard disk I/O metrics like seconds/write and writes/second to a .csv file. The ones graphing the SQL servers are also collecting SQL stats, and the web servers are collecting .NET-relevant counters. I am aware of PAL, and in fact used it as a template for what data to capture based on server type. I just don't think the output it generates is detailed or flexible enough, though it does a pretty remarkable job of parsing logs and making graphs. I'm borderline incompetent with Excel, so I'm hoping to be directed to some knowledge of how to take a perfmon output .csv and mine it in Excel to produce numbers that are meaningful to me as a sysadmin. I could of course just pick a range of data, assemble a graph out of it, and look for spikes and trends, but I'm convinced there is some technique that makes this more manageable than staring at a monstrous spreadsheet of numbers and trying to make graphs of it. Plus, that's pretty time-consuming and not something I can do as a "take a glance at the servers" sort of routine. I'm graphing CPU, disk use, network b/sec, etc. in Cacti as well, which is nice for seeing big trends. The problem is that Cacti stores 5-minute averages, so a server could have an intermittent problem that simply washes out in a 5-minute average. What do you do with perfmon data that I could learn from?
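
    Not an Excel technique, but as a point of comparison: the same mining is a few lines in Python with pandas (a swapped-in tool, not something the post asks for; the filename is illustrative). Perfmon CSVs put a timestamp in the first column and one counter per remaining column, which maps directly onto a DataFrame:

      import pandas as pd

      # Illustrative filename; perfmon names the first column something like
      # "(PDH-CSV 4.0) (Central Standard Time)(360)" with one column per counter.
      df = pd.read_csv("perfmon_log.csv", low_memory=False)

      # The first column is the timestamp; make it the index.
      ts = df.columns[0]
      df[ts] = pd.to_datetime(df[ts])
      df = df.set_index(ts).apply(pd.to_numeric, errors="coerce")

      # A summary that survives a glance: mean, 95th percentile, max per counter.
      summary = df.describe(percentiles=[0.95]).T[["mean", "95%", "max"]]
      print(summary)

      # Spikes that a 5-minute average would wash out: resample to 1-minute
      # maxima and flag intervals above the 95th percentile of any CPU counter.
      cpu = [c for c in df.columns if "Processor" in c]
      if cpu:
          per_min = df[cpu].resample("1min").max()
          print(per_min[per_min.gt(df[cpu].quantile(0.95)).any(axis=1)])

    Windows' own relog.exe can also thin or filter a large perfmon log before a pass like this.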

    Read the article
