Search Results

Search found 11262 results on 451 pages for 'important directories'.


  • 6 Ways to Free Up Hard Drive Space Used by Windows System Files

    - by Chris Hoffman
    We’ve previously covered the standard ways to free up space on Windows. But if you have a small solid-state drive and really want more space, there are geekier ways to reclaim hard drive space. Not all of these tips are recommended — in fact, if you have more than enough hard drive space, following them may actually be a bad idea. There’s a tradeoff to changing each of these settings.

    Erase Windows Update Uninstall Files

    Windows allows you to uninstall patches you install from Windows Update. This is helpful if an update ever causes a problem — but how often do you need to uninstall an update, anyway? And will you really ever need to uninstall updates you installed several years ago? These uninstall files are probably just wasting space on your hard drive. A recent update for Windows 7 allows you to erase Windows Update files from the Windows Disk Cleanup tool. Open Disk Cleanup, click Clean up system files, check the Windows Update Cleanup option, and click OK. If you don’t see this option, run Windows Update and install the available updates.

    Remove the Recovery Partition

    Windows computers generally come with recovery partitions that allow you to reset your computer back to its factory default state without juggling discs. The recovery partition allows you to reinstall Windows or use the Refresh and Reset your PC features. These partitions take up a lot of space, as they need to contain a complete system image. On Microsoft’s Surface Pro, the recovery partition takes up about 8-10 GB; on other computers it may be even larger, as it needs to contain all the bloatware the manufacturer included. Windows 8 makes it easy to copy the recovery partition to removable media and remove it from your hard drive. If you do this, you’ll need to insert the removable media whenever you want to refresh or reset your PC. On older Windows 7 computers, you can delete the recovery partition using a partition manager — but ensure you have recovery media ready if you ever need to reinstall Windows. If you prefer to install Windows from scratch instead of using your manufacturer’s recovery partition, you can just insert a standard Windows disc whenever you want to reinstall Windows.

    Disable the Hibernation File

    Windows creates a hidden hibernation file at C:\hiberfil.sys. Whenever you hibernate the computer, Windows saves the contents of your RAM to the hibernation file and shuts down the computer. When it boots up again, it reads the contents of the file back into memory and restores your computer to the state it was in. As this file needs to contain much of the contents of your RAM, it’s about 75% of the size of your installed RAM; if you have 12 GB of memory, that means the file takes about 9 GB of space. On a laptop, you probably don’t want to disable hibernation. However, if you have a desktop with a small solid-state drive, you may want to disable hibernation to recover the space. When you disable hibernation, Windows deletes the hibernation file. You can’t move this file off the system drive: it needs to be on C:\ so Windows can read it at boot. Note that this file and the paging file are marked as “protected operating system files” and aren’t visible by default.

    Shrink the Paging File

    The Windows paging file, also known as the page file, is a file Windows uses if your computer’s available RAM ever fills up. Windows then “pages out” data to disk, ensuring there’s always available memory for applications — even if there isn’t enough physical RAM. The paging file is located at C:\pagefile.sys by default. You can shrink it, or disable it if you’re really crunched for space, but we don’t recommend disabling it, as that can cause problems if your computer ever needs some paging space. On our computer with 12 GB of RAM, the paging file takes up 12 GB of hard drive space by default. If you have a lot of RAM, you can certainly decrease the size — we’d probably be fine with 2 GB or even less. However, this depends on the programs you use and how much memory they require. The paging file can also be moved to another drive — for example, you could move it from a small SSD to a slower, larger hard drive. It will be slower if Windows ever needs to use the paging file, but it won’t use up precious SSD space.

    Configure System Restore

    Windows seems to use about 10 GB of hard drive space for “System Protection” by default. This space is used for System Restore snapshots, allowing you to restore previous versions of system files if you ever run into a system problem. If you need to free up space, you can reduce the amount of space allocated to System Restore or even disable it entirely. Of course, if you disable it entirely, you’ll be unable to use System Restore if you ever need it; you’d have to reinstall Windows, perform a Refresh or Reset, or fix any problems manually.

    Tweak Your Windows Installer Disc

    Want to really start stripping down Windows, ripping out components that are installed by default? You can do this with a tool designed for modifying Windows installer discs, such as WinReducer for Windows 8 or RT Se7en Lite for Windows 7. These tools allow you to create a customized installation disc, slipstreaming in updates and configuring default options. You can also use them to remove components from the Windows disc, shrinking the size of the resulting Windows installation. This isn’t recommended, as removing important features could cause problems with your Windows installation — but it’s certainly an option if you want to make Windows as tiny as possible.

    Most Windows users can benefit from removing Windows Update uninstallation files, so it’s good to see that Microsoft finally gave Windows 7 users the ability to quickly and easily erase them. However, if you have more than enough hard drive space, you should probably leave well enough alone and let Windows manage the rest of these settings on its own.

    Image Credit: Yutaka Tsutano on Flickr
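
    A quick addendum for command-line users: the hibernation, paging file, and System Restore changes described above can also be made from an elevated Command Prompt using standard Windows tools (powercfg, wmic, vssadmin). This is a rough sketch, not a recommendation; the 2 GB paging file and the 5 GB System Protection cap are illustrative values only:

        :: Disable hibernation and delete C:\hiberfil.sys
        powercfg /hibernate off

        :: Stop Windows from managing the paging file, then pin it at 2 GB (illustrative)
        wmic computersystem where name="%COMPUTERNAME%" set AutomaticManagedPagefile=False
        wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=2048,MaximumSize=2048

        :: Cap System Restore ("System Protection") storage on C: at 5 GB (illustrative)
        vssadmin resize shadowstorage /for=C: /on=C: /maxsize=5GB

    A reboot is needed before the paging file change takes effect.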


  • Package system broken - E: Sub-process /usr/bin/dpkg returned an error code (1)

    - by delha
    After installing some packages and libraries, I have an error in the Package Manager. I can't run any update because it says:

        The package system is broken
        If you are using third party repositories then disable them, since they are a common source of problems.
        Now run the following command in a terminal: apt-get install -f

    I've tried to do what it says, and it returns:

        jara@jara-Aspire-5738:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages were automatically installed and are no longer required:
          libcaca-dev libopencv2.3-bin nite-dev python-bluez ps-engine libslang2-dev python-sphinx ros-electric-geometry-tutorials ros-electric-geometry-visualization python-matplotlib libzzip-dev ros-electric-orocos-kinematics-dynamics ros-electric-physics-ode libbluetooth-dev libaudiofile-dev libassimp2 libnetpbm10-dev ros-electric-laser-pipeline python-epydoc ros-electric-geometry-experimental libasound2-dev evtest python-matplotlib-data libyaml-dev ros-electric-bullet ros-electric-executive-smach ros-electric-documentation libgl2ps0 libncurses5-dev ros-electric-robot-model texlive-fonts-recommended python-lxml libwxgtk2.8-dev daemontools libxxf86vm-dev libqhull-dev libavahi-client-dev ros-electric-geometry libgl2ps-dev libcurl4-openssl-dev assimp-dev libusb-1.0-0-dev libopencv2.3 ros-electric-diagnostics-monitors libsdl1.2-dev libjs-underscore libsdl-image1.2 tipa libusb-dev libtinfo-dev python-tz python-sip libfltk1.1 libesd0 libfreeimage-dev ros-electric-visualization x11proto-xf86vidmode-dev python-docutils libvtk5.6 ros-electric-assimp x11proto-scrnsaver-dev libnetcdf-dev libidn11-dev libeigen3-dev joystick libhdf5-serial-1.8.4 ros-electric-joystick-drivers texlive-fonts-recommended-doc esound-common libesd0-dev tcl8.5-dev ros-electric-multimaster-experimental ros-electric-rx libaudio-dev ros-electric-ros-tutorials libwxbase2.8-dev ros-electric-visualization-common python-sip-dev ros-electric-visualization-tutorials libfltk1.1-dev libpulse-dev libnetpbm10 python-markupsafe openni-dev tk8.5-dev wx2.8-headers freeglut3-dev libavahi-common-dev python-roman python-jinja2 ros-electric-robot-model-visualization libxss-dev libqhull5 libaa1-dev ros-electric-eigen freeglut3 ros-electric-executive-smach-visualization ros-electric-common-tutorials ros-electric-robot-model-tutorials libnetcdf6 libjs-sphinxdoc python-pyparsing libaudiofile0
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          libcv-dev
        The following NEW packages will be installed
          libcv-dev
        0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded.
        2 not fully installed or removed.
        Need to get 0 B/3,114 kB of archives.
        After this operation, 11.1 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        (Reading database ... 261801 files and directories currently installed.)
        Unpacking libcv-dev (from .../libcv-dev_2.1.0-7build1_amd64.deb) ...
        dpkg: error processing /var/cache/apt/archives/libcv-dev_2.1.0-7build1_amd64.deb (--unpack):
         trying to overwrite '/usr/bin/opencv_haartraining', which is also in package libopencv2.3-bin 2.3.1+svn6514+branch23-12~oneiric
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Errors were encountered while processing:
         /var/cache/apt/archives/libcv-dev_2.1.0-7build1_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I've tried everything people recommend on the internet, like:

        sudo apt-get clean
        sudo apt-get autoremove
        sudo apt-get update
        sudo apt-get upgrade
        sudo apt-get -f install

    I've also tried to install the Synaptic package manager, but it doesn't let me install anything. As you can see, nothing works, so I'm desperate! I'm using Ubuntu 11.10, 64-bit. Thanks!!
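
    Not part of the original question, but the error above is a file conflict: libcv-dev ships /usr/bin/opencv_haartraining, which is already owned by libopencv2.3-bin. A commonly suggested way out is to let dpkg overwrite the duplicate file and then let apt finish repairing dependencies, as sketched below using the .deb path from the error message:

        sudo dpkg -i --force-overwrite /var/cache/apt/archives/libcv-dev_2.1.0-7build1_amd64.deb
        sudo apt-get -f install

    Removing the conflicting package first (sudo apt-get remove libopencv2.3-bin) is an alternative that avoids the forced overwrite.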


  • SQL SERVER – Faster SQL Server Databases and Applications – Power and Control with SafePeak Caching Options

    - by Pinal Dave
    Update: This blog post is written based on SafePeak, which is available for free download.

    Today, I’d like to examine more closely one of my preferred technologies for accelerating SQL Server databases, SafePeak. SafePeak’s software provides a variety of advanced data caching options, techniques and tools to accelerate the performance and scalability of SQL Server databases and applications. I’d like to look more closely at some of these options, as some of these capabilities could help you address lagging database and application performance on your systems. To better understand the available options, it is best to start by understanding the difference between the usual “Basic Caching” and SafePeak’s “Dynamic Caching”.

    Basic Caching

    Basic Caching (or the stale and static cache) is the ability to put the results from a query into cache for a certain period of time. It is based on TTL, or Time-to-live, and the entry is designed to stay in cache no matter what happens to the data. For example, although the actual data can be modified by DML commands (update/insert/delete), the cache will still hold the same obsolete query data — meaning that Basic Caching is really a static, stale cache. As you can tell, this approach has its limitations.

    Dynamic Caching

    Dynamic Caching (or the non-stale cache) is the ability to put the results from a query into cache while keeping the cache transaction-aware, watching for possible data modifications. The modifications can come as a result of: DML commands (update/insert/delete), indirect modifications due to triggers on other tables, executions of stored procedures with internal DML commands, and complex cases of stored procedures with multiple levels of internal stored procedure logic. When data modification commands arrive, the caching system identifies the related cache items and evicts them from cache immediately. In the dynamic caching option the TTL setting still exists, although its importance is reduced, since the main factor for cache invalidation (or cache eviction) becomes the actual data update commands.

    Now that we have a basic understanding of the differences between “basic” and “dynamic” caching, let’s dive in deeper.

    SafePeak: A comprehensive and versatile caching platform

    SafePeak comes with a wide range of caching options. Some of SafePeak’s caching options are automated, while others require manual configuration. Together they provide a complete solution for IT and data managers to reach excellent performance acceleration and application scalability for a wide range of business cases and applications. The main options are listed below:

    - Automated caching of SQL Queries: Fully or semi-automated caching of all “read” SQL queries, containing any types of data, including Blobs, XMLs and Texts as well as all other standard data types. SafePeak automatically analyzes the incoming queries and categorizes them into SQL Patterns, identifying directly and indirectly accessed tables, views, functions and stored procedures;

    - Automated caching of Stored Procedures: Fully or semi-automated caching of all “read” stored procedures, including procedures with complex sub-procedure logic as well as procedures with complex dynamic SQL code. All procedures are analyzed in advance by SafePeak’s Metadata-Learning process and their SQL schemas are parsed – resulting in a full understanding of the underlying code and object dependencies (tables, views, functions, sub-procedures), enabling automated or semi-automated (manually review and activate by a mouse-click) cache activation, with full understanding of the transaction logic for real-time cache invalidation;

    - Transaction-aware cache: Automated cache awareness for SQL transactions (SQL and in-procs);

    - Dynamic SQL Caching: Procedures with dynamic SQL are pre-parsed, enabling easy cache configuration, eliminating SQL Server load for parsing time and delivering high response times even in the most complicated use cases;

    - Fully Automated Caching: SQL Patterns (including SQL queries and stored procedures) that are categorized by SafePeak as “read and deterministic” are automatically activated for caching;

    - Semi-Automated Caching: SQL Patterns categorized as “read and non-deterministic” are patterns of SQL queries and stored procedures that contain references to non-deterministic functions, like getdate(). Such SQL Patterns are reviewed by the SafePeak administrator, and usually most of them are activated manually for caching (point-and-click activation);

    - Fully Dynamic Caching: Automated detection of all dependent tables in each SQL Pattern, with automated real-time eviction of the relevant cache items in the event of “write” commands (a DML or a stored procedure) to one of the relevant tables. The default setting;

    - Semi-Dynamic Caching: A manual cache configuration option that reduces the sensitivity of specific SQL Patterns to “write” commands to certain tables/views. An optimization technique relevant for cases when the query data is either known to be static (like archived order details), or when the application’s sensitivity to fresh data is not critical and the data can be stale for a short period of time (gaining better performance and reduced load);

    - Scheduled Cache Eviction: A manual cache configuration option enabling scheduled eviction of an SQL Pattern’s cache at certain time(s) during the day. A very useful optimization technique when, for example, certain SQL Patterns can be cached but are time-sensitive. Example: “select customers whose birthday is today”, an SQL with the getdate() function, which can and should be cached, but whose data stays relevant only until 00:00 (midnight);

    - Parsing Exceptions Management: Stored procedures that were not fully parsed by SafePeak (due to overly complex dynamic SQL or unfamiliar syntax) are flagged as “Dynamic Objects” with the highest transaction safety settings (such as: full global cache eviction, DDL Check = lock cache and check for schema changes, and more). The SafePeak solution points the user to the Dynamic Objects that are important for cache effectiveness and provides an easy configuration interface, allowing you to improve cache hits and reduce global cache evictions. Usually this is the first configuration step in a deployment;

    - Overriding Settings of Stored Procedures: Overrides the settings of stored procedures (or other object types) for cache optimization. For example, if a stored procedure SP1 has an “insert” into table T1, it will not be allowed to be cached. However, it is possible that T1 is just a “logging or instrumentation” table left by developers. By overriding the settings, a user can allow caching of the problematic stored procedure;

    - Advanced Cache Warm-Up: An XML-based list of queries and stored procedures (with lists of parameters) for periodic automated pre-fetching and caching. An advanced tool for handling rarer but very performance-sensitive queries: pre-fetch them into cache to ensure high performance for users’ data access;

    - Configuration Driven by Deep SQL Analytics: All SQL queries are continuously logged and analyzed, providing users with deep SQL analytics and performance monitoring. Reduce troubleshooting from days to minutes with a heat-map of database objects and SQL Patterns. The performance-driven configuration helps you focus on the settings that bring the highest performance gains, and SafePeak SQL Analytics allows continuous performance monitoring and analysis with easy identification of bottlenecks in both real-time and historical data;

    - Cloud Ready: Available for instant deployment on Amazon Web Services (AWS).

    As you can see, there are many ways to configure SafePeak’s SQL Server database and application acceleration caching technology to best fit a lot of situations. If you’re not familiar with their technology, they offer free-trial software you can download that comes with a free “help session” to help get you started. You can access the free trial here. Also, SafePeak is available to use on the Amazon cloud.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL


  • CodePlex Daily Summary for Thursday, June 17, 2010

    New Projects

    - Astalanumerator: A JavaScript-based recursive DOM/JS object inspector. Uses a simple tree menu to enumerate all properties of an object.
    - BDD Log Converter: A simple .NET class and console application that will convert BDD logs (MDT) into XML format.
    - CastleInvestProj: Castle investigating project.
    - Easy Callback: This library facilitates the use of multiple asynchronous calls on the same page, and asynchronous calls from a user control also have a clean cod...
    - Easy Wings: Small web app to manage aircraft booking in a flying club. French only for the moment.
    - EPiServer Template Foundation: EPiServer Template Foundation builds on top of Page Type Builder to provide a framework for common site features such as basic page type properties...
    - guidebook: A project to plan your road trip.
    - Look into documents for e-discovery: Search, browse, tag and annotate documents such as MS Word, PDF, e-mail, etc. Good for legal professionals doing e-discovery.
    - One Bus Away for Windows Phone: A Windows Phone 7 application written in Silverlight for the OneBusAway (www.onebusaway.org) website. Allows mobile users to search for public tra...
    - OneBusAway for Windows Phone 7: OneBusAway is a service with transit information for the Seattle, WA region. We are creating a mobile application for Windows Phone 7 utilizing th...
    - PoFabLab - Poetry Generation Library and Editor in .NET: PoFabLab is an open source library and word processor designed for digital poets. The library can scan lines, perform Markov analysis, filter text...
    - Project Axure: More details coming soon.
    - Чат кутежа 2.0: An IRC chat made specifically for the ENE forum.
    - 简易代码生成器: First time using CodePlex; this is just a test project. The plan is to build a simple code generator with WPF that also doubles as a SQL Server client, developed with .NET 4.0 and C#.
    - 运营工作系统: TRAS (Team resource assist system) is a toolkit that helps the studio manage and distribute daily work, like publishing news, GM broadcast a...

    New Releases

    - Amuse - A New MU* Client For Windows: 2010 June: Important notice to testers: please uninstall any previous versions of Amuse prior to this one before installing. Changes and information: first relea...
    - ASP.NET Generic Data Source Control: V1.0: GenericDataSource - Version 1.0 binary. This is the first official binary release of the GenericDataSource for ASP.NET - stable and ready for product...
    - Astalanumerator: Astalanumerator 0.7: I wanted to map all properties in JavaScript and inspect them regardless of whether they were objects or not. IE doesn’t support for(i in..) for native pro...
    - BDD Log Converter: BDD Log Converter 0.1.0: First release (0.1.0).
    - DVD Swarm: 0.8.10.616: Major update with improvements to encoding speed.
    - Easy Callback: Easy Callback 1.0.0.0: Easy Callback library 1.0.0.0.
    - Facebook Connect Authentication for ASP.NET: Facebook Connect Authentication for ASP.NET - v1.0: Now supporting Facebook's new Open Graph API JavaScript SDK, this release of FBConnectAuth also adds support for running in partially trusted envir...
    - FlickrNet API Library: 3.0 Beta 3: Another small beta. Changed parsing code so exceptions aren't raised when new attributes are added by Flickr. This affects searches where you are ...
    - Infragistics Analytics Framework: Infragistics Analytics Framework 10.2: An updated version of Infragistics Analytics Framework, which utilizes the newest version (v.1.4.4) of MSAF as well as the newest release (v.10.2) ...
    - NUnit Add-in for Growl Notifications: NUnit Add-in for Growl Notifications 1.0 build 1: Version 1.0 build 1: [change] Test run failure notification now disappears automatically.
    - Open Source PLM Activities: 3dxml player integration for Aras Innovator: This is just a simple HTML file you need to add to your Aras Innovator install directory. It loads the 3Dxml player for your 3dxml files. Tested o...
    - patterns & practices - Windows Azure Guidance: WAAG - Part 2 - Drop 1: First code and docs drop for Part 2 of the Windows Azure Architecture Guide. Part 1 of the Guide is released here. Highlights of this release are:...
    - Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (June 2010): Installer of the latest binaries of Phalanger 2.0 (June 2010) and its integration into Visual Studio 2008 SP1. Improved compatibility with P...
    - RIA Services Essentials: Book Club Application (June 16, 2010): Added some XAML to hide/show the link to the BookShelf page based on whether the user is logged in or not. Updated IsBookOwner authorization rule implement...
    - secs4net: Release 1.01: version 1.01 release.
    - sELedit: sELedit v1.1c: Added: tool for exporting the NPC/Mob database file that is used by sNPCedit.
    - SharePoint Ad Rotator: SPAdRotator 2.0 Beta 2: Added: Open tool pane link to default Web Part text. Made all images except the first hidden by default, so the Web Part will degrade gracefully w...
    - sMAPtool: sMAPtool v0.7f (without Maps): Added: 3rd-party magnifier software.
    - sNPCedit: sNPCedit v0.9c: Added: NPC/mob names and corresponding database.
    - SolidWorks Addin Development: GenericAddinFrameworkR1-06.17.2010.
    - sTASKedit: sTASKedit v0.8: Important bugfix: there was a mistake in the structure; the team-member block and get-items block were swapped internally. Tasks that contain both blo...
    - stefvanhooijdonk.com: UnitTesting-SP2010-TFS2010: Files for my post on TFS2010 and NUnit testing with SP2010 projects. See the post here: http://wp.me/pMnlQ-88 The XSLT here is from http://nunit4t...
    - Telerik CAB Enabling Kit for RadControls for WinForms: TCEK 2010.1.10.504: What's new in v2010.1.0610 (Beta): RadDocking component has been replaced with the latest RadDock control. Requirements: Visual Studio 2005+, Tele...
    - TFS Buddy: TFS Buddy 1.2: Fixes a problem with notifications.
    - Thales Simulator Library: Version 0.9: The Thales Simulator Library is an implementation of a software emulation of the Thales (formerly Zaxus & Racal) Hardware Security Module cryptogra...
    - Triton Application Framework: Tools - Code Generator - Build 1.0: This is the first release of the Generator. It is buggy but works.
    - VCC: Latest build, v2.1.30616.0: Automatic drop of latest build.
    - XsltDb - DotNetNuke Module Builder: 01.01.27: Code completion for XsltDb, HTML and XSL stuff! Full screen editing. Some bugs are still in the EditArea component and object lists in code completi...
    - Чат кутежа 2.0: 0.9a build 2: the second build of the first alpha version of the IRC client.

    Most Popular Projects

    - WBFS Manager
    - Rawr
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - patterns & practices – Enterprise Library
    - PHPExcel
    - Microsoft SQL Server Community & Samples
    - ASP.NET

    Most Active Projects

    - dotSpatial
    - patterns & practices: Enterprise Library Contrib
    - patterns & practices – Enterprise Library
    - BlogEngine.NET
    - Lightweight Fluent Workflow
    - Rhyduino - Arduino and Managed Code
    - Sunlit World Scheme
    - NB_Store - Free DotNetNuke Ecommerce Catalog Module
    - SolidWorks Addin Development
    - N2 CMS


  • SQL SERVER – Weekly Series – Memory Lane – #048

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which of the following is your favorite article from memory lane.

    2007

    Order of Result Set of SELECT Statement on Clustered Indexed Table When ORDER BY is Not Used
    The above theory is true in most cases. However, SQL Server does not use that logic when returning the resultset. SQL Server always returns the resultset it can return fastest. In most cases, the resultset that can be returned fastest is the one returned using the clustered index.

    Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT
    One of the Jr. Developers asked me this question (What will be the effect of a TRANSACTION on a local variable – after ROLLBACK and after COMMIT?) while I was rushing to an important meeting. I was getting late, so I asked him to talk with his Application Tech Lead. When I came back from the meeting, both of them were looking for me. They said they were confused. I quickly wrote down the following example for them.

    2008

    SQL SERVER – Guidelines and Coding Standards Complete List Download
    Coding standards and guidelines are very important for any developer on the path of a successful career. A coding standard is a set of guidelines, rules and regulations on how to write code. Coding standards should be flexible enough that they do not prevent best practices for coding. They are basically the guidelines one should follow for better understanding. Download the complete Guidelines and Coding Standards list.

    Get Answer in Float When Dividing of Two Integer
    Many times we have requirements for calculations amongst different fields in tables. One of the software developers here was trying to divide fields having integer values, which gave incorrect results in integer where accurate results including decimals were expected.

    Puzzle – Computed Columns Datatype Explanation
    SQL Server automatically casts to the data type having the highest precedence. So the result of INT and INT will be INT, but INT and FLOAT will be FLOAT, because FLOAT has a higher precedence. If you want a different data type, you need to do an EXPLICIT cast.

    Renaming SP is Not Good Idea – Renaming Stored Procedure Does Not Update sys.procedures
    I have written many articles about renaming tables, columns and procedures (SQL SERVER – How to Rename a Column Name or Table Name); here I found something interesting about renaming stored procedures and felt like sharing it with you all. The interesting fact is that when we rename a stored procedure using the SP_Rename command, the stored procedure is successfully renamed. But when we try to view the procedure using sp_helptext, the procedure still shows the old name instead of the new one.

    2009

    Insert Values of Stored Procedure in Table – Use Table Valued Function
    It is clear from the result set that the table valued function, into which I converted the stored procedure logic, is much better in terms of logic, as it saves a large number of operations. However, this option should be used carefully. The performance of a stored procedure is “usually” better than that of a function.

    Interesting Observation – Index on Index View Used in Similar Query
    Recently, I was working on an optimization project for one of the largest organizations. While working on one of the queries, we came across a very interesting observation. We found a query on the base table which, when run, used an index that did not exist on the base table. On careful examination, we found that the query was using an index that was on another view. This was very interesting, as I had personally never experienced a scenario like this. In simple words: a query on the base table can use an index created on the indexed view of the same base table.

    Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries
    Working with SQL Server has never seemed monotonous – no matter how long one has worked with it. Quite often, I come across excellent comments that I feel like acknowledging as blog posts. Recently, I wrote an article on SQL SERVER – Execution Plan and Results of Aggregate Concatenation Queries Depend Upon Expression Location, which was well received in the community.

    2010

    I encourage all of you to go through the complete series and write your own posts on the subject. If you write an article and send it to me, I will publish it on this blog with due credit to you. If you write on your own blog, I will update this blog post to point to your blog post.

    - SQL SERVER – ORDER BY Does Not Work – Limitation of the View 1
    - SQL SERVER – Adding Column is Expensive by Joining Table Outside View – Limitation of the View 2
    - SQL SERVER – Index Created on View not Used Often – Limitation of the View 3
    - SQL SERVER – SELECT * and Adding Column Issue in View – Limitation of the View 4
    - SQL SERVER – COUNT(*) Not Allowed but COUNT_BIG(*) Allowed – Limitation of the View 5
    - SQL SERVER – UNION Not Allowed but OR Allowed in Index View – Limitation of the View 6
    - SQL SERVER – Cross Database Queries Not Allowed in Indexed View – Limitation of the View 7
    - SQL SERVER – Outer Join Not Allowed in Indexed Views – Limitation of the View 8
    - SQL SERVER – SELF JOIN Not Allowed in Indexed View – Limitation of the View 9
    - SQL SERVER – Keywords View Definition Must Not Contain for Indexed View – Limitation of the View 10
    - SQL SERVER – View Over the View Not Possible with Index View – Limitations of the View 11

    2011

    Startup Parameters Easy to Configure
    If you are a regular reader of this blog, you must be aware that I have written about SQL Server Denali recently. Here is the quickest way to reach the screen where we can change the startup parameters: go to SQL Server Configuration Manager >> SQL Server Services >> right-click on the server >> Properties >> Startup Parameters.

    2012

    Validating Unique Columnname Across Whole Database
    I sometimes come across very strange requirements, and often I do not receive a proper explanation for them. Here is one of those examples: “Our business requirement is that when we add a new column, we want it unique across the current database.” Read the solution to this strange request in the blog post.

    Excel Losing Decimal Values When Value Pasted from SSMS ResultSet
    It is very common that when users copy a resultset to Excel, the floating point or decimal values are lost. The solution is very simple and requires a small adjustment in Excel. By default Excel is very smart: when it detects that the value being pasted is numeric, it changes the column format to accommodate it.

    Basic Calculation and PEMDAS Order of Operation
    Read this interesting blog post for a fantastic conversation about the subject.

    Copy Column Headers from Resultset – SQL in Sixty Seconds #027 – Video
    http://www.youtube.com/watch?v=x_-3tLqTRv0

    Delete From Multiple Table – Update Multiple Table in Single Statement
    There are two questions which I get every single day, multiple times. In my Gmail, I have created a standard canned reply for them. Let us see the questions here: I want to delete from multiple tables in a single statement; how will I do it? I want to update multiple tables in a single statement; how will I do it? Read the answer in the blog post.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
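
    An addendum: the two 2008 items about integer division and computed-column data types come down to the same precedence rule, which is easy to verify from a command prompt with sqlcmd ("myserver" below is a placeholder for your instance name):

        :: INT/INT stays INT and truncates to 0; casting one operand to FLOAT
        :: promotes the whole expression to FLOAT and returns 0.5
        sqlcmd -S myserver -Q "SELECT 1/2 AS int_div, 1/CAST(2 AS FLOAT) AS float_div"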


  • Can't install graphic drivers in 12.04

    - by yinon
    The driver is ATI/AMD proprietary FGLRX graphics driver. After clicking Activate, it asks for my password and starts downloading. Then it shows an error message: 2012-10-03 16:16:04,227 DEBUG: updating <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> 2012-10-03 16:16:06,172 DEBUG: reading modalias file /lib/modules/3.2.0-29-generic-pae/modules.alias 2012-10-03 16:16:06,383 DEBUG: reading modalias file /usr/share/jockey/modaliases/b43 2012-10-03 16:16:06,386 DEBUG: reading modalias file /usr/share/jockey/modaliases/disable-upstream-nvidia 2012-10-03 16:16:06,456 DEBUG: loading custom handler /usr/share/jockey/handlers/pvr-omap4.py 2012-10-03 16:16:06,506 WARNING: modinfo for module omapdrm_pvr failed: ERROR: modinfo: could not find module omapdrm_pvr 2012-10-03 16:16:06,509 DEBUG: Instantiated Handler subclass __builtin__.PVROmap4Driver from name PVROmap4Driver 2012-10-03 16:16:06,682 DEBUG: PowerVR SGX proprietary graphics driver for OMAP 4 not available 2012-10-03 16:16:06,682 DEBUG: loading custom handler /usr/share/jockey/handlers/cdv.py 2012-10-03 16:16:06,727 WARNING: modinfo for module cedarview_gfx failed: ERROR: modinfo: could not find module cedarview_gfx 2012-10-03 16:16:06,728 DEBUG: Instantiated Handler subclass __builtin__.CdvDriver from name CdvDriver 2012-10-03 16:16:06,728 DEBUG: cdv.available: falling back to default 2012-10-03 16:16:06,772 DEBUG: Intel Cedarview graphics driver availability undetermined, adding to pool 2012-10-03 16:16:06,772 DEBUG: loading custom handler /usr/share/jockey/handlers/vmware-client.py 2012-10-03 16:16:06,781 WARNING: modinfo for module vmxnet failed: ERROR: modinfo: could not find module vmxnet 2012-10-03 16:16:06,781 DEBUG: Instantiated Handler subclass __builtin__.VmwareClientHandler from name VmwareClientHandler 2012-10-03 16:16:06,795 DEBUG: VMWare Client Tools availability undetermined, adding to pool 2012-10-03 16:16:06,796 DEBUG: loading custom handler /usr/share/jockey/handlers/fglrx.py 2012-10-03 16:16:06,801 WARNING: modinfo for module fglrx_updates failed: ERROR: modinfo: could not find module fglrx_updates 2012-10-03 16:16:06,805 DEBUG: Instantiated Handler subclass __builtin__.FglrxDriverUpdate from name FglrxDriverUpdate 2012-10-03 16:16:06,805 DEBUG: fglrx.available: falling back to default 2012-10-03 16:16:06,833 DEBUG: ATI/AMD proprietary FGLRX graphics driver (post-release updates) availability undetermined, adding to pool 2012-10-03 16:16:06,836 WARNING: modinfo for module fglrx failed: ERROR: modinfo: could not find module fglrx 2012-10-03 16:16:06,840 DEBUG: Instantiated Handler subclass __builtin__.FglrxDriver from name FglrxDriver 2012-10-03 16:16:06,840 DEBUG: fglrx.available: falling back to default 2012-10-03 16:16:06,873 DEBUG: ATI/AMD proprietary FGLRX graphics driver availability undetermined, adding to pool 2012-10-03 16:16:06,873 DEBUG: loading custom handler /usr/share/jockey/handlers/dvb_usb_firmware.py 2012-10-03 16:16:06,925 DEBUG: Instantiated Handler subclass __builtin__.DvbUsbFirmwareHandler from name DvbUsbFirmwareHandler 2012-10-03 16:16:06,926 DEBUG: Firmware for DVB cards not available 2012-10-03 16:16:06,926 DEBUG: loading custom handler /usr/share/jockey/handlers/nvidia.py 2012-10-03 16:16:06,961 WARNING: modinfo for module nvidia_96 failed: ERROR: modinfo: could not find module nvidia_96 2012-10-03 16:16:06,967 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver96 from name NvidiaDriver96 2012-10-03 16:16:06,968 DEBUG: nvidia.available: falling back to default 
2012-10-03 16:16:06,980 DEBUG: XorgDriverHandler(nvidia_96, nvidia-96, None): Disabling as package video ABI xorg-video-abi-10 does not match X.org video ABI xorg-video-abi-11 2012-10-03 16:16:06,980 DEBUG: NVIDIA accelerated graphics driver not available 2012-10-03 16:16:06,983 WARNING: modinfo for module nvidia_current failed: ERROR: modinfo: could not find module nvidia_current 2012-10-03 16:16:06,987 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriverCurrent from name NvidiaDriverCurrent 2012-10-03 16:16:06,987 DEBUG: nvidia.available: falling back to default 2012-10-03 16:16:07,015 DEBUG: NVIDIA accelerated graphics driver availability undetermined, adding to pool 2012-10-03 16:16:07,018 WARNING: modinfo for module nvidia_current_updates failed: ERROR: modinfo: could not find module nvidia_current_updates 2012-10-03 16:16:07,021 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriverCurrentUpdates from name NvidiaDriverCurrentUpdates 2012-10-03 16:16:07,022 DEBUG: nvidia.available: falling back to default 2012-10-03 16:16:07,066 DEBUG: NVIDIA accelerated graphics driver (post-release updates) availability undetermined, adding to pool 2012-10-03 16:16:07,069 WARNING: modinfo for module nvidia_173_updates failed: ERROR: modinfo: could not find module nvidia_173_updates 2012-10-03 16:16:07,072 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver173Updates from name NvidiaDriver173Updates 2012-10-03 16:16:07,073 DEBUG: nvidia.available: falling back to default 2012-10-03 16:16:07,105 DEBUG: NVIDIA accelerated graphics driver (post-release updates) availability undetermined, adding to pool 2012-10-03 16:16:07,112 WARNING: modinfo for module nvidia_173 failed: ERROR: modinfo: could not find module nvidia_173 2012-10-03 16:16:07,118 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver173 from name NvidiaDriver173 2012-10-03 16:16:07,119 DEBUG: nvidia.available: falling back to default 2012-10-03 16:16:07,159 DEBUG: NVIDIA accelerated graphics driver availability undetermined, adding to pool 2012-10-03 16:16:07,166 WARNING: modinfo for module nvidia_96_updates failed: ERROR: modinfo: could not find module nvidia_96_updates 2012-10-03 16:16:07,171 DEBUG: Instantiated Handler subclass __builtin__.NvidiaDriver96Updates from name NvidiaDriver96Updates 2012-10-03 16:16:07,171 DEBUG: nvidia.available: falling back to default 2012-10-03 16:16:07,188 DEBUG: XorgDriverHandler(nvidia_96_updates, nvidia-96-updates, None): Disabling as package video ABI xorg-video-abi-10 does not match X.org video ABI xorg-video-abi-11 2012-10-03 16:16:07,188 DEBUG: NVIDIA accelerated graphics driver (post-release updates) not available 2012-10-03 16:16:07,188 DEBUG: loading custom handler /usr/share/jockey/handlers/madwifi.py 2012-10-03 16:16:07,195 WARNING: modinfo for module ath_pci failed: ERROR: modinfo: could not find module ath_pci 2012-10-03 16:16:07,195 DEBUG: Instantiated Handler subclass __builtin__.MadwifiHandler from name MadwifiHandler 2012-10-03 16:16:07,196 DEBUG: Alternate Atheros "madwifi" driver availability undetermined, adding to pool 2012-10-03 16:16:07,196 DEBUG: loading custom handler /usr/share/jockey/handlers/sl_modem.py 2012-10-03 16:16:07,213 DEBUG: Instantiated Handler subclass __builtin__.SlModem from name SlModem 2012-10-03 16:16:07,234 DEBUG: Software modem not available 2012-10-03 16:16:07,234 DEBUG: loading custom handler /usr/share/jockey/handlers/broadcom_wl.py 2012-10-03 16:16:07,239 WARNING: modinfo for module wl failed: ERROR: modinfo: could not 
find module wl 2012-10-03 16:16:07,277 DEBUG: Instantiated Handler subclass __builtin__.BroadcomWLHandler from name BroadcomWLHandler 2012-10-03 16:16:07,277 DEBUG: Broadcom STA wireless driver availability undetermined, adding to pool 2012-10-03 16:16:07,278 DEBUG: all custom handlers loaded 2012-10-03 16:16:07,278 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000027D8sv00001043sd000082EAbc04sc03i00') 2012-10-03 16:16:07,568 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel'} 2012-10-03 16:16:07,699 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,699 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel'} 2012-10-03 16:16:07,699 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_hda_intel', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,699 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'input:b0000v0000p0000e0000-e0,5,kramlsfw6,') 2012-10-03 16:16:07,704 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'evbug'} 2012-10-03 16:16:07,704 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'evbug', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,704 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000027DAsv00001043sd00008179bc0Csc05i00') 2012-10-03 16:16:07,707 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'i2c_i801'} 2012-10-03 16:16:07,707 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'i2c_i801', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,707 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0C01:') 2012-10-03 16:16:07,707 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0B00:') 2012-10-03 16:16:07,707 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00001969d00001026sv00001043sd00008304bc02sc00i00') 2012-10-03 16:16:07,710 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'atl1e'} 2012-10-03 16:16:07,710 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'atl1e', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,710 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'input:b0003v04F2p0816e0111-e0,1,4,11,14,k71,72,73,74,75,77,79,7A,7B,7C,7D,7E,7F,80,81,82,83,84,85,86,87,88,89,8A,8C,8E,96,98,9E,9F,A1,A3,A4,A5,A6,AD,B0,B1,B2,B3,B4,B7,B8,B9,BA,BB,BC,BD,BE,BF,C0,C1,C2,F0,ram4,l0,1,2,sfw') 2012-10-03 16:16:07,711 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'evbug'} 2012-10-03 16:16:07,711 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'evbug', 
'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,711 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid'} 2012-10-03 16:16:07,711 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,711 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'platform:pcspkr') 2012-10-03 16:16:07,711 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'pcspkr'} 2012-10-03 16:16:07,711 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'pcspkr', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,712 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_pcsp'} 2012-10-03 16:16:07,712 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_pcsp', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,712 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'usb:v1D6Bp0001d0302dc09dsc00dp00ic09isc00ip00') 2012-10-03 16:16:07,724 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'input:b0019v0000p0001e0000-e0,1,k74,ramlsfw') 2012-10-03 16:16:07,724 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'evbug'} 2012-10-03 16:16:07,724 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'evbug', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,724 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid'} 2012-10-03 16:16:07,724 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'mac_hid', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,724 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0C04:') 2012-10-03 16:16:07,724 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'platform:eisa') 2012-10-03 16:16:07,725 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000027CCsv00001043sd00008179bc0Csc03i20') 2012-10-03 16:16:07,728 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'platform:Fixed MDIO bus') 2012-10-03 16:16:07,728 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00008086d000029C0sv00001043sd000082B0bc06sc00i00') 2012-10-03 16:16:07,731 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'usb:v045Ep0766d0101dcEFdsc02dp01ic01isc01ip00') 2012-10-03 16:16:07,777 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'snd_usb_audio'} 2012-10-03 16:16:07,777 DEBUG: no corresponding handler available for {'driver_type': 'kernel_module', 'kernel_module': 'snd_usb_audio', 'jockey_handler': 'KernelModuleHandler'} 2012-10-03 16:16:07,777 DEBUG: querying driver db 
<jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0F03:PNP0F13:') 2012-10-03 16:16:07,777 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'acpi:PNP0000:') 2012-10-03 16:16:07,777 DEBUG: querying driver db <jockey.detection.LocalKernelModulesDriverDB instance at 0xb7231a0c> about HardwareID('modalias', 'pci:v00001002d000095C5sv0000174Bsd0000E400bc03sc00i00') 2012-10-03 16:16:08,072 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'fglrx_updates', 'package': 'fglrx-updates'} 2012-10-03 16:16:08,133 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None 2012-10-03 16:16:08,134 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:16:08,072 DEBUG: found match in handler pool xorg:fglrx_updates([FglrxDriverUpdate, nonfree, disabled] ATI/AMD proprietary FGLRX graphics driver (post-release updates)) 2012-10-03 16:16:08,136 WARNING: modinfo for module fglrx_updates failed: ERROR: modinfo: could not find module fglrx_updates 2012-10-03 16:16:08,147 DEBUG: fglrx.available: falling back to default 2012-10-03 16:16:08,173 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None 2012-10-03 16:16:08,173 DEBUG: fglrx_updates is not the alternative in use 2012-10-03 16:16:08,162 DEBUG: got handler xorg:fglrx_updates([FglrxDriverUpdate, nonfree, disabled] ATI/AMD proprietary FGLRX graphics driver (post-release updates)) 2012-10-03 16:16:08,173 DEBUG: searching handler for driver ID {'driver_type': 'kernel_module', 'kernel_module': 'fglrx', 'package': 'fglrx'} 2012-10-03 16:16:08,184 DEBUG: fglrx.enabled(fglrx): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None 2012-10-03 16:16:08,184 DEBUG: fglrx is not the alternative in use 2012-10-03 16:16:08,173 DEBUG: found match in handler pool xorg:fglrx([FglrxDriver, nonfree, disabled] ATI/AMD proprietary FGLRX graphics driver) 2012-10-03 16:16:08,187 WARNING: modinfo for module fglrx failed: ERROR: modinfo: could not find module fglrx 2012-10-03 16:16:08,190 DEBUG: fglrx.available: falling back to default 2012-10-03 16:16:08,216 DEBUG: fglrx.enabled(fglrx): target_alt None current_alt /usr/lib/i386-linux-gnu/mesa/ld.so.conf other target alt None other current alt None . . . 2012-10-03 16:18:10,552 DEBUG: install progress initramfs-tools 62.500000 2012-10-03 16:18:22,249 DEBUG: install progress libc-bin 62.500000 2012-10-03 16:18:23,251 DEBUG: Selecting previously unselected package dkms. (Reading database ... 142496 files and directories currently installed.) Unpacking dkms (from .../dkms_2.2.0.3-1ubuntu3_all.deb) ... Selecting previously unselected package fakeroot. Unpacking fakeroot (from .../fakeroot_1.18.2-1_i386.deb) ... Selecting previously unselected package fglrx-updates. Unpacking fglrx-updates (from .../fglrx-updates_2%3a8.960-0ubuntu1.1_i386.deb) ... Selecting previously unselected package fglrx-amdcccle-updates. Unpacking fglrx-amdcccle-updates (from .../fglrx-amdcccle-updates_2%3a8.960-0ubuntu1.1_i386.deb) ... Processing triggers for man-db ... Processing triggers for ureadahead ... 
ureadahead will be reprofiled on next reboot
dpkg: error processing libxss1 (--configure):
package libxss1 is already installed and configured
dpkg: error processing chromium-codecs-ffmpeg (--configure):
package chromium-codecs-ffmpeg is already installed and configured
dpkg: error processing chromium-browser (--configure):
package chromium-browser is already installed and configured
dpkg: error processing chromium-browser-l10n (--configure):
package chromium-browser-l10n is already installed and configured
Setting up dkms (2.2.0.3-1ubuntu3) ...
No apport report written because MaxReports is reached already
No apport report written because MaxReports is reached already
Setting up fakeroot (1.18.2-1) ...
update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode.
Setting up fglrx-updates (2:8.960-0ubuntu1.1) ...
update-alternatives: using /usr/lib/fglrx/ld.so.conf to provide /etc/ld.so.conf.d/i386-linux-gnu_GL.conf (i386-linux-gnu_gl_conf) in auto mode.
update-alternatives: warning: skip creation of /etc/OpenCL/vendors/amdocl64.icd because associated file /usr/lib/fglrx/etc/OpenCL/vendors/amdocl64.icd (of link group i386-linux-gnu_gl_conf) doesn't exist.
update-alternatives: warning: skip creation of /usr/lib32/libaticalcl.so because associated file /usr/lib32/fglrx/libaticalcl.so (of link group i386-linux-gnu_gl_conf) doesn't exist.
update-alternatives: warning: skip creation of /usr/lib32/libaticalrt.so because associated file /usr/lib32/fglrx/libaticalrt.so (of link group i386-linux-gnu_gl_conf) doesn't exist.
update-alternatives: using /usr/lib/fglrx/alt_ld.so.conf to provide /etc/ld.so.conf.d/x86_64-linux-gnu_GL.conf (x86_64-linux-gnu_gl_conf) in auto mode.
update-initramfs: deferring update (trigger activated)
Loading new fglrx-updates-8.960 DKMS files...
First Installation: checking all kernels...
Building only for 3.2.0-29-generic-pae
Building for architecture i686
Building initial module for 3.2.0-29-generic-pae
Done.
fglrx_updates: Running module version sanity check.
- Original module - No original module exists within this kernel
- Installation - Installing to /lib/modules/3.2.0-29-generic-pae/updates/dkms/
depmod......
DKMS: install completed.
update-initramfs: deferring update (trigger activated)
Processing triggers for bamfdaemon ...
Rebuilding /usr/share/applications/bamf.index...
Setting up fglrx-amdcccle-updates (2:8.960-0ubuntu1.1) ...
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.2.0-29-generic-pae
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Errors were encountered while processing:
libxss1
chromium-codecs-ffmpeg
chromium-browser
chromium-browser-l10n
Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)
2012-10-03 16:18:23,256 ERROR: Package failed to install:
Selecting previously unselected package dkms.
(Reading database ... 142496 files and directories currently installed.)
Unpacking dkms (from .../dkms_2.2.0.3-1ubuntu3_all.deb) ...
Selecting previously unselected package fakeroot.
Unpacking fakeroot (from .../fakeroot_1.18.2-1_i386.deb) ...
Selecting previously unselected package fglrx-updates.
Unpacking fglrx-updates (from .../fglrx-updates_2%3a8.960-0ubuntu1.1_i386.deb) ...
Selecting previously unselected package fglrx-amdcccle-updates.
Unpacking fglrx-amdcccle-updates (from .../fglrx-amdcccle-updates_2%3a8.960-0ubuntu1.1_i386.deb) ...
Processing triggers for man-db ...
Processing triggers for ureadahead ...
[... the same configure/setup output shown above repeats, again ending with "Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)" ...]
2012-10-03 16:18:23,590 WARNING: /sys/module/fglrx_updates/drivers does not exist, cannot rebind fglrx_updates driver
2012-10-03 16:18:43,601 DEBUG: fglrx.enabled(fglrx_updates): target_alt None current_alt /usr/lib/fglrx/ld.so.conf other target alt None other current alt /usr/lib/fglrx/alt_ld.so.conf
2012-10-03 16:18:43,601 DEBUG: fglrx_updates is not the alternative in use
[... the same pair of DEBUG lines repeats several times between 16:18:43 and 16:18:54 ...]
2012-10-03 16:18:54,182 DEBUG: fglrx.enabled(fglrx): target_alt /usr/lib/fglrx/ld.so.conf current_alt /usr/lib/fglrx/ld.so.conf other target alt /usr/lib/fglrx/alt_ld.so.conf other current alt /usr/lib/fglrx/alt_ld.so.conf
2012-10-03 16:18:54,182 DEBUG: XorgDriverHandler(%s, %s).enabled(): No X.org driver set, not checking
2012-10-03 16:18:57,828 DEBUG: Shutting down
I don't know how to troubleshoot this from the log file alone. Could somebody assist me with it, please? You can download the full log file at: https://www.dropbox.com/s/a59d2hyabo02q5z/jockey.log
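For what it's worth, a generic first step when dpkg bails out like this (a sketch of the usual recovery commands, not something taken from this particular log) is to let dpkg and apt repair their own state and then retry the driver package:
sudo dpkg --configure -a
sudo apt-get install -f
sudo apt-get install --reinstall fglrx-updates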

    Read the article

  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machines Running SQL Server
    With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premises physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server. MAP Toolkit 8.0 Beta is available for download here.
    Your participation and feedback are very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.
    Now, let's walk through the MAP Toolkit task for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following:
    Perform an inventory
    View the Windows Azure VM Readiness results and report
    Collect performance data to determine VM sizing
    View the Windows Azure Capacity results and report
    Perform an inventory:
    1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard as shown below:
    2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don't care about completely inventorying a machine, just select the SQL Server scenario. Click Next to continue.
    3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. Description of discovery methods:
    Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.
    Windows networking protocols -- This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0-based domains. If the computers on the network are not joined to an Active Directory domain, use only the Windows networking protocols option to find computers.
    System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by System Center Configuration Manager (SCCM). You need to provide credentials for the System Center Configuration Manager server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers.
    Scan an IP address range -- This method allows you to specify the starting address and ending address of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: This option can perform poorly if many IP addresses aren't being used within the range.
    Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers.
    Import computer names from a file -- Using this method, you can create a text file with a list of computer names that will be inventoried.
    4. On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to be a domain account, but it needs to be a local administrator. I have entered my domain account that is an administrator on my local machine. Click Next after one or more accounts have been added.
    NOTE: The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect to and inventory computers in your environment, you have to configure your machines for inventory through WMI and also allow your firewall to enable remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment. (A rough sketch of these prerequisites appears after this entry.)
    5. On the Credentials Order page, select the order in which you want the MAP Toolkit to connect to the machine and SQL Server. Generally, just accept the defaults and click Next.
    6. On the Enter Computers Manually page, click Create to open a dialog where you can enter one or more computer names.
    7. On the Summary page, confirm your settings and then click Finish. After clicking Finish, the inventory process will start, as shown below:
    Windows Azure Readiness results and report
    After the inventory process has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes, and the number of machines that require further changes. If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the Operating System and SQL Server assessment and provides a recommendation on how to resolve it if a component is not supported.
    Collect Performance Data
    Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for.
    Windows Azure Capacity results and report
    After the performance metrics are collected, the Azure VM Capacity tile will display the number of Virtual Machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the Virtual Machine sizes.
    MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback are very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.
    Useful References:
    Windows Azure Homepage
    How-to guides for Windows Azure Virtual Machines
    Provisioning a SQL Server Virtual Machine on Windows Azure
    Windows Azure Pricing
    Peter Saddow
    Senior Program Manager – MAP Toolkit Team
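    Returning to the WMI prerequisite called out in step 4: on Windows 7 / Windows Server 2008 R2-era target machines, the built-in WMI firewall rule group and the Remote Registry service can be enabled from an elevated command prompt. This is only a sketch of one common setup, not official MAP guidance; check your own security policy before applying it:
    netsh advfirewall firewall set rule group="windows management instrumentation (wmi)" new enable=yes
    sc config RemoteRegistry start= auto
    net start RemoteRegistry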

    Read the article

  • Cannot install Acrobat Reader on Ubuntu 12.04.1

    - by user91137
    I have been trying to install Acrobat Reader on my Ubuntu 12.04.1. In the beginning, I tried to install it from the Software Center, but it crashes with the report:
    (Reading database ... 189311 files and directories currently installed.)
    Unpacking acroread (from .../acroread_9.5.1-1precise1_i386.deb) ...
    dpkg: error processing /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb (--unpack):
    trying to overwrite '/usr/bin/acroread', which is also in package adobereader-ptb 8.1.7-2
    No apport report written because MaxReports is reached already
    Processing triggers for desktop-file-utils ...
    Processing triggers for bamfdaemon ...
    Rebuilding /usr/share/applications/bamf.index...
    Processing triggers for gnome-menus ...
    Processing triggers for man-db ...
    Errors were encountered while processing:
    /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb
    As a solution, I tried to install it via the terminal with $ sudo apt-get install acroread and received the following:
    arcanjo@arcanjo:~$ sudo apt-get install acroread
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Suggested packages: libldap2 libgnome-speech7
    The following NEW packages will be installed: acroread
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 60.1 MB of archives.
    After this operation, 142 MB of additional disk space will be used.
    Get:1 http://archive.canonical.com/ubuntu/ precise/partner acroread i386 9.5.1-1precise1 [60.1 MB]
    Fetched 60.1 MB in 4min 17s (234 kB/s)
    (Reading database ... 189311 files and directories currently installed.)
    Unpacking acroread (from .../acroread_9.5.1-1precise1_i386.deb) ...
    dpkg: error processing /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb (--unpack):
    trying to overwrite '/usr/bin/acroread', which is also in package adobereader-ptb 8.1.7-2
    No apport report written because MaxReports is reached already
    Processing triggers for desktop-file-utils ...
    Processing triggers for bamfdaemon ...
    Rebuilding /usr/share/applications/bamf.index...
    Processing triggers for gnome-menus ...
    Processing triggers for man-db ...
    Errors were encountered while processing:
    /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    arcanjo@arcanjo:~$
    I've already tried updating and upgrading with apt-get, removing and reinstalling the Software Center, and deleting the "problematic" files before updating again. Nothing seems to work. Any solutions?
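    The dpkg message itself points at the cause: both acroread and the older adobereader-ptb package ship /usr/bin/acroread. One plausible fix, sketched here on the assumption that you no longer need the Reader 8 (adobereader-ptb) package, is to remove the conflicting package first and then retry the install:
    sudo apt-get remove adobereader-ptb
    sudo apt-get install acroread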

    Read the article

  • Personal Development: Time, Planning, Repairs & Maintenance

    - by Rajesh Pillai
    Personal Development: Time, Planning, Repairs & Maintenance
    These are just my thoughts, but you may find something interesting in them. Please think it over. We may know many things, yet we keep procrastinating. I have written this because I have heard many people come back and say they don't have time to do the things they like. These are my thoughts, but they may be useful to someone else too.
    Certain things in life need periodic repairs and maintenance. To cite some examples: your CAR, your HOUSE, your personal laptop/desktop, your health, etc. Likewise, there are certain other things in professional life that require repair, maintenance, or some kind of polishing, so that you always stay on top of them. But they are not always obvious. Some of them are:
    - Improving your communication skills
    - Increasing your vocabulary
    - Upgrading your technical skills
    - Pursuing your hobby
    - Increasing your knowledge/awareness etc... etc...
    And then there are certain things that we are always short of... one is TIME. We all know TIME is one of the most precious things in life, and yet we are all very miserable at managing it. Remember: you can only manage it, not control it. You can only control what you own or what you create. In theory time is infinite, so there should be an abundance of it. But remember one thing, you know this: it's not reversible. Once it has elapsed, you cannot live it again. Think it over.
    So, how do you find that golden 25th hour every day? To find the 25th hour you need to reflect back on your current daily activities. Analyze them and see where you are spending most of your time and whether it is really important. Even the 8 hours that you spend in the office -- are they spent fruitfully? At the end of the day, were those 8 precious hours worth it? Just reflect back on your activities. Did you learn something? If yes, did you make a point to NOTE IT? If you didn't NOTE it, then was the time you spent really worth it? Just ponder over it.
    Some calculations of your daily activities where most of the time is spent. Let's start (in no particular order, though):
    - Sleep (6.5 hours) [Remember, you only require 6 good hours of sleep every day. Some may think it is 8, but that's a myth.] To achieve 6 hours of sleep and be in good health, you can practice 15 minutes of daily meditation, so effectively you can round it to 6.5 hours.
    - Morning chores (2 hours): some may need to prepare breakfast and all other things.
    - Office commuting (avg. to and fro, 3 hours)
    - Office work (avg. 9.5 hours)
    Total: 21 hours of effective time which is spent irrespective of what you do. There may be some variations here and there. Still, you have 3 hours EXTRA. Where do these 3 hours go? If you can find them, you may get that golden 25th hour out of these 3 hours. Let's discount 2 hours for contingencies; that still leaves you 1 hour. If you can't find it, then you are living a directionless life. As you can see, the 25th hour lies within the 24 hours of the day. It's up to each one of us to find it and make use of it.
    Now what can you do with that 25th hour, i.e. 1 extra hour of your life? Imagine the possibilities. Again, some calculations:
    1 hour daily * 30 days = 30 hours every month
    30 hours per month * 12 months = 360 hours every year
    360 hours every year seems very promising. Let's add some contingency -- say, an optimistic 50%. That still leaves you 180 hours every year, which works out to about 30 minutes of extra time every day. That's a hell of a lot of time, if you can manage it.
    These may sound like high talk [yes, they are, unless you apply these simple rules, rationalize your everyday living, and stop procrastinating]. NOTE: I haven't taken weekends, holidays and leave into account, so that leaves us with a lot of buffer time. You can meet family, friends and relatives, handle other tasks, and yet have these 180 pure hours of joy every year. Do whatever you want to do with them.
    So, how important are these 180 hours per year to you? Just think it over. You may use them the way you like:
    - 50 hours [pursue a hobby like drawing, crafting, dancing, juggling, swimming, travelling... anything you like doing but didn't have time for]
    - 30 hours [learn a new programming language or technology, i.e. get comfortable with it]
    - 50 hours [improve existing skills]
    - 20 hours [improve your communication skills; do some light reading]
    - 30 hours [YOU DECIDE WHAT TO DO]
    So, if you had done this for one year, you would have learnt a new programming language, upgraded existing skills, improved your communication, and more. If you had done this for two years, imagine the level of personal development or growth you might have attained. If you had done this for three years... now I think I don't need to spell it out.
    So, you still have TIME; as they say, TIME is infinite. Make judicious use of this precious thing. And never, ever come back saying "I don't have time". If you are RICH in TIME, everything else will automatically be taken care of, as those things may just be a byproduct of how you spend your time. So, happy TIMING your TIME every day.

    Read the article

  • nfs-kernel-server installation: file does not exist

    - by Stuti Rastogi
    I am extremely new to Ubuntu and need to work on the EdX platform. For that, I need to install the NFS Client on Ubuntu 12.04. I ran the following:
    stuti@stuti:/$ sudo apt-get install nfs-kernel-server
    However, this gives me an error as follows:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following extra packages will be installed: nfs-common
    The following NEW packages will be installed: nfs-common nfs-kernel-server
    0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/355 kB of archives.
    After this operation, 1,222 kB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    Selecting previously unselected package nfs-common.
    (Reading database ... 200367 files and directories currently installed.)
    Unpacking nfs-common (from .../nfs-common_1%3a1.2.5-3ubuntu3.1_i386.deb) ...
    Selecting previously unselected package nfs-kernel-server.
    Unpacking nfs-kernel-server (from .../nfs-kernel-server_1%3a1.2.5-3ubuntu3.1_i386.deb) ...
    Processing triggers for ureadahead ...
    Processing triggers for man-db ...
    Setting up nfs-common (1:1.2.5-3ubuntu3.1) ...
    statd start/running, process 4574
    gssd stop/pre-start, process 4603
    idmapd start/running, process 4643
    Setting up nfs-kernel-server (1:1.2.5-3ubuntu3.1) ...
    update-rc.d: /etc/init.d/nfs-kernel-server: file does not exist
    dpkg: error processing nfs-kernel-server (--configure):
    subprocess installed post-installation script returned error exit status 1
    No apport report written because MaxReports is reached already
    Errors were encountered while processing:
    nfs-kernel-server
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    I have tried:
    sudo apt-get autoremove nfs-kernel-server
    sudo apt-get autoremove nfs-common
    After these, I tried to install again, but I keep getting the same error. apt-get update and upgrade also do not help and give the same error. I am clueless as to where I can find the missing file mentioned in the output. I tried to Google this problem, but none of the solutions I came across have helped, or I have not been able to understand some of them. Any help would really be appreciated. Thanks in advance for your time and attention.
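    Since the post-installation script fails because /etc/init.d/nfs-kernel-server is missing, one avenue worth trying is a full purge followed by a clean install. This is only a sketch of the usual approach; the key point is that purge (unlike autoremove) also clears the packages' configuration state, so the init script gets laid down again:
    sudo apt-get purge nfs-kernel-server nfs-common
    sudo apt-get update
    sudo apt-get install nfs-kernel-server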

    Read the article

  • The Data Scientist

    - by BuckWoody
    A new term - well, perhaps not that new - has come up and I'm actually very excited about it. The term is Data Scientist, and since it's new, it's fairly undefined. I'll explain what I think it means, and why I'm excited about it.
    In general, I've found the term deals at its most basic with analyzing data. Of course, we all do that, and the term itself in that definition is redundant. There is no science that I know of that does not work with analyzing lots of data. But the term seems to refer to more than the common practices of looking at data visually, putting it in a spreadsheet or report, or even using simple coding to examine data sets. The term Data Scientist (as far as I can make out this early in its use) is someone who has a strong understanding of data sources, relevance (statistical and otherwise) and processing methods as well as front-end displays of large sets of complicated data. Some - but not all - Business Intelligence professionals have these skills. In other cases, senior developers, database architects or others fill these needs, but in my experience, many lack the strong mathematical skills needed to make these choices properly.
    I've divided the knowledge base for someone who would wear this title into three large segments. It remains to be seen if a given Data Scientist would be responsible for knowing all these areas or would specialize. There are pretty high requirements on the math side, specifically in graduate-degree level statistics, but in my experience a company will only have a few of these folks, so they are expected to know quite a bit in each of these areas.
    Persistence
    The first area is finding, cleaning and storing the data. In some cases, no cleaning is done prior to storage - it's just identified and the cleansing is done in a later step. This area is where the professional would be able to tell if a particular data set should be stored in a Relational Database Management System (RDBMS), across a set of key/value pair storage (NoSQL) or in a file system like HDFS (part of the Hadoop landscape) or other methods. Or do you examine the stream of data without storing it in another system at all? This is an important decision - it's a foundation choice that deals not only with the expense of purchasing systems or even using Cloud Computing (PaaS, SaaS or IaaS) to source it, but also the skillsets and other resources needed to care for and feed the system for a long time. The Data Scientist sets something into motion that will probably outlast his or her career at a company or organization. Often these choices are made by senior developers, database administrators or architects in a company. But sometimes each of these has a certain bias towards making a decision one way or another. The Data Scientist would examine these choices in light of the data itself, starting perhaps even before the business requirements are created. The business may not even be aware of all the strategic and tactical data sources that they have access to.
    Processing
    Once the decision is made to store the data, the next set of decisions is based around how to process the data. An RDBMS scales well to a certain level, and provides a high degree of ACID compliance as well as offering a well-known set-based language to work with this data. In other cases, scale should be spread among multiple nodes (as in the case of Hadoop landscapes or NoSQL offerings) or even across a Cloud provider like Windows Azure Table Storage. In fact, in many cases - most of the ones I'm dealing with lately - the data should be split among multiple types of processing environments. This is a newer idea. Many data professionals simply pick a methodology (RDBMS with Star Schemas, NoSQL, etc.) and put all data there, regardless of its shape, processing needs and so on. A Data Scientist is familiar not only with the various processing methods, but with how they work, so that they can choose the right one for a given need. This is a huge time commitment, hence the need for a dedicated title like this one.
    Presentation
    This is where the need for a Data Scientist is most often already being filled, sometimes with more or less success. The latest Business Intelligence systems are quite good at allowing you to create amazing graphics - but it's the data behind the graphics that is the most important component of truly effective displays. This is where the mathematics requirement of the Data Scientist title is the most unforgiving. In fact, someone without a good foundation in statistics is not a good candidate for creating reports. Even a basic level of statistics can be dangerous. Anyone who works in analyzing data will tell you that there are multiple errors possible when data just seems right - and basic statistics bears out that you're on the right track - that are only solvable when you understand why the statistical formula works the way it does. And there are lots of ways of presenting data. Sometimes all you need is a "yes" or "no" answer that can only come after heavy analysis work. In that case, a simple e-mail might be all the reporting you need. In others, complex relationships and multiple components require a deep understanding of the various graphical methods of presenting data. Knowing which kind of chart, color, graphic or shape conveys a particular datum best is essential knowledge for the Data Scientist.
    Why I'm excited
    I love this area of study. I like math, stats, and computing technologies, but it goes beyond that. I love what data can do - how it can help an organization. I've been fortunate enough in my professional career these past two decades to work with lots of folks who perform this role at companies from aerospace to medical firms, from manufacturing to retail. Interestingly, the size of the company really isn't germane here. I worked with one very small bio-tech (cryogenics) company that worked deeply with analysis of complex interrelated data. So watch this space. No, I'm not leaving Azure or distributed computing or Microsoft. In fact, I think I'm perfectly situated to investigate this role further. We have a huge set of tools, from RDBMS to Hadoop, to allow me to explore. And I'm happy to share what I learn along the way.

    Read the article

  • What Makes a Good Design Critic? CHI 2010 Panel Review

    - by jatin.thaker
    Author: Daniel Schwartz, Senior Interaction Designer, Oracle Applications User Experience Oracle Applications UX Chief Evangelist Patanjali Venkatacharya organized and moderated an innovative and stimulating panel discussion titled "What Makes a Good Design Critic? Food Design vs. Product Design Criticism" at CHI 2010, the annual ACM Conference on Human Factors in Computing Systems. The panelists included Janice Rohn, VP of User Experience at Experian; Tami Hardeman, a food stylist; Ed Seiber, a restaurant architect and designer; John Kessler, a food critic and writer at the Atlanta Journal-Constitution; and Larry Powers, Chef de Cuisine at Shaun's restaurant in Atlanta, Georgia. Building off the momentum of his highly acclaimed panel at CHI 2009 on what interaction design can learn from food design (for which I was on the other side as a panelist), Venkatacharya brought together new people with different roles in the restaurant and software interaction design fields. The session was also quite delicious -- but more on that later. Criticism, as it applies to food and product or interaction design, was the tasty topic for this forum and showed that strong parallels exist between food and interaction design criticism. Figure 1. The panelists in discussion: (left to right) Janice Rohn, Ed Seiber, Tami Hardeman, and John Kessler. The panelists had great insights to share from their respective fields, and they enthusiastically discussed as if they were at a casual collegial dinner. John Kessler stated that he prefers to have one professional critic's opinion in general than a large sampling of customers, however, "Web sites like Yelp get users excited by the collective approach. People are attracted to things desired by so many." Janice Rohn added that this collective desire was especially true for users of consumer products. Ed Seiber remarked that while people looked to the popular view for their target tastes and product choices, "professional critics like John [Kessler] still hold a big weight on public opinion." Chef Powers indicated that chefs take in feedback from all sources, adding, "word of mouth is very powerful. We also look heavily at the sales of the dishes to see what's moving; what's selling and thus successful." Hearing this discussion validates our design work at Oracle in that we listen to our users (our diners) and industry feedback (our critics) to ensure an optimal user experience of our products. Rohn considers that restaurateur Danny Meyer's book, Setting the Table: The Transforming Power of Hospitality in Business, which is about creating successful restaurant experiences, has many applicable parallels to user experience design. Meyer actually argues that the customer is not always right, but that "they must always feel heard." Seiber agreed, but noted "customers are not designers," and while designers need to listen to customer feedback, it is the designer's job to synthesize it. Seiber feels it's the critic's job to point out when something is missing or not well-prioritized. In interaction design, our challenges are quite similar, if not parallel. Software tasks are like puzzles that are in search of a solution on how to be best completed. As a food stylist, Tami Hardeman has the demanding and challenging task of presenting food to be as delectable as can be. To present food in its best light requires a lot of creativity and insight into consumer tastes. 
It's no doubt then that this former fashion stylist came up with the ultimate catch phrase to capture the emotion that clients want to draw from their users: "craveability." The phrase was a hit with the audience and panelists alike. Sometime later in the discussion, Seiber remarked, "designers strive to apply craveability to products, and I do so for restaurants in my case." Craveabilty is also very applicable to interaction design. Creating straightforward and smooth workflows for users of Oracle Applications is a primary goal for my colleagues. We want our users to really enjoy working with our products where it makes them more efficient and better at their jobs. That's our "craveability." Patanjali Venkatacharya asked the panel, "if a design's "craveability" appeals to some cultures but not to others, then what is the impact to the food or product design process?" Rohn stated that "taste is part nature and part nurture" and that the design must take the full context of a product's usage into consideration. Kessler added, "good design is about understanding the context" that the experience necessitates. Seiber remarked how important seat comfort is for diners and how the quality of seating will add so much to the complete dining experience. Sometimes if these non-food factors are not well executed, they can also take away from an otherwise pleasant dining experience. Kessler recounted a time when he was dining at a restaurant that actually had very good food, but the photographs hanging on all the walls did not fit in with the overall décor and created a negative overall dining experience. While the tastiness of the food is critical to a restaurant's success, it is a captivating complete user experience, as in interaction design, which will keep customers coming back and ultimately making the restaurant a hit. Figure 2. Patanjali Venkatacharya enjoyed the Sardinian flatbread salad. As a surprise Chef Powers brought out a signature dish from Shaun's restaurant for all the panelists to sample and critique. The Sardinian flatbread dish showcased Atlanta's taste for fresh and local produce and cheese at its finest as a salad served on a crispy flavorful flat bread. Hardeman said it could be photographed from any angle, a high compliment coming from a food stylist. Seiber really enjoyed the colors that the dish brought together and thought it would be served very well in a casual restaurant on a summer's day. The panel really appreciated the taste and quality of the different components and how the rosemary brought all the flavors together. Seiber remarked that "a lot of effort goes into the appearance of simplicity." Rohn indicated that the same notion holds true with software user interface design. A tremendous amount of work goes into crafting straightforward interfaces, including user research, prototyping, design iterations, and usability studies. Design criticism for food and software interfaces clearly share many similarities. Both areas value expert opinions and user feedback. Both areas understand the importance of great design needing to work well in its context. Last but not least, both food and interaction design criticism value "craveability" and how having users excited about experiencing and enjoying the designs is an important goal. Now if we can just improve the taste of software user interfaces, people may choose to dine on their enterprise applications over a fresh organic salad.

    Read the article

  • Problem booting Windows after failed installation Ubuntu 12.04 alongside Windows 7

    - by Tassos
    I tried to install Ubuntu 12.04 on my laptop so that I can dual-boot with Windows 7. I made some mistakes during this process and didn't manage to install Ubuntu. But my real problem now is that I'm afraid I also destroyed the Windows 7 installation. Your help would be precious to me. Here are the details of what I did:
    1) I followed these instructions to dual-boot Ubuntu 12.04 and Windows 7: http://www.linuxbsdos.com/2012/05/17/how-to-dual-boot-ubuntu-12-04-and-windows-7/ The only difference from what is described there is that in my case the device names were /dev/mapper/isw_fdjdhbadc_Volume0* instead of /dev/sda*. Note that I had created a bootable USB stick to do this.
    2) The installation proceeded normally, but at the end I got a fatal error because grub-install failed.
    3) Then, after googling this problem, I ran Ubuntu from the USB stick and ran this command: sudo grub-install --root-directory=/home/ubuntu/temp /dev/mapper/isw_fdjdhbadc_Volume0p5 (isw_fdjdhbadc_Volume0p5 was the partition that I had made for /boot), but this command also failed.
    4) Then I did something stupid (I think): I ran the above command as sudo grub-install --root-directory=/home/ubuntu/temp /dev/mapper/isw_fdjdhbadc_Volume0, namely I tried to install grub on the device isw_fdjdhbadc_Volume0 instead of the boot partition isw_fdjdhbadc_Volume0p5. The above command did not fail and executed OK.
    5) After that, I tried to boot my laptop, but it seemed that I had no operating system. Not even Windows was detected.
    6) I thought that I should uninstall grub from isw_fdjdhbadc_Volume0. So, following some online instructions that I found, I booted Ubuntu from the USB stick again and ran the following command (this was stupid, since the instructions were for a totally different case than mine): sudo dd if=/dev/zero of=/dev/mapper/isw_fdjdhbadc_Volume0 bs=446 count=1
    After that, I was still unable to boot Windows. I realize that I deleted something I shouldn't have, but I hope that it is not crucial and I can recover somehow. When I boot Ubuntu from the USB, I can see that the partition with Windows is still there, with all the directories, Windows files, my data, etc. So, my question is: is there a way to undo the mistakes I described above and recover Windows 7? This is my major question. After solving that, I'd also like to know what I did wrong with the installation of Ubuntu. Thanks in advance for your valuable help!
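    For context on why recovery is plausible here: zeroing the first 446 bytes of a drive overwrites only the MBR boot code, not the partition table, which is why the Windows partition and its data are still visible. One commonly suggested recovery path, sketched on the assumption of a standard Windows 7 install (the isw_* names suggest an Intel fakeraid volume, which can complicate matters), is to boot a Windows 7 installation or system repair disc, open the recovery Command Prompt, and rewrite the boot code:
    bootrec /fixmbr
    bootrec /fixboot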

    Read the article

  • Installing Forms and Reports on a development system

    - by Duncan Mills
    By popular demand I've resurrected / updated one of the old blog postings from Jan Carlin's Blog on GroundSide here. A recent (lengthy) post on the Forms forums chronicles the problems some of you have had installing F&R on a development machine. See the link in the headline of this post for the main one. When installing, here are some points to bear in mind:
    Download and install WebLogic Server first: http://www.oracle.com/technology/software/products/middleware/index.html
    Find the Forms and Reports (and Disco and Portal) zip files here. Download them to the desktop (or some other temporary directory of your choosing). Unzip both of the two zip files into the same new directory (maybe called 'stage') and check that you have 4 directories in the stage dir when you are finished unzipping: 'Disk1', 'Disk2', 'Disk3' and 'Disk4'. These folders are specified in the zip file structure and must be preserved for the setup executable to work. If you use WinZip and have a right-click menu option that says "Extract to here", use that by right-click-dragging the zip file onto the newly created directory. Don't use the "Extract to folder %HOME%\Desktop\ofm_pfrd...disk_1of2" option. That will get you into the trouble that was reported early in this thread.
    Free up as much memory as you can. Stop services, background processes, virus scanners and databases (you don't need a DB to install Forms) and other things lurking about on your machine. You can restart them when the install is done. Around 1.5 GB of free real memory should do it. If it doesn't, free up more if you can. Don't change the swap space unless you know what you are doing; let Windows handle it. A 1 GB machine will likely not be enough. You will likely need at least 2 GB of RAM.
    Start the install with setup.exe from the 'Disk1' directory.
    Choose the Install and Configure option unless you have a good reason not to.
    Choose a unique instance name even if you deinstalled and removed the last install. I suggest using 'asinst_20090722_1' (today's date in ISO format with a running incremented number at the end if you install more than two times on a particular day).
    Unselect Portal and Discoverer and select the Builders you want.
    Unselect WebCache.
    Unselect OHS.
    Unselect the single sign-on option.
    Check for any failures and choose the retry option if any occur. If that doesn't fix the problem, call Oracle Customer Support.
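    As a quick sanity check of the unzip step above, you can confirm the four Disk folders from a command prompt before launching setup.exe. The path below is just a placeholder for wherever you created your stage directory:
    cd /d C:\stage
    dir /b
    (expected output: Disk1, Disk2, Disk3, Disk4)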

    Read the article

  • CodePlex Daily Summary for Sunday, March 28, 2010

    CodePlex Daily Summary for Sunday, March 28, 2010
    New Projects
    Feed Tracker: Feed Tracker allows you to track your favorite feeds (RSS 2.0 and Atom 1.0) and open them up directly in your browser.
    FIM 2010 Resource Management Client: The Forefront Identity Management 2010 Resource Management Client is a library to communicate with the FIM 2010 web service. The development langu...
    Infection Protection: A game about controlling disease outbreak in a city. Developed for OGPC 2010, using Qt.
    OrthoLab: Homepage of Orthocone open-source laboratory.
    Paragliding ThermalMarker: Paragliding / Hanggliding Windows application that receives waypoint files and returns only the thermals that get triggered more often in a place.
    RSSFalls: RssFalls makes it easier for developers to download RSS or Podcast enclosures.
    String Library for C++ Language: StrLib++ is a string library for the C++ language. For now it supports only ANSI strings; later, Unicode support will be added for UTF8, UTF16 and UTF32 for...
    Sweeper: Sweeper is a Visual Studio 2008 add-in for C# that takes care of many of the trivial code-formatting issues that developers run into - particularly...
    System.Common: A .Net library that provides methods, properties and more that the .Net Framework doesn't provide.
    Tiveriad: The framework is designed to help you more easily build modular Windows applications.
    T-Shirts Online: Online shop built in Silverlight 4 using DIBS as payment module.
    New Releases
    ArkSwitch: ArkSwitch v1.1.3: This release has some important changes. Thanks to MichyPrima for helping with some of the code. 1. Improved theming to more easily support multip...
    Catharsis: Catharsis 2.5 on catarsa.com: The Catharsis framework has finally its own portal http://catarsa.com The latest release version is 2.5 - string names of properties are not any ...
    EffiProz - A Pure C# Database: EffiProz CF 1.0: EffiProz for .Net compact framework.
    Encrypted Notes: Encrypted Notes 1.6: This is the latest version of Encrypted Notes (1.6). It has an installer - it will create a directory 'CPascoe' in My Documents. Once you have ext...
    Extend SmallBasic: Teaching Extensions v.009: Added Pentagon Crazy Recipe Quiz
    Gapi.NET - .NET (C#) wrapper for Google API: Gapi.NET 0.5.0.0: - Fixed some minor bugs. - Add minor features. - Performance improvement. See code check-ins for detailed information
    HouseFly experimental controls: HouseFly experimental control: Alpha version of HouseFly experimental controls
    iTuner - The iTunes Companion: iTuner 1.2.3738 Beta 2: V1.2 allows you to synchronize one or more iTunes playlists to a USB MP3 player. Beta 2 resolves all known issues. This continues the evolution ye...
    jQuery Library for SharePoint Web Services: SPServices 0.5.4: IMPORTANT NOTE: This release is in an alpha state. You should only download it if you know what you are getting and are interested in testing it f...
    JSINQ - LINQ to Objects for JavaScript: JSINQ 1.0: This is the first stable release of JSINQ. It is fully compatible with the 0.9 beta release. It contains the following new features: Now supports ...
    MSBuild Mercurial Tasks: 1.0.0 Beta: First release of the application. This version integrates all the basic functionalities of Mercurial as defined in the Use Case 1.
    Open Portal Foundation: Open Portal Foundation V1.4.2: What's new? noscript template was updated; naming convention for layout autogenerated controls now uses the "master" prefix. The documentation ce...
    Open Portal Foundation: Open Portal Foundation V1.4.4: What's new? ASP .NET Master page support for custom aspx integrated pages. New usercontrols for ASCX, ASPX and Master page integration: Link: f...
    Paint.NET PSD Plugin: 1.5.0: RLE compression is now working fully on save. File sizes are now competitive with Photoshop's. Saving takes about twice as long with RLE compressi...
    Paragliding ThermalMarker: ThermalMarker_Alfa0.1: Release Alfa 1
    Simple Service Locator: Simple Service Locator v0.7: The Simple Service Locator is an easy-to-use Inversion of Control library that is a complete implementation of the Common Service Locator interface...
    String Library for C++ Language: Release 0.9: version 0.9 beta release. DO NOT USE IN SERIOUS PROJECTS. This release uses the default application heap, and because Visual Studio is using special debug...
    Sweeper: Sweeper Alpha 1: Sweeper - A Visual Studio add-in for C# code formatting (Visual Studio 2008). Includes: a UI for options; enable or disable any specific task you want ...
    T-Shirts Online: 1.0: First release of the online shop.
    Twilio Server Library for .NET (TSL.NET): v0.1.0 Beta: This is the first release of TSL.NET. This v0.1.0 release is a Beta. Subsequent builds will be posted as v0.1.x and release-candidate Betas will be...
    Vr30 OS: Facebook 1.0: Connects you to Facebook without your web browser.
    Vr30 OS: SkyBlog 1.0: SkyBlog without web browser.
    Vr30 OS: YouTube 1.0: YouTube without web browser.
    Web Image Resize Handler: Web Image Resize, Zoom, Rotate and Greyscale v.1.0: Efficient web image resize, zoom, rotate and greyscale caching handler for ASP.Net.
    WinXound: WinXound 3.3.0 Beta 1 for Mac OsX: This is the first Beta release for Apple Mac OsX (Universal Binary). DEBUG HELP NEEDED! Please signal bugs, suggestions or feedback to: stefano_b...
    Most Popular Projects
    MetaSharp, Rawr, WBFS Manager, ASP.NET Ajax Library, Microsoft SQL Server Product Samples: Database, Silverlight Toolkit, AJAX Control Toolkit, LiveUpload to Facebook, Windows Presentation Foundation (WPF), ASP.NET
    Most Active Projects
    Rawr, jQuery Library for SharePoint Web Services, Managed Extensibility Framework, BlogEngine.NET, Microsoft Biology Foundation, patterns & practices: Composite WPF and Silverlight, LINQ to Twitter, Farseer Physics Engine, Table2Class, NB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

  • Applications: How to create a custom dialog box for Windows Mobile 6 (native)

    - by TechTwaddle
    Ashraf, on the MSDN forum, asks, "Is there a way to make a default choice for the messagebox that happens after a period of time if the user doesn't choose (Clicked ) Yes or No buttons."
    To elaborate, the requirement is to show a message box to the user with certain options to select, and if the user does not respond within a predefined time limit (say 8 seconds), then the message box must dismiss itself and select a default option. Such functionality is not available with the MessageBox() api; you will have to write your own custom dialog box. Surely, creating a dialog box is quite a simple task using the DialogBox() api, and we have been creating full-screen dialog boxes all the while. So how will this custom message box be any different? It's not much different from a regular dialog box except for a few changes in its properties. First, it has a title bar but no buttons on it (no 'x' or 'ok' button on the title bar); it doesn't occupy the full screen; and it contains the controls that you put into it, thus justifying the title 'custom'.
    So in this post we create a custom dialog box with two buttons, 'Black' and 'White'. The user is given 8 seconds to select one of those colours; if the user doesn't make a selection in 8 seconds, the default option 'Black' is selected. Before going into the implementation, here is a video of how the dialog box works;
    Custom dialog box
    To start off, add a new dialog resource to your application, size it appropriately and add whatever controls you need to the dialog. In my case, I added two static text labels and two buttons, as below;
    Now we need to write the window procedure for this dialog. Here is the complete function:
    BOOL CALLBACK CustomDialogProc(HWND hDlg, UINT uMessage, WPARAM wParam, LPARAM lParam)
    {
        int wmID, wmEvent;
        PAINTSTRUCT ps;
        HDC hdc;
        static int timeCount = 0;

        switch(uMessage)
        {
            case WM_INITDIALOG:
                {
                    SHINITDLGINFO shidi;
                    memset(&shidi, 0, sizeof(shidi));
                    shidi.dwMask = SHIDIM_FLAGS;
                    //shidi.dwFlags = SHIDIF_DONEBUTTON | SHIDIF_SIPDOWN | SHIDIF_SIZEDLGFULLSCREEN | SHIDIF_EMPTYMENU;
                    shidi.dwFlags = SHIDIF_SIPDOWN | SHIDIF_EMPTYMENU;
                    shidi.hDlg = hDlg;
                    SHInitDialog(&shidi);
                    SHDoneButton(hDlg, SHDB_HIDE);
                    timeCount = 0;
                    SetWindowText(GetDlgItem(hDlg, IDC_STATIC_TIME_REMAINING), L"Time remaining: 8 second(s)");
                    SetTimer(hDlg, MY_TIMER, 1000, NULL);
                }
                return TRUE;

            case WM_COMMAND:
                {
                    wmID = LOWORD(wParam);
                    wmEvent = HIWORD(wParam);
                    switch(wmID)
                    {
                        case IDC_BUTTON_BLACK:
                            KillTimer(hDlg, MY_TIMER);
                            EndDialog(hDlg, IDC_BUTTON_BLACK);
                            break;
                        case IDC_BUTTON_WHITE:
                            KillTimer(hDlg, MY_TIMER);
                            EndDialog(hDlg, IDC_BUTTON_WHITE);
                            break;
                    }
                }
                break;

            case WM_TIMER:
                {
                    if (wParam == MY_TIMER)
                    {
                        WCHAR wszText[128];
                        memset(&wszText, 0, sizeof(wszText));
                        timeCount++;
                        //8 seconds are over, dismiss the dialog, select def value
                        if (timeCount >= 8)
                        {
                            KillTimer(hDlg, MY_TIMER);
                            EndDialog(hDlg, IDC_BUTTON_BLACK_DEF);
                        }
                        wsprintf(wszText, L"Time remaining: %d second(s)", 8 - timeCount);
                        SetWindowText(GetDlgItem(hDlg, IDC_STATIC_TIME_REMAINING), wszText);
                        UpdateWindow(GetDlgItem(hDlg, IDC_STATIC_TIME_REMAINING));
                    }
                }
                break;

            case WM_PAINT:
                {
                    hdc = BeginPaint(hDlg, &ps);
                    EndPaint(hDlg, &ps);
                }
                break;
        }
        return FALSE;
    }
    The MSDN documentation mentions that you need to specify the flag WS_NONAVDONEBUTTON, but I got an error saying that the value could not be found, so we can ignore this for now. Next up, while calling SHInitDialog() for your custom dialog, make sure that you don't specify SHIDIF_DONEBUTTON in the dwFlags member of the SHINITDLGINFO structure; this flag makes the 'ok' button appear on the dialog title bar. Finally, we need to call SHDoneButton() with the SHDB_HIDE flag to, well, hide the Done button. The 'Done' button is the same as the 'ok' button, so this step might seem redundant, and the dialog works fine without calling SHDoneButton() too, but it's better to stick with the documentation (;
    So you can see that we have followed all these steps above, under WM_INITDIALOG. We also set up a few things, like a variable to keep track of the time, and started a one-second timer. Every time the timer fires, we receive a WM_TIMER message, and we update the static label displaying the amount of time left to the user. If 8 seconds go by without the user selecting any option, we kill the timer and end the dialog with IDC_BUTTON_BLACK_DEF. This is just a #define'd integer value; make sure it's unique. You'll see why this is important. If the user makes a selection, either Black or White, we kill the timer and end the dialog with the corresponding selection the user made, that is, either IDC_BUTTON_BLACK or IDC_BUTTON_WHITE.
    Ok, so now our custom dialog is ready to be used. I invoke it from a menu entry in the main window as below,
    case IDM_MENU_CUSTOMDLG:
        {
            int ret = DialogBox(g_hInst, MAKEINTRESOURCE(IDD_CUSTOM_DIALOG), hWnd, CustomDialogProc);
            switch(ret)
            {
                case IDC_BUTTON_BLACK_DEF:
                    SetWindowText(g_hStaticSelection, L"You Selected: Black (default)");
                    break;
                case IDC_BUTTON_BLACK:
                    SetWindowText(g_hStaticSelection, L"You Selected: Black");
                    break;
                case IDC_BUTTON_WHITE:
                    SetWindowText(g_hStaticSelection, L"You Selected: White");
                    break;
            }
            UpdateWindow(g_hStaticSelection);
        }
        break;
    So you see why ending the dialog with the corresponding value was important; that's what DialogBox() returns. And in the main window I update a static text label to show which option was selected. I cranked this out in about an hour, and unfortunately don't have time for a managed C# version. That will have to be another post, if I manage to get it working that is (;

    Read the article

  • 14+ WordPress Portfolio Themes

    - by Edward
    There are various portfolio themes for WordPress out there; with this collection we are trying to help you choose the best one. These themes can be used to create any type of personal, photography, art or corporate portfolio.
    Display 3 in 1
    Display 3 in 1 – Business & Portfolio WordPress Theme. It features a fantastic 3D image slideshow that can be controlled from your backend with a custom tool. The theme has a huge WordPress custom backend (8 additional admin pages) that makes customization easy for those who don't know much about coding or WordPress. Price: $40 View Demo Download
    DeepFocus
    Tempting features such as automatic separation of blog and portfolio content by template, publishing of the most important information on the homepage, styles to choose from, and many more. It also provides page templates for blog, portfolio, blog archive, tags, etc. Best of all, it helps you manage everything from one place. Price: $39 (Package includes more than 55 themes) View Demo Download
    SimplePress
    Simple, yet awesome. One of the best portfolio themes. Price: $39 (Package includes more than 55 themes) View Demo Download
    Graphix
    Graphix is one of the best WordPress portfolio themes. It is best suited to aspiring designers, developers, artists and photographers who'd like a framework theme with a great-looking portfolio and a feature-rich blog. It has a theme options page, 5 color styles, SEO options, featured content blocks, a drop-down multi-level menu, social profile link custom widgets, custom posts, custom page templates, etc. Price: $69 Single & $149 Developer Package View Demo Download
    Bizznizz
    It boasts many features, such as a custom homepage, custom post types, custom widgets, portfolio templates, alternative styles and many more. View Demo Download
    Showtime
    The ultimate WordPress theme for creating your web portfolio. It has 3 different styles for you to choose from. Price: $40 View Demo Download
    Montana WP Horizontal Portfolio Theme
    Montana Theme – WP Horizontal Portfolio Theme, best suited for creative studios to showcase design, photography, illustration, paintings and art. Price: $30 View Demo Download
    OverALL
    OverALL Premium WordPress Blog & Portfolio Theme is low priced and packed with features. Price: $17 View Demo Download
    Habitat
    Habitat – Blog and Portfolio Theme. Unique portfolio sorting/filtering with a custom jQuery script (each entry supports multiple images or a video), and multiple featured images for each post to generate individual slideshows per post, or the option to directly embed video content from YouTube, Vimeo, Hulu, etc. Price: $35 View Demo Download
    Fresh Folio
    Fresh Folio from WooThemes can be used as both a portfolio and a premium WordPress theme. The theme is a remix of the Fresh News Theme and Proud Folio Theme which combines all the best elements of the respective blog and portfolio style themes. View Demo Download
    Fresh Folio features:
    Can be used to create an impressive portfolio.
    7 diverse theme styles to choose from (default, blue, red, grunge light, grunge floral, antique, blue creamer, nightlife).
    The template will automatically (visually) separate your blog & portfolio content, making this an amazing theme for aspiring designers, developers, artists, photographers etc.
    Unique page template types for the portfolio, blog, blog archives, tags & search results.
    Integrated Theme Options (for WordPress) to tweak the layout, colour scheme etc. for the theme.
    Optional automatic image resize, which is used to dynamically create the thumbnails and featured images.
    Includes widget-enabled sidebars.
    eGallery
    eGallery is a theme made to transform your WordPress blog into a fully functional online portfolio. The theme is perfectly designed to emphasize the artwork you choose to showcase. The design has been greatly enhanced using JavaScript, and is easy to implement. Price: $39 (Package includes more than 55 themes) View Demo Download
    ProudFolio
    ProudFolio is a premium portfolio WordPress theme from Woo Themes. The theme is for designers, developers, artists and photographers who would like a showcase theme that works as a portfolio and also serves the purpose of a blog. ProudFolio puts a strong emphasis on the portfolio pieces, allowing for decent-sized thumbnails, huge fullscreen views via Lightbox, and full details on the single page. The theme file also contains a choice of three different background images and color schemes. Price: $70 Single $150 Developer License View Demo Download
    Features:
    The template will automatically (visually) separate your blog & portfolio content.
    A unique homepage layout, which publishes only the most important information.
    Unique page templates for the portfolio, blog, blog archives, tags & search results.
    Integrated Theme Options (for WordPress) to tweak the layout, colour scheme etc. for the theme.
    Built-in video panel, which you can use to publish any web-based Flash videos.
    Automatic image resize, which is used to dynamically create the thumbnails and featured images.
    Custom page templates for archives, sitemap & image gallery.
    Built-in Gravatar support for authors & comments.
    Integrated banner management script to display randomized banner ads of your choice site-wide.
    Pretty drop-down navigation everywhere.
    Widget-enabled sidebars.
    Portfolio WordPress Theme
    A FREE WordPress theme designed for web portfolios and (for now) just for web portfolios. It comes with an administrative panel from which you can edit the head quote text, edit all theme colors, font families and font sizes, and fill in a curriculum vitae and display it on a special page. Theme demo and download can be found here.
    Viz | Biz
    Viz | Biz is a premium WordPress photo gallery and portfolio theme designed specifically for photographers, graphic designers and web designers who want to display their creative work online, market their services, and have a typical text blog, using the power and flexibility of WordPress. It is priced at $79.95.
    Theme features:
    Premium quality portfolio template.
    Custom logo uploader to replace the standard graphic with your own unique look from the WP Dashboard.
    Integrated blog component (front images are custom fields and thumbnails, but you can also have a typical blog).
    Four tabbed feature areas (About Me, Services, Recent Posts, and Tags).
    Two home page feature photos (you choose which photos to feature using a WP category).
    Manage your online portfolio through the WordPress CMS.
    Crop two sizes of your work: one for the front page thumbnails and another full-size version, and upload to WP.
    Search engine optimized.
    Related posts:
    14 WordPress Photo Blog & Portfolio Themes
    6 PhotoBlog Portfolio WordPress Themes
    Professional WordPress Business Themes

    Read the article

  • What Makes a Good Design Critic? CHI 2010 Panel Review

    - by Applications User Experience
    Author: Daniel Schwartz, Senior Interaction Designer, Oracle Applications User Experience

Oracle Applications UX Chief Evangelist Patanjali Venkatacharya organized and moderated an innovative and stimulating panel discussion titled "What Makes a Good Design Critic? Food Design vs. Product Design Criticism" at CHI 2010, the annual ACM Conference on Human Factors in Computing Systems. The panelists included Janice Rohn, VP of User Experience at Experian; Tami Hardeman, a food stylist; Ed Seiber, a restaurant architect and designer; Jonathan Kessler, a food critic and writer at the Atlanta Journal-Constitution; and Larry Powers, Chef de Cuisine at Shaun's restaurant in Atlanta, Georgia. Building off the momentum of his highly acclaimed panel at CHI 2009 on what interaction design can learn from food design (for which I was on the other side as a panelist), Venkatacharya brought together new people with different roles in the restaurant and software interaction design fields. The session was also quite delicious, but more on that later. Criticism, as it applies to food and product or interaction design, was the tasty topic for this forum, and the discussion showed that strong parallels exist between food and interaction design criticism.

Figure 1. The panelists in discussion: (left to right) Janice Rohn, Ed Seiber, Tami Hardeman, and Jonathan Kessler.

The panelists had great insights to share from their respective fields, and they discussed the topic as enthusiastically as if they were at a casual collegial dinner. Jonathan Kessler stated that he generally prefers one professional critic's opinion to a large sampling of customers; however, "Web sites like Yelp get users excited by the collective approach. People are attracted to things desired by so many." Janice Rohn added that this collective desire was especially true for users of consumer products. Ed Seiber remarked that while people looked to the popular view for their target tastes and product choices, "professional critics like John [Kessler] still hold a big weight on public opinion." Chef Powers indicated that chefs take in feedback from all sources, adding, "word of mouth is very powerful. We also look heavily at the sales of the dishes to see what's moving; what's selling and thus successful." Hearing this discussion validates our design work at Oracle, in that we listen to our users (our diners) and industry feedback (our critics) to ensure an optimal user experience of our products. Rohn considers that restaurateur Danny Meyer's book, Setting the Table: The Transforming Power of Hospitality in Business, which is about creating successful restaurant experiences, has many applicable parallels to user experience design. Meyer actually argues that the customer is not always right, but that "they must always feel heard." Seiber agreed, but noted "customers are not designers," and while designers need to listen to customer feedback, it is the designer's job to synthesize it. Seiber feels it's the critic's job to point out when something is missing or not well prioritized. In interaction design, our challenges are quite similar, if not parallel: software tasks are like puzzles in search of a solution for how they can best be completed. As a food stylist, Tami Hardeman has the demanding and challenging task of presenting food as delectably as possible. Presenting food in its best light requires a lot of creativity and insight into consumer tastes.
It's no doubt then that this former fashion stylist came up with the ultimate catch phrase to capture the emotion that clients want to draw from their users: "craveability." The phrase was a hit with the audience and panelists alike. Sometime later in the discussion, Seiber remarked, "designers strive to apply craveability to products, and I do so for restaurants in my case." Craveabilty is also very applicable to interaction design. Creating straightforward and smooth workflows for users of Oracle Applications is a primary goal for my colleagues. We want our users to really enjoy working with our products where it makes them more efficient and better at their jobs. That's our "craveability." Patanjali Venkatacharya asked the panel, "if a design's "craveability" appeals to some cultures but not to others, then what is the impact to the food or product design process?" Rohn stated that "taste is part nature and part nurture" and that the design must take the full context of a product's usage into consideration. Kessler added, "good design is about understanding the context" that the experience necessitates. Seiber remarked how important seat comfort is for diners and how the quality of seating will add so much to the complete dining experience. Sometimes if these non-food factors are not well executed, they can also take away from an otherwise pleasant dining experience. Kessler recounted a time when he was dining at a restaurant that actually had very good food, but the photographs hanging on all the walls did not fit in with the overall décor and created a negative overall dining experience. While the tastiness of the food is critical to a restaurant's success, it is a captivating complete user experience, as in interaction design, which will keep customers coming back and ultimately making the restaurant a hit. Figure 2. Patnajali Venkatacharya enjoyed the Sardian flatbread salad. As a surprise Chef Powers brought out a signature dish from Shaun's restaurant for all the panelists to sample and critique. The Sardinian flatbread dish showcased Atlanta's taste for fresh and local produce and cheese at its finest as a salad served on a crispy flavorful flat bread. Hardeman said it could be photographed from any angle, a high compliment coming from a food stylist. Seiber really enjoyed the colors that the dish brought together and thought it would be served very well in a casual restaurant on a summer's day. The panel really appreciated the taste and quality of the different components and how the rosemary brought all the flavors together. Seiber remarked that "a lot of effort goes into the appearance of simplicity." Rohn indicated that the same notion holds true with software user interface design. A tremendous amount of work goes into crafting straightforward interfaces, including user research, prototyping, design iterations, and usability studies. Design criticism for food and software interfaces clearly share many similarities. Both areas value expert opinions and user feedback. Both areas understand the importance of great design needing to work well in its context. Last but not least, both food and interaction design criticism value "craveability" and how having users excited about experiencing and enjoying the designs is an important goal. Now if we can just improve the taste of software user interfaces, people may choose to dine on their enterprise applications over a fresh organic salad.

    Read the article

  • How to Format a USB Drive in Ubuntu Using GParted

    - by Trevor Bekolay
    If a USB hard drive or flash drive is not properly formatted, then it will not show up in the Ubuntu Places menu, making it hard to interact with. We’ll show you how to format a USB drive using the tool GParted. Note: Formatting a USB drive will destroy any data currently stored on it. If you think that your USB drive is already properly formatted, but Ubuntu just isn’t picking it up, try unplugging it and plugging it back in to a different USB slot, or restarting your machine with the device plugged in on start-up. Open a terminal by clicking on Applications in the top-left of the screen, then Accessories > Terminal. GParted should be installed by default, but we’ll make sure it’s installed by entering the following command in the terminal: sudo apt-get install gparted To open GParted, enter the following command in the terminal: sudo gparted Find your USB drive in the drop-down box at the top right of the GParted window. The drive should be unallocated – if it has a valid partition on it, then you may be looking at the wrong drive. Note: Make sure you’re on the correct drive, as making changes on the wrong hard drive with GParted can delete all data on a hard drive! Assuming you’re on the right drive, right-click on the unallocated grey block and click New. In the window that pops up, change the File System to fat32 for USB Flash Drives, NTFS for USB Hard Drives that will be used in Windows, or ext3/ext4 for USB Hard Drives that will be used exclusively in Linux. Add a label if you’d like, and then click Add. Click the green checkmark and then the Apply button to apply the changes. GParted will now format your drive. If you’re formatting a large USB Hard Drive, this can take some time. Once the process is done, you can close GParted, and the drive will now show up in the Places menu. Clicking on the drive will mount it and open it in a File Browser window. It will also add a shortcut to the drive on the Desktop by default. Your USB drive is now ready to store your files!
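If you prefer to work entirely in the terminal, the GParted steps above can be approximated with parted and mkfs. A rough sketch follows; /dev/sdb is an assumed device name, so verify yours with lsblk first, because formatting the wrong disk destroys its data.

# CLI approximation of the GParted workflow (a sketch; /dev/sdb is an
# assumption – verify the device with lsblk before running anything)
lsblk                                          # identify the USB drive
sudo parted --script /dev/sdb mklabel msdos    # write a fresh partition table
sudo parted --script /dev/sdb mkpart primary fat32 1MiB 100%
sudo mkfs.vfat -F 32 -n USBDRIVE /dev/sdb1     # FAT32, as for flash drives
# NTFS (Windows USB hard drives): sudo mkfs.ntfs -Q -L USBDRIVE /dev/sdb1
# ext4 (Linux-only drives):       sudo mkfs.ext4 -L USBDRIVE /dev/sdb1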

    Read the article

  • The Internet of Things Is Really the Internet of People

    - by HCM-Oracle
    By Mark Hurd - Originally Posted on LinkedIn As I speak with CEOs around the world, our conversations invariably come down to this central question: Can we change our corporate cultures and the ways we train and reward our people as rapidly as new technology is changing the work we do, the products we make and how we engage with customers? It’s a critical consideration given today’s pace of disruption, which already is straining traditional management models and HR strategies. Winning companies will bring innovation and vision to their employees and partners by attracting people who will thrive in this emerging world of relentless data, predictive analytics and unlimited what-if scenarios. So, where are we going to find employees who are as familiar with complex data as I am with orderly financial statements and business plans? I’m not just talking about high-end data scientists who most certainly will sit at or near the top of the new decision-making pyramid. Global organizations will need creative and motivated people who will devote their time to manipulating, reviewing, analyzing, sorting and reshaping data to drive business and delight customers. This might seem evident, but my conversations with business people across the globe indicate that only a small number of companies get it. In the past few years, executives have been busy keeping pace with seismic upheavals, including the rise of social customer engagement, the rapid acceleration of product-development cycles and the relentless move to mobile-first. But all of that, I think, is the start of an uphill climb to the top of a roller-coaster. Today, about 10 billion devices across the globe are connected to the Internet. In a couple of years, that number will probably double, and not because we will have bought 10 billion more computers, smart phones and tablets. This unprecedented explosion of Big Data is being triggered by the Internet of Things, which is another way of saying that the numerous intelligent devices touching our everyday lives are all becoming interconnected. Home appliances, food, industrial equipment, pets, pharmaceutical products, pallets, cars, luggage, packaged goods, athletic equipment, even clothing will be streaming data. Some data will provide important information about how to run our businesses and lead healthier lives. Much of it will be extraneous. How does a CEO cope with this unimaginable volume and velocity of data, much less harness it to excite and delight customers? Here are three things CEOs must do to tackle this challenge: 1) Take care of your employees, take care of your customers. Larry Ellison recently noted that the two most important priorities for any CEO today revolve around people: Taking care of your employees and taking care of your customers. Companies in today’s hypercompetitive business environment simply won’t be able to survive unless they’ve got world-class people at all levels of the organization. CEOs must demonstrate a commitment to employees by becoming champions for HR systems that empower every employee to fully understand his or her job, how it ties into the corporate framework, what’s expected of them, what training is available, and how they can use an embedded social network to communicate, collaborate and excel. Over the next several years, many of the world’s top industrialized economies will see a turnover in the workforce on an unprecedented scale. 
Across the United States, Europe, China and Japan, the “baby boomer” generation will be retiring and, by 2020, we’ll see turnovers in those regions ranging from 10 to 30 percent. How will companies replace all that brainpower, experience and know-how? How will CEOs perpetuate the best elements of their corporate cultures in the midst of this profound turnover? The challenge will be daunting, but it can be met with world-class HR technology. As companies begin replacing up to 30 percent of their workforce, they will need thousands of new types of data-native workers to exploit the Internet of Things in the service of the Internet of People. The shift in corporate mindset here can’t be overstated. The CEO has to be at the forefront of this new way of recruiting, training, motivating, aligning and developing truly 21st-century talent. 2) Start thinking today about the Internet of People. Some forward-looking companies have begun pursuing the “democratization of data.” This allows more people within a company greater access to data that can help them make better decisions, move more quickly and keep pace with the changing interests and demands of their customers. As a result, we’ve seen organizations flatten out, growing numbers of well-informed people authorized to make decisions without corporate approval and a movement of engagement away from headquarters to the point of contact with the customer. These are profound changes, and I’m a huge proponent. As I think about what the next few years will bring as companies become deluged with unprecedented streams of data, I’m convinced that we’ll need dramatically different organizational structures, decision-making models, risk-management profiles and reward systems. For example, if a car company’s marketing department mines incoming data to determine that customers are shifting rapidly toward neon-green models, how many layers of approval, review, analysis and sign-off will be needed before the factory starts cranking out more neon-green cars? Will we continue to have organizations where too many people are empowered to say “No” and too few are allowed to say “Yes”? If so, how will those companies be able to compete in a world in which customers have more choices, instant access to more information and less loyalty than ever before? That’s why I think CEOs need to begin thinking about this problem right now, not in a year or two when competitors are already reshaping their organizations to match the marketplace’s new realities. 3) Partner with universities to help create a new type of highly skilled worker. Several years ago, universities introduced new undergraduate as well as graduate-level programs in analytics and informatics as the business need for deeper insights into the booming world of data began to explode. Today, as the growth rate of data continues to soar, we know that the Internet of Things will only intensify that growth. Moreover, as Big Data fuels insights that can be shaped into products and services that generate revenue, the demand for data scientists and data specialists will go on unabated. Beyond that top-level expertise, companies are going to need data-native thinkers at all levels of the organization. Where will this new type of worker come from? I think it’s incumbent on the business community to collaborate with universities to develop new curricula designed to turn out graduates who can capitalize on the data-driven world that the Internet of Things is surely going to create.
These new workers will create opportunities to help their companies in fields as diverse as product design, customer service, marketing, manufacturing and distribution. They will become innovative leaders in fashioning an entirely new type of workforce and organizational structure optimized to fully exploit the Internet of Things so that it becomes a high-value enabler of the Internet of People. Mark Hurd is President of Oracle Corporation and a member of the company's Board of Directors. He joined Oracle in 2010, bringing more than 30 years of technology industry leadership, computer hardware expertise, and executive management experience to his role with the company. As President, Mr. Hurd oversees the corporate direction and strategy for Oracle's global field operations, including marketing, sales, consulting, alliances and channels, and support. He focuses on strategy, leadership, innovation, and customers.

    Read the article

  • Cost Comparison Hard Disk Drive to Solid State Drive on Price per Gigabyte - dispelling a myth!

    - by tonyrogerson
    It is often said that hard disk drive storage is significantly cheaper per GiB than solid state devices – within the database space this is wholly inaccurate. People need to look at the cost of the complete solution, not a single component part in isolation from what is really required to meet the business requirement. Buying a single Hitachi Ultrastar 600GB 3.5” SAS 15Krpm hard disk drive will cost approximately £239.60 (http://scan.co.uk, 22nd March 2012) compared to an OCZ 600GB Z-Drive R4 CM84 PCIe costing £2,316.54 (http://scan.co.uk, 22nd March 2012). I’ve not included the FusionIO ioDrive because there is no public pricing available for it – something I never understand; personally, when companies do this I immediately wonder what they are hiding. Luckily, in FusionIO’s case the product is proven, though it is expensive compared to OCZ’s enterprise offerings. On the face of it, the single 15Krpm hard disk has a price per GB of £0.39, the SSD £3.86; this is what you will see in the press and what sales people will use when comparing the two technologies – do not be fooled by this bullshit, people!

What is the requirement?

The database will have a static size of 400GB, kept static through archiving so that growth and trimming balance out. The client requires resilience. Several hundred call centre staff will query the database; queries will read a small amount of data, but there is no hot spot in the data, so the randomness will range across the entire 400GB of the database. Estimates predict approximately 4,000 IOps at peak times, and because it’s a call centre system the IO latency is important and must remain below 5ms per IO. The balance between read and write is 70% read, 30% write. The requirement is now defined and we have three of the most important pieces of the puzzle: space required, estimated IOps and maximum latency per IO. Something to consider with regard to SQL Server: write activity requires synchronous IO to the storage media, specifically for the transaction log; that means the write thread will wait until the IO is completed and hardened off before the thread can continue execution. The requirement states that 30% of the system activity will be writes, so we can expect a high amount of synchronous activity. The hardware solution now needs to be defined, with two possible options: hard disk or solid state based. The real question is how many hard disks are required to achieve the IO throughput, the latency and the resilience – and ditto for the solid state.

Hard Drive solution

In a test on an HP DL380 with a P410i controller, using IOMeter against a single 15Krpm 146GB SAS drive, with a transfer size of 8KiB against a 40GiB file on a freshly formatted disk where the partition is the only one on the disk (so the 40GiB file sits on the outer edge of the drive, where more sectors can be read before head movement is required): for 100% sequential IO at a queue depth of 16 with 8 worker threads, 43,537 IOps at an average latency of 2.93ms (340 MiB/s); for 100% random IO at the same queue depth and worker threads, 3,733 IOps at an average latency of 34.06ms (34 MiB/s). The same test was done on the same disk, but with a 130GiB test file: for 100% sequential IO at a queue depth of 16 with 8 worker threads, 43,537 IOps at an average latency of 2.93ms (340 MiB/s); for 100% random IO at the same queue depth and worker threads, 528 IOps at an average latency of 217.49ms (4 MiB/s).
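As an aside, readers who want to reproduce this style of test on Linux can approximate the IOMeter run with fio; fio is my substitution here (the author used IOMeter on an HP box), so treat this as a sketch of the 8KiB random-read case rather than the exact methodology.

# Approximate the 8KiB random-read test with fio (a sketch; fio stands
# in for IOMeter): queue depth 16, 8 workers, 40GiB file, 100% random reads
fio --name=randread --filename=/mnt/test/fio40g.bin --size=40g \
    --rw=randread --bs=8k --direct=1 --ioengine=libaio \
    --iodepth=16 --numjobs=8 --runtime=60 --time_based --group_reporting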
From the results it is clear that random performance gets worse as the disk fills up – I’m currently writing an article on short stroking which will cover this in detail. Given the workload is random in nature, the random performance of the single drive when only 40GiB of the 146GB is used gives nearly the IOps required, but the latency is way out. Luckily, I have tested 6 x 146GB 15Krpm SAS drives in a RAID 0 using the same test methodology. For the same test above on a 130GiB file, the performance boost is near linear: for each drive added, throughput goes up by 5 MiB/sec, IOps by 700, and latency reduces by nearly 50% (172 ms, 94 ms, 65 ms, 47 ms, 37 ms, 30 ms). This is because the same 130GiB is spread out more as you add drives (130/1, 130/2, 130/3 etc.), so implicit short stroking occurs: there is less file on each drive, so less head movement is required. The best latency is still 30 ms, but we now have the IOps required – though on a 130GiB file, not the 400GiB we need. A reality check here: the drive randomness is more likely to be 50/50 rather than a full 100%, but the above has highlighted the effect randomness has on the drive, and the more a drive fills with data, the worse the effect.

For argument’s sake, let us assume that for the given workload we need 8 disks to do the job. For resilience reasons we will need 16, because we need to RAID 1+0 them in order to get both the throughput and the resilience; RAID 5 would degrade performance. Cost for hard drives: 16 x £239.60 = £3,833.60. For the hard drives we will also need disk controllers and a separate external disk array, because the likelihood is that the server itself won’t take the drives. A quick spec from DELL for a PowerVault MD1220, which gives dual pathing with 16 x 146GB 15Krpm 2.5” disks, is priced at £7,438.00 – and note it’s probably more once we add the two controller cards to sit in the server, racking etc. The minimum cost, taking the DELL quote as an example, is therefore {Cost of Hardware} / {Storage Required} = £7,438.00 / 400 = £18.595 per GB. £18.59 per GB is a far cry from the £0.39 we had been told by the salesman and the myth. Yes, the storage array is composed of 16 x 146GB disks in RAID 10 (therefore 8 usable), giving an effective usable storage availability of 1,168GB, but the actual storage requirement is only 400GB, and the extra disks have had to be purchased to get the IOps up.

Solid State Drive solution

A single card significantly exceeds the IOps and latency required; for resilience, two will be required: (£2,316.54 x 2) / 400 = £11.58 per GB. With the SSD solution only two PCIe sockets are required – no external disk units, no additional controllers, no redundant controllers etc.

Conclusion

I hope that by showing you an example, the myth that hard disk drives are cheaper per GiB than solid state has now been dispelled: £11.58 per GB for SSD compared to £18.59 for hard disk. I’ve not even touched on the running costs – compare the cost of running 18 hard disks, with all that heat and power, to two PCIe cards! Just a quick note: I’ve left a fair amount of information out due to this being a blog; if in doubt, email me :) I’ll also deal with the myth that SSDs wear out at a later date – that concern is well overdone now; it was valid five years ago, but not today.
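The per-GB arithmetic above is easy to sanity-check; here is a tiny sketch using the figures quoted in the text (March 2012 prices, so the numbers will have moved since).

# Re-run the price-per-GB sums with bc (figures as quoted in the post)
required_gb=400
hdd_cost=7438.00                     # DELL MD1220 quote, 16 disks
ssd_cost=$(echo "2316.54 * 2" | bc)  # two OCZ Z-Drive R4 cards
echo "HDD: $(echo "scale=3; $hdd_cost / $required_gb" | bc) GBP per GB"
echo "SSD: $(echo "scale=3; $ssd_cost / $required_gb" | bc) GBP per GB"
# Prints 18.595 and 11.582 GBP per GB respectively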

    Read the article

  • Find a Faster DNS Server with Namebench

    - by Mysticgeek
    One way to speed up your Internet browsing experience is using a faster DNS server. Today we take a look at Namebench, which will compare your current DNS server against others out there, and help you find a faster one. Namebench Download the file and run the executable (link below). Namebench starts up and will include the current DNS server you have configured on your system. In this example we’re behind a router and using the DNS server from the ISP. Include the global DNS providers and the best available regional DNS server, then start the Benchmark. The test starts to run and you’ll see the queries it’s running through. The benchmark takes about 5-10 minutes to complete. After it’s complete you’ll get a report of the results. Based on its findings, it will show you what DNS server is fastest for your system. It also displays different types of graphs so you can get a better feel for the different results. You can export the results to a .csv file as well so you can present the results in Excel. Conclusion This is a free project that is in continuing development, so results might not be perfect, and there may be more features added in the future. If you’re looking for a method to help find a faster DNS server for your system, Namebench is a cool free utility to help you out. If you’re looking for a public DNS server that is customizable and includes filters, you might want to check out our article on helping to protect your kids from questionable content using OpenDNS. You can also check out how to speed up your web browsing with Google Public DNS. Links Download NameBench for Windows, Mac, and Linux from Google Code Learn More About the Project on the Namebench Wiki Page
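If you want a quick manual spot-check before running a full benchmark, dig can report per-query latency. A small sketch follows; the two addresses are Google Public DNS and OpenDNS, the services mentioned above, and any domain will do as the test lookup.

# Manual DNS latency spot-check (a sketch, separate from Namebench):
# run the same lookup against two public resolvers and compare timings
for server in 8.8.8.8 208.67.222.222; do
    echo "== $server =="
    dig @"$server" howtogeek.com | grep "Query time"
done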

    Read the article

  • SQL University: What and why of database testing

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It’s a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we’ll be talking about database testing and refactoring. In 3 posts we’ll cover: SQLU part 1 – What and why of database testing; SQLU part 2 – What and why of database refactoring; SQLU part 3 – Tools of the trade. With that out of the way, let us sharpen our pencils and get going.

Why test a database

The sad state of the industry today is that there is very little emphasis on testing in general. Test-driven development is still a small niche of the programming world, while refactoring is even smaller. The cause of this is the inability of developers to convince themselves and their managers that writing tests is beneficial. At the moment tests are mostly viewed as a waste of time. This is because the average person (let’s not fool ourselves, we’re all average) is unable to weigh lower future costs against a little more current work; it’s orders of magnitude easier to see the current costs in relation to the current amount of work. That’s why programmers convince themselves testing is a waste of time. However, we have to ask ourselves what tests are really about. Maybe finding bugs? No, not really. If we introduce bugs, we’re likely to write tests around those bugs too. But yes, we can find some bugs with tests. The main point of tests is to have reproducible repeatability in our systems. By having a code base largely covered by tests we can know with better certainty what a small code change can break in other parts of the system. By having repeatability we can make code changes with confidence, since we know we’ll see what breaks in other tests. And here comes the inability to estimate future costs: by spending just a few more hours writing those tests, we’d know instantly what broke where. Imagine we fix a reported bug. We check in the code, deploy it and the users are happy – until we get a call 2 weeks later that a certain monthly process has stopped working. What we don’t know is that this process was developed by a long-gone coworker and for some reason it relied on that same bug we happily fixed. There’s no way we could have known that. We say OK, and go in and fix the monthly process. But what we have no clue about is that there’s an ETL job that relied on data from that monthly process. Now that we’ve fixed the process, it’s giving unexpected (yet correct, since we fixed it) data to the ETL job. So we have to fix that too. But there’s this part of the app we coded that relies on data from that exact ETL job. And just like that we enter the “loop of maintenance horror”. With the loop eventually comes blame. Here’s a nice tip for all developers and DBAs out there: if you make a mistake, man up and admit it. All of the above is valid for any kind of software development. Keeping this in mind, the database is nothing other than just a part of the application – but a big part! One reason why testing a database is even more important than testing an application is that one database is usually accessed from multiple applications and processes. This makes it the central and vital part of the enterprise software infrastructure. Knowing all this, can we really afford not to have tests?

What to test in a database

Now that we’ve decided we’ll dive into this testing thing, we have to ask ourselves what needs to be tested. The short answer is: everything. The long answer is: read on! There are 2 main ways of doing tests: black box and white box testing. Black box testing means we have no idea how the system internals are built and we only have access to its inputs and outputs. With it we test that internal changes to the system haven’t caused its input/output behavior to change. The most important things to test here are the edge conditions – it’s where most programs break. With good edge-condition tests we can be more confident that changes to the system won’t break it. White box testing has full knowledge of the system internals. With it we test internal system changes, different states of the application, etc. White and black box tests should be complementary to each other, as they are very much interconnected.

Testing database routines includes testing stored procedures, views, user-defined functions and anything else you use to access the data. Database routines are your input/output interface to the database system, so they count as black box testing. We test them for 2 things: data and schema. When testing the schema we only care about the columns and the data types they’re returning; after all, the schema is the contract to the outside systems, and if it changes we usually have to change the applications accessing it. One helpful T-SQL command when doing schema tests is SET FMTONLY ON: it tells SQL Server to return only empty result sets, which speeds up tests because no data is returned to the client. After we’ve validated the schema we have to test the returned data. There is no other way to do this but to have the expected data known before the test executes and to compare that data to the database routine’s output.

Testing authentication and authorization helps us validate who has access to the SQL Server box (authentication) and who has access to certain database objects (authorization). For desktop applications and Windows authentication this works well. But the biggest problem here are web apps: they usually connect to the database as a single user. Please ensure that that user is not SA or an account with admin privileges – that is just bad.

Load testing ensures that our database can handle peak loads. One often-overlooked tool for load testing is Microsoft’s OSTRESS tool. It’s part of the RML utilities (x86, x64) for SQL Server and can help determine whether our database server can handle loads like 100 simultaneous users each doing 10 requests per second. SQL Profiler can also help us here by showing why certain queries are slow and what to do to fix them.

One particular problem to think about is how to begin testing existing databases. The first thing we have to do is get to know those databases; we can’t test something when we don’t know how it works. To do this we have to talk to the users of the applications accessing the database, run SQL Profiler to see what queries are being run, use existing documentation to decipher all the object relationships, etc. The way to approach this is to choose one part of the database (say a logical grouping of tables that go together) and filter our traces accordingly. Once we’ve done that we move on to the next grouping, and so on until we’ve covered the whole database. Then we move on to the next one.
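To make the schema test concrete, here is a minimal sketch driven from the shell with sqlcmd. The server, database and procedure names are hypothetical, and it assumes a known-good baseline file was captured once up front.

# Minimal schema regression test (a sketch; server, database, procedure
# and auth details are placeholders). SET FMTONLY ON returns only the
# empty result-set metadata, so no data crosses the wire.
BASELINE="getcustomerorders.schema.baseline"  # from a known-good build
sqlcmd -S myserver -d MyDatabase -E \
       -Q "SET FMTONLY ON; EXEC dbo.GetCustomerOrders @CustomerId = 1;" \
       > current.schema
if diff -q "$BASELINE" current.schema > /dev/null; then
    echo "PASS: result-set schema matches the contract"
else
    echo "FAIL: schema drifted"; diff "$BASELINE" current.schema
fi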
Database testing is a topic we could spend many hours discussing, but let this be a nice intro to that world. See you in the next post.

    Read the article

  • Installing Ubuntu 12.04 on NUC intel i3 DC3217IYE

    - by Kieron
    System: NUC i3 (DC3217IYE), 2 HDMI ports, Ethernet, no wireless, UEFI boot; 2 x 2GB RAM; 30GB mSATA internal drive; Ubuntu 64-bit 12.04.3.

Hello, I am having much trouble loading an OS on my NUC. I started out attempting another OS with various loaders (beast/hack) without much success (various panics on boot, endless reboot loops, or graphics failures), so after many tries I decided to attempt Ubuntu. Many years ago I loaded Ubuntu on an e-machine without an issue, so I figured it would go smoothly; nothing could be further from the truth. The live USB stick loads, but when I am installing the OS it always fails to load GRUB 2. Obviously it won't boot from the SSD without GRUB 2. I searched and found that Boot-Repair should fix it, so I created a live USB with Boot-Repair. It refuses to repair GRUB because the install never finished and the needed partitions are not fully created and flagged; it also demands an internet connection, which I am unable to provide. I then ran GParted in an attempt to create the needed partitions manually, then re-ran Boot-Repair with the "check internet" option turned off and the re-install GRUB option disabled, hoping it would create the missing directories and/or fix the flags. It appeared to run successfully, but upon returning to the Ubuntu live USB it still fails at the GRUB 2 install. GParted also doesn't have the same choices the Ubuntu installer has when creating partitions, causing it to not recognize that I already had a root or an EFI directory, and it sometimes couldn't tell what the format of a partition was – all very annoying. The reason I can't connect to the internet (and can't upload the error logs) is that the NUC only has Ethernet and the location I have to set it up in is too far away from the modem; I cannot move the monitor closer to the modem, as it is a 50-inch LCD. I just want to do a basic install with one user account and remote desktop (VNC) turned on, so I can move the NUC to the modem, connect via Ethernet, and then finish setting it up via remote desktop/VNC Chicken from my Mac. While I await any assistance you may be able to provide, I am going to attempt to switch to the 32-bit version and legacy boot to see if that can load GRUB. Thanks again to anyone who can come up with a possible solution; I would love to hit "erase and install Ubuntu" if anyone can figure out what is stopping that simple answer from working. Also, discs (CD/DVD) are not an option, as neither my Mac mini nor my NUC has an optical drive, and I have no desire to buy one for one task.
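For anyone hitting the same wall, one commonly suggested recovery path (untested on this particular NUC, so treat it as a sketch) is to finish the GRUB install manually from the live USB via chroot; the partition names below are assumptions to be checked with sudo parted -l first.

# Manual GRUB install from the live USB (a sketch; /dev/sda1 as the EFI
# system partition and /dev/sda2 as the Ubuntu root are assumptions)
sudo mount /dev/sda2 /mnt
sudo mount /dev/sda1 /mnt/boot/efi
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt grub-install /dev/sda
sudo chroot /mnt update-grub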

    Read the article

  • apt-get install is not able to access /etc

    - by HorusKol
    I put together an Ubuntu 12.04 server a couple of weeks ago and everything seemed fine until this morning. Suddenly, I'm having trouble installing new packages. At first I thought there was something wrong with tinyproxy, so I tried installing squid instead; however, I get similar results:

Starting tinyproxy: tinyproxy: Could not open config file "/etc/tinyproxy.conf".
...
/var/lib/dpkg/info/squid3.postinst: 1: /var/lib/dpkg/info/squid3.postinst: cannot open /etc/squid3/squid.conf: No such file

It seems that apt-get is not creating the configuration files needed for these programs. I haven't modified any configuration or user groups since the last successful update/install of packages. /etc is present and is populated with a nice healthy tree of configuration files. It is owned and grouped to root, and has the permissions drwxr-xr-x; all the files and folders inside seem to be fine too, as far as I can tell. I've even been able to edit and save a couple as sudo.

Full output from installing tinyproxy:

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  tinyproxy
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/61.6 kB of archives.
After this operation, 201 kB of additional disk space will be used.
Selecting previously unselected package tinyproxy.
(Reading database ... 58916 files and directories currently installed.)
Unpacking tinyproxy (from .../tinyproxy_1.8.3-1_amd64.deb) ...
Processing triggers for ureadahead ...
Processing triggers for man-db ...
Setting up tinyproxy (1.8.3-1) ...
Starting tinyproxy: tinyproxy: Could not open config file "/etc/tinyproxy.conf".
invoke-rc.d: initscript tinyproxy, action "start" failed.
dpkg: error processing tinyproxy (--configure):
 subprocess installed post-installation script returned error exit status 70
Errors were encountered while processing:
 tinyproxy
E: Sub-process /usr/bin/dpkg returned an error code (1)

Result of strace after installation:

18467 open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
18467 open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
18467 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\200\30\2\0\0\0\0\0"..., 832) = 832
18467 open("/etc/tinyproxy.conf", O_RDONLY) = -1 ENOENT (No such file or directory)
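Since the strace output shows the conffile was never unpacked, a reasonable first step (a suggestion, not a confirmed fix for this box) is to ask dpkg to recreate missing conffiles on reinstall:

# Ask dpkg to restore missing conffiles (a sketch; --force-confmiss
# installs a missing conffile without prompting)
sudo apt-get install --reinstall -o Dpkg::Options::="--force-confmiss" tinyproxy
# If that fails, purge and reinstall to start from a clean slate
sudo apt-get purge tinyproxy
sudo apt-get install tinyproxy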

    Read the article
