Search Results

Search found 62161 results on 2487 pages for 'set difference'.


  • SQL SERVER – Weekly Series – Memory Lane – #038

    - by Pinal Dave
    Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which of the following is your favorite article from memory lane. 2007 CASE Statement in ORDER BY Clause – ORDER BY using Variable This article was written at the request of the Application Development Team Leader of my company. His team encountered code where the application was preparing a string for the ORDER BY clause of the SELECT statement. The application was passing this string as a variable to a Stored Procedure (SP), and the SP was using EXEC to execute the SQL string. This is not good for performance, as the Stored Procedure has to recompile every time due to EXEC. sp_executesql can do the same task, but it still does not give the best performance. SSMS – View/Send Query Results to Text/Grid/Files Results to Text – CTRL + T Results to Grid – CTRL + D Results to File – CTRL + SHIFT + F 2008 Introduction to SPARSE Columns Part 2 I wrote about Introduction to SPARSE Columns Part 1. Let us understand the concept of the SPARSE column in more detail. I suggest you read the first part before continuing with this article. All SPARSE columns are stored as one XML column in the database. Let us see some of the advantages and disadvantages of SPARSE columns. Deferred Name Resolution How come an SP can be created successfully when the table name is incorrect, but cannot be created when an incorrect column is used? 2009 Backup Timeline and Understanding of Database Restore Process in Full Recovery Model In general, a database backup in full recovery mode is taken in three different kinds of backup files: Full Database Backup, Differential Database Backup, Log Backup. Restore Sequence and Understanding NORECOVERY and RECOVERY While doing a RESTORE operation, if you are restoring database files, always use the NORECOVERY option, as that will keep the database in a state where more backup files can be restored. This will also keep the database offline to prevent any changes, which could create integrity issues. Once all backup files are restored, run the RESTORE command with the RECOVERY option to bring the database online and operational. Four Different Ways to Find Recovery Model for Database Perhaps the best thing about the technical domain is that most things can be done in more than one way. It is always useful to know about the various methods of performing a single task. Two Methods to Retrieve List of Primary Keys and Foreign Keys of Database When Information Schema is used, we will not be able to discern between primary keys and foreign keys; we will have both kinds of keys together. In the case of the sys schema, we can query the data in our preferred way and can join this table to another table, which can retrieve additional data from the same. Get Last Running Query Based on SPID SPID returns the session ID of the current user process. The acronym SPID comes from the name of its earlier version, Server Process ID. 2010 SELECT * FROM dual – Dual Equivalent Dual is a table that is created by Oracle together with the data dictionary. It consists of exactly one column named “dummy” and one record. The value of that record is X. You can check the content of the DUAL table using the following syntax: SELECT * FROM dual. Identifying Statistics Used by Query Someone asked this question in my training class on query optimization and performance tuning.
“Can I know which statistics were used by my query?” 2011 SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 14 of 31 What are the basic functions for master, msdb, model, tempdb and resource databases? What is the Maximum Number of Indexes per Table? Explain a Few of the New Features of SQL Server 2008 Management Studio Explain IntelliSense for Query Editing Explain MultiServer Query Explain Query Editor Regions Explain Object Explorer Enhancements Explain Activity Monitors SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 15 of 31 What is Service Broker? Where are SQL Server Usernames and Passwords Stored in SQL Server? What is Policy Management? What is Database Mirroring? What are Sparse Columns? What does the TOP Operator Do? What is CTE? What is the MERGE Statement? What is a Filtered Index? Which are the New Data Types Introduced in SQL SERVER 2008? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 16 of 31 What are the Advantages of Using CTE? How can we Rewrite Sub-Queries into Simple Select Statements or with Joins? What is CLR? What are Synonyms? What is LINQ? What are Isolation Levels? What is the Use of the EXCEPT Clause? What is XPath? What is NOLOCK? What is the Difference between an Update Lock and an Exclusive Lock? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 17 of 31 How will you Handle Errors in SQL SERVER 2008? What is RAISERROR? How to Rebuild the Master Database? What is the XML Datatype? What is Data Compression? What is the Use of DBCC Commands? How to Copy the Tables, Schema and Views from one SQL Server to Another? How to Find Tables without Indexes? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 18 of 31 How to Copy Data from One Table to Another Table? What are Catalog Views? What is PIVOT and UNPIVOT? What is a Filestream? What is SQLCMD? What do you mean by TABLESAMPLE? What is ROW_NUMBER()? What are Ranking Functions? What is Change Data Capture (CDC) in SQL Server 2008? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 19 of 31 How can I Track the Changes or Identify the Latest Insert-Update-Delete from a Table? What is CPU Pressure? How can I Get Data from a Database on Another Server? What is the Bookmark Lookup and RID Lookup? What is the Difference between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE? What is the Difference between GETDATE and SYSDATETIME in SQL Server 2008? How can I Check whether Automatic Statistic Update is Enabled or not? How to Find the Index Size for Each Index on a Table? What is the Difference between Seek Predicate and Predicate? What are the Basics of Policy Management? What are the Advantages of Policy Management? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 20 of 31 What are Policy Management Terms? What is the ‘FILLFACTOR’? Where in MS SQL Server is ’100’ equal to ‘0’? What are Points to Remember while Using the FILLFACTOR Argument? What is a ROLLUP Clause? What are the Various Limitations of Views? What is a Covered Index? When I Delete any Data from a Table, does SQL Server reduce the size of that table? What are Wait Types? How to Stop a Log File from Growing too Big? If a Stored Procedure is Encrypted, can we see its definition in Activity Monitor?
2012 Example of Width Sensitive and Width Insensitive Collation Width Sensitive Collation: when a single-byte character (half-width) and the same character represented as a double-byte character (full-width) compare as not equal, the collation is width sensitive. In this example we have one table with two columns. One column has a width sensitive collation and the second column has a width insensitive collation. Find Column Used in Stored Procedure – Search Stored Procedure for Column Name A very interesting conversation about how to find a column used in a stored procedure. There are two different characters in the story, and both are having a conversation about how to find a column in the stored procedure. Here is the two-part story: Part 1 | Part 2 SQL SERVER – 2012 Functions – FORMAT() and CONCAT() – An Interesting Usage Generate Script for Schema and Data – SQL in Sixty Seconds #021 – Video In simple words, in many cases a database must move from one place to another. It is not always possible to back up and restore databases. There are cases when only part of the database (with schema and data) has to be moved. In this video we learn how to easily generate a script for schema and data and move it from one server to another. INFORMATION_SCHEMA.COLUMNS and Value Character Maximum Length -1 I often see the value -1 in the CHARACTER_MAXIMUM_LENGTH column of the INFORMATION_SCHEMA.COLUMNS table. I understand that the length of any column can be between 0 and some large number, but I do not get it when I see a negative value (i.e. -1). Any insight on this subject? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Autoscaling in a modern world… Part 1

    - by Steve Loethen
    It has been a while since I have had time to sit down and blog. I need to make sure I take the time - it helps me to focus on technology and not let the administrivia keep me from doing the things I love. I have been focusing on the cloud for the last couple of years, specifically the PaaS platform from Microsoft called Azure. Time to dig in… I wanted to explore autoscaling. Autoscaling is not a native part of Azure. The platform has the needed connection points: you can write code that looks at the health and performance of your application components and reacts to needed scaling changes. But that means you have to write all the code. Luckily, an add-on to the Enterprise Library provides a lot of code that gets you a long way toward being able to autoscale without having to start from scratch. The tool set is primarily composed of an Autoscaler object that you need to host. This object, when hosted and configured, looks at the performance criteria you specify and adjusts your application based on your needs. Sounds perfect. I started with a set of HOLs (hands-on labs) that gave me a good basis for understanding the mechanics. I worked through labs 1 and 2 just to get the feel, but let's start our saga at the end of lab 3. Lab 3 ends with a web application hosted in Azure and a console app running on premises. The web app has a few buttons on it. One set adds messages to a queue, another removes them. A second set of buttons drives processor utilization to 100%. If you want to guess, a safe bet is that the Autoscaler is configured to react to a queue that has filled up or to high CPU usage. We will continue our saga in the next post…
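
    For anyone who wants to follow along before the next post, hosting the block ("WASABi", the Autoscaling Application Block from the Enterprise Library Integration Pack for Windows Azure) boils down to resolving and starting the Autoscaler. A minimal console-host sketch is below; the rule and service-information stores are XML files referenced from app.config, and the namespaces are taken from the shipped samples, so treat this as a sketch rather than gospel:

      using System;
      using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
      using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling;

      class AutoscalerHost
      {
          static void Main()
          {
              // Resolve the Autoscaler from the Enterprise Library container;
              // it reads the rules store and service information store
              // configured in app.config.
              Autoscaler autoscaler =
                  EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();

              autoscaler.Start();   // begins evaluating rules on the configured schedule
              Console.WriteLine("Autoscaler running. Press Enter to stop.");
              Console.ReadLine();
              autoscaler.Stop();    // stop rule evaluation before shutting down
          }
      }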

    Read the article

  • Differentiate procedural language (C) from OOP languages (C++)

    - by niko
    I have been trying to differentiate C and C++ (or OOP languages) but I don't understand where the difference is. Note that I have never used C++, but I asked my friends to differentiate C and C++. They say C++ has OOP concepts, and also the public and private modes for definition of variables, which C does not have. Seriously, I have done VB.NET programming for a while, 2 to 3 months, and I never faced a situation where I had to use class concepts or modes of definition like public and private. So I thought: what could be the use for these? My friend explained a program to me, saying that if a variable is public it can be accessed anywhere. I said: why not declare it as a global variable like in C? He did not get back to my question. He also said that if a variable is private it cannot be accessed by other functions. I said: why not define it as a local variable? Even this he was unable to answer. No matter where I read, private variables cannot be accessed whereas public variables can be; so why not make public the same as global and private the same as local? What's the difference? What's the real use of public and private? Please don't just say it can be used by everyone; I suppose, why not use some conditions and make the calls? I have heard people saying security reasons. A friend said if a function needs to be accessed it should be inherited first. He explained, saying that only an admin should have some rights, and not everyone, so those functions are made private and inherited only by the admin to use. Then I said why not use an if condition: if (login == "admin") invoke the function. He still did not answer that question. Please clear these things up for me. I have done VB.NET and VBA and a little C++ without using OOP concepts, because I never found their real use while I was writing the code. I'm a little afraid: am I too far behind in OOP concepts?
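
    One concrete way to see the difference the question keeps circling: a private field is not just a longer-lived local variable; it lets the class guard an invariant that no outside code can break, which a global variable can never do. A small sketch (written in C# syntax, since the poster knows VB.NET, but the idea is identical in C++):

      using System;

      public class BankAccount
      {
          // Private: only code inside this class can touch the field, so the
          // "balance never goes negative" rule is enforced in exactly one place.
          private decimal balance;

          public decimal Balance { get { return balance; } } // read-only outside

          public void Deposit(decimal amount)
          {
              if (amount <= 0) throw new ArgumentOutOfRangeException("amount");
              balance += amount;
          }

          public bool TryWithdraw(decimal amount)
          {
              if (amount <= 0 || amount > balance) return false; // rule checked here
              balance -= amount;
              return true;
          }
      }

      // With a global variable there is no such gatekeeper: any code anywhere
      // could write "balance = -500;" and nothing would stop it.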

    Read the article

  • Forwarding a subdomain to the main domain using GoDaddy

    - by Ryan Hayes
    I have a current blog, which was hosted on Tumblr at http://blog.ryanhayes.net. I'm moving it over to http://ryanhayes.net and have all the 301 redirects set up for the blog entries to map to my new blog, which is hosted using GoDaddy (domain included). When I try to set up a subdomain forward, I'm greeted with a nice 403 Forbidden response (as of this writing, you can see it at http://blog.ryanhayes.net). When I ping both the subdomain and the domain, they point to the same IP address, so I know the blog subdomain has at least switched over to point to the same content. I don't really understand why I would get a 403 Forbidden on the same content that I can see perfectly fine via another domain. Currently, I have a CNAME of blog pointing to @, which is how "www" is set up to forward, so I'm assuming it would do the same thing. My question is: what is the proper way to set up my DNS to make the blog subdomain forward to my main domain (301) using the GoDaddy DNS manager? Bonus: What is the background on why I am getting a 403 error the current way? Forbidden You don't have permission to access / on this server. Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request. UPDATE 12/7/2010 The error on the site has been fixed; you can no longer view it from my site.

    Read the article

  • Enum.HasFlag

    - by Scott Dorman
    An enumerated type, also called an enumeration (or just an enum for short), is simply a way to create a numeric type restricted to a predetermined set of valid values with meaningful names for those values. While most enumerations represent discrete values, or well-known combinations of those values, sometimes you want to combine values in an arbitrary fashion. These enumerations are known as flags enumerations because the values represent flags which can be set or unset. To combine multiple enumeration values, you use the logical OR operator. For example, consider the following: public enum FileAccess { None = 0, Read = 1, Write = 2, }   class Program { static void Main(string[] args) { FileAccess access = FileAccess.Read | FileAccess.Write; Console.WriteLine(access); } } The output of this simple console application is 3, the numeric value associated with the combination of FileAccess.Read and FileAccess.Write. Clearly, this isn’t the best representation. What you really want is for the output to look like "Read, Write". To achieve this result, you simply add the Flags attribute to the enumeration. The Flags attribute changes how the string representation of the enumeration value is displayed when using the ToString() method. Although the .NET Framework does not require it, enumerations that will be used to represent flags should be decorated with the Flags attribute since it provides a clear indication of intent. One “problem” with flags enumerations is determining when a particular flag is set. The code to do this isn’t particularly difficult, but unless you use it regularly it can be easy to forget. To test if the access variable has the FileAccess.Read flag set, you would use the following code: (access & FileAccess.Read) == FileAccess.Read Starting with .NET 4, a HasFlag instance method is available on the Enum class which allows you to easily perform these tests: access.HasFlag(FileAccess.Read) This method follows one of the “themes” for the .NET Framework 4, which is to simplify and reduce the amount of boilerplate code like this you must write. Technorati Tags: .NET,C# 4
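
    Putting the post's pieces together, a complete version of the example with the Flags attribute applied might look like this (assuming .NET 4 or later for HasFlag):

      using System;

      [Flags]
      public enum FileAccess
      {
          None = 0,
          Read = 1,
          Write = 2,
      }

      class Program
      {
          static void Main(string[] args)
          {
              FileAccess access = FileAccess.Read | FileAccess.Write;

              Console.WriteLine(access);                  // "Read, Write" thanks to [Flags]
              Console.WriteLine(
                  (access & FileAccess.Read) == FileAccess.Read); // True (manual test)
              Console.WriteLine(access.HasFlag(FileAccess.Read)); // True (.NET 4 shortcut)
          }
      }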

    Read the article

  • Unknown CSS font-family oddity with IE7-10 on Win Vista-8

    - by Jeff
    I am seeing the following "oddity" with IE7-10 on Win Vista-8: when declaring font-family: serif; I see an old bitmapped serif font that I can't identify (see screenshot below) instead of the expected font, Times New Roman. I know it's an old bitmapped font because it displays aliased, without any font smoothing, in IE7-10 on Win Vista-8 (just like Courier on every version of Windows). Screenshot: I would like to know (1) can anyone else confirm my research and (2) BONUS: which font is IE displaying? Notes: IE6 and IE7 on Win XP display Times New Roman, as they should. It doesn't matter if font-family: serif; is declared in an external stylesheet or inline on the element. Quoting the CSS value makes no difference. Adding "Unkown Font" to the stack also makes no difference. New Screenshot: The answer from Jukka below is correct. Here is a new screenshot with Batang (not BatangChe) to illustrate. Hope this helps someone.

    Read the article

  • Oracle Solaris 11 ZFS Lab for OpenWorld 2012

    - by user12626122
    Preface This is the content from the Oracle OpenWorld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - that's probably because in writing it I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year, which was almost too long for mortal man to finish in the 1-hour session. Enjoy. Table of Contents Exercise Z.1: ZFS Pools Exercise Z.2: ZFS File Systems Exercise Z.3: ZFS Compression Exercise Z.4: ZFS Deduplication Exercise Z.5: ZFS Encryption Exercise Z.6: Solaris 11 Shadow Migration Introduction This set of exercises is designed to briefly demonstrate the new features in the Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology - and Compression, which is the complement of Deduplication. The exercises are just introductions - you are referred to the ZFS Administration Guide for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M), with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in VirtualBox running Solaris 11 with 6 virtual 3 GB disks to play with. Exercise Z.1: ZFS Pools Task: You have several disks to use for your new file system. Create a new zpool and a file system within it. Lab: You will check the status of existing zpools, create your own pool and expand it. Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this: root@solaris:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE - root@solaris:~# zpool status pool: rpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 c3t0d0s0 ONLINE 0 0 0 errors: No known data errors Note the disk device the root pool is on - c3t0d0s0. Now you will create your own ZFS pool. First you will check what disks are available: root@solaris:~# echo | format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0 1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0 2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0 3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0 4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0 5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0 6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0 Specify disk (enter its number): Specify disk (enter its number): The root disk is numbered 0. The others are free for use.
Try creating a simple pool and observe the error message: root@solaris:~# zpool create mypool c3t2d0 c3t3d0 'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool So destroy that pool and create a mirrored pool instead: root@solaris:~# zpool destroy mypool root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0 root@solaris:~# zpool status mypool pool: mypool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c3t2d0 ONLINE 0 0 0 c3t3d0 ONLINE 0 0 0 errors: No known data errors Back to top Exercise Z.2: ZFS File Systems Task: You have to create file systems for later exercises. You can see that when a pool is created, a file system of the same name is created: root@solaris:~# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 86.5K 2.94G 31K /mypool Create your filesystems and mountpoints as follows: root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1 The -o option sets the mount point and automatically creates the necessary directory. root@solaris:~# zfs list mypool/mydata1 NAME USED AVAIL REFER MOUNTPOINT mypool/mydata1 31K 2.94G 31K /data1 Back to top Exercise Z.3: ZFS Compression Task: Try out the different forms of compression available in ZFS. Lab: Create a 2nd filesystem with compression, fill both file systems with the same data, and observe the results. You can see from the zfs(1M) manual page that there are several types of compression available to you, set with the property=value syntax: compression=on | off | lzjb | gzip | gzip-N | zle Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N, where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)). Create a second filesystem with compression turned on. Note how you set and get your values separately: root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2 root@solaris:~# zfs set compression=gzip-9 mypool/mydata2 root@solaris:~# zfs get compression mypool/mydata1 NAME PROPERTY VALUE SOURCE mypool/mydata1 compression off default root@solaris:~# zfs get compression mypool/mydata2 NAME PROPERTY VALUE SOURCE mypool/mydata2 compression gzip-9 local Now you can copy the contents of /usr/lib into both your normal and compressing filesystem and observe the results. Don't forget the dot or period (".") in the find(1) command below: root@solaris:~# cd /usr/lib root@solaris:/usr/lib# find . -print | cpio -pdv /data1 root@solaris:/usr/lib# find . -print | cpio -pdv /data2 The copy into the compressing file system takes longer - as it has to perform the compression - but the results show the effect: root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.35G 1.59G 31K /mypool mypool/mydata1 1.01G 1.59G 1.01G /data1 mypool/mydata2 341M 1.59G 341M /data2 Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations, which are not covered in this lab but are covered extensively in the ZFS Administration Guide. Back to top Exercise Z.4: ZFS Deduplication The deduplication property is used to remove redundant data from a ZFS file system.
With the property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared. Task: See how to implement deduplication and its effects. Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib. root@solaris:/usr/lib# zfs destroy mypool/mydata2 root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1 root@solaris:/usr/lib# rm -rf /data1/* root@solaris:/usr/lib# mkdir /data1/2nd-copy root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02M 2.94G 31K /mypool mypool/mydata1 43K 2.94G 43K /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02G 1.99G 31K /mypool mypool/mydata1 1.01G 1.99G 1.01G /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.99G 1.96G 31K /mypool mypool/mydata1 1.98G 1.96G 1.98G /data1 You could go on creating copies for quite a while... but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool, and there is a zpool-wide property dedupratio: root@solaris:/usr/lib# zpool get dedupratio mypool NAME PROPERTY VALUE SOURCE mypool dedupratio 4.30x - Deduplication can also be checked using "zpool list": root@solaris:/usr/lib# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT mypool 2.98G 1001M 2.01G 32% 4.30x ONLINE - rpool 15.9G 6.66G 9.21G 41% 1.00x ONLINE - Before moving on to the next topic, destroy that dataset and free up some space: root@solaris:~# zfs destroy mypool/mydata1 Back to top Exercise Z.5: ZFS Encryption Task: Encrypt sensitive data. Lab: Explore basic ZFS encryption. This lab only covers the basics of ZFS encryption. In particular it does not cover various aspects of key management. Please see the ZFS Administration Guide and the zfs_encrypt(1M) manual page for more detail on this functionality. root@solaris:~# zfs create -o encryption=on mypool/data2 Enter passphrase for 'mypool/data2': ******** Enter again: ******** root@solaris:~# Creation of a descendent dataset shows that encryption is inherited from the parent: root@solaris:~# zfs create mypool/data2/data3 root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2 NAME PROPERTY VALUE SOURCE mypool/data2 encryption on local mypool/data2 keysource passphrase,prompt local mypool/data2 keystatus available - mypool/data2 checksum sha256-mac local mypool/data2/data3 encryption on inherited from mypool/data2 mypool/data2/data3 keysource passphrase,prompt inherited from mypool/data2 mypool/data2/data3 keystatus available - mypool/data2/data3 checksum sha256-mac inherited from mypool/data2 You will find that the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session you may wish to explore changing a key using "zfs key -c mypool/data2". Back to top Exercise Z.6: Shadow Migration Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification to the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system.
Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while maintaining access to it. Lab: Create the infrastructure for shadow migration and transfer one file system into another. First create the file system you want to migrate: root@solaris:~# zpool create oldstuff c3t4d0 root@solaris:~# zfs create oldstuff/forgotten Then populate it with some files: root@solaris:~# cd /var/adm root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten You need the shadow-migration package installed: root@solaris:~# pkg install shadow-migration Packages to install: 1 Create boot environment: No Create backup boot environment: No Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 14/14 0.2/0.2 PHASE ACTIONS Install Phase 39/39 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 You then enable the shadowd service: root@solaris:~# svcadm enable shadowd root@solaris:~# svcs shadowd STATE STIME FMRI online 7:16:09 svc:/system/filesystem/shadowd:default Set the filesystem to be migrated to read-only: root@solaris:~# zfs set readonly=on oldstuff/forgotten Create a new zfs file system with the shadow property set to the file system to be migrated: root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered Use the shadowstat(1M) command to see the progress of the migration: root@solaris:~# shadowstat EST BYTES BYTES ELAPSED DATASET XFRD LEFT ERRORS TIME mypool/remembered 92.5M - - 00:00:59 mypool/remembered 99.1M 302M - 00:01:09 mypool/remembered 109M 260M - 00:01:19 mypool/remembered 133M 304M - 00:01:29 mypool/remembered 149M 339M - 00:01:39 mypool/remembered 156M 86.4M - 00:01:49 mypool/remembered 156M 8E 29 (completed) Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration Guide. That concludes this lab session.

    Read the article

  • Algorithm for a lucky game [on hold]

    - by Ronnie
    Assume we have the following Keno (lottery-type) game: from 80 numbers (1 to 80), 20 are drawn. The players choose 1, 2, 3, … or 12 numbers to play (12 categories). If they choose, for example, 4 numbers, then they win if they correctly predict a certain amount of numbers (2, 3 or 4) from the 4 they have played, and lose if they predict only 1 or 0 numbers. They win X times their money according to some predefined factor, depending on how many numbers they predict in each category. The same goes for the other categories. For example, 11 out of 11 pays 250000 times your money and 12 out of 12 pays 1000000 times your money, so the company would want to avoid winnings that high. A draw is made by the company every 5 minutes, and in each draw around 120000 (let's say) different predictions (Keno tickets) are played. Let's assume 12000 are played in category 10, 12000 in category 11, and also 12000 in category 12. I'm wondering if there is an algorithm that allows the company that provides the game, in the 5 minutes between drawings, to find a 20-number set that avoids any "12 out of 12", "11 out of 12", "11 out of 11", "10 out of 11" and "10 out of 10" winning ticket. That is: is there an algorithm that, in less than approximately 1 minute on today's hardware, can find a 20-number set such that none of the 12000 12-, 11- and 10-number sets that the players played (in categories 10, 11 and 12) wins "12 out of 12", "11 out of 12", "11 out of 11", "10 out of 11" or "10 out of 10"? Or, even better, the generalization of the problem: what is the best algorithm (from the perspective of minimal time) to find a Y-number set from the numbers 1 to Z (e.g. Y=20, Z=80) such that none of the X sets of K numbers being played (in category K) contains more than K−m numbers from the Y-set? (Note that for Y=K and m=1 there is a practical algorithm.)
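
    Whether this is tractable in general is exactly what the question asks, but a simple randomized search is the obvious baseline and is easy to time-box to the 5-minute window. A sketch of that idea in C#, assuming tickets from categories 10-12 are stored as bitmasks over the numbers 1-80 (all names here are hypothetical):

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Numerics;

      struct Ticket
      {
          public ulong Lo, Hi;   // bitmask over numbers 1..80 (two 64-bit words)
          public int Size;       // category: how many numbers the ticket plays
      }

      static class KenoDrawSearch
      {
          static int Matches(Ticket t, ulong drawLo, ulong drawHi) =>
              BitOperations.PopCount(t.Lo & drawLo) +
              BitOperations.PopCount(t.Hi & drawHi);

          // Try random 20-number draws until every ticket has at most Size-2
          // matches (no "K of K" or "K-1 of K" winners), or time runs out.
          public static int[] FindSafeDraw(List<Ticket> tickets, TimeSpan budget)
          {
              var rng = new Random();
              var deadline = DateTime.UtcNow + budget;
              var numbers = Enumerable.Range(1, 80).ToArray();
              while (DateTime.UtcNow < deadline)
              {
                  for (int i = 0; i < 20; i++) // partial Fisher-Yates shuffle
                  {
                      int j = rng.Next(i, 80);
                      (numbers[i], numbers[j]) = (numbers[j], numbers[i]);
                  }
                  ulong lo = 0, hi = 0;
                  for (int i = 0; i < 20; i++)
                  {
                      int n = numbers[i] - 1;
                      if (n < 64) lo |= 1UL << n; else hi |= 1UL << (n - 64);
                  }
                  if (tickets.All(t => Matches(t, lo, hi) <= t.Size - 2))
                      return numbers.Take(20).OrderBy(x => x).ToArray();
              }
              return null; // no safe draw found within the time budget
          }
      }

    Each candidate draw costs two AND-and-popcount operations per ticket, so tens of thousands of candidates against 36000 tickets fit comfortably inside a minute; whether a safe draw exists at all is, of course, a property of the tickets, not of the search.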

    Read the article

  • SQL SERVER – DMV sys.dm_exec_describe_first_result_set_for_object – Describes the First Result Metadata for the Module

    - by pinaldave
    Here is another interesting follow-up to the blog post SQL SERVER – sp_describe_first_result_set New System Stored Procedure in SQL Server 2012. While I was writing the earlier blog post I came across the DMV sys.dm_exec_describe_first_result_set_for_object as well. I find that SQL Server 2012 provides many of these quick new features which we quite often miss learning about, and when someone later demonstrates them to us, we express our surprise on the subject. The DMV sys.dm_exec_describe_first_result_set_for_object returns a result set which describes the columns used in a stored procedure. Here is a quick example. Let us first create a stored procedure. USE [AdventureWorks] GO CREATE PROCEDURE [dbo].[CompSP] AS SELECT [DepartmentID] id ,[Name] n ,[GroupName] gn FROM [HumanResources].[Department] GO Now let us run the following two queries, which give us the metadata description of the stored procedure passed as a parameter. Option 1: Pass the second parameter @include_browse_information as 0. SELECT * FROM sys.dm_exec_describe_first_result_set_for_object ( OBJECT_ID('[dbo].[CompSP]'),0) AS Table1 GO Option 2: Pass the second parameter @include_browse_information as 1. SELECT * FROM sys.dm_exec_describe_first_result_set_for_object ( OBJECT_ID('[dbo].[CompSP]'),1) AS Table1 GO Here is the result of Option 1 and Option 2. If you look at the results, there is absolutely no difference between them. Both of the result sets return the column names which are aliased in the stored procedure. Scroll to the right side, however, and you will notice a clear difference in some columns: in the second result set, source_database, source_schema, as well as a few other columns, report the original table instead of NULL values. When @include_browse_information is set to 1, the DMV provides the column details of the underlying table. I have just discovered this DMV; I have yet to use it in production code and find out where exactly I will use it. Do you have any idea? Does anything come to mind where this DMV could be helpful? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Filtering GridViewComboBoxColumn in RadGridView for WPF

    The GridViewComboBox column is used to display lookup data in a user-friendly manner. For our demo we bind RadGridView for WPF to a collection of custom Location objects. As you may notice, each location has a selectable Country field.  Here is the underlying ‘data model’: public class Location {  public int CountryID { get; set; }  public string CityName { get; set; } } public class Country {  public int ID { get; set; }  public string Name { get; set; } }   The location object contains the integer CountryID. Each CountryID corresponds to a certain country, and with the help of GridViewComboBoxColumn we see human-readable country names instead of the underlying numeric IDs. Now The Problem: Unfortunately the filter control does not know that our Country IDs are associated with Country objects, so it displays the …

    Read the article

  • Lost in Translation

    - by antony.reynolds
    Using the Correct Character Set for the SOA Suite Database A couple of years ago I spent a wonderful week in Tel Aviv helping with the first Oracle BAM implementation in Israel.  Although everyone I interacted with spoke better English than I did, the screens and data for the implementation were all in Hebrew, meaning the Hebrew alphabet.  Over the week I learnt to recognize a few Hebrew words, enough to enable me to test what we were doing.  So I knew SOA Suite worked fine with non-English and non-Latin character sets, and I was therefore suspicious recently when a customer was seeing data corruption of non-Latin characters.  On investigation it turned out that the data was received correctly by the SOA Suite, but was then corrupted after being stored in the database. A little investigation revealed that the customer was using the default database character set, "WE8ISO8859P1", which, as the name suggests, only supports West European 8-bit characters.  What had happened was that when the customer installed his SOA repository he ignored the message that his database was not using AL32UTF8 as its character set. After changing the character set on his database he no longer saw the corruption of non-English character data. So the moral of this story is: Always install the SOA Repository into an AL32UTF8 database. This is true for both SOA Suite 10g and 11g.  Ignore it at your peril, because you never know when you will need to support Hebrew, or Japanese, or another multi-byte character set.

    Read the article

  • Gesture Detector not firing

    - by Tyler
    So I'm trying to create an input class that implements GestureListener & InputProcessor in order to support both Android & desktop. The problem is that not all the methods are being called properly. Here is the input class definition & a couple of the methods: public class InputHandler implements GestureListener, InputProcessor{ ... public InputHandler(OrthographicCamera camera, Map m, Player play, Vector2 maxPos) { ... @Override public boolean zoom(float originalDistance, float currentDistance) { //this.zoom = true; this.zoomRatio = originalDistance / currentDistance; cam.zoom = cam.zoom * zoomRatio; Gdx.app.log("GestureDetector", "Zoom - ratio: " + zoomRatio); return true; } @Override public boolean touchDown(int x, int y, int pointerNum, int button) { booleanConditions[TOUCH_EVENT] = true; this.inputButton = button; this.inputFingerNum = pointerNum; this.lastTouchEventLoc.set(x,y); this.currentCursorPos.set(x,y); if(pointerNum == 1) { //this.fingerOne = true; this.fOnePosition.set(x, y); } else if(pointerNum == 2) { //this.fingerTwo = true; this.fTwoPosition.set(x,y); } Gdx.app.log("GestureDetector", "touch down at " + x + ", " + y + ", pointer: " + pointerNum); return true; } The touchDown event always occurs, but I can never trigger zoom (or pan, among others...). The following is where I register and create the input handler in the "Game Screen": public class GameScreen implements Screen { ... this.inputHandler = new InputHandler(this.cam, this.map, this.player, this.map.maxCamPos); Gdx.input.setInputProcessor(this.inputHandler); Anyone have any ideas why zoom, pan, etc... are not triggering? Thanks!

    Read the article

  • SharpDX: using a maximized RenderForm

    - by ceiling cat
    I'm trying to learn DirectX via SharpDX; I'm very new to this. What I want to do is be able to draw 2D shapes for a game I'm trying to make. So I started with the "MiniRect" demo that came with SharpDX. Since I want my game to be full-screen, I changed the RenderForm to be maximized (using WindowState) and set the FormBorderStyle to None. I noticed that even if the form is set to maximized, its size is always 800 by 600. In my render loop, if I specify the location for the rectangle as 400 by 300, it is drawn in the middle of the screen. If I try to set the location via mouse click (using the RenderForm's MouseClick event), there is always an offset present between where the mouse was clicked and where the drawing shows up. My system DPI is set to the standard (96), so there shouldn't be any scaling. But it looks like there is a scaling factor of about 2.4. If it's not the DPI settings, does anyone have any idea what this could be related to? The problem doesn't happen if the RenderForm is not maximized. Is there another way to be drawing full-screen using SharpDX? Thanks

    Read the article

  • Fixing Chrome’s AJAX Request Caching Bug

    - by Steve Wilkes
    I recently had to make a set of web pages restore their state when the user arrived on them after clicking the browser’s back button. The pages in question had various content loaded in response to user actions, which meant I had to manually get them back into a valid state after the page loaded. I got hold of the page’s data in a JavaScript ViewModel using a jQuery ajax call, then iterated over the properties, filling in the fields as I went. I built in the ability to describe dependencies between inputs to make sure fields were filled in in the correct order and at the correct time, and that all worked nicely. To make sure the browser didn’t cache the AJAX call results I used jQuery’s cache: false option, and ASP.NET MVC’s OutputCache attribute for good measure. That all worked perfectly… except in Chrome. Chrome insisted on retrieving the data from its cache. cache: false adds a random query string parameter to make the browser think it’s a unique request – it made no difference. I made the AJAX call a POST – it made no difference. Eventually what I had to do was add a random token to the URL (not the query string) and use MVC routing to deliver the request to the correct action. The project had a single Controller for all AJAX requests, so this route: routes.MapRoute( name: "NonCachedAjaxActions", url: "AjaxCalls/{cacheDisablingToken}/{action}", defaults: new { controller = "AjaxCalls" }, constraints: new { cacheDisablingToken = "[0-9]+" }); …and this amendment to the ajax call: function loadPageData(url) { // Insert a timestamp before the URL's action segment: var indexOfFinalUrlSeparator = url.lastIndexOf("/"); var uniqueUrl = url.substring(0, indexOfFinalUrlSeparator) + new Date().getTime() + "/" + url.substring(indexOfFinalUrlSeparator); // Call the now-unique action URL: $.ajax(uniqueUrl, { cache: false, success: completePageDataLoad }); } …did the trick.

    Read the article

  • Laptop shuts down upon waking from suspend

    - by Bryan Head
    The computer enters suspend when I close the lid, choose suspend from the top-right drop-down, or hit the power button and press suspend; it doesn't matter which. I then attempt to wake the computer either by opening the lid (if it was closed) or hitting the power button. Again, it doesn't matter which. The computer will then immediately shut down about 50% of the time. It seems to be more likely to shut down the longer it has been in suspend. I took a snapshot of /var/log/pm-suspend.log after a successful resume and after a shutdown. The only difference (outside of timestamps, of course) was that on a successful resume, after reporting the success of various suspend hooks, it writes: Thu Jul 5 21:36:45 PDT 2012: performing suspend Thu Jul 5 21:37:10 PDT 2012: Awake. Thu Jul 5 21:37:10 PDT 2012: Running hooks for resume and then reports successful resume hooks. When it shuts down, the log ends at "performing suspend". I diffed the two files, so I know this is the only difference. Thus, it looks like it's not even trying to wake up. Would love some ideas on this one. I've scoured the web but can't seem to find anyone else running into the same issue (it seems more common that the computer shuts down upon entering suspend, or only on hitting the power button to wake; I haven't seen any cases that are random like mine). I'll update with any requested information.

    Read the article

  • Is it a "pattern smell" to put getters like "FullName" or "FormattedPhoneNumber" in your model?

    - by DanM
    I'm working on an ASP.NET MVC app, and I've been getting into the habit of putting what seem like helpful and convenient getters into my model/entity classes. For example: public class Member { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public string PhoneNumber { get; set; } public string FullName { get { return FirstName + " " + LastName; } } public string FormattedPhoneNumber { get { return "(" + PhoneNumber.Substring(0, 3) + ") " + PhoneNumber.Substring(3, 3) + "-" + PhoneNumber.Substring(6); } } } I'm wondering what people think about the FullName and FormattedPhoneNumber getters. They make it very easy to create standardized data formats throughout the app, and they seem to save a lot of repeated code, but it could definitely be argued that data formatting is something that should be handled in mapping from model to view-model. In fact, I was originally applying these data formats in my service layer, where I do my mapping, but it was becoming a burden to constantly have to write formatters and then apply them in many different places. E.g., I use "Full Name" in most views, and having to type something like model.FullName = MappingUtilities.GetFullName(entity.FirstName, entity.LastName); all over the place seemed a lot less elegant than just typing model.FullName = entity.FullName (or, if you use something like AutoMapper, potentially not typing anything at all). So, where do you draw the line when it comes to data formatting? Is it "okay" to do data formatting in your model or is that a "pattern smell"? Note: I definitely do not have any HTML in my model. I use HTML helpers for that. I'm strictly talking about formatting or combining data (and especially data that is frequently used).
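
    For comparison, the mapping alternative the question alludes to might look something like this with AutoMapper's classic static configuration API (MemberViewModel is a hypothetical view-model class invented for this sketch):

      using AutoMapper;

      public class MemberViewModel
      {
          public string FullName { get; set; }
          public string FormattedPhoneNumber { get; set; }
      }

      public static class MappingConfig
      {
          public static void Configure()
          {
              // Formatting lives in the model->view-model mapping,
              // keeping the entity itself format-free.
              Mapper.CreateMap<Member, MemberViewModel>()
                  .ForMember(vm => vm.FullName,
                      o => o.MapFrom(m => m.FirstName + " " + m.LastName))
                  .ForMember(vm => vm.FormattedPhoneNumber,
                      o => o.MapFrom(m => string.Format("({0}) {1}-{2}",
                          m.PhoneNumber.Substring(0, 3),
                          m.PhoneNumber.Substring(3, 3),
                          m.PhoneNumber.Substring(6))));
          }
      }

      // Usage: MemberViewModel vm = Mapper.Map<Member, MemberViewModel>(entity);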

    Read the article

  • Tips for debugging Samba performance?

    - by j-g-faustus
    Samba gives me 24 MB/s read and 44 MB/s write, while ftp gives 97 and 112 MB/s under the same circumstances. The documentation says that "Generally, you should find that Samba performs similarly to ftp at raw transfer speed." In my case it clearly doesn't. Where can I find tips on how to debug Samba performance? Or, alternatively, tips for replacing Samba with something else? (I can't use ftp, unfortunately, as I need something that can be used with rsync/rsnapshot.) More details: Both computers are running Ubuntu 10.10 (I'm using Samba because I have a Mac as well). The Samba share is on a local home network, mounted as $ mount ... //server.local/share/ on /mnt/share type cifs (rw,mand) Samba performance was tested by copying (cp) a single file of ~4 GB to and from the share, using time for timing and calculating the transfer speed by hand. The ftp performance numbers are from the ftp client for get/put of the same file. iperf gives a network speed of ~900 Mbit/s. bonnie++ gives disk speeds of 200 MB/s on both sides for block reads as well as block writes. I tried changing the parameters suggested in the performance tuning HOWTO (read/write raw, read size, socket options); most of them made little to no difference. (The one that made a difference caused write speed to drop 50%.)
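
    For anyone starting from the same performance-tuning HOWTO, the parameters the question refers to live in the [global] section of smb.conf; a typical experimental starting point looks like the following (values are illustrative only - as the question itself shows, each change should be measured, since some make things worse):

      [global]
          # Classic knobs from the Samba performance HOWTO (Samba 3.x era):
          read raw = yes
          write raw = yes
          read size = 16384
          socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536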

    Read the article

  • Should one always know what an API is doing just by looking at the code?

    - by markmnl
    Recently I have been developing my own API, and with that invested interest in API design I have been keenly interested in how I can improve it. One aspect that has come up a couple of times (not from users of my API, but from observing discussion about the topic) is: one should know just by looking at the code calling the API what it is doing. For example, see this discussion on GitHub for the discourse repo; it goes something like: foo.update_pinned(true, true); Just by looking at the code (without knowing the parameter names, documentation, etc.) one cannot guess what it is going to do - what does the 2nd argument mean? The suggested improvement is to have something like: foo.pin() foo.unpin() foo.pin_globally() And that clears things up (the 2nd arg was whether to pin foo globally, I am guessing), and I agree that in this case the latter would certainly be an improvement. However, I believe there can be instances where methods to set different but logically related state would be better exposed as one method call rather than separate ones, even though you would not know what it is doing just by looking at the code. (So you would have to resort to looking at the parameter names and documentation to find out - which personally I would always do anyway if I am unfamiliar with an API.) For example, I expose one method SetVisibility(bool, string, bool) on a FalconPeer, and I acknowledge that just looking at the line: falconPeer.SetVisibility(true, "aerw3", true); you would have no idea what it is doing. It is setting 3 different values that control the "visibility" of the falconPeer in the logical sense: accept join requests, only with password, and reply to discovery requests. Splitting this out into 3 method calls could lead a user of the API to set one aspect of "visibility" while forgetting to set others, which I force them to think about by only exposing one method that sets all aspects of "visibility". Furthermore, when the user wants to change one aspect they almost always want to change another aspect, and can now do so in one call.
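
    One middle ground between the two positions is a parameter object: the call stays atomic, so callers are still forced to consider every aspect of visibility at once, but each value becomes self-describing at the call site. A hypothetical sketch (VisibilityOptions is invented here, not part of the actual API under discussion):

      public sealed class VisibilityOptions
      {
          public bool AcceptJoinRequests { get; set; }
          public string JoinPassword { get; set; }     // null = no password required
          public bool ReplyToDiscoveryRequests { get; set; }
      }

      public partial class FalconPeer
      {
          // One atomic call still forces callers to consider every aspect,
          // but the names now travel with the values.
          public void SetVisibility(VisibilityOptions options) { /* ... */ }
      }

      // The call site now reads without consulting the docs:
      // falconPeer.SetVisibility(new VisibilityOptions {
      //     AcceptJoinRequests = true,
      //     JoinPassword = "aerw3",
      //     ReplyToDiscoveryRequests = true
      // });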

    Read the article

  • Understanding normal maps on terrain

    - by JohnB
    I'm having trouble understanding some of the math behind normal map textures; even though I've got it to work using borrowed code, I want to understand it. I have a terrain based on a heightmap. I'm generating a mesh of triangles at load time and rendering that mesh. Now for each vertex I need to calculate a normal, a tangent, and a bitangent. My understanding is as follows; have I got this right? The normal is a unit vector facing outwards from the surface of the triangle. For a vertex I take the average of the normals of the triangles using that vertex. The tangent is a unit vector in the direction of the 'u' coordinates of the texture map. As my texture u,v coordinates follow the x and y coordinates of the terrain, my understanding is that this vector is simply the vector along the surface in the x direction. So I should be able to calculate this as simply the difference between vertices in the x direction (and normalize it). The bitangent is a unit vector in the direction of the 'v' coordinates of the texture map. As my texture u,v coordinates follow the x and y coordinates of the terrain, my understanding is that this vector is simply the vector along the surface in the y direction. So I should be able to calculate this as simply the difference between vertices in the y direction (and normalize it). However, the code I have borrowed seems much more complicated than this and takes into account the actual values of u and v at each vertex, which I don't understand the need for, as they increase in exactly the same direction as x and y. I implemented what I thought from the above, and it simply doesn't work; the normals are clearly not working for lighting. Have I misunderstood something? Or can someone explain to me the physical meaning of the tangent and bitangent vectors when applied to a mesh generated from a heightmap like this, when the u and v texture coordinates map along the x and y directions? Thanks for any help understanding this.
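
    For reference, the "more complicated" computation that borrowed tangent-space code usually performs is the standard per-triangle derivation from UV deltas. A sketch of the general formula in C# using System.Numerics (not the poster's specific code); when u and v increase exactly with x and y, it collapses to the simple x- and y-direction surface vectors described above, which is why the UV terms only matter for general meshes:

      using System.Numerics;

      static class TangentSpace
      {
          // Given one triangle's positions and UVs, solve
          //   e1 = du1 * T + dv1 * B
          //   e2 = du2 * T + dv2 * B
          // for the tangent T and bitangent B.
          public static (Vector3 T, Vector3 B) ForTriangle(
              Vector3 p0, Vector3 p1, Vector3 p2,
              Vector2 uv0, Vector2 uv1, Vector2 uv2)
          {
              Vector3 e1 = p1 - p0, e2 = p2 - p0;
              float du1 = uv1.X - uv0.X, dv1 = uv1.Y - uv0.Y;
              float du2 = uv2.X - uv0.X, dv2 = uv2.Y - uv0.Y;

              float r = 1.0f / (du1 * dv2 - du2 * dv1); // assumes non-degenerate UVs
              Vector3 t = (e1 * dv2 - e2 * dv1) * r;
              Vector3 b = (e2 * du1 - e1 * du2) * r;
              return (Vector3.Normalize(t), Vector3.Normalize(b));
          }
      }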

    Read the article

  • throw new exception- C#

    - by Jalpesh P. Vadgama
    This post is in response to a comment on my older post, throw vs. throw(ex) best practice and difference - C#, asking that I also cover throw new Exception. What's wrong with throw new Exception: throwing a new exception is even worse. It creates a new exception and erases all the earlier exception data, so it erases the stack trace as well. Please go through the following code. It's the same as in the earlier post; the only difference is throw new Exception.   using System; namespace Oops { class Program { static void Main(string[] args) { try { DevideByZero(10); } catch (Exception exception) { throw new Exception (string.Format( "Brand new Exception-Old Message:{0}", exception.Message)); } } public static void DevideByZero(int i) { int j = 0; int k = i/j; Console.WriteLine(k); } } } Once you run this example, you will get the following output, as expected. Hope you like it. Stay tuned for more..
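
    A hedged aside, not from the original post: when you genuinely need to wrap an exception to add context, the standard way to avoid losing the earlier data is the two-argument Exception constructor, which preserves the original exception (and its stack trace) on InnerException:

      try
      {
          DevideByZero(10);
      }
      catch (Exception exception)
      {
          // Pass the caught exception as the inner exception so nothing is erased.
          throw new Exception(
              string.Format("Brand new Exception-Old Message:{0}", exception.Message),
              exception);
      }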

    Read the article

  • java UnitilsException: Executed scripts table "PUBLIC"."DBMAINTAIN_SCRIPTS" doesn't exist yet or is invalid [closed]

    - by Philippe
    I want to use Unitils for testing my Java program, but I have the following error message: "UnitilsException: Executed scripts table "PUBLIC"."DBMAINTAIN_SCRIPTS" doesn't exist yet or is invalid". My table is in a DDL file, and the table is not created. Could you tell me if the dataSetStructureGenerator.xsd.dirName=src/test/resources/dataset-schema property is still active in Unitils 3.3? My EMPLOYEES table is not created and DBMAINTAIN_SCRIPTS is not created either. Where is my mistake? My DDL file: SET REFERENTIAL_INTEGRITY FALSE; SET DATABASE COLLATION "French"; SET SCHEMA PUBLIC; CREATE TABLE DBMAINTAIN_SCRIPTS (FILE_NAME VARCHAR2(150), FILE_LAST_MODIFIED_AT INTEGER, CHECKSUM VARCHAR2(50), EXECUTED_AT VARCHAR2(20), SUCCEEDED INTEGER); CREATE TABLE EMPLOYEES(ID IDENTITY NOT NULL,NAME VARCHAR(20),TITLE VARCHAR(20),SALARY DOUBLE,NI INTEGER NOT NULL) My unitils properties file: # comments documenting these unitils configuration properties removed for # brevity. look for commenting in unitils-default.properties in the root of the # unitils jar if needed. unitils.modules=database,dbunit,easymock,inject unitils.module.hibernate.enabled=false unitils.module.spring.enabled=false # these placeholders are set in avaje.properties # manages the DBUnit configuration database.driverClassName=org.hsqldb.jdbcDriver database.url=jdbc:hsqldb:mem:unitils-example database.schemaNames=PUBLIC database.userName=SA database.password= database.dialect=hsqldb # unitils will construct the test database using the ddl file found in this # directory dbMaintainer.fileScriptSource.scripts.location=src/main/resources updateDataBaseSchema.enabled=true sequenceUpdater.sequencevalue.lowestacceptable=100 dataSetStructureGenerator.xsd.dirName=src/test/resources/dataset-schema #dbMaintainer.autoCreateExecutedScriptsTable property to true

    Read the article

  • SQL SERVER – Answer – Value of Identity Column after TRUNCATE command

    - by pinaldave
    Earlier I had a conversation with a reader that almost gave me a headache. I suggest all of you read it before continuing with this blog post: SQL SERVER – Reseting Identity Values for All Tables. I believe he faced this situation because he did not understand the difference between SQL SERVER – DELETE, TRUNCATE and RESEED Identity. I wrote a follow-up blog post explaining the difference between them. I asked a small question in the second blog post and I received many interesting comments. Let us go over the question and its answer here one more time. Here is the scenario to set up the puzzle. Create a table with an identity seed of 11. Insert a value and check the identity value (it will be 11). Reseed it to 1. Insert a value and check the identity value (it will be 2). TRUNCATE the table. Insert a value and check the identity value (it will be 11). Let us see the T-SQL script for the same. USE [TempDB] GO -- Create Table CREATE TABLE [dbo].[TestTable]( [ID] [int] IDENTITY(11,1) NOT NULL, [var] [nchar](10) NULL ) ON [PRIMARY] GO -- Build sample data INSERT INTO [TestTable] VALUES ('val') GO -- Select Data SELECT * FROM [TestTable] GO -- Reseed to 1 DBCC CHECKIDENT ('TestTable', RESEED, 1) GO -- Build sample data INSERT INTO [TestTable] VALUES ('val') GO -- Select Data SELECT * FROM [TestTable] GO -- Truncate table TRUNCATE TABLE [TestTable] GO -- Build sample data INSERT INTO [TestTable] VALUES ('val') GO -- Select Data SELECT * FROM [TestTable] GO -- Question for you Here -- Clean up DROP TABLE [TestTable] GO Now let us see the output of the three SELECT statements. 1) First SELECT, after creating the table 2) Second SELECT, after reseeding the table 3) Third SELECT, after truncating the table The reason is simple: if the table contains an identity column, the counter for that column is reset to the seed value defined for the column. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Fonts look bad in Microsoft Office using Wine

    - by amfcosta
    Office fonts in Wine look very different from how they look in Windows or LibreOffice. As can be seen from the attached screenshots, they look blurry at some sizes and aliased at other sizes. You can see the differences not only in the document text but also in the ribbon menu. It happens with a lot of fonts. I'm testing with Office 2010 now, but it also happens in Office 2007. Things I've tried: Changing fontsmooth settings with winetricks - made no difference. Copying fonts from a Windows system - made no difference. Using Ubuntu's fonts (by removing Windows/Fonts from the wineprefix) - removed the blurriness in some fonts but increased aliasing. The three screenshots correspond to different "configurations": office_wine.png - Office Word in Wine using Wine's original fonts; office_nowinefonts.png - Office Word in Wine using Ubuntu's fonts; office_windows.png - Office Word in Windows. PS: please make sure to view the screenshots without scaling them to notice the problem. EDIT: A screenshot of how Calibri behaves in Wine here.

    Read the article

  • TDD - Outside In vs Inside Out

    - by Songo
    What is the difference between building an application Outside In vs building it Inside Out using TDD? These are the books I read about TDD and unit testing: Test Driven Development: By Example; Test-Driven Development: A Practical Guide; Real-World Solutions for Developing High-Quality PHP Frameworks and Applications; Test-Driven Development in Microsoft .NET; xUnit Test Patterns: Refactoring Test Code; The Art of Unit Testing: With Examples in .NET; Growing Object-Oriented Software, Guided by Tests - this one was really hard to understand since Java isn't my primary language :) Almost all of them explained TDD basics and unit testing in general, but with little mention of the different ways the application can be constructed. Another thing I noticed is that most of these books (if not all) ignore the design phase when writing the application. They focus more on writing the test cases quickly and letting the design emerge by itself. However, I came across a paragraph in xUnit Test Patterns that discussed the ways people approach TDD. There are two schools out there: Outside In vs Inside Out. Sadly, the book doesn't elaborate more on this point. I wish to know: what is the main difference between these two schools? When should I use each of them? For a TDD beginner, which one is easier to grasp? What are the drawbacks of each method? Are there any materials out there that discuss this topic specifically?
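
    To make the two schools concrete, here is a rough illustration of where each one tends to write its first test - my own sketch, not taken from any of the books above. Outside In starts from user-visible behaviour with collaborators replaced by test doubles; Inside Out starts from a small domain unit and composes upward:

      using NUnit.Framework;

      public interface IPaymentGateway { void Charge(decimal amount); }

      public class Checkout
      {
          private readonly IPaymentGateway gateway;
          public Checkout(IPaymentGateway gateway) { this.gateway = gateway; }
          public void Complete(decimal basketTotal) { gateway.Charge(basketTotal); }
      }

      public class Basket
      {
          private decimal total;
          public void Add(decimal price) { total += price; }
          public decimal Total { get { return total; } }
      }

      // Outside In (the "London" school): the first test starts at the feature
      // boundary and uses a test double to discover the collaborators it needs.
      [TestFixture]
      public class CheckoutFeatureTests
      {
          class RecordingGateway : IPaymentGateway
          {
              public decimal ChargedAmount;
              public void Charge(decimal amount) { ChargedAmount = amount; }
          }

          [Test]
          public void Checkout_charges_the_card_for_the_basket_total()
          {
              var gateway = new RecordingGateway();
              new Checkout(gateway).Complete(42m);
              Assert.AreEqual(42m, gateway.ChargedAmount);
          }
      }

      // Inside Out (the "Detroit"/classic school): the first test pins down a
      // small domain unit; the outer layers are composed on top of it later.
      [TestFixture]
      public class BasketTests
      {
          [Test]
          public void Total_is_the_sum_of_line_prices()
          {
              var basket = new Basket();
              basket.Add(40m);
              basket.Add(2m);
              Assert.AreEqual(42m, basket.Total);
          }
      }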

    Read the article

  • Unattended grub configuration after kernel upgrade

    - by bouke
    Today I have been working on automatic deployment of an Ubuntu server. I got stuck on automatic updates: when apt-get upgrade tried to install a new kernel, the log looked like this: Setting up linux-image-3.2.0-24-generic (3.2.0-24.39) ... Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst (...) Then a question is presented:

    Package configuration

    +------------------------------------------------------------------+
    | A new version of /boot/grub/menu.lst is available, but the       |
    | version installed currently has been locally modified.           |
    |                                                                  |
    | What would you like to do about menu.lst?                        |
    |                                                                  |
    |   install the package maintainer's version                       |
    |   keep the local version currently installed                     |
    |   show the differences between the versions                      |
    |   show a side-by-side difference between the versions            |
    |   show a 3-way difference between available versions             |
    |   do a 3-way merge between available versions (experimental)     |
    |   start a new shell to examine the situation                     |
    |                                                                  |
    |                              <Ok>                                |
    +------------------------------------------------------------------+

    The desired outcome would be to select the first option and continue: Replacing config file /run/grub/menu.lst with new version Updating /boot/grub/menu.lst ... done After running the upgrade by hand, I used debconf-get-selections to inspect the correct answer for the question (see other settings). It seems like update_grub_changeprompt_threeway is the question that should be answered. However, setting this using debconf-set-selections presented me with the same question: debconf-set-selections <<< "grub grub/update_grub_changeprompt_threeway select install_new" apt-get -y dist-upgrade How can this question be automated?

    Read the article
