Search Results

Search found 17248 results on 690 pages for 'print documentation'.


  • SQL SERVER – Developer Training Kit for SQL Server 2012

    - by pinaldave
    The Developer Training Kit is my favorite part of any product. The reason is simple: it is a single resource that gives a complete overview of the product in a nutshell. A developer can learn from many places – books, webcasts, tutorials, blogs, etc. However, I have found that developer training kits are the best starting point for any product. Start with them first to see what the new features are, as well as what new message the product is coming with. Once that is learned, the very next step is to identify the right learning material to explore your preferred topic.

    The SQL Server 2012 Developer Training Kit includes technical content – labs, demos and presentations – designed to help you learn how to develop SQL Server 2012 database and BI solutions. New and updated content will be released periodically and can be downloaded on-demand using the Web Installer. Download SQL Server 2012 Developer Training Kit Web Installer.

    This training kit was available earlier this year, but it is never too late to explore it if you have not referred to it before. Additionally, if you do not want to download the complete kit all together, I suggest you refer to the Wiki here. This wiki contains the same presentations and demo notes that the web installer contains. Refer to the SQL Server 2012 Developer Training Kit Wiki. The wiki contains the following modules and details about the Hands-On Labs:

    Module 1: Introduction to SQL Server 2012
    Module 2: Introduction to SQL Server 2012 AlwaysOn
    Module 3: Exploring and Managing SQL Server 2012 Database Engine Improvements
    Module 4: SQL Server 2012 Database Server Programmability
    Module 5: SQL Server 2012 Application Development
    Module 6: SQL Server 2012 Enterprise Information Management
    Module 7: SQL Server 2012 Business Intelligence
    Hands-On Labs: SQL Server 2012 Database Engine
    Hands-On Labs: Visual Studio 2010 and .NET 4.0
    Hands-On Labs: SQL Server 2012 Enterprise Information Management
    Hands-On Labs: SQL Server 2012 Business Intelligence
    Hands-On Labs: Windows Azure and SQL Azure

    As I said, if you have not downloaded this so far, it is never too late to explore it. Trust me, you will learn at least one thing if you just explore the content.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL SERVER – Puzzle to Win Print Book – Explain Value of PERCENTILE_CONT() Using Simple Example

    - by pinaldave
    For the last several days I have been working on various Denali analytical functions and it is indeed great fun to refresh the concepts I studied in school. Earlier I wrote an article where I explained how we can use PERCENTILE_CONT() to find the median, over here: SQL SERVER – Introduction to PERCENTILE_CONT() – Analytic Functions Introduced in SQL Server 2012. Today I am going to ask a question based on the same blog post. Again, just like last time, the intention of this puzzle is the following:

    Learn a new concept of SQL Server 2012
    Learn a new concept of SQL Server 2012 even if you are on an earlier version of SQL Server

    On another note, SQL Server 2012 RC0 has been announced and is available to download: SQL SERVER – 2012 RC0 Various Resources and Downloads.

    Now let's have fun with the following query:

    USE AdventureWorks
    GO
    SELECT SalesOrderID, OrderQty, ProductID,
        PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY ProductID)
            OVER (PARTITION BY SalesOrderID) AS MedianCont
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY SalesOrderID DESC
    GO

    The above query will give us the following result. The reason we get the median is that we are passing the value 0.5 to the PERCENTILE_CONT() function. Now read the puzzle.

    Puzzle: Run the following T-SQL code:

    USE AdventureWorks
    GO
    SELECT SalesOrderID, OrderQty, ProductID,
        PERCENTILE_CONT(0.9) WITHIN GROUP (ORDER BY ProductID)
            OVER (PARTITION BY SalesOrderID) AS MedianCont
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY SalesOrderID DESC
    GO

    Observe the result and you will notice that MedianCont has a different value than before; the reason is that the PERCENTILE_CONT function now has the value 0.9 passed to it. For the first four rows the value is 775.1. Now run the following T-SQL code:

    USE AdventureWorks
    GO
    SELECT SalesOrderID, OrderQty, ProductID,
        PERCENTILE_CONT(0.1) WITHIN GROUP (ORDER BY ProductID)
            OVER (PARTITION BY SalesOrderID) AS MedianCont
    FROM Sales.SalesOrderDetail
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY SalesOrderID DESC
    GO

    Observe the result and you will notice that MedianCont again has a different value; the reason is that the PERCENTILE_CONT function has the value 0.1 passed to it. For the first four rows the value is 709.3.

    In my earlier example I explained how the median is found using this function. You have to explain, using mathematics and in easy words, why the values in the last column are 709.3 and 775.1.

    Hint: SQL SERVER – Introduction to PERCENTILE_CONT() – Analytic Functions Introduced in SQL Server 2012

    Rules: Leave a comment with your detailed answer before the Nov 25 blog post. Open world-wide (wherever Amazon ships books). If you blog about the puzzle's solution and you win, you win an additional surprise gift as well.

    Prizes: Print copy of my new book SQL Server Interview Questions Amazon|Flipkart. If you already have this book, you can opt for any of my other books: SQL Wait Stats [Amazon|Flipkart|Kindle] and SQL Programming [Amazon|Flipkart|Kindle].

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
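    For readers who want to sanity-check their explanation, here is a small T-SQL sketch (not part of the original puzzle; the single order number is just one of the four used above) that reproduces PERCENTILE_CONT by hand: it computes the fractional row position 1 + p * (N - 1) within the partition and then linearly interpolates between the two surrounding ProductID values.

        -- Assumes AdventureWorks; @p is the percentile to check.
        DECLARE @p DECIMAL(10, 4) = 0.9;  -- try 0.5, 0.9 and 0.1 as in the puzzle

        ;WITH ordered AS
        (
            SELECT ProductID,
                   ROW_NUMBER() OVER (ORDER BY ProductID) AS rn,
                   COUNT(*) OVER () AS cnt
            FROM Sales.SalesOrderDetail
            WHERE SalesOrderID = 43670
        )
        SELECT lo.ProductID
               + (1 + @p * (lo.cnt - 1) - FLOOR(1 + @p * (lo.cnt - 1)))
               * (hi.ProductID - lo.ProductID) AS PercentileContByHand
        FROM ordered lo
        JOIN ordered hi
          ON lo.rn = FLOOR(1 + @p * (lo.cnt - 1))     -- row just below the fractional position
         AND hi.rn = CEILING(1 + @p * (lo.cnt - 1));  -- row just above it

    If this matches the built-in function for every value of @p, your reasoning about the interpolation is on the right track.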

    Read the article

  • SQLAuthority News – SQL Server 2012 Upgrade Technical Guide – A Comprehensive Whitepaper – (454 pages – 9 MB)

    - by pinaldave
    Microsoft has just released the SQL Server 2012 Upgrade Technical Guide. This guide is very comprehensive and covers the subject of upgrading in depth. It is indeed a helpful, detailed white paper; even writing a summary of it would take over 100 pages. This further proves that SQL Server 2012 is quite an important release from Microsoft.

    The white paper discusses how to upgrade from SQL Server 2008/R2 to SQL Server 2012. I love how it starts with the most interesting and basic discussion of upgrade strategies: 1) in-place upgrade, 2) side-by-side upgrade, 3) one-server, and 4) two-server. The whitepaper is not just pure theory but is also an excellent source of tips and tricks. Here is an example of a good tip from the paper: "If you want to upgrade just one database from a legacy instance of SQL Server and not upgrade the other databases on the server, use the side-by-side upgrade method instead of the in-place method." There are so many trivia, tips and tricks that compiling a complete list would be humanly impossible in a short period of time.

    My friend Vinod Kumar, a SQL Server expert, earlier wrote a very interesting article on the SQL Server 2012 upgrade. In that article, Vinod addressed the most interesting and practical questions related to upgrades. He started with the fundamentals of taking a backup before the upgrade and ended with fail-safe strategies for after the upgrade is over. He covered end-to-end concepts in his blog posts in simple words and extremely precise statements. A successful upgrade follows a cycle of planning, documenting the process, testing, refining the process, testing again, planning the upgrade window, execution, verifying the upgrade and opening for business. If you are at Vinod's blog post, I suggest you go all the way down and collect the gold mine of most important links. I have bookmarked the blog by blogging about it and I suggest that you bookmark it as well in whatever way you prefer: Vinod Kumar's blog post on SQL Server 2012 Upgrade Technical Guide.

    The SQL Server 2012 Upgrade Technical Guide is a detailed resource that is also available online for free. Each chapter was carefully crafted and explained in detail. Before downloading the guide, be aware of its size: 454 pages and 9 MB. Here is a quick list of the chapters included in the whitepaper:

    Chapter 1: Upgrade Planning and Deployment
    Chapter 2: Management Tools
    Chapter 3: Relational Databases
    Chapter 4: High Availability
    Chapter 5: Database Security
    Chapter 6: Full-Text Search
    Chapter 7: Service Broker
    Chapter 8: SQL Server Express
    Chapter 9: SQL Server Data Tools
    Chapter 10: Transact-SQL Queries
    Chapter 11: Spatial Data
    Chapter 12: XML and XQuery
    Chapter 13: CLR
    Chapter 14: SQL Server Management Objects
    Chapter 15: Business Intelligence Tools
    Chapter 16: Analysis Services
    Chapter 17: Integration Services
    Chapter 18: Reporting Services
    Chapter 19: Data Mining
    Chapter 20: Other Microsoft Applications and Platforms
    Appendix 1: Version and Edition Upgrade Paths
    Appendix 2: SQL Server 2012: Upgrade Planning Checklist

    Download SQL Server 2012 Upgrade Technical Guide [454 pages and 9 MB]

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, DBA, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, SQLServer, T SQL, Technology
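    Both the guide and Vinod's post stress taking a verified backup before touching the instance. As a reminder of what that first step looks like in plain T-SQL (the database name and path below are placeholders, not taken from the guide), a minimal pre-upgrade backup might be:

        -- Full backup with page checksums, then verify the backup file is restorable
        BACKUP DATABASE [YourDatabase]
        TO DISK = N'D:\Backup\YourDatabase_PreUpgrade.bak'
        WITH CHECKSUM, INIT, STATS = 10;

        RESTORE VERIFYONLY
        FROM DISK = N'D:\Backup\YourDatabase_PreUpgrade.bak'
        WITH CHECKSUM;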

    Read the article

  • SQL SERVER – Free eBook Download – EPUB, MOBI, PDF Format

    - by pinaldave
    Microsoft has recently released free eBooks on various Microsoft technologies. The best part is that all of these books are available in ePub, Mobi and PDF. You can download them to your local machine or eBook reader and read them. This is a great start, as many important subjects are now covered and converted into eBooks. I personally read through a few of the books and found them very comprehensive and detailed. The goal is not to cover a complete technology in a single book but rather to pick a single topic and discuss it in detail. The source of each book is a white paper, the TechNet wiki or Books Online, and it is clearly listed right below the book title. The following books are available for SQL Server technology and I encourage all of you to have a look at them as they are great resources:

    Master Data Services Capacity Guidelines
    Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery
    Microsoft SQL Server Analysis Services Multidimensional Performance and Operations Guide
    QuickStart: Learn DAX Basics in 30 Minutes
    SQL Server 2012 Transact-SQL DML Reference

    You can download the above eBooks from here. This is indeed a great attempt, as each book talks about a single subject in depth, keeping the author focused on one simple subject. I have previously written two books following the same principle and I had great pleasure writing them as well. Writing on a focused subject gives the author complete freedom to explore a single subject without the burden of covering everything associated with that technology at large. Just like the eBooks mentioned earlier, my SQL Server Wait Stats book was inspired by my article series on SQL Wait Stats. The latest book, SQL Server Interview Questions and Answers, was derived from my article series on SQL Interview Q and A. Writing a book is an absolutely different concept than writing blog posts. When I was converting my blog posts into books, I ended up writing 50% new material and removing much of the repetitive content that shows up in a blog series. It was indeed fun to write a focused book, and at the same time it was a great learning experience as an individual.

    Reference: TechNet Wiki, Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Generate MERGE statements from a table

    - by Bill Graziano
    We have a requirement to build a test environment where certain tables get reset from production every night. These are mainly lookup tables. I played around with all kinds of fancy solutions and finally settled on a series of MERGE statements. And being lazy I didn't want to write them myself. The stored procedure below will generate a MERGE statement for the table you pass it. If you have identity values it populates those properly. You need to have primary keys on the table for the joins to be generated properly. The only thing hard coded is the source database. You'll need to update that for your environment. We actually used a linked server in our situation.

        CREATE PROC dba_GenerateMergeStatement (@table NVARCHAR(128))
        AS
        set nocount on;
        declare @return int;
        PRINT '-- ' + @table + ' -------------------------------------------------------------'
        --PRINT 'SET NOCOUNT ON;--'

        -- Set the identity insert on for tables with identities
        select @return = objectproperty(object_id(@table), 'TableHasIdentity')
        if @return = 1
            PRINT 'SET IDENTITY_INSERT [dbo].[' + @table + '] ON; '

        declare @sql varchar(max) = ''
        declare @list varchar(max) = '';

        SELECT @list = @list + [name] + ', '
        from sys.columns
        where object_id = object_id(@table)

        SELECT @list = @list + [name] + ', '
        from sys.columns
        where object_id = object_id(@table)

        SELECT @list = @list + 's.' + [name] + ', '
        from sys.columns
        where object_id = object_id(@table)

        -- --------------------------------------------------------------------------------
        PRINT 'MERGE [dbo].[' + @table + '] AS t'
        PRINT 'USING (SELECT * FROM [source_database].[dbo].[' + @table + ']) as s'

        -- Get the join columns ----------------------------------------------------------
        SET @list = ''
        select @list = @list + 't.[' + c.COLUMN_NAME + '] = s.[' + c.COLUMN_NAME + '] AND '
        from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
        where pk.TABLE_NAME = @table
        and CONSTRAINT_TYPE = 'PRIMARY KEY'
        and c.TABLE_NAME = pk.TABLE_NAME
        and c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME
        SELECT @list = LEFT(@list, LEN(@list) - 3)
        PRINT 'ON ( ' + @list + ')'

        -- WHEN MATCHED ------------------------------------------------------------------
        PRINT 'WHEN MATCHED THEN UPDATE SET'
        SELECT @list = '';
        SELECT @list = @list + ' [' + [name] + '] = s.[' + [name] + '],'
        from sys.columns
        where object_id = object_id(@table)
        -- don't update primary keys
        and [name] NOT IN (SELECT [column_name]
                           from INFORMATION_SCHEMA.TABLE_CONSTRAINTS pk, INFORMATION_SCHEMA.KEY_COLUMN_USAGE c
                           where pk.TABLE_NAME = @table
                           and CONSTRAINT_TYPE = 'PRIMARY KEY'
                           and c.TABLE_NAME = pk.TABLE_NAME
                           and c.CONSTRAINT_NAME = pk.CONSTRAINT_NAME)
        -- and don't update identity columns
        and columnproperty(object_id(@table), [name], 'IsIdentity ') = 0
        --print @list
        PRINT left(@list, len(@list) - 3)

        -- WHEN NOT MATCHED BY TARGET ------------------------------------------------
        PRINT ' WHEN NOT MATCHED BY TARGET THEN';
        -- Get the insert list
        SET @list = ''
        SELECT @list = @list + '[' + [name] + '], '
        from sys.columns
        where object_id = object_id(@table)
        SELECT @list = LEFT(@list, LEN(@list) - 1)
        PRINT ' INSERT(' + @list + ')'
        -- get the values list
        SET @list = ''
        SELECT @list = @list + 's.[' + [name] + '], '
        from sys.columns
        where object_id = object_id(@table)
        SELECT @list = LEFT(@list, LEN(@list) - 1)
        PRINT ' VALUES(' + @list + ')'

        -- WHEN NOT MATCHED BY SOURCE
        print 'WHEN NOT MATCHED BY SOURCE THEN DELETE; '
        PRINT ''
        PRINT 'PRINT ''' + @table + ': '' + CAST(@@ROWCOUNT AS VARCHAR(100));';
        PRINT ''

        -- Set the identity insert OFF for tables with identities
        select @return = objectproperty(object_id(@table), 'TableHasIdentity')
        if @return = 1
            PRINT 'SET IDENTITY_INSERT [dbo].[' + @table + '] OFF; '
        PRINT ''
        PRINT 'GO'
        PRINT '';
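    A minimal usage sketch (the table name here is hypothetical): run the procedure in SSMS with output set to text, then copy the generated script from the output into whatever job refreshes the test environment. Remember that [source_database] inside the procedure must first be changed to your production database or linked server.

        -- Generates (via PRINT) a complete MERGE script for the named table
        EXEC dbo.dba_GenerateMergeStatement @table = N'Currency';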

    Read the article

  • How do I "print" to a PostScript file from LibreOffice Write?

    - by user69245
    Using OpenOffice with 10.04 I was able to print to a PostScript file, but I find I can't do this with LibreOffice and 12.04: print-to-file only goes to PDF. I want this feature so that I can use a FinePrint-like tool called fprint to print .PS files in booklet form. When I print from other applications I'm offered the usual choice of printers, including print to .PS, but LibreOffice restricts my choice.

    Read the article

  • 1 document pending in printer queue in System Tray that won't go away

    - by White Phoenix
    Edit: I didn't want to do it, but I restarted my computer - that cleared the problem right away. Though if I were running a Windows 2003/2008 server, I would hate to have to restart the domain controller just to get rid of this irritating problem. If I run into this problem again, I'm going to try the remove-printer/reinstall-printer suggestion. Thanks for your help @Psychogeek. Points for your attempt.

    Running Windows 7, 32-bit Professional. My printer is an HP OfficeJet Wireless 8500, connected to my network wirelessly through TCP/IP as a standalone device. I was having some print problems a while back and had to do some print spooler work as part of my troubleshooting (stopping the Print Spooler service, clearing the spool files from C:\Windows\System32\spool\PRINTERS and then restarting the service). I've finally narrowed that down to being application specific, so that's that.

    However, as a leftover from all that troubleshooting, my printer icon is stuck in the tray: when I mouse over the icon, Windows says that there is 1 document pending for my username. However, when I open up that printer's queue, there's nothing in there. I restarted the Print Spooler service and also checked C:\Windows\System32\spool\PRINTERS to see if there's anything in there - nothing. I did a quick Google search and an answer from one of those "reps" at the Microsoft Socialnet site says to uninstall and reinstall the printer. The funny thing is, when I send print jobs, they print just fine - that 1 mystery document stuck in the queue isn't stopping anything from happening. Short of having to do that, are there any other quick troubleshooting steps I may be missing?

    Read the article

  • Web-To-Print - What technology should I explore? Web-Content -> drag & drop to Templates -> PDF

    - by hamlin11
    I'm discussing some software design issues with a potential client and the idea of web-to-print technology has come up. We need users to be able to drag images from an image library to various regions defined by a template. Example: images may go in box A, B, or C, and text may go in boxes D or E. These templates would be set up by administrators: boxes A through E would be defined inside a template using some sort of editor. The templates would serve as a mapping from web content to a PDF. Once users drag images and insert text into the appropriate regions of the template, the result will be converted into a PDF. Is this feasible these days with jQuery & ASP.NET? That would be nice. If not, what would be the ideal solution?

    Read the article

  • how can I get jquery.sIFR plugin to display sIFR-alternate text for print?

    - by apeBoy
    I'm struggling with this. I've used the jQuery sIFR plugin (as opposed to plain sIFR) because it prevented conflicts with the other jQuery I am using on my pages. It works fine in its primary function: replacing HTML text with Flash. However, the .sIFR-alternate class is given an inline style of 'opacity: 0' which persists when Flashblock is on, so the alternate text does not appear. Neither does it appear when printing the page (yes, I have included styles for sIFR-print). I've tried replacing the opacity: 0 inline style with display: none, but this causes height issues with the output Flash. Anyone else had this or have any ideas?

    Read the article

  • Print number series in java

    - by user1898282
    I have to print the series shown below in Java:

    ***1***
    **2*2**
    *3*3*3*
    4*4*4*4

    My current implementation is:

    public static void printSeries(int number, int numberOfCharsinEachLine) {
        String s = "*";
        for (int i = 1; i <= number; i++) {
            int countOfs = (numberOfCharsinEachLine - (i) - (i - 1)) / 2;
            if (countOfs < 0) {
                System.out.println("Can't be done");
                break;
            }
            for (int j = 0; j < countOfs; j++) {
                System.out.print(s);
            }
            System.out.print(i);
            for (int k = 1; k < i; k++) {
                System.out.print(s);
                System.out.print(i);
            }
            for (int j = 0; j < countOfs; j++) {
                System.out.print(s);
            }
            System.out.println();
        }
    }

    But there are a lot of for loops, so I'm wondering whether this can be done in a better way or not?

    Read the article

  • SQL SERVER – Weekend Project – Experimenting with ACID Transactions, SQL Compliant, Elastically Scalable Database

    - by pinaldave
    Database technology is a huge world. I always like to explore beyond what I know and share what I learn. The weekend is the best time for me to sit around, download random software onto the machine I like to call my lab machine (it is a pretty old laptop, hardly lab quality) and experiment with it. There are so many free betas available for download that it's hard to keep track of them and even harder to find the time to play with very many of them. This blog is about one you shouldn't miss if you are interested in learning about various relational databases.

    NuoDB just released their Beta 7. I had already downloaded their Beta 6 and yesterday did the same for Beta 7. My impression is that they are onto something very interesting. In fact, it might be something really promising in terms of database elasticity, scale and operational cost reduction. The folks at NuoDB say they are working on the world's first "emergent" database, which they tout as a brand new transactional database intended to dramatically change what's possible with OLTP. It is SQL compliant, guarantees ACID transactions, yet scales elastically on heterogeneous and decentralized cloud-based resources. An interesting claim for sure, and one that made me explore more. Based on what I've seen so far, they are solving the architectural tension between elastic, cloud-based compute infrastructures designed to scale out in response to workload requirements and the traditional relational database management system's architecture of central control.

    Here's my experience with the NuoDB Beta 6 so far. First, they pretty much threw away all the features you'd associate with existing RDBMS architectures except SQL and ACID transactions, which they were smart to keep. It looks like they have incorporated a number of big ideas from various algorithms, systems and techniques to achieve maximum database scalability. From a user's perspective, the NuoDB Beta software behaves like any other traditional SQL database and seems to offer all the benefits users have come to expect from standards-based SQL solutions. One interesting feature is that you can run a transactional node and a storage node on a Windows laptop as well as on other platforms, which is indeed interesting. It's quite amazing to see a database elastically scale across machine boundaries.

    One of the basic NuoDB concepts is that as you need to scale out, you can simply use more inexpensive hardware when and where you need it. This is unlike what we have traditionally done to scale a database for an application, which is to replace the hardware with something more powerful (faster CPUs and disks). This is where I started to feel that NuoDB is onto something with the potential to elastically scale on commodity hardware while reducing operational expense for a big OLTP database to a degree we've never seen before. NuoDB is able to fully leverage the cloud in an asynchronous and highly decentralized manner while providing both SQL compliance and ACID transactions.

    Basically, what NuoDB is doing is so new that it is hard to believe until you've experienced it in action. I will keep you up to date as I test the NuoDB Beta 7, but if you are developing a web-scale application or have an on-premise app you are thinking of moving to the cloud, testing this beta is worth your time. If you do try it, let me know what you think.

    Before I say anything more, I am going to do more experiments and tests on this product and compare it with other existing similar products. For me it was a weekend well spent on learning something new. I encourage you to download the Beta 7 version and share your opinions here.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
    Marko Parkkola

    This article has been submitted by Marko Parkkola, data systems designer at Saarionen Oy, Finland. Marko is an excellent developer and is always thinking at the next level. You can read his earlier comment, which created a very interesting discussion, here: SQL SERVER – IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for enumerations in relational databases. He has written an excellent piece here and comments are welcome.

    Enumerations in Relational Database

    This is a subject which is very basic in relational databases but often not very well understood and sometimes badly implemented. There are of course many ways to do this, but I concentrate on only two cases: one which is "the right way" and one which is definitely the wrong way.

    The concept

    Let's say we have a table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there's a field that tells the person's marital status; let's name it the same way: MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values like Single, InRelationship, Married, Divorced. Now here comes the problem: SQL doesn't have enumerations.

    The wrong way

    This is, in my opinion, absolutely the wrong way to do this. It has one upside though: you'll see the enumeration's description instantly when you do a simple SELECT query and you don't have to deal with mysterious values. There are plenty of downsides too and one would be database fragmentation. Consider this (I've left all indexes and constraints out of the query on purpose).

        CREATE TABLE [dbo].[Person]
        (
            [Firstname] NVARCHAR(100),
            [Lastname] NVARCHAR(100),
            [Birthday] datetime,
            [MaritalStatus] NVARCHAR(10)
        )

    You have an NVARCHAR(10) field in the table that tells the marital status. The obvious problem with this is: what if you create a new value which doesn't fit into 10 characters? You'll have to come and alter the table. There are other problems also, but I'll leave those for the reader to think about.

    The correct way

    Here's how I've done this in many projects. This model still has one problem, but it can be alleviated in the application layer or with CHECK constraints if you like. First I will create a namespace table which tells the name of the enumeration. I will add one row to it too. I'll write all the indexes and constraints here too.

        CREATE TABLE [CodeNamespace]
        (
            [Id] INT IDENTITY(1, 1),
            [Name] NVARCHAR(100) NOT NULL,
            CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]),
            CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name])
        )
        GO
        INSERT INTO [CodeNamespace] SELECT 'MaritalStatus'
        GO

    Then I create a table that holds the actual values and which references the namespace table in order to group the values under different namespaces. I'll add a couple of rows here too.

        CREATE TABLE [CodeValue]
        (
            [CodeNamespaceId] INT NOT NULL,
            [Value] INT NOT NULL,
            [Description] NVARCHAR(100) NOT NULL,
            [OrderBy] INT,
            CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]),
            CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id])
        )
        GO
        -- 1 is the 'MaritalStatus' namespace
        INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1
        INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2
        INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3
        INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4
        GO

    Now there are four columns in the CodeValue table. CodeNamespaceId tells which namespace the values belong to. Value is the enumeration value which is used in the Person table (I'll show how this is done below). Description tells what the value means; you can use this column, for example, in a UI's combo box. OrderBy tells if the values need to be ordered in some way when displayed in the UI.

    And here's the Person table again, now with correct columns. I'll add one row here to show how enumerations are to be used.

        CREATE TABLE [dbo].[Person]
        (
            [Firstname] NVARCHAR(100),
            [Lastname] NVARCHAR(100),
            [Birthday] datetime,
            [MaritalStatus] INT
        )
        GO
        INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3
        GO

    Now I said earlier that there is one problem with this. The MaritalStatus column doesn't have any database-enforced relationship to the CodeValue table, so you can enter any value you like into this field. I've solved this problem in the application layer by selecting all the values from the CodeValue table and putting them into a combobox / dropdownlist (with the Value field as the value and Description as the text) so the end user can't enter any illegal values; and of course I check the entered value in the data access layer also.

    I said in the "The wrong way" section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema bind so you can even index it if you like.

        CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING
        AS
            SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus
            FROM [dbo].[Person] p
            JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value]
            JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus'
        GO
        -- Select from View
        SELECT * FROM [dbo].[Person_v]
        GO

    This is an excellent write-up by Marko Parkkola. Do you have this kind of design setup at your organization? Let us know your opinion.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
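    One hedged way to close the gap Marko mentions (he suggests the application layer or CHECK constraints; the exact constraint below is not from his article) is a CHECK constraint that limits Person.MaritalStatus to the values currently defined for the 'MaritalStatus' namespace:

        -- Restrict MaritalStatus to the values 1-4 inserted into CodeValue above
        ALTER TABLE [dbo].[Person]
        ADD CONSTRAINT [CK_Person_MaritalStatus]
        CHECK ([MaritalStatus] BETWEEN 1 AND 4);

    The trade-off is that adding a new value to CodeValue then also requires altering the constraint, which is one reason to consider handling the check in the application and data access layers instead, as Marko describes.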

    Read the article

  • Your thoughts on Best Practices for Scientific Computing?

    - by John Smith
    A recent paper by Wilson et al. (2014) pointed out 24 best practices for scientific programming. It's worth a look. I would like to hear opinions about these points from experienced programmers in scientific data analysis. Do you think this advice is helpful and practical? Or is it good only in an ideal world?

    Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, Haddock SHD, Huff KD, Mitchell IM, Plumbley MD, Waugh B, White EP, Wilson P (2014) Best Practices for Scientific Computing. PLoS Biol 12:e1001745. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745

    Box 1. Summary of Best Practices
    1. Write programs for people, not computers. (a) A program should not require its readers to hold more than a handful of facts in memory at once. (b) Make names consistent, distinctive, and meaningful. (c) Make code style and formatting consistent.
    2. Let the computer do the work. (a) Make the computer repeat tasks. (b) Save recent commands in a file for re-use. (c) Use a build tool to automate workflows.
    3. Make incremental changes. (a) Work in small steps with frequent feedback and course correction. (b) Use a version control system. (c) Put everything that has been created manually in version control.
    4. Don't repeat yourself (or others). (a) Every piece of data must have a single authoritative representation in the system. (b) Modularize code rather than copying and pasting. (c) Re-use code instead of rewriting it.
    5. Plan for mistakes. (a) Add assertions to programs to check their operation. (b) Use an off-the-shelf unit testing library. (c) Turn bugs into test cases. (d) Use a symbolic debugger.
    6. Optimize software only after it works correctly. (a) Use a profiler to identify bottlenecks. (b) Write code in the highest-level language possible.
    7. Document design and purpose, not mechanics. (a) Document interfaces and reasons, not implementations. (b) Refactor code in preference to explaining how it works. (c) Embed the documentation for a piece of software in that software.
    8. Collaborate. (a) Use pre-merge code reviews. (b) Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems. (c) Use an issue tracking tool.

    I'm relatively new to serious programming for scientific data analysis. When I tried to write code for pilot analyses of some of my data last year, I encountered a tremendous number of bugs both in my code and in my data. Bugs and errors had been around me all the time, but this time it was somewhat overwhelming. I managed to crunch the numbers at last, but I thought I couldn't put up with this mess any longer. Some action had to be taken.

    Without a sophisticated guide like the article above, I started to adopt a "defensive style" of programming. A book titled "The Art of Readable Code" helped me a lot. I deployed meticulous input validations or assertions for every function, renamed a lot of variables and functions for better readability, and extracted many subroutines as reusable functions. Recently, I introduced Git and SourceTree for version control. At the moment, because my co-workers are much more reluctant about these issues, the collaboration practices (8a, b, c) have not been introduced. Actually, as the authors admitted, because all of these practices take some amount of time and effort to introduce, it may be generally hard to persuade reluctant collaborators to comply with them.

    I think I'm asking for your opinions because I still suffer from many bugs despite all my effort on many of these practices. Bug fixes may be, or should be, faster than before, but I couldn't really measure the improvement. Moreover, much of my time has been invested in defence, meaning that I haven't actually done much data analysis (offence) these days. Where is the point I should stop at in terms of productivity?

    I've already deployed: 1a,b,c, 2a, 3a,b,c, 4b,c, 5a,d, 6a,b, 7a,b
    I'm about to have a go at: 5b,c
    Not yet: 2b,c, 4a, 7c, 8a,b,c

    (I could not really see the advantage of using GNU make (2c) for my purpose. Could anyone tell me how it helps my work with MATLAB?)

    Read the article

  • Printing fails after first print with Centos 6 and HP LaserJet P3015dn printer

    - by Gavin Simpson
    Centos 6 recognises and configures a HP LaserJet P3015dn printer connected via USB. This machine is being configured as a small group file/print server. I can print a test page, which is processed/printed correctly. The next time printing is attempted (say printing a second test page), the page is not printed and the printer is set to disabled. The status of the printer is stated as: Stopped - /usr/lib/cups/backend/hp failed in the printer configuration dialogue. /var/log/cups/error_log contains this information (first two lines were there prior to the failed print job) E [24/Jun/2004:09:12:57 +0100] Returning HTTP Forbidden for Resume-Printer (ipp://localhost/printers/HP-LaserJet-P3010-Series) from localhost E [24/Jun/2004:09:20:59 +0100] Returning HTTP Forbidden for CUPS-Delete-Printer (ipp://localhost/printers/HP-LaserJet-P3010-Series) from localhost D [24/Jun/2004:09:37:28 +0100] [Job 28] The following messages were recorded from 09:36:43 AM to 09:37:28 AM D [24/Jun/2004:09:37:28 +0100] [Job 28] Adding start banner page "none". D [24/Jun/2004:09:37:28 +0100] [Job 28] Adding end banner page "none". D [24/Jun/2004:09:37:28 +0100] [Job 28] File of type application/vnd.cups-banner queued by "gavin". D [24/Jun/2004:09:37:28 +0100] [Job 28] hold_until=0 D [24/Jun/2004:09:37:28 +0100] [Job 28] Queued on "HP-LaserJet-P3010-Series" by "gavin". D [24/Jun/2004:09:37:28 +0100] [Job 28] job-sheets=none,none D [24/Jun/2004:09:37:28 +0100] [Job 28] argv[0]="HP-LaserJet-P3010-Series" D [24/Jun/2004:09:37:28 +0100] [Job 28] argv[1]="28" D [24/Jun/2004:09:37:28 +0100] [Job 28] argv[2]="gavin" D [24/Jun/2004:09:37:28 +0100] [Job 28] argv[3]="Test Page" D [24/Jun/2004:09:37:28 +0100] [Job 28] argv[4]="1" D [24/Jun/2004:09:37:28 +0100] [Job 28] argv[5]="job-uuid=urn:uuid:b3370a97-4ab6-3451-40a2-6239b13fa3e1 job-originating-host-name=localhost" D [24/Jun/2004:09:37:28 +0100] [Job 28] argv[6]="/var/spool/cups/d00028-001" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[0]="CUPS_CACHEDIR=/var/cache/cups" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[1]="CUPS_DATADIR=/usr/share/cups" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[2]="CUPS_DOCROOT=/usr/share/cups/www" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[3]="CUPS_FONTPATH=/usr/share/cups/fonts" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[4]="CUPS_REQUESTROOT=/var/spool/cups" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[5]="CUPS_SERVERBIN=/usr/lib/cups" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[6]="CUPS_SERVERROOT=/etc/cups" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[7]="CUPS_STATEDIR=/var/run/cups" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[8]="HOME=/var/spool/cups/tmp" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[9]="PATH=/usr/lib/cups/filter:/usr/bin:/usr/sbin:/bin:/usr/bin" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[10]="[email protected]" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[11]="SOFTWARE=CUPS/1.4.2" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[12]="TMPDIR=/var/spool/cups/tmp" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[13]="USER=root" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[14]="CUPS_SERVER=/var/run/cups/cups.sock" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[15]="CUPS_ENCRYPTION=IfRequested" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[16]="IPP_PORT=631" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[17]="CHARSET=utf-8" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[18]="LANG=en_US.UTF-8" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[19]="PPD=/etc/cups/ppd/HP-LaserJet-P3010-Series.ppd" D [24/Jun/2004:09:37:28 +0100] [Job 28] 
envp[20]="RIP_MAX_CACHE=8m" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[21]="CONTENT_TYPE=application/vnd.cups-banner" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[22]="DEVICE_URI=hp:/usb/HP_LaserJet_P3010_Series?serial=VNBV993GM4" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[23]="PRINTER_INFO=Hewlett-Packard HP LaserJet P3010 Series" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[24]="PRINTER_LOCATION=electra.geog.ucl.ac.uk" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[25]="PRINTER=HP-LaserJet-P3010-Series" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[26]="CUPS_FILETYPE=document" D [24/Jun/2004:09:37:28 +0100] [Job 28] envp[27]="FINAL_CONTENT_TYPE=application/vnd.cups-postscript" D [24/Jun/2004:09:37:28 +0100] [Job 28] Started filter /usr/lib/cups/filter/bannertops (PID 2858) D [24/Jun/2004:09:37:28 +0100] [Job 28] Started filter /usr/lib/cups/filter/pstops (PID 2859) D [24/Jun/2004:09:37:28 +0100] [Job 28] Started backend /usr/lib/cups/backend/hp (PID 2860) D [24/Jun/2004:09:37:28 +0100] [Job 28] load_banner(filename="/var/spool/cups/d00028-001") D [24/Jun/2004:09:37:28 +0100] [Job 28] Page = 612x792; 12,12 to 600,780 D [24/Jun/2004:09:37:28 +0100] [Job 28] Page = 612x792; 12,12 to 600,780 D [24/Jun/2004:09:37:28 +0100] [Job 28] slow_collate=0, slow_duplex=0, slow_order=0 D [24/Jun/2004:09:37:28 +0100] [Job 28] Before copy_comments - %!PS-Adobe-3.0 D [24/Jun/2004:09:37:28 +0100] [Job 28] %!PS-Adobe-3.0 D [24/Jun/2004:09:37:28 +0100] [Job 28] %%BoundingBox: 12 12 600 780 D [24/Jun/2004:09:37:28 +0100] [Job 28] %cupsRotation: 0 D [24/Jun/2004:09:37:28 +0100] [Job 28] %%Creator: bannertops/CUPS v1.4.2 D [24/Jun/2004:09:37:28 +0100] [Job 28] %%CreationDate: Thu 24 Jun 2004 09:36:43 AM BST D [24/Jun/2004:09:37:28 +0100] [Job 28] %%LanguageLevel: 2 D [24/Jun/2004:09:37:28 +0100] [Job 28] %%DocumentData: Clean7Bit D [24/Jun/2004:09:37:28 +0100] [Job 28] %%Title: (Test Page) D [24/Jun/2004:09:37:28 +0100] [Job 28] %%For: (gavin) D [24/Jun/2004:09:37:28 +0100] [Job 28] %%Pages: 1 D [24/Jun/2004:09:37:28 +0100] [Job 28] %%DocumentSuppliedResources: font Monospace D [24/Jun/2004:09:37:28 +0100] [Job 28] %%+ font Monospace-Bold D [24/Jun/2004:09:37:28 +0100] [Job 28] %%+ font Monospace-BoldOblique D [24/Jun/2004:09:37:28 +0100] [Job 28] %%+ font Monospace-Oblique D [24/Jun/2004:09:37:28 +0100] [Job 28] %%EndComments D [24/Jun/2004:09:37:28 +0100] [Job 28] Before copy_prolog - %%BeginProlog D [24/Jun/2004:09:37:28 +0100] [Job 28] STATE: +connecting-to-device D [24/Jun/2004:09:37:28 +0100] [Job 28] prnt/backend/hp.c 762: ERROR: cannot open channel PRINT D [24/Jun/2004:09:37:28 +0100] [Job 28] Backend returned status 1 (failed) D [24/Jun/2004:09:37:28 +0100] [Job 28] Printer stopped due to backend errors; please consult the error_log file for details. 
D [24/Jun/2004:09:37:28 +0100] [Job 28] End of messages D [24/Jun/2004:09:37:28 +0100] [Job 28] printer-state=5(stopped) D [24/Jun/2004:09:37:28 +0100] [Job 28] printer-state-message="/usr/lib/cups/backend/hp failed" D [24/Jun/2004:09:37:28 +0100] [Job 28] printer-state-reasons=paused /var/log/messages contains the following reports associated with the recognition of the printer and the failed print job: Jun 24 09:35:07 electra kernel: usb 1-8: new high speed USB device using ehci_hcd and address 2 Jun 24 09:35:07 electra kernel: usb 1-8: New USB device found, idVendor=03f0, idProduct=8d17 Jun 24 09:35:07 electra kernel: usb 1-8: New USB device strings: Mfr=1, Product=2, SerialNumber=3 Jun 24 09:35:07 electra kernel: usb 1-8: Product: HP LaserJet P3010 Series Jun 24 09:35:07 electra kernel: usb 1-8: Manufacturer: Hewlett-Packard Jun 24 09:35:07 electra kernel: usb 1-8: SerialNumber: VNBV993GM4 Jun 24 09:35:07 electra kernel: usb 1-8: configuration #1 chosen from 1 choice Jun 24 09:35:07 electra kernel: usblp0: USB Bidirectional printer dev 2 if 0 alt 1 proto 2 vid 0x03F0 pid 0x8D17 Jun 24 09:35:07 electra kernel: usbcore: registered new interface driver usblp Jun 24 09:35:07 electra udev-configure-printer: invalid or missing IEEE 1284 Device ID Jun 24 09:35:08 electra hp[1942]: io/hpmud/pp.c 627: unable to read device-id ret=-1 Jun 24 09:35:09 electra python: io/hpmud/pp.c 627: unable to read device-id ret=-1 Jun 24 09:35:51 electra kernel: usblp0: removed Jun 24 09:37:28 electra hp[2860]: io/hpmud/dot4.c 254: unable to read Dot4ReverseReply data: Resource temporarily unavailable exp=2 act=0 Jun 24 09:37:28 electra hp[2860]: io/hpmud/dot4.c 330: invalid DOT4InitReply: cmd=0, result=20#012, revision=0 Jun 24 09:37:28 electra hp[2860]: prnt/backend/hp.c 762: ERROR: cannot open channel PRINT I am now at a loss as to how to proceed to get this printer working on my Centos machine. How can I configure the machine to print more than a single print job without needing to be unplugged/plugged in repeatedly?

    Read the article

  • SQL SERVER – Using expressor Composite Types to Enforce Business Rules

    - by pinaldave
    One of the features that distinguish the expressor Data Integration Platform from other products in the data integration space is its concept of composite types, which provide an effective and easily reusable way to clearly define the structure and characteristics of data within your application.  An important feature of the composite type approach is that it allows you to easily adjust the content of a record to its ultimate purpose.  For example, a record used to update a row in a database table is easily defined to include only the minimum set of columns, that is, a value for the key column and values for only those columns that need to be updated. Much like a class in higher level programming languages, you can also use the composite type as a way to enforce business rules onto your data by encapsulating a datum’s name, data type, and constraints (for example, maximum, minimum, or acceptable values) as a single entity, which ensures that your data can not assume an invalid value.  To what extent you use this functionality is a decision you make when designing your application; the expressor design paradigm does not force this approach on you. Let’s take a look at how these features are used.  Suppose you want to create a group of applications that maintain the employee table in your human resources database. Your table might have a structure similar to the HumanResources.Employee table in the AdventureWorks database.  This table includes two columns, EmployeID and rowguid, that are maintained by the relational database management system; you cannot provide values for these columns when inserting new rows into the table. Additionally, there are columns such as VacationHours and SickLeaveHours that you might choose to update for all employees on a monthly basis, which justifies creation of a dedicated application. By creating distinct composite types for the read, insert and update operations against this table, you can more easily manage this table’s content. When developing this application within expressor Studio, your first task is to create a schema artifact for the database table.  This process is completely driven by a wizard, only requiring that you select the desired database schema and table.  The resulting schema artifact defines the mapping of result set records to a record within the expressor data integration application.  The structure of the record within the expressor application is a composite type that is given the default name CompositeType1.  As you can see in the following figure, all columns from the table are included in the result set and mapped to an identically named attribute in the default composite type. If you are developing an application that needs to read this table, perhaps to prepare a year-end report of employees by department, you would probably not be interested in the data in the rowguid and ModifiedDate columns.  A typical approach would be to drop this unwanted data in a downstream operator.  But using an alternative composite type provides a better approach in which the unwanted data never enters your application. While working in expressor  Studio’s schema editor, simply create a second composite type within the same schema artifact, which you could name ReadTable, and remove the attributes corresponding to the unwanted columns. The value of an alternative composite type is even more apparent when you want to insert into or update the table.  
In the composite type used to insert rows, remove the attributes corresponding to the EmployeeID primary key and rowguid uniqueidentifier columns since these values are provided by the relational database management system. And to update just the VacationHours and SickLeaveHours columns, use a composite type that includes only the attributes corresponding to the EmployeeID, VacationHours, SickLeaveHours and ModifiedDate columns. By specifying this schema artifact and composite type in a Write Table operator, your upstream application need only deal with the four required attributes and there is no risk of unintentionally overwriting a value in a column that does not need to be updated. Now, what about the option to use the composite type to enforce business rules?  If you review the composition of the default composite type CompositeType1, you will note that the constraints defined for many of the attributes mirror the table column specifications.  For example, the maximum number of characters in the NationaIDNumber, LoginID and Title attributes is equivalent to the maximum width of the target column, and the size of the MaritalStatus and Gender attributes is limited to a single character as required by the table column definition.  If your application code leads to a violation of these constraints, an error will be raised.  The expressor design paradigm then allows you to handle the error in a way suitable for your application.  For example, a string value could be truncated or a numeric value could be rounded. Moreover, you have the option of specifying additional constraints that support business rules unrelated to the table definition. Let’s assume that the only acceptable values for marital status are S, M, and D.  Within the schema editor, double-click on the MaritalStatus attribute to open the Edit Attribute window.  Then click the Allowed Values checkbox and enter the acceptable values into the Constraint Value text box. The schema editor is updated accordingly. There is one more option that the expressor semantic type paradigm supports.  Since the MaritalStatus attribute now clearly specifies how this type of information should be represented (a single character limited to S, M or D), you can convert this attribute definition into a shared type, which will allow you to quickly incorporate this definition into another composite type or into the description of an output record from a transform operator. Again, double-click on the MaritalStatus attribute and in the Edit Attribute window, click Convert, which opens the Share Local Semantic Type window that you use to name this shared type.  There’s no requirement that you give the shared type the same name as the attribute from which it was derived.  You should supply a name that makes it obvious what the shared type represents. In this posting, I’ve overviewed the expressor semantic type paradigm and shown how it can be used to make your application development process more productive.  The beauty of this feature is that you choose when and to what extent you utilize the functionality, but I’m certain that if you opt to follow this approach your efforts will become more efficient and your work will progress more quickly.  As always, I encourage you to download and evaluate expressor Studio for your current and future data integration needs. 
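    To make the narrower, update-oriented composite type concrete, this is roughly the statement it boils down to on the database side (plain T-SQL for comparison only, not expressor syntax; the accrual values and key below are made up):

        UPDATE HumanResources.Employee
        SET VacationHours  = VacationHours + 8,   -- hypothetical monthly accrual
            SickLeaveHours = SickLeaveHours + 4,
            ModifiedDate   = GETDATE()
        WHERE EmployeeID = 5;                      -- key value supplied by the upstream record

    Only the key column and the columns being changed appear, which is exactly what a composite type restricted to EmployeeID, VacationHours, SickLeaveHours and ModifiedDate enforces.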
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: CodeProject, Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table. Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult. What these users really need is a point and click approach that minimizes the learning curve for the data integration tool. This is what CSVexpress (www.CSVexpress.com) is all about! It is based on expressor Studio, a data integration tool I've been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators. There's no need to learn the intricacies of the data integration tool or to write code. Let's look at an example.

    Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

    EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
    102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
    101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
    103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
    304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
    333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
    100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
    334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
    400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping. In moving this data to a database table I want to express the dates using a format that includes the century since it's obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character. Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio. Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored. For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information. With expressor Studio, I use wizards to create these artifacts.

    First click New Connection > File Connection in the Home tab of expressor Studio's ribbon bar, which starts the File Connection wizard. In the first window, I enter the path to the directory that contains the input file. Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
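    For comparison only (this is not how the expressor workflow described here does it, and the staging table name is made up), the same clean-up could be expressed in plain T-SQL over a table holding the raw file columns as strings; the two-digit years rely on SQL Server's "two digit year cutoff" setting to pick the century:

        SELECT CAST(EMP_ID AS INT)                                  AS EmployeeID,
               CAST(STRT_DATE AS DATETIME)                          AS StartingDate,    -- '13-JAN-93' -> 1993-01-13
               CAST(END_DATE AS DATETIME)                           AS TerminationDate, -- keeps the 17:00 time portion
               JOB_ID                                               AS JobDescription,
               CAST(DEPT_ID AS INT)                                 AS DepartmentID,
               CAST(REPLACE(REPLACE(SALARY, '$', ''), ',', '')
                    AS DECIMAL(10, 2))                              AS FinalSalary
        FROM dbo.TerminatedEmployees_Staging;  -- hypothetical staging table of raw strings

    The point of CSVexpress is that none of this has to be written: the equivalent directives live in the schema artifact described below.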
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table. To use expressor Studio's features to the fullest, this account should also have the authority to create a table.

    I click New Connection > Database Connection in the Home tab of expressor Studio's ribbon bar, which starts the Database Connection wizard. expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the "Supplied database drivers" drop down control. If my desired RDBMS isn't listed, I can optionally use an existing ODBC DSN by selecting the "Existing DSN" radio button. In the following window, I enter the connection details. With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials. After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact.

    Now I create a schema artifact, which describes the structure of the file data. When expressor reads a file, all data fields are typed as strings. In some use cases this may be exactly what is needed and there is no need to edit the schema artifact. But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and department IDs to integers, and convert the salary to a decimal value.

    Again a wizard is used to create the schema artifact. I click New Schema > Delimited Schema in the Home tab of expressor Studio's ribbon bar, which starts the Delimited Schema wizard. In the first window, I click Get Data from File, which then displays a listing of the file connections in the project. When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file's content and confirm that the appropriate delimiter characters are selected in the "Field Delimiter" and "Record Delimiter" drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking "Set All Names from Selected Row." Alternatively, I could enter a different identifier into the Field Details > Name text box. I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

    Now I open the schema artifact in the schema editor. When I first view the schema's content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file. To change an attribute's name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Attribute window; I can change the attribute name and select the desired type from the "Data type" drop down control. In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type; for the StartingDate and TerminationDate attributes, I select Datetime as the data type; and for the FinalSalary attribute, I select the Decimal type.

But I can do much more in the schema editor. For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999. I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date).

As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file. Note the use of Y01 as the syntax for the year. This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century. As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

And now to the FinalSalary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the "Use currency" check box. This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the "Use grouping" checkbox and enter 3 into the "Group size" text box, a comma into the "Grouping character" text box, and a decimal point into the "Decimal separator" text box. These entries allow the string to be properly converted into a decimal value.

By making these entries into the schema that describes my input file, I've specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself.

Assembling the data integration application is simple. Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator.

Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio. For each property, I can select an appropriate entry from the corresponding drop down control. Clicking on the button to the right of the "File name" text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file. I also indicate that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, Normal for the "Mode," and the "Truncate" and "Create Missing Table" options.
If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.

The last task needed to complete the application is to create the schema artifact used by the Write Table operator. This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator. In the Write Table Properties panel, I click the drop down control to the right of the "Schema" property and select "New Table Schema from Upstream Output…" from the drop down menu.

The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist. The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection. The fourth screen gives me the opportunity to fine tune the table's description. In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column. I also provide the name for the table.

This completes development of the application. The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding. To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials. expressor Studio is as close to a point and click data integration tool as one could want, and I urge you to try this product if you have a need to move data between files or from files to database tables.

Check out CSVexpress in more detail. It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • How to print data from C#

    - by Hybryd
    I've searched Stack Overflow and Google and found many ways to print from C#. The best way for me would be to populate a blank white Windows Form with some label, textbox and picturebox elements and print it as a Windows Form. This approach is very poor, though, because it prints at 72 DPI and is not flexible for multi-page printing. The next way I found that would be good is iTextSharp, but the problem is that iTextSharp only generates PDFs, and you have to open them in a PDF viewer and print from there. I love this way of thinking, where I create a paragraph and then fill it with text and graphics, so I found this thread http://www.devarticles.com/c/a/C-Sharp/Printing-Using-C-sharp/ where it discusses how to create your own printing engine in C#, something like iTextSharp but very lightweight... Now that I've said that, I want to know: is there any ready-to-use printing engine like iTextSharp, but made for printing rather than PDF generation? What is the best way to print something without using reporting services like Crystal Reports? I don't think Crystal Reports would work for my case because I don't want to print generic reports, but some text and graphics that I need to dynamically generate every time I print.
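
    For illustration, a minimal sketch of printing directly with System.Drawing.Printing.PrintDocument, which draws at printer resolution instead of rendering a 72 DPI form; the font, coordinates and document name here are made up, and real code would add pagination and layout logic:

        using System.Drawing;
        using System.Drawing.Printing;

        class PrintSketch
        {
            static void Main()
            {
                var doc = new PrintDocument();
                doc.DocumentName = "Sample output";   // hypothetical document name
                doc.PrintPage += (sender, e) =>
                {
                    // Draw text and graphics onto the printer's graphics surface
                    using (var font = new Font("Arial", 12))
                    {
                        e.Graphics.DrawString("Hello, printer", font, Brushes.Black, 100, 100);
                        e.Graphics.DrawRectangle(Pens.Black, 100, 150, 200, 100);
                    }
                    e.HasMorePages = false;           // set true to continue onto another page
                };
                doc.Print();
            }
        }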

    Read the article

  • error catching exception while System.out.print

    - by user1702633
    I have 2 classes: one that implements a double lookup(int i), and one where I use that lookup(int i) to solve a question, in this case printing the lookup values. This case is for an array. So I read the exception documentation and my Google/textbook sources and came up with the following code:

        public double lookup(int i) throws Exception {
            if (i > numItems)
                throw new Exception("out of bounds");
            return items[i];
        }

    and take it over to my other class and try to print my set, where set is an object of the type I define in the class above:

        public void print() {
            for (int i = 0; i < set.size() - 1; i++) {
                System.out.print(set.lookup(i) + ",");
            }
            System.out.print(set.lookup(set.size()));
        }

    I'm using two print() calls to avoid the trailing "," in the output, but I'm getting an "unhandled exception Exception" error (my exception's type is Exception). I think I have to catch the exception in my print() method but cannot find the correct syntax online. Do I have to write catch exception Exception? That gives me a syntax error saying invalid type on the catch. Sources like http://docs.oracle.com/javase/tutorial/essential/exceptions/ are of little help to me; I can't seem to grasp what the text is telling me. I'm also having trouble finding sources with multiple examples where I can actually understand the code in the examples. So could anybody give me a source/example for the above catch clause, and perhaps a decent source of examples for new Java programmers? My book is horrendous and I cannot find an understandable example for this online.
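
    For illustration, a sketch of how print() could handle the checked exception with a try/catch block; it also assumes the last valid index is set.size() - 1, since set.lookup(set.size()) would read one element past the end of the array:

        public void print() {
            try {
                for (int i = 0; i < set.size() - 1; i++) {
                    System.out.print(set.lookup(i) + ",");
                }
                System.out.print(set.lookup(set.size() - 1)); // last valid index, no trailing comma
                System.out.println();
            } catch (Exception e) {
                // Runs if lookup() throws; prints the "out of bounds" message
                System.err.println("Lookup failed: " + e.getMessage());
            }
        }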

    Read the article

  • Good documentation tool that is not Latex?

    - by flpgdt
    I am far from being an expert in LaTeX, but I'm OK with documenting my projects in it. However, I seldom find people in the corporate world eager to learn LaTeX and go along with the documentation; in 99% of cases they just ask me for the Word version of it. For technical documentation I find less resistance, but still, whenever I start a project with someone not familiar with LaTeX, getting started is troublesome. That said, LaTeX is a bit of an oversized tool for my needs really. My documents hardly go beyond tables, lists, a few images and type styles (although I'd still love to be able to produce hyperlinked PDFs). What other tools are out there, simpler and with an easier learning curve than LaTeX, but still PDF-worthy and with minimally decent capabilities? It also has to run on Windows :( Oh yeah, MS Word is not an option ;)

    Read the article

  • Finding documentation for /etc/**/*.dpkg-*

    - by intuited
    During upgrades, files with these extensions appear in /etc and its subdirectories. I gather that *.dpkg-dist contains the file that was distributed with the currently-installed version of a package, and *.dpkg-new contains the version from the package version being installed; however, I'd like to see the docs to be sure that I'm getting it right. Also, there are occasionally other similarly named files, e.g. *.dpkg-original, and I'd like to be able to read up on these. I've checked /usr/share/doc/dpkg for documentation on this and come up empty; there's no dpkg-doc package; Google doesn't have anything except unanswered questions. Can someone point me to the documentation for this aspect of Debian package management?
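
    For illustration, while tracking down the documentation, a quick way to locate and review these leftover files is sketched below; the file names are only examples:

        # List all dpkg-related leftovers under /etc
        find /etc -name '*.dpkg-*' 2>/dev/null

        # Compare a kept configuration file with the maintainer's shipped version
        # (the paths here are hypothetical)
        diff -u /etc/apt/apt.conf /etc/apt/apt.conf.dpkg-dist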

    Read the article

  • How do I write in-code comments and documentation in a proper way? Is there any standard for this?

    - by hkBattousai
    I want to add documentation to my code by means of comment lines. Is there any standard format for this? For example, consider the code below:

        class Arithmetic
        {
            // This method adds two numbers, and returns the result.
            // dbNum1 is the first number to add, and dbNum2 is the second.
            // The return value is dbNum1 + dbNum2.
            static double AddTwoNumbers(double dbNum1, double dbNum2);
        }

    For this example code, is there any better way of writing the comment lines?
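
    For illustration, if the code is C#, the usual convention is XML documentation comments, which Visual Studio and documentation generators understand; a sketch of the same method documented that way:

        class Arithmetic
        {
            /// <summary>
            /// Adds two numbers and returns the result.
            /// </summary>
            /// <param name="dbNum1">The first number to add.</param>
            /// <param name="dbNum2">The second number to add.</param>
            /// <returns>The sum dbNum1 + dbNum2.</returns>
            static double AddTwoNumbers(double dbNum1, double dbNum2)
            {
                return dbNum1 + dbNum2;
            }
        }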

    Read the article

  • Windows 2008 Terminal Services "Easy Print" and Matrix Printers

    - by Cesar87
    Server: Windows 2008 Server Standard SP2 with the "Terminal Services" role
    Clients: Windows XP SP3 + .NET 3.5 Framework SP1 + Remote Desktop Client 7.0

    We are using the "Easy Print" feature, which allows programs running on the server to "see" printers installed on the client machines. Everything works fine, EXCEPT when we send text-only output to a dot-matrix printer. In this case, the printer only outputs a blank page. At first, we had problems with the error "Windows Presentation Foundation Terminal Server Print W has encountered a problem and needs to close.", but this was fixed by replacing TsWpfWrp.exe with the one from Vista SP1 as suggested here. But now we only get a blank page! Every other (graphical) document we send to the printer works 100%. We also tried the "Generic text-only" driver, but the result is the same. Now we are trying to change parameters such as the print processor on the "Advanced" tab of the printer driver to see if anything happens, but this is just guessing and we really don't know what to try anymore. The problem appears to be in the Easy Print driver, but we found almost no resources about it. Any tips are welcome.

    Read the article

  • Microsoft.Office.Interop.Word documentation

    - by rold2007
    I need to use the Microsoft.Office.Interop.Word namespace to determine whether a Word document contains macros, and if so, which ones. The MSDN documentation for this namespace doesn't give much information compared to the documentation for other .NET classes. Where can I get more information about this namespace (examples, complete documentation, etc.)? I have already searched on Google and SO but didn't find much information.
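
    For illustration only, a sketch of detecting macros through the Word object model; it assumes Document.HasVBProject is available (Word 2007 or later), that "Trust access to the VBA project object model" is enabled so the components can be enumerated, and the file path below is made up:

        using Word = Microsoft.Office.Interop.Word;

        class MacroCheck
        {
            static void Main()
            {
                var app = new Word.Application();
                // Hypothetical path used for the example
                Word.Document doc = app.Documents.Open(@"C:\temp\sample.docm", ReadOnly: true);

                if (doc.HasVBProject)
                {
                    // Requires "Trust access to the VBA project object model" in the Trust Center
                    foreach (Microsoft.Vbe.Interop.VBComponent component in doc.VBProject.VBComponents)
                    {
                        System.Console.WriteLine(component.Name);
                    }
                }

                doc.Close(false);
                app.Quit();
            }
        }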

    Read the article

  • Parsing the output of "uptime" with bash

    - by Keek
    I would like to save the output of the uptime command into a csv file in a Bash script. Since the uptime command has different output formats based on the time since the last reboot I came up with a pretty heavy solution based on case, but there is surely a more elegant way of doing this.

    uptime output:

        8:58AM up 15:12, 1 user, load averages: 0.01, 0.02, 0.00

    desired result:

        15:12,1 user,0.00 0.02 0.00,

    current code:

        case "`uptime | wc -w | awk '{print $1}'`" in   #Count the number of words in the uptime output
          10) #e.g.: 8:16PM up 2:30, 1 user, load averages: 0.09, 0.05, 0.02
            echo -n `uptime | awk '{ print $3 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $4,$5 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $8,$9,$10 }' | awk '{gsub ( ",","" ) ; print $0 }'`","
            ;;
          12) #e.g.: 1:41pm up 105 days, 21:46, 2 users, load average: 0.28, 0.28, 0.27
            echo -n `uptime | awk '{ print $3,$4,$5 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $6,$7 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $10,$11,$12 }' | awk '{gsub ( ",","" ) ; print $0 }'`","
            ;;
          13) #e.g.: 12:55pm up 105 days, 21 hrs, 2 users, load average: 0.26, 0.26, 0.26
            echo -n `uptime | awk '{ print $3,$4,$5,$6 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $7,$8 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $11,$12,$13 }' | awk '{gsub ( ",","" ) ; print $0 }'`","
            ;;
        esac
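
    For illustration, one possible consolidation that avoids the case statement by splitting the line on commas counted from the right; this is only a sketch and assumes the output formats shown above:

        uptime | awk -F', *' '{
            load = $(NF-2) " " $(NF-1) " " $NF        # last three fields are the load averages
            sub(/.*load averages?: */, "", load)       # strip the "load average(s):" label
            users = $(NF-3)                            # the field just before them, e.g. "1 user"
            up = $0
            sub(/^.* up +/, "", up)                    # drop everything through " up "
            sub(/, *[0-9]+ users?.*/, "", up)          # drop the user count and load portion
            printf "%s,%s,%s,", up, users, load
        }'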

    Read the article

  • CUPS causes printer to click and doesn't print

    - by Pez Cuckow
    I'm having a strange problem with my Canon iP4850 when trying to use CUPS on a Raspberry Pi (this is not RPi-specific, please do not vote to move it). When I plug the printer into my laptop (OS X) or my Windows 7 desktop it identifies as an iP4800 and prints perfectly. So I plug it into the Pi (running Debian), set it up in CUPS, enable sharing, and can now see the iP4800 series shared on the network. However, if I print to it (using AirPrint, etc.), the file reaches CUPS safely (it shows in the queue), but when it tries to print, the printer clicks (like a loud thunk) 3-4 times and then gives up, with a double amber flashing light. In CUPS the job shows as completed.

    Do you know why using the Pi and CUPS would cause what appears to be a hardware fault, and what I can do to fix the problem or to provide further debug info? Thanks for your time!

    Description: Canon iP4800 series
    Location: Lounge
    Driver: Canon PIXMA iP4800 - CUPS+Gutenprint v5.2.9 (color, 2-sided printing)
    Connection: usb://Canon/iP4800%20series?serial=2239B2

    Note: I've tried deleting and re-adding the printer to the laptop, desktop and Pi, and the results are always the same.

    Log from plugging in the printer and (attempting to) print something until the printer turned off again:

        pi@pezpi /var/log $ dmesg
        [ 7284.176336] usb 1-1.2: new high speed USB device number 8 using dwc_otg
        [ 7284.279703] usb 1-1.2: New USB device found, idVendor=04a9, idProduct=10d5
        [ 7284.279750] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        [ 7284.279771] usb 1-1.2: Product: iP4800 series
        [ 7284.279786] usb 1-1.2: Manufacturer: Canon
        [ 7284.279800] usb 1-1.2: SerialNumber: 2239B2

    Setting CUPS to verbose: change LogLevel in cupsd.conf to debug (or debug2)

        pi@pezpi /var/log $ sudo vim /etc/cups/cupsd.conf
        pi@pezpi /var/log $ sudo /etc/init.d/cups restart
        [ ok ] Restarting Common Unix Printing System: cupsd.
        pi@pezpi /var/log $

    The log from /var/log/cups/error_log is at http://pastebin.com/7VZMRMrG (too large to post here). The log contains, in order (I deleted the log and then did the following):

        Restarting the cups server
        Attempting to print a test page x2
        Printing from 192.168.1.90 via AirPrint
        Printing from 192.168.1.90 via Network Print
        Turning the printer off and on again

    Read the article

< Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >