Search Results

Search found 34131 results on 1366 pages for 'project documentation'.

Page 43/1366 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • SQLAuthority News – Scaling Up Your Data Warehouse with SQL Server 2008 R2

    - by pinaldave
    Data warehouses are supposed to contain huge amounts of data from the very beginning. However, there are cases when big is simply not big enough. Every data warehouse admin will agree that they have faced situations where they needed to scale up their data warehouse. Microsoft has released a white paper discussing exactly this. Here is the abstract from the Microsoft official site:

    SQL Server 2008 introduced many new functional and performance improvements for data warehousing, and SQL Server 2008 R2 includes all these and more. This paper discusses how to use SQL Server 2008 R2 to get great performance as your data warehouse scales up. We present lessons learned during extensive internal data warehouse testing on a 64-core HP Integrity Superdome during the development of the SQL Server 2008 release, and via production experience with large-scale SQL Server customers. Our testing indicates that many customers can expect their performance to nearly double on the same hardware they are currently using, merely by upgrading to SQL Server 2008 R2 from SQL Server 2005 or earlier, and compressing their fact tables. We cover techniques to improve manageability and performance at high-scale, encompassing data loading (extract, transform, load), query processing, partitioning, index maintenance, indexed view (aggregate) management, and backup and restore.

    Scaling Up Your Data Warehouse with SQL Server 2008 R2

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
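
    For context, the fact-table compression the abstract credits with near-doubled performance is a single-statement change in SQL Server 2008 and later. Here is a minimal C# sketch of applying it; the table name and connection string are hypothetical, and rebuilding a large fact table is a long, IO-heavy operation that should be scheduled accordingly:

    using System.Data.SqlClient;

    class CompressFactTable
    {
        static void Main()
        {
            // PAGE compression usually gives the best ratio on large fact tables.
            const string sql =
                "ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);";

            using (var conn = new SqlConnection("Data Source=.;Initial Catalog=MyDW;Integrated Security=true"))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.CommandTimeout = 0; // rebuilds can run for a long time
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }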

    Read the article

  • SQLAuthority News – Whitepaper Download – Using Star Join and Few-Outer-Row Optimizations to Improve Data Warehousing Queries

    - by pinaldave
    Database sizes are growing every day. Many organizations nowadays have more than a terabyte of data in their systems, and performance is always part of the discussion. Microsoft is paying close attention to this and is focusing on improving performance for data warehousing. Microsoft has recently released a whitepaper on this data warehousing performance tuning subject. Here is the abstract from the official site:

    In this white paper we discuss two of the new features introduced in SQL Server 2008, Star Join and Few-Outer-Row optimizations. These two features are in SQL Server 2008 R2 as well. We test the performance of SQL Server 2008 on a set of complex data warehouse queries designed to highlight the effect of these two features and observed a significant performance gain over SQL Server 2005 (without these two features). The results observed also apply to SQL Server 2008 R2. On average, about 75 percent of the query execution time has been reduced, compared to SQL Server 2005. We also include data that shows a reduction in the number of rows processed and improved balance in parallel queries, both of which highlight the important role the Star Join and Few-Outer-Row features played.

    I encourage all of those interested in data warehousing to read it and see if they can learn the tricks.

    Using Star Join and Few-Outer-Row Optimizations to Improve Data Warehousing Queries

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology
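
    The Star Join optimization targets the classic star-schema query shape: a large fact table joined to small dimension tables, filtered on dimension attributes and aggregated. The optimizer applies it automatically, so no query changes are needed. A sketch of the shape it recognizes, run from C# against an entirely hypothetical schema:

    using System;
    using System.Data.SqlClient;

    class StarJoinQuery
    {
        static void Main()
        {
            // Fact table joined to two dimensions; the optimizer can push
            // bitmap filters from the dimension predicates into the fact scan.
            const string sql = @"
                SELECT d.CalendarYear, p.Category, SUM(f.SalesAmount) AS Sales
                FROM dbo.FactSales AS f
                JOIN dbo.DimDate AS d ON f.DateKey = d.DateKey
                JOIN dbo.DimProduct AS p ON f.ProductKey = p.ProductKey
                WHERE d.CalendarYear = 2007 AND p.Category = 'Bikes'
                GROUP BY d.CalendarYear, p.Category;";

            using (var conn = new SqlConnection("Data Source=.;Initial Catalog=MyDW;Integrated Security=true"))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        Console.WriteLine("{0} {1}: {2}", reader[0], reader[1], reader[2]);
            }
        }
    }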

    Read the article

  • Tender vs. Requirements vs. Solution Design

    - by Tom Tom
    Conventionally, which of the above documents is deemed to hold the most weight when it comes to system acceptance? I recently had a conversation along these lines: It was argued that the initial requirements / tender documentation should be used to determine system acceptance. It was said that the solution design only serves to describe the way in which the system will solve the problem, not the problem it will solve. Furthermore, it was argued that if requirements are missed during solution design, the requirements should be referenced during system acceptance and that if any requirements were missed then the original tender should be referenced. Conversely, I suggested that - while requirements may be based on the original tender - they supersede it once agreed with the stakeholders. Furthermore, during solution design, analysis is performed to address and refine these initial requirements, translating them into a system capable of meeting the actual requirements. Once signed off by the relevant users, this solution design should absolutely represent the requirements (by virtue of the fact that it's designed upon them) but actually supersedes them as the basis for system acceptance. Is one of the above arguments more valid than the other?

    Read the article

  • TFS API-Process Template currently applied to the Team Project

    - by Tarun Arora
    Download Demo Solution - here

    In this blog post I'll show you how to use the TFS API to get the name of the Process Template that is currently applied to the Team Project. You can also download the demo solution attached; I've tested this solution against TFS 2010 and TFS 2011.

    1. Connecting to TFS programmatically

    I have a blog post that shows you where to download the VS 2010 SP1 SDK and how to connect to TFS programmatically.

    private TfsTeamProjectCollection _tfs;
    private string _selectedTeamProject;

    // Let the user pick a single team project; the picker hands back both
    // the collection and the selected project name.
    TeamProjectPicker tfsPP = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);
    tfsPP.ShowDialog();
    this._tfs = tfsPP.SelectedTeamProjectCollection;
    this._selectedTeamProject = tfsPP.SelectedProjects[0].Name;

    2. Programmatically get the Process Template details of the selected Team Project

    I'll be making use of the VersionControlServer service to get the Team Project details and the ICommonStructureService to get the project properties.

    private ProjectProperty[] GetProcessTemplateDetailsForTheSelectedProject()
    {
        var vcs = _tfs.GetService<VersionControlServer>();
        var ics = _tfs.GetService<ICommonStructureService>();

        var p = vcs.GetTeamProject(_selectedTeamProject);

        string projectName = string.Empty;
        string projectState = string.Empty;
        int templateId = 0;
        ProjectProperty[] projectProperties = null;

        // The project properties, including the process template details,
        // come back as out parameters.
        ics.GetProjectProperties(p.ArtifactUri.AbsoluteUri, out projectName,
            out projectState, out templateId, out projectProperties);

        return projectProperties;
    }

    3. What's the catch?

    The returned ProjectProperties array will contain a property "Process Template" whose value is the name of the process template, so you will be able to use the line of code below to get the name.

    var processTemplateName = processTemplateDetails
        .Where(pt => pt.Name == "Process Template")
        .Select(pt => pt.Value)
        .FirstOrDefault();

    However, if the project does not contain the property "Process Template" then you will need to add it. So the question becomes: how do I add this property to the Process Template?

    - Download the Process Template to your local machine from the Process Template Manager.
    - Once you have downloaded the Process Template, navigate to the Classification folder within the template.
    - From the Classification folder, open Classification.xml.
    - Add a new property: <property name="Process Template" value="MSF for CMMI Process Improvement v5.0" />
    4. Putting it all together…

    using System;
    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Data;
    using System.Drawing;
    using System.Linq;
    using System.Text;
    using System.Windows.Forms;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.VersionControl.Client;
    using Microsoft.TeamFoundation.Server;
    using System.Diagnostics;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    namespace TfsAPIDemoProcessTemplate
    {
        public partial class Form1 : Form
        {
            public Form1()
            {
                InitializeComponent();
            }

            private TfsTeamProjectCollection _tfs;
            private string _selectedTeamProject;

            private void btnConnect_Click(object sender, EventArgs e)
            {
                // Let the user pick a single team project.
                TeamProjectPicker tfsPP = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);
                tfsPP.ShowDialog();
                this._tfs = tfsPP.SelectedTeamProjectCollection;
                this._selectedTeamProject = tfsPP.SelectedProjects[0].Name;

                var processTemplateDetails = GetProcessTemplateDetailsForTheSelectedProject();

                listBox1.Items.Clear();
                listBox1.Items.Add(String.Format("Team Project Selected => '{0}'", _selectedTeamProject));
                listBox1.Items.Add(Environment.NewLine);

                // The "Process Template" property, if present, holds the template name.
                var processTemplateName = processTemplateDetails
                    .Where(pt => pt.Name == "Process Template")
                    .Select(pt => pt.Value)
                    .FirstOrDefault();

                if (!string.IsNullOrEmpty(processTemplateName))
                {
                    listBox1.Items.Add(Environment.NewLine);
                    listBox1.Items.Add(String.Format("Process Template Name: {0}", processTemplateName));
                }
                else
                {
                    listBox1.Items.Add("The Process Template does not have the 'Name' property set up");
                    listBox1.Items.Add("***TIP: Download the Process Template and in Classification.xml add a new property Name, update the template and then you will be able to see the Process Template Name***");
                    listBox1.Items.Add(" - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -");
                }
            }

            private ProjectProperty[] GetProcessTemplateDetailsForTheSelectedProject()
            {
                var vcs = _tfs.GetService<VersionControlServer>();
                var ics = _tfs.GetService<ICommonStructureService>();

                var p = vcs.GetTeamProject(_selectedTeamProject);

                string projectName = string.Empty;
                string projectState = string.Empty;
                int templateId = 0;
                ProjectProperty[] projectProperties = null;

                // The project properties, including the process template details,
                // come back as out parameters.
                ics.GetProjectProperties(p.ArtifactUri.AbsoluteUri, out projectName,
                    out projectState, out templateId, out projectProperties);

                return projectProperties;
            }
        }
    }

    Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Have you come across a better way of doing this? Please share your experience here. Questions, feedback, suggestions? Please leave a comment. Thank you!

    Read the article

  • Where to place unit test project

    - by Karsten
    I'm thinking about where to put the unit/integration test project. I follow the one-test-project-per-project convention. I can think of three ways, all of which seem good to me, which makes it kind of hard to choose :-)

    1. The test project is put under a Tests sub folder of the project it tests.
    2. The test project is put next to the project it tests, in a "project".Tests folder. I believe this is what Roy Osherove recommends.
    3. All test projects are put in a sub folder in the root, e.g. \Tests\"project".Tests.

    Something else? What do you choose, and why?

    Read the article

  • TFS and team project portal question

    - by DotnetDude
    The team explorer for my project looks like this:

    mytfsserver\mycollection
    -- Project 1 solution
    -- Project 2 solution
    -- Project 3 solution

    When I right click on one of the solutions and do a "Show Project Portal", I see the following hierarchy:

    mycollection - WSS site
      Project 1 site with dashboard (appears to be a MOSS site)
      Project 2 site with dashboard (appears to be a MOSS site)
      Project 3 site with dashboard (appears to be a MOSS site)

    Are the dashboard sites MOSS sites? If I want to create a wiki, do I have to create a subsite with the wiki template under each of the project sites? Can someone point me to a document/video that talks about the sites that are created by TFS by default?

    Read the article

  • How much detail should be in a project plan or spec?

    - by DeanMc
    I have an issue that I feel many programmers can relate to... I have worked on many small scale projects. After my initial paper brainstorm I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces come last, as the library usually dictates what is needed in the UI. As my projects get bigger, I worry that my "spec" or design document should grow with them. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code. When I need to research a topic I dump information into OneNote, and from there I prioritise features into versions and then into related chunks so that development runs in a fairly linear fashion. My tasks tend to look like so:

    Implement Binary File Reader
    Implement Binary File Writer
    Create Object to encapsulate Data for expression to the caller

    Now any programmer worth his salt is aware that between those three to-do items could be a potential wall of code that could expand out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively. By the time one mangles pseudo code it is essentially code anyway, so the time investment is negated. So my question is this: am I right in assuming that the best documentation is the code itself? We are all in agreement that a high level overview is needed. How high should this be? Do you design to statement, class or concept level? What works for you?

    Read the article

  • SQL SERVER – Developer Training Kit for SQL Server 2012

    - by pinaldave
    The Developer Training Kit is my favorite part of any product. The reason is very simple: it is the single resource that gives a complete overview of the product in a nutshell. A developer can learn from many places - books, webcasts, tutorials, blogs, etc. However, I have found that developer training kits are the best starting point for any product. Start with them first and see what the new features are, as well as what new message a product is coming out with. Once that is learned, the very next step should be to identify the right learning material to explore the preferred topic.

    The SQL Server 2012 Developer Training Kit includes technical content, including labs, demos and presentations, designed to help you learn how to develop SQL Server 2012 database and BI solutions. New and updated content will be released periodically and can be downloaded on-demand using the Web Installer. Download the SQL Server 2012 Developer Training Kit Web Installer.

    This training kit was available earlier this year, but it is never too late to explore it if you have not referred to it before. Additionally, if you do not want to download the complete kit all together, I suggest you refer to the wiki here. This wiki contains all the same presentations and demo notes that the web installer contains. Refer to the SQL Server 2012 Developer Training Kit Wiki.

    The wiki contains the following modules and details about the Hands-On Labs:

    Module 1: Introduction to SQL Server 2012
    Module 2: Introduction to SQL Server 2012 AlwaysOn
    Module 3: Exploring and Managing SQL Server 2012 Database Engine Improvements
    Module 4: SQL Server 2012 Database Server Programmability
    Module 5: SQL Server 2012 Application Development
    Module 6: SQL Server 2012 Enterprise Information Management
    Module 7: SQL Server 2012 Business Intelligence
    Hands-On Labs: SQL Server 2012 Database Engine
    Hands-On Labs: Visual Studio 2010 and .NET 4.0
    Hands-On Labs: SQL Server 2012 Enterprise Information Management
    Hands-On Labs: SQL Server 2012 Business Intelligence
    Hands-On Labs: Windows Azure and SQL Azure

    As I said, if you have not downloaded this so far, it is never too late to explore it. Trust me, you will learn at least one thing if you just explore the content.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table. Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult. What these users really need is a point and click approach that minimizes the learning curve for the data integration tool. This is what CSVexpress (www.CSVexpress.com) is all about! It is based on expressor Studio, a data integration tool I've been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators. There's no need to learn the intricacies of the data integration tool or to write code. Let's look at an example.

    Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

    EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
    102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
    101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
    103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
    304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
    333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
    100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
    334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
    400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping. In moving this data to a database table I want to express the dates using a format that includes the century, since it's obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character. Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio. Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored. For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information. With expressor Studio, I use wizards to create these artifacts.

    First I click New Connection > File Connection in the Home tab of expressor Studio's ribbon bar, which starts the File Connection wizard. In the first window, I enter the path to the directory that contains the input file. Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table. To use expressor Studio's features to the fullest, this account should also have the authority to create a table.

    I click New Connection > Database Connection in the Home tab of expressor Studio's ribbon bar, which starts the Database Connection wizard. expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the "Supplied database drivers" drop down control. If my desired RDBMS isn't listed, I can optionally use an existing ODBC DSN by selecting the "Existing DSN" radio button. In the following window, I enter the connection details. With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials. After clicking Next, I enter a meaningful name for this connection artifact, and clicking Finish closes the wizard and saves the artifact.

    Now I create a schema artifact, which describes the structure of the file data. When expressor reads a file, all data fields are typed as strings. In some use cases this may be exactly what is needed and there is no need to edit the schema artifact. But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and department IDs to integers, and convert the salary to a decimal value.

    Again a wizard is used to create the schema artifact. I click New Schema > Delimited Schema in the Home tab of expressor Studio's ribbon bar, which starts the Delimited Schema wizard. In the first window, I click Get Data from File, which then displays a listing of the file connections in the project. When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file's content and confirm that the appropriate delimiter characters are selected in the "Field Delimiter" and "Record Delimiter" drop down controls; then I click Next.

    Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking "Set All Names from Selected Row". Alternatively, I could enter a different identifier into the Field Details > Name text box. I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

    Now I open the schema artifact in the schema editor. When I first view the schema's content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file. To change an attribute's name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Attribute window; I can change the attribute name and select the desired type from the "Data type" drop down control. In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
    Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type; for the StartingDate and TerminationDate attributes, I select Datetime as the data type; and for the FinalSalary attribute, I select the Decimal type.

    But I can do much more in the schema editor. For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999. I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date).

    As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file. Note the use of Y01 as the syntax for the year. This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century. As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

    And now to the FinalSalary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the "Use currency" check box. This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the "Use grouping" checkbox and enter 3 into the "Group size" text box, a comma into the "Grouping character" text box, and a decimal point into the "Decimal separator" text box. These entries allow the string to be properly converted into a decimal value.

    By making these entries into the schema that describes my input file, I've specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself.

    Assembling the data integration application is simple. Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator.

    Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio. For each property, I can select an appropriate entry from the corresponding drop down control. Clicking on the button to the right of the "File name" text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file. I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped.

    I then select the Write Table operator and in its Properties panel specify the database connection, Normal for the "Mode", and the "Truncate" and "Create Missing Table" options.
    If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.

    The last task needed to complete the application is to create the schema artifact used by the Write Table operator. This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator. In the Write Table Properties panel, I click the drop down control to the right of the "Schema" property and select "New Table Schema from Upstream Output…" from the drop down menu.

    The wizard first displays the table description, and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist. The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection. The fourth screen gives me the opportunity to fine tune the table's description. In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column. I also provide the name for the table.

    This completes development of the application. The entire application was created through the use of wizards, and the required data transformations were specified through simple constraints and specifications rather than through coding. To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials. expressor Studio is as close to a point and click data integration tool as one could want, and I urge you to try this product if you have a need to move data between files or from files to database tables.

    Check out CSVexpress in more detail. It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SQLAuthority News – SQL Server 2012 Upgrade Technical Guide – A Comprehensive Whitepaper – (454 pages – 9 MB)

    - by pinaldave
    Microsoft has just released the SQL Server 2012 Upgrade Technical Guide. This guide is very comprehensive and covers the subject of upgrade in depth. It is indeed a helpful, detailed white paper; even writing a summary of it would take over 100 pages. This further proves that SQL Server 2012 is quite an important release from Microsoft.

    This white paper discusses how to upgrade from SQL Server 2008/R2 to SQL Server 2012. I love how it starts with the most interesting and basic discussion of upgrade strategies: 1) in-place upgrades, 2) side-by-side upgrades, 3) one-server, and 4) two-server. This whitepaper is not just pure theory but is also an excellent source of tips and tricks. Here is an example of a good tip from the paper: "If you want to upgrade just one database from a legacy instance of SQL Server and not upgrade the other databases on the server, use the side-by-side upgrade method instead of the in-place method."

    There are so many trivia, tips, and tricks that creating a complete list seems humanly impossible in a short period of time. My friend Vinod Kumar, an SQL Server expert, wrote a very interesting article on SQL Server 2012 upgrades before. In that article, Vinod addressed the most interesting and practical questions related to upgrades. He started with the fundamentals of how to take a backup before upgrading and ended with fail-safe strategies after the upgrade is over. He covered end-to-end concepts in his blog posts in simple words and extremely precise statements. A successful upgrade uses a cycle of: planning, documenting the process, testing, refining the process, testing again, planning the upgrade window, execution, verifying the upgrade, and opening for business. If you are at Vinod's blog post, I suggest you go all the way down and collect the gold mine of most important links. I have bookmarked the blog by blogging about it, and I suggest that you bookmark it as well in whatever way you prefer.

    Vinod Kumar's blog post on SQL Server 2012 Upgrade Technical Guide

    The SQL Server 2012 Upgrade Technical Guide is a detailed resource that's also available online for free. Each chapter was carefully crafted and explained in detail. Before downloading the guide, be aware of its size: 9 MB and 454 pages. Here's the list of chapters:

    Chapter 1: Upgrade Planning and Deployment
    Chapter 2: Management Tools
    Chapter 3: Relational Databases
    Chapter 4: High Availability
    Chapter 5: Database Security
    Chapter 6: Full-Text Search
    Chapter 7: Service Broker
    Chapter 8: SQL Server Express
    Chapter 9: SQL Server Data Tools
    Chapter 10: Transact-SQL Queries
    Chapter 11: Spatial Data
    Chapter 12: XML and XQuery
    Chapter 13: CLR
    Chapter 14: SQL Server Management Objects
    Chapter 15: Business Intelligence Tools
    Chapter 16: Analysis Services
    Chapter 17: Integration Services
    Chapter 18: Reporting Services
    Chapter 19: Data Mining
    Chapter 20: Other Microsoft Applications and Platforms
    Appendix 1: Version and Edition Upgrade Paths
    Appendix 2: SQL Server 2012: Upgrade Planning Checklist

    Download SQL Server 2012 Upgrade Technical Guide [454 pages and 9 MB]

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Database, DBA, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Feasibility to take over a JavaMe Project by Coders who have no experience in JavaMe

    - by Stephenmjm
    The original JavaMe team will leave to work on other items, so the JavaMe project will be taken over by some guys who know nothing about JavaMe.

    Transition period: one month

    About this JavaMe project:

    - about 3.5 million lines of code (more than 180 Java files; SourceCode is 8.5KB in total)
    - uses Polish and Proguard
    - documentation: the JavaMe project itself has no documentation, and no UML diagrams

    Difficulties I foresee:

    - getting familiar with JavaMe; this should be okay
    - in order to do further development, we need to read the source code, and it's not easy to read 3.5 million lines of code that don't have enough comments
    - adaptation work for more than 100 phones

    These are my questions, thank you!

    1. Given that our guys have no experience in JavaMe, is one month too hasty?
    2. In order to take over the job in time, what should we ask the original JavaMe team to do, considering we have no experience in JavaMe?
    3. What complications will we face taking on the adaptation work without the original JavaMe team?
    4. Any other suggestions?

    Read the article

  • Android development in Unreal with an existing project

    - by user1238929
    I am currently working on an Unreal 3 project that has been targeted at multiple devices. Originally it was targeted at iOS, and now I want to try to build it for Android. The project is capable of it, and I am in the process of testing that. I think I have everything I need to build and launch it on an Android device, which is set up, connected to my PC, and recognized by the Android SDK's ADB. I am currently trying to build and launch the game through the Unreal Frontend, but when I try, I get stuck getting the Unreal Frontend to find my Android device as a platform to debug on, like it would with a PC, Xbox 360, or PS3. Right now, I am just trying to launch the game to see if I can get it to simply run on an Android device; I'll worry about the packaging later. So I have two questions:

    1. Am I on the right track in looking at the Unreal Frontend to cook and launch the project on Android, or should I look somewhere else?
    2. How do I get Unreal to recognize my Android device as a platform to launch on? I would even settle for it recognizing an emulator, but that seems even harder.

    Read the article

  • Formalizing a requirements spec written in narrative English

    - by ProfK
    I have a fairly technical functionality requirements spec, expressed in English prose, produced by my project manager. It is structured as a collection of UI tabs, where the requirements for each tab are expressed as a list of UI fields and a list of business rules for the tab. Most business rules are for UI fields on a tab, e.g.:

    a) Must be alphanumeric, max length 20.
    b) Must be a dropdown, with values from table x.
    c) Is mandatory.
    d) Is mandatory under certain conditions, e.g. another field is populated, or has a specific value.

    Then other business rules get a little more complex. The spec is for a job application, so the central business object (table) is the Applicant, and we have several other tables with one-to-many relationships with Applicant, such as Degree, HighSchool, PreviousEmployer, Diploma, etc.

    e) One such complex rule says a status field can only be assigned a certain value if a many-side record exists in at least one of the many-side tables, e.g. the Applicant has at least one HighSchool or at least one Diploma record.

    I am looking for advice on how to codify these requirements into a more structured specification defined in terms of tables, fields, and relationships, especially for the conditional rules for fields and for the presence of related records. Any suggestions and advice will be most welcome, but I would be overjoyed if I could find an already defined system or structure for expressing things like this.
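
    To make the discussion concrete, here is one possible encoding of rules a) through d) as data rather than prose, plus the relational rule e) as an existence check. This is only a sketch, not an established standard, and every name in it is hypothetical:

    // Field-level rules a) through d) for one column of one table.
    public class FieldRule
    {
        public string Table { get; set; }            // e.g. "Applicant"
        public string Field { get; set; }            // e.g. "Surname"
        public bool Alphanumeric { get; set; }       // rule a)
        public int? MaxLength { get; set; }          // rule a)
        public string LookupTable { get; set; }      // rule b): dropdown source
        public bool Mandatory { get; set; }          // rule c)
        public string MandatoryIfField { get; set; } // rule d): only if this field...
        public string MandatoryIfValue { get; set; } // ...is populated / has this value
    }

    // Rule e): a status value is allowed only if at least one related record
    // exists in any of the listed child tables.
    public class StatusExistenceRule
    {
        public string StatusValue { get; set; }          // e.g. "ReadyForReview"
        public string[] AnyOfChildTables { get; set; }   // e.g. { "HighSchool", "Diploma" }
    }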

    Read the article

  • Project Corndog: Viva el caliente perro!

    - by Matt Christian
    During one of my last semesters in college we were required to take a class called Computer Graphics, which tried (quite unsuccessfully) to teach us a combination of mathematics, OpenGL, and 3D rendering techniques. The class itself was horrible, but one little gem of an idea came out of it. See, the final project in the class was to team up and create some kind of demo or game using techniques we learned in class. My friend Paul and I teamed up and developed a top down shooter that, given the stringent timeline, was much less a game and much more 3D objects floating around a screen.

    The idea itself, however, I found clever and unique, and decided it was time to spend some time developing a proper version of it. Project Corndog, as it is tentatively named, pits you as a freshly fried corndog who broke free from the shackles of fair food slavery in a quest to escape the state fair you were born in. Obviously it's quite a serious game, with undertones of racial prejudice, immoral practices, and cheap food sold at high prices.

    The game itself is a top down shooter in the style of 1942 (NES). As a delicious corndog you will have to fight through numerous enemies, including hungry babies, carnies, and the corndog serial-killer himself, the corndog eating champion! Other more engaging and frighteningly realistic enemies await as the only thing between you and freedom.

    Project Corndog is being developed in Visual Studio 2008 with XNA Game Studio 3.1. It is currently being hosted on Google Code and will be made available as an open source engine in the coming months.

    Read the article

  • agile as our first project management methodology [closed]

    - by Hasan Khan
    We are a small web development company that has until now been working on client projects. We employed little to no project management, and that has cost us a lot. We've used only the barest of tools (wireframing, prototyping, etc.), but no formal project management process has been put into place. We've learnt from our mistakes and want to prevent them from happening in the future. Also, we are looking to develop our own products, and we understand that putting a proper project management paradigm in place will help. After a lot of research, we've more or less settled on agile, for a few reasons:

    1. Agile seems to scale well with team size. Our team is small right now, we hope to grow, and agile seems to be a process that we can put in place now and grow with.
    2. Agile will help us with customers who just can't seem to make up their minds and keep changing requirements.

    We'd appreciate the community's thoughts on this. Is this the correct way to think? Will agile be a good system to put into place where there has been none till now? Are there any resources that may help us in our position? Pretty much all of the resources that we've found start by comparing agile to x (where x = any management methodology), why it's better than x, and how agile can be implemented in place of x. We're looking for resources that can help us out in our particular situation. Thanks for all your help!

    Read the article

  • Big project layout : adding new feature on multiple sub-projects

    - by Shiplu
    I want to know how to manage a big project with many components with a version control system. My current project has four major parts: Web, Server, Admin console, and Platform. The Web and Server parts use two libraries that I wrote. In total there are five git repositories and one mercurial repository. The project build script is in the Platform repository; it automates the whole building process.

    The problem is that when I add a new feature that affects multiple components, I have to:

    1. Create a branch in each of the affected repos.
    2. Implement the feature.
    3. Merge it back.

    My gut feeling is "something is wrong". So should I create a single repo and put all the components there? I think branching would be easier in that case. Or should I just keep doing what I am doing right now? In that case, how do I solve this problem of creating a branch in each repository?

    Read the article

  • Versioning and Continuous Integration with project settings files

    - by Michael Stephenson
    I came across something which was a bit of a pain in the bottom the other week. Our scenario was that we had implemented a helper-style assembly which had some custom configuration implemented through the project settings. I'm sure most of you are familiar with this: you end up with a settings file which is viewable through the C# project file, and you can configure some basic settings. The settings are embedded in the assembly during compilation as part of a DefaultValue attribute. You have the ability to override the settings by adding information to your app.config, and if the app.config doesn't override the settings then the embedded default is used. All normal C# stuff so far…

    Where our pain started was when we implemented Continuous Integration and wanted to version all of this from our build. What I was finding was that the assembly was versioned fine, but the embedded default value kept the non-CI build version number. I ended up getting this to work by using a build task to change the version numbers in the following files:

    App.config
    Settings.settings
    Settings.Designer.cs

    I think I probably could have got away with just the Settings.Designer.cs, but I wanted to keep them all consistent in case we had to look at the code on the build server for some reason. I think the reason this was painful is that Settings.Designer.cs is only updated through Visual Studio: Visual Studio writes out the code to this file, including the DefaultValue attribute, when the project is saved rather than as part of the compilation process. The compile just compiles the already existing C# file. As I said, we got it working, but it was a bit of a pain. If anyone has a better solution for this I'd love to hear it.
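
    For anyone facing the same problem, here is a rough sketch of the kind of version-stamping build step described above, written as a small C# utility a CI build could run before compiling. The file list matches the three files named in the post; the regex assumes the version appears as a plain four-part number, which is an assumption that may need adjusting for your files:

    using System.IO;
    using System.Text.RegularExpressions;

    class StampSettingsVersion
    {
        static void Main(string[] args)
        {
            string ciVersion = args[0]; // e.g. "1.4.0.123" from the CI build number

            string[] files = { "App.config", "Settings.settings", "Settings.Designer.cs" };
            foreach (string file in files)
            {
                string text = File.ReadAllText(file);
                // Replace every four-part version number with the CI version.
                text = Regex.Replace(text, @"\d+\.\d+\.\d+\.\d+", ciVersion);
                File.WriteAllText(file, text);
            }
        }
    }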

    Read the article

  • What is a generic term for name/identifier? (as opposed to label)

    - by d3vid
    I need to refer to a number of things that have both an identifier value (used in code and configuration) and a human-readable label. These things include:

    - database columns
    - dropdown items
    - subapplications
    - objects stored in a dictionary

    I want two unambiguous terms: one to refer to the identifier/value/key, and one to refer to the label. As you can see, I'm pretty settled on the latter :) For the former, identifier seems best (not everything is strictly a key, and value and name could refer to the label; although identifier usually refers only to a variable name), but I would prefer to follow an established practice if there is one. Is there an established term for this? (Please provide a source.) If not, are there any examples of a choice from a significant source (Java APIs, MSDN, a big FLOSS project)? (I wasn't sure if this should be posted here or to English Language & Usage. I thought this was the more appropriate expert audience. Happy to migrate if not.)

    Read the article

  • SQL SERVER – Free eBook Download – EPUB, MOBI, PDF Format

    - by pinaldave
    Microsoft has recently released free eBooks on various Microsoft technologies. The best part is that all these books are available in ePub, Mobi and PDF. You can download them to your local machine or eBook reader and read them. This is a great start, as many important subjects are now covered and converted into eBooks. I personally read through a few of the books and found they are very comprehensive and detailed. The goal is not to cover a complete technology in a single book, but rather to pick a single topic and discuss it in detail. The sources of the books are white papers, the TechNet wiki, as well as Books Online, and the source is clearly listed right below each book title.

    The following books are available for SQL Server technology, and I encourage all of you to have a look at them as they are great resources:

    Master Data Services Capacity Guidelines
    Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery
    Microsoft SQL Server Analysis Services Multidimensional Performance and Operations Guide
    QuickStart: Learn DAX Basics in 30 Minutes
    SQL Server 2012 Transact-SQL DML Reference

    You can download the above eBooks from here. This is indeed a great attempt, as each book talks about a single subject in depth, keeping the author's focus on a single, simple subject. I have previously written two books focusing on a single subject each, and I had great pleasure writing them. Writing on a focused subject gives the author complete freedom to explore that single subject without the burden of covering everything associated with the technology at large. Just like the eBooks mentioned earlier, my SQL Server Wait Stats book was inspired by my article series on SQL wait stats. The latest book, SQL Server Interview Questions and Answers, was derived from my article series on SQL interview Q&A. Writing a book is an absolutely different concept than writing blog posts. When I was converting my blog posts into books, I ended up writing 50% new material and removing much of the repetitive content that shows up in a blog series. It was indeed fun, and at the same time a great learning experience as an individual.

    Reference: TechNet Wiki, Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Donkey Kong Wall Shelves [DIY Project Inspiration]

    - by Asian Angel
    Are you looking for inspiration for a geeky DIY project to get into over the holiday weekend? Then take a look at this fantastic-looking set of Donkey Kong wall shelves created by artist Igor Chak!

    From the website: So here is a Donkey Kong wall, strong, good looking but still has its character. The wall is made out of individual sections; each section is made out of durable but light carbon fiber, anodized aluminum pixels that are joined with strong stainless steel rods and toughened glass tops. The special mounts themselves are made out of steel and can support up to 60 lbs.

    Igor's notes and additional images for the project can be found approximately half way down the webpage linked below. If Donkey Kong is not your favorite game, this could still inspire a shelving project focused on the one you like best!

    Donkey Kong Wall [via Neatorama]

    Read the article

  • Live programming help

    - by frazras
    This idea has been floating around my head for a few years. I started some work on it, but I just want to know if it is feasible or sensible, or if there is something else like it out there; I don't want to learn I was wasting time on a solved issue. Whenever I have a programming issue, this is my sequence:

    1. Google it! That usually brings up a lot of things: blogs, forums, stackoverflow, stackexchange, and even the official docs of the language/framework/cms.
    2. Ask on IRC: I format my question and try to get people on IRC to help me.
    3. Make a post: I create a post on forums/stackoverflow/stackexchange or shout on twitter with hashtags.

    Now, a lot of the time I am in the middle of a project with a deadline. So I want answers NOW!!! Sometimes all I need is 5-15 minutes of attention. Usually by the time I am failing to get answers at #2, I am imagining how many people are ONLINE NOW who have the skill and my exact answer but are playing video games, watching YouTube or idling online. However, if they were motivated, they could invest the 15 minutes helping me, and that would make a world of a difference. I am even in positions where I would PAY for that 15 minutes of instant help. If your rate is as much as $100/hour (a relatively good programmer), that is $25 that might save me 3 hours. This help would be live: text chat/skype/phone/screenshare.

    Should I continue developing this idea, or is there a better alternative out there? Or is this an unfeasible idea?

    Read the article

  • m2e-wtp proposal to move into an Eclipse project

    - by gstachni
    I am attending EclipseCon 2012 in Reston, VA this week and had a chance to attend Jason van Zyl's session on M2E. One announcement during the presentation was the plan to move the m2e-wtp plugin into the m2e project at Eclipse. The project proposal was posted yesterday afternoon for those who are interested - http://www.eclipse.org/proposals/technology.m2e.m2e-wtp/. Jason said that m2e-wtp will not make it into the m2e Juno release, but they expect the code to be transitioned over to Eclipse over the next six months. Getting the Maven and WTP project models to play nicely together has been an issue for our users for quite some time. Hopefully this is a good step toward resolving those issues.

    Read the article

  • How to structure my GUI agnostic project?

    - by Nezreli
    I have a project which loads from a database an XML file that defines a form for some user. The XML is transformed into a collection of objects whose classes derive from a single parent. Something like:

    Control - EditControl - TextBox
    Control - ContainerControl - Panel

    Those classes are responsible for the creation of GUI controls for three different environments: WinForms, DevExpress XtraReports and WebForms. All three frameworks share mostly the same control tree and have a common single parent (Windows.Forms.Control, XrControl and WebControl). So, how to do it?

    Solution a) The Control class has abstract methods:

    Control CreateWinControl();
    XrControl CreateXtraControl();
    WebControl CreateWebControl();

    This could work, but the project has to reference all three frameworks, and the classes are going to be fat with methods supporting all three implementations.

    Solution b) Each framework implementation is done in a separate project, and each has the exact same class tree as the Core project. All three implementations are connected to the Core classes using an interface.

    This seems clean, but I'm having a hard time wrapping my head around it. Does anyone have a simpler solution or a suggestion on how I should approach this task?
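
    A minimal sketch of solution b), with hypothetical names: the core project owns the control tree, each framework project implements a factory behind a shared interface, and only the factory project references its framework:

    // Core project: framework-agnostic control tree.
    public abstract class Control { }
    public class TextBox : Control { }
    public class Panel : Control { }

    // Implemented once per framework (WinForms, XtraReports, WebForms),
    // each in its own project.
    public interface IControlFactory<TNative>
    {
        TNative Create(Control model);
    }

    // WinForms implementation, living in the WinForms project only.
    public class WinFormsControlFactory : IControlFactory<System.Windows.Forms.Control>
    {
        public System.Windows.Forms.Control Create(Control model)
        {
            if (model is TextBox) return new System.Windows.Forms.TextBox();
            if (model is Panel) return new System.Windows.Forms.Panel();
            throw new System.NotSupportedException(model.GetType().Name);
        }
    }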

    Read the article

  • Maintaining a main project line with satellite projects

    - by NickLarsen
    Some projects I work on have a main line of features but are customizable per customer. Up until now those customizations have been implemented as preferences, but there are now two problems with the system...

    1. The settings page is getting out of control with features. There are probably some improvements that could be made to the settings UI but, regardless, it is quite cumbersome setting up new instances for new customers.
    2. Customers have started asking for customizations which would be more easily maintained as separate threads of development instead of having tons of customization code.

    Ideally, I am envisioning some kind of source control in which features live in the main project line and customizations are maintained in a repo set up per customer. The customizations per project would need to remain separate, but if a bug is found and fixed in a particular project, I would need to percolate the fix back to the main line and into all of the other customer repos. The problem is I have never seen this done before, and before spending time trying to find source control that can accommodate this scenario and implement it, I figure it best to ask if anyone has something less complicated or knows of a source control product which can handle this with very little hair pulling.

    Read the article
