Search Results

Search found 6441 results on 258 pages for 'schema compare'.

Page 6/258 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • SSAS Compare version 1.0 released

    - by Red Gate Software BI Tools Team
    We’re pleased to announce that SSAS Compare version 1.0 has been released as a free tool. Version 1.0 includes:

      - Comparisons of live databases and XMLA or Analysis Services Project files
      - MDX syntax diffs and highlighting
      - Server comparisons
      - Deployment wizard with summaries of scripted actions
      - Bug fixes and engine and UI refinements

    We’ve tested it on as many cube configurations as we could find (not just good old AdventureWorks!), but we can’t provide support for free tools — so if you’re reliant on SSAS Compare for your cube deployment, use it at your own risk. See the user license agreement in the installer for more details.

    SSAS Compare has come a long way from its humble beginnings as an internal tool first developed for Red Gate’s own BI developers. Today’s SSAS Compare is much more stable — not to mention much easier to use — and something the team is proud to have released with Red Gate’s name on it.

    Next: Deployment Manager

    We’re working on integrating SSAS Compare cube deployment with our new Deployment Manager tool, so you’ll be able to create cube deployment scripts and automate the deployment process, too. We’re documenting the process in a white paper we’ll publish online in the next week.

    Thank you!

    Thanks to all the SSAS Compare users out there. Without your feedback, we could never have produced such a stable product so quickly. We hope you continue to find it useful. See you in Deployment Manager!

    Read the article

  • BizTalk Schema Validation

    - by Christopher House
    Perhaps this one should be filed under: Obvious.

    Yesterday I created a new schema that is going to be used for a WCF receive. The schema has a bunch of restrictions in it, with the intention that we'd validate incoming messages against the schema. I'd never done message validation with BizTalk, but I knew the XmlDisassembler component had an option for validating, so I figured it would be a piece of cake. Sadly, that was not to be the case.

    I deployed my artifacts and configured my receive location's XmlDisassembler with what I thought to be the correct document spec name. I entered My.Project.Name.SchemaTypeName for the document spec and started running unit tests. All of them failed, with the following error logged in the event log:

        "WcfReceivePort_BizTalkWcfService/PurchaseOrderService" URI: "/BizTalkWcfService/PurchaseOrderService.svc" Reason: No Disassemble stage components can recognize the data.

    I went to the receive port and turned on tracking, submitted another message, then went to the admin console and saved the message. It looked correct, but just to be sure, I manually validated it against the schema in my project. As expected, it validated correctly.

    After a bit of thinking on this, I realized that I probably needed to fully qualify my document spec name, meaning include the assembly name as well as the type name. So, I went back to the receive location and changed the document spec to:

        My.Project.Name.SchemaTypeName, My.Project.Name, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxx

    I re-ran my unit tests and everything was working as expected. So, note to self: remember to include the assembly name when setting the document spec. If you need an easy way to determine your schema name and assembly name, find your schema in the admin console and go to its properties. On the property screen, look at the Name and Assembly properties. Your document spec will be "SchemaName, AssemblyName".
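
    If you'd rather not assemble that string by hand, .NET will print it for you via Type.AssemblyQualifiedName (a quick sketch; the schema type here is the placeholder name from above, not a real type):

        using System;

        class ShowDocSpec
        {
            static void Main()
            {
                // Prints e.g. "My.Project.Name.SchemaTypeName, My.Project.Name,
                // Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxx"
                Console.WriteLine(typeof(My.Project.Name.SchemaTypeName).AssemblyQualifiedName);
            }
        }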

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and think I understand the basic concepts of good schema design pretty well. I recently was tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct ("WTF??!?") is warranted, or is this guy such a genius that he's operating out of my realm?

    The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design this like so:

        User table:
            UserID (primary key, indexed, no dupes)
            FirstName
            LastName
            Department

        Request table:
            RequestID (primary key, indexed, no dupes)
            <...> various data fields containing request details
            UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

        UsersTable
            UserID  FirstName  LastName
            234     John       Doe
            516     Jane       Doe
            123     Foo        Bar

        DepartmentsTable
            DepartmentID  Name
            1             Sales
            2             HR
            3             IT

        UserDepartmentTable
            UserDepartmentID  UserID  Department
            1                 234     2
            2                 516     2
            3                 123     1

        RequestTable
            RequestID  UserID  <...>
            1          516     blah
            2          516     blah
            3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table, with numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the "speed of integer lookups". He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small to mid-sized SQL DB? Thanks for comments/answers...
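
    For reference, here is roughly what the "simple" version looks like in T-SQL (a sketch; names and types are illustrative, not from the actual project):

        CREATE TABLE Users (
            UserID     INT IDENTITY(1,1) PRIMARY KEY,
            FirstName  NVARCHAR(50),
            LastName   NVARCHAR(50),
            Department NVARCHAR(50)  -- the consultant's design normalizes this into
                                     -- Departments plus a UserDepartment junction table
        );

        CREATE TABLE Requests (
            RequestID INT IDENTITY(1,1) PRIMARY KEY,
            -- <...> various request detail columns
            UserID    INT NOT NULL REFERENCES Users (UserID)  -- foreign key to Users
        );

    The junction-table variant only earns its keep if a user can belong to more than one department; otherwise a Department column (or a plain foreign key to Departments) says the same thing with fewer joins.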

    Read the article

  • validating an XML schema with empty attributes

    - by AdRock
    I am having trouble validating my XML schema. I get these errors on the schema:

        113:18 s4s-elt-invalid-content.1: The content of '#AnonType_user' is invalid.
        164:17 s4s-elt-invalid-content.1: The content of '#AnonType_festival' is invalid. Element 'sequence' is invalid, misplaced, or occurs too often.

    Because of those two errors, I am getting loads of the same error. This is because the id attribute of the festival tag may be empty when there is no data for that festival:

        cvc-datatype-valid.1.2.1: '' is not a valid value for 'integer'.
        cvc-attribute.3: The value '' of attribute 'id' on element 'festival' is not valid with respect to its type, 'integer'.

    The lines in the schema causing the problems are:

        <xs:element name="user">
          <xs:complexType>
            <xs:attribute name="id" type="xs:integer"/>
            <xs:sequence>
              <xs:element ref="personal"/>
              <xs:element ref="account"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>

        <xs:element name="festival">
          <xs:complexType>
            <xs:attribute name="id" type="xs:integer" user="optional"/>
            <xs:sequence>
              <xs:element ref="event"/>
              <xs:element ref="contact"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>

    This is a snippet from my XML file. One user has a festival and the other doesn't:

        <member>
          <user id="3">
            <personal>
              <name>Skye Saunders</name>
              <sex>Female</sex>
              <address1>31 Anns Court</address1>
              <address2></address2>
              <city>Cirencester</city>
              <county>Gloucestershire</county>
              <postcode>GL7 1JG</postcode>
              <telephone>01958303514</telephone>
              <mobile>07260491667</mobile>
              <email>[email protected]</email>
            </personal>
            <account>
              <username>BigUndecided</username>
              <password>ea297847f80e046ca24a8621f4068594</password>
              <userlevel>2</userlevel>
              <signupdate>2010-03-26T09:23:50</signupdate>
            </account>
          </user>
          <festival id="">
            <event>
              <eventname></eventname>
              <url></url>
              <datefrom></datefrom>
              <dateto></dateto>
              <location></location>
              <eventpostcode></eventpostcode>
              <coords>
                <lat></lat>
                <lng></lng>
              </coords>
            </event>
            <contact>
              <conname></conname>
              <conaddress1></conaddress1>
              <conaddress2></conaddress2>
              <concity></concity>
              <concounty></concounty>
              <conpostcode></conpostcode>
              <contelephone></contelephone>
              <conmobile></conmobile>
              <fax></fax>
              <conemail></conemail>
            </contact>
          </festival>
        </member>
        <member>
          <user id="4">
            <personal>
              <name>Connor Lawson</name>
              <sex>Male</sex>
              <address1>12 Ash Way</address1>
              <address2></address2>
              <city>Swindon</city>
              <county>Wiltshire</county>
              <postcode>SN3 6GS</postcode>
              <telephone>01791928119</telephone>
              <mobile>07338695664</mobile>
              <email>[email protected]</email>
            </personal>
            <account>
              <username>iTuneStinker</username>
              <password>3a1f5fda21a07bfff20c41272bae7192</password>
              <userlevel>3</userlevel>
              <signupdate>2010-03-26T09:23:50</signupdate>
            </account>
          </user>
          <festival id="1">
            <event>
              <eventname>Oxford Folk Festival</eventname>
              <url>http://www.oxfordfolkfestival.com/</url>
              <datefrom>2010-04-07</datefrom>
              <dateto>2010-04-09</dateto>
              <location>Oxford</location>
              <eventpostcode>OX19BE</eventpostcode>
              <coords>
                <lat>51.735640</lat>
                <lng>-1.276136</lng>
              </coords>
            </event>
            <contact>
              <conname>Stuart Vincent</conname>
              <conaddress1>P.O. Box 642</conaddress1>
              <conaddress2></conaddress2>
              <concity>Oxford</concity>
              <concounty>Bedfordshire</concounty>
              <conpostcode>OX13BY</conpostcode>
              <contelephone>01865 79073</contelephone>
              <conmobile></conmobile>
              <fax></fax>
              <conemail>[email protected]</conemail>
            </contact>
          </festival>
        </member>

    This is my schema:

        <?xml version="1.0" encoding="UTF-8"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
          <xs:simpleType name="postcode">
            <xs:restriction base="xs:string">
              <xs:minLength value="6"/>
              <xs:maxLength value="8"/>
            </xs:restriction>
          </xs:simpleType>
          <xs:simpleType name="telephone">
            <xs:restriction base="xs:string">
              <xs:minLength value="10"/>
              <xs:maxLength value="13"/>
            </xs:restriction>
          </xs:simpleType>
          <xs:simpleType name="mobile">
            <xs:restriction base="xs:string">
              <xs:minLength value="11"/>
              <xs:maxLength value="11"/>
            </xs:restriction>
          </xs:simpleType>
          <xs:simpleType name="password">
            <xs:restriction base="xs:string">
              <xs:minLength value="32"/>
              <xs:maxLength value="32"/>
            </xs:restriction>
          </xs:simpleType>
          <xs:simpleType name="userlevel">
            <xs:restriction base="xs:integer">
              <xs:enumeration value="1"/> <xs:enumeration value="2"/>
              <xs:enumeration value="3"/> <xs:enumeration value="4"/>
            </xs:restriction>
          </xs:simpleType>
          <xs:simpleType name="county">
            <xs:restriction base="xs:string">
              <xs:enumeration value="Bedfordshire"/> <xs:enumeration value="Berkshire"/>
              <xs:enumeration value="Bristol"/> <xs:enumeration value="Buckinghamshire"/>
              <xs:enumeration value="Cambridgeshire"/> <xs:enumeration value="Cheshire"/>
              <xs:enumeration value="Cleveland"/> <xs:enumeration value="Cornwall"/>
              <xs:enumeration value="Cumberland"/> <xs:enumeration value="Derbyshire"/>
              <xs:enumeration value="Devon"/> <xs:enumeration value="Dorset"/>
              <xs:enumeration value="Durham"/> <xs:enumeration value="East Ridings Of Yorkshire"/>
              <xs:enumeration value="Essex"/> <xs:enumeration value="Gloucestershire"/>
              <xs:enumeration value="Hampshire"/> <xs:enumeration value="Herefordshire"/>
              <xs:enumeration value="Hertfordshire"/> <xs:enumeration value="Huntingdonshire"/>
              <xs:enumeration value="Isle Of Man"/> <xs:enumeration value="Kent"/>
              <xs:enumeration value="Lancashire"/> <xs:enumeration value="Leicestershire"/>
              <xs:enumeration value="Lincolnshire"/> <xs:enumeration value="London"/>
              <xs:enumeration value="Middlesex"/> <xs:enumeration value="Norfolk"/>
              <xs:enumeration value="North Yorkshire"/> <xs:enumeration value="Northamptonshire"/>
              <xs:enumeration value="Northumberland"/> <xs:enumeration value="Nottinghamshire"/>
              <xs:enumeration value="Oxfordshire"/> <xs:enumeration value="Rutland"/>
              <xs:enumeration value="Shropshire"/> <xs:enumeration value="Somerset"/>
              <xs:enumeration value="South Yorkshire"/> <xs:enumeration value="Staffordshire"/>
              <xs:enumeration value="Suffolk"/> <xs:enumeration value="Surrey"/>
              <xs:enumeration value="Sussex"/> <xs:enumeration value="Tyne and Wear"/>
              <xs:enumeration value="Warwickshire"/> <xs:enumeration value="West Yorkshire"/>
              <xs:enumeration value="Westmorland"/> <xs:enumeration value="Wiltshire"/>
              <xs:enumeration value="Wirral"/> <xs:enumeration value="Worcestershire"/>
              <xs:enumeration value="Yorkshire"/>
            </xs:restriction>
          </xs:simpleType>
          <xs:element name="folktask">
            <xs:complexType>
              <xs:sequence>
                <xs:element minOccurs="0" maxOccurs="unbounded" ref="member"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="member">
            <xs:complexType>
              <xs:sequence>
                <xs:element ref="user" minOccurs="0" maxOccurs="unbounded"/>
                <xs:element ref="festival" minOccurs="0" maxOccurs="unbounded"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="user">
            <xs:complexType>
              <xs:attribute name="id" type="xs:integer"/>
              <xs:sequence>
                <xs:element ref="personal"/>
                <xs:element ref="account"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="personal">
            <xs:complexType>
              <xs:sequence>
                <xs:element ref="name"/> <xs:element ref="sex"/>
                <xs:element ref="address1"/> <xs:element ref="address2"/>
                <xs:element ref="city"/> <xs:element ref="county"/>
                <xs:element ref="postcode"/> <xs:element ref="telephone"/>
                <xs:element ref="mobile"/> <xs:element ref="email"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="name" type="xs:string"/>
          <xs:element name="sex" type="xs:string"/>
          <xs:element name="address1" type="xs:string"/>
          <xs:element name="address2" type="xs:string"/>
          <xs:element name="city" type="xs:string"/>
          <xs:element name="county" type="xs:string"/>
          <xs:element name="postcode" type="postcode"/>
          <xs:element name="telephone" type="telephone"/>
          <xs:element name="mobile" type="mobile"/>
          <xs:element name="email" type="xs:string"/>
          <xs:element name="account">
            <xs:complexType>
              <xs:sequence>
                <xs:element ref="username"/> <xs:element ref="password"/>
                <xs:element ref="userlevel"/> <xs:element ref="signupdate"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="username" type="xs:string"/>
          <xs:element name="password" type="password"/>
          <xs:element name="userlevel" type="userlevel"/>
          <xs:element name="signupdate" type="xs:dateTime"/>
          <xs:element name="festival">
            <xs:complexType>
              <xs:attribute name="id" type="xs:integer" user="optional"/>
              <xs:sequence>
                <xs:element ref="event"/>
                <xs:element ref="contact"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="event">
            <xs:complexType>
              <xs:sequence>
                <xs:element ref="eventname"/> <xs:element ref="url"/>
                <xs:element ref="datefrom"/> <xs:element ref="dateto"/>
                <xs:element ref="location"/> <xs:element ref="eventpostcode"/>
                <xs:element ref="coords"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="eventname" type="xs:string"/>
          <xs:element name="url" type="xs:string"/>
          <xs:element name="datefrom" type="xs:date"/>
          <xs:element name="dateto" type="xs:date"/>
          <xs:element name="location" type="xs:string"/>
          <xs:element name="eventpostcode" type="postcode"/>
          <xs:element name="coords">
            <xs:complexType>
              <xs:sequence>
                <xs:element ref="lat"/>
                <xs:element ref="lng"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="lat" type="xs:decimal"/>
          <xs:element name="lng" type="xs:decimal"/>
          <xs:element name="contact">
            <xs:complexType>
              <xs:sequence>
                <xs:element ref="conname"/> <xs:element ref="conaddress1"/>
                <xs:element ref="conaddress2"/> <xs:element ref="concity"/>
                <xs:element ref="concounty"/> <xs:element ref="conpostcode"/>
                <xs:element ref="contelephone"/> <xs:element ref="conmobile"/>
                <xs:element ref="fax"/> <xs:element ref="conemail"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="conname" type="xs:string"/>
          <xs:element name="conaddress1" type="xs:string"/>
          <xs:element name="conaddress2" type="xs:string"/>
          <xs:element name="concity" type="xs:string"/>
          <xs:element name="concounty" type="xs:string"/>
          <xs:element name="conpostcode" type="postcode"/>
          <xs:element name="contelephone" type="telephone"/>
          <xs:element name="conmobile" type="mobile"/>
          <xs:element name="fax" type="telephone"/>
          <xs:element name="conemail" type="xs:string"/>
        </xs:schema>
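
    For what it's worth, the two s4s errors are positional: in a complexType, attribute declarations must come after the content model (the xs:sequence), and the attribute on xs:attribute is use, not user. A corrected festival declaration would look like this (note that an empty id="" would still fail against xs:integer, since '' is not a valid integer; that needs a union type or xs:string):

        <xs:element name="festival">
          <xs:complexType>
            <xs:sequence>
              <xs:element ref="event"/>
              <xs:element ref="contact"/>
            </xs:sequence>
            <!-- attributes are declared after the content model -->
            <xs:attribute name="id" type="xs:integer" use="optional"/>
          </xs:complexType>
        </xs:element>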

    Read the article

  • XML Schema Header & Namespace Config

    - by zharvey
    I'm migrating from DTD to XSD, and for some reason the transition is a bumpy one. I understand how to define the schema once I'm inside the <xs:schema> root tag, but getting past the header and namespace declaration stuff is proving to be especially confusing for me. I have been trying to follow the well-laid-out tutorial on W3S, but even that tutorial seems to assume a lot of knowledge up front. I guess what I'm looking for is a King's English explanation of which attributes do what, where they go, and why:

      - xmlns
      - xmlns:xs
      - xmlns:xsi
      - targetNamespace
      - xsi:schemaLocation

    And in some cases I see different variations of these elements/attributes, such as xsi, which seems to have two different notations: xsi:schemaLocation="..." and xs:import schemaLocation="...". Between all these slight variations I can't seem to make heads or tails of what each of these does. Thanks in advance for bringing any clarity to this confusion!
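
    By way of illustration, here is a minimal schema-plus-instance pair with each declaration annotated (a sketch; the example.com namespace URI and file names are made up):

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- people.xsd
             xmlns:xs        binds the xs: prefix to the XML Schema vocabulary itself
             targetNamespace is the namespace this schema defines elements for
             xmlns           (the default namespace) makes unprefixed element/type
                             references resolve against the target namespace -->
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   xmlns="http://example.com/people"
                   targetNamespace="http://example.com/people"
                   elementFormDefault="qualified">
          <xs:element name="person" type="xs:string"/>
        </xs:schema>

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- instance document
             xmlns:xsi          binds the XML Schema *instance* namespace
             xsi:schemaLocation pairs a namespace with the location of its schema file;
             by contrast, xs:import schemaLocation appears inside a schema document,
             pulling in declarations from a different target namespace -->
        <person xmlns="http://example.com/people"
                xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                xsi:schemaLocation="http://example.com/people people.xsd">Fred</person>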

    Read the article

  • How to create a schema that has the same access as dbo and can be accessed by the sa user

    - by Shantanu Gupta
    I am new to schemas, roles, and user management in SQL Server. Till now I have worked with the simple dbo schema, but after reading a few articles I am interested in creating schemas to manage my tables in a folder-like fashion. At present, I want to create a schema where I can keep tables that have the same kind of functionality. When I try to create a schema, though, I face problems with queries, permissions, etc. First of all I want to get used to using schemas; only then do I want to explore them further, but between the initial learning curve and work pressure I have not been able to implement this yet. What can I do to start using a schema with default permissions like those of dbo? Also let me know about creating roles and assigning roles on these schemas. I want all this to be accessible by the sa user itself at present. What is the concept behind all these things?
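
    As a starting point, the basic T-SQL for a dbo-owned schema plus a role looks something like this (a sketch; the schema, role, and user names are invented):

        -- A schema owned by dbo: objects created in it are dbo-owned,
        -- so ownership chaining behaves as it does for dbo objects
        CREATE SCHEMA Sales AUTHORIZATION dbo;
        GO
        CREATE TABLE Sales.Orders (OrderID INT PRIMARY KEY);
        GO
        -- A role granted rights on the whole schema at once
        CREATE ROLE SalesReader;
        GRANT SELECT ON SCHEMA::Sales TO SalesReader;
        EXEC sp_addrolemember 'SalesReader', 'SomeUser';

    The sa login is a sysadmin, so it can access any schema without further grants.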

    Read the article

  • How to Compare and fetch dates in CakePHP?

    - by delete me
    I am trying to make an availability calendar and need to know how I can compare dates when fetching them. My table is:

        id  start_date  end_date  status

    Now suppose I want to fetch bookings in the next month, i.e. from 2010-03-01 to 2010-04-01. How should I fetch this data? I did try comparing directly using an AND condition, but it didn't help. The date format in the DB is yyyy-mm-dd and I used the same format to compare, but direct comparison does not work.
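
    In CakePHP 1.x, the usual way to express a date range is with comparison operators in the conditions array; a sketch, assuming a Booking model over that table:

        // Fetch bookings falling in March 2010. MySQL compares DATE columns
        // given as 'YYYY-MM-DD' strings chronologically, so this works as-is.
        $bookings = $this->Booking->find('all', array(
            'conditions' => array(
                'Booking.start_date >=' => '2010-03-01',
                'Booking.end_date <'    => '2010-04-01'
            )
        ));

    If direct comparison fails, the usual culprit is the column being a VARCHAR rather than a real DATE type, or the operator being left out of the condition key.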

    Read the article

  • Stop node containing subnodes and text in schema

    - by AndyC
    If I have some XML like this:

        <mynode>
          <mysubnode>
            <mysubsubnode>hello world</mysubsubnode>
            some more text
          </mysubnode>
        </mynode>

    As you can see, mysubnode contains both a subnode and some text data. What I want to know is: is it possible to prevent this happening in a schema? I don't want nodes to contain subnodes and text, just subnodes or text. Is there an option in my XSD I can specify to force this? The program that uses this XML is written in .NET, so I'll tag it as well in case there's anything of use in .NET that I can utilise for this, though I'd much rather the issue was fixed in the schema itself. Cheers
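
    XSD covers this directly: a complexType only permits character data alongside child elements when mixed="true", and mixed defaults to false. So declaring mysubnode along these lines (a sketch based on the sample above) makes the stray text a validation error:

        <xs:element name="mysubnode">
          <xs:complexType>  <!-- mixed defaults to "false", so text content is rejected -->
            <xs:sequence>
              <xs:element name="mysubsubnode" type="xs:string"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>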

    Read the article

  • Compare images to find differences

    - by _simon_
    Task: I have a camera mounted at the end of our assembly line, which captures images of produced items. Let's say, for example, that we produce tickets (with some text and pictures on them). Every produced ticket is photographed and saved to disk as an image. Now I would like to check these saved images for anomalies, i.e. compare them to a known-good template image. So if there is a problem with a ticket on our assembly line (missing picture, a stain, ...), my application should find it, because its image differs too much from my template.

    Question: What is the easiest way to compare pictures and find differences between them? Do I need to write my own methods, or can I use existing ones? It would be great if I could just set a tolerance value (i.e. images can differ by 1%), put both images into a function, and get a return value of true or false. :)

    Tools: C# or VB.NET, Emgu.CV (.NET wrapper for OpenCV) or something similar
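
    One common recipe with Emgu.CV is absolute difference plus a pixel count, along these lines (a hedged sketch: the method names follow Emgu's Image<TColor,TDepth> API as of the 2.x releases, and the threshold/tolerance values are illustrative):

        using Emgu.CV;
        using Emgu.CV.Structure;

        static bool MatchesTemplate(Image<Gray, byte> template,
                                    Image<Gray, byte> captured,
                                    double maxDiffFraction)
        {
            // Per-pixel absolute difference, then threshold away sensor noise
            Image<Gray, byte> diff = template.AbsDiff(captured);
            Image<Gray, byte> mask = diff.ThresholdBinary(new Gray(30), new Gray(255));

            // Fraction of pixels that differ noticeably
            int changed = mask.CountNonzero()[0];
            double fraction = (double)changed / (template.Width * template.Height);
            return fraction <= maxDiffFraction;   // e.g. 0.01 for a 1% tolerance
        }

    This assumes the captured image is already aligned with the template; on a real line you would normally crop or register the ticket first.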

    Read the article

  • How to compare 2 PDF files using the command line or any tool

    - by Darzen
    I have to compare 2 PDF files and check whether they are the same or different. I have tried my luck with WinMerge and Beyond Compare; however, they don't solve my issue. The issue is that the 2 files are exactly the same except for the time at which they were saved: a piece of code I have embeds the time of saving into the file, and I am not supposed to make any modifications to that code. Hence the above-mentioned tools will say that the files don't match, when there is just one difference, the timestamp. Can anyone suggest a way to handle this? Please. Thanks
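
    If the only delta is an embedded timestamp, comparing the extracted text instead of the raw bytes sidesteps it. A sketch using poppler's pdftotext with plain diff (bash):

        # "-" sends extracted text to stdout; diff exit status 0 means the text matches
        diff <(pdftotext first.pdf -) <(pdftotext second.pdf -) \
            && echo "same" || echo "different"

    This ignores images and layout, so it only works when the visible text is what you care about.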

    Read the article

  • Thoughts on schemas and schema proliferation

    - by jamiet
    In SQL Server 2005 Microsoft introduced user-schema separation, and since then I have seen the use of schemas increase; whereas before I would typically see databases where all objects were in the [dbo] schema, I now see databases that have multiple schemas; a database I saw recently had 31 (thirty one) of them. I can’t help but wonder whether this is a good thing or not – clearly 31 is an extreme case, but I question whether multiple schemas create more problems than they solve. I have been involved in many discussions that go something like this:

        Developer #1> “I have a new function to add to the database and I’m not sure which schema to put it in”
        Developer #2> “What does it do?”
        Developer #1> “It provides data to a report in Reporting Services”
        Developer #2> “Ok, so put it in the [reports] schema”
        Developer #1> “Well I could, but the data will only be used by our Financial reporting folks so shouldn’t I put it in the [financial] schema?”
        Developer #2> “Maybe, yes”
        Developer #1> “Mind you, the data is supposed to be used for regulatory reporting to the FSA, should I put it in [regulatory]?”
        Developer #2> “Err….”

    You get the idea!!! The more schemas that exist in your database, the more chance that their supposed usages will overlap. I’m left wondering whether the use of schemas is actually necessary. I don’t really see them as an aid to security, because I generally believe that principals should be assigned permissions on objects as needed, on a case-by-case basis (and I have a stock SQL query that deciphers them all for me), so why bother using them at all? I can envisage a use where a database is used to house objects pertaining to many different business functions (which, in itself, is an ambiguous term), and in that circumstance perhaps a schema per business function would be appropriate; hence of late I have been loosely following this edict:

        If some objects in a database could be moved en masse to another database without the need to remove any foreign key constraints then those objects could legitimately exist in a dedicated schema.

    I am interested to know what other people’s thoughts are on this. If you would like to share then please do so in the comments. @Jamiet

    Read the article

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug.

    Problem 1: Debugging users' bug reports

    When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to debug customers' issues and sort out what strange schema data Oracle was returning.

    Problem 2: Test execution time

    Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless.

    The solution

    To solve these, we needed to be able to populate the schema of a database without actually connecting to it. The IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to actually connect to the original database.

    This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time.

    They also allow us to easily debug a customer's problem: using a simple snapshot generation program, users can generate a query snapshot that can be sent along with a bug report, which we can immediately replay on our machines to debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data.

    Query snapshots implementation

    However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just a population of a single database. Furthermore, although the code population queries (eg querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot, to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being deterministic - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly.

    Query snapshots are a significant feature in Schema Compare that really helps us to debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team, helping us fix bugs in the product much faster than we otherwise would be able to.
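
    The record half of that record-and-replay idea fits in a few lines of C#; this is a simplification rather than Red Gate's actual code (the framing and type handling are invented):

        using System.Data;
        using System.IO;

        static class QuerySnapshot
        {
            // Stream every row of a data dictionary query straight to a binary file.
            public static void Record(IDataReader reader, BinaryWriter writer)
            {
                writer.Write(reader.FieldCount);
                while (reader.Read())
                {
                    writer.Write(true);                      // row-follows marker
                    for (int i = 0; i < reader.FieldCount; i++)
                    {
                        writer.Write(reader.IsDBNull(i));
                        if (!reader.IsDBNull(i))
                            writer.Write(reader.GetValue(i).ToString()); // simplified: real code would preserve types
                    }
                }
                writer.Write(false);                         // end-of-rows marker
            }
        }

    Replay is the mirror image: an IDataReader implementation that reads the same markers back from the file, so the population code cannot tell it from a live query.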

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently”, I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line.

    SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy. This is where SQL Compare’s command line comes into its own.

    I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database; these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.

        -- Create a DeploymentTargets database and a Servers table
        CREATE DATABASE DeploymentTargets
        GO
        USE DeploymentTargets
        GO

        CREATE TABLE [dbo].[Servers](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [serverName] [nvarchar](50) NULL,
            [environment] [nvarchar](50) NULL,
            [databaseName] [nvarchar](50) NULL,
            CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC)
        )
        GO

        -- Now insert your target server and database details
        INSERT INTO dbo.Servers (serverName, environment, databaseName)
        VALUES (N'myserverinstance', N'myenvironment1', N'mydb1')

        INSERT INTO dbo.Servers (serverName, environment, databaseName)
        VALUES (N'myserverinstance', N'myenvironment2', N'mydb2')

    Here’s the PowerShell script, which you can adapt for yourself as well.

        # We're holding the server names and database names that we want to deploy to in a database table.
        # We need to connect to that server to read these details.
        $serverName = ""
        $databaseName = "DeploymentTargets"
        $authentication = "Integrated Security=SSPI"
        #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication.

        # Path to the scripts folder we want to deploy to the databases
        $scriptsPath = "SimpleTalk"

        # Path to SQLCompare.exe
        $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe"

        # Create SQL connection string, and connection
        $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication"
        $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString);

        # Create a Dataset to hold the DataTable
        $dataSet = new-object "System.Data.DataSet" "ServerList"

        # Create a query
        $query = "SET NOCOUNT ON;"
        $query += "SELECT serverName, environment, databaseName "
        $query += "FROM dbo.Servers; "

        # Create a DataAdapter to populate the DataSet with the results
        $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection)
        $dataAdapter.Fill($dataSet) | Out-Null

        # Close the connection
        $ServerConnection.Close()

        # Populate the DataTable
        $dataTable = new-object "System.Data.DataTable" "Servers"
        $dataTable = $dataSet.Tables[0]

        # For every row in the DataTable
        $dataTable | FOREACH-OBJECT {
            "Server Name: $($_.serverName)"
            "Database Name: $($_.databaseName)"
            "Environment: $($_.environment)"

            # Compare the scripts folder to the database and synchronize the database to match.
            # NB. Have set SQL Compare to abort on medium level warnings.
            $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync") # Commented out the 'sync' parameter for safety
            write-host $arguments
            & $SQLComparePath $arguments
            "Exit Code: $LASTEXITCODE"

            # Some interesting variations:

            # Check that every database matches a folder.
            # For example, this might be a pre-deployment step to validate everything is at the same baseline state,
            # or a post-deployment script to validate the deployment worked.
            # An exit code of 0 means the databases are identical.
            #
            # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

            # Generate a report of the difference between the folder and each database, and a SQL update script for each database.
            # For example, use this after the above to generate upgrade scripts for each database.
            # Examine the warnings and the HTML diff report to understand how the script will change objects.
            #
            # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html", "/reportType:Interactive", "/showWarnings", "/include:Identical")
        }

    It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse against multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script.

    You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings; see the second commented-out example in the PowerShell above.

    There is a drawback to running a pre-generated deployment script: it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe exit code 79), a drift report is outputted instead of executing the deployment script; see the first commented-out example above.

    Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment. You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.

    You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free. (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions.)

    Read the article

  • SQL Server Compact - Schema Management

    - by Richard B
    I've been searching for some time for a good solution for managing schema on a SQL Server Compact 3.5 DB. I know of several ways of managing schema on SQL Express/Standard/Enterprise, but Compact Edition doesn't support the tools required to use the same methodology. Any suggestions/tips?

    I should expand this to say that it is for 100+ clients with wrapperware software. As the system changes, I need to publish update scripts alongside the new binaries to the client. I was looking for a decent method by which to publish this without having to just hand the client a script file and say "Run this in SSMSE". Most clients are not capable of doing such a beast. A buddy of mine disclosed a partial script on how to handle the SQL Server piece of my task, but never worked on Compact Edition... It looks like I'll be on my own for this.

    What I think I've decided to do, and it's going to need a "geek week" to accomplish, is to write some sort of tool much like how WiX and NAnt work, so that I can just write an overzealous XML document to handle the work. If I think it is worthwhile, I'll publish it on CodePlex and/or CodeProject, because I've used both sites a bit to gain a better understanding of concepts for jobs I've done in the past, and I think it's probably worthwhile to give a little back.
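
    One low-tech pattern that does work on Compact Edition (a sketch, assuming System.Data.SqlServerCe and an invented one-row SchemaInfo version table created by the first script): ship numbered scripts with the binaries and replay only the ones the client database hasn't seen:

        using System.Collections.Generic;
        using System.Data.SqlServerCe;

        static void Upgrade(string connString, SortedDictionary<int, string> scripts)
        {
            using (var conn = new SqlCeConnection(connString))
            {
                conn.Open();
                // Current schema version, stored in a one-row table
                int current = (int)new SqlCeCommand(
                    "SELECT Version FROM SchemaInfo", conn).ExecuteScalar();

                foreach (var step in scripts)
                {
                    if (step.Key <= current) continue;   // already applied
                    // Compact Edition runs one statement per command, so each
                    // dictionary entry must hold a single DDL statement
                    new SqlCeCommand(step.Value, conn).ExecuteNonQuery();
                    new SqlCeCommand("UPDATE SchemaInfo SET Version = " + step.Key,
                                     conn).ExecuteNonQuery();
                }
            }
        }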

    Read the article

  • PHP - Database schema: version control, branching, migrations.

    - by Billiam
    I'm trying to come up with (or find) a reusable system for database schema versioning in PHP projects. There are a number of Rails-style migration projects available for PHP; http://code.google.com/p/mysql-php-migrations/ is a good example. It uses timestamps for migration files, which helps with conflicts between branches.

    General problem with this kind of system: when development branch A is checked out and you want to check out branch B instead, B may have new migration files. This is fine; migrating to newer content is straightforward. If branch A has newer migration files, you would need to migrate downwards to the nearest shared patch. If branch A and B have significantly different code bases, you may have to migrate down even further. This may mean: check out B, determine the shared patch number, check out A, migrate downwards to this patch. This must be done from A, since the actual applied patches are not available in B. Then check out branch B and migrate to the newest B patch. Reverse the process again when going from B to A.

    Proposed system: when migrating upwards, instead of just storing the patch version, serialize the whole patch in the database for later use, though I'd probably only need the down() method. When changing branches, compare the patches that have been run to the patches available in the destination branch. Determine the nearest shared patch (or oldest difference, maybe) between the DB table of run patches and the patches in the destination branch, by ID or hash. Could also look for new or missing patches that are buried under a number of shared patches between the two branches. Automatically merge down to the nearest shared patch using the down() methods stored in the DB table, and then merge up to the branch's latest patch.

    My question is: is this system too crazy and/or fraught with consequences to bother developing? My experience with database schema versioning is limited to PHP autopatch, which is an up()-only system requiring filenames with sequential IDs.
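
    The branch-switch planning reduces to a set comparison between applied patches (recorded in the DB) and available patches (files in the destination branch). A rough PHP sketch of that core, with all names invented:

        <?php
        // $applied:   patchId => stored down() SQL, oldest first (from the DB table)
        // $available: patch ids present in the destination branch's migration dir
        function plan_branch_switch(array $applied, array $available)
        {
            $down = array();  // patches to revert, newest first
            foreach (array_reverse($applied, true) as $id => $downSql) {
                if (in_array($id, $available)) {
                    break;    // reached the nearest shared patch
                }
                $down[$id] = $downSql;  // replay the down() body stored in the DB
            }
            // then apply whatever the destination branch has that we don't
            $up = array_diff($available, array_keys($applied));
            return array('down' => $down, 'up' => $up);
        }

    Note this simple version stops at the first shared patch, so it misses the "buried" missing-patch case described above.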

    Read the article

  • How to reuse results with a schema for end-of-day stock data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis, like top-volume gainers, top-price gainers, etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modelled pretty much on it:

        Symbol - char(6)         // primary
        Date   - date            // primary
        Open   - decimal(18, 4)
        High   - decimal(18, 4)
        Low    - decimal(18, 4)
        Close  - decimal(18, 4)
        Volume - int

    Right now this table containing end-of-day (EOD) data will hold about 3 million rows for 3 years. Later, when I get/need more data, it could be 20 million rows.

    The front end will be asking requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones, and as such is not too costly, time-wise, I assume. But a request like "give me the top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc.

    One option I have is adding a column for each such result. And if the user asks for volume gain in 10 days over 20 days, that would require another table. The total number of such tables could easily cross 100, especially if I start using other results, like MACD-10 and MACD-100, each of which will require its own column. Is this a feasible solution?

    Another option is keeping the results in cached HTML files and presenting them to the user. I don't have much experience in web development, so to me it looks messy; but I could be wrong (of course!). Is that an option too?

    Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database being done using Perl. I would like to have a response time of 1-2 seconds.
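
    For the simpler request, a self-join on the same table is enough; a MySQL sketch with the dates hard-coded (the table name eod is illustrative):

        -- Top 10 price gainers over the 10 days ending 2010-03-26
        SELECT cur.Symbol,
               (cur.`Close` - prev.`Close`) / prev.`Close` AS gain
        FROM eod AS cur
        JOIN eod AS prev
          ON prev.Symbol = cur.Symbol
        WHERE cur.`Date`  = '2010-03-26'
          AND prev.`Date` = '2010-03-16'   -- naive: assumes this was a trading day
        ORDER BY gain DESC
        LIMIT 10;

    The baseline-over-100-days style of request is where precomputing (a nightly batch writing derived columns or a summary table) starts to pay off against a 1-2 second response budget.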

    Read the article

  • Compare NSArray with NSMutableArray adding delta objects to NSMutableArray

    - by Hooligancat
    I have an NSMutableArray that is populated with objects of strings. For simplicity's sake we'll say that the objects represent a person, and each person object contains information about that person. Thus I would have an NSMutableArray populated with person objects:

        person.firstName
        person.lastName
        person.age
        person.height

    And so on. The initial source of data comes from a web server and is populated when my application loads and completes its initialization with the server. Periodically my application polls the server for the latest list of names. Currently I am creating an NSArray of the result set, emptying the NSMutableArray, and then re-populating the NSMutableArray with the NSArray's contents before destroying the NSArray object. This seems inefficient to me on a few levels, and also presents me with a problem losing table row references, which I can work around, but I might be creating more work for myself in doing so.

    The inefficiency seems to be that I should be able to compare the two arrays and end up with a filtered NSArray. I could then add the filtered set to the NSMutableArray. This would mean that I can simply append new data to the NSMutableArray instead of throwing everything out and re-populating. Conversely, I would need to do the same filter in reverse to see if there are records that need removing from the NSMutableArray.

    Is there any method to do this in a more efficient manner? Have I overlooked something in the docs some place that refers to a simpler technique?

    I have a problem when I empty the NSMutableArray and re-populate, in that any referencing tables lose their selected row state. I can track it and re-select it, but my theory is that using some form of compare, adding and removing objects instead of dealing with the whole array in one block, might mean I keep my row reference (assuming the item isn't deleted, of course).

    Any suggestions or help much appreciated.

    Update: Would it be just as fast to do a fast enumeration over each, comparing each line item as I go? It seems like an expensive operation, but with the fast enumeration code it might be pretty efficient...
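
    The filtering step is a one-liner each way with NSPredicate, assuming the person objects implement isEqual: (and hash) sensibly; the variable names here are made up:

        // New objects: present in the server result set but not in our array
        NSArray *added = [serverResults filteredArrayUsingPredicate:
            [NSPredicate predicateWithFormat:@"NOT (SELF IN %@)", people]];
        [people addObjectsFromArray:added];

        // Stale objects: present in our array but gone from the server result set
        NSArray *removed = [people filteredArrayUsingPredicate:
            [NSPredicate predicateWithFormat:@"NOT (SELF IN %@)", serverResults]];
        [people removeObjectsInArray:removed];

    IN does pairwise isEqual: comparisons under the hood, so this is still O(n*m), which is fine for name-list sizes, and because untouched rows are never removed, their selection state survives.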

    Read the article

  • Create XML File Using XML Schema

    - by metdos
    Is there any easy way to create at least a template XML file using an XML Schema? My main interest is C++, but discussions of other programming languages are also welcome. By the way, I also use the Qt framework.

    Read the article

  • Specify XML schema data type of decimal or blank

    - by Jeremy Stein
    Is there a way in an XML schema to specify that an element may contain either an empty string or a decimal? If I specify the type as xs:decimal like this: <xs:element name="Sample" type="xs:decimal" /> then a blank value would not pass validation: <Sample/> (I realize that the best way to indicate no value would be to not include the element, but I was wondering if there was a way to allow blank or decimal.)
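
    This is what xs:union is for: combine xs:decimal with a type whose only legal value is the empty string. One way to express it:

        <xs:element name="Sample">
          <xs:simpleType>
            <xs:union>
              <xs:simpleType>
                <xs:restriction base="xs:decimal"/>
              </xs:simpleType>
              <xs:simpleType>
                <xs:restriction base="xs:string">
                  <xs:enumeration value=""/>  <!-- allow exactly the empty string -->
                </xs:restriction>
              </xs:simpleType>
            </xs:union>
          </xs:simpleType>
        </xs:element>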

    Read the article
