Search Results

Search found 6441 results on 258 pages for 'schema compare'.

Page 17 of 258

  • How to build a database from an XSD schema and import XML data

    - by FreshCode
    I have a complex XSD schema and hundreds of XML files conforming to the schema. How do I automate the creation of related SQL Server tables to store the XML data? I've considered creating C# classes from the XSD schema using the xsd.exe tool and letting something like Subsonic figure out how to make a shiny database out of it, but not sure if it's the best way to approach it. Has anyone managed to elegantly import XSD files into SQL Server?
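
    If it helps, one route (sketched below with placeholder file and namespace names, and only one of several possible approaches) is to let xsd.exe emit either plain classes or a typed DataSet from the schema, then bulk load the XML afterwards:

        rem Sketch with placeholder names: generate C# classes or a typed DataSet
        rem from the schema, then load the XML (e.g. with SqlBulkCopy or SQLXML Bulk Load).
        xsd.exe MySchema.xsd /classes /namespace:MyApp.Generated
        xsd.exe MySchema.xsd /dataset /namespace:MyApp.Generated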

    Read the article

  • Update database schema with hibernate

    - by blow
    Hi all, with <property name="hibernate.hbm2ddl.auto">update</property> I can create my database schema; it automatically adds properties, constraints, keys etc... But what about UPDATING the database schema? If I remove some property from my entities, Hibernate doesn't remove it, and if I change some constraint, Hibernate doesn't touch constraints it has already created... So, is there a way to make Hibernate really update the database schema? Thanks.

    Read the article

  • How to design a database schema for storing text in multiple languages?

    - by stach
    We have a PostgreSQL database. And we have several tables which need to keep certain data in several languages (the list of possible languages is thankfully system-wide defined). For example, let's start with: create table blah (id serial, foo text, bar text); Now, let's make it multilingual. How about: create table blah (id serial, foo_en text, foo_de text, foo_jp text, bar_en text, bar_de text, bar_jp text); That would be good for full-text search in Postgres. Just add a tsvector column for each language. But is it optimal? Maybe we should use another table to keep the translations? Like: create table texts (id serial, colspec text, obj_id int, language text, data text); Maybe, just maybe, we should use something else - something out of the SQL world? Any help is appreciated.
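
    A sketch of the separate-translations-table option mentioned above (table and column names are illustrative, not a recommendation):

        -- One row per (object, language); full-text columns can be added per row.
        CREATE TABLE blah (id serial PRIMARY KEY);
        CREATE TABLE blah_translations (
            blah_id  integer REFERENCES blah(id),
            language text    NOT NULL,        -- e.g. 'en', 'de', 'jp'
            foo      text,
            bar      text,
            PRIMARY KEY (blah_id, language)
        );
        -- Fetch one language:
        SELECT b.id, t.foo, t.bar
        FROM blah b
        LEFT JOIN blah_translations t
               ON t.blah_id = b.id AND t.language = 'de';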

    Read the article

  • Click No Browse: How to Navigate Objects Without Opening Them

    - by thatjeffsmith
    Oracle SQL Developer by default automatically opens the object editor when you click on an object in your connection tree or schema browser. For most folks this is very convenient. But if you are selecting objects to drag them to a model or to the worksheet, this can get annoying as the focus of the screen changes when you don’t want it to. The other scenario this feature might disrupt more than delight is when you want to click around the database in the tree and every time you click on an object, the object editor automatically changes to the selected object. You can disable this automatic browsing behavior in SQL Developer by modifying this preference: Tools > Preferences > Database > ObjectViewer > 'Open Object on Single Click' - disable this if you don’t want an object to open when you click on it. OK, I do realize my description of the problem may have confused the heck out of you just now. So instead of more words, how about a couple of animations of the object-click behavior with the option ON and OFF? Preference Disabled: click, no open; double click, open. Preference Enabled (Default): as you click on objects, they are automatically opened.

    Read the article

  • Migrating to Windows Server 2008 R2 Domain Controllers - a few Questions/Issues

    - by Chris
    Ok so here's our setup: We have 2 Windows 2003 Domain Controllers. I am trying to replace them with Windows 2008 R2. The 2003 servers are named DC01 and DC02. The 2008 R2 servers are DC1 and DC2. I prepared the Windows Server 2003 Forest Schema for a Domain Controller that runs Windows Server 2008 or Windows Server 2008 R2. Then with both of the new servers up as member servers I ran dcpromo on DC1 using the advanced option and added it successfully to my existing domain. Its roles are GC, DNS and Active Directory Domain Services. I transferred the PDC Emulator, RID Pool Manager, and Infrastructure Master roles to DC1. The Schema Master and Domain Naming Master are still on DC01. The first issue that I'm encountering is when I dcpromo DC2 and select "Replicate data over the network from an existing domain controller", I select that I want to replicate from DC1 and I get the following error: Failed to identify the requested replica partner (dc1.xxx.org) as a valid domain controller with a machine account for (DC2$). This is likely due to either the machine account not being replicated to this domain controller because of replication latency or the domain controller not advertising the Active Directory Domain Services. Please consider retrying the operation with \dc01.xxx.org as the replica partner. "The server is unwilling to process the request." Is this because the Schema Master and Domain Naming Master roles are still on the old DC01? And if so, if I transfer the Schema Master and Domain Naming Master roles to DC1 what is the risk of breaking my AD? I'm a little paranoid because this process HAS to be transparent. ANY down time or interruption will result in me getting a verbal ass kicking from my I.T. Director. Both of the new servers' DNS points to the old DNS servers (DC01 and DC02), not themselves, by the way.
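
    For what it's worth, if the remaining roles do need to move, Windows Server 2008 R2 ships a PowerShell cmdlet for it; a minimal sketch, assuming DC1 is the intended target and the ActiveDirectory module is available:

        # Preview first with -WhatIf, then run without it to actually transfer.
        Import-Module ActiveDirectory
        Move-ADDirectoryServerOperationMasterRole -Identity "DC1" `
            -OperationMasterRole SchemaMaster, DomainNamingMaster -WhatIf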

    Read the article

  • Using LDAP to store customer data

    - by mechcow
    We wish to store some data in 389 Directory Server LDAP that doesn't fit that well into the standard set of schemas that come with the product. Nothing too amazing, things like: when the customer joined; whether they are currently active; a customer certificate[1]; which environment they are using. My question is this: should we register an OID and start writing up our own custom schema, OR is there a standard schema definition not shipped with Directory Server that we can download and use that would fit our needs? Should we munge/hack existing attributes and store the data in them (I'm strongly opposed to this, but would be interested in arguments about why it's better than extending)? [1] I know there is a field for this, userCertificate, but we don't want to use it to authenticate the user for the purposes of binding. Using CentOS 5.5 with 389 Directory Server 8.1.
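
    If extending does turn out to be the answer, a 389 DS schema extension is just an LDIF modify against cn=schema; the sketch below uses made-up names and an unregistered example OID arc purely for illustration:

        # Illustrative only - replace the OIDs with your own registered arc.
        dn: cn=schema
        changetype: modify
        add: attributeTypes
        attributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'customerJoinDate'
          SYNTAX 1.3.6.1.4.1.1466.115.121.1.24 SINGLE-VALUE )
        -
        add: objectClasses
        objectClasses: ( 1.3.6.1.4.1.99999.2.1 NAME 'exampleCustomer' SUP top AUXILIARY
          MAY ( customerJoinDate $ description ) )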

    Read the article

  • Windows Server 2008 R2 Software Deployment on Active Directory - Schema Issue

    - by weedave
    We have two servers, one running Windows Server 2003 SP2 and one running Windows Server 2008 R2. Both servers have their own versions of Group Policy Management (1.0.2 on 2003 and 6.0.0.1 on 2008). We are wanting to migrate everything over to the newer 2008 server, including software deployment. However, when I try to add a new software package using a .msi file, I get the following error: "The schema for the software installation data in the Active Directory does not match the required schema." I have tried two separate software packages and get the same error on the 2008 server. However, when I do the same on the 2003 server, it adds the software package without any problems. The .msi files I am using are up-to-date - one is the most recent version of Google Chrome. Is this problem caused by the different versions of the OS, or the Group Policy Management program? How do we "upgrade" our Active Directory to allow software deployment on the 2008 server? Thanks.
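
    Not necessarily the fix for this particular error, but since the question asks how to "upgrade" Active Directory, the standard way to bring the schema up to the 2008 R2 level is adprep from the R2 media; a sketch, with the usual rights and paths assumed:

        rem Run from \support\adprep on the Server 2008 R2 media, on the schema
        rem master, as a member of Schema Admins / Enterprise Admins.
        adprep /forestprep
        adprep /domainprep /gpprep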

    Read the article

  • How does ARM Cortex A8 compare with a modern x86 processor

    - by thomasrutter
    I was wondering how a modern ARM chip based on the ARM Cortex-A8 compares, in clock-for-clock performance and capability, to a modern x86 chip such as a Core 2 Duo or Core i5? I realise that due to the different instruction sets it'll depend heavily on what you're doing. To put it another way: rendering a web page in WebKit on a 1GHz ARM Cortex-A8 based chip should be about equivalent to doing it on a Core i5 at __ MHz? Update October 2013: Since I asked this question years ago it's become a lot more common, when reading about mobile devices, to see architecture-agnostic benchmarks that you can compare across platforms - for example, in-browser benchmarks like SunSpider in WebKit will run on just about anything and you see these in reviews all the time now. And there are things like Geekbench now.

    Read the article

  • Star schema [fact 1:n dimension]...how?

    - by Mike Gates
    I am a newcomer to data warehouses and have what I hope is an easy question about building a star schema: If I have a fact table where a fact record naturally has a one-to-many relationship with a single dimension, how can a star schema be modeled to support this? For example: Fact Table: Point of Sale entry (the measurement is DollarAmount) Dimension Table: Promotions (these are sales promotions in effect when a sale was made) The situation is that I want a single Point Of Sale entry to be associated with multiple different Promotions. These Promotions cannot be their own dimensions as there are many many many promotions. How do I do this?
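
    The usual dimensional-modelling answer is a bridge (promotion group) table between the fact and the promotion dimension; a sketch with illustrative names:

        CREATE TABLE DimPromotion (
            PromotionKey  int PRIMARY KEY,
            PromotionName varchar(100)
        );
        -- One group key per distinct combination of promotions in effect.
        CREATE TABLE PromotionGroupBridge (
            PromotionGroupKey int,
            PromotionKey      int REFERENCES DimPromotion (PromotionKey),
            PRIMARY KEY (PromotionGroupKey, PromotionKey)
        );
        CREATE TABLE FactPointOfSale (
            SaleKey           int PRIMARY KEY,
            PromotionGroupKey int,            -- joins to the bridge, not the dimension
            DollarAmount      decimal(12,2)
        );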

    Read the article

  • How to supply parameters in URI schema?

    - by abhishekgarg
    I just started working with URI schemes and successfully created one in Windows and Linux as well, but I am not able to pass any parameters to it. In Linux I am trying to open a file "test.py" in gedit, so for the scheme part I used these commands: gconftool-2 -s -t string /desktop/gnome/url-handlers/geditapp/command "gedit %s" and gconftool-2 -s -t bool /desktop/gnome/url-handlers/geditapp/enabled true. This creates the URI protocol and I'm able to open the application from the web browser, but it's not taking the parameter for the file I want to open, so I'm using the following link: <a href="geditapp:/opt/test/myfile.py">open</a> which opens gedit but without the file. Can someone please help me with this?

    Read the article

  • Configure Git to use Beyond Compare for image diff

    - by Barney
    Because we work with a number of sprites, the kind of specialised diff views provided by Beyond Compare would be ideal to see which one of 2 versions I'm after when conflicts arise. I've already configured Git to use Beyond Compare as my primary diff and merge tool as described in their integration guide — it specifically goes into how to configure TortoiseSVN to use it for images, and I've found these articles talking about .gitattributes in general and how to script interactions from a *nix shell — but it's not obvious to me how I can use the advice provided by these guides to make a simple change that would say "use the default diff & merge bindings for files determined to be images, too". For the record, I'm doing all this on Windows :P
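
    One way to wire this up (sketched below; the driver name, wrapper script path and Beyond Compare install path are assumptions, not something taken from the integration guide) is a custom diff driver referenced from .gitattributes:

        # .gitattributes
        *.png diff=bc-image
        *.jpg diff=bc-image

        # .git/config (or git config --global)
        [diff "bc-image"]
            command = ~/bin/bc-image-diff.sh

        #!/bin/sh
        # ~/bin/bc-image-diff.sh - git passes seven arguments:
        # path old-file old-hex old-mode new-file new-hex new-mode
        "/c/Program Files (x86)/Beyond Compare 3/BComp.exe" "$2" "$5"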

    Read the article

  • How to compare differences between directories (linux)

    - by Phil
    I have two directories - one from an earlier backup and a second from the newest backup. How do I compare what changes were made to the files in the newest backup, on Linux? Also, how do I display the changes in, for example, text and PHP files - I'm thinking about something like the revision history on Wikipedia, where you see the old version on one side of the screen and the newest version on the other, with the changes highlighted. How do I achieve something like that? Edit: How do I also compare a remote dir with a local one?
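
    A sketch of the usual command-line answers (paths are placeholders):

        # Which files differ at all (names only), recursively:
        diff -qr /backup/old /backup/new
        # Unified diff of the text/php files that changed:
        diff -ur /backup/old /backup/new | less
        # Side-by-side, old on the left and new on the right (closest to the wiki view):
        diff -yr --suppress-common-lines /backup/old /backup/new | less
        # Remote vs local, as a dry run (-n) so nothing is actually copied:
        rsync -avn --delete user@remote:/path/to/dir/ /local/dir/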

    Read the article

  • Shell script to read value from a file and compare it to another one

    - by maneeshshetty
    I have a C program which puts one unique value inside a test file (it would be a two digit number). Now I want to run a shell script to read that number and then compare with my required number (e.g. 40). The comparison should deliver "equal to" or "greater". For example: The output of the C program is written into the file called c.txt with the value 36, and I want to compare it with the number 40. So I want that comparison to be "equal to" or "greater" and then echo the value "equal" or "greater".
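
    A minimal sketch, assuming c.txt contains nothing but the number:

        #!/bin/sh
        required=40
        value=$(cat c.txt)

        if [ "$value" -eq "$required" ]; then
            echo "equal"
        elif [ "$value" -gt "$required" ]; then
            echo "greater"
        else
            echo "less than $required"
        fi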

    Read the article

  • Comparing two specific properties of a CSV using Compare-Object isn't giving the expected results

    - by MDMarra
    I have a list of users from two separate domains. These lists are in CSV format and I only care about the SAMAccountName, which is a field in these CSVs. The code that I'm working with is currently: $domain1 = Import-CSV C:\Scripts\Temp\domain1.xxx.org.csv | Select-Object SAMAccountName $domain2 = Import-CSV C:\Scripts\Temp\domain2.xxx.org.csv | Select-Object SAMAccountName Compare-Object ($domain1) ($domain2) This is returning only a handful of results (which aren't accurate) in this format: @{samaccountname=SomeUser} => Obviously, Compare-Object isn't evaluating the objects as strings. How do I make this work?
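
    Comparing on the property (or flattening the objects to plain strings first) usually gives the expected result; a sketch using the same hypothetical file paths:

        # Option 1: tell Compare-Object which property to compare on.
        Compare-Object $domain1 $domain2 -Property SAMAccountName

        # Option 2: expand the property so the variables hold plain strings.
        $domain1 = Import-CSV C:\Scripts\Temp\domain1.xxx.org.csv |
            Select-Object -ExpandProperty SAMAccountName
        $domain2 = Import-CSV C:\Scripts\Temp\domain2.xxx.org.csv |
            Select-Object -ExpandProperty SAMAccountName
        Compare-Object $domain1 $domain2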

    Read the article

  • Why does JAXB not create a member variable in its generated code when an XML schema base type and subtype have the same element declared in them?

    - by belltower
    I have a question with regard to JAXB generated classes. As you can see, I have a complex type, DG_PaymentIdentification1, declared in my schema. It's a restriction of PaymentIdentification1. DG_PaymentIdentification1 is also identical to PaymentIdentification1. I also have a type called DG_CreditTransferTransactionInformation10 which has a base type of CreditTransferTransactionInformation10 and is identical to it. I have included the relevant XML schema snippets below. <xs:complexType name="DG_PaymentIdentification1"> <xs:complexContent> <xs:restriction base="PaymentIdentification1"> <xs:sequence> <xs:element name="InstrId" type="DG_Max35Text_REF" minOccurs="0"/> <xs:element name="EndToEndId" type="DG_Max35Text_REF" id="DG-41"/> </xs:sequence> </xs:restriction> </xs:complexContent> </xs:complexType> <xs:complexType name="PaymentIdentification1"> <xs:sequence> <xs:element name="InstrId" type="Max35Text" minOccurs="0"/> <xs:element name="EndToEndId" type="Max35Text"/> </xs:sequence> </xs:complexType> <xs:complexType name="DG_CreditTransferTransactionInformation10"> <xs:complexContent> <xs:restriction base="CreditTransferTransactionInformation10"> <xs:sequence> <xs:element name="PmtId" type="DG_PaymentIdentification1"/> <xs:simpleType name="DG_Max35Text_REF"> <xs:restriction base="DG_NotEmpty35"> <xs:pattern value="[\-A-Za-z0-9\+/\?:\(\)\.,'&#x20;]*"/> </xs:restriction> </xs:simpleType> <xs:simpleType name="Max35Text"> <xs:restriction base="xs:string"> <xs:minLength value="1"/> <xs:maxLength value="35"/> </xs:restriction> </xs:simpleType> JAXB generates the following Java class for DG_PaymentIdentification1: @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "DG_CreditTransferTransactionInformation10") public class DGCreditTransferTransactionInformation10 extends CreditTransferTransactionInformation10 { } My question is why doesn't the DGCreditTransferTransactionInformation10 generated class have a variable of type DG_PaymentIdentification1 in the generated code? The base class CreditTransferTransactionInformation10 does have a type PaymentIdentification1 declared in it. Is there any way of ensuring that DGCreditTransferTransactionInformation10 will have a DG_PaymentIdentification1 in it?

    Read the article

  • Workaround for datadude deployment bug - NullReferenceException

    - by jamiet
    I have come across a bug in Visual Studio 2010 Database Projects (aka datadude aka DPro aka Visual Studio Database Development Tools aka Visual Studio Team Edition for Database Professionals aka Juneau aka SQL Server Data Tools) that other people may encounter so, for the purposes of googling, I'm writing this blog post about it. Through my own googling I discovered that a Connect bug had already been raised about it (VS2010 Database project deploy - “SqlDeployTask” task failed unexpectedly, NullReferenceException), and coincidentally enough it was raised by my former colleague Tom Hunter (whom I have mentioned here before as the superhuman Tom Hunter) although it has not (at this time) received a reply from Microsoft. Tom provided a repro, namely that this syntactically valid function definition: CREATE FUNCTION [dbo].[Function1]()RETURNS TABLEASRETURN (    WITH cte AS (    SELECT 1 AS [c1]    FROM [$(Database3)].[dbo].[Table1]   )   SELECT 1 AS [c1]   FROM cte) would produce this nasty unhelpful error upon deployment: C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\TeamData\Microsoft.Data.Schema.TSqlTasks.targets(120,5): Error MSB4018: The "SqlDeployTask" task failed unexpectedly.System.NullReferenceException: Object reference not set to an instance of an object.   at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.VariableSubstitution(SqlScriptProperty propertyValue, IDictionary`2 variables, Boolean& isChanged)   at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.ArePropertiesEqual(IModelElement source, IModelElement target, ModelPropertyClass propertyClass, ModelComparerConfiguration configuration)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareProperties(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonChangeDefinition changes)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean 
compareFromRootElement, ModelComparisonChangeDefinition& changes)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareAllElementsForOneType(ModelElementClass type, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean compareOrphanedElements)   at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareStore(ModelStore source, ModelStore target, ModelComparerConfiguration configuration)   at Microsoft.Data.Schema.Build.SchemaDeployment.CompareModels()   at Microsoft.Data.Schema.Build.SchemaDeployment.PrepareBuildPlan()   at Microsoft.Data.Schema.Build.SchemaDeployment.Execute(Boolean executeDeployment)   at Microsoft.Data.Schema.Build.SchemaDeployment.Execute()   at Microsoft.Data.Schema.Tasks.DBDeployTask.Execute()   at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()   at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult)   Done executing task "SqlDeployTask" -- FAILED.  Done building target "DspDeploy" in project "Lloyds.UKTax.DB.UKtax.dbproj" -- FAILED. Done executing task "CallTarget" -- FAILED.Done building target "DBDeploy" in project It turns out there are a certain set of circumstances that need to be met for this error to occur: The object being deployed is an inline function  (may also exist for multistatement and scalar functions - I haven't tested that) That object includes SQLCMD variable references The object has already been deployed successfully Just to reiterate that last bullet point, the error does not occur when you deploy the function for the first time, only on the subsequent deployment.   Luckily I have a direct line into a guy on the development team so I fired off an email on Friday evening and today (Monday) I received a reply back telling me that there is a simple fix, one simply has to remove the parentheses that wrap the SQL statement. So, in the case of Tom's repro, the function definition simpy has to be changed to: CREATE FUNCTION [dbo].[Function1]()RETURNS TABLEASRETURN --(    WITH cte AS (    SELECT 1 AS [c1]    FROM [$(Database3)].[dbo].[Table1]   )   SELECT 1 AS [c1]   FROM cte--) I have commented out the offending parentheses rather than removing them just to emphasize the point. Thereafter the function will deploy fine. 
I tested this out on my own project this morning and can confirm that this fix does indeed work.   I have been told that the bug CAN be reproduced in the Release Candidate (RC) 0 build of SQL Server Data Tools in SQL Server 2010 so am hoping that a fix makes it in for the Release-To-Manufacturing (RTM) build. Hope this helps @jamiet
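
    For readability, the same before/after as above (the inline listing has lost its line breaks):

        -- Fails on redeployment: SQLCMD variable inside the wrapping parentheses.
        CREATE FUNCTION [dbo].[Function1]()
        RETURNS TABLE
        AS
        RETURN
        (
            WITH cte AS (
                SELECT 1 AS [c1]
                FROM [$(Database3)].[dbo].[Table1]
            )
            SELECT 1 AS [c1]
            FROM cte
        )

        -- Workaround: remove (or comment out) the wrapping parentheses.
        CREATE FUNCTION [dbo].[Function1]()
        RETURNS TABLE
        AS
        RETURN
            WITH cte AS (
                SELECT 1 AS [c1]
                FROM [$(Database3)].[dbo].[Table1]
            )
            SELECT 1 AS [c1]
            FROM cte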

    Read the article

  • How do I restrict concurrent statistics gathering to a small set of tables from a single schema?

    - by Maria Colgan
    I got an interesting question from one of my colleagues in the performance team last week about how to restrict a concurrent statistics gather to a small subset of tables from one schema, rather than the entire schema. I thought I would share the solution we came up with because it was rather elegant, and took advantage of concurrent statistics gathering, incremental statistics, and the not so well known “obj_filter_list” parameter in DBMS_STATS.GATHER_SCHEMA_STATS procedure. You should note that the solution outline below with “obj_filter_list” still applies, even when concurrent statistics gathering and/or incremental statistics gathering is disabled. The reason my colleague had asked the question in the first place was because he wanted to enable incremental statistics for 5 large partitioned tables in one schema. The first time you gather statistics after you enable incremental statistics on a table, you have to gather statistics for all of the existing partitions so that a synopsis may be created for them. If the partitioned table in question is large and contains a lot of partition, this could take a considerable amount of time. Since my colleague only had the Exadata environment at his disposal overnight, he wanted to re-gather statistics on 5 partition tables as quickly as possible to ensure that it all finished before morning. Prior to Oracle Database 11g Release 2, the only way to do this would have been to write a script with an individual DBMS_STATS.GATHER_TABLE_STATS command for each partition, in each of the 5 tables, as well as another one to gather global statistics on the table. Then, run each script in a separate session and manually manage how many of this session could run concurrently. Since each table has over one thousand partitions that would definitely be a daunting task and would most likely keep my colleague up all night! In Oracle Database 11g Release 2 we can take advantage of concurrent statistics gathering, which enables us to gather statistics on multiple tables in a schema (or database), and multiple (sub)partitions within a table concurrently. By using concurrent statistics gathering we no longer have to run individual statistics gathering commands for each partition. Oracle will automatically create a statistics gathering job for each partition, and one for the global statistics on each partitioned table. With the use of concurrent statistics, our script can now be simplified to just five DBMS_STATS.GATHER_TABLE_STATS commands, one for each table. This approach would work just fine but we really wanted to get this down to just one command. So how can we do that? You may be wondering why we didn’t just use the DBMS_STATS.GATHER_SCHEMA_STATS procedure with the OPTION parameter set to ‘GATHER STALE’. Unfortunately the statistics on the 5 partitioned tables were not stale and enabling incremental statistics does not mark the existing statistics stale. Plus how would we limit the schema statistics gather to just the 5 partitioned tables? So we went to ask one of the statistics developers if there was an alternative way. The developer told us the advantage of the “obj_filter_list” parameter in DBMS_STATS.GATHER_SCHEMA_STATS procedure. The “obj_filter_list” parameter allows you to specify a list of objects that you want to gather statistics on within a schema or database. The parameter takes a collection of type DBMS_STATS.OBJECTTAB. 
Each entry in the collection has 5 fields; the schema name or the object owner, the object type (i.e., ‘TABLE’ or ‘INDEX’), object name, partition name, and subpartition name. You don't have to specify all five fields for each entry. Empty fields in an entry are treated as if it is a wildcard field (similar to the ‘*’ character in LIKE predicates). Each entry corresponds to one set of filter conditions on the objects. If you have more than one entry, an object is qualified for statistics gathering as long as it satisfies the filter conditions in one entry. You first must create the collection of objects, and then gather statistics for the specified collection. It’s probably easier to explain this with an example. I’m using the SH sample schema but needed a couple of additional partitioned tables to recreate my colleague's scenario of 5 partitioned tables. So I created SALES2, SALES3, and COSTS2 as copies of the SALES and COSTS tables respectively (setup.sql). I also deleted statistics on all of the tables in the SH schema beforehand to more easily demonstrate our approach. Step 0. Delete the statistics on the tables in the SH schema. Step 1. Enable concurrent statistics gathering. Remember, this has to be done at the global level. Step 2. Enable incremental statistics for the 5 partitioned tables. Step 3. Create the DBMS_STATS.OBJECTTAB and pass it to the DBMS_STATS.GATHER_SCHEMA_STATS command. Here, you will notice that we defined two variables of DBMS_STATS.OBJECTTAB type. The first, filter_lst, will be used to pass the list of tables we want to gather statistics on, and will be the value passed to the obj_filter_list parameter. The second, obj_lst, will be used to capture the list of tables that have had statistics gathered on them by this command, and will be the value passed to the objlist parameter. In Oracle Database 11g Release 2, you need to specify the objlist parameter in order to get the obj_filter_list parameter to work correctly due to bug 14539274. We also needed to define the number of objects we would supply in the obj_filter_list. In our case we were specifying 5 tables (filter_lst.extend(5)). Finally, we need to specify the owner name and object name for each of the objects in the list. Once the list definition is complete we can issue the DBMS_STATS.GATHER_SCHEMA_STATS command. Step 4. Confirm statistics were gathered on the 5 partitioned tables. Here are a couple of other things to keep in mind when specifying the entries for the obj_filter_list parameter. If a field in the entry is empty, i.e., null, it means there is no condition on this field. In the above example, suppose you remove the statement Obj_filter_lst(1).ownname := ‘SH’; You will get the same result: since you have specified gather_schema_stats, there is no need to further specify ownname in the obj_filter_lst. All of the names in the entry are normalized, i.e., uppercased if they are not double quoted. So in the above example, it is OK to use Obj_filter_lst(1).objname := ‘sales’;. However if you have a table called ‘MyTab’ instead of ‘MYTAB’, then you need to specify Obj_filter_lst(1).objname := ‘”MyTab”’; As I said before, although we have illustrated the usage of the obj_filter_list parameter for partitioned tables, with concurrent and incremental statistics gathering turned on, the obj_filter_list parameter is generally applicable to any gather_database_stats, gather_dictionary_stats and gather_schema_stats command. You can get a copy of the script I used to generate this post here.
+Maria Colgan
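
    A sketch of what steps 2-3 might look like in PL/SQL, using the table names from the post (parameter defaults are omitted; the ownname/objname record fields are the ones the post itself uses):

        -- Step 2: enable incremental statistics on the five partitioned tables.
        BEGIN
          DBMS_STATS.SET_TABLE_PREFS('SH', 'SALES',  'INCREMENTAL', 'TRUE');
          DBMS_STATS.SET_TABLE_PREFS('SH', 'SALES2', 'INCREMENTAL', 'TRUE');
          DBMS_STATS.SET_TABLE_PREFS('SH', 'SALES3', 'INCREMENTAL', 'TRUE');
          DBMS_STATS.SET_TABLE_PREFS('SH', 'COSTS',  'INCREMENTAL', 'TRUE');
          DBMS_STATS.SET_TABLE_PREFS('SH', 'COSTS2', 'INCREMENTAL', 'TRUE');
        END;
        /

        -- Step 3: build the filter list and gather.
        DECLARE
          filter_lst DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
          obj_lst    DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
        BEGIN
          filter_lst.EXTEND(5);
          filter_lst(1).ownname := 'SH';  filter_lst(1).objname := 'SALES';
          filter_lst(2).ownname := 'SH';  filter_lst(2).objname := 'SALES2';
          filter_lst(3).ownname := 'SH';  filter_lst(3).objname := 'SALES3';
          filter_lst(4).ownname := 'SH';  filter_lst(4).objname := 'COSTS';
          filter_lst(5).ownname := 'SH';  filter_lst(5).objname := 'COSTS2';

          DBMS_STATS.GATHER_SCHEMA_STATS(ownname         => 'SH',
                                         obj_filter_list => filter_lst,
                                         objlist         => obj_lst);
        END;
        /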

    Read the article

  • Utility to LOGICALLY compare two xml files?

    - by Matthew
    Right now we are attempting to build golden configurations for our environment. One piece of software that we use relies on large XML files to contain the bulk of its configuration. We want to take our lab environment, catalog it as our "golden configuration" and then be able to audit against that configuration in the future. Since diff is a bytewise comparison and NOT a logical comparison, we can't use it to compare files in this case (XML is unordered, so it won't work). What I am looking for is something that can parse the two XML files, and compare them element by element. So far we have yet to find any utilities that can do this. OS doesn't matter, I can do it on anything where it will work. The preference is something off the shelf. Any ideas? Edit: One issue we have run into is that one vendor's config files will occasionally mention the same element several times, each time with different attributes. Whatever diff utility we use would need to be able to identify either the set of attributes or identify them all as part of one element. Tall order :)
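
    If nothing off the shelf turns up, the element-by-element idea is small enough to sketch; the rough Python below treats children as an unordered multiset and keeps each element's attribute set as part of its identity (so repeated elements with different attributes stay distinct), although a real audit tool would also need to report *where* the differences are:

        #!/usr/bin/env python3
        """Rough sketch: order-insensitive, element-by-element XML comparison."""
        import sys
        import xml.etree.ElementTree as ET
        from collections import Counter

        def fingerprint(elem):
            # Reduce an element to a hashable, order-independent description.
            children = Counter(fingerprint(child) for child in elem)
            return (elem.tag,
                    (elem.text or "").strip(),
                    frozenset(elem.attrib.items()),
                    frozenset(children.items()))

        if __name__ == "__main__":
            a = ET.parse(sys.argv[1]).getroot()
            b = ET.parse(sys.argv[2]).getroot()
            same = fingerprint(a) == fingerprint(b)
            print("MATCH" if same else "DIFFERS")
            sys.exit(0 if same else 1)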

    Read the article

  • Tungsten for MySQL: Online Schema Upgrade

    - by Jason
    In a recent presentation the Continuent folks claim that they support "daylight maintenance" or online schema upgrades. See Clustering for the Masses: A Gentle Introduction to Tungsten for MySQL, especially pages 22-28. Is anyone using Tungsten for MySQL in this way? It sounds too good to be true. I also wonder if the Community Edition supports all of the features discussed in the presentation. They say elsewhere that it is not crippleware but in their own productization table "Zero downtime upgrade" appears to only be available in the more advanced versions. So I'm skeptical. Community support seems rather non-existent so a commercial license with support is probably warranted (they do not disclose pricing). I have not contacted them directly yet as I prefer community vetting but this solution, despite its value proposition and power to make an admin's life easier just doesn't seem to get the kind of attention it might warrant. If not Tungsten for MySQL, how do you handle online schema upgrades? MySQL Cluster (NDBENGINE) is not well-suited to web applications. Cheers

    Read the article

  • Java generic Comparable where subclasses can't compare to eachother

    - by dege
    public abstract class MyAbs implements Comparable<MyAbs> This would work but then I would be able to compare class A and B with each other if they both extend MyAbs. What I want to accomplish however is the exact opposite. So does anyone know a way to get the generic type to be the own class? Seemed like such a simple thing at first... Edit: To explain it a little further with an example. Say you have an abstract class animals, then you extend it with Dogs and ants. I wouldn't want to compare ants with Dogs but I however would want to compare one dog with another. The dog might have a variable saying what color it is and that is what I want to use in the compareTo method. However when it comes to ants I would rather want to compare ant's size than their color. Hope that clears it up. Could possibly be a design flaw however.
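
    The usual way to get "comparable only to your own subclass" is a self-bounded (recursive) type parameter; a sketch using the animals example from the question:

        abstract class Animal<T extends Animal<T>> implements Comparable<T> {
        }

        class Dog extends Animal<Dog> {
            int colour;                        // illustrative field
            @Override
            public int compareTo(Dog other) {  // a Dog only compares to Dogs
                return Integer.compare(this.colour, other.colour);
            }
        }

        class Ant extends Animal<Ant> {
            int size;
            @Override
            public int compareTo(Ant other) {  // an Ant only compares to Ants
                return Integer.compare(this.size, other.size);
            }
        }

        // new Dog().compareTo(new Ant()) no longer compiles, which is the goal.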

    Read the article

  • Getting table schema from a query

    - by Appu
    As per MSDN, SqlDataReader.GetSchemaTable returns column metadata for the query executed. I am wondering whether there is a similar method that will give table metadata for the given query? I mean which tables are involved and what aliases they have. In my application, I get the query and I need to append the where clause programmatically. Using GetSchemaTable(), I can get the column metadata and the table it belongs to. But even though the table has aliases, it still returns the real table name. Is there a way to get the alias name for that table? The following code shows getting the column metadata. const string connectionString = "your_connection_string"; string sql = "select c.id as s,c.firstname from contact as c"; using(SqlConnection connection = new SqlConnection(connectionString)) using(SqlCommand command = new SqlCommand(sql, connection)) { connection.Open(); SqlDataReader reader = command.ExecuteReader(CommandBehavior.KeyInfo); DataTable schema = reader.GetSchemaTable(); foreach (DataRow row in schema.Rows) { foreach (DataColumn column in schema.Columns) { Console.WriteLine(column.ColumnName + " = " + row[column]); } Console.WriteLine("----------------------------------------"); } Console.Read(); } This will give me details of the columns correctly. But when I see BaseTableName for column Id, it is giving contact rather than the alias name c. Is there any way to get the table schema and aliases from a query like the above? Any help would be great!

    Read the article
