Search Results

Search found 12714 results on 509 pages for 'db schema'.

Page 2/509 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Choosing between Berkeley DB Core and Berkeley DB JE

    - by zokier
    I'm designing a Java-based web app and I need a key-value store. Berkeley DB seems a good fit, but there appear to be TWO Berkeley DBs to choose from: Berkeley DB Core, which is implemented in C, and Berkeley DB Java Edition, which is implemented in pure Java. The question is, how do I choose which one to use? With web apps, scalability and performance are quite important (who knows, maybe my idea will become the next YouTube), and I couldn't easily find any meaningful benchmarks between the two. I have yet to familiarize myself with Core's Java API, but I find it hard to believe that it could be much worse than Java Edition's, which seems quite nice. If some other key-value store would be much better, feel free to recommend that too. I'm storing smallish binary blobs, and keys will probably be hashes of the data, or some other unique id.

    Read the article

  • New Release of Oracle Berkeley DB

    - by Eric Jensen
    We are pleased to announce that a new release of Oracle Berkeley DB, version 11.2.5.2.28, is available today. Our latest release includes yet more value-added features for SQLite users, as well as several performance enhancements and new customer-requested features in the key-value pair API. We continue to provide technology leadership, features, and performance for SQLite applications. This release introduces additional features that are not available in native SQLite, and adds functionality allowing customers to create richer, more scalable, more concurrent applications using the Berkeley DB SQL API. This release is compelling to Oracle's customers and partners because it:
    - delivers a complete, embeddable SQL92 database as a library under 1MB in size
    - is drop-in API compatible with SQLite version 3
    - offers no-oversight, zero-touch database administration
    - provides the industrial-quality, battle-tested Berkeley DB B-TREE for concurrent transactional data storage
    New features include:
    - MVCC support for even higher concurrency
    - direct SQL support for HA/replication
    - transactionally protected sequence number generation functions
    - lower memory requirements, shared memory regions, and faster/smaller memory use on startup
    - easier B-TREE page size configuration with the new "db_tuner" utility
    New key-value API features include:
    - HEAP access method for constrained disk-space applications
    - faster QUEUE access method operations for highly concurrent applications -- up to 2-3X faster!
    - a new X/Open compliant XA resource manager, easily integrated with Oracle Tuxedo
    - additional HA/replication management and communication options
    - and a lot more!
    BDB is hands-down the best edge, mobile, and embedded database available to developers. Downloads are available today on the Berkeley DB download page, along with the product documentation.

    Read the article

  • Database with "Open Schema" - Good or Bad Idea?

    - by Claudiu
    The co-founder of Reddit gave a presentation on issues they had while scaling to millions of users. A summary is available here. What surprised me is point 3: Instead, they keep a Thing Table and a Data Table. Everything in Reddit is a Thing: users, links, comments, subreddits, awards, etc. Things keep common attributes like up/down votes, a type, and creation date. The Data table has three columns: thing id, key, value. There's a row for every attribute. There's a row for title, url, author, spam votes, etc. When they added new features they didn't have to worry about the database anymore. They didn't have to add new tables for new things or worry about upgrades. This seems like a terrible idea to me, but it seems to have worked out for Reddit. Is it a good idea in general, though? Or is it a peculiarity of Reddit that happened to work out for them?
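
    For illustration, a minimal SQL sketch of this "open schema" (entity-attribute-value) layout; the table and column names are my assumptions based on the summary above, not Reddit's actual schema:

        -- One row per object, holding only the attributes common to everything.
        CREATE TABLE thing (
            id          INTEGER PRIMARY KEY,
            type        VARCHAR(32) NOT NULL,    -- 'user', 'link', 'comment', ...
            ups         INTEGER NOT NULL DEFAULT 0,
            downs       INTEGER NOT NULL DEFAULT 0,
            created_at  TIMESTAMP NOT NULL
        );

        -- One row per attribute; everything else is a key/value pair.
        CREATE TABLE data (
            thing_id  INTEGER NOT NULL REFERENCES thing(id),
            key       VARCHAR(64) NOT NULL,      -- 'title', 'url', 'author', ...
            value     TEXT,
            PRIMARY KEY (thing_id, key)
        );

    The trade-off is visible immediately: adding a feature needs no ALTER TABLE, but reassembling a single object takes a join plus a pivot over its rows in "data".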

    Read the article

  • JavaOne 2012 session slides: "Dev Berkeley DB & DB Mobile Server for Java Embedded Tech"

    - by hinkmond
    The latest JavaOne 2012 slides are available on the Web. Here's the presentation that Eric Jensen and I did on "Developing Berkeley DB & DB Mobile Server for Java Embedded Technology". Enjoy! See the session slides here (the link opens them in a new window). It was fun to present this talk at JavaOne 2012 with Eric. We had some good questions from the audience. Let me know in the comments if you have any further questions. I'll pass all the good questions to Eric and keep the bad questions for myself. Hinkmond

    Read the article

  • Why is my test XML failing with a very simple XSD schema?

    - by JSteve
    Hi all, I am a bit of a novice with XML Schema. I would be grateful if somebody could help me understand why my XML is not being validated against the schema. Here is my schema:

        <?xml version="1.0" encoding="UTF-8"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   targetNamespace="http://www.example.org/testSchema"
                   xmlns="http://www.example.org/testSchema">
          <xs:element name="Employee">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="Name">
                  <xs:complexType>
                    <xs:sequence>
                      <xs:element name="FirstName" />
                      <xs:element name="LastName" />
                    </xs:sequence>
                  </xs:complexType>
                </xs:element>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    Here is my test XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <Employee xmlns="http://www.example.org/testSchema">
          <Name>
            <FirstName>John</FirstName>
            <LastName>Smith</LastName>
          </Name>
        </Employee>

    I am getting the following error from the Eclipse XML editor/validator:

        cvc-complex-type.2.4.a: Invalid content was found starting with element 'Name'. One of '{Name}' is expected.

    I could not understand what is wrong with this schema or my XML.

    Read the article

  • Deleting Multiple Rows with Zend DB Table Problem

    - by davykiash
    I have this data in my DB:

        Col1  Col2
        DA    Data1
        DA    Data2
        DA    Data3
        DA    Data4
        DA    Data5

    I would like to delete all the rows WHERE Col1 = DA using my Zend DB Table adapter. The code below does not seem to work for multiple rows:

        public function delete($key)
        {
            $this->delete('Col1 = "'.$key.'"');
        }

    How can I adjust it so that I can delete multiple rows?
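
    For reference, the SQL the adapter call should end up producing is a single statement along these lines (the table name is hypothetical, since the question never names it):

        DELETE FROM my_table WHERE Col1 = 'DA';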

    Read the article

  • Announcing Berkeley DB Java Edition Major Release

    - by Eric Jensen
    Berkeley DB Java Edition 5.0 was just released. There are a number of new features, enhancements, and options in there that our users have been asking for. Chief among them is a new class called DiskOrderedCursor, which greatly increases performance of systems using spinning-platter magnetic hard drives. A number of users expressed interest in this feature, including Alex Feinberg of LinkedIn. Berkeley DB Java Edition is part of Project Voldemort, a distributed key/value database used by LinkedIn. There have been many other improvements and optimizations. Concurrency is significantly improved, as is the performance of update and delete operations. New and interesting methods include Environment.preload, which allows multiple databases to be preloaded simultaneously. New Cursor methods enable more effective searching through the database. We continue to enhance Berkeley DB Java Edition's High Availability as well. One new feature is the ability to open a replicated node read-only when the master is unavailable. This can allow critical systems to continue offering some functionality, even during a network or master node failure. There's a lot more in release 5.0. I encourage you to take a look at the extensive changelog yourself. As always, you can download the new release and try it out here: http://www.oracle.com/technetwork/database/berkeleydb/downloads/index.html

    Read the article

  • Practical MySQL schema advice for an eCommerce store - Products & Attributes

    - by Gravy
    I am currently planning my first eCommerce application (MySQL & the Laravel framework). I have various products, which all have different attributes. Describing products very simply: some will have a manufacturer, some will not; some will have a diameter, others will have a width, height, and depth; and others will have a volume.
    Option 1: Create a master products table, and separate tables for specific product types (polymorphic relations). That way, I will not have any unnecessary null fields in the products table.
    Option 2: Create a products table with all possible fields, despite the fact that there will be a lot of null values.
    Option 3: Normalise so that each attribute type has its own table.
    Option 4: Create an attributes table, as well as an attribute_values table with the value being varchar regardless of the actual data type. The products table would have a many:many relationship with the attributes table. (A sketch of this option appears below.)
    Option 5: Put attributes common to all or most products in the products table, and attach attributes specific to a particular category of product to the categories table.
    My thoughts are that I would like to allow easy product filtering and sorting by these attributes. I would also want the frontend to be fast, with less concern over the performance of inserting and updating product records. I'm a bit overwhelmed by the vast implementation options, and cannot find a suitable answer in terms of the best method of approach. Could somebody point me in the right direction? In an ideal world, I would like to offer the kind of functionality seen at http://www.glassesdirect.co.uk/products/ in my eCommerce store. As can be seen in the sidebar, you can select an attribute of the glasses to filter them, e.g. male / female or plastic / metal / titanium etc... Alternatively, should I just dump the MySQL relational database idea and learn mongodb?
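
    A minimal SQL sketch of Option 4; the table and column names are my assumptions, not from the question:

        CREATE TABLE products (
            id    INTEGER PRIMARY KEY,
            name  VARCHAR(255) NOT NULL
        );

        CREATE TABLE attributes (
            id    INTEGER PRIMARY KEY,
            name  VARCHAR(64) NOT NULL       -- 'manufacturer', 'diameter', 'width', ...
        );

        -- The join table carries the value, so products:attributes is
        -- many-to-many with a payload.
        CREATE TABLE attribute_values (
            product_id    INTEGER NOT NULL REFERENCES products(id),
            attribute_id  INTEGER NOT NULL REFERENCES attributes(id),
            value         VARCHAR(255),      -- everything stored as varchar
            PRIMARY KEY (product_id, attribute_id)
        );

        -- Sidebar-style filtering then becomes one join per selected filter:
        SELECT p.*
        FROM products p
        JOIN attribute_values av ON av.product_id = p.id
        JOIN attributes a       ON a.id = av.attribute_id
        WHERE a.name = 'material' AND av.value = 'titanium';

    Each additional filter adds another join, which is the usual performance concern with this design.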

    Read the article

  • Declaring namespace schema with prefix in XSD/XML

    - by user1493537
    I am new to XML and I have a couple of questions about prefixes. I need to "add the root schema element and insert the declaration for the XML schema namespace using the xc prefix. Set the default namespace and target of the schema to the URI test.com/test1". I am doing:

        <xc:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   xmlns="http://test.com/test1"
                   targetNamespace="http://test.com/test1">
        </xc:schema>

    Is this correct? The next one is: "insert the root schema element, declaring the XML schema namespace with the xc prefix. Declare the library namespace using the lib prefix and the contributors namespace using the cont prefix. Set the default namespace and the schema target to URI test.com/test2". The library URI is http://test.com/library and the contributor URI is test.com/contributor. I am doing:

        <xc:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   xmlns:lib="http://test.com/library"
                   xmlns:clist="http://test.com/contributor"
                   targetNamespace="http://test.com/test2">
        </xc:schema>

    Does this look right? I am confused with prefixes and all. Thanks for the help.

    Read the article

  • In-document schema declarations and lxml

    - by shylent
    As per the official documentation of lxml, if one wants to validate an XML document against an XML schema document, one has to:
    - construct the XMLSchema object (basically, parse the schema document)
    - construct the XMLParser, passing the XMLSchema object as its schema argument
    - parse the actual XML document (instance document) using the constructed parser
    There can be variations, but the essence is pretty much the same no matter how you do it: the schema is specified 'externally' (as opposed to specifying it inside the actual XML document). If you follow this procedure, the validation occurs, sure enough, but if I understand it correctly, that completely ignores the whole idea of the schemaLocation and noNamespaceSchemaLocation attributes from xsi. This introduces a whole bunch of limitations, starting with the fact that you have to deal with the instance<-schema relation all by yourself (either store it externally or write some hack to retrieve the schema location from the root element of the instance document); you cannot validate the document using multiple schemata (say, when each schema governs its own namespace), and so on. So the question is: maybe I am missing something completely trivial, or doing it wrong? Or are my statements about lxml's limitations regarding schema validation true? To recap, I'd like to be able to:
    - have the parser use the schema location declarations in the instance document at parse/validation time
    - use multiple schemata to validate an XML document
    - declare schema locations on non-root elements (not of extreme importance)
    Maybe I should look for a different library? Although that'd be a real shame; lxml is the de-facto XML processing library for Python and is regarded by everyone as the best one in terms of performance/features/convenience (and rightfully so, to a certain extent).

    Read the article

  • Yammer, Berkeley DB, and the 3rd Platform

    - by Eric Jensen
    If you read the news, you know that the latest high-profile social media acquisition was just confirmed. Microsoft has agreed to acquire Yammer for $1.2 billion. Personally, I believe that Yammer's amazing success can be mainly attributed to their wise decision to use Berkeley DB Java Edition as their backend data store. :-) I'm only kidding, of course. However, as Ryan Kennedy points out in the video I recently blogged about, BDB JE did provide the right feature set that allowed them to reliably grow their business. Which in turn allowed them to focus on their core value add. As it turns out, their 'add' is quite valuable! This actually makes sense to me, a lot more sense than certain other recent social acquisitions, and here's why. Last year, IDC declared that we are entering a new computing era, the era of the "3rd Platform." In case you're curious, the first 2 were terminal computing and client/server computing, IIRC. Anyway, this 3rd one is more complicated. This year, IDC refined the concept further. It now involves 4 distinct buzzwords: cloud, social, mobile, and big data. Yammer is a social media platform that runs in the cloud, designed to be used from mobile devices. Their approach, using Berkeley DB Java Edition with High Availability, qualifies as big data. This means that Yammer is sitting right smack in the center of IDC's new computing era. Another way to put it is: the folks at Yammer were prescient enough to predict where things were headed, and get there first. They chose Berkeley DB to handle their data. Maybe you should too!

    Read the article

  • Oracle Schema Design: Separate Schema with I/O Overhead?

    - by Guru
    We are designing a database schema for a new system based on Oracle 11gR1. We have identified a main schema which would have close to 100 tables; these will be accessed from the front-end Java application. We have a requirement to audit the values which get changed in close to 50 tables, and this has to be done for every row. This means it is possible that, for a single row in MYSYS.T1, there might be 50 (or more) rows in the MYSYS_AUDIT.T1_AUD table. We might be keeping the old values of every column entry as well as the new values available from T1. Our DBA gave an observation advising against this method, because he said a separate schema means an extra I/O for every operation. Basically, the AUDIT schema would be used only to do some analysis and enter values (thus SELECT and INSERT). Is it true that "a separate schema means an extra I/O"? I could not find a justification. It appears logical to me, as the AUDIT data should not be tampered with, so a separate schema. Also, we designed a separate schema for archiving some tables from MYSYS. From MYSYS_ARC the tables might be backed up onto tape or deleted after sufficient time.
    A few stats: a few tables (close to 20 or 30) in the MYSYS schema could grow to around 50M rows. We have asked for a total disk space of 4 TB. The MYSYS_AUDIT schema might hold 10 times the data of MYSYS, but we won't keep it for more than 3 months.
    Questions:
    - Given all these, can you suggest any improvements?
    - Does a separate schema affect disk I/O? (one extra I/O for every schema?)
    - Any general suggestions?
    Figure:

        +-------------------+             +-------------------+
        | MYSYS             |             | MYSYS_AUDIT       |
        |                   |             |                   |
        |   1. T1           |             |   1. T1_AUD       |
        |   2. T2           |             |   2. T2_AUD       |
        |   3. T3           |------------>|   3. T3_AUD       |
        |   4. T4           |  (SELECT,   |   4. T4_AUD       |
        |   .               |   INSERT)   |   .               |
        |   .               |             |   .               |
        |   .               |             |   .               |
        | 100. T100         |             |  50. T50_AUD      |
        +-------------------+             +-------------------+
                 |
                 | (INSERT)
                 v
        +-------------------+
        | MYSYS_ARC         |
        |                   |
        |   1. T1_ARC       |
        |   2. T2_ARC       |
        |   3. T3_ARC       |
        |   4. T4_ARC       |
        |   .               |
        |   .               |
        |   .               |
        | 100. T100_ARC     |
        +-------------------+

    Apart from this, we have two more schemas with only read-only rights, but they are mainly for ad-hoc purposes and we don't mind the performance on them.
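
    For what it's worth, in Oracle a schema is a logical namespace; physical I/O is governed by the tablespaces and datafiles the tables live in. A minimal sketch of how the audit schema might be laid out so the I/O question becomes a tablespace-placement question (all names and file paths below are hypothetical):

        -- Place each schema's tables in its own tablespace; that is what
        -- actually determines where the I/O lands.
        CREATE TABLESPACE mysys_data DATAFILE '/u01/oradata/mysys01.dbf' SIZE 10G;
        CREATE TABLESPACE audit_data DATAFILE '/u02/oradata/audit01.dbf' SIZE 10G;

        CREATE USER mysys_audit IDENTIFIED BY some_password
            DEFAULT TABLESPACE audit_data QUOTA UNLIMITED ON audit_data;

        -- An audit row keeps old and new values alongside the change metadata.
        CREATE TABLE mysys_audit.t1_aud (
            audit_id    NUMBER PRIMARY KEY,
            t1_id       NUMBER NOT NULL,
            changed_at  TIMESTAMP DEFAULT SYSTIMESTAMP,
            col_name    VARCHAR2(30) NOT NULL,
            old_value   VARCHAR2(4000),
            new_value   VARCHAR2(4000)
        );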

    Read the article

  • Schema changes with replication

    - by Even Mien
    What are the steps to make a schema change to a SQL Server 2005 database using transactional replication? I'm trying to add a database column. I thought that if I removed the article for the table, made the schema change, and then added the article for the table back, the schema change would replicate. I am now getting the following error every minute or so:

        SQL Server errors
        Replication-Replication Distribution Subsystem: agent [jobname] failed.
        Invalid column name 'NewColumn'.
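
    A sketch of the usual approach in SQL Server 2005 transactional replication: DDL executed on the publisher is propagated automatically when the publication's replicate_ddl option is on, so the column can be added without dropping and re-adding articles (the publication and table names here are hypothetical):

        -- Make sure schema-change replication is enabled on the publication.
        EXEC sp_changepublication
            @publication = N'MyPublication',
            @property    = N'replicate_ddl',
            @value       = 1;

        -- Then a plain ALTER TABLE on the publisher replicates to subscribers.
        ALTER TABLE dbo.MyTable ADD NewColumn INT NULL;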

    Read the article

  • MongoDB: using db.help() on a particular db command

    - by user1325696
    When I type db.help(), it returns:

        DB methods:
        db.addUser(username, password[, readOnly=false])
        db.auth(username, password)
        ...
        db.printShardingStatus()
        ...
        db.fsyncLock()    flush data to disk and lock server for backups
        db.fsyncUnlock()  unlocks server following a db.fsyncLock()

    I'd like to find out how to get more detailed help for a particular command. The problem was with printShardingStatus, as it returned "too many chunks to print, use verbose if you want to print":

        mongos> db.printShardingStatus()
        --- Sharding Status ---
        sharding version: { "_id" : 1, "version" : 3 }
        shards:
            { "_id" : "shard0000", "host" : "localhost:10001" }
            { "_id" : "shard0001", "host" : "localhost:10002" }
        databases:
            { "_id" : "admin", "partitioned" : false, "primary" : "config" }
            { "_id" : "dbTest", "partitioned" : true, "primary" : "shard0000" }
            dbTest.things chunks:
                shard0001  12
                shard0000  19
            too many chunks to print, use verbose if you want to force print

    I found that for this particular command I can specify a boolean parameter, db.printShardingStatus(true), which wasn't shown by db.help().

    Read the article

  • Add schema to search_path in PostgreSQL

    - by veilig
    I'm in the process of moving applications over from everything being in the public schema to each application having its own schema. For each application, I have a small script that will create the schema and then create the tables, functions, etc. in that schema. Is there any way to automatically add a newly created schema to the search_path? Currently, the only way I see is to find the user's current path with SHOW search_path; and then add the new schema to it with SET search_path TO xxx,yyy,zzz;. I would like some way to just say "append schema zzz to the user's search_path". Is this possible?
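
    A sketch of two options, assuming a role named app_user (the role name is hypothetical). There is no built-in "append", but the current value can be read back and re-set, and a per-role setting persists across sessions:

        -- Persist a search_path for the role; new sessions pick it up.
        ALTER ROLE app_user SET search_path = zzz, yyy, xxx, public;

        -- Emulate "append" in the current session: read the current value
        -- and set it again with the new schema added on the end.
        SELECT set_config('search_path',
                          current_setting('search_path') || ', zzz',
                          false);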

    Read the article

  • How to retrieve all the records from a Berkeley DB in Ruby

    - by Federico Builes
    I'd like to be able to get all the key-value pairs stored in a Berkeley DB using the Ruby bindings from http://github.com/mattbauer/bdb/tree/master but I'm not sure how to proceed. Any pointers will be appreciated.
    UPDATE: Here's a small script that loops over the keys and prints them, based on Pax's answer:

        require 'rubygems'
        require 'bdb'

        # Open an environment and a B-tree database inside it.
        env = Bdb::Env.new(0)
        env.open('foo', Bdb::DB_CREATE, 0)
        db = env.db
        db.open(nil, 'db1.db', nil, Bdb::Db::BTREE, Bdb::DB_CREATE, 0)

        db.put(nil, 'key', 'value', 0)
        db.put(nil, 'key1', 'value1', 0)
        db.put(nil, 'key2', 'value2', 0)

        # Walk the database with a cursor, from the first record to the last.
        dbc = db.cursor(nil, 0)
        key, val = dbc.get(nil, nil, Bdb::DB_FIRST)
        while key
          p key, val
          key, val = dbc.get(nil, nil, Bdb::DB_NEXT)
        end

        dbc.close
        db.close(0)
        env.close

    Read the article

  • Zend DB returning NULL value

    - by davykiash
    I have the following query in a class that extends Zend_Db_Table_Abstract:

        $select = $this->select()
            ->from('expense_details',
                   array('SUM(expense_details_amount) AS total'))
            ->where('YEAR(expense_details_date) = ?', '2010')
            ->where('MONTH(expense_details_date) = ?', '01')
            ->where('expense_details_linkemail = ?', '[email protected]');

    However, it is returning a NULL value despite its "equivalent" returning the desired value:

        SELECT SUM(expense_details_amount) AS total
        FROM expense_details
        WHERE YEAR(expense_details_date) = '2010'
          AND MONTH(expense_details_date) = '01'
          AND expense_details_linkemail = '[email protected]'

    Is my Zend_Db_Table construct above correct?

    Read the article

  • Trouble with Berkeley DB JE Base API Secondary Databases and Sequences

    - by milosz
    I have a class Document which consists of Id (int) and Url (String). I would like to have a primary index on Id and a secondary index on Url. I would also like to have a sequence for Id auto-incrementation. So I create a SecondaryDatabase and then I create a Sequence. During initialisation of the Sequence I get an exception:

        Exception in thread "main" java.lang.IllegalArgumentException
            at com.sleepycat.util.UtfOps.getCharLength(UtfOps.java:137)
            at com.sleepycat.util.UtfOps.bytesToString(UtfOps.java:259)
            at com.sleepycat.bind.tuple.TupleInput.readString(TupleInput.java:152)
            at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.MyDocumentBiding.entryToObject(MyDocumentBiding.java:12)
            at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.MyDocumentBiding.entryToObject(MyDocumentBiding.java:1)
            at com.sleepycat.bind.tuple.TupleBinding.entryToObject(TupleBinding.java:76)
            at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.UrlKeyCreator.createSecondaryKey(UrlKeyCreator.java:20)
            at com.sleepycat.je.SecondaryDatabase.updateSecondary(SecondaryDatabase.java:835)
            at com.sleepycat.je.SecondaryTrigger.databaseUpdated(SecondaryTrigger.java:42)
            at com.sleepycat.je.Database.notifyTriggers(Database.java:2004)
            at com.sleepycat.je.Cursor.putNotify(Cursor.java:1692)
            at com.sleepycat.je.Cursor.putInternal(Cursor.java:1616)
            at com.sleepycat.je.Cursor.putNoOverwrite(Cursor.java:663)
            at com.sleepycat.je.Sequence.<init>(Sequence.java:188)
            at com.sleepycat.je.Database.openSequence(Database.java:546)
            at pl.edu.mimuw.zbd.berkeley.zadanie.rozwiazanie.MyFullTextSearchEngine.init(MyFullTextSearchEngine.java:131)
            at pl.edu.mimuw.zbd.berkeley.zadanie.testy.MyFullTextSearchEngineTest.main(MyFullTextSearchEngineTest.java:18)

    It seems that during the initialisation of the sequence the secondary database is forced to update. When I debug the entryToObject method of MyDocumentBiding, the bytes that it tries to convert to an object seem random. What am I doing wrong?

    Read the article

  • Sqlite catalog and schema settings (for MyBatis generator)

    - by user1769754
    I have been trying to use the MyBatis generator on a SQLite database but can't seem to find what settings to use for the XML attributes catalog and schema. The generator errors with "Generation Warnings Occured: Table configuration with catalog main, schema sqlite_master, and table testTable did not resolve to any tables". I can't find much from the SQLite website other than this, which gives something like catalog = main, schema = sqlite or sqlite_master. My generator XML file is:

        <?xml version="1.0" encoding="UTF-8" ?>
        <!DOCTYPE generatorConfiguration PUBLIC
            "-//mybatis.org//DTD MyBatis Generator Configuration 1.0//EN"
            "http://mybatis.org/dtd/mybatis-generator-config_1_0.dtd" >
        <generatorConfiguration>
          <context id="context">
            <jdbcConnection driverClass="org.sqlite.JDBC"
                            connectionURL="jdbc:sqlite:testDB.sqlite"
                            userId="" password=""></jdbcConnection>
            <javaModelGenerator targetPackage="model" targetProject="test/src"></javaModelGenerator>
            <sqlMapGenerator targetPackage="model" targetProject="test/src"></sqlMapGenerator>
            <javaClientGenerator targetPackage="model" targetProject="test" type="XMLMAPPER"></javaClientGenerator>
            <table catalog="main" schema="sqlite_master" tableName="testTable">
              <property name="useActualColumnNames" value="true"/>
            </table>
          </context>
        </generatorConfiguration>

    I have also tried other combinations like:

        <table catalog="main" schema="sqlite" tableName="testTable" >
        <table schema="sqlite" tableName="testTable" >
        <table schema="sqlite_master" tableName="testTable" >
        <table schema="main.testTable" tableName="testTable" >

    Does anyone here know the right settings?

    Read the article

  • VB.Net: Validate an XML file against a schema (strange problem)

    - by Apeksha
    I have written a small XML validator that takes in an XML file and an XML schema and validates the XML file against that schema. It works well, except for an XML file with this content:

        <?xml version="1.0" encoding="utf-8"?>
        <xc:program xmlns:xc="http:\\www.something.com\Schema\XC10" xc:version="4.0.22.0" >
          <xc:namespaceDecls>
            <xc:namespaceDecl xc:namespaceDeclURI="urn:swift:xsd:abc">
              <xc:namespaceDeclPrefix>n</xc:namespaceDeclPrefix>
            </xc:namespaceDecl>
          </xc:namespaceDecls>
        </xc:program>

    I tried to validate this XML file against a bunch of different schemas. No matter which schema I select, this XML file comes out as valid. What is it that I am missing? Here is the relevant piece of code:

        'Create a schema cache and add the given schema to it.
        Dim schemaCache As New Schema.XmlSchemaSet
        schemaCache.Add(targetNamespace, schemaFilename)

        'Create an XML DOMDocument object.
        Dim xmlDom As New XmlDocument

        'Assign the schema cache to the DOM document's schemas collection.
        xmlDom.Schemas = schemaCache

        'Load the selected file as the DOM document and validate it.
        xmlDom.Load(xmlFilename)
        xmlDom.Validate(AddressOf ValidationCallBack)

    Read the article

  • Automated Oracle Schema Migration Tool

    - by Dave Jarvis
    What are some tools (commercial or OSS) that provide a GUI-based mechanism for creating schema upgrade scripts? To be clear, here are the tool's responsibilities:
    - Obtain a connection to the recent schema version (called "source").
    - Obtain a connection to the previous schema version (called "target").
    - Compare all schema objects between source and target.
    - Create a script to make the target schema equivalent to the source schema (an "upgrade script").
    - Create a rollback script to revert the source schema, used if the upgrade script fails (at any point).
    - Create individual files for schema objects.
    The software must:
    - Use ALTER TABLE instead of DROP and CREATE for renamed columns (see the example script pair after this excerpt).
    - Work with Oracle 10g or greater.
    - Create scripts that can be batch executed (via the command line).
    - Have a trivial installation process.
    - (Bonus) Create scripts that can be executed with SQL*Plus.
    Here are some examples (from StackOverflow, ServerFault, and Google searches):
    - Change Manager
    - Oracle SQL Developer
    Software that does not meet the criteria, or cannot be evaluated, includes:
    - TOAD
    - PL/SQL Developer - invalid SQL*Plus statements; does not produce ALTER statements.
    - SQL Fairy - no installer; complex installation process; poorly documented.
    - DBDiff - crippled data set evaluation; poor customer support.
    - OrbitDB - crippled data set evaluation.
    - SchemaCrawler - no easily identifiable download version for Oracle databases.
    - SQL Compare - SQL Server, not Oracle.
    - LiquiBase - requires changing the development process; no installer; config files must be edited manually; does not recognize its own baseUrl parameter.
    The only acceptable crippling of the evaluation version is by time. Crippling by restricting the number of tables and views hides possible bugs that are only visible during an attempt to migrate hundreds of tables and views.
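
    As a concrete example, here is the kind of upgrade/rollback script pair such a tool should emit for a renamed column, using Oracle syntax (the table and column names are hypothetical):

        -- upgrade.sql: rename in place instead of DROP and CREATE,
        -- which preserves the column's data.
        ALTER TABLE customers RENAME COLUMN cust_nm TO customer_name;

        -- rollback.sql: the inverse statement, run if the upgrade fails.
        ALTER TABLE customers RENAME COLUMN customer_name TO cust_nm;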

    Read the article

  • Testing for Active Directory Schema modification (not upgrade)

    - by Darktux
    I am trying to test a schema modification: I need to add one of our attributes to the global catalog by modifying the schema, initially in a lab which is an exact replica. My questions are below:
    - What tests need to be done post schema change to determine if it is safe for production?
    - Apart from measuring changes in DIT size post change, is there a way to find the whole size increase of adding an attribute to the GC before the change?
    Please let me know if any extra questions or info are required.

    Read the article

  • JPA 2.1 Schema Generation (TOTD #187)

    - by arungupta
    This blog explained some of the key features of JPA 2.1 earlier. Since then, Schema Generation has been added to JPA 2.1. This Tip Of The Day (TOTD) will provide more details about this new feature in JPA 2.1.
    Schema Generation refers to the generation of database artifacts like tables, indexes, and constraints in a database schema. It may or may not involve generation of a proper database schema, depending upon the credentials and authorization of the user. This helps in prototyping your application, where the required artifacts are generated either prior to application deployment or as part of EntityManagerFactory creation. This is also useful in environments that require provisioning a database on demand, e.g. in a cloud. This feature will allow your JPA domain object model to be directly generated in a database. The generated schema may need to be tuned for the actual production environment. This use case is supported by allowing the schema generation to occur into DDL scripts, which can then be further tuned by a DBA.
    The following set of properties, in persistence.xml or specified during EntityManagerFactory creation, controls the behaviour of schema generation:
    - javax.persistence.schema-generation-action: controls the action to be taken by the persistence provider. Values: "none", "create", "drop-and-create", "drop".
    - javax.persistence.schema-generation-target: controls whether the schema is to be created in the database, whether DDL scripts are to be created, or both. Values: "database", "scripts", "database-and-scripts".
    - javax.persistence.ddl-create-script-target, javax.persistence.ddl-drop-script-target: control the target locations for the writing of scripts. Writers are pre-configured for the persistence provider. Need to be specified only if scripts are to be generated. Values: java.io.Writer (e.g. MyWriter.class) or URL strings.
    - javax.persistence.ddl-create-script-source, javax.persistence.ddl-drop-script-source: specify locations from which DDL scripts are to be read. Readers are pre-configured for the persistence provider. Values: java.io.Reader (e.g. MyReader.class) or URL strings.
    - javax.persistence.sql-load-script-source: specifies the location of a SQL bulk load script. Values: java.io.Reader (e.g. MyReader.class) or a URL string.
    - javax.persistence.schema-generation-connection: the JDBC connection to be used for schema generation.
    - javax.persistence.database-product-name, javax.persistence.database-major-version, javax.persistence.database-minor-version: needed if scripts are to be generated and there is no connection to the target database. Values are those obtained from JDBC DatabaseMetaData.
    - javax.persistence.create-database-schemas: whether the persistence provider needs to create a schema in addition to creating database objects such as tables, sequences, constraints, etc. Values: "true", "false".
    Section 11.2 in the JPA 2.1 specification defines the annotations used for the schema generation process. For example, @Table, @Column, @CollectionTable, @JoinTable, and @JoinColumn are used to define the generated schema. Several layers of defaulting may be involved. For example, the table name is defaulted from the entity name, and the entity name (which can be specified explicitly as well) is defaulted from the class name. However, annotations may be used to override or customize the values. The following entity class:

        @Entity
        public class Employee {
            @Id private int id;
            private String name;
            . . .
            @ManyToOne
            private Department dept;
        }

    is generated in the database with the following attributes:
    - Maps to the EMPLOYEE table in the default schema.
    - The "id" field is mapped to the ID column as the primary key.
    - "name" is mapped to the NAME column with a default VARCHAR(255). The length of this field can be easily tuned using @Column.
    - @ManyToOne is mapped to the DEPT_ID foreign key column. Can be customized using @JoinColumn.
    In addition to these properties, a couple of new annotations are added in JPA 2.1:
    - @Index: an index for the primary key is generated by default in a database. This new annotation allows additional indexes to be defined, over a single column or multiple columns, for better performance. It is specified as part of @Table, @SecondaryTable, @CollectionTable, @JoinTable, and @TableGenerator. For example:

        @Table(indexes = {@Index(columnList="NAME"),
                          @Index(columnList="DEPT_ID DESC")})
        @Entity
        public class Employee {
            . . .
        }

      The generated table will have a default index on the primary key. In addition, two new indexes are defined on the NAME column (default ascending) and on the foreign key that maps to the department, in descending order.
    - @ForeignKey: used to define a foreign key constraint, or to otherwise override or disable the persistence provider's default foreign key definition. Can be specified as part of @JoinColumn(s), @MapKeyJoinColumn(s), and @PrimaryKeyJoinColumn(s). For example:

        @Entity
        public class Employee {
            @Id private int id;
            private String name;
            @ManyToOne
            @JoinColumn(foreignKey=@ForeignKey(
                foreignKeyDefinition="FOREIGN KEY (MANAGER_ID) REFERENCES MANAGER"))
            private Manager manager;
            . . .
        }

      In this entity, the employee's manager is mapped by the MANAGER_ID column in the MANAGER table. The value of foreignKeyDefinition would be a database-specific string.
    A complete replay of Linda's talk at JavaOne 2012 can be seen here (click on CON4212_mp4_4212_001 in Media). These features will be available in GlassFish 4 promoted builds in the near future. JPA 2.1 will be delivered as part of Java EE 7. The different components in the Java EE 7 platform are tracked here. The JPA 2.1 Expert Group has released Early Draft 2 of the specification. Sections 9.4 and 11.2 provide all the details about Schema Generation. The latest javadocs can be obtained from here. And the JPA EG would appreciate feedback.
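
    As a rough illustration, DDL along these lines is what a provider could emit for the Employee example above with schema generation turned on. The exact SQL is provider- and database-specific; this sketch is an assumption, not output copied from any JPA implementation:

        CREATE TABLE EMPLOYEE (
            ID       INTEGER NOT NULL,    -- @Id field
            NAME     VARCHAR(255),        -- default length, tunable via @Column
            DEPT_ID  INTEGER,             -- @ManyToOne join column
            PRIMARY KEY (ID)
        );

        -- Extra indexes requested through @Index on @Table:
        CREATE INDEX IDX_EMPLOYEE_NAME ON EMPLOYEE (NAME);
        CREATE INDEX IDX_EMPLOYEE_DEPT ON EMPLOYEE (DEPT_ID DESC);

        -- Foreign key for the @ManyToOne relationship:
        ALTER TABLE EMPLOYEE ADD CONSTRAINT FK_EMPLOYEE_DEPT
            FOREIGN KEY (DEPT_ID) REFERENCES DEPARTMENT (ID);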

    Read the article

  • Validating an XML document fragment against XML schema

    - by shylent
    Terribly sorry if I've failed to find a duplicate of this question. I have a certain document with a well-defined structure, which I am expressing through an XML schema. That data structure is operated upon by a RESTful service, so various nodes and combinations of nodes (not the whole document, but fragments of it) are exposed as "resources". Naturally, I am doing my own validation of the actual data, but it makes sense to validate the incoming/outgoing data against the schema as well (before the fine-grained validation of the data). What I don't quite grasp is how to validate document fragments given the schema definition. Let me illustrate. Imagine the example document structure is:

        <doc-root>
          <el name="foo"/>
          <el name="bar"/>
        </doc-root>

    Rather a trivial data structure. The schema goes something like this:

        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <xsd:element name="doc-root">
            <xsd:complexType>
              <xsd:sequence>
                <xsd:element name="el" type="myCustomType" />
              </xsd:sequence>
            </xsd:complexType>
          </xsd:element>
          <xsd:complexType name="myCustomType">
            <xsd:attribute name="name" use="required" />
          </xsd:complexType>
        </xsd:schema>

    Now imagine I've just received a PUT request to update an 'el' object. Naturally, I would receive not the full document, nor any XML starting with 'doc-root' at its root, but the 'el' element itself. I would very much like to validate it against the existing schema then, but just running it through a validating parser wouldn't work, since it will expect a 'doc-root' at the root. So, again, the question is: how can one validate a document fragment against an existing schema, or, perhaps, how can a schema be written to allow such an approach? Hope it made sense.

    Read the article
