Search Results

Search found 15025 results on 601 pages for 'schema management'.


  • Release/Change management - best approach

    - by Bob Rivers
    I asked this question a year ago on Stack Overflow and never got a good answer. Since Programmers seems to be a better place to ask it, I'll give it a try... What is the better way to work with release management? More specifically, what would be the best way to release packages? For example, assume that you have a relatively stable system and a good quality assurance (QA) process. How do you prefer to release new versions? Let's assume that we are talking about a mid-to-large "centralized" web system (no clients), developed in-house, that can be considered "vital" to a corporation's operations. I tend to prefer releasing packages at regular intervals, no greater than 1 to 3 months apart. During this period, I include fixes and improvements into the package and deploy to the production environment only once. But I've seen some people who prefer to push small changes to production with greater frequency. These people claim that this makes it easier to identify bugs that slipped through the QA process: between a package with 10 changes and another with only 1, it is much easier to know what caused the problem in the package with just one change... What is your opinion?


  • apt-get failed install of libg15, all package management is failing

    - by Stifle
    I was trying to get my Logitech G510 keyboard's backlights working, so I went into the Synaptic Package Manager and marked libg15, g15daemon, and all the other associated packages. Synaptic reported a failed install. Now all package management is failing due to libg15 being "halfway installed." Some commands I have tried to fix the problem follow...

        root@bt:~# apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: The package libg15 needs to be reinstalled, but I can't find an archive for it.

        root@bt:~# sudo apt-get autoremove
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: The package libg15 needs to be reinstalled, but I can't find an archive for it.

        root@bt:~# sudo apt-get -f purge libg15
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: The package libg15 needs to be reinstalled, but I can't find an archive for it.

        root@bt:~# sudo dpkg --configure -a
        dpkg: dependency problems prevent configuration of g15macro:
         g15macro depends on g15daemon; however:
          Package g15daemon is not configured yet.
        dpkg: error processing g15macro (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of g15stats:
         g15stats depends on g15daemon; however:
          Package g15daemon is not configured yet.
        dpkg: error processing g15stats (--configure):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         g15macro
         g15stats

    I'm not too computer savvy. Any help would be much appreciated!!! Note: I'm using Ubuntu 10.04 under BackTrack 5 R3.


  • Stakeholder Management in OUM

    - by user719921
    Where is Stakeholder Management in OUM? Stakeholder Management typically falls into the purview of the Project Manager, which means much of the associated guidance is found in the OUM Manage Focus Area (a.k.a. Manage). There is no process in Manage named Stakeholder Management, but this "touch point" can be found in a variety of other processes, including Bid Transition (BT), Communication Management (CMM), and Organizational Change Management (OCHM).

    • Stakeholder management starts in the Bid Transition process with Stakeholder Analysis.
    • This Stakeholder Analysis is used to build the Project Team Communication Plan in the Communication Management process.
    • Stakeholder management should be executed during the Execution and Control phase. For example, as issues are resolved, the project manager should take the action item to follow up with the affected stakeholders to ensure they are aware that the issue has been resolved.
    • The broader topic of stakeholder management is also addressed very thoroughly in the Organizational Change Management process in the Implement Focus Area, which is a touch point to the Organizational Change Management process in Manage.

    Check it out and let me know your thoughts!


  • Agile project management, agile development: early integration

    - by Matías Fidemraizer
    I believe that agile works if everything is agile. In the software development area, in my opinion, if team members' code is integrated early, the code will be more in sync, and this has a lot of pros:

    • Early integration helps team members avoid painful merges.
    • It encourages better coding habits, because everyone makes sure they don't break co-workers' code every day.
    • Both developers and architects (code reviewers) may detect bad design decisions or just wrong development directions in real time, preventing useless work.

    Actually, I'm talking about getting the latest version of the code base and checking your own code in to source control on a daily basis. When you start your coding day (i.e. when you arrive at work), your first action is updating your code base with the latest version from source control. On the other hand, about an hour before you leave work and go home, your last action is checking your code in to source control and making sure that your day's work doesn't break the project's build process. Rather than updating and checking in your code only once an entire task is finished, I believe the best approach is setting small, flexible personal milestones and checking in the code once you finish one of these. I really believe that this coding approach fits better with the agile project management concept. Do you know of any document, blog post, wiki, article, or whatever that you can suggest that is in sync with my opinion? And do you find any problems with this approach? Thank you in advance.


  • Object pools for efficient resource management

    - by GameDevEnthusiast
    How can I avoid using the default new() to create each object? My previous demo had very unpleasant framerate hiccups during dynamic memory allocations (usually when arrays are resized), and creating lots of small objects, each often containing a single pointer to some DirectX resource, seems like an awful lot of waste. I'm thinking about:

    • Creating a master look-up table to refer to objects by handles (for safety and ease of serialization), much like EntityList in the Source engine.
    • Creating a templated object pool which stores items contiguously (more cache-friendly, fast iteration, etc.), with the stored elements accessed by external systems via the global lookup table. The object pool would use the swap-with-last trick for fast removal (invoking the object's destructor first) and update the corresponding indices in the global table accordingly (when growing/shrinking/moving elements). The elements would be copied via plain memcpy(). (A sketch of this follows below.)

    Is this a good idea? Will it be safe to store objects of non-POD types (e.g. with pointers or vtables) in such containers? Related post: Dynamic Memory Allocation and Memory Management
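    A minimal sketch of the pool described in the second bullet, assuming elements are copy-assignable; all names here are hypothetical, not from the original post. A handle indexes a lookup table that maps to a dense slot, so iteration stays contiguous and removal is swap-with-last:

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        using Handle = std::uint32_t;

        template <typename T>
        class ObjectPool {
        public:
            // Creates an element and returns a stable handle to it.
            Handle create(const T& value) {
                Handle h = static_cast<Handle>(slotOfHandle_.size());
                slotOfHandle_.push_back(items_.size());
                handleOfSlot_.push_back(h);
                items_.push_back(value);
                return h;
            }

            // External systems resolve handles through the lookup table.
            T& get(Handle h) { return items_[slotOfHandle_[h]]; }

            // Swap-with-last removal: O(1), keeps storage dense, and fixes
            // up the table entry of the element that moved into the hole.
            void destroy(Handle h) {
                std::size_t slot = slotOfHandle_[h];
                std::size_t last = items_.size() - 1;
                items_[slot] = items_[last];        // old element is overwritten
                Handle moved = handleOfSlot_[last]; // handle owning the last slot
                handleOfSlot_[slot] = moved;
                slotOfHandle_[moved] = slot;
                items_.pop_back();
                handleOfSlot_.pop_back();
                // A real pool would also recycle 'h' through a free list and
                // tag handles with a generation counter to catch stale use.
            }

            std::size_t size() const { return items_.size(); }
            T* data() { return items_.data(); } // contiguous, cache-friendly

        private:
            std::vector<T> items_;                  // dense element storage
            std::vector<std::size_t> slotOfHandle_; // handle -> slot in items_
            std::vector<Handle> handleOfSlot_;      // slot -> owning handle
        };

    On the non-POD question: plain memcpy() is only well-defined for trivially copyable types (std::is_trivially_copyable), so objects with vtables or owning pointers should be moved via their copy/move assignment operators instead, which is what the sketch does.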


  • Identity Management: The New Olympic Sport

    - by Naresh Persaud
    How Virgin Media Lit Up the London Tube for the Olympics with Oracle

    If you are at OpenWorld and have an interest in Identity Management, this promises to be an exciting session.

    Wed, October 3rd
    Session CON3957: Delivering Secure Wi-Fi on the Tube as an Olympics Legacy from London 2012
    Session Time: 11:45am-12:45pm
    Session Location: Moscone West L3, Room 3003
    Speakers: Perry Banton, IT Architect, Virgin Media; Ben Bulpett, Director, aurionPro SENA

    In this session, Virgin Media, the U.K.'s first combined provider of broadband, TV, mobile, and home phone services, shares how it is providing free secure Wi-Fi services to the London Underground, using Oracle Virtual Directory and Oracle Entitlements Server and leveraging back-end legacy systems that were never designed to be externalized. As an Olympics 2012 legacy, the Oracle architecture will form a platform to be consumed by other Virgin Media services, such as video on demand.


  • Project Management Software / 1 maybe 2 developers

    - by Ominus
    I am looking for software that I can use to "manage" multiple projects (5-10). Here are the features I would like, but any recommendation is welcome:

    • Bug/feature tracking on a per-project basis.
    • Some way to keep all documents, diagrams, specs, and requirements in one place with the project. Better yet, a tool where all these things, or most of them, could be authored.
    • Task management during the development phase, with milestones and estimates/actuals.
    • Git integration.

    I have been doing contract work, and I have been doing really well for myself as far as getting projects, but it's becoming VERY hard to manage everything in an efficient manner. I am trying to learn about best practices when it comes to software development methodologies, and the more I read, the more I realize that I am just managing these projects poorly. I am getting things done, but the more I take on, the less "solid" everything is. I am afraid that if I don't get some good, solid tools and practices in place, I am going to do my customers and myself a disservice. The problem is that there are SO many options that it's hard to weed through them all. I was at a point today where I had decided that I would just code my own (there is some irony here)! Obviously everyone has their likes and dislikes, so I would love to hear from some of you lone programmers about how you manage everything, since our needs aren't exactly the same as what a large team might need. I also want a solution that can scale to 2, maybe 3, developers if I end up hiring some people to help with my workload. Thanks again for your usual insights!


  • Quantify value for management

    - by nivlam
    We have two different legacy systems (Windows services, in this case) that do exactly the same thing, with small differences for the different applications they serve. Both systems' core functionality lies within a shared library. Most of the time, updates occur in the shared library, and we simply deploy the updated library to both systems; the systems themselves rarely change. Since both of these systems do essentially the same thing, our development team would like to consolidate them into a single service. What can I do to convince management to allocate time for such a task? Some of the points I've noted are:

    • Easier maintenance
    • Decreased testing/QA time

    Unfortunately, this isn't enough. They would like us to provide hard numbers on the hours this will save in the future and on how it will speed up future development. Since most of the work is done in the shared library and the systems themselves never change, it's hard for us to quantify how many hours this will save. What kind of arguments can I make to justify the extra work to consolidate these systems?


  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug.

    Problem 1: Debugging users' bug reports

    When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, Unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to debug customers' issues and sort out what strange schema data Oracle was returning.

    Problem 2: Test execution time

    Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless.

    The solution

    To solve these, we needed to be able to populate the schema of a database without actually connecting to it. Well, the IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to connect to the original database. This is what query snapshots do. They are binary files containing the raw, unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time.
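    As a simplified illustration of this record-and-replay idea (the product itself is .NET, reading with IDataReader and writing with BinaryWriter), here is a minimal C++ sketch with every name hypothetical: rows are drained from a live row source into a length-prefixed binary file once, then replayed later through the same interface the population code consumes:

        #include <cstdint>
        #include <fstream>
        #include <string>
        #include <vector>

        // The row-source interface the schema-population code reads through.
        struct RowSource {
            virtual bool next(std::vector<std::string>& row) = 0;
            virtual ~RowSource() = default;
        };

        // Record: drain a live source into a length-prefixed binary file.
        void recordSnapshot(RowSource& live, const std::string& path) {
            std::ofstream out(path, std::ios::binary);
            std::vector<std::string> row;
            while (live.next(row)) {
                std::uint32_t n = static_cast<std::uint32_t>(row.size());
                out.write(reinterpret_cast<const char*>(&n), sizeof n);
                for (const std::string& field : row) {
                    std::uint32_t len = static_cast<std::uint32_t>(field.size());
                    out.write(reinterpret_cast<const char*>(&len), sizeof len);
                    out.write(field.data(), len);
                }
            }
        }

        // Replay: present the recorded rows through the same interface, so
        // the population code cannot tell it is not a live server.
        class SnapshotSource : public RowSource {
        public:
            explicit SnapshotSource(const std::string& path)
                : in_(path, std::ios::binary) {}

            bool next(std::vector<std::string>& row) override {
                std::uint32_t n = 0;
                if (!in_.read(reinterpret_cast<char*>(&n), sizeof n))
                    return false; // end of snapshot
                row.assign(n, std::string());
                for (std::string& field : row) {
                    std::uint32_t len = 0;
                    in_.read(reinterpret_cast<char*>(&len), sizeof len);
                    field.resize(len);
                    if (len) in_.read(&field[0], len);
                }
                return true;
            }

        private:
            std::ifstream in_;
        };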
    Query snapshots also allow us to easily debug a customer's problem; using a simple snapshot generation program, users can generate a query snapshot to send along with a bug report, which we can immediately replay on our machines to debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data.

    Query snapshots implementation

    However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just of populating a single database. Furthermore, although the code population queries (e.g. querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot, to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being deterministic - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly.

    Query snapshots are a significant feature in Schema Compare that really helps us to debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team, helping us fix bugs in the product much faster than we otherwise would be able to.


  • SQL Server Compact - Schema Management

    - by Richard B
    I've been searching for some time for a good solution for managing schema on a SQL Server Compact 3.5 database. I know of several ways of managing schema on SQL Server Express/Standard/Enterprise, but Compact Edition doesn't support the tools required to use the same methodology. Any suggestions/tips? I should expand this to say that it is for 100+ clients with wrapperware software. As the system changes, I need to publish update scripts alongside the new binaries to the client. I was looking for a decent method by which to publish this without having to just hand the client a script file and say "Run this in SSMSE". Most clients are not capable of doing such a thing. A buddy of mine disclosed a partial script on how to handle the SQL Server piece of my task, but he never worked on Compact Edition... It looks like I'll be on my own for this. What I think I've decided to do, and it's going to need a "geek week" to accomplish, is to write some sort of tool much like how WiX and NAnt work, so that I can just write an overzealous XML document to handle the work. If I think it is worthwhile, I'll publish it on CodePlex and/or CodeProject, because I've used both sites a bit to gain a better understanding of concepts for jobs I've done in the past, and I think it's probably worthwhile to give back a little.


  • PHP - Database schema: version control, branching, migrations.

    - by Billiam
    I'm trying to come up with (or find) a reusable system for database schema versioning in PHP projects. There are a number of Rails-style migration projects available for PHP; http://code.google.com/p/mysql-php-migrations/ is a good example. It uses timestamps for migration files, which helps with conflicts between branches.

    The general problem with this kind of system: when development branch A is checked out and you want to check out branch B instead, B may have new migration files. This is fine; migrating to newer content is straightforward. But if branch A has newer migration files, you need to migrate downwards to the nearest shared patch. If branches A and B have significantly different code bases, you may have to migrate down even further. This may mean: check out B, determine the shared patch number, check out A, migrate downwards to this patch. This must be done from A, since the actual applied patches are not available in B. Then check out branch B and migrate to the newest B patch. Reverse the process again when going from B to A.

    Proposed system:

    • When migrating upwards, instead of just storing the patch version, serialize the whole patch in the database for later use, though I'd probably only need the down() method.
    • When changing branches, compare patches that have been run to patches that are available in the destination branch.
    • Determine the nearest shared patch (or oldest difference, maybe) between the db table of run patches and the patches in the destination branch, by ID or hash. Could also look for new or missing patches that are buried under a number of shared patches between the two branches.
    • Automatically migrate down to the nearest shared patch, using the down() methods stored in the db table, and then migrate up to the branch's latest patch.

    My question is: is this system too crazy and/or fraught with consequences to bother developing? My experience with database schema versioning is limited to PHP autopatch, which is an up()-only system requiring filenames with sequential IDs.


  • How to reuse results with a schema for end-of-day stock data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis, like top volume gainers, top price gainers, etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modeled pretty much on it, thusly:

        Symbol - char 6 // primary
        Date   - date   // primary
        Open   - decimal 18, 4
        High   - decimal 18, 4
        Low    - decimal 18, 4
        Close  - decimal 18, 4
        Volume - int

    Right now this table, containing end-of-day (EOD) data, will hold about 3 million rows for 3 years. Later, when I get or need more data, it could be 20 million rows. The front end will be making requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones, and as such is not too costly, time-wise, I assume. But a request like "give me top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc. One option I have is adding a column for each such result, and if the user asks for volume gain in 10 days over 20 days, that would require another table. The total number of such tables could easily cross 100, especially if I start using other results as tables, like MACD-10 or MACD-100, each of which would require its own column. Is this a feasible solution? The other option is keeping the results in cached HTML files and presenting them to the user. I don't have much experience in web development, so to me it looks messy, but I could be wrong (of course!). Is that an option too? Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database being done using Perl. I would like to have a response time of 1-2 seconds.


  • ConfigServer Security and Firewall -- after setup, how much daily management required?

    - by Hope4You
    I'm looking at using ConfigServer Security and Firewall (CSF; iptables-based). After I configure it properly, how much daily ongoing management is required of me to keep my server secure? Am I going to be flooded with "alert" emails that I need to check? Or does the firewall automatically take care of most security threats for me? Note: I understand that there's more to server security than just a software firewall, but this question is specifically for CSF security management.


  • Notes on Oracle BPM PS6 Adaptive Case Management

    - by gcolman
    I have recently been looking at the latest release of the BPM Case Management feature in the Oracle BPM PS6 release. I had put together some notes to help me gain a better understanding of the context of PS6 BPM Case Management. Hopefully this, along with the other resources, will enable you to gain a clear picture of the flexibility of this feature. The Oracle BPM PS6 release includes Case Management capability. This initial release aims to provide:

    • A Case Management framework
    • Integration of Case Management with BPM & SOA Suite

    It is best to regard the current PS6 Case Management feature as a case management framework. The framework provides the building blocks for creating a case management system that is fully integrated into Oracle BPM Suite. As of the current PS6 release, no UI tooling exists to help manage cases or the case lifecycle. Mark Foster has written a good blog which outlines Case Management within PS6 in the following link. I wanted to provide more context on Case Management from my perspective in this blog.

    PS6 Case Management - High-level view

    BPM PS6 includes "Case" as a first-class component in a SOA Suite composite. The Case components (added to the SOA composite) are created when a BPM process is assigned to a case in JDeveloper. The SOA Case component is defined and configured within JDeveloper, which allows us to specify the case data structures and metadata such as stakeholders, outcomes, milestones, document stores, etc. "Activities" are associated with a case, and become available to be executed via the case APIs. Activities are BPM processes, human activities, or Java call-outs. The PS6 release includes some additional database tables to store the case metadata and case instance data (data objects, comments, etc.). These new tables are created within the SOA_INFRA schema, and the documents associated with a case go into a document repository that is configured with the case. One of the main features of Case Management is the control of the case logic through case events and case business rules. A PS6 Case has an associated business rules component, which can be configured to control the availability and execution of activities within the case. The business rules component is able to act upon events that the PS6 Case Management framework generates during the lifecycle of that case. Events are fired during the lifetime of the case (e.g. case created, activity started, activity ended, note added, document uploaded).

    Internal case state

    The internal state of a case is represented by the diagram below, which shows the internal states and the transition paths for a case from one state to the next. Each transition in state will create an event that can be enacted upon via the Case rules engine.

    Defining a case

    A Case is created and defined as a component of a JDeveloper BPM project. When you create a Case as part of a BPM project, JDeveloper creates the following components within the SCA composite:

    • Case component
    • Case component interfaces (WSDL, etc.)
    • Case Rules component (Oracle Business Rules)

    It also adds the Case component and Case Rules component to the BPM SOA composite.

    Case configuration

    The following section gives a high-level overview of the items that can be configured for a BPM Case.
    Case activities

    A Case is associated with a set of activities that are to be performed as part of that case. Case activities can be:

    • SOA human tasks
    • BPM processes
    • Custom tasks (Java classes)

    Case activities are created from pre-existing BPM processes or human tasks which, once defined, can be additionally configured as case activities in JDeveloper and made available within the lifecycle of a case. I've described the following configurable components of a case (very!) briefly as:

    • Milestones: optional, user-defined logical milestones that can be achieved within a case. No activities are associated with a milestone, but milestone attainment can be programmatically set, and events are raised when milestones are reached.
    • Outcomes: user-defined statuses of a completed case. An event is fired when an outcome is attained.
    • Case data: defines the data that will be stored with a case; XML schemas define that data.
    • Case documents: defines the location of documents that are attached to a case (e.g. WebCenter Content).
    • User-defined events: optional user-defined events that can be fired or captured to drive case processing rules.
    • Stakeholders: defines the actors who can participate in the case (roles, users, groups), along with permissions for individual case actions (read case, create document, etc.).
    • Business rules: the main component controlling the flow of a case. Each case has an associated business ruleset, and rules are fired on receiving case events (or user-defined events): lifecycle events, milestone events, activity events, data events, document events, comment events, and user events.

    Managing the case

    Managing the lifecycle of a case is achieved in two ways:

    • Managing case logic with business rules
    • Managing the case lifecycle via the Case APIs

    A BPM Case can be viewed as a set of case data and documents, along with the activities that can be performed within the case, and also the case lifecycle state expressed as milestones and internal lifecycle state. The management of the case lifecycle is achieved through both the configuration of business rules and "manual" interaction with a case instance through the Case APIs.

    Business rules and case events

    A key component within the Case Management framework is the event model. The BPM Case Management solution internally utilizes Oracle EDN (Event Delivery Network) to publish and subscribe to events generated by the Case framework. Events are generated by the Case framework at each of the processes and stages that a case instance travels through in its lifetime. The following case events are part of the BPM Case: lifecycle events, milestone events, activity events, data events, document events, comment events, and user events. The Case business rules are configured to listen for these events, and business logic can be coded into the Case rules component to act upon an event being received.

    Case API & interaction

    Along with the business rules component, cases can be managed via the Case API interfaces. These interfaces allow for the building of custom applications that integrate into the Case Management framework. The APIs allow for updating case comments and documents, executing case activities, updating milestones, etc. As there are no built-in case management UI functions within the PS6 release, cases need to be managed via a custom-built UI: interacting with selected case instances, launching case activities, closing cases, etc.
    (There is expected to be a UI component within subsequent releases.)

    Logical case flow

    The diagram below is intended to depict a logical view of the case steps for a typical case:

    1. A UI or other service calls the Case interface to create a Case instance.
    2. The case instance is created and database data inserted.
    3. A lifecycle event is raised indicating a case activity (created) event.
    4. The case business rules capture the event and decide on an action to take. (Additionally, other parties can subscribe to case events via EDN.)
    5. The business rules may handle the event, e.g. they may be configured to execute a case activity on a case creation event.
    6. The BPM/human workflow/custom activity is executed.
    7. A case activity event is raised on the executed activity.
    8. A case work UI or business service can inspect the case instance and call other actions to progress the case, such as: execute activity, add note, add document, add case data, update milestone, raise user-defined event, suspend case, resume case, or close case.

    Summary

    Having had a little time to play around with the APIs and the case configuration, I really like the flexibility and power of combining Oracle Business Rules and the BPM Case Management event model. Creating something this flexible and powerful without BPM Case Management would take a lot of time and effort. This is hopefully going to save my customers a lot of time and effort! I may make amendments to this post as my understanding of Case Management increases! Take a look at the following links for official documentation, etc.:

    http://docs.oracle.com/cd/E28280_01/doc.1111/e15176/case_mgmt_bpmpd.htm
    https://blogs.oracle.com/bpm/entry/just_in_case


  • Create XML File Using XML Schema

    - by metdos
    Is there any easy way to create at least a template XML file using an XML Schema? My main interest is in C++, but discussions of other programming languages are also welcome. By the way, I also use the Qt framework.
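    For the Qt side specifically: as far as I know, Qt ships no XSD-to-instance generator, so one pragmatic approach is to walk the schema yourself (or simply know its structure) and emit a skeleton document with QXmlStreamWriter. A minimal sketch, where the element names are hypothetical placeholders for whatever the schema declares:

        #include <QFile>
        #include <QXmlStreamWriter>

        int main() {
            QFile file("template.xml");
            if (!file.open(QIODevice::WriteOnly))
                return 1;

            QXmlStreamWriter xml(&file);
            xml.setAutoFormatting(true);          // indented, human-editable
            xml.writeStartDocument();
            xml.writeStartElement("Order");       // root element from the schema
            xml.writeTextElement("Id", "");       // required child, left blank
            xml.writeTextElement("Total", "0.0"); // decimal: seed a valid default
            xml.writeEndElement();
            xml.writeEndDocument();
            return 0;
        }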


  • Specify XML schema data type of decimal or blank

    - by Jeremy Stein
    Is there a way in an XML schema to specify that an element may contain either an empty string or a decimal? If I specify the type as xs:decimal like this: <xs:element name="Sample" type="xs:decimal" /> then a blank value would not pass validation: <Sample/> (I realize that the best way to indicate no value would be to not include the element, but I was wondering if there was a way to allow blank or decimal.)
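    For reference, the usual XSD idiom for "decimal or empty" is a union of xs:decimal with a string type restricted to the empty string, so that both <Sample>1.5</Sample> and <Sample/> validate. A sketch:

        <xs:element name="Sample">
          <xs:simpleType>
            <xs:union>
              <!-- branch 1: a normal decimal -->
              <xs:simpleType>
                <xs:restriction base="xs:decimal"/>
              </xs:simpleType>
              <!-- branch 2: only the empty string -->
              <xs:simpleType>
                <xs:restriction base="xs:string">
                  <xs:enumeration value=""/>
                </xs:restriction>
              </xs:simpleType>
            </xs:union>
          </xs:simpleType>
        </xs:element>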


  • Automatic database schema generation and migration with Perl

    - by pistacchio
    In RoR, Django, or web2py, you can "describe" a database (as a set of classes that map to tables) and the framework (having been provided with a connection string to the desired database) generates the tables, fields, and relations; in the case of RoR and web2py, it also keeps them up to date (e.g., removing a class drops the table, adding a property to the class triggers an "ALTER TABLE ... ADD", etc.). Is there any Perl module that does the same? That is, one that takes a YAML/XML/JSON description of a database as input and modifies or generates the database schema accordingly?


  • Any good SQL Anywhere database schema comparison tools?

    - by Lurker Indeed
    Are there any good database schema comparison tools out there that support Sybase SQL Anywhere version 10? I've seen a litany of them for SQL Server, a few for MySQL and Oracle, but nothing that supports SQL Anywhere correctly. I tried using DB Solo, but it turned all my non-unique indexes into unique ones, and I didn't see any options to change that.

