Search Results

Search found 9730 results on 390 pages for 'dynamic schema'.

Page 8/390 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • XML Schema Header & Namespace Config

    - by zharvey
    I'm migrating from DTD to XSD, and for some reason the transition is a bumpy one. I understand how to define the schema once I'm inside the <xs:schema> root tag, but getting past the header and namespace declaration stuff is proving especially confusing for me. I have been trying to follow the well-laid-out tutorial on W3S, but even that tutorial seems to assume a lot of knowledge up front. I guess what I'm looking for is a King's English explanation of which attributes do what, where they go, and why: xmlns, xmlns:xs, xmlns:xsi, targetNamespace, and xsi:schemaLocation. And in some cases I see different variations of these elements/attributes, such as xsi, which seems to have two different notations: xsi:schemaLocation="..." versus xs:import schemaLocation="...". Between all these slight variations I can't seem to make heads or tails of what each of these does. Thanks in advance for bringing any clarity to this confusion!
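
    For orientation, here is a minimal sketch of how those attributes typically fit together (the example.com namespace URIs and file names below are placeholders, not taken from the question):

      <!-- schema document: xmlns:xs binds the XML Schema vocabulary; targetNamespace is the namespace this schema defines -->
      <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
                 targetNamespace="http://example.com/orders"
                 xmlns="http://example.com/orders"
                 elementFormDefault="qualified">
        <!-- xs:import pulls in declarations from another namespace; its schemaLocation attribute is a hint pointing at that schema file -->
        <xs:import namespace="http://example.com/common" schemaLocation="common.xsd"/>
        <xs:element name="order" type="xs:string"/>
      </xs:schema>

      <!-- instance document: xmlns:xsi binds the schema-instance vocabulary; xsi:schemaLocation pairs a namespace with a schema that describes it -->
      <order xmlns="http://example.com/orders"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://example.com/orders orders.xsd">A-1</order>

    In short: xmlns declarations bind prefixes (or the default namespace) to namespace URIs, targetNamespace names the namespace a schema defines, xsi:schemaLocation lives in instance documents, and xs:import's schemaLocation lives in schema documents.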

    Read the article

  • How to create a schema that has access like that of dbo and can be accessed by the sa user

    - by Shantanu Gupta
    I am new to the schema, role and user management side of SQL Server. Till now I have only worked with the plain dbo schema, but after reading a few articles I am interested in creating schemas to manage my tables in a folder-like fashion. At present, I want to create a schema where I can keep tables that share the same kind of functionality. When I try to create a schema I run into problems with queries, permissions, etc. First of all I want to get used to using schemas, and only then explore them further. But because I'm at an early stage, and under work pressure as well, I have not been able to implement this yet. What can I do to start using a schema with default permissions like those of dbo? Also let me know about creating roles and assigning them on these schemas. I want all of this to be accessible by the sa user itself at present. What is the concept behind all these things?

    Read the article

  • Stop a node from containing both subnodes and text in a schema

    - by AndyC
    If I have some XML like this: <mynode> <mysubnode> <mysubsubnode>hello world</mysubsubnode> some more text </mysubnode> </mynode> As you can see, mysubnode contains both a subnode and some text data. What I want to know is: is it possible to prevent this happening in a schema? I don't want nodes to contain subnodes and text, just subnodes or text. Is there an option in my XSD I can specify to force this? My program that uses this XML is written in .NET, so I'll tag it as well in case there's anything of use in .NET that I can utilise for this, though I'd much rather the issue was fixed in the schema itself. Cheers
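
    For what it's worth, a sketch of how this is usually expressed in XSD (using the element names from the question): give mysubnode a complex type with element-only content, i.e. leave mixed at its default of false, and validators will reject character data appearing alongside the child elements:

      <!-- element-only content model: because mixed defaults to "false",
           text such as "some more text" is not allowed next to child elements -->
      <xs:element name="mysubnode">
        <xs:complexType mixed="false">
          <xs:sequence>
            <xs:element name="mysubsubnode" type="xs:string"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>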

    Read the article

  • Thoughts on schemas and schema proliferation

    - by jamiet
    In SQL Server 2005 Microsoft introduced user-schema separation and since then I have seen the use of schemas increase; whereas before I would typically see databases where all objects were in the [dbo] schema, I now see databases that have multiple schemas, and a database I saw recently had 31 (thirty one) of them. I can't help but wonder whether this is a good thing or not – clearly 31 is an extreme case, but I question whether multiple schemas create more problems than they solve. I have been involved in many discussions that go something like this: Developer #1> "I have a new function to add to the database and I'm not sure which schema to put it in" Developer #2> "What does it do?" Developer #1> "It provides data to a report in Reporting Services" Developer #2> "Ok, so put it in the [reports] schema" Developer #1> "Well I could, but the data will only be used by our Financial reporting folks so shouldn't I put it in the [financial] schema?" Developer #2> "Maybe, yes" Developer #1> "Mind you, the data is supposed to be used for regulatory reporting to the FSA, should I put it in [regulatory]?" Developer #2> "Err…." You get the idea!!! The more schemas that exist in your database, the more chance that their supposed usages will overlap. I'm left wondering whether the use of schemas is actually necessary. I don't really see them as an aid to security because I generally believe that principals should be assigned permissions on objects as needed on a case-by-case basis (and I have a stock SQL query that deciphers them all for me), so why bother using them at all? I can envisage a use where a database is used to house objects pertaining to many different business functions (which, in itself, is an ambiguous term) and in that circumstance perhaps a schema per business function would be appropriate; hence of late I have been loosely following this edict: if some objects in a database could be moved en masse to another database without the need to remove any foreign key constraints, then those objects could legitimately exist in a dedicated schema. I am interested to know what other people's thoughts are on this. If you would like to share then please do so in the comments. @Jamiet

    Read the article

  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug. Problem 1: Debugging users' bug reports When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to debug customers' issues and sort out what strange schema data Oracle was returning. Problem 2: Test execution time Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless. The solution To solve these, we needed to be able to populate the schema of a database without actually connecting to it. Well, the IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to actually connect to the original database. This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time.
They also allow us to easily debug a customer's problem; using a simple snapshot generation program, users can generate a query snapshot that can be sent along with a bug report, which we can immediately replay on our machines to debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data. Query snapshots implementation However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just of the population of a single database. Furthermore, although the population queries (e.g. querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being deterministic - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly. Query snapshots are a significant feature in Schema Compare that really helps us to debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team, helping us fix bugs in the product much faster than we otherwise would be able to.

    Read the article

  • Dynamic VPN tunneling technologies

    - by Adam
    Ok, so I'm asking a more specific question this time. I'm writing a paper about Cisco's DMVPN, and one of my tasks is to analyse the available network solutions that use dynamic VPN tunnels. Because the paper is about DMVPN, I have to compare those solutions to it. I know there are a lot of dynamic tunneling technologies, but I'm looking for ones that can be compared to DMVPN. So the question is: are there any technologies which use dynamic VPN tunnels (not necessarily using crypto) that can be compared to DMVPN? What are those technologies?

    Read the article

  • Sql Server Compact - Schema Management

    - by Richard B
    I've been searching for some time for a good solution to implement the idea of managing schema on a SQL Server Compact 3.5 DB. I know of several ways of managing schema on SQL Server Express/Standard/Enterprise, but Compact Edition doesn't support the necessary tools required to use the same methodology. Any suggestions/tips? I should expand this to say that it is for 100+ clients with wrapperware software. As the system changes, I need to publish update scripts alongside the new binaries to the client. I was looking for a decent method by which to publish this without having to just hand the client a script file and say "Run this in SSMSE". Most clients are not capable of doing such a beast. A buddy of mine disclosed a partial script on how to handle the SQL Server piece of my task, but never worked on Compact Edition... It looks like I'll be on my own for this. What I think I've decided to do, and it's going to need a "geek week" to accomplish, is to write some sort of tool much like how WiX and NAnt work, so that I can just write an overzealous XML document to handle the work. If I think it is worthwhile, I'll publish it on CodePlex and/or CodeProject, because I've used both sites a bit to gain a better understanding of concepts for jobs I've done in the past, and I think it is probably worthwhile to give back a little.

    Read the article

  • PHP - Database schema: version control, branching, migrations.

    - by Billiam
    I'm trying to come up with (or find) a reusable system for database schema versioning in PHP projects. There are a number of Rails-style migration projects available for PHP; http://code.google.com/p/mysql-php-migrations/ is a good example. It uses timestamps for migration files, which helps with conflicts between branches. General problem with this kind of system: When development branch A is checked out, and you want to check out branch B instead, B may have new migration files. This is fine; migrating to newer content is straightforward. If branch A has newer migration files, you would need to migrate downwards to the nearest shared patch. If branches A and B have significantly different code bases, you may have to migrate down even further. This may mean: check out B, determine the shared patch number, check out A, migrate downwards to this patch. This must be done from A since the actual applied patches are not available in B. Then, check out branch B, and migrate to the newest B patch. Reverse the process again when going from B to A. Proposed system: When migrating upwards, instead of just storing the patch version, serialize the whole patch in the database for later use, though I'd probably only need the down() method. When changing branches, compare the patches that have been run to the patches that are available in the destination branch. Determine the nearest shared patch (or oldest difference, maybe) between the db table of run patches and the patches in the destination branch, by ID or hash. Could also look for new or missing patches that are buried under a number of shared patches between the two branches. Automatically merge down to the nearest shared patch, using the down() methods stored in the db table, and then merge up to the branch's latest patch. My question is: Is this system too crazy and/or fraught with consequences to bother developing? My experience with database schema versioning is limited to PHP autopatch, which is an up()-only system requiring filenames with sequential IDs.

    Read the article

  • How to reuse results with a schema for end-of-day stock data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis, like top-volume gainers, top-price gainers, etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modeled pretty much on it, thusly: Symbol - char(6), primary; Date - date, primary; Open - decimal(18,4); High - decimal(18,4); Low - decimal(18,4); Close - decimal(18,4); Volume - int. Right now this table containing End Of Day (EOD) data will be about 3 million rows for 3 years. Later, when I get/need more data, it could be 20 million rows. The front end will be asking requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones, and as such is not too costly, time-wise, I assume. But a request like "give me top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc. One option I have is adding a column for each such result. And if the user asks for volume gain in 10 days over 20 days, that would require another table. The total number of such tables could easily cross 100, especially if I start using other results as tables, like MACD-10, MACD-100, each of which will require its own column. Is this a feasible solution? Another option is keeping the results in cached HTML files and presenting them to the user. I don't have much experience in web development, so to me it looks messy; but I could be wrong (ofc!). Is that an option too? Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database being done using Perl. I would like to have a response time of 1-2 seconds.

    Read the article

  • Win 8: Adding a boot volume to an MBR dynamic disk [NOT about changing to basic disks]

    - by Stilez
    (This is NOT aiming to convert to basic disk. In this question, the disk stays dynamic but becomes bootable) There doesn't seem to be a clear, well-stated answer I can find for the question "What are the criteria for Windows 8 to successfully boot from an MBR dynamic disk", or "how do I fix a dynamic MBR partition that's failing boot"? I've tried to educate myself but can't find crucial information to clear it all up. My existing HDD/SSD setup: DISK 0 ~ 60GB SSD/MBR/basic: (350MB recovery)(60GB Windows 8 bootable) DISK 1 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(60GB unallocated)(410GB mirrored data) DISK 2 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(60GB unallocated)(410GB mirrored data) DISKS 3, 4, 5: (ignored for simplicity: 2xHDD RAID1 + caching SSD) I'm heavy duty on data crunching and virtualisation, just maxed out 32GB RAM @ 2133 and moved to 4960X + 64GB. Disk 0 is a pure system disk of little value, and virtualisation runs off mirrored SSDs (Samsung 840 Pro 512 x 2) for double speed reading and so they snapshot in reasonable time. I'm using 4 SATA3 ports and the board only has two decent Intel ports (onboard Marvell are poorer quality). I'm wary of choosing between LSI, HighPoint and other 3rd party controllers as I'm unfamiliar with the maze of decent RAID cards (that's a whole other issue!). I want to cut down my SSD needs by moving the boot volume and caching volume to the 840 Pros, giving a setup with 2 fewer SSDs: DISK 0 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(60GB boot)(410GB mirrored data) DISK 1 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(30GB cache for the ICH10R mirror)(30GB temp)(410GB mirrored data) DISKS 2, 3: (2xHDD RAID1) Intel's RST allows this, Win 8 allows booting off an MBR/dynamic disk, and the two 60GB SSDs are hardly the fastest SSDs anyway, so they'll get repurposed. Moving the caching volume is easy. Moving the boot volume has me stumped. The difficulty is, I'm hitting a wall of knowledge here. I have a UEFI Asus motherboard with a previous, traditional MBR/basic boot disk, and I want it to boot from a disk and volume that's MBR/dynamic. The disk copy is physically ok (Partition Wizard Server will copy to dynamic volumes) but then hits a light blue 0xc000000e boot error. No real surprise, I expected to have some boot fixing, but had expected Windows to boot-fix it (all drivers exist), or the usual manual fixes to work. Specifically, I don't know enough to know what's got to be manually checked and perhaps corrected for the disk to boot (legacy/uefi/bios, odd partitions, boot tables, disk IDs, hidden boot files, oh my!), or if I need to change any of this secure boot/UEFI/legacy stuff in the bios, convert a 512GB SSD to basic and then back to dynamic when working, or if the issue is pure OS config using "diskpart", "bootsect" and "bootrec" from the Win8 DVD. The old system disk still boots but I don't know enough to figure out what to fix to make the system boot as I want. The answers probably aren't hard but the real issue is my confusion and missing information. Thanks for helping!

    Read the article

  • How Do I Implement parameterMaps for ADF Regions and Dynamic Regions?

    - by david.giammona
    parameterMap objects defined by managed beans can help reduce the number of child <parameter> elements listed under an ADF region or dynamic region page definition task flow binding. But more importantly, the parameterMap approach also allows greater flexibility in determining what input parameters are passed to an ADF region or dynamic region. This can be especially helpful when using dynamic regions, where each task flow utilized can provide an entirely different set of input parameters. The parameterMap is specified within an ADF region or dynamic region page definition task flow binding as shown below: <taskFlow id="checkoutflow1" taskFlowId="/WEB-INF/checkout-flow.xml#checkout-flow" activation="deferred" xmlns="http://xmlns.oracle.com/adf/controller/binding" parametersMap="#{pageFlowScope.userInfoBean.parameterMap}"/> The parameter map object must implement the java.util.Map interface. The keys it specifies match the names of input parameters defined by the task flows utilized within the task flow binding. An example parameterMap object class is shown below:
      import java.util.HashMap;
      import java.util.Map;

      public class UserInfoBean {
          private Map<String, Object> parameterMap = new HashMap<String, Object>();

          public Map getParameterMap() {
              parameterMap.put("isLoggedIn", getSecurity().isAuthenticated());
              parameterMap.put("principalName", getSecurity().getPrincipalName());
              return parameterMap;
          }
      }

    Read the article

  • NGINX: dynamic locations stored in DB

    - by chimpanzee
    Is there a possibility to store nginx locations in a DB instead of the config, in order to serve them dynamically? The task is to create dynamic URLs for video files based on the user's IP and a video ID. The idea is that when the user visits my website, such a dynamic URL is created and added to the DB as a new nginx location that exists just for this user and not for others. Or does nginx not fit my task, so that I need to use another tool? Thanks.

    Read the article

  • Create XML File Using XML Schema

    - by metdos
    Is there any easy way to create at least a template XML file using an XML Schema? My main interest is C++, but discussions of other programming languages are also welcome. By the way, I also use the Qt framework.

    Read the article

  • Specify XML schema data type of decimal or blank

    - by Jeremy Stein
    Is there a way in an XML schema to specify that an element may contain either an empty string or a decimal? If I specify the type as xs:decimal like this: <xs:element name="Sample" type="xs:decimal" /> then a blank value would not pass validation: <Sample/> (I realize that the best way to indicate no value would be to not include the element, but I was wondering if there was a way to allow blank or decimal.)
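
    One common way to express this, sketched below, is a union type whose member types are xs:decimal and a string type restricted to the empty value:

      <xs:element name="Sample">
        <xs:simpleType>
          <xs:union>
            <!-- either a decimal... -->
            <xs:simpleType>
              <xs:restriction base="xs:decimal"/>
            </xs:simpleType>
            <!-- ...or the empty string -->
            <xs:simpleType>
              <xs:restriction base="xs:string">
                <xs:enumeration value=""/>
              </xs:restriction>
            </xs:simpleType>
          </xs:union>
        </xs:simpleType>
      </xs:element>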

    Read the article

  • Automatic database schema generation and migration with Perl

    - by pistacchio
    In RoR, Django or web2py you can "describe" a database (as a set of classes that map to tables) and the framework (having been provided with a connection string to the desired database) generates the tables, fields and relations; in the case of RoR and web2py it also keeps them up to date (e.g. removing a class drops the table, adding a property to the class triggers an "alter table add", etc.). Is there any Perl module that does the same? E.g., one that takes a YAML/XML/JSON description of a database as input and modifies/generates the database schema accordingly?

    Read the article

  • Any good SQL Anywhere database schema comparison tools?

    - by Lurker Indeed
    Are there any good database schema comparison tools out there that support Sybase SQL Anywhere version 10? I've seen a litany of them for SQL Server, a few for MySQL and Oracle, but nothing that supports SQL Anywhere correctly. I tried using DB Solo, but it turned all my non-unique indexes into unique ones, and I didn't see any options to change that.

    Read the article

< Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >