Search Results

Search found 4849 results on 194 pages for 'schema migration'.


  • Migration of .NET COM object to 64 bit.

    - by Victor Ronin
    Hi, we have a C++ application which uses several COM objects. The COM objects are .NET-based (exposed via COM Interop). I need to migrate the application to 64-bit; specifically, the C++ application itself must be 64-bit. I don't want to recompile all of the .NET COM objects to 64-bit and ship two sets of DLLs (32-bit and 64-bit). While investigating I found that I can load the 32-bit COM DLLs in a 32-bit surrogate process (using DllSurrogate in the registry). I know how to do that, but it means that all COM objects become out-of-process. In the C++ code I had: CoCreateInstance(CLSID_SomeClass, NULL, CLSCTX_INPROC_SERVER, IID_SomeInterface, (void**)&pobj); It worked fine, but as soon as I switch to CLSCTX_LOCAL_SERVER (and add the registry keys for DllSurrogate), it can't find the interfaces (error 0x80004002, E_NOINTERFACE). I checked the registry and found that when the .NET COM DLL is registered, it adds the CLSID registry keys but does not add the Interface and TypeLib registry keys. The question is: how do I create these registry keys for a .NET COM component? Regards, Victor

    Read the article

  • Can ActiveRecord create tables outside of a migration?

    - by Munkymorgy
    I am working on a non-Rails web app, so there is no migrations script by default. The Sequel ORM lets me create tables easily in a script:

        #!/usr/bin/env ruby
        require 'rubygems'
        require 'sequel'

        ## Connect to the database
        DB = Sequel.sqlite('./ex1.db')

        unless DB.table_exists? :posts
          DB.create_table :posts do
            primary_key :id
            varchar :title
            text :body
          end
        end

    Is there a way to do this with ActiveRecord outside of migrations?

    Read the article

  • Database Schema Managing Framework/Library

    - by Karol Kolenda
    I'm looking for a database framework/library for .NET which will act as a unified layer between an application and different databases. Please note that I'm not interested in querying/updating data (there are plenty of DALs for that) but rather in a framework which allows me to manage table schemas and indexes in a managed fashion (without writing database-specific SQL). I'm particularly interested in a library which supports Oracle, SQL Server and PostgreSQL.
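
    To make the requirement concrete, here is a purely hypothetical sketch of the kind of provider-neutral surface such a library would need to expose (every name here is invented for illustration and not taken from any existing package):

        using System.Collections.Generic;

        // Hypothetical abstraction: schema operations expressed without vendor-specific SQL,
        // translated by a provider implementation for Oracle, SQL Server or PostgreSQL.
        public interface ISchemaManager
        {
            void CreateTable(TableDefinition table);
            void DropTable(string tableName);
            void AddIndex(string tableName, string indexName, IEnumerable<string> columns);
            IEnumerable<TableDefinition> ListTables();
        }

        public class TableDefinition
        {
            public string Name { get; set; }
            public List<ColumnDefinition> Columns { get; set; }
        }

        public class ColumnDefinition
        {
            public string Name { get; set; }
            public DbColumnType Type { get; set; }   // abstract type, mapped per database by the provider
            public bool Nullable { get; set; }
            public bool PrimaryKey { get; set; }
        }

        // A deliberately small, database-agnostic type list.
        public enum DbColumnType { Int32, Int64, String, Decimal, DateTime, Boolean }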

    Read the article

  • I need some help optimizing my database schema

    - by Steffan
    Here's a layout of my data:

        Heading 1: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
        Heading 2: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
        Heading 3: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
        Heading 4: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading
        Heading 5: Sub heading, Sub heading, Sub heading, Sub heading, Sub heading

    These headings need to have a 'Completion Status' boolean value which gets linked to a user Id. Currently, this is how my table looks:

        id | userID | field_1 | field_2 | field_3 | field_4 | etc...
        -------------------------------------------------------------
         1 |      1 |       0 |       0 |       1 |       0 |
         2 |      2 |       1 |       0 |       1 |       1 |

    Each field represents one Sub Heading. Having this many columns in my table looks awfully inefficient... How can I go about optimizing this? I can't think of any way to neaten it up :/

    Read the article

  • Easiest way to retrofit retry logic on LINQ to SQL migration to SQL Azure

    - by Pat James
    I have a couple of existing ASP.NET Web Forms and MVC applications that currently use LINQ to SQL with a SQL Server 2008 Express database on a Windows VPS: one VPS for both IIS and SQL. I am starting to outgrow the VPS's ability to effectively host both SQL and IIS and am getting ready to split them up. I am considering migrating the database to SQL Azure and keeping IIS on the VPS. After doing initial research, it sounds like implementing retry logic in the data access layer is a must when adopting SQL Azure. I suspect this is even more critical in my situation, where IIS will be on a VPS outside of the Azure infrastructure. I am looking for pointers on how to do this with the least effort and impact on my existing code base. Is there a good retry pattern that can be applied once at the LINQ to SQL data access layer, as opposed to having to wrap all of my LINQ to SQL operations in try/catch/wait/retry logic?
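
    One way to keep the impact on the code base small is to funnel every LINQ to SQL call through a single retry helper. A minimal sketch, assuming a simple fixed-attempt backoff; the helper name, the retry policy, and the Customers query in the usage line are illustrative rather than an established pattern, and `db` stands for the existing DataContext:

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        public static class SqlRetry
        {
            // Runs an operation and retries on SqlException with a growing delay.
            // The attempt count and delays are arbitrary starting points to tune.
            public static T Execute<T>(Func<T> operation, int maxAttempts = 3)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return operation();
                    }
                    catch (SqlException)
                    {
                        // A production version would inspect the error numbers and only
                        // retry the transient ones (dropped connection, throttling).
                        if (attempt >= maxAttempts) throw;
                        Thread.Sleep(TimeSpan.FromSeconds(attempt * 2));
                    }
                }
            }
        }

    Usage then wraps the existing queries, e.g. `var active = SqlRetry.Execute(() => db.Customers.Where(c => c.IsActive).ToList());`. Each call site still changes, but the try/catch/wait/retry policy lives in one place instead of being repeated throughout the data access layer.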

    Read the article

  • What does this web.xml error mean?

    - by jacekn
    <?xml version="1.0" encoding="UTF-8"?> <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_5.xsd"> Referenced file contains errors (http://java.sun.com/xml/ns/j2ee/web-app_2_5.xsd). For more information, right click on the message in the Problems View and select "Show Details..." The errors below were detected when validating the file "web-app_2_5.xsd" via the file "web.xml". In most cases these errors can be detected by validating "web-app_2_5.xsd" directly. However it is possible that errors will only occur when web-app_2_5.xsd is validated in the context of web.xml. In details, I see a bunch of these: s4s-elt-character: Non-whitespace characters are not allowed in schema elements other than xs:appinfo and xs:documentation. Saw 'var _U="undefined";'

    Read the article

  • IIS 7 Website Migration & Configuration

    - by Adam
    Hi - I am in the process of migrating an existing web server running IIS 6 to IIS 7. I have set up the new websites on the new server, but I can't seem to test them: once I have entered the domain name and select "browse" from within IIS 7, I get the site on my original server. How can I test the configuration of my new sites on my new server before migrating the domain names (e.g. updating the DNS records etc.)? Any help much appreciated.

    Read the article

  • database schema explanation of the cakedc tags plugin

    - by Gaurav Sharma
    Hello everyone, I found an awesome tags plugin on the CakeDC site. This plugin makes tagging very easy and is able to make anything taggable. Has anyone used it? I find a few things a bit difficult to understand: the difference between the name and keyname columns of the tags table, and the use of the 'identifier' and 'weight' columns in the tags table. Thanks

    Read the article

  • Oracle Schema designer

    - by mehmetserif
    I don't know the real name of that kind of application, but what I want to do is simple: I have an Oracle database with more than 50 tables. I want to get their names and also their field names, so I thought it would be nice to use a designer or something like MS SQL Server has. Then I can get the field names and table names easily. How can I do that? Thanks for the help, Mehmet Serif Tozlu

    Read the article

  • XSD file, where to get xmlns argument?

    - by Daok
    <?xml version="1.0" encoding="utf-8"?> <xs:schema id="abc" targetNamespace="http://schemas.businessNameHere.com/SoftwareNameHere" elementFormDefault="qualified" xmlns="http://schemas.businessNameHere.com/SoftwareNameHere" xmlns:mstns="http://schemas.businessNameHere.com/SoftwareNameHere" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="..." type="..." /> <xs:complexType name="..."> I am working on a project using XSD to generate .cs file. My question is concerning the string "http://schemas.businessNameHere.com/SoftwareNameHere" If I change it, it doesn't work. But the http:// is not a valid one... what is the logic behind and where can I can information about what to put there or how to change it?

    Read the article

  • Rails migration: t.references with alternative name?

    - by marienbad
    So I have a create_table like this for Courses at a School:

        create_table :courses do |t|
          t.string :name
          t.references :course
          t.timestamps
        end

    but I want it to reference TWO other courses, like:

        has_many :transferrable_as   # a Course
        has_many :same_as            # another Course

    Can I say t.references :transferrable_as, :as => :course ?

    Read the article

  • Replacing ORM schema without dropping the entire data

    - by Udi
    Hey, I'm using OpenJPA as a JPA provider. Is there a way in which I can recreate the database tables (when an entity changes) without dropping all of the data? Right now, when an entity changes, I drop and create every table in the store and obviously lose the data within. Is there a tool or product to keep the data somehow? Thanks, Udi

    Read the article

  • Understanding the code: how the heap is written in process migration in Solaris

    - by akshay
    Hi guys, I need help understanding what this piece of code actually does; it is part of my project and I am stuck here. The code is from libckpt on Solaris.

        /**********************************
         * function: write_heap
         * args: map_fd  -- file descriptor for map file
         *       data_fd -- file descriptor for data file
         * returns: no. of chunks written on success, -1 on failure
         * side effects: writes all included segments of the heap to ckpt files
         * misc.: If we are forking and copyonwrite is set, we will write the heap
         *        from bottom to top, moving the brk pointer up each time so that
         *        we don't get a page copied if the
         * called from: take_ckpt()
         ***********************************/
        static int write_heap(int map_fd, int data_fd)
        {
          Dlist curptr, endptr;
          int no_chunks = 0, pn;
          long size;
          caddr_t stop, addr;

          if(ckptflags.incremental){   /*-- incremental checkpointing on? --*/
            endptr = ckptglobals.inc_list->main->flink;

            /*-- for each included chunk of the heap --*/
            for(curptr = ckptglobals.inc_list->main->blink->blink; curptr != endptr; curptr = curptr->blink){

              /*-- write out the last page in the included chunk --*/
              stop = curptr->addr;
              pn = ((long)curptr->stop - (long)sys.DATASTART) / PAGESIZE;
              if(isdirty(pn)){
                addr = (caddr_t)max((long)curptr->addr, (long)((pn * PAGESIZE) + sys.DATASTART));
                size = (long)curptr->stop - (long)addr;
                debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x, pn = %d\n", addr, addr+size, pn);
                if(write_chunk(addr, size, map_fd, data_fd) == -1){
                  return -1;
                }
                if((int)addr > (int)(&end) && ckptflags.enhanced_fork){
                  brk(addr);
                }
                no_chunks++;
              }

              /*-- write out all the whole pages in the middle of the chunk --*/
              for(pn--; pn * PAGESIZE + sys.DATASTART >= stop; pn--){
                if(isdirty(pn)){
                  addr = (caddr_t)((pn * PAGESIZE) + sys.DATASTART);
                  debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x, pn = %d\n", addr, addr+PAGESIZE, pn);
                  if(write_chunk(addr, PAGESIZE, map_fd, data_fd) == -1){
                    return -1;
                  }
                  if((int)addr > (int)(&end) && ckptflags.enhanced_fork){
                    brk(addr);
                  }
                  no_chunks++;
                }
              }

              /*-- write out the first page in the included chunk --*/
              addr = curptr->addr;
              size = ((pn+1) * PAGESIZE + sys.DATASTART) - addr;
              if(size > 0 && (isdirty(pn))){
                debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x\n", addr, addr+size);
                if(write_chunk(addr, size, map_fd, data_fd) == -1){
                  return -1;
                }
                if((int)addr > (int)(&end) && ckptflags.enhanced_fork){
                  brk(addr);
                }
                no_chunks++;
              }
            }
          }
          else{   /*-- incremental checkpointing off! --*/
            endptr = ckptglobals.inc_list->main->blink;

            /*-- for each included chunk of the heap --*/
            for(curptr = ckptglobals.inc_list->main->flink->flink; curptr != endptr; curptr = curptr->flink){
              debug(stderr, "DEBUG: saving memory from 0x%x to 0x%x\n", curptr->addr, curptr->addr+curptr->size);
              if(write_chunk(curptr->addr, curptr->size, map_fd, data_fd) == -1){
                return -1;
              }
              if((int)addr > (int)(&end) && ckptflags.enhanced_fork){
                brk(addr);
              }
              no_chunks++;
            }
          }
          return no_chunks;
        }

    Read the article

  • acts_as_revisable 'loses' previous models created before migration

    - by cbrulak
    I have the standard, regular Post sample app. I created some posts, then decided to introduce acts_as_revisable. After following the instructions at http://github.com/rich/acts_as_revisable I see that the previously created posts do not appear in the Post.all call. However, if I use the console and do Post.find_by_sql("SELECT * FROM Post WHERE ID=1") the post shows up. Any ideas? Thanks

    Read the article

  • sql server: import operation won't copy full schema

    - by P a u l
    I recall that the import tool in SQL Server 2000 would copy indexes, relationships, etc. In SQL Server 2005/2008 the import tool in SSMS will only create the tables and copy the data; the keys, indexes and relationships are missing. I can find no option in the import wizard to enable this. What am I missing here? Is this no longer possible, for any good reason?

    Read the article

  • ASMX schema varies when using WCF Service

    - by Lijo
    Hi, I have a client (created using the ASMX "Add Web Reference"). The service is WCF. The signatures of the methods vary between the client and the service: I get some unwanted parameters in the method. Note: I have used IsRequired = true for the DataMember.

    Service:

        [OperationContract]
        int GetInt();

    Client:

        proxy.GetInt(out requiredResult, out resultBool);

    Could you please help me to make the schema non-varying for both WCF clients and non-WCF clients? Do we have any best practices for that?

        using System.ServiceModel;
        using System.Runtime.Serialization;

        namespace SimpleLibraryService
        {
            [ServiceContract(Namespace = "http://Lijo.Samples")]
            public interface IElementaryService
            {
                [OperationContract]
                int GetInt();

                [OperationContract]
                int SecondTestInt();
            }

            public class NameDecorator : IElementaryService
            {
                [DataMember(IsRequired = true)]
                int resultIntVal = 1;
                int firstVal = 1;

                public int GetInt()
                {
                    return firstVal;
                }

                public int SecondTestInt()
                {
                    return resultIntVal;
                }
            }
        }

    Binding = "basicHttpBinding"

        using NonWCFClient.WebServiceTEST;

        namespace NonWCFClient
        {
            class Program
            {
                static void Main(string[] args)
                {
                    NonWCFClient.WebServiceTEST.NameDecorator proxy = new NameDecorator();
                    int requiredResult = 0;
                    bool resultBool = false;
                    proxy.GetInt(out requiredResult, out resultBool);
                    Console.WriteLine("GetInt___" + requiredResult.ToString() + "__" + resultBool.ToString());

                    int secondResult = 0;
                    bool secondBool = false;
                    proxy.SecondTestInt(out secondResult, out secondBool);
                    Console.WriteLine("SecondTestInt___" + secondResult.ToString() + "__" + secondBool.ToString());
                    Console.ReadLine();
                }
            }
        }

    Please help.. Thanks Lijo
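
    For what it's worth, one direction that is often suggested for this symptom is switching the contract to the XmlSerializer. This is only a sketch, under the assumption that the extra out parameters come from the optional-element style the DataContractSerializer puts into the WSDL, which ASMX-style proxies turn into a value/valueSpecified pair:

        using System.ServiceModel;

        [ServiceContract(Namespace = "http://Lijo.Samples")]
        [XmlSerializerFormat]   // assumption: changes how the WSDL describes the return values
        public interface IElementaryService
        {
            [OperationContract]
            int GetInt();

            [OperationContract]
            int SecondTestInt();
        }

    After changing the contract the web reference would need to be regenerated, and whether this removes the extra parameters in a given tool chain is something to verify rather than take as given.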

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document DB, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products. The product model would be pretty bad for an RDBMS, as there'd be many different kinds of products with unique attributes. For example, there'd be books which have ISBN, authors, title, pages etc., as well as DVDs with play time, directors, artists etc., and quite a few more types. In the end, I'd have about 9 different products with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most.

    The three commonly used ways in an RDBMS would be:

    1. EAV model, which would have pretty bad performance characteristics and would make it either impractical or perform even worse if I'd like to display the author of a book in a list of different products (think start page, recommended products etc.).
    2. Ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables with more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.
    3. Create a table for each product type: I personally don't like this approach as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed on list views etc.). In the app I'll use BookBehavior / DvdBehaviour wrappers around the product DTO that only expose the relevant properties (sketched below).

    My questions now:

    1. Are my performance concerns with the many-columns approach valid?
    2. Did I overlook something, and is there a much better way to do it in an RDBMS?
    3. Is MongoDB on Windows stable enough?
    4. Does my approach with different behaviour wrappers make sense?
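
    A minimal sketch of the document/DTO shape described above; the class and property names are assumptions for illustration, and no NoRM-specific attributes are shown since plain POCOs are enough to convey the idea:

        using System.Collections.Generic;

        // One document per product; only the detail block that applies is populated.
        public class Product
        {
            public string Id { get; set; }
            public string Title { get; set; }
            public decimal Price { get; set; }

            public BookDetails Book { get; set; }   // null unless the product is a book
            public DvdDetails Dvd { get; set; }     // null unless the product is a DVD
        }

        public class BookDetails
        {
            public string Isbn { get; set; }
            public List<string> Authors { get; set; }
            public int Pages { get; set; }
        }

        public class DvdDetails
        {
            public int PlayTimeMinutes { get; set; }
            public List<string> Directors { get; set; }
            public List<string> Artists { get; set; }
        }

        // A behaviour wrapper that exposes only the members relevant to one product type.
        public class BookBehaviour
        {
            private readonly Product _product;
            public BookBehaviour(Product product) { _product = product; }

            public string Title { get { return _product.Title; } }
            public string Isbn { get { return _product.Book.Isbn; } }
            public List<string> Authors { get { return _product.Book.Authors; } }
        }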

    Read the article

  • User Management: Managing users in user-defined "groups", database schema and logistics

    - by Kevin Brown
    I'm a noob, development-wise and logistically-wise. I'm developing a site that lets people take a test... My client wants a user with the role/privilege "admin" (a step below a super-admin) to be able to create users and only see/edit the users that they create. The users created in that "category" or group need some information that their superior provides. For example, I log in as a "manager": I have the ability to invite people to take the test and to manage those people. Before adding those people, I will have filled out a short survey about myself... Right now, the users that are invited will be asked some of the same questions as the manager. I'd like to cut down the redundancy by using the information put into the database by the manager and applying it to the invited users. How do I set up my database to work with this criterion? I'm a little confused about how to do this! Let me know if I can add more details... (This is a MySQL and PHP app)

    Read the article

  • Automatic database generation / migration with perl

    - by pistacchio
    Hi, in RoR, Django or web2py you can "describe" a database (as a set of classes that map to tables), and the framework (having been provided with a connection string to the desired database) generates the tables, fields and relations; in the case of RoR and web2py it also keeps them up to date (e.g., removing a class drops the table, adding a property to the class triggers an "ALTER TABLE ... ADD", etc.). Is there any Perl module that does the same? E.g., one that takes a YAML / XML / JSON description of a database as input and modifies / generates the database accordingly? Thanks in advance.

    Read the article

  • Duplicate partitioning key performance impact

    - by Anshul
    I've read in some posts that having a duplicate partitioning key can have a performance impact. I have two tables like:

        CREATE TABLE "Test1" (
            key text,
            column1 text,
            value text,
            PRIMARY KEY (key, column1)
        )

        CREATE TABLE "Test2" (
            key text,
            name text,
            age text,
            ...
            PRIMARY KEY (key, name, age)
        )

    In Test1, column1 will contain a column name and value will contain its corresponding value. The main advantage of Test1 is that I can add any number of column/value pairs to it without altering the table, by just providing the same partitioning key each time. Now my question is: how will each of these table schemas impact the read/write performance if I have millions of rows and the number of columns can be up to 50 in each row? How will it impact the compaction/repair time if I'm writing duplicate entries frequently?

    Read the article

  • Fact table with multiple facts

    - by Jeff Meatball Yang
    I have a dimension (SiteItem) that has two important facts:

        perUserClicks
        perBrowserClicks

    However, within this dimension I have groups of dimensions based on an attribute column (let's call the groups AboveFoldItems, LeftNavItems, OnTheFlyItems, etc.), and each group has more facts that are specific to that group:

        AboveFoldItems: eyeTime, loadTime
        LeftNavItems: mouseOverTime
        OnTheFlyItems: doesn't have any extra, but may in the future

    Is the following fact table schema ok?

        DateKey
        SessionKey
        SiteItemKey
        perUserClicks
        perBrowserClicks
        eyeTime
        loadTime
        mouseOverTime

    It seems a little wasteful since only some columns pertain to some dimension keys (the irrelevant facts are left NULL). But... this seems like it would be a common problem, so there should be a common solution for this, right?

    Read the article
