Search Results

Search found 2384 results on 96 pages for 'vb6 migration'.

Page 32/96

  • mvc2 migration issue

    - by Sefer KILIÇ
    I migrated my MVC 1 project to MVC 2, and my jQuery JSON result function no longer works. Any ideas? aspx: $.getJSON('Customer/GetWarningList/0', function(jsonResult) { $.each(jsonResult, function(i, val) { $('#LastUpdates').prepend(jsonResult[i].Url); }); }); controller: public JsonResult GetWarningList(string id) { List<WarningList> OldBck = new List<WarningList>(); return this.Json(OldBck); }
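
    One thing worth ruling out (an assumption on my part, since the call above is a GET request): MVC 2 blocks JSON responses to GET requests by default, so the action has to opt in. A minimal sketch of the change:

      public JsonResult GetWarningList(string id)
      {
          List<WarningList> OldBck = new List<WarningList>();
          // MVC 2 denies JSON over HTTP GET unless AllowGet is specified
          return this.Json(OldBck, JsonRequestBehavior.AllowGet);
      }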

    Read the article

  • SQL Server to PostgreSQL - Migration and design concerns

    - by youwhut
    Currently migrating from SQL Server to PostgreSQL and attempting to improve a couple of key areas on the way: I have an Articles table: CREATE TABLE [dbo].[Articles]( [server_ref] [int] NOT NULL, [article_ref] [int] NOT NULL, [article_title] [varchar](400) NOT NULL, [category_ref] [int] NOT NULL, [size] [bigint] NOT NULL ) Data (comma delimited text files) is dumped on the import server by ~500 (out of ~1000) servers on a daily basis. Importing: Indexes are disabled on the Articles table. For each dumped text file Data is BULK copied to a temporary table. Temporary table is updated. Old data for the server is dropped from the Articles table. Temporary table data is copied to Articles table. Temporary table dropped. Once this process is complete for all servers the indexes are built and the new database is copied to a web server. I am reasonably happy with this process but there is always room for improvement as I strive for a real-time (haha!) system. Is what I am doing correct? The Articles table contains ~500 million records and is expected to grow. Searching across this table is okay but could be better. i.e. SELECT * FROM Articles WHERE server_ref=33 AND article_title LIKE '%criteria%' has been satisfactory but I want to improve the speed of searching. Obviously the "LIKE" is my problem here. Suggestions? SELECT * FROM Articles WHERE article_title LIKE '%criteria%' is horrendous. Partitioning is a feature of SQL Server Enterprise but $$$ which is one of the many exciting prospects of PostgreSQL. What performance hit will be incurred for the import process (drop data, insert data) and building indexes? Will the database grow by a huge amount? The database currently stands at 200 GB and will grow. Copying this across the network is not ideal but it works. I am putting thought into changing the hardware structure of the system. The thought process of having an import server and a web server is so that the import server can do the dirty work (WITHOUT indexes) while the web server (WITH indexes) can present reports. Maybe reducing the system down to one server would work to skip the copying across the network stage. This one server would have two versions of the database: one with the indexes for delivering reports and the other without for importing new data. The databases would swap daily. Thoughts? This is a fantastic system, and believe it or not there is some method to my madness by giving it a big shake up. UPDATE: I am not looking for help with relational databases, but hoping to bounce ideas around with data warehouse experts.
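
    For the LIKE '%criteria%' problem, one option to evaluate (assuming PostgreSQL 9.1 or later, where the pg_trgm contrib extension can back wildcard LIKE searches with a trigram index) is sketched below; full-text search (tsvector/tsquery) is the other usual route if word-based matching is acceptable:

      -- sketch only: requires the pg_trgm extension (PostgreSQL 9.1+)
      CREATE EXTENSION pg_trgm;

      CREATE INDEX articles_title_trgm_idx
          ON articles USING gin (article_title gin_trgm_ops);

      -- the existing query can then use the index instead of a sequential scan
      SELECT * FROM articles
      WHERE server_ref = 33 AND article_title LIKE '%criteria%';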

    Read the article

  • How to create migration in subdirectory with Rails?

    - by Adrian Serafin
    Hi! I'm writing a SaaS-model application. My application database consists of two logical parts: application tables - such as users, roles... - and user-defined tables (the user can generate them from the UI) that can differ for each application instance. All tables are created by the Rails migrations mechanism. I would like to put the user-defined tables in another directory: db/migrations - application tables; db/migrations/custom - tables generated by users. That way I can svn:ignore db/migrations/custom, and when I update my app on clients' servers only the application-table migrations get updated. Is there any way to achieve this in Rails?
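
    One approach is to keep db/migrate for the application tables and drive the custom directory from its own rake task. A sketch (the task and file names are made up, and it assumes a Rails version whose ActiveRecord::Migrator.migrate accepts a migrations directory, as in 2.x/3.x); note that both directories share the schema_migrations table, so version timestamps must not collide:

      # lib/tasks/custom_migrations.rake (hypothetical location)
      namespace :db do
        namespace :migrate do
          desc "Run the user-generated migrations kept in db/migrations/custom"
          task :custom => :environment do
            ActiveRecord::Migrator.migrate("db/migrations/custom")
          end
        end
      end

    Application migrations then run with the normal rake db:migrate, and the user-generated ones with rake db:migrate:custom.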

    Read the article

  • Data migration from site5 to heroku

    - by Denis
    Hi, I have a Rails 2.1.2 app hosted on Site5. The app has been running for 2 years and I want to migrate the site to Heroku. Installing the site on Heroku is no problem, but what about the data? What is the best strategy to export from Site5 (I have phpMyAdmin) and then import into Heroku? Thanks

    Read the article

  • Migration of virtual machines

    - by Friedrich
    Are there tools for migrating from one virtual machine type to another? E.g., let's say I have a Xen virtual machine and would like to make it run under KVM. I know that qemu has tools which can be used to "migrate" such machines, but how about: Xen to KVM, KVM to Xen, Xen to VMware (Server)?
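
    For the disk images themselves, qemu-img can convert between the common formats; a sketch (file names are examples, and the guest's configuration, kernel/bootloader and paravirtual drivers still need separate attention, especially for paravirtualized Xen guests):

      qemu-img convert -f raw   -O qcow2 xen-disk.img    kvm-disk.qcow2
      qemu-img convert -f qcow2 -O vmdk  kvm-disk.qcow2  vmware-disk.vmdk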

    Read the article

  • Interspire to Magento migration

    - by patrikas
    Hello, I recently started with Magento and decided to migrate to it an Interspire shopping cart I built some time ago. At first glance Magento seems a huge beast - lots of options, and maybe a lack of simplicity resulting in some performance loss. I've got the user guide, but I'm not getting much benefit from it, since it just describes very ordinary tasks that I could easily discover myself by poking around the frontend/backend. So my first tasks are category and product export. Interspire seems to export ONLY products, in three available formats: Default, MYOB, Peachtree accounting. I did some searching on Magento's product importing and found a blog post which says that I should create a few sample products with all the necessary attributes myself and then start the import. But what should I do with categories? Is it possible to import them, or to instruct Magento to automatically create categories when an unknown category is encountered while importing the product file? Thanks

    Read the article

  • Migration of virtual machines

    - by Friedrich
    I wonder if there are tools for migrating from one virtual machine type to another. E.g., let's say I have a Xen virtual machine and would like to make it run under KVM. I know that, e.g., qemu has tools which can be used to "migrate" such machines, but how about, e.g., Xen to KVM, KVM to Xen, Xen to VMware (Server)?

    Read the article

  • Automatic database schema generation and migration with Perl

    - by pistacchio
    In RoR, Django, or web2py you can "describe" a database (as a set of classes that map to tables) and the framework (having been provided with a connection string to the desired database) generates the tables, fields and relations; in the case of RoR and web2py it also keeps them up to date (e.g., removing a class drops the table, adding a property to the class triggers an "alter table add", etc.). Is there any Perl module that does the same? E.g., it takes a YAML/XML/JSON description of a database as input and modifies/generates the database schema accordingly?
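
    DBIx::Class together with SQL::Translator covers the generation part: you describe the tables as result classes and Schema->deploy emits the DDL. A minimal sketch (schema and class names are invented for illustration; for the keep-it-up-to-date part, DBIx::Class::DeploymentHandler or DBIx::Class::Schema::Versioned handle upgrades):

      package My::Schema::Result::Article;
      use base 'DBIx::Class::Core';
      __PACKAGE__->table('articles');
      __PACKAGE__->add_columns(
          id    => { data_type => 'integer', is_auto_increment => 1 },
          title => { data_type => 'varchar', size => 400 },
      );
      __PACKAGE__->set_primary_key('id');

      package My::Schema;
      use base 'DBIx::Class::Schema';
      __PACKAGE__->register_class(Article => 'My::Schema::Result::Article');

      package main;
      my $schema = My::Schema->connect('dbi:SQLite:dbname=example.db');
      $schema->deploy;   # hands the class metadata to SQL::Translator to emit CREATE TABLE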

    Read the article

  • Migration of .NET COM object to 64 bit.

    - by Victor Ronin
    Hi, We have a C++ application which uses several COM objects. The COM objects are .NET-based (using COM Interop). I need to migrate the application to 64-bit; I specifically need the C++ application to be 64-bit. I don't want to recompile all of the .NET COM objects to 64-bit and deliver two sets of DLLs (32-bit and 64-bit). I was investigating and found that I can load the 32-bit COM DLLs in a 32-bit surrogate process (using DllSurrogate in the registry). I know how to do that, but it means that all the COM objects become out-of-process. In the C++ code I had: CoCreateInstance(CLSID_SomeClass, NULL, CLSCTX_INPROC_SERVER, IID_SomeInterface, (void**)&pobj); It worked fine, but as soon as I switch to CLSCTX_LOCAL_SERVER (and add the registry keys for DllSurrogate), it can't find the interfaces (error 0x80004002). I checked the registry and found that when the .NET COM DLL is registered it adds the CLSID registry keys, but doesn't add the Interface and TypeLib registry keys. The question is, how do I create these registry keys for the .NET COM objects? Regards, Victor
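
    The usual fix for exactly those missing keys (hedged, since it assumes the interfaces are type-library marshalable) is to register the exported type library as well: out-of-process calls need a registered TypeLib plus Interface entries so the standard marshaler can marshal the interfaces across the process boundary. A sketch, with paths and names as examples:

      %windir%\Microsoft.NET\Framework\v2.0.50727\regasm.exe MyComAssembly.dll /codebase /tlb:MyComAssembly.tlb
      rem for registration visible to 64-bit clients, run the Framework64 copy of regasm.exe as well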

    Read the article

  • Can ActiveRecord create tables outside of a migration?

    - by Munkymorgy
    I am working on a non-Rails web app, so there are no migration scripts by default. The Sequel ORM lets me create tables easily in a script: #!/usr/bin/env ruby require 'rubygems' require 'sequel' ## Connect to the database DB = Sequel.sqlite('./ex1.db') unless DB.table_exists? :posts DB.create_table :posts do primary_key :id varchar :title text :body end end Is there a way to do this with ActiveRecord outside of migrations?
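
    ActiveRecord can do the same outside of migrations by talking to the connection directly; a minimal standalone sketch (connection details mirror the Sequel example and are illustrative only):

      #!/usr/bin/env ruby
      require 'rubygems'
      require 'active_record'

      ActiveRecord::Base.establish_connection(
        :adapter  => 'sqlite3',
        :database => './ex1.db'
      )

      conn = ActiveRecord::Base.connection
      unless conn.table_exists?(:posts)
        conn.create_table :posts do |t|
          t.string :title
          t.text   :body
        end
      end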

    Read the article

  • Easiest way to retrofit retry logic on LINQ to SQL migration to SQL Azure

    - by Pat James
    I have a couple of existing ASP .NET web forms and MVC applications that currently use LINQ to SQL with a SQL Server 2008 Express database on a Windows VPS: one VPS for both IIS and SQL. I am starting to outgrow the VPS's ability to effectively host both SQL and IIS and am getting ready to split them up. I am considering migrating the database to SQL Azure and keeping IIS on the VPS. After doing initial research it sounds like implementing retry logic in the data access layer is a must-do when adopting SQL Azure. I suspect this is even more critical to implement in my situation where IIS will be on a VPS outside of the Azure infrastructure. I am looking for pointers on how to do this with the least effort and impact on my existing code base. Is there a good retry pattern that can be applied once at the LINQ to SQL data access layer, as opposed to having to wrap all of my LINQ to SQL operations in try/catch/wait/retry logic?
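
    A minimal sketch of the wrap-once idea (names are hypothetical, and catching SqlException is a crude stand-in for real transient-error detection; recreating the DataContext inside the delegate keeps a failed connection from being reused):

      using System;
      using System.Data.SqlClient;
      using System.Threading;

      public static class SqlRetry
      {
          public static T Execute<T>(Func<T> action, int maxAttempts = 3, int delayMs = 500)
          {
              for (int attempt = 1; ; attempt++)
              {
                  try
                  {
                      return action();
                  }
                  catch (SqlException)
                  {
                      if (attempt >= maxAttempts) throw;
                      Thread.Sleep(delayMs * attempt);   // simple linear back-off
                  }
              }
          }
      }

      // usage: materialize the query inside the delegate so the retry covers the round trip
      // var orders = SqlRetry.Execute(() =>
      // {
      //     using (var db = new MyDataContext()) { return db.Orders.Where(o => o.CustomerId == id).ToList(); }
      // });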

    Read the article

  • IIS 7 Website Migration & Configuration

    - by Adam
    Hi - I am in the process of migrating an existing webserver running IIS 6 to IIS 7. I have set up the new websites on the new server but can't seem to test them: since I have entered the domain name, when I select "Browse" from within IIS 7 I get the site on my original server. How can I test the configuration of my new sites on the new server before migrating the domain names (e.g. updating the DNS records, etc.)? Any help much appreciated.

    Read the article

  • Rails migration: t.references with alternative name?

    - by marienbad
    So I have a create_table like this for Courses at a School: create_table :courses do |t| t.string :name t.references :course t.timestamps end But I want it to reference TWO other courses, like: has_many :transferrable_as # a Course has_many :same_as # another Course Can I say t.references :transferrable_as, :as => :course ?
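
    Rather than an :as option, note that t.references :foo simply creates an integer foo_id column, so the columns can be named after the associations and pointed back at Course with :class_name. A sketch, assuming each course references a single transferrable/same course (belongs_to); a true has_many in both directions would need a join table:

      create_table :courses do |t|
        t.string     :name
        t.references :transferrable_as   # creates transferrable_as_id
        t.references :same_as            # creates same_as_id
        t.timestamps
      end

      class Course < ActiveRecord::Base
        belongs_to :transferrable_as, :class_name => 'Course'
        belongs_to :same_as,          :class_name => 'Course'
      end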

    Read the article

  • Understanding the code: how the heap is written during process migration in Solaris

    - by akshay
    Hi guys, I need help understanding what this piece of code actually does, as it is part of my project and I am stuck here. The code is from libckpt on Solaris.

      /**********************************
       * function: write_heap
       * args: map_fd -- file descriptor for map file
       *       data_fd -- file descriptor for data file
       * returns: no. of chunks written on success, -1 on failure
       * side effects: writes all included segments of the heap to ckpt files
       * misc.: If we are forking and copyonwrite is set, we will write the heap
       *        from bottom to top, moving the brk pointer up each time so that
       *        we don't get a page copied if the
       * called from: take_ckpt()
       ***********************************/
      static int write_heap(int map_fd, int data_fd)
      {
          Dlist curptr, endptr;
          int no_chunks=0, pn;
          long size;
          caddr_t stop, addr;

          if(ckptflags.incremental){   /*-- incremental checkpointing on? --*/
              endptr = ckptglobals.inc_list->main->flink;

              /*-- for each included chunk of the heap --*/
              for(curptr = ckptglobals.inc_list->main->blink->blink; curptr != endptr; curptr = curptr->blink){

                  /*-- write out the last page in the included chunk --*/
                  stop = curptr->addr;
                  pn = ((long)curptr->stop - (long)sys.DATASTART) / PAGESIZE;
                  if(isdirty(pn)){
                      addr = (caddr_t)max((long)curptr->addr, (long)((pn * PAGESIZE) + sys.DATASTART));
                      size = (long)curptr->stop - (long)addr;
                      debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x, pn = %d\n", addr, addr+size, pn);
                      if(write_chunk(addr, size, map_fd, data_fd) == -1){
                          return -1;
                      }
                      if((int)addr > (int)(&end) && ckptflags.enhanced_fork){
                          brk(addr);
                      }
                      no_chunks++;
                  }

                  /*-- write out all the whole pages in the middle of the chunk --*/
                  for(pn--; pn * PAGESIZE + sys.DATASTART >= stop; pn--){
                      if(isdirty(pn)){
                          addr = (caddr_t)((pn * PAGESIZE) + sys.DATASTART);
                          debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x, pn = %d\n", addr, addr+PAGESIZE, pn);
                          if(write_chunk(addr, PAGESIZE, map_fd, data_fd) == -1){
                              return -1;
                          }
                          if((int)addr > (int)(&end) && ckptflags.enhanced_fork){
                              brk(addr);
                          }
                          no_chunks++;
                      }
                  }

                  /*-- write out the first page in the included chunk --*/
                  addr = curptr->addr;
                  size = ((pn+1) * PAGESIZE + sys.DATASTART) - addr;
                  if(size > 0 && (isdirty(pn))){
                      debug(stderr, "DEBUG: Writing heap from 0x%x to 0x%x\n", addr, addr+size);
                      if(write_chunk(addr, size, map_fd, data_fd) == -1){
                          return -1;
                      }
                      if((int)addr > (int)(&end) && ckptflags.enhanced_fork){
                          brk(addr);
                      }
                      no_chunks++;
                  }
              }
          }
          else{   /*-- incremental checkpointing off! --*/
              endptr = ckptglobals.inc_list->main->blink;

              /*-- for each included chunk of the heap --*/
              for(curptr = ckptglobals.inc_list->main->flink->flink; curptr != endptr; curptr = curptr->flink){
                  debug(stderr, "DEBUG: saving memory from 0x%x to 0x%x\n", curptr->addr, curptr->addr+curptr->size);
                  if(write_chunk(curptr->addr, curptr->size, map_fd, data_fd) == -1){
                      return -1;
                  }
                  if((int)addr > (int)(&end) && ckptflags.enhanced_fork){
                      brk(addr);
                  }
                  no_chunks++;
              }
          }
          return no_chunks;
      }

    Read the article

  • acts_as_revisable 'loses' previous models created before migration

    - by cbrulak
    I have the standard, regular Post sample app. I created some posts, then decided to introduce acts_as_revisable. After following the instructions at http://github.com/rich/acts_as_revisable I see that the previous posts are not appearing in the Post.all call. However, if I use the console and do Post.find_by_sql("SELECT * FROM Post WHERE ID=1") the post shows up. Any ideas? Thanks

    Read the article

  • Automatic database generation / migration with Perl

    - by pistacchio
    Hi, in RoR, Django, or web2py you can "describe" a database (as a set of classes that map to tables) and the framework (having been provided with a connection string to the desired database) generates the tables, fields and relations; in the case of RoR and web2py it also keeps them up to date (e.g., removing a class drops the table, adding a property to the class triggers an "alter table add", etc.). Is there any Perl module that does the same? E.g., it takes a YAML / XML / JSON description of a database as input and modifies / generates the database accordingly? Thanks in advance.

    Read the article

  • Flex Builder AS3 Project migration

    - by Fahim Akhter
    Hi, I am developing a Flash game using AS3, and I chose Flex Builder with an AS3 project. Now I am thinking that if I made it a Flex project instead of an AS3 project, I would get a lot of Flex functionality, like an SWF loader, preloaders, the popup manager, etc. The graphic components would obviously be made in Flash and used through the SWC (avoiding the heavy MXML components). I need to know what other developers think of this approach.

    Read the article

  • PHP/MySQL time zone migration

    - by El Yobo
    I have an application that currently stores timestamps in MySQL DATETIME and TIMESTAMP values. However, the application needs to be able to accept data from users in multiple time zones and show the timestamps in the time zone of other users. As such, this is how I plan to amend the application; I would appreciate any suggestions to improve the approach.
    Database modifications:
      - All TIMESTAMPs will be converted to DATETIME values; this is to ensure consistency in approach and to avoid having MySQL try to do clever things and convert time zones (I want to keep the conversion in PHP, as it involves less modification to the application, and will be more portable when I eventually manage to escape from MySQL).
      - All DATETIME values will be adjusted to convert them to UTC time (currently all in Australian EST).
    Query modifications:
      - All usage of NOW() to be replaced with UTC_TIMESTAMP() in queries, triggers, functions, etc.
    Application modifications:
      - The application must store the time zone and preferred date format (e.g. US vs the rest of the world).
      - All timestamps will be converted according to the user settings before being displayed.
      - All input timestamps will be converted to UTC according to the user settings before being input.
    Additional notes: converting formats will be done at the application level for several main reasons:
      - The approach to converting time zones varies from DB to DB, so handling it there will be non-portable (and I really hope to be migrating away from MySQL some time in the not-too-distant future).
      - MySQL TIMESTAMPs have a limited range of permitted dates (~1970 to ~2038).
      - MySQL TIMESTAMPs have other undesirable attributes, including bizarre auto-update behaviour (if not carefully disabled) and sensitivity to the server zone settings (and I suspect I might screw these up when I migrate to Amazon later in the year).
    Is there anything that I'm missing here, or does anyone have better suggestions for the approach?
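
    For the conversion step itself, a minimal sketch of the PHP side (PHP 5.3+ for createFromFormat; the zone and format strings are examples standing in for the per-user settings):

      <?php
      // display: stored value is UTC, convert to the user's zone and format
      $ts = new DateTime('2010-06-01 04:30:00', new DateTimeZone('UTC'));
      $ts->setTimezone(new DateTimeZone('Australia/Sydney'));
      echo $ts->format('d/m/Y H:i');

      // input: parse in the user's zone, store as UTC in the DATETIME column
      $in = DateTime::createFromFormat('Y-m-d H:i', '2010-06-01 14:30',
                                       new DateTimeZone('Australia/Sydney'));
      $in->setTimezone(new DateTimeZone('UTC'));
      echo $in->format('Y-m-d H:i:s');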

    Read the article

  • CoreData: migration from 2 models

    - by Yola
    I have a general app model. After it was released, anybody could write a plug-in for it; a plug-in can define new DB parts which are merged with my general DB. After some time I'll release a new DB version, and plug-in writers may release new versions of their DBs. So I need to map the old version of the merged DB onto the new version. How can I do this?

    Read the article

  • Windows Azure:broken logging after migration to the new SDK 1.3

    - by cloud.dev
    Hi, please help. I've migrated to the new SDK 1.3 (full-IIS mode). I use the following logging: case TraceLevel.Error: Trace.TraceError(message); break; case TraceLevel.Warning: Trace.TraceWarning(message); break; case TraceLevel.Info: Trace.TraceInformation(message); break; case TraceLevel.Verbose: Trace.WriteLine(message); break; It worked fine until I migrated to the new SDK. Now logging works only for worker roles; a web role can log only inside the OnStart method of WebRole.cs, and in other cases nothing is logged. I understand that full IIS means different domains, so must I somehow call WaIIS.exe from w3wp.exe, or ...?
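
    One commonly reported cause with SDK 1.3 full IIS is that the web role's request code now runs in w3wp.exe under the site's own web.config, so the Azure trace listener configured for the role host no longer applies to it. A sketch of the usual remedy (the assembly version should be checked against the installed SDK) is to register the listener in web.config:

      <system.diagnostics>
        <trace>
          <listeners>
            <add name="AzureDiagnostics"
                 type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
          </listeners>
        </trace>
      </system.diagnostics>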

    Read the article

  • Rake migration aborted

    - by user2537714
    I'm running Ruby 2.0.0 and I installed it correctly. I just added the 'devise' gem, and when I tried to migrate my database changes it wouldn't work: $ rake db:migrate rake aborted! attr_accessible is extracted out of Rails into a gem. Please use new recommended protection model for params(strong_parameters) or add protected_attributes to your Gemfile to use old one. Then, following another Stack Overflow post that recommended installing Bundler, I did that successfully and got this: $ bundle exec rake db:migrate rake aborted! attr_accessible is extracted out of Rails into a gem. Please use new recommended protection model for params(strong_parameters) or add protected_attributes to your Gemfile to use old one. Is anyone up to the challenge to help?
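
    The error message itself offers the two ways out; for a Devise setup that still relies on attr_accessible, the smaller change is the gem route (a sketch):

      # Gemfile
      gem 'protected_attributes'

    Then bundle install and re-run rake db:migrate. The longer-term route is to drop attr_accessible and move to strong parameters.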

    Read the article

  • SSIS Migration - Pulling IDs from dest DB?

    - by TheSciz
    So I'm working on migrating some data to a new server. On the new server, each entry in the MAIN table is assigned a new GUID when the transfer takes place. A few other tables must be migrated, and their records must link to the GUID in the MAIN table. Example:
      WorksheetID --- GUID
      1245677903  --- 1
      AccidentID --- WorksheetID --- GUID
      12121412   --- 1245677903  --- 1
    The GUID is used more for versioning purposes, but my question is this: in SSIS, is there any way to pull the Worksheet's GUID from the destination database and assign it directly to the entries in the 'Accident' table? Or do I have to just dump the data into the source DB and run some scripts to get everything nicely referenced? Any help would be greatly appreciated.

    Read the article
