Search Results

Search found 8929 results on 358 pages for 'multi touch'.

Page 12/358 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • How to Test a Multi-Tenant App with support for multiple domains

    - by asifch
    Hi, we are building a multi-tenant application in which each tenant can have a unique top-level domain. The application is built using ASP.NET 3.5 and SQL Server 2005, and each tenant has its own database. I have seen a number of questions about similar applications on Stack Overflow, but none of them relates to testing. What I want to know is how to test the application in a development environment. Specifically: how can we test that each customer connects to his own database based on the URL, and how can we emulate different domains on the local system, so that e.g. abc.com and xyz.com both resolve to the dev machine's IIS? Any recommendations that might help us in developing such an application would be appreciated.
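
    One common way to emulate different domains on a single development machine (a sketch, assuming a Windows dev box running IIS; the domain names are just examples) is to point the test domains at the loopback address in the hosts file and bind them to the site in IIS via host headers:

        # C:\Windows\System32\drivers\etc\hosts  (editing requires admin rights)
        # Map each fake tenant domain to the local machine.
        127.0.0.1    abc.com
        127.0.0.1    www.abc.com
        127.0.0.1    xyz.com
        127.0.0.1    www.xyz.com

    With host-header bindings for abc.com and xyz.com added to the same IIS site, browsing to http://abc.com and http://xyz.com on the dev machine hits the same application, which can then pick the tenant database from the requested host name (e.g. HttpContext.Current.Request.Url.Host).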

    Read the article

  • Multi tenancy with Unity

    - by Savvas Sopiadis
    Hi everybody! I'm trying to implement this scenario using Unity and I can't figure out how it could be done: the same web application (ASP.NET MVC) should be made accessible to more than one client (multi-tenant). The URL of the web site differentiates the client (I know how to get this). Given the URL, one could set the (let's call it) IConnectionStringProvider, which will afterwards be injected into IRepository and so on. Through which mechanism (using Unity) do I set the IConnectionStringProvider at run time? I have done this in the past using Windsor and IHandlerSelector (see this), but it's my first attempt with Unity. Any help is deeply appreciated! Thanks in advance.
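
    One possible approach, sketched below rather than given as a definitive answer (it assumes Unity 2.x's InjectionFactory; ConnectionStringProvider and TenantMap are hypothetical names standing in for your own types), is to register IConnectionStringProvider with a factory delegate that inspects the current request's host each time the type is resolved:

        // using Microsoft.Practices.Unity; using System.Web;
        var container = new UnityContainer();

        // The factory runs on every resolve, so the provider always reflects
        // the host name of the request currently being processed.
        container.RegisterType<IConnectionStringProvider>(
            new InjectionFactory(c =>
            {
                string host = HttpContext.Current.Request.Url.Host;
                // TenantMap.ConnectionStringFor is a hypothetical lookup
                // from host name to that tenant's connection string.
                return new ConnectionStringProvider(TenantMap.ConnectionStringFor(host));
            }));

        // IRepository takes an IConnectionStringProvider in its constructor,
        // so resolving it pulls in the tenant-specific provider automatically.
        container.RegisterType<IRepository, Repository>();

    With this in place nothing tenant-specific needs to be passed around explicitly; the container re-evaluates the factory per resolve, which plays roughly the role IHandlerSelector did in the Windsor setup.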

    Read the article

  • Injecting tenant repositories with StructureMap in a multi-tenant MVC application

    - by FreshCode
    I'm implementing StructureMap in a multi-tenant ASP.NET MVC application to inject instances of my tenant repositories that retrieve data based on an ITenantContext interface. The Tenant in question is determined from RouteData in a base controller's OnActionExecuting. How do I tell StructureMap to construct TenantContext(tenantID), where tenantID is derived from my RouteData or some base controller property? Base controller: given the route {tenant}/{controller}/{action}/{id}, my base controller retrieves and stores the correct Tenant based on the {tenant} URL parameter. Using Tenant, a repository with an ITenantContext can be constructed to retrieve only data that is relevant to that tenant. Based on the other DI questions, could AbstractFactory be a solution?
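
    A sketch of one way this is often wired up (assuming StructureMap 2.6-style syntax; TenantContext comes from the question, while TenantStore.IdFor is a hypothetical lookup from the {tenant} URL segment to a tenant id): register ITenantContext with a lambda that reads the {tenant} route value of the current request, so every repository resolved during that request receives the right context.

        // using StructureMap; using System.Web; using System.Web.Routing;
        ObjectFactory.Initialize(x =>
        {
            // Evaluated on each resolve: read the {tenant} route value of the
            // current request and build the context from it.
            // (A per-request lifecycle would normally be applied here as well.)
            x.For<ITenantContext>().Use(ctx =>
            {
                var httpContext = new HttpContextWrapper(HttpContext.Current);
                var routeData = RouteTable.Routes.GetRouteData(httpContext);
                var tenantSlug = (string)routeData.Values["tenant"];
                return new TenantContext(TenantStore.IdFor(tenantSlug));
            });
        });

    An abstract factory works too, but pushing the lookup into the container registration keeps the controllers and repositories unaware of where the tenant id comes from.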

    Read the article

  • How can I create and manage a multi-tenant ASP MVC application

    - by Wizzarding
    Hi, I want to create a multi-tenant application that uses the hostname to determine the customer, for example: CustomerOne.myapp.com, AnotherCo.myapp.com, AndOneMore.myapp.com, ... I can handle the database and security side with no problems, and I can also get the hostname from the URL, but what I am struggling with is how to create the basic plumbing that would allow a new customer to sign up online, provide their company name, and have the application create the new URL, ready to be used straight away. Can anyone help? Thanks, Rob.
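
    A sketch of the usual plumbing (assumptions: a wildcard DNS record *.myapp.com pointing at the server and a wildcard or empty host-header binding in IIS, so no per-customer DNS or IIS change is needed; TenantResolver is an illustrative name): sign-up just stores the chosen subdomain in the tenants table, and every request resolves the tenant from the host name.

        // using System;
        // Resolving the tenant key from the request host, e.g. in a base controller.
        public static class TenantResolver
        {
            public static string GetTenantKey(Uri requestUrl)
            {
                string host = requestUrl.Host;                 // "customerone.myapp.com"
                const string suffix = ".myapp.com";

                if (host.EndsWith(suffix, StringComparison.OrdinalIgnoreCase))
                    return host.Substring(0, host.Length - suffix.Length);   // "customerone"

                return null;   // unknown host: fall back to the sign-up/marketing site
            }
        }

    Because the wildcard DNS entry and IIS binding already cover every possible subdomain, the new URL (e.g. https://newco.myapp.com) works as soon as the sign-up transaction commits the tenant row.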

    Read the article

  • performance monitoring tools for multi-tenant web application

    - by Anton
    We have a need to monitor the performance of our Java web app, and we are looking for some tools that can help us with this task. The major difficulty is that we are a SaaS provider with a multi-tenant server architecture and hundreds of customers running on the same hardware. So far we have tried commercial products like DynaTrace and Coradiant, but unfortunately they haven't gotten the job done. What we need is a simple report that would tell us whether we had performance problems on each customer site in a specified period of time. Mostly it will be response time per customer, but we will also need some more specifics based on the URLs. Please let me know if someone has had any experience with setting up such monitoring. Thanks!

    Read the article

  • Data-separation in a Symfony Multi-tenant app using Doctrine

    - by Prasad
    I am trying to implement a multi-tenant application, that is: data for all clients lives in a single database, and each shared table has a tenant_id field to separate the data. I wish to achieve data separation by adding where('tenant_id = ', $user->getTenantID()) (pseudo-code) to all SELECT queries. I could not find any solution up-front, but here are the possible approaches I am considering:
    1) Crude approach: customizing all fetchAll and fetchOne functions in every class (I will go mad!)
    2) Using listeners: possibly coding for the preDqlSelect event and adding the 'where' to all queries
    3) Overriding buildQuery(): could not find an example of this for the front-end
    4) Implementing contentformfilter: again, I need a pointer
    I would appreciate it if someone could validate these and comment on their efficiency and suitability. Also, if anyone has achieved multi-tenancy using another strategy, please share. Thanks

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:
    · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)
    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:
    · Global Transaction Identifiers coupled with MySQL utilities for automatic failover/switchover and slave promotion
    · Crash Safe Slaves and Binlog
    · Optimized Row Based Replication
    · Replication Event Checksums
    · Time Delayed Replication
    These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article. Back to the benchmark - details are as follows.
    Environment: The test environment consisted of two Linux servers: one running the replication master and one running the replication slave. Only the slave was involved in the actual measurements, and it was based on the following configuration:
    · Hardware: Oracle Sun Fire X4170 M2 Server
    · CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    · OS: 64-bit Oracle Enterprise Linux 6.1
    · Memory: 48 GB
    Test Procedure - Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:
        CREATE TABLE `sbtest` (
          `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `k` int(10) unsigned NOT NULL DEFAULT '0',
          `c` char(120) NOT NULL DEFAULT '',
          `pad` char(60) NOT NULL DEFAULT '',
          PRIMARY KEY (`id`),
          KEY `k` (`k`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog was noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. Each sysbench client executed 10,000 "update key" statements (UPDATE sbtest SET k=k+1 WHERE id = <random row>). In total, this generated 100,000 update statements to later replicate during the test itself.
    Test Methodology: The number of slave workers to test with was configured using SET GLOBAL slave_parallel_workers=<workers>. Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS).
    Test Reset: The test-reset cycle was implemented as follows:
    · the slave was stopped
    · the slave data directory was replaced with the previous backup
    · the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog
    The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for 0-24 workers, and the QPS metric was calculated and averaged for each worker count.
    MySQL Configuration: The relevant configuration settings used for MySQL are as follows:
        binlog-format=STATEMENT
        relay-log-info-repository=TABLE
        master-info-repository=TABLE
    As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is:
    · 0 worker threads: current (i.e. single-threaded) sequential mode; 1 x IO thread and 1 x SQL thread; the SQL thread both reads and executes the events
    · 1 worker thread: sequential mode; 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread; the coordinator reads the event and hands it to the worker, who executes it
    · 2+ worker threads: parallel execution; 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads; the coordinator reads events and hands them to the workers, who execute them
    Results: Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads). (Figure 1: 5x Higher Performance with Multi-Threaded Slaves.) The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented; the raw numbers behind this graph are reported in the Appendix section of this post. (Figure 2: Detailed Results.) As the results show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results:
    · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution.
    · As expected, having more workers than schemas adds no visible benefit.
    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE and master-info-repository=TABLE. For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.
    Conclusion: As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions.
    Appendix – Detailed Results (the raw results table is not reproduced in this excerpt)
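
    As a side note, a sketch of what one benchmark iteration looks like as MySQL statements on the slave, reconstructed from the procedure described above (the binlog file name and positions are placeholders, and restoring the backed-up data directory happens outside MySQL):

        -- Configure the number of worker threads for this run.
        STOP SLAVE;
        SET GLOBAL slave_parallel_workers = 10;

        -- Reposition replication to just before the 100k update statements.
        CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=<position before updates>;

        -- Pull all pending events into the relay log first.
        START SLAVE IO_THREAD;

        -- Start the benchmark clock, then let the coordinator/worker threads apply the events.
        START SLAVE SQL_THREAD;

        -- Block until the slave has executed up to the master's final position;
        -- QPS = 100,000 / elapsed wall-clock time.
        SELECT MASTER_POS_WAIT('mysql-bin.000001', <position after updates>);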

    Read the article

  • Core Data multi-threading

    - by JK
    My app starts by presenting a table view whose data source is a Core Data SQLite store. When the app starts, a secondary thread with its own store controller and context is created to obtain updates from the web for data in the store. However, the fetched results controller is not notified of any resulting changes to the store (I presume because it has its own coordinator), and consequently the table is not updated with store changes. What would be the most efficient way to refresh the context on the main thread? I am considering tracking the objectIDs of any objects changed on the secondary thread, sending those to the main thread when the secondary thread completes, and invoking "[context refreshObject:....] Any help would be greatly appreciated.

    Read the article

  • cocos2d tab view for multiplayer

    - by godzilla
    I am currently developing a card game for the iPhone using cocos2d. I need a tab view, with each tab representing a player and his/her set of cards. Currently I have a single view representing just one player. It seems as though cocos2d is not really built to have multiple views, and doing this would require a serious amount of hacking around with the code. What would be the most efficient way to accomplish this?

    Read the article

  • Multi tenant membership provider ASP.NET MVC

    - by Masna
    Hello, I'm building a multi-tenant app with ASP.NET MVC and have a problem with validating users. The situation I have:
    - a table User(ID, Name, FirstName, Email). This table exists so that a user who is registered with two tenants doesn't need to log in again.
    - a table TenantUser(ID, TenantID, UserID (FK to the User table), UserName, Loginname, Password, Active). This table contains the login and password for one tenant.
    Example: UserX is registered in TenantA and TenantB. UserX logs in on TenantA with his login and password for TenantA. The system verifies whether the login and password are correct in the TenantUser table, then validates UserX, whose UserID corresponds to the ID in the User table. UserX goes to TenantB and is automatically logged in.
    My problem: how can I create a custom provider so I can check the login and password within a tenant? For example: public abstract bool ValidateUser(string username, string password); How can I tell my provider which tenant the user is on? How can I change this into something like public override bool ValidateUser(string username, string password, string tenant)? Or what is another way to solve this issue?
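
    Since MembershipProvider.ValidateUser's signature is fixed by the framework, one common workaround (a sketch only; it assumes the tenant can be derived from the request host name, and TenantUserStore is a hypothetical data-access helper) is to resolve the tenant inside the provider instead of passing it in:

        // using System.Web; using System.Web.Security;
        public class TenantMembershipProvider : MembershipProvider
        {
            public override bool ValidateUser(string username, string password)
            {
                // The signature can't change, so derive the tenant from the
                // current request (host name, sub-domain, route value, ...).
                string tenant = HttpContext.Current.Request.Url.Host;

                // Hypothetical lookup: check Loginname/Password in the
                // TenantUser table for this tenant only.
                return TenantUserStore.IsValid(tenant, username, password);
            }

            // The remaining abstract members of MembershipProvider still have
            // to be overridden for this to compile; they are omitted from the sketch.
        }

    Another pattern sometimes used is to encode the tenant into the user name handed to ValidateUser (e.g. "tenantA\userx") and split it apart inside the provider, which avoids touching HttpContext from the provider.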

    Read the article

  • Kohana multi language website

    - by Sobek
    I'm trying to set up a multi-language website with Kohana v3, following this tutorial: http://kerkness.ca/wiki/doku.php?id=example_of_a_multi-language_website Routing to a controller or action such as website/controller/action seems to work, as the URL is properly redirected to website/lang/controller/action. However, this is not working for AJAX requests; I have to manually edit the URL with the appropriate language to successfully retrieve the data. This also applies to anchors on the HTML page. In addition to this problem, the overflow parameter 'id' also doesn't work: it takes the 'lang' variable as its parameter. I have set up my default route just like in the tutorial, i.e.: Route::set('default', '((<lang>)(/)(<controller>)(/<action>(/<id>)))', array('lang' => "({$langs_abr})", 'id' => '.+')) ->defaults(array('lang' => $default_lang, 'controller' => 'welcome', 'action' => 'index')); Any help is much appreciated! Cheers

    Read the article

  • Multi-tenant Access Control: Repository or Service layer?

    - by FreshCode
    In a multi-tenant ASP.NET MVC application based on Rob Conery's MVC Storefront, should I be filtering the tenant's data in the repository or in the service layer? 1. Filter the tenant's data in the repository: public interface IJobRepository { IQueryable<Job> GetJobs(short tenantId); } 2. Let the service filter the repository data by tenant: public interface IJobService { IList<Job> GetJobs(short tenantId); } My gut feeling says to do it in the service layer (option 2), but it could be argued that each tenant should in essence have their own "virtual repository" (option 1), where this responsibility lies with the repository. Which is the most elegant approach: option 1, option 2, or is there a better way? Update: I tried the proposed idea of filtering at the repository, but the problem is that my application provides the tenant context (via sub-domain) and only interacts with the service layer; passing the context all the way down to the repository layer is a mission. So instead I have opted to filter my data at the service layer. I feel that the repository should represent all data physically available in the repository, with appropriate filters for retrieving tenant-specific data, to be used by the service layer. Final update: I ended up abandoning this approach due to the unnecessary complexities. See my answer below.
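
    To make option 2 concrete, a minimal sketch (it assumes a variant of the posted repository that exposes an unfiltered IQueryable<Job> GetJobs(), plus a TenantId property on Job; both are assumptions, not part of the question):

        // using System.Collections.Generic; using System.Linq;
        public class JobService : IJobService
        {
            private readonly IJobRepository _repository;

            public JobService(IJobRepository repository)
            {
                _repository = repository;
            }

            public IList<Job> GetJobs(short tenantId)
            {
                // The repository returns everything; the service is the single
                // place where tenant scoping (and other business rules) is applied.
                return _repository.GetJobs()
                                  .Where(job => job.TenantId == tenantId)
                                  .ToList();
            }
        }

    The trade-off the author describes in the updates still applies: the repository stays generic, but every service method must remember to apply the filter.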

    Read the article

  • How to avoid mouse move on Touch

    - by VirtualBlackFox
    I have a WPF application that can be used both with a mouse and with touch. I disable all the Windows "enhancements" so that I only get touch events: Stylus.IsPressAndHoldEnabled="False" Stylus.IsTapFeedbackEnabled="False" Stylus.IsTouchFeedbackEnabled="False" Stylus.IsFlicksEnabled="False" The result is that a click behaves like I want, except on two points: (1) the small "touch" cursor (the little white star) appears where I clicked and while dragging; this is completely useless, as the user's finger is already at that location and no feedback is required (except my element potentially changing color if it is actionable); (2) elements stay in the "hover" state after the movement/click ends. Both are consequences of the fact that while Windows transmits touch events correctly, it still moves the mouse to the last main touch event. I don't want Windows to move the mouse at all when I use touch inside my application. Is there a way to completely avoid that? Notes: Handling the touch events changes nothing. Using SetCursorPos to move the mouse away makes the cursor blink and isn't really user-friendly. Disabling the touch panel as an input device disables all events completely (and I would also prefer an application-local solution, not a system-wide one). I don't care if the solution involves COM/PInvoke or is provided in C/C++; I'll translate. If it is necessary to patch or hook some Windows DLLs, so be it; the software will run on a dedicated device anyway. I'm investigating the Surface SDK, but I doubt it will offer a solution: since a Surface is a pure-touch device, there is no risk of bad interaction with the mouse.

    Read the article

  • Core i7 on linux loses its multithreading capability after suspend

    - by rafak
    On my Debian Linux system with a Core i7 920, each time I resume after the command "pm-suspend" (suspend to RAM), multithreading capabilities almost disappear. More specifically, two distinct programs can use 2 distinct cores at full rate, but a single program is limited to only one core (both for one instance of a multithreaded program and for multiple instances of a single-threaded program, e.g. "make -j 4" for gcc). So I end up rebooting the system. Any help appreciated!

    Read the article

  • Silverlight TV 19: Hidden Gems from MIX10, UFC's Multi-Touch App

    John ran into Silverlight MVP Ward Bell of IdeaBlade while at MIX10 (how could anyone miss him!). Ward was kind enough to sit and talk with John to show off the multi-touch application his company wrote for UFC using Silverlight. It uses multi-touch, Caliburn, MVVM, and, of course, Silverlight! Relevant links: John's Blog, Ward's Blog, Silverlight 4 RC Features (or download here). Follow us on Twitter @SilverlightTV. Learn more about Silverlight with the new Silverlight Training Course.

    Read the article

  • The Windows 7 Touch Pack is available: Microsoft offers its multitouch applications for free

    The Windows 7 Touch Pack is available: Microsoft offers its multitouch applications for free. Microsoft has just released the Windows 7 Touch Pack, a set of free applications that use the system's multitouch capabilities. Here is what the pack contains: Surface Globe: an application built on the Virtual Earth 3D engine that lets you explore the planet with your fingertips. Surface Collage: lets you select a folder and arrange and present its images as you like. Surface Lagoon: a screen saver you can interact with - stir up ripples in the water, attract the fish, etc. Blackboard: a game ...

    Read the article

  • Java ME Tech Holiday Gift Idea #3: Kindle Touch Wi-Fi

    - by hinkmond
    Here's a Java ME tech-enabled device holiday gift idea: the venerable Amazon Kindle Touch with built-in Wi-Fi. Niiiice! See: Java ME Tech Gift Idea #3. Here's a quote:
    + Most-advanced E Ink display, now with multi-touch
    + New sleek design - 8% lighter, 11% smaller, holds 3,000 books
    + Only e-reader with text-to-speech, audiobooks and mp3 support
    + Built-in Wi-Fi - get books in 60 seconds
    If you want to give someone special a cool device, you want to give something with Java ME technology. Give only the best this holiday season! Hinkmond

    Read the article

  • Optimum number of threads while multitasking

    - by Gun Deniz
    I know similar questions have been asked, but I think my case is a little bit different. Let's say I have a computer with 8 cores and infinite memory, running a Linux OS. I have calculation software called Gaussian that can take advantage of multithreading, so I set its thread count to 8 for a single calculation for maximum speed. However, I really can't decide what to do when I need to run, for instance, 8 calculations simultaneously. In that case, should I set the thread count to 1 (a total of 8 threads spawned across 8 processes) or keep it at 8 (a total of 64 threads across 8 processes) for each job? Does it really matter much? A related question: does the OS automatically handle assigning each thread to a different core?

    Read the article

  • Touch Event Does Not Work With Pinch Zoom

    - by Siddharth
    I have implemented pinch-zoom functionality for my tower defense game. I manage a separate entity to display all the towers; from that entity the player selects a tower and drags it to the position where he wants to place it. I also attached the entity to the HUD, so that when the user scrolls and zooms the region the towers remain visible at all times. Basically, I created the separate entity only to show and hide the towers, so I can manage it easily. My problem is that when I have not scrolled or zoomed, touching and dragging a tower works fine, but once I zoom and scroll the scene the tower's touch event is not called, so the player cannot drag and drop it to the desired position. Can anybody please help me figure this out?

    Read the article

  • executing a script from maven inside a multi module project

    - by Roman
    Hi everyone. I have a multi-module project. At the beginning of each build I would like to run a bat file, so I did the following: <profile> <id>deploy-db</id> <build> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.1.1</version> </plugin> </plugins> <pluginManagement> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.1.1</version> <executions> <execution> <phase>validate</phase> <goals> <goal>exec</goal> </goals> <inherited>false</inherited> </execution> </executions> <configuration> <executable>../database/schemas/import_databases.bat</executable> </configuration> </plugin> </plugins> </pluginManagement> </build> </profile> When I run mvn verify -Pdeploy-db from the root, I get this script executed over and over again, in each of my modules. I want it to be executed only once, in the root module. What am I missing? Thanks
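
    One thing worth trying (a hedged suggestion rather than a verified fix for this exact POM): declare the execution directly on the plugin in the root <plugins> section and set <inherited>false</inherited> at the plugin level, so child modules do not inherit the binding and the script runs only once, in the root module:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>exec-maven-plugin</artifactId>
          <version>1.1.1</version>
          <!-- plugin-level inherited=false: child modules skip this plugin entirely -->
          <inherited>false</inherited>
          <executions>
            <execution>
              <phase>validate</phase>
              <goals>
                <goal>exec</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <executable>../database/schemas/import_databases.bat</executable>
          </configuration>
        </plugin>

    Note that with the script running only from the root, the relative path to import_databases.bat may need adjusting.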

    Read the article

  • iPhone - Track three touches

    - by Striker
    Suppose you have three points of contact on the iPhone screen and one of those touches moves... The touchesMoved method will be invoked, and [[event touchesForView:self] count] will be equal to 3 because there are three touches for the event, but how can you distinguish between the touches? For example, how do you find out whether it was the first, second, or third touch that moved? Thanks.

    Read the article

  • Stop Saying "Multi-Channel!"

    - by David Dorf
    I keep hearing the term "multi-channel" in our industry, but it's time to move on. It kinda reminds me of the term "ECR" or electronic cash register. Long ago ECR was a leading-edge term, but nowadays it's rarely used because it's table stakes. After all, what cash register today isn't electronic? The same logic applies to multi-channel, at least when we're talking about tier-1 and tier-2 retailers. If you're still talking about multi-channel retailing, you're in big trouble. Some have switched over to the term "cross-channel," and that's a step in the right direction but still falls short. It's kinda like saying, "I upgraded my ECR to accept debit cards!" Yawn. Who hasn't? Today's retailers need to focus on omni-channel, a term I first heard from my friends over at RSR but which was originally coined at IDC. First, retailers added e-commerce to their store and catalog channels, yielding multi-channel retailing. Consumers could use the channel that worked best for them. Then some consumers wanted to combine channels with features like buy-on-the-Web, pick-up-in-the-store. Thus began the cross-channel initiatives to break down the silos and enable the channels to communicate with each other. But the multi-channel architecture is full of duplication that thwarts efforts to provide a consistent experience. Each channel has its own cart, its own pricing, and often its own CRM. This was an outgrowth of trying to bring the independent channels to market quickly. Rather than reusing and rebuilding existing components to meet the new demands, silos were created that continue to exist today. Today's consumers want omni-channel retailing. They want to interact with brands in a consistent manner that is channel-transparent, yet optimized for that particular interaction. The diagram below, from the soon-to-be-released NRF Mobile Blueprint v2, shows this progression. For retailers to provide an omni-channel experience, there needs to be one logical representation of products, prices, promotions, and customers across all channels. The only thing that varies is the presentation of the content based on the delivery mechanism (e.g. shelf labels, mobile phone, web site, print, etc.), and often these mechanisms can be combined in various ways. I'm looking forward to the day in which I can use my phone to scan QR codes in a catalog to create a shopping cart of items, then do some further research on the retailer's Web site and be told about related items that might interest me, easily solicit opinions and reviews from social sites, and finally enter the store to pick up my items, knowing that any applicable coupons have been applied. In this scenario, I, the consumer, am dealing with a single brand that is aware of me and my needs throughout the entire transaction. Nirvana.

    Read the article

  • Creating a multi-tenant application using PostgreSQL's schemas and Rails

    - by ramon.tayag
    Stuff I've already figured out: I'm learning how to create a multi-tenant application in Rails that serves data from different schemas based on what domain or subdomain is used to view the application. I already have a few concerns answered:
    - How can you get subdomain-fu to work with domains as well? Here's someone that asked the same question, which leads you to this blog.
    - What database, and how will it be structured? Here's an excellent talk by Guy Naor, and a good question about PostgreSQL and schemas. I already know my schemas will all have the same structure; they will differ in the data they hold.
    - So, how can you run migrations for all schemas? Here's an answer.
    Those three points cover a lot of the general stuff I need to know. However, in the next steps I seem to have many ways of implementing things. I'm hoping that there's a better, easier way.
    Finally, to my question: when a new user signs up, I can easily create the schema. However, what would be the best and easiest way to load the structure that the rest of the schemas already have? Here are some questions/scenarios that might give you a better idea.
    - Should I pass it on to a shell script that dumps the public schema into a temporary one, and imports it back to my main database (pretty much like what Guy Naor says in his video)? Here's a quick summary/script I got from the helpful #postgres on freenode. While this will probably work, I'm gonna have to do a lot of stuff outside of Rails, which makes me a bit uncomfortable, which also brings me to the next question.
    - Is there a way to do this straight from Ruby on Rails? Like create a PostgreSQL schema, then just load the Rails database schema (schema.rb - I know, it's confusing) into that PostgreSQL schema.
    - Is there a gem/plugin that has these things already? Methods like "create_pg_schema_and_load_rails_schema(the_new_schema_name)". If there's none, I'll probably work at making one, but I'm doubtful about how well tested it'll be with all the moving parts (especially if I end up using a shell script to create and manage new PostgreSQL schemas).
    Thanks, and I hope that wasn't too long!
    UPDATE May 11, 2010 11:26 GMT+8: Since last night I've been able to get a method to work that creates a new schema and loads schema.rb into it. Not sure if what I'm doing is correct (it seems to work fine, so far), but it's a step closer at least. If there's a better way please let me know.

        module SchemaUtils
          def self.add_schema_to_path(schema)
            conn = ActiveRecord::Base.connection
            conn.execute "SET search_path TO #{schema}, #{conn.schema_search_path}"
          end

          def self.reset_search_path
            conn = ActiveRecord::Base.connection
            conn.execute "SET search_path TO #{conn.schema_search_path}"
          end

          def self.create_and_migrate_schema(schema_name)
            conn = ActiveRecord::Base.connection
            schemas = conn.select_values("select * from pg_namespace where nspname != 'information_schema' AND nspname NOT LIKE 'pg%'")
            if schemas.include?(schema_name)
              tables = conn.tables
              Rails.logger.info "#{schema_name} exists already with these tables #{tables.inspect}"
            else
              Rails.logger.info "About to create #{schema_name}"
              conn.execute "create schema #{schema_name}"
            end

            # Save the old search path so we can set it back at the end of this method
            old_search_path = conn.schema_search_path

            # Tried to set the search path like in the methods above (from Guy Naor)
            #   conn.execute "SET search_path TO #{schema_name}"
            # But the connection itself seems to remember the old search path.
            # If set this way, it works.
            conn.schema_search_path = schema_name

            # Directly from databases.rake.
            # In Rails 2.3.5 databases.rake can be found in railties/lib/tasks/databases.rake
            file = "#{Rails.root}/db/schema.rb"
            if File.exists?(file)
              Rails.logger.info "About to load the schema #{file}"
              load(file)
            else
              abort %{#{file} doesn't exist yet. It's possible that you just ran a migration!}
            end

            Rails.logger.info "About to set search path back to #{old_search_path}."
            conn.schema_search_path = old_search_path
          end
        end
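
    For completeness, a short hypothetical usage sketch of the module defined above (account.subdomain is an assumed attribute; the method names are exactly those from the excerpt):

        # After a new tenant signs up, create and migrate a dedicated schema.
        schema_name = "tenant_#{account.subdomain}"   # e.g. "tenant_acme"
        SchemaUtils.create_and_migrate_schema(schema_name)

        # Later, per request, scope queries to that tenant's schema...
        SchemaUtils.add_schema_to_path(schema_name)
        # ...run ActiveRecord queries here, then restore the default path.
        SchemaUtils.reset_search_path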

    Read the article
