Search Results

Search found 5809 results on 233 pages for 'curl multi'.


  • Injecting tenant repositories with StructureMap in a multi-tenant MVC application

    - by FreshCode
    I'm implementing StructureMap in a multi-tenant ASP.NET MVC application to inject instances of my tenant repositories that retrieve data based on an ITenantContext interface. The Tenant in question is determined from RouteData in a base controller's OnActionExecuting. How do I tell StructureMap to construct TenantContext(tenantID); where tenantID is derived from my RouteData or some base controller property? Base Controller: Given the following route: {tenant}/{controller}/{action}/{id}, my base controller retrieves and stores the correct Tenant based on the {tenant} URL parameter. Using Tenant, a repository with an ITenantContext can be constructed to retrieve only data that is relevant to that tenant. Based on the other DI questions, could AbstractFactory be a solution?
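
    A hedged sketch of one way this might be wired up in StructureMap 2.x: register ITenantContext with a factory lambda so the tenant is resolved from the current request on each resolution. The TenantResolver helper below is hypothetical and would wrap the same RouteData lookup the base controller performs.

        // StructureMap 2.x-style registration (a sketch, not the poster's code)
        ObjectFactory.Initialize(x =>
        {
            x.For<ITenantContext>()
             .HttpContextScoped() // one instance per request
             .Use(() => new TenantContext(TenantResolver.CurrentTenantId()));
        });

        // hypothetical helper: pull {tenant} out of the current route
        public static class TenantResolver
        {
            public static int CurrentTenantId()
            {
                var context = new HttpContextWrapper(HttpContext.Current);
                var route = RouteTable.Routes.GetRouteData(context);
                var slug = (string)route.Values["tenant"];
                return TenantStore.IdFromSlug(slug); // hypothetical lookup
            }
        }

    With a registration like this, repositories that take an ITenantContext constructor argument receive the request's tenant automatically, which may avoid hand-rolling an abstract factory.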

    Read the article

  • Multi tenancy with Unity

    - by Savvas Sopiadis
    Hi everybody! I'm trying to implement this scenario using Unity and I can't figure out how it could be done: the same web application (ASP.NET MVC) should be made accessible to more than one client (multi-tenant). The URL of the web site will differentiate the client (this I know how to get). So from the URL one could set the (let's call it) IConnectionStringProvider parameter (which will afterward be injected into IRepository and so on). Through which mechanism (using Unity) do I set the IConnectionStringProvider parameter at run time? I have done this in the past using Windsor & IHandlerSelector (see this) but it's my first attempt using Unity. Any help is deeply appreciated! Thanks in advance
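
    A hedged sketch using Unity 2.0's InjectionFactory, whose lambda runs on every Resolve, so the connection string can be chosen from the current request's host; UrlConnectionStringProvider is a hypothetical implementation of IConnectionStringProvider.

        // registration (sketch): pick the tenant's connection string per request
        var container = new UnityContainer();
        container.RegisterType<IConnectionStringProvider>(
            new InjectionFactory(c =>
                new UrlConnectionStringProvider(HttpContext.Current.Request.Url.Host)));

        // anything depending on IConnectionStringProvider now receives the
        // provider built for the requesting tenant's URL
        container.RegisterType<IRepository, Repository>();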

    Read the article

  • How can I create and manage a multi-tenant ASP MVC application

    - by Wizzarding
    Hi, I want to create a multi-tenant application that uses the hostname to determine the customer. For example: CustomerOne.myapp.com AnotherCo.myapp.com AndOneMore.myapp.com ... I can do the database and security side with no problems, I can also get the hostname from the URL, but what I am struggling to find out is how to create the basic plumbing that would allow a new customer to sign up online, provide their company name, and for the application to create the new URL, ready to be used straight away. Can anyone help? Thanks, Rob.
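
    A hedged sketch of the request-side plumbing: a base controller that maps the hostname to a tenant, assuming wildcard DNS (*.myapp.com) points every subdomain at the same site, so "creating the new URL" at signup reduces to inserting a tenant record. TenantService is a hypothetical lookup.

        public class TenantController : Controller
        {
            protected Tenant CurrentTenant { get; private set; }

            protected override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                // e.g. "customerone.myapp.com" -> "customerone"
                var host = filterContext.HttpContext.Request.Url.Host;
                var subdomain = host.Split('.')[0];

                CurrentTenant = TenantService.FindBySubdomain(subdomain); // hypothetical
                if (CurrentTenant == null)
                    throw new HttpException(404, "Unknown tenant");

                base.OnActionExecuting(filterContext);
            }
        }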

    Read the article

  • performance monitoring tools for multi-tenant web application

    - by Anton
    We have a need to monitor the performance of our Java web app and are looking for tools that can help us with this task. The major difficulty is that we are a SaaS provider with a multi-tenant server architecture and hundreds of customers running on the same hardware. So far we tried commercial products like DynaTrace and Coradiant, but unfortunately they haven't got the job done so far. What we need is a simple report which would tell us if we had performance problems on each customer site in a specified period of time. Mostly it will be response time per customer, but we will also need some more specifics based on the URLs. Please let me know if someone has had any experience with setting up such monitoring. Thanks!
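
    In the absence of an off-the-shelf fit, a hedged sketch of the do-it-yourself baseline: a servlet filter that logs elapsed time keyed by customer, which a periodic job could roll up into the per-tenant report described above. The tenant-from-hostname rule is an assumption.

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletRequest;

        public class TenantTimingFilter implements Filter {
            public void init(FilterConfig config) {}
            public void destroy() {}

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest http = (HttpServletRequest) req;
                String tenant = http.getServerName();   // assumes tenant == hostname
                String url = http.getRequestURI();
                long start = System.currentTimeMillis();
                try {
                    chain.doFilter(req, res);
                } finally {
                    long elapsed = System.currentTimeMillis() - start;
                    // swap for a real metrics sink; one log line per request is
                    // enough to build time-windowed, per-URL reports later
                    System.out.println(tenant + " " + url + " " + elapsed + "ms");
                }
            }
        }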

    Read the article

  • Data-separation in a Symfony Multi-tenant app using Doctrine

    - by Prasad
    I am trying to implement a multi-tenant application, that is: data of all clients in a single database; each shared table has a tenant_id field to separate data. I wish to achieve data separation by adding where('tenant_id = ', $user->getTenantID()) (pseudo-code) to all SELECT queries. I could not find any solution up-front, but here are the possible approaches I am considering. 1) Crude approach: customizing all fetchAll and fetchOne functions in every class (I will go mad!) 2) Using listeners: possibly coding for the preDqlSelect event and adding the 'where' to all queries 3) Overriding buildQuery(): could not find an example of this for the front-end 4) Implementing contentformfilter: again, I need a pointer. Would appreciate if someone could validate these and comment on efficiency and suitability. Also, if anyone has achieved multi-tenancy using another strategy, please share. Thanks
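
    A hedged sketch of option 2 in Doctrine 1.x, modeled on the built-in SoftDelete behavior's listener: a record listener whose preDqlSelect appends the tenant condition to every DQL SELECT. It requires DQL callbacks to be enabled; CurrentUser::getTenantID() is a hypothetical accessor.

        // listener (sketch): add "alias.tenant_id = ?" to each DQL SELECT
        class TenantFilterListener extends Doctrine_Record_Listener
        {
            public function preDqlSelect(Doctrine_Event $event)
            {
                $params = $event->getParams();
                $field  = $params['alias'] . '.tenant_id';
                $query  = $event->getQuery();
                if (!$query->contains($field)) {
                    $query->addWhere($field . ' = ?', CurrentUser::getTenantID());
                }
            }
        }

        // bootstrap: DQL callbacks are off by default in Doctrine 1.x
        Doctrine_Manager::getInstance()
            ->setAttribute(Doctrine_Core::ATTR_USE_DQL_CALLBACKS, true);

        // in each tenant-scoped model's setUp():
        //   $this->addListener(new TenantFilterListener());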

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to: improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data), and reduced risk of data loss should the master fail before replicating all events in its binary log (binlog).

    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include: Global Transaction Identifiers coupled with MySQL utilities for automatic failover/switchover and slave promotion; Crash Safe Slaves and Binlog; Optimized Row Based Replication; Replication Event Checksums; and Time Delayed Replication. These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article. Back to the benchmark - details are as follows.

    Environment: The test environment consisted of two Linux servers: one running the replication master and one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration: Hardware: Oracle Sun Fire X4170 M2 Server; CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz; OS: 64-bit Oracle Enterprise Linux 6.1; Memory: 48 GB.

    Test Procedure, Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:

        CREATE TABLE `sbtest` (
          `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `k` int(10) unsigned NOT NULL DEFAULT '0',
          `c` char(120) NOT NULL DEFAULT '',
          `pad` char(60) NOT NULL DEFAULT '',
          PRIMARY KEY (`id`),
          KEY `k` (`k`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog was noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. Each sysbench client executed 10,000 "update key" statements (UPDATE sbtest set k=k+1 WHERE id = <random row>). In total, this generated 100,000 update statements to later replicate during the test itself.

    Test Methodology: The number of slave workers to test with was configured using SET GLOBAL slave_parallel_workers=<workers>. Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS).

    Test Reset: The test-reset cycle was implemented as follows: the slave was stopped; the slave data directory was replaced with the previous backup; the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog. The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for 0-24 workers and the QPS metric calculated and averaged for each worker count.

    MySQL Configuration: The relevant configuration settings used for MySQL are as follows: binlog-format=STATEMENT; relay-log-info-repository=TABLE; master-info-repository=TABLE. As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is: 0 worker threads: current (i.e. single-threaded) sequential mode, 1 x IO thread and 1 x SQL thread, where the SQL thread both reads and executes the events. 1 worker thread: sequential mode, 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread, where the coordinator reads the event and hands it to the worker, who executes it. 2+ worker threads: parallel execution, 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads, where the coordinator reads events and hands them to the workers, who execute them.

    Results: Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads).

    Figure 1: 5x Higher Performance with Multi-Threaded Slaves

    The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post.

    Figure 2: Detailed Results

    As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results: running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution, and, as expected, having more workers than schemas adds no visible benefit. Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance: relay-log-info-repository=TABLE and master-info-repository=TABLE. For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.

    Conclusion: As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions.

    Appendix - Detailed Results
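
    A minimal sketch of one measurement run as described in the methodology above; the binlog file name and position are placeholders, not values from the benchmark:

        -- configure the worker count for this run
        STOP SLAVE;
        SET GLOBAL slave_parallel_workers = 10;

        -- fill the relay log first, then start the benchmark clock
        START SLAVE IO_THREAD;
        -- ... wait for all updates to reach the relay log ...
        START SLAVE SQL_THREAD;

        -- blocks until the SQL thread reaches the given master binlog
        -- position; elapsed wall-clock time over 100k queries gives QPS
        SELECT MASTER_POS_WAIT('master-bin.000002', 4015);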

    Read the article

  • Multi tenant membership provider ASP.NET MVC

    - by Masna
    Hello, I'm building a multi-tenant app with ASP.NET MVC and have a problem with validating users. The situation I have: a table User(ID, Name, FirstName, Email). This table is made so that a user who is registered in two tenants doesn't need to log in again. And a table TenantUser(ID, TenantID, UserID (FK to table User), UserName, LoginName, Password, Active). This table contains the login and password for one tenant. Example: UserX is registered in TenantA and TenantB. UserX logs in on TenantA, with his login and password for TenantA. The system verifies whether login and password are correct in the table TenantUser. The system validates UserX, whose UserID corresponds to the ID in the table User. UserX goes to TenantB and is automatically logged in. My problem: how can I create a custom provider so I can check the login & password in a tenant? For example: public abstract bool ValidateUser(string username, string password); How can I tell my provider which tenant the user is on? How can I change this into something like: public override bool ValidateUser(string username, string password, string tenant); ? Or what is another way to solve this issue?
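
    Since MembershipProvider fixes the ValidateUser(username, password) signature, a hedged sketch of "another way": a small authentication service that takes the tenant explicitly (e.g. resolved from the request's host). All type names here are hypothetical; only the two tables described above are assumed.

        public class TenantAuthService
        {
            private readonly ITenantUserRepository _users; // hypothetical data access
            public TenantAuthService(ITenantUserRepository users) { _users = users; }

            public bool ValidateUser(string username, string password, string tenant)
            {
                var tenantId = _users.TenantIdFor(tenant);
                var login = _users.FindLogin(tenantId, username); // row from TenantUser
                return login != null
                    && login.Active
                    && login.Password == Hash(password); // store/compare hashes, never plain text
            }

            private static string Hash(string s)
            {
                using (var sha = System.Security.Cryptography.SHA256.Create())
                    return Convert.ToBase64String(
                        sha.ComputeHash(System.Text.Encoding.UTF8.GetBytes(s)));
            }
        }

    The cross-tenant auto-login then only needs the shared UserID: once TenantA's credentials check out, the session can carry User.ID, and TenantB looks up its own TenantUser row for that ID.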

    Read the article

  • Kohana multi language website

    - by Sobek
    I'm trying to set up a multi-language website with Kohana v3, following this tutorial: http://kerkness.ca/wiki/doku.php?id=example_of_a_multi-language_website Routing to a controller or action within e.g. website/controller/action seems to work, as the URL is properly redirected to website/lang/controller/action. However, this is not working for ajax request calls: I have to manually edit the URL with the appropriate language to successfully retrieve the data. This also applies to anchors on the html page. In addition to this problem, the overflow parameter 'id' also doesn't work; it takes the 'lang' variable as its parameter. I have set up my default route just like in the tutorial, i.e.:

        Route::set('default', '((<lang>)(/)(<controller>)(/<action>(/<id>)))',
                array('lang' => "({$langs_abr})", 'id' => '.+'))
            ->defaults(array('lang' => $default_lang, 'controller' => 'welcome', 'action' => 'index'));

    Any help is much appreciated! Cheers
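
    A hedged guess at the ajax/anchor half of the problem: hard-coded paths like /controller/action bypass the <lang> segment, so one fix is to reverse-route every generated URL through the named route. A sketch using Kohana 3's reverse routing ($current_lang is assumed to come from the request or session):

        // build language-aware URLs for anchors and ajax endpoints
        $uri = Route::get('default')->uri(array(
            'lang'       => $current_lang,
            'controller' => 'ajax',
            'action'     => 'data',
        ));
        echo URL::site($uri); // e.g. /en/ajax/data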

    Read the article

  • Multi-tenant Access Control: Repository or Service layer?

    - by FreshCode
    In a multi-tenant ASP.NET MVC application based on Rob Conery's MVC Storefront, should I be filtering the tenant's data in the repository or the service layer?

    1. Filter tenant's data in the repository:

        public interface IJobRepository
        {
            IQueryable<Job> GetJobs(short tenantId);
        }

    2. Let the service filter the repository data by tenant:

        public interface IJobService
        {
            IList<Job> GetJobs(short tenantId);
        }

    My gut feeling says to do it in the service layer (option 2), but it could be argued that each tenant should in essence have their own "virtual repository" (option 1), where this responsibility lies with the repository. Which is the most elegant approach: option 1, option 2, or is there a better way?

    Update: I tried the proposed idea of filtering at the repository, but the problem is that my application provides the tenant context (via sub-domain) and only interacts with the service layer. Passing the context all the way to the repository layer is a mission. So instead I have opted to filter my data at the service layer. I feel that the repository should represent all data physically available in the repository, with appropriate filters for retrieving tenant-specific data, to be used by the service layer.

    Final Update: I ended up abandoning this approach due to the unnecessary complexities. See my answer below.
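
    For reference, a minimal sketch of option 2 under the interfaces quoted above, assuming the repository also exposes a tenant-agnostic IQueryable<Job> GetJobs():

        public class JobService : IJobService
        {
            private readonly IJobRepository _repository;
            public JobService(IJobRepository repository) { _repository = repository; }

            public IList<Job> GetJobs(short tenantId)
            {
                // the repository represents all physically available data;
                // the service narrows it to the requesting tenant
                return _repository.GetJobs()
                                  .Where(j => j.TenantId == tenantId)
                                  .ToList();
            }
        }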

    Read the article

  • Core i7 on linux loses its multithreading capability after suspend

    - by rafak
    On my Debian Linux system with a Core i7 920, each time I resume after the command "pm-suspend" (suspend to RAM), multithreading capabilities almost disappear. More specifically, two distinct programs can use 2 distinct cores at full rate, but a single program is limited to only one core (for one instance of a multithreaded program, as well as for multiple instances of a monothreaded program, e.g. "make -j 4" for gcc). So I end up rebooting the system. Any help appreciated!

    Read the article

  • how to install npm if couldn't resolve npmjs.org

    - by Rahul Mehta
    When I'm running curl it says it couldn't resolve the host. What can I do?

        curl http://npmjs.org/install.sh | sudo sh
        curl: (6) Couldn't resolve host 'npmjs.org'

    /etc/resolv.conf:

        search x1
        nameserver x2
        nameserver 8.8.8.8
        nameserver 8.8.4.4

    nslookup result:

        nslookup google.com
        Server: x1
        Address: x1#53
        Non-authoritative answer:
        *** Can't find google.com: No answer
        Non-authoritative answer:
        *** Can't find google.com: No answer
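
    A few hedged diagnostics. The nslookup output above shows the local server (x1) answering but public names failing, so testing a public resolver directly isolates the problem; the IP in the /etc/hosts line is a placeholder, not npmjs.org's real address:

        # query Google's resolver directly; if this also fails, outbound
        # DNS (port 53) is likely blocked rather than resolv.conf misconfigured
        nslookup npmjs.org 8.8.8.8

        # if that works, the "search x1 / nameserver x2" entries are suspect;
        # as a temporary workaround, pin the name while you fix DNS:
        echo "203.0.113.10 npmjs.org" | sudo tee -a /etc/hosts
        curl http://npmjs.org/install.sh | sudo sh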

    Read the article

  • is the AOC e2239fwt supported for multi touch on any ubuntu distro?

    - by HybriDPjT
    As the title says, I have the e2239fwt monitor and I've tried Ubuntu 10.04, 10.10, 11.04, 12.04 and now 13.04, and I can't get it to work. I should state that single-point touch seems to work OK, but that's all. I've already tried looking and found no answers, so here I am asking the peeps in the know :) I am currently running 13.04 and possibly going back to 10.04 if I can't get it to work or find that this monitor is in fact not supported.

        hybridpjt@Unicorn:~$ lsusb
        Bus 002 Device 002: ID 05e3:0610 Genesys Logic, Inc. 4-port hub
        Bus 003 Device 002: ID 045e:0780 Microsoft Corp.
        Bus 003 Device 003: ID 06a3:0cc3 Saitek PLC
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 003: ID 0408:3001 Quanta Computer, Inc. Optical Touch Screen

    Read the article

  • Optimum number of threads while multitasking

    - by Gun Deniz
    I know similar questions have been asked, but I think my case is a little bit different. Let's say I have a computer with 8 cores and infinite memory, running a Linux OS. I have a calculation software package called Gaussian that can take advantage of multithreading, so I set its thread count to 8 for a single calculation for maximum speed. However, I really can't decide what to do when I need to run, for instance, 8 calculations simultaneously. In that case, should I set the thread count to 1 (8 threads total across 8 processes) or keep it at 8 (64 threads total across 8 processes) for each job? Does it really matter much? A related question: does the OS automatically assign each thread to a different core?

    Read the article

  • Is it possible to get dragging working on a Macbook multi-touch touch pad?

    - by lhahne
    I have a MacBook 5,1, that is to say the only 13-inch aluminium MacBook, as the later revisions were renamed MacBook Pro. Two-finger scrolling seems to work fine, but dragging doesn't work. In OS X this works like so: you point at an object, click and keep one finger pressed on the touch pad, and slide another finger to move the cursor. This causes weird and undefined behavior in Ubuntu, as it seems the driver doesn't recognize it as dragging. Any ideas?

    Read the article

  • php curl: how can i emulate a get request exactly like a web browser?

    - by ufk
    Hi. There are websites where, when I open a specific ajax request in a browser, I get the resulting page, but when I try to load it with curl, I receive an error from the server. How can I properly emulate a GET request that will simulate a browser? This is what I'm doing:

        $url = "https://new.aol.com/productsweb/subflows/ScreenNameFlow/AjaxSNAction.do?s=username&f=firstname&l=lastname";
        ini_set('user_agent', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.0.3705; .NET CLR 1.1.4322)');
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        $result = curl_exec($ch);
        print $result;
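
    One likely issue, offered as a hedged sketch rather than a verified fix: ini_set('user_agent', ...) only affects PHP's stream wrappers, not curl, and ajax endpoints often check the X-Requested-With header. The header values below are illustrative:

        $url = "https://new.aol.com/productsweb/subflows/ScreenNameFlow/AjaxSNAction.do?s=username&f=firstname&l=lastname";
        $ch = curl_init($url);
        // set the user agent through curl, not ini_set
        curl_setopt($ch, CURLOPT_USERAGENT,
            'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)');
        curl_setopt($ch, CURLOPT_HTTPHEADER, array(
            'Accept: text/html,application/xhtml+xml',
            'X-Requested-With: XMLHttpRequest', // many ajax endpoints test for this
        ));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        $result = curl_exec($ch);
        if ($result === false) {
            echo curl_error($ch); // see what the server actually objected to
        }
        curl_close($ch);
        print $result;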

    Read the article

  • Best practice -- Content Tracking Remote Data (cURL, file_get_contents, cron, et. al)?

    - by user322787
    I am attempting to build a script that will log data that changes every 1 second. The initial thought was "just run a PHP file that does a cURL every second from cron", but I have a very strong feeling that this isn't the right way to go about it. Here are my specifications: There are currently 10 sites I need to gather data from and log to a database; this number will invariably increase over time, so the solution needs to be scalable. Each site spits out data to a URL every second but only keeps 10 lines on the page, and it can sometimes spit out up to 10 lines each time, so I need to pick up that data every second to ensure I get all of it. As I will also be writing this data to my own DB, there's going to be I/O every second of every day for a considerably long time. Barring magic, what is the most efficient way to achieve this? It might help to know that the data I am getting every second is very small, under 500 bytes.
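
    Fittingly for this search term, a hedged sketch of one cron-free approach: a single long-running PHP worker that polls all sites in parallel with curl_multi once per second, instead of forking one request per site. The URLs and the persistence step are placeholders:

        $sites = array('http://example.com/feed1', 'http://example.com/feed2' /* ... */);
        while (true) {
            $mh = curl_multi_init();
            $handles = array();
            foreach ($sites as $url) {
                $ch = curl_init($url);
                curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
                curl_setopt($ch, CURLOPT_TIMEOUT, 1); // never block past the 1s budget
                curl_multi_add_handle($mh, $ch);
                $handles[$url] = $ch;
            }
            // drive all transfers concurrently
            do {
                curl_multi_exec($mh, $running);
                curl_multi_select($mh, 0.1);
            } while ($running > 0);
            foreach ($handles as $url => $ch) {
                $body = curl_multi_getcontent($ch);
                // placeholder: diff against the last-seen lines, insert new rows
                curl_multi_remove_handle($mh, $ch);
                curl_close($ch);
            }
            curl_multi_close($mh);
            sleep(1);
        }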

    Read the article

  • How to transfer a post request in curl into a ruby script?

    - by 0x90
    I have this post request:

        curl -i -X POST \
          -H "Accept:application/json" \
          -H "content-type:application/x-www-form-urlencoded" \
          -d "disambiguator=Document&confidence=-1&support=-1&text=President%20Obama%20called%20Wednesday%20on%20Congress%20to%20extend%20a%20tax%20break%20for%20students%20included%20in%20last%20year%27s%20economic%20stimulus%20package" \
          http://spotlight.dbpedia.org/dev/rest/annotate/

    How can I write it in Ruby? I tried this, as Kyle told me:

        require 'rubygems'
        require 'net/http'
        require 'uri'

        uri = URI.parse('http://spotlight.dbpedia.org/rest/annotate')
        http = Net::HTTP.new(uri.host, uri.port)
        request = Net::HTTP::Post.new(uri.request_uri)
        request.set_form_data({
          "disambiguator" => "Document",
          "confidence" => "0.3",
          "support" => "0",
          "text" => "President Obama called Wednesday on Congress to extend a tax break for students included in last year's economic stimulus package"
        })
        request.add_field("Accept", "application/json")
        request.add_field("Content-Type", "application/x-www-form-urlencoded")
        response = http.request(request)
        puts response.inspect

    but got this error: #<Net::HTTPInternalServerError 500 Internal Error readbody=true>
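
    One difference worth noting between the two requests: the curl command posts to /dev/rest/annotate/ while the Ruby code posts to /rest/annotate, and the confidence/support values differ. A hedged sketch that mirrors the curl call exactly:

        require 'net/http'
        require 'uri'

        # same endpoint and parameter values as the curl command
        uri = URI.parse('http://spotlight.dbpedia.org/dev/rest/annotate/')
        http = Net::HTTP.new(uri.host, uri.port)
        request = Net::HTTP::Post.new(uri.request_uri)
        # set_form_data also sets Content-Type: application/x-www-form-urlencoded
        request.set_form_data(
          "disambiguator" => "Document",
          "confidence"    => "-1",
          "support"       => "-1",
          "text"          => "President Obama called Wednesday on Congress " \
                             "to extend a tax break for students included " \
                             "in last year's economic stimulus package"
        )
        request.add_field("Accept", "application/json")
        response = http.request(request)
        puts response.body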

    Read the article

  • How to get response from secure site (https) using curl in php?

    - by Chirag
    Hi, I have used the code below for reading the response from an HTTPS-secured site using curl, but it shows an error like 'Page can't be displayed':

        curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, true);
        curl_setopt($ch, CURLOPT_URL, $gatewayURI);
        curl_setopt($ch, CURLOPT_HEADER, 1);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($curl_ch, CURLOPT_CAINFO, dirname(__FILE__)."/tempcert.pem");

    Any help??
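
    Two details in the snippet look suspect, offered as a hedged sketch rather than a confirmed diagnosis: the last line sets CURLOPT_CAINFO on $curl_ch instead of $ch, and CURLOPT_SSL_VERIFYHOST should be 2 (verify the certificate's hostname), not true/1. A corrected sketch, with $gatewayURI and the .pem path taken as placeholders from the question:

        $ch = curl_init($gatewayURI);
        curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_HEADER, 1);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);  // 2, not true: check the hostname
        curl_setopt($ch, CURLOPT_CAINFO, dirname(__FILE__)."/tempcert.pem"); // $ch, not $curl_ch

        $response = curl_exec($ch);
        if ($response === false) {
            echo curl_error($ch); // surfaces certificate errors instead of a blank page
        }
        curl_close($ch);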

    Read the article

  • How do I Parse the response I get back from CURL?

    - by Daelan
    I'm sending some data to an external URL using curl. The server sends me back a response in a string like this:

        trnApproved=0&trnId=10000002&messageId=7&messageText=DECLINE

    I can assign this string to a variable like this:

        $txResult = curl_exec( $ch );
        echo "Result:<BR>";
        echo $txResult;

    But how do I use the data that is sent back? I need a way to get the value of each variable sent back so that I can use it in my PHP script. Any help would be much appreciated. Thanks.
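
    Since the response is a URL-encoded query string, a hedged sketch using PHP's parse_str(), which unpacks exactly this format into an array:

        $txResult = "trnApproved=0&trnId=10000002&messageId=7&messageText=DECLINE";
        parse_str($txResult, $fields);

        echo $fields['trnApproved'];  // "0"
        echo $fields['messageText']; // "DECLINE"

    Note that if CURLOPT_HEADER is enabled, the HTTP headers would need to be stripped from $txResult before parsing.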

    Read the article

  • executing a script from maven inside a multi module project

    - by Roman
    Hi everyone. I have a multi-module project. At the beginning of each build I would like to run a bat file, so I did the following:

        <profile>
          <id>deploy-db</id>
          <build>
            <plugins>
              <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.1.1</version>
              </plugin>
            </plugins>
            <pluginManagement>
              <plugins>
                <plugin>
                  <groupId>org.codehaus.mojo</groupId>
                  <artifactId>exec-maven-plugin</artifactId>
                  <version>1.1.1</version>
                  <executions>
                    <execution>
                      <phase>validate</phase>
                      <goals>
                        <goal>exec</goal>
                      </goals>
                      <inherited>false</inherited>
                    </execution>
                  </executions>
                  <configuration>
                    <executable>../database/schemas/import_databases.bat</executable>
                  </configuration>
                </plugin>
              </plugins>
            </pluginManagement>
          </build>
        </profile>

    When I run mvn verify -Pdeploy-db from the root, this script gets executed over and over again in each of my modules. I want it to be executed only once, in the root module. What am I missing? Thanks
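
    A hedged guess at the cause: the plugin declared in the <plugins> section is inherited by every child module, so each module runs the execution. Marking the plugin itself as non-inherited in the root POM is one way to restrict it to the aggregator; a sketch:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>exec-maven-plugin</artifactId>
          <version>1.1.1</version>
          <!-- children skip this plugin entirely; it runs in the root only -->
          <inherited>false</inherited>
          <executions>
            <execution>
              <phase>validate</phase>
              <goals>
                <goal>exec</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <executable>../database/schemas/import_databases.bat</executable>
          </configuration>
        </plugin>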

    Read the article

  • Stop Saying "Multi-Channel!"

    - by David Dorf
    I keep hearing the term "multi-channel" in our industry, but it's time to move on. It kind of reminds me of the term "ECR," or electronic cash register. Long ago ECR was a leading-edge term, but nowadays it's rarely used because it's table stakes. After all, what cash register today isn't electronic? The same logic applies to multi-channel, at least when we're talking about tier-1 and tier-2 retailers. If you're still talking about multi-channel retailing, you're in big trouble. Some have switched over to the term "cross-channel," and that's a step in the right direction but still falls short. It's kind of like saying, "I upgraded my ECR to accept debit cards!" Yawn. Who hasn't? Today's retailers need to focus on omni-channel, which I first heard from my friends over at RSR but was originally coined at IDC.

    First retailers added e-commerce to their store and catalog channels, yielding multi-channel retailing. Consumers could use the channel that worked best for them. Then some consumers wanted to combine channels with features like buy-on-the-Web, pickup-in-the-store. Thus began the cross-channel initiatives to break down the silos and enable the channels to communicate with each other. But the multi-channel architecture is full of duplication that thwarts efforts to provide a consistent experience. Each channel has its own cart, its own pricing, and often its own CRM. This was an outcrop of trying to bring the independent channels to market quickly. Rather than reusing and rebuilding existing components to meet the new demands, silos were created that continue to exist today.

    Today's consumers want omni-channel retailing. They want to interact with brands in a consistent manner that is channel-transparent, yet optimized for that particular interaction. The diagram below, from the soon-to-be-released NRF Mobile Blueprint v2, shows this progression. For retailers to provide an omni-channel experience, there needs to be one logical representation of products, prices, promotions, and customers across all channels. The only thing that varies is the presentation of the content based on the delivery mechanism (e.g. shelf labels, mobile phone, web site, print, etc.), and often these mechanisms can be combined in various ways.

    I'm looking forward to the day when I can use my phone to scan QR codes in a catalog to create a shopping cart of items, then do some further research on the retailer's Web site and be told about related items that might interest me, easily solicit opinions and reviews from social sites, and finally enter the store to pick up my items, knowing that any applicable coupons have been applied. In this scenario, I the consumer am dealing with a single brand that is aware of me and my needs throughout the entire transaction. Nirvana.

    Read the article

  • Creating a multi-tenant application using PostgreSQL's schemas and Rails

    - by ramon.tayag
    Stuff I've already figured out: I'm learning how to create a multi-tenant application in Rails that serves data from different schemas based on what domain or subdomain is used to view the application. I already have a few concerns answered: How can you get subdomain-fu to work with domains as well? Here's someone that asked the same question, which leads you to this blog. What database, and how will it be structured? Here's an excellent talk by Guy Naor, and a good question about PostgreSQL and schemas. I already know my schemas will all have the same structure; they will differ in the data they hold. So, how can you run migrations for all schemas? Here's an answer. Those three points cover a lot of the general stuff I need to know. However, in the next steps I seem to have many ways of implementing things. I'm hoping that there's a better, easier way.

    Finally, to my question: when a new user signs up, I can easily create the schema. However, what would be the best and easiest way to load the structure that the rest of the schemas already have? Here are some questions/scenarios that might give you a better idea. Should I pass it on to a shell script that dumps the public schema into a temporary one, and imports it back to my main database (pretty much like what Guy Naor says in his video)? Here's a quick summary/script I got from the helpful #postgres on freenode. While this will probably work, I'm gonna have to do a lot of stuff outside of Rails, which makes me a bit uncomfortable, which also brings me to the next question. Is there a way to do this straight from Ruby on Rails? Like create a PostgreSQL schema, then just load the Rails database schema (schema.rb - I know, it's confusing) into that PostgreSQL schema. Is there a gem/plugin that has these things already? Methods like "create_pg_schema_and_load_rails_schema(the_new_schema_name)". If there's none, I'll probably work at making one, but I'm doubtful about how well tested it'll be with all the moving parts (especially if I end up using a shell script to create and manage new PostgreSQL schemas). Thanks, and I hope that wasn't too long!

    UPDATE May 11, 2010 11:26 GMT+8: Since last night I've been able to get a method to work that creates a new schema and loads schema.rb into it. Not sure if what I'm doing is correct (it seems to work fine, so far) but it's a step closer at least. If there's a better way please let me know.

        module SchemaUtils
          def self.add_schema_to_path(schema)
            conn = ActiveRecord::Base.connection
            conn.execute "SET search_path TO #{schema}, #{conn.schema_search_path}"
          end

          def self.reset_search_path
            conn = ActiveRecord::Base.connection
            conn.execute "SET search_path TO #{conn.schema_search_path}"
          end

          def self.create_and_migrate_schema(schema_name)
            conn = ActiveRecord::Base.connection
            schemas = conn.select_values("select * from pg_namespace where nspname != 'information_schema' AND nspname NOT LIKE 'pg%'")

            if schemas.include?(schema_name)
              tables = conn.tables
              Rails.logger.info "#{schema_name} exists already with these tables #{tables.inspect}"
            else
              Rails.logger.info "About to create #{schema_name}"
              conn.execute "create schema #{schema_name}"
            end

            # Save the old search path so we can set it back at the end of this method
            old_search_path = conn.schema_search_path

            # Tried to set the search path like in the methods above (from Guy Naor):
            #   conn.execute "SET search_path TO #{schema_name}"
            # But the connection itself seems to remember the old search path.
            # If set this way, it works.
            conn.schema_search_path = schema_name

            # Directly from databases.rake.
            # In Rails 2.3.5, databases.rake can be found in railties/lib/tasks/databases.rake
            file = "#{Rails.root}/db/schema.rb"
            if File.exists?(file)
              Rails.logger.info "About to load the schema #{file}"
              load(file)
            else
              abort %{#{file} doesn't exist yet. It's possible that you just ran a migration!}
            end

            Rails.logger.info "About to set search path back to #{old_search_path}."
            conn.schema_search_path = old_search_path
          end
        end

    Read the article

  • Accessing curl executable from Program files?

    - by Kaido
    I have built an application which runs the curl executable through the WScript COM object. The main program is located in C:\Program files\Myapp, and the curl executable is located in C:\Curl. But it looks like my application is unable to execute curl if the main application is in Program files; if I move it to another location, it can execute curl fine. The problem occurs only on Windows XP; on Vista and Win7 it works perfectly. Are there any special permissions I have to give to my app, or what?

    Read the article
