Search Results

Search found 15670 results on 627 pages for 'multi level'.


  • Level-order in Haskell

    - by brain_damage
    I have a structure for a tree and I want to print the tree by levels.

        data Tree a = Nd a [Tree a] deriving Show
        type Nd = String

        tree = Nd "a" [Nd "b" [Nd "c" [], Nd "g" [Nd "h" [], Nd "i" [], Nd "j" [], Nd "k" []]],
                       Nd "d" [Nd "f" []],
                       Nd "e" [Nd "l" [Nd "n" [Nd "o" []]], Nd "m" []]]

        preorder (Nd x ts) = x : concatMap preorder ts
        postorder (Nd x ts) = concatMap postorder ts ++ [x]

    But how do I do it by levels? "levels tree" should produce ["a", "bde", "cgflm", "hijkn", "o"]. I think that "iterate" would be a suitable function for the purpose, but I cannot come up with a solution for how to use it. Would you help me, please?
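    For illustration, "by levels" boils down to a breadth-first walk: keep a queue of the nodes on the current level, emit their labels, then move on to all of their children. Below is a minimal sketch of that idea, written in Java rather than Haskell and with made-up Node/Levels names; in Haskell the equivalent trick is to iterate concatMap over the child lists until nothing is left.

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.Queue;

        // Minimal rose tree: a label plus any number of children.
        class Node {
            final String label;
            final List<Node> children = new ArrayList<>();
            Node(String label) { this.label = label; }
        }

        public class Levels {
            // Collect labels level by level using a queue (breadth-first traversal).
            static List<String> levels(Node root) {
                List<String> result = new ArrayList<>();
                Queue<Node> current = new ArrayDeque<>();
                current.add(root);
                while (!current.isEmpty()) {
                    StringBuilder level = new StringBuilder();
                    Queue<Node> next = new ArrayDeque<>();
                    for (Node n : current) {
                        level.append(n.label);
                        next.addAll(n.children);
                    }
                    result.add(level.toString());
                    current = next;
                }
                return result;
            }
        }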

    Read the article

  • XHTML validating block level element as a link

    - by Matty F
    I need a way to make an entire DL element clickable with only one anchor tag, and have it validate as XHTML. As in:

        <a>
          <dl>
            <dt>Data term</dt>
            <dd>Data definition</dd>
          </dl>
        </a>

    This currently doesn't validate as XHTML, because the anchor tag cannot contain the DL. The only way I can get it to validate is to use two anchor tags and place them inside the DT and DD. As in:

        <dl>
          <dt><a>Data term</a></dt>
          <dd><a>Data definition</a></dd>
        </dl>

    I'm trying to avoid this, as it would result in two href attributes requiring maintenance, introducing the possibility that they could get out of sync. Suggestions?

    Read the article

  • Problem with Eclipse and a Maven multi-module project

    - by earth
    I have created a Maven project with the following structure:

        + root-project   pom.xml (packaging: pom)
          + sub-projectA (jar)
          + sub-projectB (jar)

    I ran the following steps:

        mvn archetype:create -DgroupId=my.group.id -DartifactId=root-project
        mvn archetype:create -DgroupId=my.group.id -DartifactId=sub-projectA
        mvn archetype:create -DgroupId=my.group.id -DartifactId=sub-projectB

    So, as expected, the top-level pom.xml contains the following elements:

        <modules>
          <module>sub-projectA</module>
          <module>sub-projectB</module>
        </modules>

    The last step was:

        mvn eclipse:clean eclipse:eclipse

    Now if I import root-project into Eclipse, it seems to treat my sub-projects as plain resources and not as Java projects. However, if I import each of the child projects sub-projectA and sub-projectB individually, they show up as Java projects. This is a big problem for me because I have a deeper hierarchy. Any help would be appreciated!

    Read the article

  • Multi tenant membership provider ASP.NET MVC

    - by Masna
    Hello, I'm building a multi-tenant app with ASP.NET MVC and have a problem with validating users. The situation I have:

    - a table User(ID, Name, FirstName, Email). This table exists so that a user who is registered in two tenants doesn't need to log in again.
    - a table TenantUser(ID, TenantID, UserID (FK to table User), UserName, Loginname, Password, Active). This table contains the login and password for one tenant.

    Example: UserX is registered in TenantA and TenantB. UserX logs in on TenantA with his login and password for TenantA. The system verifies whether the login and password are correct against the table TenantUser, then resolves UserX's UserID to the corresponding ID in the table User. UserX goes to TenantB and is automatically logged in.

    My problem: how can I create a custom provider so I can check the login and password within a tenant? For example:

        public abstract bool ValidateUser(string username, string password);

    How can I tell my provider which tenant the user is on? Can I change this into something like:

        public override bool ValidateUser(string username, string password, string tenant);

    Or what is another way to solve this issue?
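    The fixed ValidateUser(username, password) signature is the crux here; a common way around it is to resolve the tenant from the current request (e.g. the sub-domain) inside the provider rather than changing the signature. As a rough sketch of the tenant-scoped check itself, written in Java with hypothetical names rather than ASP.NET's actual MembershipProvider API:

        import java.util.Map;
        import java.util.Objects;

        // Hypothetical tenant-scoped credential store: the important part is that the
        // lookup key is (tenant, login), not login alone, and the tenant comes from the
        // request context rather than from the caller of validateUser.
        class TenantUserStore {
            // key = tenantId + "/" + loginName, value = stored password hash (illustrative only)
            private final Map<String, String> credentials;

            TenantUserStore(Map<String, String> credentials) {
                this.credentials = credentials;
            }

            boolean validateUser(String tenantId, String loginName, String passwordHash) {
                String stored = credentials.get(tenantId + "/" + loginName);
                return stored != null && Objects.equals(stored, passwordHash);
            }
        }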

    Read the article

  • Kohana multi language website

    - by Sobek
    I'm trying to set up a multi-language website with Kohana v3, following this tutorial: http://kerkness.ca/wiki/doku.php?id=example_of_a_multi-language_website

    Routing to a controller or action such as website/controller/action seems to work, as the URL is properly redirected to website/lang/controller/action. However, this does not work for AJAX request calls; I have to manually add the appropriate language to the URL to successfully retrieve the data. The same applies to anchors on the HTML page. In addition to this problem, the overflow parameter 'id' also doesn't work: it takes the 'lang' variable as its value. I have set up my default route just like in the tutorial, i.e.:

        Route::set('default', '((<lang>)(/)(<controller>)(/<action>(/<id>)))',
                array('lang' => "({$langs_abr})", 'id' => '.+'))
            ->defaults(array('lang' => $default_lang, 'controller' => 'welcome', 'action' => 'index'));

    Any help is much appreciated! Cheers

    Read the article

  • Multi-tenant Access Control: Repository or Service layer?

    - by FreshCode
    In a multi-tenant ASP.NET MVC application based on Rob Conery's MVC Storefront, should I be filtering the tenant's data in the repository or the service layer?

    1. Filter the tenant's data in the repository:

        public interface IJobRepository
        {
            IQueryable<Job> GetJobs(short tenantId);
        }

    2. Let the service filter the repository data by tenant:

        public interface IJobService
        {
            IList<Job> GetJobs(short tenantId);
        }

    My gut feeling says to do it in the service layer (option 2), but it could be argued that each tenant should in essence have its own "virtual repository" (option 1), where this responsibility lies with the repository. Which is the most elegant approach: option 1, option 2, or is there a better way?

    Update: I tried the proposed idea of filtering at the repository, but the problem is that my application determines the tenant context (via sub-domain) and only interacts with the service layer; passing the context all the way down to the repository layer is a mission. So instead I have opted to filter my data at the service layer. I feel that the repository should represent all data physically available in the repository, with appropriate filters for retrieving tenant-specific data, to be used by the service layer.

    Final Update: I ended up abandoning this approach due to the unnecessary complexities. See my answer below.
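    For reference, option 2 in plain code is just a tenant-aware service sitting on top of a tenant-agnostic repository. A rough sketch of that shape, in Java rather than C# and with hypothetical types; a real implementation would push the filter into the query instead of filtering in memory:

        import java.util.List;
        import java.util.stream.Collectors;

        // Illustrative types only.
        record Job(short tenantId, String title) {}

        interface JobRepository {
            List<Job> getAllJobs();   // the repository exposes all physically available data
        }

        // Option 2 from the question: the service layer applies the tenant filter.
        class JobService {
            private final JobRepository repository;

            JobService(JobRepository repository) {
                this.repository = repository;
            }

            List<Job> getJobs(short tenantId) {
                return repository.getAllJobs().stream()
                        .filter(job -> job.tenantId() == tenantId)
                        .collect(Collectors.toList());
            }
        }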

    Read the article

  • Core i7 on linux loses its multithreading capability after suspend

    - by rafak
    On my Debian Linux system, with a Core i7 920, each time I resume after the command "pm-suspend" (suspend to RAM), multithreading capability almost disappears. More specifically, two distinct programs can use 2 distinct cores at full rate, but a single program is limited to only one core (this applies to one instance of a multithreaded program as well as to multiple instances of a single-threaded program, e.g. "make -j 4" for gcc). So I end up rebooting the system. Any help appreciated!

    Read the article

  • Suggestions for implementing a dynamic 2D level

    - by Wouter
    I am working on a game that needs a level that is completely generated. Currently my approach is to draw textures for the levels pixel by pixel during the game (in XNA with SpriteBatch). Unfortunately, this is too intensive: the game drops frames even when I only draw one level texture per draw cycle. Here is an example of the current prototype. It is a simple sidescroller with the avatar swimming through a cave. The shape of this cave will alter throughout the level (textures and physics collision shapes). You can clearly see the boundaries of the level tiles in the screenshot below; these are generated just before they move into camera view. For inspiration I looked at PixelJunk Shooter 2. Those levels are obviously not generated, but some of them have movement. How do you guys think they implemented it? My guess is that the level and other objects in the game are actually flat 3D models, but I am not sure.

    Read the article

  • Problems with Level Architect, Citrus Engine, Flash

    - by Idan
    I am using the Citrus Engine to make a Flash game, and the Level Architect doesn't work well for me. Firstly, when I first launch it and open my project and my level, nothing is shown: no assets and none of what I have previously done with my level. To fix it, I open another project. The other project works fine, meaning I can see the assets and the level. Then I go back to the actual project I am working on, and the problem is fixed, only this does not fix the second problem: I can't add my own assets. I follow the manual and add tags like this:

        [Property(value="0")]

    But it doesn't change a thing in the Level Architect window (even after I close and reopen it). Any ideas? Thanks! Here's the code of the class I want to show up in the Level Architect:

        package {
            import com.citrusengine.objects.PhysicsObject;
            import com.citrusengine.objects.platformer.Sensor;

            import flash.utils.clearTimeout;
            import flash.utils.setTimeout;

            /**
             * @author Aymeric
             */
            public class Teleporter extends Sensor {

                [Property(value="0")]
                public var endX:Number = 0;

                [Property(value="0")]
                public var endY:Number = 0;

                public var object:PhysicsObject;

                [Property(value="0")]
                public var time:Number = 0;

                public var needToTeleport:Boolean = false;

                protected var _teleporting:Boolean = false;

                private var _teleportTimeoutID:uint;

                public function Teleporter(name:String, params:Object = null) {
                    super(name, params);
                }

                override public function destroy():void {
                    clearTimeout(_teleportTimeoutID);
                    super.destroy();
                }

                override public function update(timeDelta:Number):void {
                    super.update(timeDelta);

                    if (needToTeleport) {
                        _teleporting = true;
                        _teleportTimeoutID = setTimeout(_teleport, time);
                        needToTeleport = false;
                    }

                    _updateAnimation();
                }

                protected function _teleport():void {
                    _teleporting = false;
                    object.x = endX;
                    object.y = endY;
                    clearTimeout(_teleportTimeoutID);
                }

                protected function _updateAnimation():void {
                    if (_teleporting) {
                        _animation = "teleport";
                    } else {
                        _animation = "normal";
                    }
                }
            }
        }

    Read the article

  • Is the AOC e2239fwt supported for multi-touch on any Ubuntu distro?

    - by HybriDPjT
    As the title says, I have the e2239fwt monitor and I've tried Ubuntu 10.04, 10.10, 11.04, 12.04 and now 13.04, and I can't get multi-touch to work. I should state that single-point touch seems to work OK, but that's all. I've already tried looking and found no answers, so here I am asking the people in the know :) I am currently running 13.04 and will possibly go back to 10.04 if I can't get it to work or find that this monitor is in fact not supported.

        hybridpjt@Unicorn:~$ lsusb
        Bus 002 Device 002: ID 05e3:0610 Genesys Logic, Inc. 4-port hub
        Bus 003 Device 002: ID 045e:0780 Microsoft Corp.
        Bus 003 Device 003: ID 06a3:0cc3 Saitek PLC
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 003: ID 0408:3001 Quanta Computer, Inc. Optical Touch Screen

    Read the article

  • OPN Diamond Level Criteria Update

    - by Cinzia Mascanzoni
    On June 1, 2013, the criteria for Oracle PartnerNetwork members to attain the prestigious Diamond level will change and all members at the Diamond level at that point will be required to meet the new criteria. This change underscores the requirement for these elite partners to engage across Oracle’s broad product portfolio. Refer to the Diamond Level Requirements on the OPN Portal here for more detail.

    Read the article

  • Change Logging Level for SOA 11g

    - by James Taylor
    I'm sure there are many blogs out there that have this solution, but I seem to get asked this question a lot, so I thought I would post it here for my convenience.

    1. Log in to Enterprise Manager, e.g. http://localhost:7001/em
    2. Expand the SOA folder, right-click the soa-infra (soa_server1) node and select Logs - Log Configuration.
    3. Navigate to the component you want to monitor and change the log level. It is possible to change the level at a parent level if required, but it is not recommended to set it to FINEST at a parent level, as that will generate a lot of logging.
    4. Make sure you apply the change for it to take effect.

    Simple as that.

    Read the article

  • How difficult is it to change from embedded programming to high-level programming [on hold]

    - by anudeep shetty
    I have a background in Computer Science. After I finished my Bachelor's degree, I worked for over a year on embedded programming for Linux file systems. After that I pursued my Master's, where most of my course choices involved working on the web, Java and databases. Now I have an offer from a company for a job working at the OS level. The company is pretty good, but I feel that my Master's has gone to waste. I wanted to know: is it common for a Computer Science major to work on low-level coding, and is there a possibility that I can work at this company for some years and then move on to an opportunity where I can work on high-level coding? Also, is working on low-level programming a safe choice in terms of job opportunities?

    Read the article

  • Optimum number of threads while multitasking

    - by Gun Deniz
    I know similar questions have been asked, but I think my case is a little bit different. Let's say I have a computer with 8 cores and infinite memory, running a Linux OS. I have a calculation package called Gaussian that can take advantage of multithreading, so I set its thread count to 8 for a single calculation to get maximum speed. However, I really can't decide what to do when I need to run, for instance, 8 calculations simultaneously. In that case should I set the thread count to 1 (8 threads in total, spawned across 8 processes) or keep it at 8 (64 threads in total, spawned across 8 processes) for each job? Does it really matter much? A related question: does the OS automatically assign the threads of each process to different cores?
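    For CPU-bound jobs like these, oversubscribing (8 jobs x 8 threads = 64 runnable threads on 8 cores) usually just adds scheduling and synchronization overhead, so a common rule of thumb is to divide the cores among the jobs that run at the same time. A small sketch of that arithmetic, assuming the jobs are CPU-bound and roughly equal in size:

        // Rough heuristic for CPU-bound jobs: split the available cores across the
        // jobs that run simultaneously, never dropping below one thread per job.
        public class ThreadBudget {
            static int threadsPerJob(int cores, int simultaneousJobs) {
                return Math.max(1, cores / simultaneousJobs);
            }

            public static void main(String[] args) {
                int cores = Runtime.getRuntime().availableProcessors();
                System.out.println("1 job  -> " + threadsPerJob(cores, 1) + " threads each");
                System.out.println("8 jobs -> " + threadsPerJob(cores, 8) + " threads each");
            }
        }

    As for core placement, the OS scheduler normally spreads runnable threads across cores on its own; explicit pinning (e.g. with taskset) is an optimization, not a requirement.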

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background

    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options:

    - shares, for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles),
    - dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example), and
    - capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU).

    (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management

    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully sizing every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if it didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with it to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art

    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to consume a target rate of service units per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management

    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words: don't hold the number of CPU shares constant and watch the achievement of the service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications

    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing the application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", which gives me the transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary

    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success in meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove the delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
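    The control loop described above is simple to sketch. The following is only a conceptual illustration of the idea (hold the objective constant, adjust the allocation), with invented interfaces; it is not the patented implementation or any Solaris API:

        // Conceptual sketch: widen the resource allocation when the objective is missed,
        // give resources back when there is ample headroom.
        interface WorkloadProbe {
            double observedResponseTimeMillis();   // e.g. derived from transaction start/end probes
        }

        interface ResourceController {
            int currentCpuShares();
            void setCpuShares(int shares);
        }

        class WorkloadManager {
            private final double objectiveMillis;
            private final WorkloadProbe probe;
            private final ResourceController cpu;

            WorkloadManager(double objectiveMillis, WorkloadProbe probe, ResourceController cpu) {
                this.objectiveMillis = objectiveMillis;
                this.probe = probe;
                this.cpu = cpu;
            }

            // One iteration of the feedback loop.
            void adjustOnce() {
                double observed = probe.observedResponseTimeMillis();
                int shares = cpu.currentCpuShares();
                if (observed > objectiveMillis) {
                    cpu.setCpuShares(shares + Math.max(1, shares / 4));      // grow roughly 25%
                } else if (observed < 0.5 * objectiveMillis && shares > 1) {
                    cpu.setCpuShares(Math.max(1, shares - shares / 10));     // shrink gently
                }
            }
        }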

    Read the article

  • How should I load level data in java?

    - by Matthew G.
    I'm setting up my engine for a certain action/arcade game to have a set of commands that would look something like this:

        Set landscape to grass
        Create rocks at ...
        Create player at X, Y
        Set goal to "Get to point X, Y"
        Spawn enemy at X, Y

    I'd then have each object knowing what it has to do and acting on its own. I've been thinking about how to store this data. External data files could be parsed by a level class, and certain objects could be spawned through that. I could also create a base level class and extend it for each level, but that'd create a large number of classes. Another idea is to have one level parser class, but with a case for each level. This would be extremely silly and bulky, but I mention it because I found myself doing it at 2 AM last night. I'm finally getting why I have to plan out my inheritance, though. RIP project. I might be completely missing another option.
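    The external-data-file option can stay very small: one loader class reads a plain text file and dispatches each command to the engine, so no per-level subclass or per-level case statement is needed. A minimal sketch, assuming a hypothetical whitespace-separated format; the spawn methods are stubs standing in for the real engine calls:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.List;

        // Hypothetical level file, mirroring the commands above, e.g.:
        //   landscape grass
        //   player 10 20
        //   enemy 40 12
        public class LevelLoader {
            public static void load(Path levelFile) throws IOException {
                List<String> lines = Files.readAllLines(levelFile);
                for (String line : lines) {
                    String[] parts = line.trim().split("\\s+");
                    if (parts[0].isEmpty()) continue;   // skip blank lines
                    switch (parts[0]) {
                        case "landscape" -> setLandscape(parts[1]);
                        case "player"    -> spawnPlayer(Integer.parseInt(parts[1]), Integer.parseInt(parts[2]));
                        case "enemy"     -> spawnEnemy(Integer.parseInt(parts[1]), Integer.parseInt(parts[2]));
                        default          -> System.err.println("Unknown command: " + line);
                    }
                }
            }

            // Stubs standing in for the game's own spawning code.
            static void setLandscape(String type) { }
            static void spawnPlayer(int x, int y) { }
            static void spawnEnemy(int x, int y) { }
        }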

    Read the article

  • Is it possible to get dragging working on a Macbook multi-touch touch pad?

    - by lhahne
    I have a MacBook 5,1 - that is to say, the only 13-inch aluminium MacBook, as the later revisions were renamed MacBook Pro. Two-finger scrolling seems to work fine, but dragging doesn't. In OS X this works by pointing at an object, clicking and keeping your finger pressed on the touch pad, then sliding another finger to move the cursor. In Ubuntu this causes weird and undefined behavior, as it seems the driver doesn't recognize it as dragging. Any ideas?

    Read the article

  • SharedObject not saving the level progress

    - by user3536228
    I am making a Flash game in which I have a variable levelState that describes the current level the user has reached. I am using SharedObject to save the progress, but it does not do so. First I declared class-level variables:

        private var levelState:Number = 1;
        private var mySaveData:SharedObject = SharedObject.getLocal("levelSave");

    In the Main function I check whether this is the first run of the game, like below:

        if (mySaveData.data.levelsComplete == null)
        {
            mySaveData.data.levelsComplete = 1;
        }

    And in the function where the winning condition is checked, so that levelState can be increased, I use this SharedObject to hold the value of levelState:

        if (/*winning condition*/)
        {
            levelState++;
            mySaveData.data.levelsComplete = levelState;
            mySaveData.flush();
            setNewLevel(levelState);
        }

    But when I play the game, clear a level and then run the game again, it does not start from that level; it starts from the beginning.

    Read the article

  • Making a level editor for my game

    - by Sherif Maher Eaid
    I am making a 2D sprite-based game in XNA for WP7. The game logic is simple: you start at some point, and you want to avoid obstacles and reach a certain goal. Obviously I need to make many levels for the game to be challenging and fun. I am considering making a level editor for my game, where I would be able to design the level using some kind of GUI, which then translates that into a .lvl file or something similar that the game can read and interpret as a playable level. Is there an already-made level editor for XNA/WP7?

    Read the article

  • executing a script from maven inside a multi module project

    - by Roman
    Hi everyone. I have a multi-module project. At the beginning of each build I would like to run a bat file, so I did the following:

        <profile>
          <id>deploy-db</id>
          <build>
            <plugins>
              <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>exec-maven-plugin</artifactId>
                <version>1.1.1</version>
              </plugin>
            </plugins>
            <pluginManagement>
              <plugins>
                <plugin>
                  <groupId>org.codehaus.mojo</groupId>
                  <artifactId>exec-maven-plugin</artifactId>
                  <version>1.1.1</version>
                  <executions>
                    <execution>
                      <phase>validate</phase>
                      <goals>
                        <goal>exec</goal>
                      </goals>
                      <inherited>false</inherited>
                    </execution>
                  </executions>
                  <configuration>
                    <executable>../database/schemas/import_databases.bat</executable>
                  </configuration>
                </plugin>
              </plugins>
            </pluginManagement>
          </build>
        </profile>

    When I run mvn verify -Pdeploy-db from the root, the script gets executed over and over again in each of my modules. I want it to be executed only once, in the root module. What am I missing? Thanks

    Read the article

  • Stop Saying "Multi-Channel!"

    - by David Dorf
    I keep hearing the term "multi-channel" in our industry, but it's time to move on. It kinda reminds me of the term "ECR", or electronic cash register. Long ago ECR was a leading-edge term, but nowadays it's rarely used because it's table stakes. After all, what cash register today isn't electronic? The same logic applies to multi-channel, at least when we're talking about tier-1 and tier-2 retailers. If you're still talking about multi-channel retailing, you're in big trouble. Some have switched over to the term "cross-channel," and that's a step in the right direction but still falls short. It's kinda like saying, "I upgraded my ECR to accept debit cards!" Yawn. Who hasn't? Today's retailers need to focus on omni-channel, which I first heard from my friends over at RSR but was originally coined at IDC.

    First, retailers added e-commerce to their store and catalog channels, yielding multi-channel retailing. Consumers could use the channel that worked best for them. Then some consumers wanted to combine channels with features like buy-on-the-Web, pickup-in-the-store. Thus began the cross-channel initiatives to break down the silos and enable the channels to communicate with each other. But the multi-channel architecture is full of duplication that thwarts efforts to provide a consistent experience. Each channel has its own cart, its own pricing, and often its own CRM. This was an outcrop of trying to bring the independent channels to market quickly. Rather than reusing and rebuilding existing components to meet the new demands, silos were created that continue to exist today.

    Today's consumers want omni-channel retailing. They want to interact with brands in a consistent manner that is channel-transparent, yet optimized for that particular interaction. The diagram below, from the soon-to-be-released NRF Mobile Blueprint v2, shows this progression. For retailers to provide an omni-channel experience, there needs to be one logical representation of products, prices, promotions, and customers across all channels. The only thing that varies is the presentation of the content based on the delivery mechanism (e.g. shelf labels, mobile phone, web site, print, etc.), and often these mechanisms can be combined in various ways.

    I'm looking forward to the day when I can use my phone to scan QR codes in a catalog to create a shopping cart of items, then do some further research on the retailer's web site and be told about related items that might interest me, easily solicit opinions and reviews from social sites, and finally enter the store to pick up my items, knowing that any applicable coupons have been applied. In this scenario, I, the consumer, am dealing with a single brand that is aware of me and my needs throughout the entire transaction. Nirvana.

    Read the article

  • Creating a multi-tenant application using PostgreSQL's schemas and Rails

    - by ramon.tayag
    Stuff I've already figured out

    I'm learning how to create a multi-tenant application in Rails that serves data from different schemas based on what domain or subdomain is used to view the application. I already have a few concerns answered:

    - How can you get subdomain-fu to work with domains as well? Here's someone that asked the same question, which leads you to this blog.
    - What database, and how will it be structured? Here's an excellent talk by Guy Naor, and a good question about PostgreSQL and schemas. I already know my schemas will all have the same structure. They will differ in the data they hold.
    - So, how can you run migrations for all schemas? Here's an answer.

    Those three points cover a lot of the general stuff I need to know. However, in the next steps I seem to have many ways of implementing things. I'm hoping that there's a better, easier way.

    Finally, to my question

    When a new user signs up, I can easily create the schema. However, what would be the best and easiest way to load the structure that the rest of the schemas already have? Here are some questions/scenarios that might give you a better idea.

    - Should I pass it on to a shell script that dumps the public schema into a temporary one, and imports it back into my main database (pretty much like what Guy Naor says in his video)? Here's a quick summary/script I got from the helpful #postgres on freenode. While this will probably work, I'm gonna have to do a lot of stuff outside of Rails, which makes me a bit uncomfortable, which also brings me to the next question.
    - Is there a way to do this straight from Ruby on Rails? Like create a PostgreSQL schema, then just load the Rails database schema (schema.rb - I know, it's confusing) into that PostgreSQL schema.
    - Is there a gem/plugin that has these things already? Methods like "create_pg_schema_and_load_rails_schema(the_new_schema_name)". If there's none, I'll probably work at making one, but I'm doubtful about how well tested it'll be with all the moving parts (especially if I end up using a shell script to create and manage new PostgreSQL schemas).

    Thanks, and I hope that wasn't too long!

    UPDATE May 11, 2010 11:26 GMT+8

    Since last night I've been able to get a method to work that creates a new schema and loads schema.rb into it. Not sure if what I'm doing is correct (seems to work fine, so far) but it's a step closer at least. If there's a better way please let me know.

        module SchemaUtils
          def self.add_schema_to_path(schema)
            conn = ActiveRecord::Base.connection
            conn.execute "SET search_path TO #{schema}, #{conn.schema_search_path}"
          end

          def self.reset_search_path
            conn = ActiveRecord::Base.connection
            conn.execute "SET search_path TO #{conn.schema_search_path}"
          end

          def self.create_and_migrate_schema(schema_name)
            conn = ActiveRecord::Base.connection
            schemas = conn.select_values("select * from pg_namespace where nspname != 'information_schema' AND nspname NOT LIKE 'pg%'")

            if schemas.include?(schema_name)
              tables = conn.tables
              Rails.logger.info "#{schema_name} exists already with these tables #{tables.inspect}"
            else
              Rails.logger.info "About to create #{schema_name}"
              conn.execute "create schema #{schema_name}"
            end

            # Save the old search path so we can set it back at the end of this method
            old_search_path = conn.schema_search_path

            # Tried to set the search path like in the methods above (from Guy Naor)
            #   conn.execute "SET search_path TO #{schema_name}"
            # But the connection itself seems to remember the old search path.
            # If set this way, it works.
            conn.schema_search_path = schema_name

            # Directly from databases.rake.
            # In Rails 2.3.5 databases.rake can be found in railties/lib/tasks/databases.rake
            file = "#{Rails.root}/db/schema.rb"
            if File.exists?(file)
              Rails.logger.info "About to load the schema #{file}"
              load(file)
            else
              abort %{#{file} doesn't exist yet. It's possible that you just ran a migration!}
            end

            Rails.logger.info "About to set search path back to #{old_search_path}."
            conn.schema_search_path = old_search_path
          end
        end

    Read the article
