Search Results

Search found 7957 results on 319 pages for 'production databases'.


  • How to automatically reflect direct SQL updates of the underlying database in an ActiveRecord in Ruby on Rails?

    - by Vadim Eisenberg
    Hello! I am new to Ruby on Rails and I have a (maybe naive) question: I want an ActiveRecord (and ultimately the generated HTML) to reflect direct SQL updates of the underlying database. By "direct updates" I mean updates that bypass the ActiveRecord methods, for example ones issued from the MySQL console. I guess MySQL triggers could be used here, calling some stored procedure that causes the appropriate ActiveRecord to be reloaded. Is there any automatic handling of this scenario in ActiveRecord/Ruby on Rails? Has somebody implemented it? Can somebody recommend other MVC frameworks that reflect direct changes in mapped databases?
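
    A minimal Ruby sketch of the non-trigger route (the connection settings and the Order model are illustrative assumptions): ActiveRecord does not watch the database for you, but #reload re-queries the row, so a direct UPDATE issued from the MySQL console becomes visible on demand.

      require "active_record"

      # Assumed connection settings; adjust to the real application.
      ActiveRecord::Base.establish_connection(
        adapter:  "mysql2",
        database: "app_development",
        username: "root"
      )

      class Order < ActiveRecord::Base; end

      order = Order.find(1)
      # Meanwhile, someone runs in the MySQL console:
      #   UPDATE orders SET status = 'shipped' WHERE id = 1;
      order.reload                  # discard cached attributes and re-query the row
      puts order.status             # => "shipped"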


  • How can I specify a single .config file for multiple EXE projects in .NET

    - by Russ
    I have a project that I am breaking up into multiple .exe projects. I still plan on publishing them with ClickOnce, into the same location at the same time, and I would like them to use the same config file. I have added the app.config to each project using the "Add as Link" option in Visual Studio, which is great for debugging, but in production, when I compile each .exe project, the app.config is not copied into the "master project"'s bin folder. Example: master.exe ships with master.exe.config; master.exe may launch order.exe or returns.exe based on user settings; master, order, and returns will all reside in the same folder and should share a single config file.


  • As a team should we develop locally and merge into the dev server, or develop on the dev server?

    - by CogitoErgoSum
    Recently I was tasked with writing up formal procedures for a team-based development environment. We have several projects with multiple modules each. Right now there are only two programmers, but there are plans to expand to 4-6. Each programmer will be working on the same project, and possibly on the same pages, which may lead to overwriting or merge errors. So far the ideal solution I have thought up is: local development (WAMP/VM or some virtual server instance on each developer's own machine); once a developer has finished a change, they check it into the CVS repository and merge it with other fixes; the CVS version is then deployed to the primary dev server for testing by the devs. The MySQL databases are kept on the primary dev server and developers may connect to them remotely. Any schema or data alterations are run through a DB admin, who will notify all devs of any DB changes (which should be rare). Does anyone see an issue with this, or have a better solution?


  • .NET assembly in GAC + Config

    - by Dante
    I have a .NET 3.5 assembly, a DAL, that connects to a database through LINQ to SQL. I deploy this assembly in the GAC because it can be used by multiple business layers. The question is: in the dev environment I have a connection string different from the one in the production environment, so before deploying the assembly to the production GAC I need to recompile it with the appropriate connection string. Is there any way to deploy the assembly to the GAC independently of the connection string, with that information read from some config file instead? Thanks in advance.


  • Log4net rollingfile appender skipping every other day in asp.net mvc app

    - by tasty_spider_men
    I've set up some log4net logging on an ASP.NET MVC application I've had running for a little over a month now, with a rolling file (and SMTP) appender on it. The application is hit several thousand times a day, so a lot of actions are logged. On top of that, a number of batch jobs run as part of the application and also write to the log. As I've been occupied with other work, I've not attended to this application much, assuming things were working based on my test setup. Unfortunately I've discovered that the rolling file appender on my production server seems to be skipping every other day, i.e. I have logs 10/4/2010, 10/4/2010.1, 10/4/2010.2, 12/4/2010, 12/4/2010.1, 14/4/2010, 14/4/2010.1, etc. Any idea what could be causing this? It is absolutely impossible that there has been no activity on the missing days for the lifetime of the application.


  • Weblogic server lib VS instance libext

    - by Sebastien Lorber
    I use WebLogic 10. It provides an Oracle JDBC driver 10.2.0.2 (in server/lib under the WebLogic home). A long time ago, someone at work put a 10.2.0.3 driver in the instance's libext folder. But in production we got a JDBC driver stack trace (a NullPointerException), and by reverse engineering it seems we are using driver 10.2.0.2. We know we could change the driver in WebLogic's server/lib, but I want to understand: isn't libext supposed to override the server libs, the way META-INF libs override libext? We are also in a strange situation:
    - We have two EARs, and for the exact same operation in both, one will sometimes throw the Oracle driver NullPointerException while the other doesn't.
    - I wonder whether one EAR is using 10.2.0.2 while the other is using 10.2.0.3 (I saw a fixed bug for that version that could match our problem).
    - I need to check more carefully, but at first sight both EARs use the exact same datasource defined in WebLogic's JNDI resources.
    Any ideas?


  • Customizing Bugzilla 4.0.2 Bug ID numbers

    - by Marcus Polk
    Is it possible to customize Bugzilla bug numbers and add a letter designator, so we immediately know a bug came from Bugzilla? My company is evaluating Bugzilla and has now added many new bugs. We also use two other bug databases; this wasn't my decision, and I believe the goal was better reporting. I read an answer here about "seeding" Bugzilla bug numbers by setting the AUTO_INCREMENT value on the "bugs" table to a different starting value. I wish I had thought about this sooner and found this board: starting at 9000 or some other unusually high value would have ensured that we knew exactly which bugs came from Bugzilla. However, would it be possible to change the field in the "bugs" table to accept letters as well? Of course, that would probably break the auto-incrementing of the numbers. Any help or advice is greatly appreciated. Thank you.


  • Is it possible to run a program compiled with Xcode on Mac OS X under FreeBSD? (Objective-C/Cocoa)

    - by Eonil
    I plan to build a website whose CGI is written with Cocoa. My goal is to develop on Mac OS X and run on FreeBSD. Is this possible? As far as I know, there is a free implementation of some NeXTSTEP classes, GNUstep. The website is built almost entirely from strings, and from reading the GNUstep documentation its classes are enough. The DB connection will be made through C interfaces. The biggest problem I'm concerned about is linking and binary compatibility. I'm currently setting up FreeBSD on VirtualBox, but I'd like to hear from experts about whether this is feasible. This is not a production server, just a trial, so please feel free to say anything.


  • jruby/activerecord-jdbc/tomcat/DB2 ready for enterprise?

    - by arkadiy
    I am trying to introduce RoR to my company and I have two ways of doing so in mind: (1) rails/ibm_db2/passenger/DB2, which is my preferred way but not really supported by the company's infrastructure; (2) jruby/activerecord-jdbc/tomcat/DB2, probably the easier migration path since it relies on the current infrastructure and Java libs, IF I have proof that it is an enterprise-ready technology. Does anyone know of any proof that jruby/activerecord-jdbc-adapter/DB2/tomcat is mature enough for production? Are there any problems I should know about during development, deployment or runtime? My web app is for a company intranet, with around 200-400 active users.
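
    For reference, a hedged sketch of what the JRuby route can look like: the activerecord-jdbc-adapter gem exposes a generic "jdbc" adapter that drives IBM's standard JDBC driver. The host, database name and credentials below are placeholders, not a recommended configuration.

      # Run under JRuby with the activerecord-jdbc-adapter gem installed and the
      # DB2 JDBC driver (db2jcc.jar) on the classpath.
      require "active_record"

      ActiveRecord::Base.establish_connection(
        adapter:  "jdbc",
        driver:   "com.ibm.db2.jcc.DB2Driver",
        url:      "jdbc:db2://db2host:50000/INTRANET",
        username: "appuser",
        password: "secret"
      )

      puts ActiveRecord::Base.connection.active?   # simple connectivity smoke test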


  • Ruby On Rails : db:migrate do not run

    - by user332219
    When I run this command I get an error:

      rake db:migrate --trace
      rake aborted!
      NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8'

    The full trace is here: http://patxi.mayol.free.fr/public/trace.txt. My database.yml file is:

      # MySQL. Versions 4.1 and 5.0 are recommended.
      development:
        adapter: mysql
        encoding: utf8
        reconnect: false
        database: annuaire_development
        pool: 5
        username: root
        password:
        host: 127.0.0.1

      test:
        adapter: mysql
        encoding: utf8
        reconnect: false
        database: annuaire_test
        pool: 5
        username: root
        password:
        host: 127.0.0.1

      production:
        adapter: mysql
        encoding: utf8
        reconnect: false
        database: annuaire_production
        pool: 5
        username: root
        password:
        host: 127.0.0.1

    My setup: WampServer 2.0 with MySQL 5.0.51a, Aptana 2.0.4, Ruby 1.8.5. Gems: actionmailer (2.3.4, 2.3.2), actionpack (2.3.4, 2.3.2), activerecord (2.3.4, 2.3.2), activeresource (2.3.4, 2.3.2), activesupport (2.3.4, 2.3.2), cgi_multipart_eof_fix (2.5.0), fastthread (1.0.1), gem_plugin (0.2.3), linecache (0.43), mongrel (1.1.5), mysql (2.8.1, 2.7.3), rack (1.0.0), rails (2.3.4, 2.3.2), rake (0.8.7), ruby-debug-base (0.10.3), ruby-debug-ide (0.4.5), sqlite3-ruby (1.2.1). Thanks.
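
    For what it's worth, the trace points at a missing Integer#ord, a method that only exists from Ruby 1.8.7 onward; a quick hedged check of the running interpreter looks like this:

      # On Ruby 1.8.5 the second line is expected to print false, which would
      # match the NoMethodError in the rake trace above.
      puts RUBY_VERSION
      puts 0.respond_to?(:ord)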


  • Stored procs breaking overnight

    - by Chad
    We are running MS SQL 2005 and have been experiencing a very peculiar problem for the past few days. I have two procs: one creates an hourly report of data, and another calls it, puts its results in a temp table, does some aggregations, and returns a summary. They work fine... until the next morning, when the calling proc suddenly complains about an invalid column name. The fix is simply to recompile the calling proc, and all works well again. How can this happen? It has happened three nights in a row since moving these procs into production.


  • Authentication in Rails, where to start?

    - by Victor P
    I'm learning Rails by building apps, and I want to make my first authenticated app: users sign up, log in, make some changes in the models they have access to, and log out. I did some Google searching but it is quite confusing: many plugins, many tutorials, and I don't know where to start. Is there a state-of-the-art authentication method for Rails? What do you use in production to authenticate your users? Any help would be appreciated. Thanks.
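
    Whatever plugin ends up being chosen (Devise, Authlogic and restful_authentication are the names that usually come up), they all wrap roughly the same cookie-session flow. A minimal hand-rolled sketch, assuming a recent Rails, a users table with a password_digest column, and the bcrypt gem:

      class User < ActiveRecord::Base
        has_secure_password              # bcrypt-backed password hashing
      end

      class SessionsController < ApplicationController
        def create
          user = User.find_by(email: params[:email])
          if user && user.authenticate(params[:password])
            session[:user_id] = user.id  # logged in
            redirect_to root_path
          else
            render :new                  # bad credentials, show the form again
          end
        end

        def destroy
          session.delete(:user_id)       # logged out
          redirect_to root_path
        end
      end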


  • Why does tokyo tyrant slow down exponentially even after adjusting bnum?

    - by HenryL
    Has anyone successfully used Tokyo Cabinet / Tokyo Tyrant with large datasets? I am trying to upload a subgraph of the Wikipedia data source. After hitting about 30 million records, I get an exponential slowdown. This occurs with both the HDB and BDB databases. In the HDB case I adjusted bnum to 2-4x the expected number of records, with only a slight speed-up. I also set xmsiz to 1 GB or so, but ultimately I still hit a wall. It seems that Tokyo Tyrant is basically an in-memory database, and once you exceed xmsiz or your RAM, you get a barely usable database. Has anyone else encountered this problem? Were you able to solve it?


  • How to inline compressed CSS in Rails with assets pipeline

    - by haimg
    I'm trying to inline CSS into my layout. I'm currently using:

      = Rails.application.assets.find_asset('embedded.css').body.html_safe

    However, the CSS returned is not compressed. I verified that the .digest_path asset file exists and is properly compressed. I can, of course, write a helper that checks whether the on-disk compressed file exists for a given asset, and use it. However, I think find_asset actually compiles the CSS asset each time it is called -- not good in production. I hope a cleaner solution exists for this issue.
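
    A hedged sketch of the helper route mentioned above (module and method names are invented): prefer the precompiled, compressed file under public/assets when it exists, and fall back to on-the-fly compilation in development. Note that with config.assets.compile disabled in production, Rails.application.assets can be nil, so a production-safe version would read the asset manifest instead.

      module InlineAssetsHelper
        # Returns a <style> tag with the named stylesheet inlined.
        def inline_stylesheet(name)
          asset    = Rails.application.assets.find_asset(name)
          compiled = Rails.root.join("public", "assets", asset.digest_path)
          css      = File.exist?(compiled) ? File.read(compiled) : asset.to_s
          content_tag(:style, css.html_safe)
        end
      end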


  • What Can A 'TreeDict' (Or Treemap) Be Used For In Practice?

    - by Seun Osewa
    I'm developing a 'TreeDict' class in Python. This is basically a dict that allows you to retrieve its key-value pairs in sorted order, just like the TreeMap collection class in Java. I've implemented some functionality based on the way unique indexes in relational databases can be used, e.g. functions that let you retrieve values corresponding to a range of keys; keys greater than, less than, or equal to a particular value in sorted order; strings or tuples that have a specific prefix in sorted order; etc. Unfortunately, I can't think of any real-life problem that requires a class like this. I suspect the reason we don't have sorted dicts in Python is that in practice they aren't needed often enough to be worth it, but I want to be proved wrong. Can you think of any specific applications of a 'TreeDict'? Any real-life problem that would be best solved by this data structure? I just want to know for sure whether it is worth it.


  • Any recommended VC++ settings for better PDB analysis on release builds

    - by Brian R. Bondy
    Are there any VC++ settings I should know about to generate better PDB files that contain more information? I have a crash dump analysis system in place based on the crashrpt project. Also, my production build server has the source code on the D: drive, but my development machine has it on the C: drive. I entered the source path in the VC++ settings, but when looking through the call stack of a crash, it doesn't automatically jump to my source code. I believe it would work if my dev machine's source code were also on the D: drive.


  • Can I get a "base URL" in Wordpress within a template file?

    - by alex
    Usually in my PHP apps I have a base URL set up so I can do things like this:

      <a href="<?php echo BASE_URL; ?>tom/jones">Tom</a>

    Then I can move my site from development to production and swap it easily, with the change applied site-wide (and it seems more reliable than <base href="" />). I'm doing up a WordPress theme, and I am wondering: does WordPress have anything like this built in, or do I need to define my own? I can see ABSPATH, but that is the absolute path in the file system, not something relative to the document root.


  • Know if a Visual Studio Website project is recompiling itself in the background?

    - by jdk
    A number of team members update a central ASP.NET dev site (a Website project, not a Web Application project). Some kinds of changes cause it to recompile/rebuild. The large website takes a while to recompile, and we've noticed it will still seemingly serve dynamic pages before everything is internally updated. During the site's "gestation" period, our mileage varies when hitting it: sometimes we get a correct page, sometimes a compilation-error page that will eventually be served without the error, and at other times an unexpected hybrid. Is it possible to query an ASP.NET website application to see whether it's currently compiling or rebuilding itself? If so, I would write a status page that the team could check when they're seeing weird behaviour, so they would know to wait. Update: our team often edits files manually on the dev server; for production we make pre-compiled pushes. The dev environment is a little more malleable and ever-changing, so I'm looking for a way to reduce the "confusion" there.


  • SQL script to show additions to tables

    - by andreas
    I have two MS SQL 2005 databases, a TEST and a DEV database. Our developer added some extra columns, tables, etc. in the DEV database, which created differences from the TEST database. Is there a script I can write that can tell me what changed in the DEV database between certain dates? I found a couple of tools, but they are quite basic and don't really generate change scripts. I also tried the change-script function in Management Studio, but it seems to work only when the change is first made, not later. I'd appreciate your thoughts.


  • ANTLR: token to text in rewrite rule

    - by Antonio
    I'm building an AST using ANTLR. I want to write a production that matches this string: ${identifier}. So, in my grammar file I have:

      reference
          : DOLLAR LBRACE IDENT RBRACE -> ^(NODE_VAR_REFERENCE IDENT)
          ;

    This works fine. I'm using my own adaptor to emit tree nodes. The rewrite rule creates two nodes for me: one for NODE_VAR_REFERENCE and one for IDENT. What I want is to create only one node (for the NODE_VAR_REFERENCE token), and that node must have the IDENT token in its "token" field. Is this possible using a rewrite rule? Thanks.


  • What would you like to see in a book about Rails 3?

    - by Ryan Bigg
    Hello, I'm thinking of writing a book about Rails 3 and I have a basic idea of what I want to write about, but really what I need is that little bit of polish from the community. The book is planned to cover:
    - A little bit of Ruby
    - Version control using Git
    - Testing using RSpec & Cucumber
    - Life without Rails (Rack)
    - Developing an application in Rails (vague, I know, sorry! Work in progress)
    - Javascript testing using Selenium
    - Background processing
    - Document databases (a tentative maybe, at this stage)
    - Monitoring an application
    And a little more. So, I leave this question for you: What do you want to see in a book about Rails 3, which goes from "This is Rails" to "This is how Rails scales"?


  • How can unit testing make parameter validation redundant?

    - by Johann Gerell
    We have a convention of validating all parameters of constructors and public functions/methods. For mandatory parameters of reference type, we mainly check for non-null, and that's the chief validation in constructors, where we set up the type's mandatory dependencies. The number-one reason we do this is to catch the error early and not get a null reference exception a few hours down the line, without knowing where or when the faulty parameter was introduced. As we transition to more and more TDD, some team members feel the validation is redundant. Uncle Bob, who is a vocal advocate of TDD, strongly advises against parameter validation. His main argument seems to be "I have a suite of unit tests that makes sure everything works". But for the life of me I just cannot see how unit tests can prevent our developers from calling these methods with bad parameters in production code. Please, unit testers out there, if you can explain this to me in a rational way with concrete examples, I'd be more than happy to drop this parameter validation!
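
    For illustration, the convention under discussion has roughly the following guard-clause shape (a Ruby sketch with invented names; the same idea applies in any language): the constructor rejects a missing dependency immediately instead of letting the failure surface hours later, far from its cause.

      class ReportGenerator
        def initialize(repository)
          # Fail fast: a nil dependency is reported here, at the call site,
          # rather than as a NoMethodError deep inside #run later on.
          raise ArgumentError, "repository must not be nil" if repository.nil?
          @repository = repository
        end

        def run
          @repository.fetch_rows.map(&:to_s)
        end
      end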


  • How to create a fully qualified hyperlink to a resource dynamically?

    - by Slauma
    In ASP.NET I'd like to create a link that points to a specific URI and send it in an email to a user, for instance http://www.BlaBla.com/CustomerPortal/Order/9876. I can create the second part of the URI, /CustomerPortal/Order/9876, dynamically in code-behind. My question is: how can I create the base URI, http://www.BlaBla.com, without hardcoding it in my application? Basically I want to end up with:
    - http://localhost:1234/CustomerPortal/Order/9876 (on my development machine)
    - http://testserver/CustomerPortal/Order/9876 (on an internal test server)
    - http://www.BlaBla.com/CustomerPortal/Order/9876 (on the production server)
    So is there a way to ask the server where the application is running, "please tell me the base URI of the application"? Or any other way? Thank you in advance!


  • Solution for distributing MANY simple network tasks?

    - by EmpireJones
    I would like to create some sort of distributed setup for running a ton of small, simple REST web queries in a production environment. For each group of 5-10 related queries executed from a node, I will generate a very small amount of derived data, which will need to be stored in a standard relational database (such as PostgreSQL). What platforms are built for this type of problem? The nature, data sizes, and quantities seem to contradict the mindset of Hadoop. There are also more grid-based architectures such as Condor and Sun Grid Engine, which I have seen mentioned, but I'm not sure whether those platforms have any recovery from errors (checking whether a job succeeded). What I would really like is a FIFO-type queue that I could add jobs to, with the end result of my database getting updated. Any suggestions on the best tool for the job?
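
    As a point of reference for the FIFO idea, here is a minimal single-process Ruby sketch using only the standard library (the URLs and the derived datum are invented); a real deployment would push jobs onto a proper queue such as Resque or beanstalkd and write the derived rows to PostgreSQL:

      require "net/http"
      require "thread"

      jobs    = Queue.new
      results = Queue.new

      (1..100).each { |i| jobs << "http://api.example.com/items/#{i}" }

      workers = 5.times.map do
        Thread.new do
          loop do
            url = begin
                    jobs.pop(true)                 # non-blocking pop
                  rescue ThreadError
                    break                          # queue drained, worker exits
                  end
            body = Net::HTTP.get(URI(url))         # the small REST query
            results << [url, body.bytesize]        # tiny piece of derived data
          end
        end
      end
      workers.each(&:join)

      # Drain the derived data; in practice each entry would become an INSERT.
      until results.empty?
        url, size = results.pop
        puts "#{url} -> #{size} bytes"
      end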


  • Why wouldn't an S3 ACL "stick"?

    - by Chris Phillips
    We would like to set an ACL to give a partner account access to one of our buckets. We've tested the process on a test account and everything works fine. On our production account/buckets, however, we can set the ACL and see the update, but as soon as we attempt to access the bucket from the other account we get a forbidden response. Afterwards, when we look at the ACL list for the bucket, the permission is gone. We've tried both Amazon's new S3 tool in the AWS Management Console and CloudBerry Explorer, and both tools exhibit exactly the same behavior. Using the same process to update an ACL on our test account works as expected (the ACL update "sticks"). What would cause the ACL not to "stick"? Does anyone have ideas on how to fix or work around the problem?
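
    One way to narrow this down is to set the grant programmatically and read the ACL straight back, taking the console tools out of the picture. A hedged sketch with the aws-sdk-s3 gem (bucket name and canonical user IDs are placeholders):

      require "aws-sdk-s3"

      s3     = Aws::S3::Client.new(region: "us-east-1")
      bucket = "our-production-bucket"

      # Grant the partner account read access while keeping full control for
      # the owner; a put_bucket_acl call replaces the existing ACL entirely.
      s3.put_bucket_acl(
        bucket: bucket,
        grant_full_control: 'id="OWNER_CANONICAL_USER_ID"',
        grant_read: 'id="PARTNER_CANONICAL_USER_ID"'
      )

      # Read the ACL back immediately to see whether the grant persisted.
      acl = s3.get_bucket_acl(bucket: bucket)
      acl.grants.each do |grant|
        puts "#{grant.grantee.id || grant.grantee.uri}: #{grant.permission}"
      end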

