Search Results

Search found 13713 results on 549 pages for 'production environment'.

Page 119/549 | < Previous Page | 115 116 117 118 119 120 121 122 123 124 125 126  | Next Page >

  • rake db:migrate and rake db:create both work on test database, not development database

    - by geography_guy
    I am new to Stack Overflow and Ruby on Rails. My problem is, when I run the command rake db:create or rake db:migrate, the test database is affected, but the development database is not. Rails version: 3.2.2. My database.yml:

        # Warning: The database defined as "test" will be erased and
        # re-generated from your development database when you run "rake".
        # Do not set this db to the same as development or production.
        test: &test
          adapter: postgresql
          encoding: unicode
          database: ticketee_test
          pool: 5
          username: ticketee
          password: my_password_here

        development:
          adapter: postgresql
          encoding: unicode
          database: ticketee_development
          pool: 5
          username: ticketee
          password: my_password_here

        production:
          adapter: postgresql
          encoding: unicode
          database: ticketee_production
          pool: 5
          username: ticketee
          password: my_password_here

        cucumber:
          <<: *test
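    A quick check worth making first (a sketch, assuming a standard Rails 3.2 setup; nothing here is confirmed from the question): rake picks its target database from the RAILS_ENV (or RACK_ENV) shell variable, so if either is exported as "test", every db task will hit ticketee_test.

        # Show what environment rake will run under:
        echo "RAILS_ENV=$RAILS_ENV RACK_ENV=$RACK_ENV"

        # Force the development environment explicitly for one run:
        RAILS_ENV=development bundle exec rake db:create
        RAILS_ENV=development bundle exec rake db:migrate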

    Read the article

  • jQuery Ajax request working on Ubuntu but not working on Debian?

    - by MICADO
    I have a strange bug. I develop my application under Ubuntu Linux, then export the project to Debian Linux for production. I use a classic $.ajax request that fetches a JSON file from a URL and parses its content. I put a JavaScript alert() in the success callback to see what is returned. In the development version under Ubuntu it works and I get: [object Object],[object Object],[object Object] In the production environment under Debian it does not work and I get the raw contents of my JSON file: [ { "cell_line" : "", "id_user" : "2", "public" : "0", },{...},{..} ,etc...] What is going on here? I really don't understand. How can the change of platform (Ubuntu to Debian) cause this? There must be something I am missing. I'd really appreciate some help on this. Thanks!
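    One explanation that fits the symptom (a sketch, assuming jQuery 1.7-era behaviour; the URL is hypothetical): when no dataType is given, $.ajax decides whether to parse the response as JSON from the server's Content-Type header. If the Debian server serves the file as text/plain rather than application/json, the success callback receives the raw string, exactly as shown. Forcing the dataType sidesteps the server's MIME configuration:

        $.ajax({
            url: "/path/to/data.json",   // hypothetical URL
            dataType: "json",            // parse as JSON regardless of Content-Type
            success: function (data) {
                alert(data);             // now an array of objects on both platforms
            }
        });

    Note that the trailing commas visible in the sample ("public" : "0", }) are invalid JSON and would make a strict parser reject the file, so they are worth fixing on the server side as well.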

    Read the article

  • How do I make use of multiple cores in Large SQL Server Queries?

    - by Jonathan Beerhalter
    I have two SQL Servers, one for production and one as an archive. Every night, a SQL job runs and copies the day's production data over to the archive. As we've grown, this process takes longer and longer. When I watch the utilization on the archive server while the archival process runs, I see that it only ever makes use of a single core, and since this box has eight cores, that is a huge waste of resources. The job runs at 3AM, so it's free to take any and all resources it can find. So what I need to do is figure out how to structure SQL Server jobs so they can take advantage of multiple cores, but I can't find any literature on tackling this problem. We're running SQL Server 2005, but I could certainly push for an upgrade if 2008 takes care of this problem.
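    For context (a sketch, with hypothetical database and table names, since the actual job steps aren't shown): SQL Server parallelizes individual statements on its own when the optimizer's estimated cost exceeds the "cost threshold for parallelism" and "max degree of parallelism" permits it, so those two instance settings are the first things to check:

        -- Check/adjust the instance-level parallelism settings:
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max degree of parallelism', 0;      -- 0 = allow all cores
        EXEC sp_configure 'cost threshold for parallelism', 5;
        RECONFIGURE;

        -- A per-statement hint is also available:
        INSERT INTO Archive.dbo.Orders
        SELECT *
        FROM Production.dbo.Orders
        WHERE OrderDate >= DATEADD(day, -1, GETDATE())
        OPTION (MAXDOP 8);

    A caveat worth knowing: on SQL Server 2005 the INSERT side of an INSERT ... SELECT runs serially even when the SELECT is parallel, so a common workaround is to split the nightly copy into several concurrent job steps over disjoint key ranges.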

    Read the article

  • Leaving SQL Management open on the internet

    - by Tim Fraud
    I am a developer, but every so often need access to our production database -- yeah, poor practice, but anyway... My boss doesn't want me directly on the box using RDP, so we decided to just permit MS SQL Management Studio access so that I can do my tasks. So right now we have the SQL box somewhat accessible on the internet (on port 1433, if I am not mistaken), which opens a security hole. But I am wondering: how uncommon a practice is this, and what defaults should I be concerned about? We use MSSQL2008 and I created an account that has read-only access, because my production tasks only need that. I didn't see any unusual default accounts with default passwords on the system, so I would be interested to hear your take. (And of course, is there a better way?)

    Read the article

  • Why doesn't my ClickOnce deployment pick up the latest changes to the application?

    - by Simon
    I have a WinForms app which is deployed to a local network drive (as 'Online Only') via ClickOnce. This has been working fine but today I made some changes to the application and attempted to ClickOnce deploy it to a separate network location (to use as a test system) rather than the current production location. ClickOnce publishes successfully, with no errors, to the correct location but only publishes the pre-change version; i.e. none of my changes are visible: the version number is the old version number and the displayed release date is the last production release back in 2009. What do I have to do to get this to publish correctly? I've used a similar approach on other applications with no such issues.

    Read the article

  • How to change data structure in mysql using mysqldump without deleting data

    - by Don Quixote
    Essentially what I'm trying to do is sync a production server with a sandbox server, but only the table structures and stored procedures. The procedures aren't any problem since they can be overridden, but the tables are. I want to sync and alter their structures on the production server using mysqldump (or any other way you can propose) without altering any existing data. If it helps, I only want to add more columns, not remove any existing ones. Also, I am using SQLyog. Is there any way to do this?
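    One workable route (a sketch, with hypothetical host and database names): dump structure only from both servers, diff the dumps, and apply the differences as explicit ALTER TABLE statements. Replaying a structure dump directly would be destructive, because mysqldump output contains DROP TABLE statements.

        # Structure-only dumps (no row data), including stored routines:
        mysqldump --no-data --routines -h sandbox.example -u user -p appdb > sandbox_schema.sql
        mysqldump --no-data --routines -h prod.example    -u user -p appdb > prod_schema.sql
        diff prod_schema.sql sandbox_schema.sql

        # Apply only additive changes by hand (column name is hypothetical):
        mysql -h prod.example -u user -p appdb \
          -e "ALTER TABLE customers ADD COLUMN loyalty_points INT NOT NULL DEFAULT 0;"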

    Read the article

  • How do I find out in SQL what database I'm connected to

    - by gjutras
    We have a change control environment where the developers give scripts to change control people to run. We have dev, QA, and production environments. I want to make a couple of segments conditional, doing different things depending on which database the change control person runs my script against:

        IF @dbname = 'dev'
        BEGIN
            -- do some dev stuff
        END
        IF @dbname = 'QA'
        BEGIN
            -- do some qa stuff
        END
        IF @dbname = 'Prod'
        BEGIN
            -- do some production stuff
        END

    How do I get the name of the currently connected database and fill @dbname?
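    A minimal sketch (assuming the databases really are named 'dev', 'QA', and 'Prod' as in the snippet above): the built-in DB_NAME() function, called with no argument, returns the name of the session's current database, so it can seed the variable directly:

        DECLARE @dbname sysname;
        SET @dbname = DB_NAME();   -- current database for this connection

        IF @dbname = 'dev'
        BEGIN
            PRINT 'running dev branch';
            -- do some dev stuff
        END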

    Read the article

  • How to set up a development Active Directory

    - by Rob
    Does anyone have any suggestions on how to set up a development environment for Active Directory? We are thinking of using development.contoso.com or something along those lines that is a completely separate environment from our production. This would be used for things like a dev SharePoint and possibly a dev Exchange server, maybe even a dev CRM. We are thinking of setting this all up using virtual machines, possibly having production data replicated down on a regular basis as well. Does anyone have any experience with this, or any suggestions on what to do or not to do?

    Read the article

  • Is it possible to set the path of the database for delayed_job in Rails?

    - by WitchOfCloud
    I am developing a mailing system with the delayed_job gem. It works in the development environment, but after deploying the application to the server it does not. This is my database.yml:

        development:
          adapter: sqlite3
          database: db/development.sqlite3
          pool: 5
          timeout: 5000

        test:
          adapter: sqlite3
          database: db/test.sqlite3
          pool: 5
          timeout: 5000

        production:
          adapter: sqlite3
          database: /var/www/service/shared/db/production.sqlite3
          pool: 5
          timeout: 5000

    I checked the queue (in /var/www/...) and jobs are enqueued correctly, and I started delayed_job with rake jobs:work. So I think the problem is that delayed_job is reading db/development.sqlite3. How can I solve this problem?
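    The suspicion in the last sentence fits a common pitfall (a sketch; the deploy path is an assumption based on the database.yml above): rake jobs:work defaults to the development environment, so a worker started without RAILS_ENV polls db/development.sqlite3 while the deployed app enqueues into the production database. Starting the worker in the production environment should line the two up:

        cd /var/www/service/current   # hypothetical current-release path
        RAILS_ENV=production bundle exec rake jobs:work

        # or, if the delayed_job daemon script is installed:
        RAILS_ENV=production script/delayed_job start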

    Read the article

  • Hyper-V and Hyper-threading: On or off?

    - by CapBBeard
    Hey all, With the new Xeon CPUs supporting Hyper-threading, what is the current wisdom with regard to using it (or not) on a Hyper-V host machine? I was originally under the impression that turning it on in a virtual host environment could be detrimental as the 'extra' CPUs were not true cores. However I've also read (unconfirmed) comments along the lines of MS doing some hard work to get Hyper-V running well in a Hyper-threading environment. Does anyone have any solid information or experience in this regard? Cheers!

    Read the article

  • "Outlook must be online or connected to complete this action" windows XP, outlook 2007, connect to e

    - by bob franklin smith harriet
    Hey, I can't connect to an Exchange server using Windows XP and Outlook 2007 via the "connect anywhere over HTTP" process. It had been working until recently, and the user reports no recent changes to his environment. The error is "Outlook must be online or connected to complete this action". Outlook prompts me for the username and password, which I can enter, then gives the error; however, this only happens after I delete the account and re-enter all the details for the Exchange server. The client computer that is unable to connect using Outlook can reach the HTTPS mail service, log in, and send/receive fine. Nobody else has reported issues. Building a test environment with a clean install of XP and Outlook 2007 gives the same error, but Windows 7 with Outlook 2007 connects perfectly every time. I also removed all saved passwords using control keymgr.dll, which didn't help. Any assistance or ideas would be appreciated; at this point nothing I've tried from TechNet or Google works.

    Read the article

  • Troubleshooting PXE-E11: ARP Timeout error under VMware ESXi 4 and HPSA 7.8

    - by warren
    I am running HP Server Automation v7.8 in a lab environment on VMware ESXi, managed via vSphere 4. On the same host I have several small VMs for OS provisioning testing (512MB RAM, 10GB hdd, one NIC on the same vSwitch that HPSA is running on). DHCP is configured to hand-out addresses in the 192.168.10.151-200 range. On boot of the VM, it receives an IP (eg 192.168.10.198) within seconds. However, after it receives its IP, a PXE-E11: ARP Timeout error occurs in trying to boot from the DHCP server. I do not know if this is a HPSA-specific error, as I have seen reports of the PXE-E11 error on various forums. Proposed solutions I have seen so far (changing VLAN settings, for example) have not been applicable to my environment. Are there any pointers/troubleshooting steps that can be taken to resolve this?

    Read the article

  • High Tq values for HAProxy

    - by Will
    I just took over administration of a new environment. A known issue is that it suffers from high response times (20+ seconds), so I figured I'd turn on HAProxy logging and see what is going on. I expected to see slow load times in the app servers, but I'm actually seeing high Tq values in HAProxy. The HAProxy instance is on EC2 and is NOT behind ELB.

        Sep 5 14:22:00 haproxy-apps01 haproxy[24695]: 76.14.153.221:3371 [05/Sep/2012:14:21:49.780] http-in default_apps/fe04-c 10936/0/0/55/10991 200 488 - - ---- 111/111/0/1/0 0/0 "GET /event_times/next?callback=jQuery170189312373075111_1346854917562&_=1346854918453 HTTP/1.1"

    As you can see, this one has a Tq of about 10 seconds. Not all the Tq values are high, but a good percentage of them (approx 35%) are above 1 second. Normally when I see this behavior I'd suspect network issues, but this is an incredibly high percentage of visitors to be having an issue like this, so I'm wondering if anybody has seen this or has any hints on diagnosing whether the issue could be on this box.
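    For what it's worth (a sketch, assuming an HAProxy 1.4-era setup): Tq measures how long HAProxy waited, after accepting the connection, for the client to send a complete request. Large values on a big fraction of requests are the classic signature of browsers that pre-establish connections speculatively and only send a request later, which is idle client time rather than server slowness. Two directives help separate that from real problems:

        defaults
            mode http
            option http-server-close   # per-request processing, honest per-request timers
            timeout http-request 10s   # drop clients that never send a complete request

    If high Tq persists for requests that clearly arrive promptly, the remaining suspects really are slow clients or packet loss between the clients and the EC2 instance.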

    Read the article

  • Networking setup for three systems

    - by srihari
    Hi, I want to set up a client-server environment. I have three systems: one running Solaris and two running Windows. I want to install the database and all other software on the server, and give the client systems limited access to the server's resources. Can anyone help me with how to set this up, and with the hardware requirements for such an environment? Your replies will be appreciated, and helpful to others with similar requirements. As we have more programming knowledge than networking knowledge, please explain in detail, and please provide links to any video tutorials or documents that would help. Thanks in advance, Srihari.

    Read the article

  • Capistrano asks for SSH password when deploying from local machine to server

    - by GhostRider
    When I try to ssh to a server, I'm able to do it, as my id_rsa.pub key is added to the authorized keys on the server. But when I try to deploy my code via Capistrano to the server from my local project folder, the server asks for a password. I'm unable to understand what the issue could be if I'm able to ssh but unable to deploy to the same server.

        $ cap deploy:setup
        "no seed data"
        triggering start callbacks for `deploy:setup'
          * 13:42:18 == Currently executing `multistage:ensure'
        *** Defaulting to `development'
          * 13:42:18 == Currently executing `development'
          * 13:42:18 == Currently executing `deploy:setup'
        triggering before callbacks for `deploy:setup'
          * 13:42:18 == Currently executing `db:configure_mongoid'
          * executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config"
            servers: ["dev1.noob.com", "176.9.24.217"]
        Password:

    Cap script:

        # gem install capistrano capistrano-ext capistrano_colors
        begin; require 'capistrano_colors'; rescue LoadError; end
        require "bundler/capistrano"

        # RVM bootstrap
        # $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
        require 'rvm/capistrano'
        set :rvm_ruby_string, 'ruby-1.9.2-p290'
        set :rvm_type, :user

        # Application setup
        default_run_options[:pty] = true   # allow pseudo-terminals
        ssh_options[:forward_agent] = true # forward SSH keys (this will use your SSH key to get the code from the git repository)
        ssh_options[:port] = 22

        set :ip, "dev1.noob.com"
        set :application, "flyingbird"
        set :repository, "repo-path"
        set :scm, :git
        set :branch, fetch(:branch, "master")
        set :deploy_via, :remote_cache
        set :rails_env, "production"
        set :use_sudo, false
        set :scm_username, "user"
        set :user, "user1"
        set(:database_username) { application }
        set(:production_database) { application + "_production" }
        set(:staging_database) { application + "_staging" }
        set(:development_database) { application + "_development" }

        role :web, ip                    # your HTTP server, Apache/etc
        role :app, ip                    # this may be the same as your web server
        role :db,  ip, :primary => true  # this is where Rails migrations will run

        # Use multi-staging
        require "capistrano/ext/multistage"
        set :stages, ["development", "staging", "production"]
        set :default_stage, rails_env

        before "deploy:setup", "db:configure_mongoid"

        # Uncomment if you use any of these databases
        after "deploy:update_code", "db:symlink_mongoid"
        after "deploy:update_code", "uploads:configure_shared"
        after "uploads:configure_shared", "uploads:symlink"
        after 'deploy:update_code', 'bundler:symlink_bundled_gems'
        after 'deploy:update_code', 'bundler:install'
        after "deploy:update_code", "rvm:trust_rvmrc"

        # Use this to update crontab if you use the 'whenever' gem
        # after "deploy:symlink", "deploy:update_crontab"

        if ARGV.include?("seed_data")
          after "deploy", "db:seed"
        else
          p "no seed data"
        end

        # Custom tasks to handle resque and redis restart
        before "deploy", "deploy:stop_workers"
        after "deploy", "deploy:restart_redis"
        after "deploy", "deploy:start_workers"
        after "deploy", "deploy:cleanup"

        # Create symlink for public uploads
        namespace :uploads do
          task :symlink do
            run <<-CMD
              rm -rf #{release_path}/public/uploads &&
              mkdir -p #{release_path}/public &&
              ln -nfs #{shared_path}/public/uploads #{release_path}/public/uploads
            CMD
          end

          task :configure_shared do
            run "mkdir -p #{shared_path}/public"
            run "mkdir -p #{shared_path}/public/uploads"
          end
        end

        namespace :rvm do
          desc 'Trust rvmrc file'
          task :trust_rvmrc do
            run "rvm rvmrc trust #{current_release}"
          end
        end

        namespace :db do
          desc "Create mongoid.yml in shared path"
          task :configure_mongoid do
            db_config = <<-EOF
        defaults: &defaults
          host: localhost

        production:
          <<: *defaults
          database: #{production_database}

        staging:
          <<: *defaults
          database: #{staging_database}
            EOF
            run "mkdir -p #{shared_path}/config"
            put db_config, "#{shared_path}/config/mongoid.yml"
          end

          desc "Make symlink for mongoid.yml"
          task :symlink_mongoid do
            run "ln -nfs #{shared_path}/config/mongoid.yml #{release_path}/config/mongoid.yml"
          end

          desc "Fill the database with seed data"
          task :seed do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake db:seed"
          end
        end

        namespace :bundler do
          desc "Symlink bundled gems on each release"
          task :symlink_bundled_gems, :roles => :app do
            run "mkdir -p #{shared_path}/bundled_gems"
            run "ln -nfs #{shared_path}/bundled_gems #{release_path}/vendor/bundle"
          end

          desc "Install bundled gems"
          task :install, :roles => :app do
            run "cd #{release_path} && bundle install --deployment"
          end
        end

        namespace :deploy do
          task :start, :roles => :app do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "Restart the app"
          task :restart, :roles => :app do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "Stop the workers"
          task :stop_workers do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:stop_workers"
          end

          desc "Restart Redis server"
          task :restart_redis do
            run "/etc/init.d/redis-server restart"
          end

          desc "Start the workers"
          task :start_workers do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:start_workers"
          end
        end
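    A detail worth checking (a sketch, not a confirmed fix): Capistrano connects through the Net::SSH library, which does not read ~/.ssh/config as completely as the ssh binary does, so a key that works for plain ssh may never be offered during cap deploy. Pointing Capistrano at the key explicitly, and making sure :user matches the account whose authorized_keys actually contains id_rsa.pub, usually eliminates the password prompt:

        # In deploy.rb; the key path is an assumption:
        set :user, "user1"   # must be the account holding your public key
        ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "id_rsa")]
        ssh_options[:auth_methods] = ["publickey"]   # fail fast instead of falling back to a password prompt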

    Read the article

  • What’s the password for my FileVault 2 boot volume?

    - by cbowns
    Apple’s support document for FileVault 2 (a.k.a. “full disk encryption” or “FDE”) has lots of information about enabling FDE and what it means for booting the machine. However, it doesn’t cover one very important thing I’m trying to do: mount the drive in the Recovery HD environment to reinstall OS X on it. The Recovery HD environment asks me for the volume passphrase so it can mount my drive and try to install OS X onto it. If this were an external drive on which I’d manually enabled FDE with diskutil, or an external Time Machine volume, I’d know the passphrase, because those make you pick one (just like a regular login password); but FileVault 2 never asked me for a volume passphrase (I assume it selects one behind the scenes). I’ve tried my main user’s password, but that doesn’t work, and neither does the recovery key set for the volume. Keychain Access doesn’t have anything that I could find. How do I unlock this volume?

    Read the article

  • Can't RDP Into Windows Server After Windows Server Establishes VPN Session

    - by Jennifer Baker
    Hi there. I've set up a Windows 2008 Server in a cloud environment. I am able to RDP to this Windows server ("CloudServer"). When I establish a PPTP VPN connection from the CloudServer back to our office Windows server ("OfficeServer"), my RDP session is dropped and it won't let me RDP back in. The only way I can then RDP to the CloudServer is by using the DHCP IP address issued by the OfficeServer. What do I need to change on the CloudServer? Thanks in advance for your help! Jennifer

    Read the article

  • Error installing php extension OAuth via pecl

    - by PJ
    I'm trying to install the PHP extension OAuth in my local environment. php.net suggests it's super easy: you just run pecl install oauth. I tried this, and here is the output in terminal:

        downloading oauth-1.0.0.tgz ...
        Starting to download oauth-1.0.0.tgz (42,834 bytes)
        ............done: 42,834 bytes
        6 source files, building
        running: phpize
        grep: /usr/include/php/main/php.h: No such file or directory
        grep: /usr/include/php/Zend/zend_modules.h: No such file or directory
        grep: /usr/include/php/Zend/zend_extensions.h: No such file or directory
        Configuring for:
        PHP Api Version:
        Zend Module Api No:
        Zend Extension Api No:
        Cannot find autoconf. Please check your autoconf installation and
        the $PHP_AUTOCONF environment variable. Then, rerun this script.
        ERROR: `phpize' failed

    Any tips on how to fix the errors and install OAuth successfully? I'm on Mac OS X 10.6.3. Thanks!
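    There appear to be two independent failures in that output (a sketch, assuming a stock OS X 10.6 with MacPorts; the autoconf path is an assumption): the missing /usr/include/php headers usually mean Apple's developer tools (Xcode) are not installed, and pecl separately cannot find an autoconf binary. Installing both and retrying looks like:

        # 1. Install Xcode from the OS X install DVD or Apple's developer
        #    site; it provides the /usr/include/php/... headers.

        # 2. Install autoconf and tell pecl where to find it:
        sudo port install autoconf
        export PHP_AUTOCONF=/opt/local/bin/autoconf   # hypothetical MacPorts path

        # 3. Retry the build:
        sudo pecl install oauth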

    Read the article

  • Parallel prologue and epilogue in Grid Engine

    - by ajdecon
    We have a cluster being used to run MPI jobs for a customer. Previously this cluster used Torque as the scheduler, but we are transitioning to Grid Engine 6.2u5 (for some other features). Unfortunately, we are having trouble duplicating some of our maintenance scripts in the Grid Engine environment. In Torque, we have a prologue.parallel script which is used to carry out an automated health-check on the node. If this script returns a fail condition, Torque will helpfully offline the node and re-queue the job to use a different group of nodes. In Grid Engine, however, the queue "prolog" only runs on the head node of the job. We can manually run our prologue script from the startmpi.sh initialization script, for the mpi parallel environment; but I can't figure out how to detect a fail condition and carry out the same "mark offline and requeue" procedure. Any suggestions?
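    One possible translation of the Torque behaviour (a sketch built on Grid Engine's documented prolog exit-code handling; the health-check path is hypothetical and this is untested): SGE keys off the prolog's exit status, where 0 lets the job proceed, 99 asks SGE to reschedule the job, and any other value puts the job and queue instance into an error state. Combining exit 99 with an explicit qmod -d gives "mark offline and requeue":

        #!/bin/sh
        # Hypothetical prolog wrapper; configure in the queue as e.g.
        #   prolog root@/opt/cluster/sbin/prolog-wrapper $queue $host
        # so SGE substitutes the queue name and host as arguments.
        QUEUE_INSTANCE="$1@$2"

        if ! /opt/cluster/sbin/node-healthcheck; then   # hypothetical health check
            qmod -d "$QUEUE_INSTANCE"   # take this queue instance offline
            exit 99                     # ask SGE to requeue the job elsewhere
        fi
        exit 0

    Note that calling qmod requires the prolog to execute under a manager account, hence the root@ prefix in the prolog definition.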

    Read the article

  • NetApp and SQL Server?

    - by Edinor
    Do you have any good or bad experiences to share running SQL Server OLTP Systems on NetApp appliances? I have been working with a small, relatively low-volume cluster with a lower-end NetApp device, and I have found the environment to be generally unstable, at least compared to my experiences with other SANs, iSCSI arrays, and DAS setups. I struggle to believe that RAID DP and WAFL are more than fairy-dust technologies. A solution has been proposed to me that I just need a bigger, better NetApp, with PAM cards and other cool technology I've not heard of, and I feel like I would be better off spending a quarter of that on good direct-attached drives and a beefy server. At the same time, I feel that an Enterprise-class SAN should be something I can count on to be consistently a more stable, better performer than the less expensive solution I might propose. Are you a SQL Server DBA in an OLTP environment and love your NetApp? If you don't like them, why not?

    Read the article

  • Server Administration

    - by Kassem
    Hi everyone, my client asked me for a job description for a system administrator, because I might be assigned this position along with the other guy I'm working with. To be honest, I do not know much about a system administrator's job, but I'm willing to learn. Questions:

    - What are the security requirements of a server?
    - What are the key responsibilities in a system admin's job description?
    - What are some of the day-to-day tasks of a system admin?
    - What is the average monthly salary of a system admin?

    Note: I will be working inside a Windows environment, but your replies do not need to be restricted to Windows. Other software I know will be required: Windows Server 2008, IIS 7.0, MS SQL Server, .NET 4.0 Runtime. Let me know if there are other things I should be aware of as well. Thanks!

    Read the article

  • multiple webapps in tomcat -- what is the optimal architecture?

    - by rvdb
    I am maintaining a growing base of mainly Cocoon-2.1-based web applications [http://cocoon.apache.org/2.1/], deployed in a Tomcat servlet container [http://tomcat.apache.org/], and proxied with an Apache http server [http://httpd.apache.org/docs/2.2/]. I am conceptually struggling with the best way to deploy multiple web applications in Tomcat. Since I'm not a Java programmer and we don't have any sysadmin staff, I have to figure out myself what is the most sensible way to do this. My setup has evolved through two scenarios, and I'm considering a third for maximal separation of the distinct webapps.

    [1] 1 Tomcat instance, 1 Cocoon instance, multiple webapps

        -tomcat
         |_ webapps
            |_ webapp1
            |_ webapp2
            |_ webapp[n]
            |_ WEB-INF (with Cocoon libs)

    This was my first approach: just drop all web applications inside a single Cocoon webapps folder inside a single Tomcat container. This seemed to run fine, and I did not encounter any memory issues. However, it poses a maintainability drawback, as some Cocoon components are subject to updates, which often affect the webapp coding. Hence, updating Cocoon becomes unwieldy: since all webapps share the same pool of Cocoon components, updating one of them would require the code in all web applications to be updated simultaneously. In order to isolate the web applications, I moved to the second scenario.

    [2] 1 Tomcat instance, each webapp in its dedicated Cocoon environment

        -tomcat
         |_ webapps
            |_ webapp1
            |  |_ WEB-INF (with Cocoon libs)
            |_ webapp2
            |  |_ WEB-INF (with Cocoon libs)
            |_ webapp[n]
               |_ WEB-INF (with Cocoon libs)

    This approach separates all webapps into their own Cocoon environment, run inside a single Tomcat container. In theory, this works fine: all webapps can be updated independently. However, it soon results in PermGenSpace errors. It seemed that I could manage the problem by increasing memory allocation for Tomcat, but I realise this isn't a structural solution, and that overloading a single Tomcat in this way is prone to future memory errors. This set me thinking about the third scenario.

    [3] multiple Tomcat instances, each with a single webapp in its dedicated Cocoon environment

        -tomcat
         |_ webapps
            |_ webapp1
               |_ WEB-INF (with Cocoon libs)
        -tomcat
         |_ webapps
            |_ webapp2
               |_ WEB-INF (with Cocoon libs)
        -tomcat
         |_ webapps
            |_ webapp[n]
               |_ WEB-INF (with Cocoon libs)

    I haven't tried this approach, but am thinking of the $CATALINA_BASE variable. A single Tomcat distribution can be instantiated multiple times with different $CATALINA_BASE environments, each pointing to a Cocoon instance with its own webapp (a sketch of this setup follows this question). I wonder whether such an approach could avoid the structural memory-related problems of approach [2], or will the same issues apply? On the other hand, this approach would complicate management of the Apache http frontend, as it will require the AJP connectors of the different Tomcat instances to listen on different ports. Hence, Apache's worker configuration has to be updated and reloaded whenever a new webapp (in its own Tomcat instance) is added, and there seems to be no way to reload worker.properties without restarting the entire Apache http server. Is there perhaps another / more dynamic way of 'modularizing' multiple Tomcat-served webapps, or can one of these scenarios be refined? Any thoughts, suggestions, advice much appreciated. Ron
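    Scenario [3] is a standard pattern, usually run as one shared CATALINA_HOME plus one CATALINA_BASE per instance (a sketch; all paths and port numbers here are assumptions):

        # One-time setup for the first instance:
        mkdir -p /srv/tomcat-webapp1/{conf,logs,temp,webapps,work}
        cp /opt/tomcat/conf/server.xml /opt/tomcat/conf/web.xml /srv/tomcat-webapp1/conf/
        # Edit /srv/tomcat-webapp1/conf/server.xml so the shutdown port and the
        # HTTP/AJP connector ports don't collide with other instances
        # (e.g. 8005/8080/8009 -> 8105/8180/8109).

        # Start the instance:
        export CATALINA_HOME=/opt/tomcat          # shared binaries
        export CATALINA_BASE=/srv/tomcat-webapp1  # per-instance conf/logs/webapps
        export CATALINA_OPTS="-Xmx256m -XX:MaxPermSize=128m"  # per-instance sizing
        $CATALINA_HOME/bin/startup.sh

    Because every instance is its own JVM with its own heap and PermGen, one webapp's memory appetite can no longer starve the others; the trade-off is N JVMs' worth of baseline memory plus the AJP port and worker bookkeeping described above.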

    Read the article
