Search Results

Search found 23387 results on 936 pages for 'ruby mysql'.


  • How to Add Policy-based Audit Compliance to your existing MySQL applications

    - by Rob Young
    As a follow up to an earlier blog on the subject, please join us today at 0900 US PT to learn how to easily add policy-based auditing compliance to your existing MySQL applications.  This brief, informative session will provide an overview of the new MySQL Enterprise Audit plugin and will include a simple, practical step-by-step "how to" approach to get up and running with the new functionality. You can learn more and secure your seat for the presentation here.  Thanks for your continued support of MySQL!

    Read the article

  • how to run mysql drop and create synonym in shell script

    - by bgrif
    I have added this command to a script I am writing and I am running into a issue with it not logging onto mysql and running the commands. How can i fix this and make it run. #! /bin/bash Subject: Please stage the following TFL09143 Locator Bulletin to all TF90 staging environments: # This next section is to go to mysql server and make changes. you can drop and create synonyms truncate a table and insert into a different one. you will be able to verify the counts to the different locations # $ mysql --host=app03-bsi --u "" --p "" "TF90BPS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90BP.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90BP.TFBPS3.BTXSUPB && TRUNCATE TABLE TF90BP.TFBPS3.BTXSUPB SELECT * FROM TF90BP.TFBPS2.BTXSUPB; select count () from TF90BP.TF90.BTXADDR select count() from TF90BPS.TF90.BTXADDR; select count() from TF90BP.TF90.BTXSUPB; select count() from TF90BPS.TF90.BTXSUPB;" $ mysql --host=app03-bsi --u "" --p "" "TF90LMS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90LM.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90LM.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90LM.TFLMS2.BTXADDR;TRUNCATE TABLE TF90LM.TFLMS3.BTXSUPB;INSERT INTO TF90LM.TFLMS3.BTXSUPB SELECT * FROM TF90LM.TFLMS2.BTXSUPB;Verify select count() from TF90LM.TF90.BTXADDR;select count() from TF90LMS.TF90.BTXADDR;select count() from TF90LM.TF90.BTXSUPB;select count() from TF90LMS.TF90.BTXSUPB" $ mysql --host=app03-bsi --u "" --p "" "TF90NCS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90NC.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90NC.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90NC.TFNCS2.BTXADDR; TRUNCATE TABLE TF90NC.TFNCS3.BTXSUPB; INSERT INTO TF90NC.TFNCS3.BTXSUPB SELECT * FROM TF90NC.TFNCS2.BTXSUPB; Verify select count() from TF90NC.TF90.BTXADDR; select count() from TF90NCS.TF90.BTXADDR;select count() from TF90NC.TF90.BTXSUPB;select count() from TF90NCS.TF90.BTXSUPB" $ mysql --host=app03-bsi --u "" --p "" "TF90PVS" -bse "drop synonym TF90.BTXADDR && drop synonym TF90.BTXSUPB && CREATE SYNONYM TF90.BTXADDR FOR TF90PV.TFBPS2.BTXADDR && CREATE SYNONYM TF90.BTXSUPB FOR TF90PV.TFBPS3.BTXSUPB; TRUNCATE TABLE TF90PV.TFPVS2.BTXADDR;TRUNCATE TABLE TF90PV.TFPVS3.BTXSUPB;INSERT INTO TF90PV.TFPVS3.BTXSUPB SELECT * FROM TF90PV.TFPVS2.BTXSUPB;Verify select count() from TF90PV.TF90.BTXADDR;select count() from TF90PVS.TF90.BTXADDR;select count() from TF90PV.TF90.BTXSUPB;select count() from TF90PVS.TF90.BTXSUPB" TFL09143 Staging cd \ntsrv\common\To\IT-CERT-TEST\TFL09143 #change to mapped network drive cp -p TFL09143.pkg /d:/tf90/code_stg && /tf90bp/code_stg && /tf90lm/code_stg && /tf90pv/code_stg # Copies the package from the networked folder and then copies to the location(s) needed.# InvalidInput="true" if [ $# -eq 0 ] ; then echo "This script sets up TF90 Staging" echo -n "Which production do you want to run? 
(RB/TaxLocator/Cyclic)" read ProductionDistro else ProductionDistro="$1" fi while [ "$InvalidInput" = "true" ] do if [ "$ProductionDistro" = "RB" -o "$ProductionDistro" = "TaxLocator" -o "$ProductionDistro" = "Cyclic" ] ; then InvalidInput="false" break else echo "You have entered an error" echo "You must type RB or TaxLocator or Cyclic" echo "you typed $ProductionDistro" echo "This script sets up TF90 Staging" read ProductionDistro fi done InvalidInput="true" if [ $# -eq 0 ] ; then echo "This script sets up RB TF90 Staging" echo -n "Which Element do you want to run? (TF90/TF90BP/TF90LM/TF90PV/ALL)" read ElementDistro else ElementDistro="$1" fi while [ "$InvalidInput" = "true" ] do if [ "$ElementDistro" = "TF90" -o "$ElementDistro" = "TF90BP" -o "$ElementDistro" = "TF90LM" -o "$ElementDistro" = "TF90PV" -o "$ElementDistro" = "ALL" ] ; then InvalidInput="false" break else echo "You have entered an error" echo "You must type TF90 or TF90BP or TF90LM or TF90PV" echo "you typed $ElementDistro" echo "This script sets up TF90 Staging" read ElementDistro fi done if [ "$ElementDistro" = "TF90" ] ; then cd /d/tf90/code_stg vim TFL09143.pkg export var=TF90_CONNECT_STRING=DSN=TF90NCS;export Description=TF90NCS;export Trusted_Connection=Yes;export WSID=APP03- BSI;export DATABASE=TF90NCS; export DATASET=DEFAULT pkgintall -l -v ../TFL09143.pkg fi if [ "$ElementDistro" = "$TF90BP" ] ; then cd /d/tf90bp/code_stg vim TFL09143.pkg export TF90_CONNECT_STRING=DSN=TF90BPS;export Description=TF90BPS;export Trusted_Connection=Yes;export WSID=APP03- BSI;export DATABASE=TF90BPS; start tfloader -l –v ../TFL09143.pkg fi if [ "$ElementDistro" = "$TF90LM" ] ; then cd /d/tf90lm/code_stg vim TFL09143.pkg export TF90_CONNECT_STRING=DSN=TF90LMS;export Description=TF90LMS;export Trusted_Connection=Yes;export WSID=APP03- BSI;export DATABASE=TF90LMS; start tfloader -l -v ../TFL09143.pkg fi if [ "$ElementDistro" = "TF90PV" ] ; then cd /d/tf90pv/code_stg vim TFL09143.pkg export TF90_CONNECT_STRING=DSN=TF90PVS;Description=TF90PVS;Trusted_Connection=Yes;WSID=APP03- BSI;DATABASE=TF90PVS; start tfloader -l –v ../TFL09143.pkg fi exit 0
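
    For illustration, a minimal sketch of how the mysql client is normally invoked from a shell script: the client does not accept --u ""/--p "" (the short flags are -u and -p, the long ones --user= and --password=), and statements passed to it are separated by semicolons, not &&. Note also that CREATE SYNONYM and three-part names such as TF90BP.TFBPS2.BTXADDR are not MySQL syntax at all (they look like Sybase/SQL Server), so those statements would need rethinking regardless of the shell fix. The credentials and the unqualified table names below are placeholders, not taken from the original script:

        #!/bin/bash
        # Hypothetical credentials; do not embed empty --u ""/--p "" flags.
        DB_USER="deploy"
        DB_PASS="secret"

        # -B (batch) and -N (skip column names) make the output easy to capture in a script.
        mysql --host=app03-bsi --user="$DB_USER" --password="$DB_PASS" -B -N TF90BPS \
              -e "SELECT COUNT(*) FROM BTXADDR; SELECT COUNT(*) FROM BTXSUPB;"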

    Read the article

  • mysql questions for beginners

    - by ankhseeker
    OK, I have a few questions regarding MySQL. I am currently running Ubuntu 12.04.4 LTS (command-line only), and I am looking for a database that I can use. I am confused at this point because I am uninformed. Is MySQL just one database on the server, or can it contain several databases? What programs do I use to access it on the server, or is it VT-100-style terminal access? I understand that MySQL comes with LAMP, or with Ubuntu; I am thinking it is already installed but am not sure how to access it, though that is another question for later. Outside of the man pages and the Ubuntu manual, is there a site covering its setup and use? Thanks!
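
    For orientation: a single MySQL server instance hosts many databases, and on a command-line-only Ubuntu box you talk to it through the mysql terminal client, so no special terminal type is needed. A quick hedged sketch of checking for, installing, and connecting to it on Ubuntu 12.04 (it is not preinstalled on a stock server image):

        # is the server package installed and running?
        dpkg -l | grep mysql-server
        sudo service mysql status

        # install it if missing (this also prompts for a root password)
        sudo apt-get install mysql-server

        # connect from the terminal; one server holds many databases
        mysql -u root -p
        mysql> SHOW DATABASES;
        mysql> CREATE DATABASE myapp;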

    Read the article

  • How to move a ruby on rails application to a new server

    - by ManiacZX
    I have a rails app on an old Ubuntu server I need to move onto a new machine. I haven't worked with ruby on rails so I don't really know anything about the structure of the app. I want to load this onto an Ubuntu 8.04 AMI on Amazon EC2 and am looking for any information regarding the migration process such as: Do I copy over the entire folder defined as the application root in the mongrel config (for ex: /u/apps/myapp/current) or just certain folders? Am I looking for trouble if I go with the latest versions of ruby and the various gems? Any general gotchas to look out for in the process. Current server information: root@webnode001:/# cat /proc/version Linux version 2.6.15-27-server (buildd@terranova) (gcc version 4.0.3 (Ubuntu 4.0.3-1ubuntu5)) #1 SMP Fri Dec 8 18:43:54 UTC 2006 root@webnode001:/# rails -v Rails 1.2.3 root@webnode001:/# mongrel_rails cluster::configure --version Version 1.0.1 root@webnode001:/# gem -v 0.9.0 root@webnode001:/# gem list -l *** LOCAL GEMS *** actionmailer (1.3.3, 1.2.5) Service layer for easy email delivery and testing. actionpack (1.13.3, 1.12.5) Web-flow and rendering framework putting the VC in MVC. actionwebservice (1.2.3, 1.1.6) Web service support for Action Pack. activerecord (1.15.3, 1.15.2, 1.14.4) Implements the ActiveRecord pattern for ORM. activesupport (1.4.2, 1.4.1, 1.3.1) Support and utility classes used by the Rails framework. cgi_multipart_eof_fix (2.1) Fix an exploitable bug in CGI multipart parsing which affects Ruby <= 1.8.5 when multipart boundary attribute contains a non-halting regular expression string. daemons (1.0.7, 1.0.5, 1.0.4, 1.0.2) A toolkit to create and control daemons in different ways eventmachine (0.7.2, 0.7.0) Ruby/EventMachine socket engine library fastercsv (1.2.0, 1.1.0) FasterCSV is CSV, but faster, smaller, and cleaner. fastthread (1.0) Optimized replacement for thread.rb primitives ferret (0.11.4) Ruby indexing library. gem_plugin (0.2.2, 0.2.1) A plugin system based only on rubygems that uses dependencies only mongrel (1.0.1, 0.3.13.4) A small fast HTTP library and server that runs Rails, Camping, Nitro and Iowa apps. mongrel_cluster (0.2.1) Mongrel plugin that provides commands and Capistrano tasks for managing multiple Mongrel processes. mysql (2.7) MySQL/Ruby provides the same functions for Ruby programs that the MySQL C API provides for C programs. piston (1.3.3) Piston is a utility that enables merge tracking of remote repositories. rails (1.2.3, 1.1.6) Web-application framework with template engine, control-flow layer, and ORM. rake (0.7.3, 0.7.1) Ruby based make-like utility. sources (0.0.1) This package provides download sources for remote gem installation swiftiply (0.5.1) A fast clustering proxy for web applications.
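
    As a rough, hedged sketch of the usual migration steps (database name and destination host below are placeholders; /u/apps/myapp/current comes from the question and is often a Capistrano symlink, so copy the directory it points at): copy the whole application root plus a dump of the database, and on the new machine install the same old gem versions rather than the latest ones, since a Rails 1.2.3 app generally will not run on much newer Rails or Ruby without work.

        # old server: archive the app and dump the database
        tar czhf myapp.tar.gz /u/apps/myapp/current
        mysqldump -u root -p myapp_production > myapp_production.sql
        scp myapp.tar.gz myapp_production.sql user@new-server:~/

        # new server: unpack, pin the old gem versions, load the data
        tar xzf myapp.tar.gz
        gem install rails -v 1.2.3
        gem install mongrel mongrel_cluster mysql
        mysql -u root -p -e "CREATE DATABASE myapp_production"
        mysql -u root -p myapp_production < myapp_production.sql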

    Read the article

  • Cannot Start MySQL Server on Fresh MAMP Install

    - by alexpelan
    I'm using Mac OS X 10.6.2 on my Macbook Pro. I can get the apache server to start, but not the mysql server, on both the default apache and default MAMP ports. When I try to go to my start page, I get the message "Error: Could not connect to MySQL server!" . Here's what's in my mysql error log: 00513 02:00:07 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended 100513 02:00:16 mysqld_safe Starting mysqld daemon with databases from /Applications/MAMP/db/mysql 100513 2:00:16 [Warning] The syntax '--log_slow_queries' is deprecated and will be removed in a future release. Please use '--slow_query_log'/'--slow_query_log_file' instead. 100513 2:00:16 [Warning] You have forced lower_case_table_names to 0 through a command-line option, even though your file system '/Applications/MAMP/db/mysql/' is case insensitive. This means that you can corrupt a MyISAM table by accessing it with different cases. You should consider changing lower_case_table_names to 1 or 2 100513 2:00:16 [Warning] One can only use the --user switch if running as root 100513 2:00:16 [Note] Plugin 'FEDERATED' is disabled. 100513 2:00:16 [Note] Plugin 'ndbcluster' is disabled. InnoDB: Error: log file /usr/local/mysql/data/ib_logfile0 is of different size 0 5242880 bytes InnoDB: than specified in the .cnf file 0 16777216 bytes! 100513 2:00:16 [ERROR] Plugin 'InnoDB' init function returned error. 100513 2:00:16 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 100513 2:00:16 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb' 100513 2:00:16 [ERROR] Aborting 100513 2:00:16 [Note] /Applications/MAMP/Library/libexec/mysqld: Shutdown complete 100513 02:00:16 mysqld_safe mysqld from pid file /Applications/MAMP/tmp/mysql/mysql.pid ended A couple of things: 1) There are a bunch of different .cnf files that come with MAMP (my-huge, my-medium, etc.)...how can I tell which one is actually being used? 2) I deleted the ib_logfile0 and ib_logfile1 as recommended by another post on serverfault, and then ended up with more errors: 100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile0 did not exist: new to be created InnoDB: Setting log file /usr/local/mysql/data/ib_logfile0 size to 16 MB InnoDB: Database physically writes the file full: wait... 100519 16:01:30 InnoDB: Log file /usr/local/mysql/data/ib_logfile1 did not exist: new to be created InnoDB: Setting log file /usr/local/mysql/data/ib_logfile1 size to 16 MB InnoDB: Database physically writes the file full: wait... InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 100519 16:01:31 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 100519 16:01:31 InnoDB: Started; log sequence number 0 44556 100519 16:01:31 [ERROR] /Applications/MAMP/Library/libexec/mysqld: unknown option '--skip-bdb' 100519 16:01:31 [ERROR] Aborting And then I got this the next time I tried to run it: InnoDB: Unable to lock /usr/local/mysql/data/ibdata1, error: 35 InnoDB: Check that you do not already have another mysqld process InnoDB: using the same InnoDB data or log files. Sorry that this is a lot of information, but I don't want to leave anything out. Thanks.
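
    Two things stand out in the log above: the InnoDB log file size configured in whichever .cnf is active (16 MB) does not match the existing ib_logfile0/1 on disk (5 MB), and skip-bdb is an option MySQL 5.1 no longer understands, which is what finally aborts startup. A hedged sketch of how to find the active option file and what to change in it (paths taken from the error log; the exact option lines in your file may differ):

        # ask this mysqld which option files it reads, in order
        /Applications/MAMP/Library/libexec/mysqld --verbose --help | grep -A1 "Default options"

        # in that file:
        #   [mysqld]
        #   # skip-bdb                      <- delete or comment out; unknown to MySQL 5.1
        #   innodb_log_file_size = 5M       <- match the existing ib_logfile0/1, or
        #                                      stop MySQL cleanly, delete ib_logfile0/1,
        #                                      and let InnoDB recreate them at the new size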

    Read the article

  • Severe mysqldump performance degradation using Centos Linux, 8GB PAE and MySQL 5.0.77

    - by Duncan Harris
    We use MySQL 5.0.77 on CentOS 5.5 on VMWare: Linux dev.ic.soschildrensvillages.org.uk 2.6.18-194.11.4.el5PAE #1 SMP Tue Sep 21 05:48:23 EDT 2010 i686 i686 i386 GNU/Linux We have recently upgraded from 4GB RAM to 8GB. When we did this the time of our mysqldump overnight backup jumped from under 10 minutes to over 2 hours. It also caused unresponsiveness on our plone based web site due to database load. The dump is using the optimized mysqldump format and is spooled directly through a socket to another server. Any ideas on what we could do to fix gratefully appreciated. Would a MySQL upgrade help? Anything we can do to MySQL config? Anything we can do to Linux config? Or do we have to add another server or go to 64-bit? We ran a previous (non-virtual) server on 6GB PAE and didn't notice a similar issue. This was on same MySQL version, but Centos 4.4. Server config file: [mysqld] port=3307 socket=/tmp/mysql_live.sock wait_timeout=31536000 interactive_timeout=31536000 datadir=/var/mysql/live/data user=mysql max_connections = 200 max_allowed_packet = 64M table_cache = 2048 binlog_cache_size = 128K max_heap_table_size = 32M sort_buffer_size = 2M join_buffer_size = 2M lower_case_table_names = 1 innodb_data_file_path = ibdata1:10M:autoextend innodb_buffer_pool_size=1G innodb_log_file_size=300M innodb_log_buffer_size=8M innodb_flush_log_at_trx_commit=1 innodb_file_per_table [mysqldump] # Do not buffer the whole result set in memory before writing it to # file. Required for dumping very large tables quick max_allowed_packet = 64M [mysqld_safe] # Increase the amount of open files allowed per process. Warning: Make # sure you have set the global system limit high enough! The high value # is required for a large number of opened tables open-files-limit = 8192 Server variables: mysql> show variables; +---------------------------------+------------------------------------------------------------------+ | Variable_name | Value | +---------------------------------+------------------------------------------------------------------+ | auto_increment_increment | 1 | | auto_increment_offset | 1 | | automatic_sp_privileges | ON | | back_log | 50 | | basedir | /usr/local/mysql-5.0.77-linux-i686-glibc23/ | | binlog_cache_size | 131072 | | bulk_insert_buffer_size | 8388608 | | character_set_client | latin1 | | character_set_connection | latin1 | | character_set_database | latin1 | | character_set_filesystem | binary | | character_set_results | latin1 | | character_set_server | latin1 | | character_set_system | utf8 | | character_sets_dir | /usr/local/mysql-5.0.77-linux-i686-glibc23/share/mysql/charsets/ | | collation_connection | latin1_swedish_ci | | collation_database | latin1_swedish_ci | | collation_server | latin1_swedish_ci | | completion_type | 0 | | concurrent_insert | 1 | | connect_timeout | 10 | | datadir | /var/mysql/live/data/ | | date_format | %Y-%m-%d | | datetime_format | %Y-%m-%d %H:%i:%s | | default_week_format | 0 | | delay_key_write | ON | | delayed_insert_limit | 100 | | delayed_insert_timeout | 300 | | delayed_queue_size | 1000 | | div_precision_increment | 4 | | keep_files_on_create | OFF | | engine_condition_pushdown | OFF | | expire_logs_days | 0 | | flush | OFF | | flush_time | 0 | | ft_boolean_syntax | + -><()~*:""&| | | ft_max_word_len | 84 | | ft_min_word_len | 4 | | ft_query_expansion_limit | 20 | | ft_stopword_file | (built-in) | | group_concat_max_len | 1024 | | have_archive | YES | | have_bdb | NO | | have_blackhole_engine | YES | | have_compress | YES | | have_crypt | YES | | 
have_csv | YES | | have_dynamic_loading | YES | | have_example_engine | NO | | have_federated_engine | YES | | have_geometry | YES | | have_innodb | YES | | have_isam | NO | | have_merge_engine | YES | | have_ndbcluster | DISABLED | | have_openssl | DISABLED | | have_ssl | DISABLED | | have_query_cache | YES | | have_raid | NO | | have_rtree_keys | YES | | have_symlink | YES | | hostname | app.ic.soschildrensvillages.org.uk | | init_connect | | | init_file | | | init_slave | | | innodb_additional_mem_pool_size | 1048576 | | innodb_autoextend_increment | 8 | | innodb_buffer_pool_awe_mem_mb | 0 | | innodb_buffer_pool_size | 1073741824 | | innodb_checksums | ON | | innodb_commit_concurrency | 0 | | innodb_concurrency_tickets | 500 | | innodb_data_file_path | ibdata1:10M:autoextend | | innodb_data_home_dir | | | innodb_adaptive_hash_index | ON | | innodb_doublewrite | ON | | innodb_fast_shutdown | 1 | | innodb_file_io_threads | 4 | | innodb_file_per_table | ON | | innodb_flush_log_at_trx_commit | 1 | | innodb_flush_method | | | innodb_force_recovery | 0 | | innodb_lock_wait_timeout | 50 | | innodb_locks_unsafe_for_binlog | OFF | | innodb_log_arch_dir | | | innodb_log_archive | OFF | | innodb_log_buffer_size | 8388608 | | innodb_log_file_size | 314572800 | | innodb_log_files_in_group | 2 | | innodb_log_group_home_dir | ./ | | innodb_max_dirty_pages_pct | 90 | | innodb_max_purge_lag | 0 | | innodb_mirrored_log_groups | 1 | | innodb_open_files | 300 | | innodb_rollback_on_timeout | OFF | | innodb_support_xa | ON | | innodb_sync_spin_loops | 20 | | innodb_table_locks | ON | | innodb_thread_concurrency | 8 | | innodb_thread_sleep_delay | 10000 | | interactive_timeout | 31536000 | | join_buffer_size | 2097152 | | key_buffer_size | 8384512 | | key_cache_age_threshold | 300 | | key_cache_block_size | 1024 | | key_cache_division_limit | 100 | | language | /usr/local/mysql-5.0.77-linux-i686-glibc23/share/mysql/english/ | | large_files_support | ON | | large_page_size | 0 | | large_pages | OFF | | lc_time_names | en_US | | license | GPL | | local_infile | ON | | locked_in_memory | OFF | | log | OFF | | log_bin | OFF | | log_bin_trust_function_creators | OFF | | log_error | | | log_queries_not_using_indexes | OFF | | log_slave_updates | OFF | | log_slow_queries | OFF | | log_warnings | 1 | | long_query_time | 10 | | low_priority_updates | OFF | | lower_case_file_system | OFF | | lower_case_table_names | 1 | | max_allowed_packet | 67108864 | | max_binlog_cache_size | 4294963200 | | max_binlog_size | 1073741824 | | max_connect_errors | 10 | | max_connections | 200 | | max_delayed_threads | 20 | | max_error_count | 64 | | max_heap_table_size | 33554432 | | max_insert_delayed_threads | 20 | | max_join_size | 18446744073709551615 | | max_length_for_sort_data | 1024 | | max_prepared_stmt_count | 16382 | | max_relay_log_size | 0 | | max_seeks_for_key | 4294967295 | | max_sort_length | 1024 | | max_sp_recursion_depth | 0 | | max_tmp_tables | 32 | | max_user_connections | 0 | | max_write_lock_count | 4294967295 | | multi_range_count | 256 | | myisam_data_pointer_size | 6 | | myisam_max_sort_file_size | 2146435072 | | myisam_recover_options | OFF | | myisam_repair_threads | 1 | | myisam_sort_buffer_size | 8388608 | | myisam_stats_method | nulls_unequal | | ndb_autoincrement_prefetch_sz | 1 | | ndb_force_send | ON | | ndb_use_exact_count | ON | | ndb_use_transactions | ON | | ndb_cache_check_time | 0 | | ndb_connectstring | | | net_buffer_length | 16384 | | net_read_timeout | 30 | | net_retry_count | 10 | | 
net_write_timeout | 60 | | new | OFF | | old_passwords | OFF | | open_files_limit | 8192 | | optimizer_prune_level | 1 | | optimizer_search_depth | 62 | | pid_file | /var/mysql/live/mysqld.pid | | plugin_dir | | | port | 3307 | | preload_buffer_size | 32768 | | profiling | OFF | | profiling_history_size | 15 | | protocol_version | 10 | | query_alloc_block_size | 8192 | | query_cache_limit | 1048576 | | query_cache_min_res_unit | 4096 | | query_cache_size | 0 | | query_cache_type | ON | | query_cache_wlock_invalidate | OFF | | query_prealloc_size | 8192 | | range_alloc_block_size | 4096 | | read_buffer_size | 131072 | | read_only | OFF | | read_rnd_buffer_size | 262144 | | relay_log | | | relay_log_index | | | relay_log_info_file | relay-log.info | | relay_log_purge | ON | | relay_log_space_limit | 0 | | rpl_recovery_rank | 0 | | secure_auth | OFF | | secure_file_priv | | | server_id | 0 | | skip_external_locking | ON | | skip_networking | OFF | | skip_show_database | OFF | | slave_compressed_protocol | OFF | | slave_load_tmpdir | /tmp/ | | slave_net_timeout | 3600 | | slave_skip_errors | OFF | | slave_transaction_retries | 10 | | slow_launch_time | 2 | | socket | /tmp/mysql_live.sock | | sort_buffer_size | 2097152 | | sql_big_selects | ON | | sql_mode | | | sql_notes | ON | | sql_warnings | OFF | | ssl_ca | | | ssl_capath | | | ssl_cert | | | ssl_cipher | | | ssl_key | | | storage_engine | MyISAM | | sync_binlog | 0 | | sync_frm | ON | | system_time_zone | GMT | | table_cache | 2048 | | table_lock_wait_timeout | 50 | | table_type | MyISAM | | thread_cache_size | 0 | | thread_stack | 196608 | | time_format | %H:%i:%s | | time_zone | SYSTEM | | timed_mutexes | OFF | | tmp_table_size | 33554432 | | tmpdir | /tmp/ | | transaction_alloc_block_size | 8192 | | transaction_prealloc_size | 4096 | | tx_isolation | REPEATABLE-READ | | updatable_views_with_limit | YES | | version | 5.0.77 | | version_comment | MySQL Community Server (GPL) | | version_compile_machine | i686 | | version_compile_os | pc-linux-gnu | | wait_timeout | 31536000 | +---------------------------------+------------------------------------------------------------------+ 237 rows in set (0.00 sec)
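
    Not a definitive diagnosis, but two things are worth ruling out here: mysqldump taking long table locks on the InnoDB tables the Plone site uses, and the box swapping during the dump after the memory change. A hedged sketch (backup path is a placeholder; the socket is from the config above); --single-transaction gives a consistent InnoDB snapshot without holding locks for the whole dump, and --quick streams rows instead of buffering them, which the [mysqldump] "quick" option already requests:

        mysqldump --single-transaction --quick --socket=/tmp/mysql_live.sock \
            --all-databases | gzip > /backup/live-$(date +%F).sql.gz

        # watch for swapping / memory pressure while the dump runs
        vmstat 5
        free -m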

    Read the article

  • Corrupted MySQL table causes crash in mysql.h (C++)

    - by Francesco
    I've created a very simple MySQL class in C++, but when MySQL crashes and table indexes become corrupted, all my C++ programs crash too, because they seem unable to recognize the corrupted table and let me handle the issue: Q_RES = mysql_real_query(MY_mysql, tmp_query.c_str(), (unsigned int) tmp_query.size()); if (Q_RES != 0) { if (Q_RES == CR_COMMANDS_OUT_OF_SYNC) cout << "errorquery : CR_COMMANDS_OUT_OF_SYNC " << endl; if (Q_RES == CR_SERVER_GONE_ERROR) cout << "errorquery : CR_SERVER_GONE_ERROR " << endl; if (Q_RES == CR_SERVER_LOST) cout << "errorquery : CR_SERVER_LOST " << endl; LAST_ERROR = mysql_error(MY_mysql); if (n_retrycount < n_retry_limit) { // RETRY! n_retrycount++; sleep(1); cout << "SLEEP - query retry! " << endl; ping(); return select_sql(tmp_query); } return false; } MY_result = mysql_store_result(MY_mysql); B_stored_results = true; cout << "b8" << endl; LAST_affected_rows = (mysql_num_rows(MY_result) + 1); // could return -1 cout << "b8-1" << endl; The program terminates with a "segmentation fault" after printing "b8" and before "b8-1", and Q_RES reports no error even when the table is corrupted. I would like to know whether there is a way to recognize that the table has problems, so that I can then run a MySQL repair or check. Thanks, Francesco
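
    A likely culprit, shown as a sketch against the same member variables used in the question rather than as the original class: mysql_store_result() returns NULL both when the statement produces no result set and on errors such as a crashed MyISAM index, and calling mysql_num_rows() on that NULL pointer is what segfaults between "b8" and "b8-1". Checking the result and mysql_errno() exposes the corruption so the caller can run CHECK TABLE / REPAIR TABLE:

        MY_result = mysql_store_result(MY_mysql);
        if (MY_result == NULL) {
            if (mysql_errno(MY_mysql) != 0) {
                // e.g. "table is marked as crashed" -- surface it instead of crashing
                LAST_ERROR = mysql_error(MY_mysql);
                cout << "store_result failed: " << LAST_ERROR << endl;
                return false;   // caller can now issue CHECK TABLE / REPAIR TABLE
            }
            // no error: the statement simply returned no result set
        } else {
            B_stored_results = true;
            LAST_affected_rows = mysql_num_rows(MY_result);
        }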

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (User ID, Work ID, Machine ID, Start and End Time columns in the first table below) associated with time and production quantity data (Output and Time columns in the first table below) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate time data for another type of analysis. Our current data table design:

    +---------+---------+------------+---------------------+---------------------+--------+------+
    | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
    +---------+---------+------------+---------------------+---------------------+--------+------+
    | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 | 2120   | 930  |
    +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation we would like to do is to transform the table content to a granularity of minutes, rather than the current production-event ("Event Start Time" and "Event End Time") granularity. The resulting reprocessing of existing table rows would look like:

    +---------+---------+------------+---------------------+--------+
    | User ID | Work ID | Machine ID | Production Minute   | Output |
    +---------+---------+------------+---------------------+--------+
    | 080025  | ABC123  | M01        | 2010-01-24 16:19    | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:20    | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:21    | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:22    | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:23    | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:24    | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:25    | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:26    | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:27    | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:28    | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:29    | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:30    | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:31    | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:32    | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:33    | 133    |
    | 080025  | ABC123  | M01        | 2010-01-24 16:34    | 133    |
    +---------+---------+------------+---------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and change the granularity to minutes, eliminating the redundant (Event End Time, Time) columns while doing so. It assumes a constant rate of production and divides output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of an INSERT INTO ... SELECT construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
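
    This can be done entirely in MySQL if a small helper "numbers" table is available. A sketch under those assumptions: minutes_seq(n) is pre-filled with 0, 1, 2, ... up to the longest event in minutes, and the source/target table and column names (events, events_by_minute, event_start_time, ...) are invented stand-ins for the real schema. Each event row is joined to one n per elapsed minute, which reproduces the 16 rows of 133 in the example above (2120 / 16 rounded):

        INSERT INTO events_by_minute (user_id, work_id, machine_id, production_minute, output)
        SELECT  e.user_id,
                e.work_id,
                e.machine_id,
                DATE_FORMAT(e.event_start_time + INTERVAL m.n MINUTE, '%Y-%m-%d %H:%i'),
                ROUND(e.output /
                      (TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time) + 1))
        FROM    events AS e
        JOIN    minutes_seq AS m
                ON m.n <= TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time);

    The same join handles hundreds of machines at once, since every event row fans out independently.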

    Read the article

  • random data using php & mysql

    - by Prakash
    I have a MySQL database structure like below: CREATE TABLE test ( id int(11) NOT NULL auto_increment, title text NULL, tags text NULL, PRIMARY KEY (id) ); The data in the tags field is stored as comma-separated text, like html,php,mysql,website,html etc. Now I need to create an array that contains around 50 randomly selected tags from random records. Currently I am using RAND() to select 15 random records from the database and then holding all the tags from those 15 records in an array. Then I am using array_rand() to randomize the array and select only 50 random tags. $query=mysql_query("select * from test order by id asc, RAND() limit 15"); $tags=""; while ($eachData=mysql_fetch_array($query)) { $additionalTags=$eachData['tags']; if ($tags=="") { $tags.=$additionalTags; } else { $tags.=$tags.",".$additionalTags; } } $tags=explode(",", $tags); $newTags=array(); foreach ($tags as $tag) { $tag=trim($tag); if ($tag!="") { if (!in_array($tag, $newTags)) { $newTags[]=$tag; } } } $random_newTags=array_rand($newTags, 50); Now I have a huge number of records in the database, and because of that RAND() is performing very slowly and sometimes doesn't work. So can anyone let me know how to handle this situation correctly so that my page will work normally?
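
    Two observations, offered as a hedged sketch rather than a definitive fix. First, ORDER BY id ASC, RAND() never actually randomizes anything: id is unique, so the RAND() term is a tiebreaker that is never used and the same first 15 rows come back every time. Second, ORDER BY RAND() on its own sorts the whole table; on a large table it is usually cheaper to start from a random id and take a contiguous block, then shuffle the collected tags in PHP as the code already does:

        -- pick 15 rows starting at a random point in the id range (table/columns from the question)
        SELECT t.id, t.tags
        FROM test AS t
        JOIN (SELECT FLOOR(RAND() * (SELECT MAX(id) FROM test)) AS rid) AS r
          ON t.id >= r.rid
        ORDER BY t.id
        LIMIT 15;

    The trade-off is that the 15 rows are consecutive rather than independently random, which is usually acceptable when the goal is just a pool of tags to sample from.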

    Read the article

  • Mysql::Error: Duplicate entry

    - by Shaliko
    Hi, I have a model: class Gift < ActiveRecord::Base validates_uniqueness_of :giver_id, :scope => :account_id end and an index: add_index(:gifts, [:account_id, :giver_id], :uniq => true) Action: def create @gift = Gift.new(params[:gift]) if @gift.save ... else ... end end In "production" mode, I sometimes get an error: ActiveRecord::StatementInvalid: Mysql::Error: Duplicate entry '122394471958-50301499' for key 'index_gifts_on_account_id_and_user_id' What is the problem?
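
    The usual explanation is a race: validates_uniqueness_of runs a SELECT before the INSERT, so two concurrent requests can both pass validation and then collide on the unique index, which is exactly when the database-level error surfaces. Keeping the index and rescuing the error is the common pattern; a sketch (flash message and redirect target are placeholders, and the add_index option is normally spelled :unique => true):

        def create
          @gift = Gift.new(params[:gift])
          if @gift.save
            # ... success path
          else
            # ... validation failure path
          end
        rescue ActiveRecord::RecordNotUnique   # Rails 3+; on Rails 2.x this arrives as
                                               # ActiveRecord::StatementInvalid, as in the question
          # another request created the same (account_id, giver_id) pair first
          flash[:error] = "This gift has already been recorded."
          redirect_to gifts_path
        end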

    Read the article

  • Simulating MySQL's ORDER BY FIELD() in Postgresql

    - by Peer Allan
    Hello all, just trying out PostgreSQL for the first time, coming from MySQL. In our Rails application we have a couple of locations with SQL like so: SELECT * FROM `currency_codes` ORDER BY FIELD(code, 'GBP', 'EUR', 'BBD', 'AUD', 'CAD', 'USD') DESC, name ASC It didn't take long to discover that this is not supported/allowed in PostgreSQL. Does anyone know how to simulate this behaviour in Postgres, or do we have to pull the sorting out into the code? Thanks, Peer
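
    Postgres has no FIELD(), but a CASE expression reproduces the same ordering, including FIELD()'s behaviour of treating values not in the list as 0 so they sort last under DESC. Table and column names below are taken from the question:

        SELECT *
        FROM currency_codes
        ORDER BY CASE code
                   WHEN 'GBP' THEN 1
                   WHEN 'EUR' THEN 2
                   WHEN 'BBD' THEN 3
                   WHEN 'AUD' THEN 4
                   WHEN 'CAD' THEN 5
                   WHEN 'USD' THEN 6
                   ELSE 0
                 END DESC,
                 name ASC;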

    Read the article

  • How to validate a Cyrillic email in Rails 3.1?

    - by iKeler
    Let's say I had the email address like putin-crab@?????????.?? How to validate that address in rails 3.1? My Model(i use Mongoid): #encoding: utf-8 class User include Mongoid::Document field :email, :type => String validates :email, :presence => true, :format => { :with => RFC822::EMAIL } end For validations reqexp i use gem https://github.com/dim/rfc-822 in rails console (normal email): ruby-1.9.2-p290 :001 > usr = User.new( :email => "[email protected]" ) => #<User _id: 4ec627cf4934db7e4d000001, _type: nil, email: "[email protected]"> ruby-1.9.2-p290 :002 > usr.valid? => true in rails console (fu@#ing email): ruby-1.9.2-p290 :003 > usr = User.new( :email => "putin-crab@?????????.??" ) => #<User _id: 4ec627f44934db7e4d000002, _type: nil, email: "putin-crab@?????????.??"> ruby-1.9.2-p290 :004 > usr.valid? Encoding::CompatibilityError: incompatible encoding regexp match (ASCII-8BIT regexp with UTF-8 string) from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations/format.rb:9:in `=~' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations/format.rb:9:in `!~' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations/format.rb:9:in `validate_each' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validator.rb:153:in `block in validate' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validator.rb:150:in `each' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validator.rb:150:in `validate' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activesupport-3.1.1/lib/active_support/callbacks.rb:302:in `_callback_before_13' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activesupport-3.1.1/lib/active_support/callbacks.rb:404:in `_run_validate_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activesupport-3.1.1/lib/active_support/callbacks.rb:81:in `run_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:42:in `block in run_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:67:in `call' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:67:in `run_cascading_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:41:in `run_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations.rb:212:in `run_validations!' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations/callbacks.rb:53:in `block in run_validations!' 
from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activesupport-3.1.1/lib/active_support/callbacks.rb:390:in `_run_validation_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activesupport-3.1.1/lib/active_support/callbacks.rb:81:in `run_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:42:in `block in run_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:67:in `call' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:67:in `run_cascading_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:41:in `run_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations/callbacks.rb:53:in `run_validations!' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations.rb:179:in `valid?' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/validations.rb:70:in `valid?' from (irb):4 from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/railties-3.1.1/lib/rails/commands/console.rb:45:in `start' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/railties-3.1.1/lib/rails/commands/console.rb:8:in `start' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/railties-3.1.1/lib/rails/commands.rb:40:in `<top (required)>' from script/rails:6:in `require'
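
    What the backtrace shows is that the rfc-822 gem's pattern is compiled as ASCII-8BIT, so matching it against a UTF-8 address raises Encoding::CompatibilityError instead of simply failing validation. One workaround, sketched here under the assumption that non-ASCII addresses should just be rejected (RFC 822 addresses are ASCII-only; genuinely supporting internationalized domains would additionally need punycode conversion), is a custom validator that screens the encoding before handing the value to the gem's regexp:

        # app/validators/email_validator.rb
        class EmailValidator < ActiveModel::EachValidator
          def validate_each(record, attribute, value)
            # reject non-ASCII outright, then apply the gem's pattern to safe input
            unless value.to_s.ascii_only? && value =~ RFC822::EMAIL
              record.errors.add(attribute, :invalid)
            end
          end
        end

        # in the model:
        #   validates :email, :presence => true, :email => true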

    Read the article

  • What advantages might switching to Ruby give me as a Python programmer?

    - by Richard Placide
    This is my first question on Stack Overflow, so please bear with me. I'm trying to stay away from any form of trolling or flame baiting, as I have tremendous respect for both languages. I'm a Python programmer (though not an expert) and I love it. My first language was C++. My line of work (web development) is pushing me towards other languages like PHP and JavaScript. Recently, I've been very excited by Ruby's increasing popularity. However, I used to be under the impression that Python and Ruby were so close that there was little point in trying to learn and master both. But I get the sense that I was wrong, hence my question: I'd like to hear from Python programmers who have either switched entirely to Ruby or added Ruby to their toolset. What specific benefits did you get from switching (entirely or partially) to Ruby from Python? Ideally I'd like to hear about real-world experiences.

    Read the article

  • [Ruby on Rails] Data Structure

    - by siulamvictor
    I am building an online form with about 20 multiple-choice checkboxes. I can get the nested data with this command: raise params.to_yaml I need to store this data and retrieve it again later. I want to be able to tell which user chose which specific checkbox, e.g. who chose checkbox no. 2. What's the best way to store this data in the database?
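
    One option, sketched with invented model and column names (none of this comes from the question): store each ticked checkbox as its own row rather than as a serialized blob, so that "who chose checkbox no. 2" stays a simple query.

        class Submission < ActiveRecord::Base
          belongs_to :user
          has_many :choices
        end

        class Choice < ActiveRecord::Base
          belongs_to :submission
          # columns: submission_id, checkbox_number (integer)
        end

        # controller: params[:checkboxes] assumed to arrive as { "1" => "0", "2" => "1", ... }
        params[:checkboxes].each do |number, checked|
          @submission.choices.create(:checkbox_number => number.to_i) if checked == "1"
        end

        # later: every user who ticked checkbox no. 2
        Choice.where(:checkbox_number => 2).includes(:submission).map { |c| c.submission.user_id }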

    Read the article

  • MySQL, An Ideal Choice for The Cloud

    - by Bertrand Matthelié
    As the world's most popular web database, MySQL has quickly become the leading database for the cloud, with most providers offering MySQL-based services. Access our Resource Kit to discover: why MySQL has become the leading database in the cloud, and how it addresses the critical attributes of cloud-based deployments; how ISVs rely on MySQL to power their SaaS offerings; and best practices to deploy the world's most popular open source database in public and private clouds. You will also find out how you can leverage MySQL together with Hadoop and other technologies to unlock the value of Big Data, either on-premise or in the cloud. Access white papers, webinars, case studies and other resources in our Resource Kit now!

    Read the article

  • Take data from an XML file and put it into a MySQL database

    - by Aidan
    Hi guys, I'm looking to construct a script that would go through an XML file, find specific tags in it, create a table for them, and fill the table with the specific tags within them. I'm using MySQL 5.1, so LOAD XML isn't an option, and I think the ExtractValue() method won't be much use either... but I don't really know. What would be the best way to go about this?
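
    Since LOAD XML only arrived in MySQL 5.5, a small external script is the usual route. A hedged sketch in Ruby (file name, connection details, table name and tag names are all assumptions, not from the question), using the nokogiri and mysql2 gems:

        # gem install nokogiri mysql2
        require 'nokogiri'
        require 'mysql2'

        doc = Nokogiri::XML(File.read('data.xml'))
        db  = Mysql2::Client.new(:host => 'localhost', :username => 'root',
                                 :password => 'secret', :database => 'mydb')

        # pull each <item> element and insert the child tags we care about
        doc.xpath('//item').each do |item|
          name  = db.escape(item.at_xpath('name').text)
          price = db.escape(item.at_xpath('price').text)
          db.query("INSERT INTO items (name, price) VALUES ('#{name}', '#{price}')")
        end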

    Read the article

  • Locking DB w/ Large Reads (Ruby-on-Rails/Heroku)

    - by Splashlin
    Currently I have a Web API running on Heroku that is constantly writing information we're collecting from other data sources (currently there's about half a GB of data and it's growing very quickly). We're looking to add a reporting system on top of the current database that we can use to extract useful information out of the DB. The problem is that when we're running reports we're locking the DB, and any other sites communicating with the DB are timing out. Does anyone have suggestions on how to solve this type of issue? Amazon RDS seems to have some interesting features around database replication, but I don't know if that will solve my problems. Any advice would be greatly appreciated. Thanks

    Read the article

  • Rails Passenger Nginx cannot load such file -- bundler

    - by Stuart
    I have set up Rails, Passenger, nginx, and PostgreSQL on Ubuntu Server 12.04LTS. Upon trying to access the application/website, however, I am greeted with an error page saying that the application could not be started because a source file is missing. Error message: cannot load such file -- bundler. My nginx config (/opt/nginx/conf/nginx.conf): user railsapp; worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; passenger_root /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14; passenger_ruby /home/railsapp/.rvm/rubies/ruby-1.9.3-p194/bin/ruby; server { listen 80; server_name fitness_schedules.local; root /home/railsapp/fitness_schedules/public; passenger_enabled on; rack_env development; } } Here is the error message: A source file that the application requires, is missing. It is possible that you didn't upload your application files correctly. Please check whether all your application files are uploaded. A required library may not installed. Please install all libraries that this application requires. Further information about the error may have been written to the application's log file. Please check it in order to analyse the problem. Error message: cannot load such file -- bundler Exception class: LoadError Application root: /home/railsapp/fitness_schedules Here is the backtrace from the webpage that is presented by nginx: Backtrace: # File Line Location 0 /home/railsapp/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb 36 in `require' 1 /home/railsapp/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb 36 in `require' 2 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/utils.rb 325 in `prepare_app_process' 3 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/rack/application_spawner.rb 156 in `block in initialize_server' 4 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/utils.rb 563 in `report_app_init_status' 5 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/rack/application_spawner.rb 154 in `initialize_server' 6 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/abstract_server.rb 204 in `start_synchronously' 7 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/abstract_server.rb 180 in `start' 8 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/rack/application_spawner.rb 129 in `start' 9 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/spawn_manager.rb 253 in `block (2 levels) in spawn_rack_application' 10 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/abstract_server_collection.rb 132 in `lookup_or_add' 11 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/spawn_manager.rb 246 in `block in spawn_rack_application' 12 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/abstract_server_collection.rb 82 in `block in synchronize' 13 prelude> 10:in `synchronize' 14 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/abstract_server_collection.rb 79 in `synchronize' 15 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/spawn_manager.rb 244 in 
`spawn_rack_application' 16 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/spawn_manager.rb 137 in `spawn_application' 17 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/spawn_manager.rb 275 in `handle_spawn_application' 18 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/abstract_server.rb 357 in `server_main_loop' 19 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/lib/phusion_passenger/abstract_server.rb 206 in `start_synchronously' 20 /home/railsapp/.rvm/gems/ruby-1.9.3-p194/gems/passenger-3.0.14/helper-scripts/passenger-spawn-server 99 in `' In ~/fitness_schedules/log there are only development and test logs, no production/development logs.
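
    The error simply means the ruby named in passenger_ruby cannot find the bundler gem, so it has to be installed into that same RVM ruby (and the app's bundle installed) as the railsapp user. A hedged sketch of the usual steps; if a gemset is involved, Passenger should be pointed at the gemset's wrapper script rather than the bare binary:

        su - railsapp
        rvm use ruby-1.9.3-p194
        gem install bundler
        cd ~/fitness_schedules && bundle install

        # when using a gemset, nginx.conf would instead carry something like:
        #   passenger_ruby /home/railsapp/.rvm/wrappers/ruby-1.9.3-p194/ruby;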

    Read the article

  • How to proceed with setting up a secondary MySQL slave on Linux?

    - by Algorist
    I have a MySQL database master and slave in production, and I want to set up an additional MySQL slave. There are around 15 terabytes of data in the database, in a mix of MyISAM and InnoDB tables. I am thinking of the options below: Shut down the master database and copy the MySQL data folder to the secondary slave -- can InnoDB tables be copied like this? Run FLUSH TABLES WITH READ LOCK, scp the files to the new slave, and unlock the tables -- this is possible for MyISAM tables, but can I do the same for InnoDB tables too? Thanks for looking at the question.
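
    A hedged sketch of the trade-offs between the two options (commands only illustrate the shape of each approach; at 15 TB a logical dump is likely impractically slow, and a physical copy or a hot-backup tool such as Percona XtraBackup is what is usually reached for):

        # 1) logical snapshot with binlog coordinates recorded for CHANGE MASTER TO;
        #    --single-transaction keeps InnoDB consistent, but does not cover MyISAM tables
        mysqldump --all-databases --master-data=2 --single-transaction -u root -p \
            | gzip > snapshot.sql.gz

        # 2) raw file copy: InnoDB files are only safe to copy after a clean shutdown
        #    (or via a hot-backup tool); FLUSH TABLES WITH READ LOCK is enough for MyISAM
        #    but not for a running InnoDB instance
        mysql -u root -p -e "FLUSH TABLES WITH READ LOCK;"
        # ... copy the MyISAM files, note SHOW MASTER STATUS, then UNLOCK TABLES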

    Read the article

  • How to generate the right password format for Apache2 authentication in use with DBD and MySQL 5.1?

    - by Walkman
    I want to authenticate users for a folder from a MySQL 5.1 database with AuthType Basic. The passwords are stored in plain text (they are not really passwords, so it doesn't matter). The password format for Apache, however, only allows SHA1 and MD5 on Linux systems, as described here. How could I generate the right format with an SQL query? It seems the Apache format is a binary format with a length of 20 bytes, but the MySQL SHA1() function returns a 40-character hex string. My SQL query is something like this: SELECT CONCAT('{SHA}', BASE64_ENCODE(SHA1(access_key))) FROM user_access_keys INNER JOIN users ON user_access_keys.user_id = users.id WHERE name = %s where BASE64_ENCODE is a stored function (MySQL 5.1 doesn't have TO_BASE64 yet). This query returns a 61-byte BLOB, which is not the same format that Apache uses. How could I generate the same format? You can suggest other methods for this too. The point is that I want to authenticate users from a MySQL 5.1 database using plain text as the password.
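
    The {SHA} format Apache expects is base64 of the raw 20-byte SHA-1 digest, while SHA1() returns 40 hex characters; base64-encoding that hex string is exactly what yields the 61-byte value ("{SHA}" plus 56 characters instead of 28). Adding UNHEX() between the two steps should produce the expected 33-character string; a sketch using the same query and the stored BASE64_ENCODE function from the question:

        SELECT CONCAT('{SHA}', BASE64_ENCODE(UNHEX(SHA1(access_key))))
        FROM user_access_keys
        INNER JOIN users ON user_access_keys.user_id = users.id
        WHERE name = %s;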

    Read the article

  • What could cause sudden crash of a MySQL 5.0.67 installation?

    - by Alex R
    I have an old Ubuntu 8.10 32-bit server with MySQL 5.0.67. There's 5.7GB of data in it and it grows by about 100MB every day. About 3 days ago, the MySQL instance began dying suddenly and quietly (no log entry) during the nightly mysqldump. What could be causing it? Upgrading MySQL is a long-term project for me, unless there happens to be a specific bug in 5.0.67, in which case I guess I'll just need to reprioritize. I'm hoping somebody might be familiar with this problem, since this is a fairly popular version bundled with Ubuntu 8.10. Thanks
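
    One common cause of a silent mysqld death during a nightly dump is the kernel OOM killer, which leaves its trace in the system logs rather than the MySQL error log. A quick hedged check (log paths are the Ubuntu defaults; yours may differ):

        grep -i -e 'out of memory' -e 'oom' /var/log/syslog /var/log/kern.log
        dmesg | grep -i -e 'killed process' -e mysqld

        # also confirm mysqld is actually writing an error log somewhere
        grep -i log-error /etc/mysql/my.cnf
        tail -n 100 /var/log/mysql/error.log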

    Read the article
