Search Results

Search found 727 results on 30 pages for 'evaluation'.

Page 22/30 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • PHP - Store & Calculate the total mark from radio input

    - by user1806136
    I have designed a small web-based system that has a school evaluation form to ask specific users (those who can access the system) some questions, and the input is a radio type (1, 2, 3, or 4). The code works and inserts the input into the database, but I don't know the correct query to calculate the total mark and store it in the database. This is the currently working code: <?php session_start(); $Load=$_SESSION['login_user']; include('../connect.php'); $sql= "Select name from student where ID='$Load'"; $username = mysql_query($sql); $id=$_SESSION['login_user']; if (isset($_POST['submit'])) { $v1 = $_POST['v1']; $v2 = $_POST['v2']; $v3 = $_POST['v3']; $total = $_POST['total']; mysql_query("INSERT into Form1 (P1,P2,P3,TOTAL) values('$v1','$v2','$v3','$total')") or die(mysql_error()); header("Location: mark.php"); } ?> <html> <head> <?php if(!isset($_SESSION['login_user'])) header("Location:index.html"); ?> <title>Q&A Form</title> </head> <body> <center><form method="post" action="" > <table style="width: 20%" > <tr> <th> Criteria </th> <th> </th> </tr> <tr> <th> Excellent </th> <td > 4 </td> </tr> <tr> <th > Good <font size="3" > </font></th> <td> 3 <font size="4" > </font></td> </tr> <tr> <th > Average <font size="3" > </font></th> <td > 2 <font size="4" > </font></td> </tr> <tr> <th > Poor <font size="3" > </font></th> <td > 1 <font size="4" > </td> </tr> <font size='4'> <table style="width: 70%"> <tr> <th > School Evaluation <font size="4" > </font></th> <tr> <th > Criteria <font size="4" > </font></th> <th> 4<font size="4" > </font></th> <th> 3<font size="4" > </font></th> <th> 2<font size="4" > </font></th> <th> 1<font size="4" > </font></th> </tr> <tr> <th> Your attendance<font size="4" > </font></th> <td> <input type="radio" name ="v1" value = "4" checked = "checked" /></td> <td> <input type="radio" name ="v1" value = "3" /></td> <td> <input type="radio" name ="v1" value = "2" /></td> <td> <input type="radio" name ="v1" value = "1" /></td> </tr> <tr> <th > Your grades <font size="4" > </font></th> <td> <input type="radio" name ="v2" value = "4" checked = "checked" /></td> <td> <input type="radio" name ="v2" value = "3" /></td> <td> <input type="radio" name ="v2" value = "2" /></td> <td> <input type="radio" name ="v2" value = "1" /></td> </tr> <tr> <th >Your self-control <font size="4" > </font></th> <td> <input type="radio" name ="v3" value = "4" checked = "checked" /></td> <td> <input type="radio" name ="v3" value = "3" /></td> <td> <input type="radio" name ="v3" value = "2" /></td> <td> <input type="radio" name ="v3" value = "1" /></td> </tr> </tr> </table> <br> <a href="evaE.php"> <td><input type="submit" name="submit" value="Submit"> <input type="reset" name="clear" value="clear" style="width: 70px"></td> <?php $total = $v1+ $v2 + $v3; ?> </form> </center> </div> </body> </html> I used this snippet, but it doesn't work. Any help please? <?php $total = $v1+ $v2 + $v3; ?>
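
    One likely culprit, judging from the code above: the line <?php $total = $v1 + $v2 + $v3; ?> sits near the bottom of the page, after the INSERT and the redirect have already run, so the value it computes never reaches the query, and the $_POST['total'] the INSERT actually uses is never set by the form. A minimal sketch of the submit branch with the total computed server-side before the INSERT (same table and column names as the question; the deprecated mysql_* API is kept only to match the original code):

```php
<?php
// Sketch: compute the total on the server before inserting, instead of
// reading it from $_POST['total'] (which the form never submits).
if (isset($_POST['submit'])) {
    $v1 = (int) $_POST['v1'];   // casting to int also sanitizes the value
    $v2 = (int) $_POST['v2'];
    $v3 = (int) $_POST['v3'];
    $total = $v1 + $v2 + $v3;   // calculated server-side
    mysql_query("INSERT INTO Form1 (P1,P2,P3,TOTAL)
                 VALUES ('$v1','$v2','$v3','$total')") or die(mysql_error());
    header("Location: mark.php");
    exit;  // stop so nothing below runs after the redirect
}
?>
```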

    Read the article

  • MySQL is running VERY slow

    - by user1032531
    I have two servers: a VPS and a laptop. I recently re-built both of them, and MySQL is running about 20 times slower on the laptop. Both servers used to run CentOS 5.8 and I think MySQL 5.1, and the laptop used to do great so I do not think it is the hardware. For the VPS, my provider installed CentOS 6.4, and then I installed MySQL 5.1.69 using yum with the CentOS repo. For the laptop, I installed CentOS 6.4 basic server and then installed MySQL 5.1.69 using yum with the CentOS repo. my.cnf for both servers are identical, and I have shown below. For both servers, I've also included below the output from SHOW VARIABLES; as well as output from sysbench, file system information, and cpu information. I have tried adding skip-name-resolve, but it didn't help. The matrix below shows the SHOW VARIABLES output from both servers which is different. Again, MySQL was installed the same way, so I do not know why it is different, but it is and I think this might be why the laptop is executing MySQL so slowly. Why is the laptop running MySQL slowly, and how do I fix it? Differences between SHOW VARIABLES on both servers +---------------------------+-----------------------+-------------------------+ | Variable | Value-VPS | Value-Laptop | +---------------------------+-----------------------+-------------------------+ | hostname | vps.site1.com | laptop.site2.com | | max_binlog_cache_size | 4294963200 | 18446744073709500000 | | max_seeks_for_key | 4294967295 | 18446744073709500000 | | max_write_lock_count | 4294967295 | 18446744073709500000 | | myisam_max_sort_file_size | 2146435072 | 9223372036853720000 | | myisam_mmap_size | 4294967295 | 18446744073709500000 | | plugin_dir | /usr/lib/mysql/plugin | /usr/lib64/mysql/plugin | | pseudo_thread_id | 7568 | 2 | | system_time_zone | EST | PDT | | thread_stack | 196608 | 262144 | | timestamp | 1372252112 | 1372252046 | | version_compile_machine | i386 | x86_64 | +---------------------------+-----------------------+-------------------------+ my.cnf for both servers [root@server1 ~]# cat /etc/my.cnf [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid innodb_strict_mode=on sql_mode=TRADITIONAL # sql_mode=STRICT_TRANS_TABLES,NO_ZERO_DATE,NO_ZERO_IN_DATE character-set-server=utf8 collation-server=utf8_general_ci log=/var/log/mysqld_all.log [root@server1 ~]# VPS SHOW VARIABLES Info Same as Laptop shown below but changes per above matrix (removed to allow me to be under the 30000 characters as required by ServerFault) Laptop SHOW VARIABLES Info auto_increment_increment 1 auto_increment_offset 1 autocommit ON automatic_sp_privileges ON back_log 50 basedir /usr/ big_tables OFF binlog_cache_size 32768 binlog_direct_non_transactional_updates OFF binlog_format STATEMENT bulk_insert_buffer_size 8388608 character_set_client utf8 character_set_connection utf8 character_set_database latin1 character_set_filesystem binary character_set_results utf8 character_set_server latin1 character_set_system utf8 character_sets_dir /usr/share/mysql/charsets/ collation_connection utf8_general_ci collation_database latin1_swedish_ci collation_server latin1_swedish_ci completion_type 0 concurrent_insert 1 connect_timeout 10 datadir /var/lib/mysql/ date_format %Y-%m-%d datetime_format %Y-%m-%d %H:%i:%s default_week_format 0 delay_key_write ON delayed_insert_limit 100 delayed_insert_timeout 300 
delayed_queue_size 1000 div_precision_increment 4 engine_condition_pushdown ON error_count 0 event_scheduler OFF expire_logs_days 0 flush OFF flush_time 0 foreign_key_checks ON ft_boolean_syntax + -><()~*:""&| ft_max_word_len 84 ft_min_word_len 4 ft_query_expansion_limit 20 ft_stopword_file (built-in) general_log OFF general_log_file /var/run/mysqld/mysqld.log group_concat_max_len 1024 have_community_features YES have_compress YES have_crypt YES have_csv YES have_dynamic_loading YES have_geometry YES have_innodb YES have_ndbcluster NO have_openssl DISABLED have_partitioning YES have_query_cache YES have_rtree_keys YES have_ssl DISABLED have_symlink DISABLED hostname server1.site2.com identity 0 ignore_builtin_innodb OFF init_connect init_file init_slave innodb_adaptive_hash_index ON innodb_additional_mem_pool_size 1048576 innodb_autoextend_increment 8 innodb_autoinc_lock_mode 1 innodb_buffer_pool_size 8388608 innodb_checksums ON innodb_commit_concurrency 0 innodb_concurrency_tickets 500 innodb_data_file_path ibdata1:10M:autoextend innodb_data_home_dir innodb_doublewrite ON innodb_fast_shutdown 1 innodb_file_io_threads 4 innodb_file_per_table OFF innodb_flush_log_at_trx_commit 1 innodb_flush_method innodb_force_recovery 0 innodb_lock_wait_timeout 50 innodb_locks_unsafe_for_binlog OFF innodb_log_buffer_size 1048576 innodb_log_file_size 5242880 innodb_log_files_in_group 2 innodb_log_group_home_dir ./ innodb_max_dirty_pages_pct 90 innodb_max_purge_lag 0 innodb_mirrored_log_groups 1 innodb_open_files 300 innodb_rollback_on_timeout OFF innodb_stats_method nulls_equal innodb_stats_on_metadata ON innodb_support_xa ON innodb_sync_spin_loops 20 innodb_table_locks ON innodb_thread_concurrency 8 innodb_thread_sleep_delay 10000 innodb_use_legacy_cardinality_algorithm ON insert_id 0 interactive_timeout 28800 join_buffer_size 131072 keep_files_on_create OFF key_buffer_size 8384512 key_cache_age_threshold 300 key_cache_block_size 1024 key_cache_division_limit 100 language /usr/share/mysql/english/ large_files_support ON large_page_size 0 large_pages OFF last_insert_id 0 lc_time_names en_US license GPL local_infile ON locked_in_memory OFF log OFF log_bin OFF log_bin_trust_function_creators OFF log_bin_trust_routine_creators OFF log_error /var/log/mysqld.log log_output FILE log_queries_not_using_indexes OFF log_slave_updates OFF log_slow_queries OFF log_warnings 1 long_query_time 10.000000 low_priority_updates OFF lower_case_file_system OFF lower_case_table_names 0 max_allowed_packet 1048576 max_binlog_cache_size 18446744073709547520 max_binlog_size 1073741824 max_connect_errors 10 max_connections 151 max_delayed_threads 20 max_error_count 64 max_heap_table_size 16777216 max_insert_delayed_threads 20 max_join_size 18446744073709551615 max_length_for_sort_data 1024 max_long_data_size 1048576 max_prepared_stmt_count 16382 max_relay_log_size 0 max_seeks_for_key 18446744073709551615 max_sort_length 1024 max_sp_recursion_depth 0 max_tmp_tables 32 max_user_connections 0 max_write_lock_count 18446744073709551615 min_examined_row_limit 0 multi_range_count 256 myisam_data_pointer_size 6 myisam_max_sort_file_size 9223372036853727232 myisam_mmap_size 18446744073709551615 myisam_recover_options OFF myisam_repair_threads 1 myisam_sort_buffer_size 8388608 myisam_stats_method nulls_unequal myisam_use_mmap OFF net_buffer_length 16384 net_read_timeout 30 net_retry_count 10 net_write_timeout 60 new OFF old OFF old_alter_table OFF old_passwords OFF open_files_limit 1024 optimizer_prune_level 1 optimizer_search_depth 62 
optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on pid_file /var/run/mysqld/mysqld.pid plugin_dir /usr/lib64/mysql/plugin port 3306 preload_buffer_size 32768 profiling OFF profiling_history_size 15 protocol_version 10 pseudo_thread_id 3 query_alloc_block_size 8192 query_cache_limit 1048576 query_cache_min_res_unit 4096 query_cache_size 0 query_cache_type ON query_cache_wlock_invalidate OFF query_prealloc_size 8192 rand_seed1 rand_seed2 range_alloc_block_size 4096 read_buffer_size 131072 read_only OFF read_rnd_buffer_size 262144 relay_log relay_log_index relay_log_info_file relay-log.info relay_log_purge ON relay_log_space_limit 0 report_host report_password report_port 3306 report_user rpl_recovery_rank 0 secure_auth OFF secure_file_priv server_id 0 skip_external_locking ON skip_name_resolve OFF skip_networking OFF skip_show_database OFF slave_compressed_protocol OFF slave_exec_mode STRICT slave_load_tmpdir /tmp slave_max_allowed_packet 1073741824 slave_net_timeout 3600 slave_skip_errors OFF slave_transaction_retries 10 slow_launch_time 2 slow_query_log OFF slow_query_log_file /var/run/mysqld/mysqld-slow.log socket /var/lib/mysql/mysql.sock sort_buffer_size 2097144 sql_auto_is_null ON sql_big_selects ON sql_big_tables OFF sql_buffer_result OFF sql_log_bin ON sql_log_off OFF sql_log_update ON sql_low_priority_updates OFF sql_max_join_size 18446744073709551615 sql_mode sql_notes ON sql_quote_show_create ON sql_safe_updates OFF sql_select_limit 18446744073709551615 sql_slave_skip_counter sql_warnings OFF ssl_ca ssl_capath ssl_cert ssl_cipher ssl_key storage_engine MyISAM sync_binlog 0 sync_frm ON system_time_zone PDT table_definition_cache 256 table_lock_wait_timeout 50 table_open_cache 64 table_type MyISAM thread_cache_size 0 thread_handling one-thread-per-connection thread_stack 262144 time_format %H:%i:%s time_zone SYSTEM timed_mutexes OFF timestamp 1372254399 tmp_table_size 16777216 tmpdir /tmp transaction_alloc_block_size 8192 transaction_prealloc_size 4096 tx_isolation REPEATABLE-READ unique_checks ON updatable_views_with_limit YES version 5.1.69 version_comment Source distribution version_compile_machine x86_64 version_compile_os redhat-linux-gnu wait_timeout 28800 warning_count 0 VPS Sysbench Info [root@vps ~]# cat sysbench.txt sysbench 0.4.12: multi-threaded system evaluation benchmark Running the test with following options: Number of threads: 8 Doing OLTP test. Running mixed OLTP test Doing read-only test Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases) Using "BEGIN" for starting transactions Using auto_inc on the id column Threads started! Time limit exceeded, exiting... (last message repeated 7 times) Done. OLTP test statistics: queries performed: read: 1449966 write: 0 other: 207138 total: 1657104 transactions: 103569 (1726.01 per sec.) deadlocks: 0 (0.00 per sec.) read/write requests: 1449966 (24164.08 per sec.) other operations: 207138 (3452.01 per sec.) Test execution summary: total time: 60.0050s total number of events: 103569 total time taken by event execution: 479.1544 per-request statistics: min: 1.98ms avg: 4.63ms max: 330.73ms approx. 95 percentile: 8.26ms Threads fairness: events (avg/stddev): 12946.1250/381.09 execution time (avg/stddev): 59.8943/0.00 [root@vps ~]# Laptop Sysbench Info [root@server1 ~]# cat sysbench.txt sysbench 0.4.12: multi-threaded system evaluation benchmark Running the test with following options: Number of threads: 8 Doing OLTP test. 
Running mixed OLTP test Doing read-only test Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases) Using "BEGIN" for starting transactions Using auto_inc on the id column Threads started! Time limit exceeded, exiting... (last message repeated 7 times) Done. OLTP test statistics: queries performed: read: 634718 write: 0 other: 90674 total: 725392 transactions: 45337 (755.56 per sec.) deadlocks: 0 (0.00 per sec.) read/write requests: 634718 (10577.78 per sec.) other operations: 90674 (1511.11 per sec.) Test execution summary: total time: 60.0048s total number of events: 45337 total time taken by event execution: 479.4912 per-request statistics: min: 2.04ms avg: 10.58ms max: 85.56ms approx. 95 percentile: 19.70ms Threads fairness: events (avg/stddev): 5667.1250/42.18 execution time (avg/stddev): 59.9364/0.00 [root@server1 ~]# VPS File Info [root@vps ~]# df -T Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/simfs simfs 20971520 16187440 4784080 78% / none tmpfs 6224432 4 6224428 1% /dev none tmpfs 6224432 0 6224432 0% /dev/shm [root@vps ~]# Laptop File Info [root@server1 ~]# df -T Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/mapper/vg_server1-lv_root ext4 72383800 4243964 64462860 7% / tmpfs tmpfs 956352 0 956352 0% /dev/shm /dev/sdb1 ext4 495844 60948 409296 13% /boot [root@server1 ~]# VPS CPU Info Removed to stay under the 30000 character limit required by ServerFault Laptop CPU Info [root@server1 ~]# cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 Duo CPU T7100 @ 1.80GHz stepping : 13 cpu MHz : 800.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm ida dts tpr_shadow vnmi flexpriority bogomips : 3591.39 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 Duo CPU T7100 @ 1.80GHz stepping : 13 cpu MHz : 800.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 apicid : 1 initial apicid : 1 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm ida dts tpr_shadow vnmi flexpriority bogomips : 3591.39 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: [root@server1 ~]# EDIT New Info requested by shakalandy [root@localhost ~]# cat /proc/meminfo MemTotal: 2044804 kB MemFree: 761464 kB Buffers: 68868 kB Cached: 369708 kB SwapCached: 0 kB Active: 881080 kB Inactive: 246016 kB Active(anon): 688312 kB Inactive(anon): 4416 kB Active(file): 192768 kB Inactive(file): 241600 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 4095992 kB SwapFree: 4095992 kB Dirty: 0 kB Writeback: 0 kB AnonPages: 688428 kB Mapped: 65156 kB Shmem: 4216 kB Slab: 92428 kB SReclaimable: 31260 kB SUnreclaim: 61168 kB KernelStack: 2392 kB 
PageTables: 28356 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 5118392 kB Committed_AS: 1530212 kB VmallocTotal: 34359738367 kB VmallocUsed: 343604 kB VmallocChunk: 34359372920 kB HardwareCorrupted: 0 kB AnonHugePages: 520192 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 8556 kB DirectMap2M: 2078720 kB [root@localhost ~]# ps aux | grep mysql root 2227 0.0 0.0 108332 1504 ? S 07:36 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/localhost.badobe.com.pid mysql 2319 0.1 24.5 1470068 501360 ? Sl 07:36 0:57 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/localhost.badobe.com.err --pid-file=/var/lib/mysql/localhost.badobe.com.pid root 3579 0.0 0.1 201840 3028 pts/0 S+ 07:40 0:00 mysql -u root -p root 13887 0.0 0.1 201840 3036 pts/3 S+ 18:08 0:00 mysql -uroot -px xxxxxxxxxx root 14449 0.0 0.0 103248 840 pts/2 S+ 18:16 0:00 grep mysql [root@localhost ~]# ps aux | grep mysql root 2227 0.0 0.0 108332 1504 ? S 07:36 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/localhost.badobe.com.pid mysql 2319 0.1 24.5 1470068 501356 ? Sl 07:36 0:57 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/localhost.badobe.com.err --pid-file=/var/lib/mysql/localhost.badobe.com.pid root 3579 0.0 0.1 201840 3028 pts/0 S+ 07:40 0:00 mysql -u root -p root 13887 0.0 0.1 201840 3048 pts/3 S+ 18:08 0:00 mysql -uroot -px xxxxxxxxxx root 14470 0.0 0.0 103248 840 pts/2 S+ 18:16 0:00 grep mysql [root@localhost ~]# vmstat 1 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 0 742172 76376 371064 0 0 6 6 78 202 2 1 97 1 0 0 0 0 742164 76380 371060 0 0 0 16 191 467 2 1 93 5 0 0 0 0 742164 76380 371064 0 0 0 0 148 388 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 159 418 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 145 380 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 166 429 2 1 97 0 0 1 0 0 742164 76380 371064 0 0 0 0 148 373 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 149 382 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 168 408 2 0 97 0 0 0 0 0 742164 76380 371064 0 0 0 0 165 394 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 159 354 2 1 98 0 0 0 0 0 742164 76388 371060 0 0 0 16 180 447 2 0 91 6 0 0 0 0 742164 76388 371064 0 0 0 0 143 344 2 1 98 0 0 0 1 0 742784 76416 370044 0 0 28 580 360 678 3 1 74 23 0 1 0 0 744768 76496 367772 0 0 40 1036 437 865 3 1 53 43 0 0 1 0 747248 76596 365412 0 0 48 1224 561 923 3 2 53 43 0 0 1 0 749232 76696 363092 0 0 32 1132 512 883 3 2 52 44 0 0 1 0 751340 76772 361020 0 0 32 1008 472 872 2 1 52 45 0 0 1 0 753448 76840 358540 0 0 36 1088 512 860 2 1 51 46 0 0 1 0 755060 76936 357636 0 0 28 1012 481 922 2 2 52 45 0 0 1 0 755060 77064 357988 0 0 12 896 444 902 2 1 53 45 0 0 1 0 754688 77148 358448 0 0 16 1096 506 1007 1 1 56 42 0 0 2 0 754192 77268 358932 0 0 12 1060 481 957 1 2 53 44 0 0 1 0 753696 77380 359392 0 0 12 1052 512 1025 2 1 55 42 0 0 1 0 751028 77480 359828 0 0 8 984 423 909 2 2 52 45 0 0 1 0 750524 77620 360200 0 0 8 788 367 869 1 2 54 44 0 0 1 0 749904 77700 360664 0 0 8 928 439 924 2 2 55 43 0 0 1 0 749408 77796 361084 0 0 12 976 468 967 1 1 56 43 0 0 1 0 748788 77896 361464 0 0 12 992 453 944 1 2 54 43 0 1 1 0 748416 77992 361996 0 0 12 784 392 868 2 1 52 46 0 0 1 0 747920 
78092 362336 0 0 4 896 382 874 1 1 52 46 0 0 1 0 745252 78172 362780 0 0 12 1040 444 923 1 1 56 42 0 0 1 0 744764 78288 363220 0 0 8 1024 448 934 2 1 55 43 0 0 1 0 744144 78408 363668 0 0 8 1000 461 982 2 1 53 44 0 0 1 0 743648 78488 364148 0 0 8 872 443 888 2 1 54 43 0 0 1 0 743152 78548 364468 0 0 16 1020 511 995 2 1 55 43 0 0 1 0 742656 78632 365024 0 0 12 928 431 913 1 2 53 44 0 0 1 0 742160 78728 365468 0 0 12 996 470 955 2 2 54 44 0 1 1 0 739492 78840 365896 0 0 8 988 447 939 1 2 52 46 0 0 1 0 738872 78996 366352 0 0 12 972 442 928 1 1 55 44 0 1 1 0 738244 79148 366812 0 0 8 948 549 1126 2 2 54 43 0 0 1 0 737624 79312 367188 0 0 12 996 456 953 2 2 54 43 0 0 1 0 736880 79456 367660 0 0 12 960 444 918 1 1 53 46 0 0 1 0 736260 79584 368124 0 0 8 884 414 921 1 1 54 44 0 0 1 0 735648 79716 368488 0 0 12 976 450 955 2 1 56 41 0 0 1 0 733104 79840 368988 0 0 12 932 453 918 1 2 55 43 0 0 1 0 732608 79996 369356 0 0 16 916 444 889 1 2 54 43 0 1 1 0 731476 80128 369800 0 0 16 852 514 978 2 2 54 43 0 0 1 0 731244 80252 370200 0 0 8 904 398 870 2 1 55 43 0 1 1 0 730624 80384 370612 0 0 12 1032 447 977 1 2 57 41 0 0 1 0 730004 80524 371096 0 0 12 984 469 941 2 2 52 45 0 0 1 0 729508 80636 371544 0 0 12 928 438 922 2 1 52 46 0 0 1 0 728888 80756 371948 0 0 16 972 439 943 2 1 55 43 0 0 1 0 726468 80900 372272 0 0 8 960 545 1024 2 1 54 43 0 1 1 0 726344 81024 372272 0 0 8 464 490 1057 1 2 53 44 0 0 1 0 726096 81148 372276 0 0 4 328 441 1063 2 1 53 45 0 1 1 0 726096 81256 372292 0 0 0 296 387 975 1 1 53 45 0 0 1 0 725848 81380 372284 0 0 4 332 425 1034 2 1 54 44 0 1 1 0 725848 81496 372300 0 0 4 308 386 992 2 1 54 43 0 0 1 0 725600 81616 372296 0 0 4 328 404 1060 1 1 54 44 0 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 1 0 725600 81732 372296 0 0 4 328 439 1011 1 1 53 44 0 0 1 0 725476 81848 372308 0 0 0 316 441 1023 2 2 52 46 0 1 1 0 725352 81972 372300 0 0 4 344 451 1021 1 1 55 43 0 2 1 0 725228 82088 372320 0 0 0 328 427 1058 1 1 54 44 0 1 1 0 724980 82220 372300 0 0 4 336 419 999 2 1 54 44 0 1 1 0 724980 82328 372320 0 0 4 320 430 1019 1 1 54 44 0 1 1 0 724732 82436 372328 0 0 0 388 363 942 2 1 54 44 0 1 1 0 724608 82560 372312 0 0 4 308 419 993 1 2 54 44 0 1 0 0 724360 82684 372320 0 0 0 304 421 1028 2 1 55 42 0 1 0 0 724360 82684 372388 0 0 0 0 158 416 2 1 98 0 0 1 1 0 724236 82720 372360 0 0 0 6464 243 855 3 2 84 12 0 1 0 0 724112 82748 372360 0 0 0 5356 266 895 3 1 84 12 0 2 1 0 724112 82764 372380 0 0 0 3052 221 511 2 2 93 4 0 1 0 0 724112 82796 372372 0 0 0 4548 325 1067 2 2 81 16 0 1 0 0 724112 82816 372368 0 0 0 3240 259 829 3 1 90 6 0 1 0 0 724112 82836 372380 0 0 0 3260 309 822 3 2 88 8 0 1 1 0 724112 82876 372364 0 0 0 4680 326 978 3 1 77 19 0 1 0 0 724112 82884 372380 0 0 0 512 207 508 2 1 95 2 0 1 0 0 724112 82884 372388 0 0 0 0 138 361 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 158 397 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 146 395 2 1 98 0 0 2 0 0 724112 82884 372388 0 0 0 0 160 395 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 163 382 1 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 176 422 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 134 351 2 1 98 0 0 0 0 0 724112 82884 372388 0 0 0 0 190 429 2 1 97 0 0 0 0 0 724104 82884 372392 0 0 0 0 139 358 2 1 98 0 0 0 0 0 724848 82884 372392 0 0 0 4 211 432 2 1 97 0 0 1 0 0 724980 82884 372392 0 0 0 0 166 370 2 1 98 0 0 0 0 0 724980 82884 372392 0 0 0 0 164 397 2 1 98 0 0 ^C [root@localhost ~]#
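
    One detail in the laptop's own /proc/cpuinfo above is worth ruling out before touching MySQL: both cores of the 1.80GHz T7100 report "cpu MHz : 800.000", i.e. CPU frequency scaling has the processor throttled to well under half speed, which by itself could account for a large share of the gap. A quick check/fix sketch (the sysfs paths are the standard Linux cpufreq interface; making the change persistent would need the distribution's cpuspeed/cpupower service configured):

```sh
# Show the active governor and the current clock for each core
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
grep "cpu MHz" /proc/cpuinfo

# Temporarily pin all cores to full speed, then re-run sysbench
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > "$g"
done
```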

    Read the article

  • FusionCharts vs GoogleCharts vs HighCharts suggestions required for commercial use

    - by Forte
    I find that the FusionCharts v3 evaluation version and HighCharts cannot be used for commercial purposes. Google Charts is the best option, but its charts are not as good looking as either of the above. They don't even have 3D charts, although their Visualization API supposedly supports 3D, which I cannot find. Is there any open source graphing or charting solution available that can be used in commercial products? I even looked into Open Flash Charts 2, but found that the developer left the project a long time ago and the current libraries are way too buggy. I had to fix more than 50 bugs just to get one chart working, and when I tried to fix the others I couldn't get pie charts etc. working. What I'm looking for is: 1. an attractive 3D column chart; 2. a 3D pie chart; 3. a spline chart; 4. a geographical chart. Does anyone know of an open source or free solution that can be used in commercial products? Cheers!

    Read the article

  • StackOverflow in VB.NET SQLite query

    - by Majgel
    I have a StackOverflowException in one of my DB functions that I don't know how to deal with. I have a SQLite database with one table, "tblEmployees", that holds a record for each employee (several thousand rows), and usually this function runs without any problem. But sometimes, after the function has been called a thousand times, it breaks with a StackOverflowException at the line "ReturnTable.Load(reader)" with the message: An unhandled exception of type 'System.StackOverflowException' occurred in System.Data.SQLite.dll If I restart the application, it has no problem continuing with the exact same record it last crashed on. I can also make exactly the same DB call from SQLite Admin at the time of the crash without any problems. Here is the code: Public Function GetNextEmployeeInQueue() As String Dim NextEmployeeInQueue As String = Nothing Dim query As [String] = "SELECT FirstName FROM tblEmployees WHERE Checked=0 LIMIT 1;" Try Dim ReturnTable As New DataTable() Dim mycommand As New SQLiteCommand(cnn) mycommand.CommandText = query Dim reader As SQLiteDataReader = mycommand.ExecuteReader() ReturnTable.Load(reader) reader.Close() If ReturnTable.Rows.Count > 0 Then NextEmployeeInQueue = ReturnTable.Rows(0)("FirstName").ToString() Else MsgBox("No more employees found in queue") End If Catch fail As Exception MessageBox.Show("Error: " & fail.Message.ToString()) End Try If NextEmployeeInQueue IsNot Nothing Then Return NextEmployeeInQueue Else Return "No more records in queue" End If End Function When it crashes, the reader has "Property evaluation failed." in all values. I assume there is some problem with allocated memory that isn't released correctly, but I can't figure out which object it's all about. The DB connection opens when the DB-class object is created and closes on the main form's Dispose. Should I maybe dispose the mycommand object every time in a Finally block? But wouldn't that result in a closed DB connection?
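
    On the disposal question at the end: wrapping the command in a Using block disposes it deterministically without touching the shared connection, so cnn stays open. A hedged sketch of the same lookup written that way; ExecuteScalar is used because only one column of one row is needed, which removes the DataTable and the reader entirely:

```vb
Public Function GetNextEmployeeInQueue() As String
    Dim query As String = "SELECT FirstName FROM tblEmployees WHERE Checked=0 LIMIT 1;"
    Try
        ' Using disposes the command (not the connection), even on exceptions
        Using mycommand As New SQLiteCommand(query, cnn)
            Dim result As Object = mycommand.ExecuteScalar()
            If result IsNot Nothing Then
                Return result.ToString()
            End If
            MsgBox("No more employees found in queue")
        End Using
    Catch fail As Exception
        MessageBox.Show("Error: " & fail.Message)
    End Try
    Return "No more records in queue"
End Function
```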

    Read the article

  • Atomic UPSERT in SQL Server 2005

    - by rabidpebble
    What is the correct pattern for doing an atomic "UPSERT" (UPDATE where exists, INSERT otherwise) in SQL Server 2005? I see a lot of code on SO (e.g. see http://stackoverflow.com/questions/639854/tsql-check-if-a-row-exists-otherwise-insert) with the following two-part pattern: UPDATE ... FROM ... WHERE <condition> -- race condition risk here IF @@ROWCOUNT = 0 INSERT ... or IF (SELECT COUNT(*) FROM ... WHERE <condition>) = 0 -- race condition risk here INSERT ... ELSE UPDATE ... where <condition> will be an evaluation of natural keys. None of the above approaches seems to deal well with concurrency. If I cannot have two rows with the same natural key, it seems like all of the above risk inserting rows with the same natural keys in race condition scenarios. I have been using the following approach, but I'm surprised not to see it anywhere in people's responses, so I'm wondering what is wrong with it: INSERT INTO <table> SELECT <natural keys>, <other stuff...> FROM <table> WHERE NOT EXISTS -- race condition risk here? ( SELECT 1 FROM <table> WHERE <natural keys> ) UPDATE ... WHERE <natural keys> (Note: I'm assuming that rows will not be deleted from this table. Although it would be nice to discuss how to handle the case where they can be deleted -- are transactions the only option? Which level of isolation?) Is this atomic? I can't locate where this would be documented in the SQL Server documentation.
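
    For reference, one widely cited SQL Server 2005 pattern for exactly this race is to take and hold key-range locks up front, inside a transaction, so the existence check and the write behave as one atomic unit. A sketch with placeholder table and column names:

```sql
BEGIN TRAN;

-- UPDLOCK + HOLDLOCK acquire a key-range lock on the natural key and hold
-- it until COMMIT, so no concurrent session can insert the same key
-- between the UPDATE and the INSERT below.
UPDATE dbo.MyTable WITH (UPDLOCK, HOLDLOCK)
SET    OtherStuff = @OtherStuff
WHERE  NaturalKey = @NaturalKey;

IF @@ROWCOUNT = 0
    INSERT INTO dbo.MyTable (NaturalKey, OtherStuff)
    VALUES (@NaturalKey, @OtherStuff);

COMMIT TRAN;
```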

    Read the article

  • How to program a neural network for chess?

    - by marco92w
    Hello! I want to program a chess engine that learns to make good moves and win against other players. I've already coded a representation of the chess board and a function that outputs all possible moves. So I only need an evaluation function that says how good a given situation of the board is. Therefore, I would like to use an artificial neural network, which should then evaluate a given position. The output should be a numerical value: the higher the value, the better the position is for the white player. My approach is to build a network of 385 neurons: there are six unique chess pieces and 64 fields on the board, so for every field we take 6 neurons (1 for every piece). If there is a white piece, the input value is 1. If there is a black piece, the value is -1. And if there is no piece of that sort on that field, the value is 0. In addition to that, there should be 1 neuron for the player to move. If it is White's turn, the input value is 1, and if it's Black's turn, the value is -1. I think that configuration of the neural network is quite good. But the main part is missing: how can I implement this neural network in a programming language (e.g. Delphi)? I think the weights for each neuron should be the same in the beginning. Depending on the result of a match, the weights should then be adjusted. But how? I think I should let 2 computer players (both using my engine) play against each other. If White wins, Black gets the feedback that its weights aren't good. So it would be great if you could help me implement the neural network in a programming language (Delphi would be best, otherwise pseudocode). Thanks in advance!
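
    A forward-pass sketch may make the 385-input encoding concrete (Python/NumPy standing in for the requested pseudocode; the 64-unit hidden layer and tanh activation are assumptions, not part of the question). One caveat to the plan above: starting all weights identical keeps hidden units symmetric under gradient-style updates, so small random initial weights are the usual choice.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 64                                      # assumed hidden-layer size
W1 = rng.normal(scale=0.1, size=(HIDDEN, 385))   # input -> hidden weights
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(1, HIDDEN))     # hidden -> output weights
b2 = np.zeros(1)

def evaluate(position):
    """position: length-385 vector with entries in {-1, 0, 1} as described,
    index 384 holding the side to move. Higher = better for White."""
    h = np.tanh(W1 @ position + b1)
    return float(W2 @ h + b2)

pos = np.zeros(385)   # toy example: empty encoding, White to move
pos[384] = 1.0
print(evaluate(pos))
```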

    Read the article

  • Flex vs. jQuery vs. GWT vs. Closure vs. Cappuccino vs. plain JS and HTML5?

    - by Laith J
    Hello, I'm creating my first web application and I'm really confused as to which technology to go for. My application needs to look serious (like an application); it doesn't need many colorful graphical interfaces. It only needs a toolbar, a tab bar, a split panel (preferably 3 columns), an easily formattable text field, and a status bar. It will connect to a MySQL database through PHP (unless I go for GWT). Users will upload files. My evaluation of the options: Flex: Probably the easiest to develop with, but I'm pretty sure my application is something one would use on an iPad, and with Flash's future on the iPad still uncertain, I don't want to take the risk; otherwise Flex would've been my choice. jQuery: I've heard a lot about it and a lot of people recommend it, but I don't know how easy it is to use or how customizable the look of my app would be. GWT: The problem with GWT is that it doesn't have many widgets. Another problem is that I'd have to host the files in App Engine's datastore and transfer them back and forth to a web server that will do operations on them (I need to process them), which adds traffic and slows the process, which worsens the user experience. Closure: It has a nice toolbar and a nice text field. I'm not sure how easy it is to use. Plus, I read an article that makes it sound really bad. Cappuccino: It has a very nice UI and a Mac feel. I'm planning to give my application a Mac feel anyway, so this would save me a lot of theming. But if I go for this option I won't be able to make use of HTML5's new features (especially working offline). Plain JS and HTML5: This gives me the most flexibility but is the hardest to work with. I'm sorry if this is subjective, but I really need help with this.

    Read the article

  • Sparse parameter selection using Genetic Algorithm

    - by bgbg
    Hello, I'm facing a parameter selection problem which I would like to solve using a Genetic Algorithm (GA): I'm supposed to select not more than 4 parameters out of 3000 possible ones. Using the binary chromosome representation seems like a natural choice. The evaluation function punishes too many "selected" attributes; if the number of attributes is acceptable, it then evaluates the selection. The problem is that in these sparse conditions the GA can hardly improve the population. Neither the average fitness nor the fitness of the "worst" individual improves over the generations. All I see is a slight (even tiny) improvement in the score of the best individual, which, I suppose, is a result of random sampling. Encoding the problem using indices of the parameters doesn't work either. This is most probably due to the fact that the chromosomes are directional, while the selection problem isn't (i.e. chromosomes [1, 2, 3, 4], [4, 3, 2, 1], [3, 2, 4, 1], etc. are identical). What problem representation would you suggest? P.S. If this matters, I use PyEvolve.
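
    Since PyEvolve is mentioned, here is a sketch of the index-based encoding with an order-insensitive fitness: scoring the *set* of genes rather than the list makes [1, 2, 3, 4] and [4, 3, 2, 1] identical to the evaluator, which removes the directionality objection. The G1DList setup follows PyEvolve's standard example; score_subset is a hypothetical stand-in for the real evaluation:

```python
from pyevolve import G1DList, GSimpleGA

def eval_func(chromosome):
    subset = frozenset(chromosome)        # order no longer matters
    if len(subset) < len(chromosome):     # duplicated indices: penalize hard
        return 0.0
    return score_subset(subset)           # hypothetical domain-specific scorer

genome = G1DList.G1DList(4)               # exactly four parameter slots
genome.setParams(rangemin=0, rangemax=2999)
genome.evaluator.set(eval_func)

ga = GSimpleGA.GSimpleGA(genome)
ga.evolve(freq_stats=50)
print(ga.bestIndividual())
```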

    Read the article

  • Tail-recursive merge sort in OCaml

    - by CFP
    Hello world! I’m trying to implement a tail-recursive list-sorting function in OCaml, and I’ve come up with the following code: let tailrec_merge_sort l = let split l = let rec _split source left right = match source with | [] -> (left, right) | head :: tail -> _split tail right (head :: left) in _split l [] [] in let merge l1 l2 = let rec _merge l1 l2 result = match l1, l2 with | [], [] -> result | [], h :: t | h :: t, [] -> _merge [] t (h :: result) | h1 :: t1, h2 :: t2 -> if h1 < h2 then _merge t1 l2 (h1 :: result) else _merge l1 t2 (h2 :: result) in List.rev (_merge l1 l2 []) in let rec sort = function | [] -> [] | [a] -> [a] | list -> let left, right = split list in merge (sort left) (sort right) in sort l ;; Yet it seems that it is not actually tail-recursive, since I encounter a "Stack overflow during evaluation (looping recursion?)" error. Could you please help me spot the non-tail-recursive call in this code? I've searched quite a lot without finding it. Could it be the let binding in the sort function? Thanks a lot, CFP.
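
    For hunting this down, recent OCaml compilers accept a [@tailcall] annotation that makes the compiler warn whenever an annotated call is not actually compiled as a tail call; annotating each recursive call in the code above would pinpoint the offender. A toy sketch of the mechanism (note that merge (sort left) (sort right) inside sort is itself not a tail call, though its depth only grows logarithmically with the list length):

```ocaml
(* The first annotation compiles silently; the second triggers Warning 51
   (wrong-tailcall-expectation) because of the pending [1 +]. *)
let rec len_tail acc = function
  | [] -> acc
  | _ :: tl -> (len_tail [@tailcall]) (acc + 1) tl   (* OK: tail position *)

let rec len = function
  | [] -> 0
  | _ :: tl -> 1 + (len [@tailcall]) tl              (* warns: not a tail call *)

let () = Printf.printf "%d %d\n" (len_tail 0 [1; 2; 3]) (len [1; 2; 3])
```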

    Read the article

  • Cannot add SourceSafe Database as Visual Studio 2010 source control.

    - by CletusLoomis
    My issue is that I cannot add a SourceSafe database for source control within Visual Studio 2010. Our team was initially using VSS for source control in Visual Studio 2010. During an evaluation of TFS, I switched my source control to TFS. It will be a few weeks before a decision is made on TFS, so I needed to switch my source control back to VSS. However, I'm now unable to add a SourceSafe database in Visual Studio. Steps to reproduce in Visual Studio 2010: 1) Access the 'Open SourceSafe Database' form via Tools-Options-Source Control-Plug-in Settings--Advanced or via File-Source Control. 2) The list of available databases is blank, so I choose 'Browse'. 3) I browse to the srcsafe.ini file for my VSS database and select it. 4) I'm prompted to confirm the Database Name - Click OK. 5) The database does not appear in the 'Open SourceSafe Database' form; the list of available databases is still blank. Note that I can add the database fine outside of Visual Studio using VSS directly. However, the databases I add via VSS do not appear in the Visual Studio forms. I'm suspicious that this is related to "downgrading" from TFS to VSS, which may not have been heavily tested at MS. Any assistance is appreciated.

    Read the article

  • How can I write a clean Repository without exposing IQueryable to the rest of my application?

    - by Simucal
    So, I've read all the Q&A's here on SO regarding the subject of whether or not to expose IQueryable to the rest of your project (see here, and here), and I've ultimately decided that I don't want to expose IQueryable to anything but my Model. Because IQueryable is tied to certain persistence implementations, I don't like the idea of locking myself into this. Similarly, I'm not sure how good I feel about classes further down the call chain, outside the repository, modifying the actual query. So, does anyone have any suggestions for how to write a clean and concise Repository without doing this? One problem I see is that my Repository will blow up with a ton of methods for the various things I need to filter my query on. Having a bunch of: IEnumerable<Product> GetProductsSinceDate(DateTime date); IEnumerable<Product> GetProductsByName(string name); IEnumerable<Product> GetProductsByID(int ID); If I were allowing IQueryable to be passed around, I could easily have a generic repository that looked like: public interface IRepository<T> where T : class { T GetById(int id); IQueryable<T> GetAll(); void InsertOnSubmit(T entity); void DeleteOnSubmit(T entity); void SubmitChanges(); } However, if you aren't using IQueryable, then methods like GetAll() aren't really practical, since lazy evaluation won't be taking place down the line. I don't want to return 10,000 records only to use 10 of them later. What is the answer here? In Conery's MVC Storefront he created another layer called the "Service" layer, which received IQueryable results from the repository and was responsible for applying various filters. Is this what I should do, or something similar? Have my repository return IQueryable but restrict access to it by hiding it behind a bunch of filter classes like GetProductByName, which will return a concrete type like IList or IEnumerable?
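
    One middle ground (a sketch, not the only answer): keep IQueryable private to the repository but accept an expression predicate. Filtering still composes into the SQL query, there is no GetProductsByX explosion, and only materialized results leave the data layer:

```csharp
using System;
using System.Collections.Generic;
using System.Data.Linq;
using System.Linq;
using System.Linq.Expressions;

public interface IRepository<T> where T : class
{
    IEnumerable<T> Find(Expression<Func<T, bool>> predicate);
    void InsertOnSubmit(T entity);
    void DeleteOnSubmit(T entity);
    void SubmitChanges();
}

public class Repository<T> : IRepository<T> where T : class
{
    private readonly DataContext _context;

    public Repository(DataContext context) { _context = context; }

    public IEnumerable<T> Find(Expression<Func<T, bool>> predicate)
    {
        // The predicate is translated to SQL here, inside the repository;
        // ToList() materializes the rows, so IQueryable never escapes.
        return _context.GetTable<T>().Where(predicate).ToList();
    }

    public void InsertOnSubmit(T entity) { _context.GetTable<T>().InsertOnSubmit(entity); }
    public void DeleteOnSubmit(T entity) { _context.GetTable<T>().DeleteOnSubmit(entity); }
    public void SubmitChanges() { _context.SubmitChanges(); }
}

// Usage: repo.Find(p => p.Name == name) or repo.Find(p => p.Created >= date)
```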

    Read the article

  • SCM for Xcode?

    - by Gregor
    I am developing an application for the Mac as a small team (me + another person) effort. We are located in different cities and have started to see the need for solid source control management. Neither of us has any experience with this, and both of us are relatively new to Cocoa/Obj-C/Xcode (but we do have C knowledge). Does anyone have any recommendations as to which SCM system to choose? I understand that a lot of people are using Subversion, which is also supported in Xcode 3.1. Does anyone have experience with using Subversion through Xcode? Or is it a better option to choose a standalone GUI alternative, such as Versions? Grateful for any input on this. Gregor Tomasevic, Sweden Update/personal experiences: Since this post, we have tried Versions and Cornerstone (both of which are SVN GUI clients), as well as Xcode's built-in support for SVN. We were not particularly pleased with Versions, which seemed to have some problems with committing unversioned files/build files. The built-in SVN support in Xcode works quite well, although it probably has limitations that we have still not run into. Cornerstone is both simple to use and powerful, and does not seem to suffer from the problems we encountered with Versions. So far, we have just tried committing, updating the repo, checking out the latest/previous versions of our files, and worked some with file comparison. It might be a whole different ball game once you start working extensively with branching, an area in which, we have been told, both these GUI clients might have some weaknesses. For what it's worth (and with only days of evaluation), Cornerstone seems to be a somewhat better alternative, although for simpler SCM, Xcode works well too. Thanks for all the comments.

    Read the article

  • Which key:value store to use with Python?

    - by Kurt
    So I'm looking at various key:value (where value is either strictly a single value or possibly an object) stores for use with Python, and have found a few promising ones. I have no specific requirements yet because I am in the evaluation phase. I'm looking for what's good, what's bad, what corner cases these things handle well or don't, etc. I'm sure some of you have already tried them out, so I'd love to hear your findings/problems/etc. on the various key:value stores with Python. I'm looking primarily at: memcached - http://www.danga.com/memcached/ python clients: http://pypi.python.org/pypi/python-memcached/1.40 http://www.tummy.com/Community/software/python-memcached/ CouchDB - http://couchdb.apache.org/ python clients: http://code.google.com/p/couchdb-python/ Tokyo Tyrant - http://1978th.net/tokyotyrant/ python clients: http://code.google.com/p/pytyrant/ Lightcloud - http://opensource.plurk.com/LightCloud/ Based on Tokyo Tyrant, written in Python Redis - http://code.google.com/p/redis/ python clients: http://pypi.python.org/pypi/txredis/0.1.1 MemcacheDB - http://memcachedb.org/ So I started benchmarking (simply inserting keys and reading them) using a simple count to generate numeric keys and a value of "A short string of text": memcached: CentOS 5.3/python-2.4.3-24.el5_3.6, libevent 1.4.12-stable, memcached 1.4.2 with default settings, 1 gig memory, 14,000 inserts per second, 16,000 reads per second. No real optimization, nice. memcachedb claims on the order of 17,000 to 23,000 inserts per second, 44,000 to 64,000 reads per second. I'm also wondering how the others stack up speed-wise.
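
    For what it's worth, the memcached numbers above can be reproduced with a micro-benchmark along these lines (a sketch using the python-memcached client; the server address and key count are arbitrary):

```python
import time
import memcache

mc = memcache.Client(["127.0.0.1:11211"])
N = 100000
value = "A short string of text"

start = time.time()
for i in range(N):
    mc.set(str(i), value)          # simple numeric keys, as described above
print("inserts/sec: %.0f" % (N / (time.time() - start)))

start = time.time()
for i in range(N):
    mc.get(str(i))
print("reads/sec: %.0f" % (N / (time.time() - start)))
```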

    Read the article

  • Process results of conditional split in SSIS

    - by Robert
    I have a Data Flow Task and am connecting to a database via an OLE DB Source component to extract data. This data feeds into a Conditional Split component to separate the data based on a simple expression. After the evaluation of this expression, the data will end up in either of two locations: LocationA or LocationB. Alright, I have that all set up and working properly. Once the data is separated into these two locations, additional processing is to be done on the records. Here's where I am stuck: I need the processing of records in LocationA to occur before the processing of records in LocationB. Is there a way to set the precedence of which tasks occur before others? If not, what is the best way to handle this? I was thinking I may need to write the data in LocationA and LocationB back out to the database and create a new data flow task in the control flow to handle the order in which these records must be dealt with. Any help is greatly appreciated!

    Read the article

  • CLR Stored Procedures

    - by Paul Hatcherian
    In an ASP.NET application, I have a small number of fairly complex, frequently used operations to execute against a database. In these operations, one or more of several tables needs updates or inserts based on a logical evaluation of both input parameters and the values of certain tables. I've maintained a separation of logic and data access, so the operation currently looks like this: 1) Request received from client; 2) Business layer invokes data layer to retrieve data from database; 3) Business layer processes the result and determines which operation to execute; 4) Business layer invokes the appropriate data operation; 5) Response sent to client. As you can see, the client is kept waiting while two separate requests are made to the database. In searching for a solution to this, I've found CLR stored procedures, but I'm not sure if I have the right idea about what they are useful for. I have written a replacement for the code above which essentially places steps 2-4 in a CLR SP. My understanding is that the SP will be executed locally by SQL Server and result in only one call being made to the server. My initial benchmark tests show this is actually orders of magnitude slower than my original code, but I attribute that to recompilation of the code, which I have not worked out yet, and/or some flaw in my environment. My question is basically: is this the intended use of CLR SPs, or am I missing something? I realize this is a bit of a compromise structurally, so if there's a better way to do it I'd love to hear it.
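
    For concreteness, the basic shape of a CLR stored procedure looks like the sketch below; the detail most relevant to the scenario above is the context connection, which runs the queries in-process against the hosting SQL Server rather than opening a network connection per call. Table and procedure names are placeholders:

```csharp
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class Procedures
{
    [SqlProcedure]
    public static void ProcessRequest(int inputId)
    {
        // "context connection=true" = run on the caller's own connection,
        // inside the server process; no separate round trip to establish.
        using (var conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            var cmd = new SqlCommand(
                "SELECT Status FROM dbo.Requests WHERE Id = @id", conn);
            cmd.Parameters.AddWithValue("@id", inputId);
            object status = cmd.ExecuteScalar();
            // ...evaluate the result, then run the appropriate
            // UPDATE/INSERT against the same connection...
        }
    }
}
```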

    Read the article

  • Conditions with common logic: question of style, readability, efficiency, ...

    - by cdonner
    I have conditional logic that requires pre-processing that is common to each of the conditions (instantiating objects, database lookups etc). I can think of 3 possible ways to do this, but each has a flaw: Option 1 if A prepare processing do A logic else if B prepare processing do B logic else if C prepare processing do C logic // else do nothing end The flaw with option 1 is that the expensive code is redundant. Option 2 prepare processing // not necessary unless A, B, or C if A do A logic else if B do B logic else if C do C logic // else do nothing end The flaw with option 2 is that the expensive code runs even when neither A, B or C is true Option 3 if (A, B, or C) prepare processing end if A do A logic else if B do B logic else if C do C logic end The flaw with option 3 is that the conditions for A, B, C are being evaluated twice. The evaluation is also costly. Now that I think about it, there is a variant of option 3 that I call option 4: Option 4 if (A, B, or C) prepare processing if A set D else if B set E else if C set F end end if D do A logic else if E do B logic else if F do C logic end While this does address the costly evaluations of A, B, and C, it makes the whole thing more ugly and I don't like it. How would you rank the options, and are there any others that I am not seeing?
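
    A variant of option 4 with the flag-setting flattened out (a sketch in C; evalA, prepare_processing, etc. are placeholder names): each costly condition is evaluated at most once, short-circuiting skips B and C whenever an earlier condition holds, and the preparation runs only when some branch will actually use it.

```c
#include <stdbool.h>

void handle(void)
{
    bool a = evalA();               /* evaluated exactly once            */
    bool b = !a && evalB();         /* evaluated only if A was false     */
    bool c = !a && !b && evalC();   /* evaluated only if A and B failed  */

    if (a || b || c)
        prepare_processing();       /* the expensive common setup        */

    if (a)      do_a_logic();
    else if (b) do_b_logic();
    else if (c) do_c_logic();
}
```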

    Read the article

  • Techniques for sharing a value among classes in a program

    - by Kenneth Cochran
    I'm using Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData) + "\MyProgram" as the path to store several files used by my program. I'd like to avoid pasting the same snippet of code all over my application. I need to ensure that: the path cannot be accidentally changed once it has been set; the classes that need it have access to it. I've considered: making it a singleton; using constructor dependency injection; using property dependency injection; using AOP to create the path where it's needed. Each has pros and cons. The singleton is everyone's favorite whipping boy. I'm not opposed to using one, but there are valid reasons to avoid it if possible. I'm already heavily using constructor injection through Castle Windsor. But this is a path string, and Windsor doesn't handle system-type dependencies very gracefully. I could always wrap it in a class, but that seems like overkill for something as simple as passing around a string value. In any case this route would add yet another constructor argument to each class where it is used. The problem I see with property injection in this case is that there is a large amount of indirection from where the value is set to where it is needed. I would need a very long line of middlemen to reach all the places where it's used. AOP looks promising, and I'm planning on using AOP for logging anyway, so this at least sounds like a simple solution. Are there any other options I haven't considered? Am I off base with my evaluation of the options I have considered?
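
    For comparison, the lowest-ceremony option (a sketch): a static readonly field is computed once when the type loads and cannot be reassigned afterwards, which covers the "cannot be accidentally changed" requirement without a full singleton. Path.Combine also avoids hand-concatenating the separator:

```csharp
using System;
using System.IO;

public static class AppPaths
{
    // Initialized once at type load; readonly forbids later reassignment.
    public static readonly string DataDirectory = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
        "MyProgram");
}
```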

    Read the article

  • using indexer to retrieve Linq to SQL object from datastore

    - by fearofawhackplanet
    class UserDatastore : IUserDatastore { ... public IUser this[Guid userId] { get { User user = (from u in _dataContext.Users where u.Id == userId select u).FirstOrDefault(); return user; } } ... } One of the developers on our team is arguing that an indexer in the above situation is not appropriate and that a GetUser(Guid id) method should be preferred. The arguments are that: 1) We aren't indexing into an in-memory collection; the indexer is basically performing a hidden SQL query. 2) Using a Guid in an indexer is bad (FxCop flagged this also). 3) Returning null from an indexer isn't normal behaviour. 4) An API user generally wouldn't expect any of this behaviour. I agree to an extent with (most of) these points. But I'm also inclined to argue that one of the characteristics of Linq is to abstract the database access to make it appear that you're simply working with a bunch of collections, even though the lazy evaluation paradigm means those collections aren't evaluated until you run a query over them. It doesn't seem inconsistent to me to access the datastore here in the same manner as if it were a concrete in-memory collection. Also, bearing in mind this is an inherited codebase which uses this pattern extensively and consistently, is it worth the refactoring? I accept that it might have been better to use a Get method from the start, but I'm not yet convinced that it's completely incorrect to be using an indexer. I'd be interested to hear all opinions, thanks.
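
    If the team does settle on a method, a TryGet shape (a sketch against the same datastore) keeps the query explicit and makes the possibly-missing result part of the signature instead of a surprise null behind indexer syntax:

```csharp
public bool TryGetUser(Guid userId, out IUser user)
{
    user = (from u in _dataContext.Users
            where u.Id == userId
            select u).FirstOrDefault();
    return user != null;
}
```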

    Read the article

  • Is functional GUI programming possible?

    - by eman
    I've recently caught the FP bug (trying to learn Haskell), and I've been really impressed with what I've seen so far (first-class functions, lazy evaluation, and all the other goodies). I'm no expert yet, but I've already begun to find it easier to reason "functionally" than imperatively for basic algorithms (and I'm having trouble going back when I have to). The one area where current FP seems to fall flat, however, is GUI programming. The Haskell approach seems to be to just wrap imperative GUI toolkits (such as GTK+ or wxWidgets) and to use "do" blocks to simulate an imperative style. I haven't used F#, but my understanding is that it does something similar using OOP with .NET classes. Obviously, there's a good reason for this--current GUI programming is all about IO and side effects, so purely functional programming isn't possible with most current frameworks. My question is: is it possible to have a functional approach to GUI programming? I'm having trouble imagining what this would look like in practice. Does anyone know of any frameworks, experimental or otherwise, that try this sort of thing (or even any frameworks that are designed from the ground up for a functional language)? Or is the solution to just use a hybrid approach, with OOP for the GUI parts and FP for the logic? (I'm just asking out of curiosity--I'd love to think that FP is "the future," but GUI programming seems like a pretty large hole to fill.)
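
    As a toy sketch of what "functional GUI" can mean with no real toolkit involved: the view is a pure function from application state to a description of the interface, and events are pure state transitions; only a thin outer loop would perform IO. This is roughly the shape Elm and several Haskell FRP libraries build on (the types here are invented for illustration):

```haskell
data Event = Clicked | Typed Char

data View = Button String | Label String | Column [View]
  deriving Show

-- Pure: state in, view description out. No widget is ever mutated.
render :: Int -> View
render n = Column [Label ("count = " ++ show n), Button "+1"]

-- Pure event handling: old state to new state.
step :: Event -> Int -> Int
step Clicked n = n + 1
step _       n = n

main :: IO ()
main = print (render (foldr step 0 [Clicked, Clicked]))
```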

    Read the article

  • I have two choices of Master's classes this fall. Which is the most useful?

    - by ahplummer
    (For background purposes and context): I am a Software Engineer and currently manage other Software Engineers. I kind of wear two hats right now: one as a programmer and one as a 'team lead'. In this regard, I've started going back to school to get my Master's degree with an emphasis in Computer Science. I already have a Bachelor's in Computer Science and have been working in the field for about 13 years. Our primary development environment is a Windows environment, writing in .NET, Delphi, and SQL Server. Choice #1: CST 798 DATA VISUALIZATION Course Description: Basically, this is a course on the "Processing" language: http://processing.org/ Choice #2: CST 711 INFORMATICS Course Description (from the catalog): Informatics is the science of the use and processing of data, information, and knowledge. This course covers a variety of applied issues from information technology, information management at a variety of levels, ranging from simple data entry, to the creation, design and implementation of new information systems, to the development of models. Topics include basic information representation, processing, searching, and organization; evaluation and analysis of information; Internet-based information access tools; ethics and economics of information sharing.

    Read the article

  • Multi-dimensional array edge/border conditions

    - by kirbuchi
    Hi, I'm iterating over a 3-dimensional array (which is an image with 3 values for each pixel) to apply a 3x3 filter to each pixel as follows: //For each value on the image for (i=0;i<3*width*height;i++){ //For each filter value for (j=0;j<9;j++){ if (notOutsideEdgesCondition){ *(**(outArray)+i)+= *(**(pixelArray)+i-1+(j%3)) * (*(filter+j)); } } } I'm using pointer arithmetic because if I used array notation I'd have 4 loops, and I'm trying to have the least possible number of loops. My problem is that my notOutsideEdgesCondition is getting quite out of hand because I have to consider 8 border cases. I have handled the following conditions: Left column: ((i%width)==0) && (j%3==0) Right column: ((i-1)%width ==0) && (i>1) && (j%3==2) Upper row: (i<width) && (j<2) Lower row: (i>(width*height-width)) && (j>5) and I still have to consider the 4 corner cases, which will have longer expressions. At this point I've stopped and asked myself if this is the best way to go, because if I have a 5-line-long conditional evaluation it'll not only be truly painful to debug but will also slow down the inner loop. That's why I come to you to ask if there's a known algorithm to handle these cases or if there's a better approach for my problem. Thanks a lot.
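
    One standard way to make the border cases disappear entirely (a sketch assuming row-major, 3-channel interleaved pixels and a full 3x3 filter): clamp each neighbour coordinate to the image, which replicates edge pixels and removes every special case from the hot loop. The other common option is to run an unchecked loop over the interior and handle the one-pixel border in a separate pass.

```c
static inline int clampi(int v, int lo, int hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}

void convolve3x3(const float *in, float *out, const float *filter,
                 int width, int height)
{
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            for (int c = 0; c < 3; c++) {
                float acc = 0.0f;
                for (int j = 0; j < 9; j++) {
                    /* Neighbour coordinates, clamped to the image bounds */
                    int nx = clampi(x + (j % 3) - 1, 0, width - 1);
                    int ny = clampi(y + (j / 3) - 1, 0, height - 1);
                    acc += in[3 * (ny * width + nx) + c] * filter[j];
                }
                out[3 * (y * width + x) + c] = acc;
            }
}
```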

    Read the article

  • can I disable the "(Type e to repeat macro)" message in emacs?

    - by lindes
    Hi there, So, I've finally taken the plunge, and have gotten to the state where I'm quite happy to have switched from vi and vim to emacs... I've been putting stuff in my .emacs file, learning how to evaluate things (not to mention becoming familiar with movement commands), etc. etc. etc. And now I have a problem with a require line in my .emacs file (a require statement*), which bombs out when I launch emacs (and generally fails to work). So, this led me to the following situation: In the process of trying to debug the above, one of the steps I took was to open the file I was trying to require and evaluate it bit by bit, using C-M-f and C-x C-e (and later just M-x eval-buffer), which all worked fine. But along the way of the section-by-section, I got tired of typing all those, and so I recorded a keyboard macro... C-x ( C-M-f C-x C-e C-x ) and then C-x e... which gave me a message in the minibuffer (I think I'm using the right name), saying (Type e to repeat macro). Which meant I could no longer see the resultant value of the evaluation of each section of code... which, while not critical in this case, I liked having. Which leads me to the actual question: Is there a way to disable that message, and/or to cause the minibuffer to show multiple lines at once? I know about the *Messages* buffer, and that could have helped; I'm just wondering if there's a way to either disable that message or otherwise make it coexist with other messages. Any suggestions? Thanks! lindes * - the problem at hand, which is not really my question, is that (require 'ruby-mode/ruby-mode) fails, even though emacs is definitely and successfully (per system call tracing) opening and reading the ruby-mode.el file. I presume this is because the provide line says just 'ruby-mode. I've found a solution for this, but if anyone can point me to any "best practices", I'd appreciate it.
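
    One workaround sketch (whether the prompt itself can be switched off is a separate question): name the last recorded macro and give it its own key. Invoking the named command doesn't go through the C-x e repeat prompt, so each evaluation's result stays visible in the echo area. The macro and key names below are arbitrary:

```elisp
;; After recording with C-x ( ... C-x ):
(kmacro-name-last-macro 'my-eval-next-sexp)        ; interactively: C-x C-k n
(global-set-key (kbd "C-c e") 'my-eval-next-sexp)  ; repeat with C-c e
```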

    Read the article

  • Creating Emulated iSCSI Target in a Lab/Testing Environment using Windows Server 2008 R2

    - by Brian McCleary
    We have a single server running Windows Server 2008 with Hyper-V installed, running 5 virtual machines. I have purchased a second Dell R805 server so that we can create a failover cluster with the R805 that is currently in production. Right now, our R805 connects via iSCSI to an MD3000i iSCSI SAN. Before we try to roll out the second server and clustering to our production environment, I want to be able to test and "play with" the clustering features in our lab first. The problem is that I don't want to spend a couple thousand dollars on another iSCSI SAN server just for testing. I already have two servers in my lab that are installed with Windows Server 2008 R2 64-bit (one is the R805 and the other is a spare desktop that was lying around) with the Hyper-V role enabled, so they should be ready to test with, but I don't have an iSCSI target to use as the Cluster Shared Volume. Is there any way to install, either on a Hyper-V image or on an external spare computer that we have, some sort of emulated iSCSI target? In our lab, we obviously don't need a real SAN, just something we can use to test setting up the clustering properly outside of our production environment. Any advice is appreciated. FYI - I have read Jose Barreto's blog on WUDSS at http://blogs.technet.com/josebda/archive/2008/01/07/installing-the-evaluation-version-of-wudss-2003-refresh-and-the-microsoft-iscsi-software-target-version-3-1-on-a-vm.aspx, but it seems awfully complex.

    Read the article

  • What if you used the wrong language?

    - by HS
    A reply to another question made me remember a project from some years ago when it turned out that Java was not the right tool to use. I typically only learn a new language when I have a problem that it solves better than the ones I already know. [...] Then I write whatever program I wanted to learn that language for in the first place. [...] By the time I've gotten my target program written, I've usually got a decent handle on the language, not to mention any other features it has, and I've got other ideas to use it for. I did just that back then with Java, because the client thought it to be the right language to use (platform independence) and the initial evaluation confirmed that. However, much later in the project there were some issues (I can't really remember all the details by now). So, the project that started as a nice learning experience turned into a nightmare toward the end. I was on the brink of switching over to my trusted C++ and doing a complete rewrite. The client was not so much of a problem to convince back then, but my supervisor was strongly opposed because of all the work that had already gone into the Java version. In hindsight, he was right, and the project was completed more or less with the intended features kind of working, but it is the project that I am least proud of. Long story short: what do you think - when is it too much, and when is the switch to another technology worthwhile? I personally would estimate the point of no return to be around 50% of the planned effort, but I really want to know if anyone has real experience with such a switch. And to answer the inevitable question: I do not really care whether the technology switched to is proven or another new thing. The latter would basically need more initial scrutiny based on the past experiences in the problematic project.

    Read the article

  • Best way to detect similar email addresses?

    - by Chris
    I have a list of ~20,000 email addresses, some of which I know to be fraudulent attempts to get around a "1 per e-mail" limit ([email protected], [email protected], [email protected], etc...). I want to find similar email addresses for evaluation. Currently I'm using a Levenshtein algorithm to check each e-mail against the others in the list and report any pair with an edit distance of 2 or less. However, this is painfully slow. Is there a more efficient approach? The test code I'm using now is: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; using System.Threading; namespace LevenshteinAnalyzer { class Program { const string INPUT_FILE = @"C:\Input.txt"; const string OUTPUT_FILE = @"C:\Output.txt"; static void Main(string[] args) { var inputWords = File.ReadAllLines(INPUT_FILE); var outputWords = new SortedSet<string>(); for (var i = 0; i < inputWords.Length; i++) { if (i % 100 == 0) Console.WriteLine("Processing record #" + i); var word1 = inputWords[i].ToLower(); for (var n = i + 1; n < inputWords.Length; n++) { if (i == n) continue; var word2 = inputWords[n].ToLower(); if (word1 == word2) continue; if (outputWords.Contains(word1)) continue; if (outputWords.Contains(word2)) continue; var distance = LevenshteinAlgorithm.Compute(word1, word2); if (distance <= 2) { outputWords.Add(word1); outputWords.Add(word2); } } } File.WriteAllLines(OUTPUT_FILE, outputWords.ToArray()); Console.WriteLine("Found {0} words", outputWords.Count); } } }
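
    A cheap pre-filter that drops straight into the harness above (a sketch): Levenshtein distance can never be smaller than the difference in string lengths, so after sorting by length the inner loop can stop as soon as the gap exceeds 2, skipping the full O(m*n) computation for most pairs:

```csharp
var byLength = inputWords
    .Select(w => w.ToLower())
    .Distinct()
    .OrderBy(w => w.Length)
    .ToArray();

for (var i = 0; i < byLength.Length; i++)
{
    for (var n = i + 1; n < byLength.Length; n++)
    {
        // Sorted by length: once the gap exceeds 2, no later candidate
        // for this i can be within edit distance 2 either.
        if (byLength[n].Length - byLength[i].Length > 2) break;

        if (LevenshteinAlgorithm.Compute(byLength[i], byLength[n]) <= 2)
        {
            outputWords.Add(byLength[i]);
            outputWords.Add(byLength[n]);
        }
    }
}
```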

    Read the article
