Search Results

Search found 5933 results on 238 pages for 'mass storage'.


  • How do I make Linux recognize a new SATA /dev/sda drive I hot swapped in without rebooting?

    - by Philip Durbin
    Hot swapping out a failed SATA /dev/sda drive worked fine, but when I went to swap in a new drive, it wasn't recognized:

      [root@fs-2 ~]# tail -18 /var/log/messages
      May 5 16:54:35 fs-2 kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x50000 action 0xe frozen
      May 5 16:54:35 fs-2 kernel: ata1: SError: { PHYRdyChg CommWake }
      May 5 16:54:40 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
      May 5 16:54:45 fs-2 kernel: ata1: device not ready (errno=-16), forcing hardreset
      May 5 16:54:45 fs-2 kernel: ata1: soft resetting link
      May 5 16:54:50 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
      May 5 16:54:55 fs-2 kernel: ata1: SRST failed (errno=-16)
      May 5 16:54:55 fs-2 kernel: ata1: soft resetting link
      May 5 16:55:00 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
      May 5 16:55:05 fs-2 kernel: ata1: SRST failed (errno=-16)
      May 5 16:55:05 fs-2 kernel: ata1: soft resetting link
      May 5 16:55:10 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
      May 5 16:55:40 fs-2 kernel: ata1: SRST failed (errno=-16)
      May 5 16:55:40 fs-2 kernel: ata1: limiting SATA link speed to 1.5 Gbps
      May 5 16:55:40 fs-2 kernel: ata1: soft resetting link
      May 5 16:55:45 fs-2 kernel: ata1: SRST failed (errno=-16)
      May 5 16:55:45 fs-2 kernel: ata1: reset failed, giving up
      May 5 16:55:45 fs-2 kernel: ata1: EH complete

    I tried a couple things to make the server find the new /dev/sda, such as rescan-scsi-bus.sh, but they didn't work:

      [root@fs-2 ~]# echo "---" > /sys/class/scsi_host/host0/scan
      -bash: echo: write error: Invalid argument
      [root@fs-2 ~]#
      [root@fs-2 ~]# /root/rescan-scsi-bus.sh -l
      [snip]
      0 new device(s) found.
      0 device(s) removed.
      [root@fs-2 ~]#
      [root@fs-2 ~]# ls /dev/sda
      ls: /dev/sda: No such file or directory

    I ended up rebooting the server. /dev/sda was recognized, I fixed the software RAID, and everything is fine now. But for next time, how can I make Linux recognize a new SATA drive I have hot swapped in without rebooting? The operating system in question is RHEL5.3:

      [root@fs-2 ~]# cat /etc/redhat-release
      Red Hat Enterprise Linux Server release 5.3 (Tikanga)

    The hard drive is a Seagate Barracuda ES.2 SATA 3.0-Gb/s 500-GB, model ST3500320NS.
Here is the lscpi output: [root@fs-2 ~]# lspci 00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2) 00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3) 00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3) 00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1) 00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2) 00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1) 00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2) 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:0a.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0d.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0e.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control 00:19.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration 00:19.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map 00:19.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller 00:19.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control 03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02) 04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) 04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) Update: In perhaps a dozen cases, we've been forced to reboot servers because hot swap hasn't "just worked." Thanks for the answers to look more into the SATA controller. I've included the lspci output for the problematic system above (hostname: fs-2). I could still use some help understanding what exactly isn't supported hardware-wise in terms of hot swap for that system. Please let me know what other output besides lspci might be useful. The good news is that hot swap "just worked" today on one of our servers (hostname: www-1), which is very rare for us. 
Here is the lspci output: [root@www-1 ~]# lspci 00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2) 00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3) 00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3) 00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1) 00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2) 00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1) 00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3) 00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2) 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3) 00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3) 00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control 00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control 00:19.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration 00:19.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map 00:19.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller 00:19.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control 00:19.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control 03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02) 04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) 04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06) 09:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS (rev 04)
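    For next time, a minimal sketch of the sysfs-based rescan that is commonly used with libata drivers; the scan file expects three fields (channel, target, LUN, with "-" as a wildcard), which is why the "---" string above was rejected with "Invalid argument". The host number (host0) and device name (sda) are assumptions that have to be matched to the port the swapped drive sits on:

      # Remove the stale device node left behind by the failed disk, if one remains
      echo 1 > /sys/block/sda/device/delete

      # Rescan the SATA/SCSI host; the three wildcards mean "any channel, target, LUN"
      echo "- - -" > /sys/class/scsi_host/host0/scan

      # Check whether the kernel picked up the new disk
      dmesg | tail
      cat /proc/partitions

    If the link itself stays frozen the way the ata1 log above shows, the controller or driver may simply not support hot plug, and no amount of rescanning will help.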


  • MySQL "ERROR 1005 (HY000): Can't create table 'foo.#sql-12c_4' (errno: 150)"

    - by Ankur Banerjee
    Hi, I was working on creating some tables in database foo, but every time I end up with errno 150 regarding the foreign key. Firstly, here's my code for creating tables: CREATE TABLE Clients ( client_id CHAR(10) NOT NULL , client_name CHAR(50) NOT NULL , provisional_license_num CHAR(50) NOT NULL , client_address CHAR(50) NULL , client_city CHAR(50) NULL , client_county CHAR(50) NULL , client_zip CHAR(10) NULL , client_phone INT NULL , client_email CHAR(255) NULL , client_dob DATETIME NULL , test_attempts INT NULL ); CREATE TABLE Applications ( application_id CHAR(10) NOT NULL , office_id INT NOT NULL , client_id CHAR(10) NOT NULL , instructor_id CHAR(10) NOT NULL , car_id CHAR(10) NOT NULL , application_date DATETIME NULL ); CREATE TABLE Instructors ( instructor_id CHAR(10) NOT NULL , office_id INT NOT NULL , instructor_name CHAR(50) NOT NULL , instructor_address CHAR(50) NULL , instructor_city CHAR(50) NULL , instructor_county CHAR(50) NULL , instructor_zip CHAR(10) NULL , instructor_phone INT NULL , instructor_email CHAR(255) NULL , instructor_dob DATETIME NULL , lessons_given INT NULL ); CREATE TABLE Cars ( car_id CHAR(10) NOT NULL , office_id INT NOT NULL , engine_serial_num CHAR(10) NULL , registration_num CHAR(10) NULL , car_make CHAR(50) NULL , car_model CHAR(50) NULL ); CREATE TABLE Offices ( office_id INT NOT NULL , office_address CHAR(50) NULL , office_city CHAR(50) NULL , office_County CHAR(50) NULL , office_zip CHAR(10) NULL , office_phone INT NULL , office_email CHAR(255) NULL ); CREATE TABLE Lessons ( lesson_num INT NOT NULL , client_id CHAR(10) NOT NULL , date DATETIME NOT NULL , time DATETIME NOT NULL , milegage_used DECIMAL(5, 2) NULL , progress CHAR(50) NULL ); CREATE TABLE DrivingTests ( test_num INT NOT NULL , client_id CHAR(10) NOT NULL , test_date DATETIME NOT NULL , seat_num INT NOT NULL , score INT NULL , test_notes CHAR(255) NULL ); ALTER TABLE Clients ADD PRIMARY KEY (client_id); ALTER TABLE Applications ADD PRIMARY KEY (application_id); ALTER TABLE Instructors ADD PRIMARY KEY (instructor_id); ALTER TABLE Offices ADD PRIMARY KEY (office_id); ALTER TABLE Lessons ADD PRIMARY KEY (lesson_num); ALTER TABLE DrivingTests ADD PRIMARY KEY (test_num); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Offices FOREIGN KEY (office_id) REFERENCES Offices (office_id); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Instructors FOREIGN KEY (instructor_id) REFERENCES Instructors (instructor_id); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Cars FOREIGN KEY (car_id) REFERENCES Cars (car_id); ALTER TABLE Lessons ADD CONSTRAINT FK_Lessons_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id); ALTER TABLE Cars ADD CONSTRAINT FK_Cars_Offices FOREIGN KEY (office_id) REFERENCES Offices (office_id); ALTER TABLE Clients ADD CONSTRAINT FK_DrivingTests_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id); These are the errors that I get: mysql> ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Cars FOREIGN KEY (car_id) REFERENCES Cars (car_id); ERROR 1005 (HY000): Can't create table 'foo.#sql-12c_4' (errno: 150) I ran SHOW ENGINE INNODB STATUS which gives a more detailed error description: ------------------------ LATEST FOREIGN KEY ERROR ------------------------ 100509 20:59:49 Error in foreign key constraint of table practice9/#sql-12c_4: FOREIGN KEY (car_id) REFERENCES Cars (car_id): Cannot find an index in the 
referenced table where the referenced columns appear as the first columns, or column types in the table and the referenced table do not match for constraint. Note that the internal storage type of ENUM and SET changed in tables created with >= InnoDB-4.1.12, and such columns in old tables cannot be referenced by such columns in new tables. See http://dev.mysql.com/doc/refman/5.1/en/innodb-foreign-key-constraints.html for correct foreign key definition. ------------ I searched around on StackOverflow and elsewhere online - came across a helpful blog post here with pointers on how to resolve this error - but I can't figure out what's going wrong. Any help would be appreciated!
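    For what it's worth, the InnoDB message above is usually literal: the referenced column must be the first column of some index on the referenced table, and in the script above Cars, unlike the other tables, never gets a primary key. A hedged sketch of one way to satisfy that requirement before re-running the failing statement (database name and credentials are assumptions; table and column names are taken from the question):

      mysql -u root -p foo -e "ALTER TABLE Cars ADD PRIMARY KEY (car_id);"
      mysql -u root -p foo -e "ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Cars
                               FOREIGN KEY (car_id) REFERENCES Cars (car_id);"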


  • Apache2 Segfault - need help interpreting this coredump (suspect cause is memcache / php session related)

    - by WayneDV
    Three Apache2 web servers running a PHP 5.2.3 web site. We're using Memcache to cache rendered pages but also as the storage engine of the PHP Sessions. At peak traffic times we're getting Apache segmentation faults on all three web servers and all HTTPD child processes segfault. My gut tells me that the increased Memcache traffic is stopping PHP sessions from being created or cleaned up and thus the processes die. Is it possible for someone to confirm that from the following? : #0 _zend_mm_free_int (heap=0x7fb67a075820, p=0x7fb67a011538) at /usr/src/debug/php-5.3.3/Zend/zend_alloc.c:2018 #1 0x00007fb665d02e82 in mmc_buffer_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:50 #2 mmc_request_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:169 #3 0x00007fb665d031ea in mmc_pool_free (pool=0x7fb67a00e458) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:917 #4 0x00007fb665d0a2f1 in ps_close_memcache (mod_data=0x7fb66d625440) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_session.c:185 #5 0x00007fb66d1b0935 in php_session_save_current_state () at /usr/src/debug/php-5.3.3/ext/session/session.c:625 #6 php_session_flush () at /usr/src/debug/php-5.3.3/ext/session/session.c:1517 #7 0x00007fb66d1b0c1b in zm_deactivate_session (type=<value optimized out>, module_number=<value optimized out>) at /usr/src/debug/php-5.3.3/ext/session/session.c:2171 #8 0x00007fb66d2a719c in module_registry_cleanup (module=<value optimized out>) at /usr/src/debug/php-5.3.3/Zend/zend_API.c:2150 #9 0x00007fb66d2b1994 in zend_hash_reverse_apply (ht=0x7fb66d629d60, apply_func=0x7fb66d2a7180 <module_registry_cleanup>) at /usr/src/debug/php-5.3.3/Zend/zend_hash.c:755 #10 0x00007fb66d2a5c0d in zend_deactivate_modules () at /usr/src/debug/php-5.3.3/Zend/zend.c:866 #11 0x00007fb66d2541b5 in php_request_shutdown (dummy=<value optimized out>) at /usr/src/debug/php-5.3.3/main/main.c:1607 #12 0x00007fb66d32e037 in php_apache_request_dtor (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:509 #13 php_handler (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:681 #14 0x00007fb6784166f0 in ap_run_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:158 #15 0x00007fb678419f58 in ap_invoke_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:372 #16 0x00007fb6784254f0 in ap_process_request (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/modules/http/http_request.c:282 #17 0x00007fb678422418 in ap_process_http_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/modules/http/http_core.c:190 #18 0x00007fb67841e1b8 in ap_run_process_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/server/connection.c:43 #19 0x00007fb678429f4b in child_main (child_num_arg=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:662 #20 0x00007fb67842a21a in make_child (s=0x7fb679cd7860, slot=153) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:758 #21 0x00007fb67842aea4 in perform_idle_server_maintenance (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:893 #22 ap_mpm_run (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:1097 #23 0x00007fb678402890 in main 
(argc=1, argv=0x7fff6fecacb8) at /usr/src/debug/httpd-2.2.15/server/main.c:740 PHP.INI Follows: [PHP] engine = On short_open_tag = On asp_tags = Off precision = 14 y2k_compliance = On output_buffering = 4096 zlib.output_compression = Off implicit_flush = Off unserialize_callback_func = serialize_precision = 100 allow_call_time_pass_reference = Off safe_mode = Off safe_mode_gid = Off safe_mode_include_dir = safe_mode_exec_dir = safe_mode_allowed_env_vars = PHP_ safe_mode_protected_env_vars = LD_LIBRARY_PATH disable_functions = disable_classes = expose_php = On max_execution_time = 30 max_input_time = 60 memory_limit = 128M error_reporting = E_ALL & ~E_DEPRECATED display_errors = Off display_startup_errors = Off log_errors = Off log_errors_max_len = 1024 ignore_repeated_errors = Off ignore_repeated_source = Off report_memleaks = On track_errors = Off html_errors = Off variables_order = "GPCS" request_order = "GP" register_globals = Off register_long_arrays = Off register_argc_argv = Off auto_globals_jit = On post_max_size = 8M magic_quotes_gpc = Off magic_quotes_runtime = Off magic_quotes_sybase = Off auto_prepend_file = auto_append_file = default_mimetype = "text/html" doc_root = user_dir = enable_dl = Off file_uploads = On upload_max_filesize = 2M allow_url_fopen = On allow_url_include = Off default_socket_timeout = 60 [Date] [filter] [iconv] [intl] [sqlite] [sqlite3] [Pcre] [Pdo] [Phar] [Syslog] define_syslog_variables = Off [mail function] SMTP = localhost smtp_port = 25 sendmail_path = /usr/sbin/sendmail -t -i mail.add_x_header = On [SQL] sql.safe_mode = Off [ODBC] odbc.allow_persistent = On odbc.check_persistent = On odbc.max_persistent = -1 odbc.max_links = -1 odbc.defaultlrl = 4096 odbc.defaultbinmode = 1 [MySQL] mysql.allow_persistent = On mysql.max_persistent = -1 mysql.max_links = -1 mysql.default_port = mysql.default_socket = mysql.default_host = mysql.default_user = mysql.default_password = mysql.connect_timeout = 60 mysql.trace_mode = Off [MySQLi] mysqli.max_links = -1 mysqli.default_port = 3306 mysqli.default_socket = mysqli.default_host = mysqli.default_user = mysqli.default_pw = mysqli.reconnect = Off [PostgresSQL] pgsql.allow_persistent = On pgsql.auto_reset_persistent = Off pgsql.max_persistent = -1 pgsql.max_links = -1 pgsql.ignore_notice = 0 pgsql.log_notice = 0 [Sybase-CT] sybct.allow_persistent = On sybct.max_persistent = -1 sybct.max_links = -1 sybct.min_server_severity = 10 sybct.min_client_severity = 10 [bcmath] bcmath.scale = 0 [browscap] [Session] session.save_handler = files session.save_path = "/var/lib/php/session" session.use_cookies = 1 session.use_only_cookies = 1 session.name = PHPSESSID session.auto_start = 1 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 1000 session.gc_maxlifetime = 1440 session.bug_compat_42 = Off session.bug_compat_warn = Off session.referer_check = session.entropy_length = 0 session.entropy_file = session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 5 url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry" [MSSQL] mssql.allow_persistent = On mssql.max_persistent = -1 mssql.max_links = -1 mssql.min_error_severity = 10 mssql.min_message_severity = 10 mssql.compatability_mode = Off mssql.secure_connection = Off [Assertion] [COM] [mbstring] [gd] [exif] [Tidy] tidy.clean_output = Off [soap] 
soap.wsdl_cache_enabled=1 soap.wsdl_cache_dir="/tmp" soap.wsdl_cache_ttl=86400 /etc/php.d/memcached.ini : session.save_path="tcp://memcache1:11211?persistent=1&weight=1&timeout=3&retry_interval=15"
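    For anyone trying to capture the same kind of backtrace, a sketch of the usual way to make httpd leave a core file and then read it with gdb; the core-dump directory, core file name, and the apache user are assumptions that vary by distribution:

      # Let httpd children write core files
      ulimit -c unlimited
      mkdir -p /tmp/apache-cores && chown apache:apache /tmp/apache-cores
      echo "CoreDumpDirectory /tmp/apache-cores" >> /etc/httpd/conf/httpd.conf
      service httpd restart

      # After a child segfaults, open the core against the httpd binary
      gdb /usr/sbin/httpd /tmp/apache-cores/core.12345
      # then at the (gdb) prompt: bt full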


  • Background Image not showing up in IE8

    - by Davey
    So I have a tiny header image that repeats on the x axis, but for some reason it won't show up in IE8. Anyone know a work around? Thanks in advanced. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <meta content='' name='description' /> <meta content='' name='keywords' /> <link rel="stylesheet" type="text/css" href="style.css" media="screen" /> <title>Book Site</title> </head> <body> <div id="wrapper"> <div id="header"> <div id="title"> <span class="maintitle">Site Title Goes Here</span> <br /> <span class="subtitle">Transitional Justice, Post-Conflict Reconstruction & Reconciliation in Rwanda and Beyond Phil Clark and Zachary D. Kaufman, editors</span> </div> <img class="thebook" src="images/thebook.png" /> <span class="bookblurb"> <span class="bookbuy">Buy the book</span> get it online <br /> from Columbia, Hurst or your favorite reseller </span> </div> <div id="navbar"> <ul> <li>HOME</li> <li>ABOUT THE BOOK</li> <li>AUTHORS</li> <li>NEWS & EVENTS</li> <li>KIGALI PUBLIC LIBRARY</li> <li>CONTACT US</li> </ul> </div> <div id="content"> <div id="blockone"> <div id="polaroid"> <img class="polaroid" src="images/polaroid.png" /> <br /> <span class="roidplace">Gisimba Memorial Centre</span> <br /> <span class="roidname">Kigali, Rwanda</span> </div> <div id="textblockone"> <h3>An incisive analysis of genocide and its aftermath</h3> <br /> <span class="description">In After Genocide leading scholars and practitioners analyse the political, legal and regional impact of events in post-genocide Rwanda within the broader themes of transitional justice, reconstruction and reconciliation. Given the forthcoming fifteenth anniversary of the Rwandan genocide, and continued mass violence in Africa, especially in Darfur, the Democratic Republic of Congo (DRC) and northern Uganda, this volume is unquestionably of continuing relevance. </span> </div> </div> <div id="form"> <div id="statement"> This book should be labeled for the mature individual only. But for that mature individual it is of extreme interest. It shows, far from any Manichean stereotyping, the many facets of having to try to live in an impossibly complex social and human situation. Highly recommended. 
<br /><br /> <span class="author">-Grard Prunier</span> <br /><span class="bookname">The Rwanda Crisis: History of a Genocide (Hurst, 1995)</span> </div> <div id="contactform"> <span class="contactus">Contact us for additional information and site updates</span> <br /> <span class="theform"> <form class="forming"> Name: <input type="text" name="firstname" /> <br /> Title: <input type="text" name="title" /> <br /> Institution: <input type="text" name="institution" /> <br /> Email: <input type="text" name="email" /> <br /> Message: <input type="text" name="message" class="message" /> </form> </span> </div> </div> </div> <div id="footer"> <p class="footernav">&copy; 2008 After Genocide <span class="footerlinks">Sitemap | Terms | Privacy | Contact </span> <span class="plug">Web design by <span class="avity">Avity</span> </p> </div> </div> </body> </html> ----------------css------------------- html, body { margin:0; padding:0; background-color:#fdffe3; font-family: Arial, Helvetica, sans-serif; } #wrapper { width:1020px; margin:0 auto; } /*begin header style*/ #header { background:url("images/headback.png")repeat-x; width:1020px; height:120px; font-family:arial; position:relative; } #title { width:565px; height:100px; float:left; margin:20px 0 0 100px; } .maintitle { font-size:40px; } .subtitle { font-size:13px; } .thebook { float:left; margin:10px 0 0 30px; border:2px solid #666666; } .bookblurb { float:left; width:110px; margin:15px 0 0 15px; font-size:13px; } .bookbuy { font-weight:bold; font-size:14px; } /*end header style*/ /*begin navigation style*/ #navbar { margin:5px 0 0 0; height: 30px; width: 1020px; background-color: #3a3e30; } #navbar ul { padding: 0px; font-family: Arial, Helvetica, sans-serif; font-size: 12px; color: #FFF; line-height: 30px; white-space: nowrap; margin:0 0 0 140px; } #navbar ul li { list-style-type: none; display: inline; margin:0 40px 0 0; } /*end navigation style*/ /*begin content style*/ #content { width:775px; margin:0 auto; } #blockone { margin:25px 0 0 0; } #polaroid { float:left; width:230px; } .roidplace { font-weight:bold; font-size:11px; } .roidname { font-size:11px; margin:0 0 0 40px; } #textblockone { width:745px; margin:0 0 0 0; font-family: Arial, Helvetica, sans-serif; } .description { font-size:13px; } #form { background:url("images/formbackround.png") no-repeat; width:758px; height:231px; margin:80px 0 0 10px; } #statement { width:320px; margin:30px 0 0 30px; position:absolute; font-size:15px; font-style:italic; float:left; } .author { font-weight:bold; font-size:14; } .bookname { font-weight:bold; font-size:11px; color:#3f91ad; } #contactform { float:right; width:320px; margin:20px 30px 0 0; } .contactus { font-weight:bold; font-size:12px; } .theform { } .forming { } .message { height:50px; } #footer { width:1020px; height:65px; background-color:#dfdacc; margin:35px 0 0 0; font-size:13px; font-weight:bold; } .footernav { margin:30px 0 0 150px; position:absolute; width:1020px; } .footerlinks { margin:0 10px 0 10px; color:#0f77a9; } .plug { margin:0 0 0 175px; } .avity { color:#0f77a9; } Live site: http://cheapramen.com/testsite/


  • I don't understand how call_once works

    - by SABROG
    Please help me understand how call_once works. Here is thread-safe code. I don't understand why it needs thread-local storage and the global_epoch variable. Couldn't _fast_pthread_once_per_thread_epoch be changed to a constant/enum like {FAST_PTHREAD_ONCE_INIT, BEING_INITIALIZED, FINISH_INITIALIZED}? Why does global_epoch need to count calls? I think this code could be rewritten with the logic: if the flag is FINISH_INITIALIZED do nothing, else go to the block with the mutexes, and that's all.

      #ifndef FAST_PTHREAD_ONCE_H
      #define FAST_PTHREAD_ONCE_H

      #include <signal.h>
      #include <pthread.h>

      typedef sig_atomic_t fast_pthread_once_t;
      #define FAST_PTHREAD_ONCE_INIT SIG_ATOMIC_MAX

      extern __thread fast_pthread_once_t _fast_pthread_once_per_thread_epoch;

      #ifdef __cplusplus
      extern "C" {
      #endif

      extern void fast_pthread_once( pthread_once_t *once, void (*func)(void) );

      inline static void fast_pthread_once_inline( fast_pthread_once_t *once, void (*func)(void) )
      {
          fast_pthread_once_t x = *once; /* unprotected access */
          if ( x > _fast_pthread_once_per_thread_epoch ) {
              fast_pthread_once( once, func );
          }
      }

      #ifdef __cplusplus
      }
      #endif

      #endif /* FAST_PTHREAD_ONCE_H */

    Source fast_pthread_once.c. The source is written in C. The lines of the primary function are numbered for reference in the subsequent correctness argument.

      #include "fast_pthread_once.h"
      #include <stdlib.h>

      static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER; /* protects global_epoch and all fast_pthread_once_t writes */
      static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;   /* signalled whenever a fast_pthread_once_t is finalized */

      #define BEING_INITIALIZED (FAST_PTHREAD_ONCE_INIT - 1)

      static fast_pthread_once_t global_epoch = 0; /* under mu */
      __thread fast_pthread_once_t _fast_pthread_once_per_thread_epoch;

      static void check( int x ) { if ( x == 0 ) abort(); }

      void fast_pthread_once( fast_pthread_once_t *once, void (*func)(void) )
      {
      /*01*/  fast_pthread_once_t x = *once; /* unprotected access */
      /*02*/  if ( x > _fast_pthread_once_per_thread_epoch ) {
      /*03*/      check( pthread_mutex_lock(&mu) == 0 );
      /*04*/      if ( *once == FAST_PTHREAD_ONCE_INIT ) {
      /*05*/          *once = BEING_INITIALIZED;
      /*06*/          check( pthread_mutex_unlock(&mu) == 0 );
      /*07*/          (*func)();
      /*08*/          check( pthread_mutex_lock(&mu) == 0 );
      /*09*/          global_epoch++;
      /*10*/          *once = global_epoch;
      /*11*/          check( pthread_cond_broadcast(&cv) == 0 );
      /*12*/      } else {
      /*13*/          while ( *once == BEING_INITIALIZED ) {
      /*14*/              check( pthread_cond_wait(&cv, &mu) == 0 );
      /*15*/          }
      /*16*/      }
      /*17*/      _fast_pthread_once_per_thread_epoch = global_epoch;
      /*18*/      check( pthread_mutex_unlock(&mu) == 0 );
              }
      }

    This is the code from Boost:

      #ifndef BOOST_THREAD_PTHREAD_ONCE_HPP
      #define BOOST_THREAD_PTHREAD_ONCE_HPP

      // once.hpp
      //
      // (C) Copyright 2007-8 Anthony Williams
      //
      // Distributed under the Boost Software License, Version 1.0.
      // (See accompanying file LICENSE_1_0.txt or copy at
      // http://www.boost.org/LICENSE_1_0.txt)

      #include
      #include
      #include
      #include "pthread_mutex_scoped_lock.hpp"
      #include
      #include
      #include

      namespace boost {
          struct once_flag { boost::uintmax_t epoch; };

          namespace detail {
              BOOST_THREAD_DECL boost::uintmax_t& get_once_per_thread_epoch();
              BOOST_THREAD_DECL extern boost::uintmax_t once_global_epoch;
              BOOST_THREAD_DECL extern pthread_mutex_t once_epoch_mutex;
              BOOST_THREAD_DECL extern pthread_cond_t once_epoch_cv;
          }

          #define BOOST_ONCE_INITIAL_FLAG_VALUE 0
          #define BOOST_ONCE_INIT {BOOST_ONCE_INITIAL_FLAG_VALUE}

          // Based on Mike Burrows fast_pthread_once algorithm as described in
          // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2444.html
          template<typename Function>
          void call_once(once_flag& flag,Function f) {
              static boost::uintmax_t const uninitialized_flag=BOOST_ONCE_INITIAL_FLAG_VALUE;
              static boost::uintmax_t const being_initialized=uninitialized_flag+1;
              boost::uintmax_t const epoch=flag.epoch;
              boost::uintmax_t& this_thread_epoch=detail::get_once_per_thread_epoch();
              if(epoch

      #endif

    Do I understand it correctly that Boost doesn't use atomic operations here, so the code from Boost is not thread-safe?


  • How can I make this Java code run faster?

    - by Martin Wiboe
    Hello all, I am trying to make a Java port of a simple feed-forward neural network. This obviously involves lots of numeric calculations, so I am trying to optimize my central loop as much as possible. The results should be correct within the limits of the float data type. My current code looks as follows (error handling & initialization removed): /** * Simple implementation of a feedforward neural network. The network supports * including a bias neuron with a constant output of 1.0 and weighted synapses * to hidden and output layers. * * @author Martin Wiboe */ public class FeedForwardNetwork { private final int outputNeurons; // No of neurons in output layer private final int inputNeurons; // No of neurons in input layer private int largestLayerNeurons; // No of neurons in largest layer private final int numberLayers; // No of layers private final int[] neuronCounts; // Neuron count in each layer, 0 is input // layer. private final float[][][] fWeights; // Weights between neurons. // fWeight[fromLayer][fromNeuron][toNeuron] // is the weight from fromNeuron in // fromLayer to toNeuron in layer // fromLayer+1. private float[][] neuronOutput; // Temporary storage of output from previous layer public float[] compute(float[] input) { // Copy input values to input layer output for (int i = 0; i < inputNeurons; i++) { neuronOutput[0][i] = input[i]; } // Loop through layers for (int layer = 1; layer < numberLayers; layer++) { // Loop over neurons in the layer and determine weighted input sum for (int neuron = 0; neuron < neuronCounts[layer]; neuron++) { // Bias neuron is the last neuron in the previous layer int biasNeuron = neuronCounts[layer - 1]; // Get weighted input from bias neuron - output is always 1.0 float activation = 1.0F * fWeights[layer - 1][biasNeuron][neuron]; // Get weighted inputs from rest of neurons in previous layer for (int inputNeuron = 0; inputNeuron < biasNeuron; inputNeuron++) { activation += neuronOutput[layer-1][inputNeuron] * fWeights[layer - 1][inputNeuron][neuron]; } // Store neuron output for next round of computation neuronOutput[layer][neuron] = sigmoid(activation); } } // Return output from network = output from last layer float[] result = new float[outputNeurons]; for (int i = 0; i < outputNeurons; i++) result[i] = neuronOutput[numberLayers - 1][i]; return result; } private final static float sigmoid(final float input) { return (float) (1.0F / (1.0F + Math.exp(-1.0F * input))); } } I am running the JVM with the -server option, and as of now my code is between 25% and 50% slower than similar C code. What can I do to improve this situation? Thank you, Martin Wiboe
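    Before rewriting anything, it can help to confirm what HotSpot is actually doing with compute() and sigmoid(); a small sketch using standard HotSpot diagnostic flags (the Benchmark driver class name is hypothetical):

      # -Xbatch compiles in the foreground so the log lines up with execution;
      # PrintInlining shows whether sigmoid() and Math.exp() get inlined into the inner loop.
      java -server -Xbatch \
           -XX:+PrintCompilation \
           -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining \
           Benchmark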


  • Set up a Linux box for hosting, A-Z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via vpn only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers, and document my progress/pitfalls. Hopefully someday this will help someone down the line. The details: CentOS 5.5 x86_64 httpd: Apache/2.2.3 mysql: 5.0.77 (to be upgraded) php: 5.1 (to be upgraded) The requirements: SECURITY!! Secure file transfer Secure client access (SSL Certs and CA) Secure data storage Virtualhosts/multiple subdomains Local email would be nice, but not critical The Steps: Download latest CentOS DVD-iso (torrent worked great for me). Install CentOS: While going through the install, I checked the Server Components option thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea. Basic config: Setup users, networking/ip address etc. Yum update/upgrade. Upgrade PHP/MySQL: To upgrade PHP and MySQL to the latest versions, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it! Add IUS repository to our package manager cd /tmp wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm yum list | grep -w \.ius\. # list all the packages in the IUS repository; use this to find PHP/MySQL version and libraries you want to install Remove old version of PHP and install newer version from IUS rpm -qa | grep php # to list all of the installed php packages we want to remove yum shell # open an interactive yum shell remove php-common php-mysql php-cli #remove installed PHP components install php53 php53-mysql php53-cli php53-common #add packages you want transaction solve #important!! checks for dependencies transaction run #important!! does the actual installation of packages. [control+d] #exit yum shell php -v PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45) Upgrade MySQL from IUS repository /etc/init.d/mysqld stop rpm -qa | grep mysql # to see installed mysql packages yum shell remove mysql mysql-server #remove installed MySQL components install mysql51 mysql51-server mysql51-devel transaction solve #important!! checks for dependencies transaction run #important!! does the actual installation of packages. 
[control+d] #exit yum shell service mysqld start mysql -v Server version: 5.1.42-ius Distributed by The IUS Community Project Upgrade instructions courtesy of IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide Install rssh (restricted shell) to provide scp and sftp access, without allowing ssh login cd /tmp wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm useradd -m -d /home/dev -s /usr/bin/rssh dev passwd dev Edit /etc/rssh.conf to grant access to SFTP to rssh users. vi /etc/rssh.conf Uncomment or add: allowscp allowsftp This allows me to connect to the machine via SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html Set up virtual interfaces ifconfig eth1:1 192.168.1.3 up #start up the virtual interface cd /etc/sysconfig/network-scripts/ cp ifcfg-eth1 ifcfg-eth1:1 #copy default script and match name to our virtual interface vi ifcfg-eth1:1 #modify eth1:1 script #ifcfg-eth1:1 | modify so it looks like this: DEVICE=eth1:1 IPADDR=192.168.1.3 NETMASK=255.255.255.0 NETWORK=192.168.1.0 ONBOOT=yes NAME=eth1:1 Add more Virtual interfaces as needed by repeating. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots, or the network starts/restarts. service network restart Shutting down interface eth0: [ OK ] Shutting down interface eth1: [ OK ] Shutting down loopback interface: [ OK ] Bringing up loopback interface: [ OK ] Bringing up interface eth0: [ OK ] Bringing up interface eth1: [ OK ] ping 192.168.1.3 64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms And this is where I'm at. I will keep editing this as I make progress. Any tips on how to Configure virtual interfaces/ip based virtual hosts for SSL, setting up a CA, or anything else would be appreciated.
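    On the SSL/CA question at the end, a minimal openssl sketch for a private certificate authority and one server certificate, in the same command-log style as the steps above (file names and the ten-year lifetime are arbitrary choices):

      # Create the CA key and self-signed CA certificate
      openssl genrsa -out ca.key 2048
      openssl req -new -x509 -days 3650 -key ca.key -out ca.crt

      # Key and signing request for one virtual host (set CN to the vhost name)
      openssl genrsa -out vhost1.key 2048
      openssl req -new -key vhost1.key -out vhost1.csr

      # Sign the request with the CA
      openssl x509 -req -days 3650 -in vhost1.csr \
          -CA ca.crt -CAkey ca.key -CAcreateserial -out vhost1.crt

    The resulting vhost1.key/vhost1.crt pair would go into an IP-based <VirtualHost> with SSLEngine on, and ca.crt is what the VPN clients import so the certificate is trusted.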


  • Error using Session in IIS7

    - by flashnik
    After deployment of my website to IIS I'm getting a following error message when trying to access session: Session state can only be used when enableSessionState is set to true, either in a configuration file or in the Page directive. Please also make sure that System.Web.SessionStateModule or a custom session state module is included in the \\ section in the application configuration. I access it in Page_Load or PreRender events (I tried both versions). With VS Dev Server it works without a problem. I tried both InProc an SessionState storage, 1 and multiple woker processes. I added a enableSessionState = "true" to my webpage explicitly. Here is part of web.config: <system.web> <globalization culture="ru-RU" uiCulture="ru-RU" /> <compilation debug="true" defaultLanguage="c#"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Web.Extensions.Design, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Design, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A" /> <add assembly="System.Windows.Forms, Version=2.0.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> </assemblies> </compilation> <pages enableEventValidation="false" enableSessionState="true"> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx" /> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false" /> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="SearchUrlRewriter" type="Synonymizer.SearchUrlRewriter, Synonymizer, Version=1.0.0.0, Culture=neutral" /> <add name="Session" type="System.Web.SessionStateModule" /> </httpModules> <sessionState cookieless="UseCookies" cookieName="My_SessionId" mode="InProc" stateNetworkTimeout="5" /> <customErrors mode="Off" /> </system.web> What else do I need to do to make it work?? UPD I tried to monitor if IIS accesses aspnet_client folder with ProcMon and didn't get any access.


  • Did I find a bug in PHP's `crypt()`?

    - by Nathan Long
    I think I may have found a bug in PHP's crypt() function under Windows. However: I recognize that it's probably my fault. PHP is used by millions and worked on by thousands; my code is used by tens and worked on by me. (This argument is best explained on Coding Horror.) So I'm asking for help: show me my fault. I've been trying to find it for a few days now, with no luck. The setup I'm using a Windows server installation with Apache 2.2.14 (Win32) and PHP 5.3.2. My development box runs Windows XP Professional; the 'production' server (this is an intranet setup) runs Windows Storage Server 2003. The problem happens on both. I don't see anything in php.ini related to crypt(), but will happily answer questions about my config. The problem Several scripts in my PHP app occasionally hang: the page sits there on 'waiting for localhost' and never finishes. Each of these scripts uses crypt to hash a user's password before storing it in the database, or, in the case of the login page, to hash the entered password before comparing it to the version stored in the database. Since the login page is the simplest, I focused on it for testing. I repeatedly logged in, and found that it would hang maybe 4 out of 10 times. As an experiment, I changed the login page to use the plain text password and changed my password in the database to its plain text version. The page stopped hanging. I saw that PHP's latest version lists this bugfix: Fixed bug #51059 (crypt crashes when invalid salt are [sic] given). So I created a very simple test script, as follows, using the same salt given in an official example: $foo = crypt('rasmuslerdorf','r1'); echo $foo; This page, too, will hang, if I reload it like crazy. I only see it hanging in Chrome, but regardless of browser, the effect on Apache is the same. Effect on Apache When these pages hang, Apache's server-status page (which I explained here, regarding a different problem) increments the number of requests being processed and decrements the number of idle workers. The requests being processed almost all have a status of 'Sending Reply,' though sometimes for a moment they will show either 'Reading request' or 'keepalive (read).' Eventually, Apache may crash. When it does, the Windows crash report looks like this: szAppName: httpd.exe szAppVer: 2.2.14.0 szModName: php5ts.dll szModVer: 5.3.1.0 // OK, this report was before I upgraded to PHP 5.3.2, // but that didn't fix it offset: 00a2615 Is it my fault? I'm tempted to file a bug report to PHP on this. The argument against it is, as stated above, that bugs are nearly always my fault. However, my argument in favor of 'it's PHP's fault' is: I'm using Windows, whereas most servers use Linux (I don't get to choose this), so the chances are greater that I've found an edge case There was recently a bug with crypt(), so maybe it still has issues I have made the simplest test case I can, and I still have the problem Can anyone duplicate this? Can you suggest where I've gone wrong? Should I file the bug after all? Thanks in advance for any help you may give.
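    One way to narrow down whether the hang lives in crypt() itself or in the Apache/PHP module combination is to hammer the same call from the command-line SAPI; a sketch, written as a Unix-style shell loop for brevity (an equivalent cmd.exe for /L loop works the same way on the Windows boxes in question, and the iteration count is arbitrary):

      # Run the same crypt() call 200 times outside Apache and watch for a stall
      for i in $(seq 1 200); do
          php -r "echo crypt('rasmuslerdorf', 'r1'), PHP_EOL;"
      done

    If the CLI never hangs, that points more at the Apache SAPI or thread-safety side than at crypt() itself.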


  • CentOS - Add additional hard drive RAID arrays on a Dell PERC 5/i card

    - by Quanano
    We have a Dell Poweredge 2900 system with Dell Perc 5/i card and 4 SAS hard drives attached, with NTFS partitions on them. We installed CentOS on one raid array on this controller with a different controller and it is working fine. We are now trying to access the drives shown above and they are not being shown in /dev as sdb, etc. sda is the drive that we installed centos on and it has sda1, sda2, sda3, etc. The CDROM has been picked up as well. If I scan for scsi devices then the perc and adaptec controllers are both found. sg0 is the CDROM and sg2 is the centos installed, however I think sg1 is the other drive but I cannot see anyway to mount the partitions, as only the drive is listed in /dev. Thanks. EXTRA INFO fdisk -l: Disk /dev/sda: 72.7 GB, 72746008576 bytes 255 heads, 63 sectors/track, 8844 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x11e3119f Device Boot Start End Blocks Id System /dev/sda1 * 1 64 512000 83 Linux Partition 1 does not end on cylinder boundary. /dev/sda2 64 8845 70528000 8e Linux LVM Disk /dev/mapper/vg_lal2server-lv_root: 34.4 GB, 34431041536 bytes 255 heads, 63 sectors/track, 4186 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_root doesn't contain a valid partition table Disk /dev/mapper/vg_lal2server-lv_swap: 21.1 GB, 21139292160 bytes 255 heads, 63 sectors/track, 2570 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_swap doesn't contain a valid partition table Disk /dev/mapper/vg_lal2server-lv_home: 16.6 GB, 16647192576 bytes 255 heads, 63 sectors/track, 2023 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_home doesn't contain a valid partition table These are all from the install hdd not the additional hard drives modprobe a320raid FATAL: Module a320raid not found. lsscsi -v: [0:0:0:0] cd/dvd TSSTcorp CDRWDVD TS-H492C DE02 /dev/sr0 dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0] [4:0:10:0] enclosu DP BACKPLANE 1.05 - dir: /sys/bus/scsi/devices/4:0:10:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:0:10/4:0:10:0] [4:2:0:0] disk DELL PERC 5/i 1.03 /dev/sda dir: /sys/bus/scsi/devices/4:2:0:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:2:0/4:2:0:0] . 
lsmod: Module Size Used by fuse 66285 0 des_generic 16604 0 ecb 2209 0 md4 3461 0 nls_utf8 1455 0 cifs 278370 0 autofs4 26888 4 ipt_REJECT 2383 0 ip6t_REJECT 4628 2 nf_conntrack_ipv6 8748 2 nf_defrag_ipv6 12182 1 nf_conntrack_ipv6 xt_state 1492 2 nf_conntrack 79453 2 nf_conntrack_ipv6,xt_state ip6table_filter 2889 1 ip6_tables 19458 1 ip6table_filter ipv6 322029 31 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6 bnx2 79618 0 ses 6859 0 enclosure 8395 1 ses dcdbas 9219 0 serio_raw 4818 0 sg 30124 0 iTCO_wdt 13662 0 iTCO_vendor_support 3088 1 iTCO_wdt i5000_edac 8867 0 edac_core 46773 3 i5000_edac i5k_amb 5105 0 shpchp 33482 0 ext4 364410 3 mbcache 8144 1 ext4 jbd2 88738 1 ext4 sd_mod 39488 3 crc_t10dif 1541 1 sd_mod sr_mod 16228 0 cdrom 39771 1 sr_mod megaraid_sas 77090 2 aic79xx 129492 0 scsi_transport_spi 26151 1 aic79xx pata_acpi 3701 0 ata_generic 3837 0 ata_piix 22846 0 radeon 1023359 1 ttm 70328 1 radeon drm_kms_helper 33236 1 radeon drm 230675 3 radeon,ttm,drm_kms_helper i2c_algo_bit 5762 1 radeon i2c_core 31276 4 radeon,drm_kms_helper,drm,i2c_algo_bit dm_mirror 14101 0 dm_region_hash 12170 1 dm_mirror dm_log 10122 2 dm_mirror,dm_region_hash dm_mod 81500 11 dm_mirror,dm_log blkid: /dev/sda1: UUID="bc4777d9-ae2c-4c58-96ea-cedb342b8338" TYPE="ext4" /dev/sda2: UUID="j2wRZr-Mlko-QWBR-BndC-V2uN-vdhO-iKCuYu" TYPE="LVM2_member" /dev/mapper/vg_lal2server-lv_root: UUID="9238208a-1daf-4c3c-aa9b-469f0387ebee" TYPE="ext4" /dev/mapper/vg_lal2server-lv_swap: UUID="dbefb39c-5871-4bc9-b767-1ef18f12bd3d" TYPE="swap" /dev/mapper/vg_lal2server-lv_home: UUID="ec698993-08b7-443e-84f0-9f9cb31c5da8" TYPE="ext4" dmesg shows: megaraid_sas: fw state:c0000000 megasas: fwstate:c0000000, dis_OCR=0 scsi2 : LSI SAS based MegaRAID driver scsi 2:0:0:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:1:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:2:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:3:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:4:0: Direct-Access HITACHI HUS154545VLS300 D590 PQ: 0 ANSI: 5 scsi 2:0:5:0: Direct-Access HITACHI HUS154545VLS300 D590 PQ: 0 ANSI: 5 scsi 2:0:8:0: Direct-Access FUJITSU MBA3073RC D305 PQ: 0 ANSI: 5 scsi 2:0:9:0: Direct-Access FUJITSU MBA3073RC D305 PQ: 0 ANSI: 5 i.e. the 3 RAID Arrays Seagate Hitatchi and Fujitsu hard drives respectively. FURTHER UPDATE I have installed the megaraid storage manager console and connected to the server. It appears that the two CentOS installation hard drives are OK. The other 6 drives, one raid array of 4 and one raid array of 2 disks. The other drives are listed as (Foreign) Unconfigured Good.
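    Since the update says the six extra drives sit on the PERC as (Foreign) Unconfigured Good, they will not be exported as block devices at all until their RAID configuration is imported on this controller. A hedged sketch using LSI's MegaCli tool, assuming it is installed (often under /opt/MegaRAID/MegaCli/); the exact flag spelling should be checked against the installed version:

      # Show physical drives and any foreign configuration the controller holds
      MegaCli64 -PDList -aALL
      MegaCli64 -CfgForeign -Scan -aALL

      # Import the foreign configuration, turning the arrays back into logical drives
      MegaCli64 -CfgForeign -Import -aALL

      # The logical drives should now appear to the OS
      cat /proc/partitions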


  • Limiting TCP sends with a "to-be-sent" queue and other design issues.

    - by Poni
    Hello all! This question is the result of two other questions I've asked in the last few days. I'm creating a new question because I think it's related to the "next step" in my understanding of how to control the flow of my send/receive, something I haven't gotten a full answer to yet. The other related questions are: http://stackoverflow.com/questions/3028376/an-iocp-documentation-interpretation-question-buffer-ownership-ambiguity and http://stackoverflow.com/questions/3028998/non-blocking-tcp-buffer-issues

    In summary, I'm using Windows I/O Completion Ports. I have several threads that process notifications from the completion port. I believe the question is platform-independent and would have the same answer as doing the same thing on a *nix, *BSD, or Solaris system. So, I need to have my own flow control system. Fine. I send and send and send, a lot. How do I know when to start queueing the sends, since the receiver side can only absorb so much?

    Let's take an example (the closest thing to my question): the FTP protocol. I have two servers; one is on a 100Mb link and the other is on a 10Mb link. I order the 100Mb one to send a 1GB file to the other one (the 10Mb-linked one). It finishes with an average transfer rate of 1.25MB/s. How did the sender (the 100Mb-linked one) know when to hold back the sending, so the slower one wouldn't be flooded?

    Another way to ask this: can I get a "hold-your-sendings" notification from the remote side? Is it built into TCP, or does the so-called "reliable network protocol" require me to do it myself? Again, I have a loop with many sends to a remote server, and at some point within that loop I'll have to determine whether I should queue that send or can pass it on to the transport layer (TCP). How do I do that? What would you do? Of course, when I get a completion notification from IOCP that a send was done I'll issue other pending sends; that's clear.

    Another design question related to this: since I'll be using custom buffers with a send queue, and these buffers are freed to be reused (rather than deleted) when a "send-done" notification arrives, I'll have to use mutual exclusion on that buffer pool. Using a mutex slows things down, so I've been thinking: why not give each thread its own buffer pool? Then accessing it, at least when getting the buffers required for a send operation, needs no mutex, because the pool belongs to that thread only. The buffer pool lives in thread-local storage (TLS). No shared pool implies no lock needed, which implies faster operations, BUT it also implies more memory used by the app, because even if one thread has already allocated 1000 buffers, another thread that is sending right now and needs 1000 buffers will have to allocate its own.

    This is a long question and I hope none got hurt (: Thank you all!


  • Moving a Drupal site between Linux servers: best practices to avoid file-ownership problems

    - by zero
    I want to port a Drupal Commons 6x24 site from a local LAMP stack to a production web server. Both systems run openSUSE Linux. How do I do this, and what are the most important steps? How should I handle file ownership? It is important for me to keep full control of file ownership. If I use the wwwrun account, I frequently run into problems because of a very strict webserver admin. For the long history of looking for fixes and solutions, see this thread, and (even more interesting) this very long and impressive thread here. All the trouble I run into has to do with file ownership and permissions. This is my current setup (note: this was just a quick, hacked-together installation). In short, I am interested in the general options I have for porting a Drupal site from one Linux box to another.

linux-vi17:/srv/www/htdocs/com624 # ls -l
insgesamt 224
-rwxrwxrwx 1 root www 45285 19. Jan 00:54 CHANGELOG.txt
-rwxrwxrwx 1 root www 925 19. Jan 00:54 COPYRIGHT.txt
-rwxrwxrwx 1 root www 206 19. Jan 00:54 cron.php
drwxrwxrwx 2 root www 4096 19. Jan 00:54 includes
-rwxrwxrwx 1 root www 923 19. Jan 00:54 index.php
-rwxrwxrwx 1 root www 1244 19. Jan 00:54 INSTALL.mysql.txt
-rwxrwxrwx 1 root www 1011 19. Jan 00:54 INSTALL.pgsql.txt
-rwxrwxrwx 1 root www 47073 19. Jan 00:54 install.php
-rwxrwxrwx 1 root www 15572 19. Jan 00:54 INSTALL.txt
-rwxrwxrwx 1 root www 14940 19. Jan 00:54 LICENSE.txt
-rwxrwxrwx 1 root www 1858 19. Jan 00:54 MAINTAINERS.txt
drwxrwxrwx 3 root www 4096 19. Jan 00:54 misc
drwxrwxrwx 35 root www 4096 19. Jan 00:54 modules
drwxrwxrwx 4 root www 4096 19. Jan 00:54 profiles
-rwxrwxrwx 1 root www 1470 19. Jan 00:54 robots.txt
drwxrwxrwx 2 root www 4096 19. Jan 00:54 scripts
drwxrwxrwx 4 root www 4096 19. Jan 00:54 sites
drwxrwxrwx 7 root www 4096 19. Jan 00:54 themes
-rwxrwxrwx 1 root www 26250 19. Jan 00:54 update.php
-rwxrwxrwx 1 root www 4864 19. Jan 00:54 UPGRADE.txt
-rwxrwxrwx 1 root www 294 19. Jan 00:54 xmlrpc.php
linux-vi17:/srv/www/htdocs/com624 #

Thanks to BetaRide's answer, here is a quick overview of drush's rsync functionality (http://drush.ws/):

core-rsync: Rsync the Drupal tree to/from another server using ssh.

Examples:
drush rsync @dev @stage - Rsync the Drupal root from dev to stage (one of which must be local).
drush rsync ./ @stage:%files/img - Rsync all files in the current directory to the 'img' directory in the file storage folder on stage.

Arguments:
source - May be an rsync path or site alias. See the rsync documentation and example.aliases.drushrc.php.
destination - May be an rsync path or site alias. See the rsync documentation and example.aliases.drushrc.php.

Options:
--mode - The unary flags to pass to rsync; --mode=rultz implies rsync -rultz. Default is -az.
--RSYNC-FLAG - Most rsync flags passed to drush sync will be passed on to rsync. See the rsync documentation.
--exclude-conf - Excludes settings.php from being rsynced. Default.
--include-conf - Allow settings.php to be rsynced.
--exclude-files - Exclude the files directory.
--exclude-sites - Exclude all directories in "sites/" except for "sites/all".
--exclude-other-sites - Exclude all directories in "sites/" except for "sites/all" and the site directory for the site being synced. Note: if the site directory is different between the source and destination, use --exclude-sites followed by "drush rsync @from:%site @to:%site".
--exclude-paths - List of paths to exclude, separated by : (Unix-based systems) or ; (Windows).
--include-paths - List of paths to include, separated by : (Unix-based systems) or ; (Windows).

Topics:
docs-aliases - Site aliases overview with examples.

Aliases: rsync

    Read the article

  • How to add additional disks to a Windows 2008 KVM-based guest?

    - by taazaa
    I have a Win 2008 KVM based guest VM running on a Ubuntu 10 host. It is a raw image of 22G. I want to add a "data" drive which would show up as "D:\" drive on the guest. I first created a raw image using: qemu-img create -f raw ~/vmdisk2.img 50G Then, tried attaching it using virsh attach-disk. When that did not work, I tried editing the xml file of the VM directly. Both did not seem to work. I would greatly appreciate any help on how to do this and what the best practice is. I want to keep the base image small, so that I can clone it (hopefully) and then attach necessary storage based on the application at hand. Update: The xml of the vm before adding the second drive: <domain type='kvm'> <name>win08e-vm1</name> <uuid>183a4ba0-1c0b-0b04-ad01-aa7c3a4cb390</uuid> <memory>1048576</memory> <currentMemory>1048576</currentMemory> <vcpu>2</vcpu> <os> <type arch='x86_64' machine='pc-0.12'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <pae/> </features> <clock offset='localtime'/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/bin/kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw'/> <source file='/var/lib/libvirt/images/win08e-vm1.img'/> <target dev='hda' bus='ide'/> <address type='drive' controller='0' bus='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/home/taazaa/iso/Win08ER264.iso'/> <target dev='hdc' bus='ide'/> <readonly/> <address type='drive' controller='0' bus='1' unit='0'/> </disk> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <interface type='bridge'> <mac address='52:54:00:7f:a7:ae'/> <source bridge='br0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <input type='tablet' bus='usb'/> <input type='mouse' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/> <video> <model type='vga' vram='9216' heads='1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </memballoon> </devices> </domain> Thanks!

    Read the article

  • Extensible database design: automatic ALTER TABLE or serialize() into a BLOB field?

    - by mario
    I want an adaptable database schema, but I still want to use a simple table data gateway in my application, where I just pass a $data[] array for storing. The basic columns are settled in the initial table schema. A couple of meta fields (about 10-20) will however arise later. I want some flexibility there and do not want to adapt the database manually each time, or (worse) change the application logic just because of new fields. So now there are two options which seem workable yet not overkill, but I'm not sure about the scalability or database drawbacks.

(1) Automatic ALTER TABLE. Whenever the $data array is to be saved, the keys are compared against the current database columns. New columns get defined before the $data is INSERTed into the table. It actually seems simple enough in test code:

function save($data, $table="forum") {
    // columns
    if ($new_fields = array_diff(array_keys($data), known_fields($table))) {
        extend_schema($table, $new_fields, $data);
    }
    // save
    $columns = implode("`, `", array_keys($data));
    $qm = str_repeat(",?", count(array_keys($data)) - 1);
    echo("INSERT INTO `$table` (`$columns`) VALUES (?$qm);");
}

function known_fields($table) {
    return unserialize(@file_get_contents("db:$table")) ?: array("id");
}

function extend_schema($table, $new_fields, $data) {
    foreach ($new_fields as $field) {
        echo("ALTER TABLE `$table` ADD COLUMN `$field` VARCHAR;");
    }
}

Since these are mostly meta information fields, adding them just as VARCHAR seems sufficient. Nobody will query by them anyway. So the database is really just used as storage here. However, while I might want to add a lot of new $data fields on the go, they will not always be populated.

(2) serialize() fields into a BLOB. Any new/extraneous meta fields could be opaque to the database. Sorting out the virtual fields from the real database columns is simple, and the meta fields can just be serialize()d into a blob/text field:

function ext_save($data, $table="forum") {
    $db_fields = array("id", "content", "flags", "ext");
    // disjoin
    foreach (array_diff(array_keys($data), $db_fields) as $key) {
        $data["ext"][$key] = $data[$key];
        unset($data[$key]);
    }
    $data["ext"] = serialize($data["ext"]);
}

Unserializing and unpacking this 'ext' column on read queries is a minor overhead. The advantage is that there won't be any sparsely filled columns in the database, so I guess it's more compact and faster than the automatic ALTER TABLE approach. Of course, this method prevents ever using one of the new fields in a WHERE or GROUP BY clause. But I think none of the possible meta fields (user_agent, author_ip, author_img, votes, hits, last_modified, ..) would/should ever be used there anyway.

So I currently prefer the 'ext' blob approach, even if it's a one-way ticket. What are such columns usually called? (I'm looking for examples/documentation.) Would you use XML serialization for (very theoretical) in-database queries? The adapting table schema seems a "cleaner" interface, even if most columns might remain empty then. How does that impact speed? How many such sparse VARCHAR fields can MySQL/InnoDB stomach? But most importantly: is there any standard implementation for this? A pseudo ORM with automatic ALTER TABLE tricks? Storing a simple column list seems workable, but something like pdo::getColumnMeta would be more robust.

    Read the article

  • The best JSF coding pattern for editing JPA entities using @RequestScoped only

    - by AlanObject
    I am in a project where I will write a lot of pages like this, so I want to use the most efficient (to write) coding pattern. Background: In the past I have used CODI's @ViewAccessScoped to preserve state between requests, and more recently I have started using flash scoped objects to save state. I can't use JSF @ViewScoped because I use CDI and they don't play well together. So I want to see if I can do this with only @RequestScoped backing beans. The page is designed like this (the p namespace is Primefaces): <f:metadata> <f:viewParam name="ID" value="#{backing.id}" /> </f:metadata> .... <h1>Edit Object Page</h1> <h:form id="formObj" rendered="#{backing.accessOK}"> <p:panelGrid columns="2"> <h:outputLabel value="Field #1:"/> <p:inputText value="#{backing.record.field1}" /> (more input fields) <h:outputLabel value="Action:" /> <h:panelGroup> <p:commandButton value="Save" action="#{backing.save}" /> <p:commandButton value="Cancel" action="backing.cancel" /> </h:panelGroup> </p:panelGrid> <p:messages showDetail="true" showSummary="true" /> </h:form> If the page is requested, the method accessOK() has the ability to keep the h:form from being rendered. Instead, the p:messages is shown with whatever FacesMessage(s) the accessOK() method cares to set. The pattern for the bean backing looks like this: @Named @RequestScoped public class Backing { private long id; private SomeJPAEntity record; private Boolean accessOK; public long getId() { return id; } public void setId(long value) { id = value; } public boolean accessOK() { if (accessOK != null) return accessOK; if (getRecord() == null) { // add a FacesMessage that explains the record // does not exist return accessOK = false; // note single = } // do any other access checks, such as write permissions return accessOK = true; } public SomeJPAEntity getRecord() { if (record != null) return record; if (getId() > 0) record = // get the record from DB else record = new SomeJPAEntity(); return record; } public String execute() { if (!accessOK()) return null; // bad edit // do other integrity checks here. If fail, set FacesMessages // and return null; if (getId() > 0) // merge the record back into the data base else // persist the record } } Here is what goes wrong with this model. When the Save button is clicked, a new instance of Backing is built, and then there are a lot of calls to the getRecord() getter before the setID() setter is called. So the logic in getRecord() breaks because it cannot rely on the id property being valid when it is called. When this was a @ViewAccessScoped (or ViewScoped) backing bean, then both the id and record properties are already set when the form is processed with the commandButton. Alternatively you can save those properties in flash storage but that has its own problems I want to avoid. So is there a way to make this programming model work within the specification?

    Read the article

  • How do I make Java wait for a boolean before running a function?

    - by TWeeKeD
    I'm sure this is pretty simple but I can't figure out and it sucks I'm up on suck on (what should be) an easy step. ok. I have a method that runs one function that give a response. this method actually handles the uploading of the file so o it takes a second to give a response. I need this response in the following method. sendPicMsg needs to complete and then forward it's response to sendMessage. Please help. b1.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { if(!uploadMsgPic.equalsIgnoreCase("")){ Log.v("response","Pic in storage"); sendPicMsg(); sendMessage(); }else{ sendMessage(); } 1st Method public void sendPicMsg(){ Log.v("response", "sendPicMsg Loaded"); if(!uploadMsgPic.equalsIgnoreCase("")){ final SharedPreferences preferences = this.getActivity().getSharedPreferences("MyPreferences", getActivity().MODE_PRIVATE); AsyncHttpClient client3 = new AsyncHttpClient(); RequestParams params3 = new RequestParams(); File file = new File(uploadMsgPic); try { File f = new File(uploadMsgPic.replace(".", "1.")); f.createNewFile(); //Convert bitmap to byte array Bitmap bitmap = decodeFile(file,400); ByteArrayOutputStream bos = new ByteArrayOutputStream(); bitmap.compress(CompressFormat.PNG, 0 /*ignored for PNG*/, bos); byte[] bitmapdata = bos.toByteArray(); //write the bytes in file FileOutputStream fos = new FileOutputStream(f); fos.write(bitmapdata); params3.put("file", f); } catch (FileNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } params3.put("email", preferences.getString("loggedin_user", "")); params3.put("webversion", "1"); client3.post("http://peekatu.com/apiweb/msgPic_upload.php",params3, new AsyncHttpResponseHandler() { @Override public void onSuccess(String response) { Log.v("response", "Upload Complete"); refreshChat(); //responseString = response; Log.v("response","msgPic has been uploaded"+response); //parseChatMessages(response); response=picurl; uploadMsgPic = ""; if(picurl!=null){ Log.v("response","picurl is set"); } if(picurl==null){ Log.v("response", "picurl no ready"); }; } }); sendMessage(); } } 2nd Method public void sendMessage(){ final SharedPreferences preferences = this.getActivity().getSharedPreferences("MyPreferences", getActivity().MODE_PRIVATE); if(preferences.getString("Username", "").length()<=0){ editText1.setText(""); Toast.makeText(this.getActivity(), "Please Login to send messages.", 2); return; } AsyncHttpClient client = new AsyncHttpClient(); RequestParams params = new RequestParams(); if(type.equalsIgnoreCase("3")){ params.put("toid",user); params.put("action", "sendprivate"); }else{ params.put("room", preferences.getString("selected_room", "Adult Lobby")); params.put("action", "insert"); } Log.v("response", "Sending message "+editText1.getText().toString()); params.put("message",editText1.getText().toString() ); params.put("media", picurl); params.put("email", preferences.getString("loggedin_user", "")); params.put("webversion", "1"); client.post("http://peekatu.com/apiweb/messagetest.php",params, new AsyncHttpResponseHandler() { @Override public void onSuccess(String response) { refreshChat(); //responseString = response; Log.v("response", response); //parseChatMessages(response); if(picurl!=null) Log.v("response", picurl); } }); editText1.setText(""); lv.setSelection(adapter.getCount() - 1); }

    Read the article

  • Can't save my picture

    - by mamii
    I want to save the image that I draw, but a failure is always reported. I have tested and tried, but I cannot find the error, so I am turning to you. This saving step has become a real sore spot for me - and what is a drawing application without the ability to save? :D

Question: What is wrong with my storage code (or something else)?

Log:

09-12 07:30:34.346: E/Panel(8003): IOEception
09-12 07:30:34.346: E/Panel(8003): java.io.IOException: Parent directory of file does not exist: /sdcard/anppp/2012Sep1273034.png
09-12 07:30:34.346: E/Panel(8003): at java.io.File.createNewFile(File.java:1263)
09-12 07:30:34.346: E/Panel(8003): at aa.bb.cc.Panel.saveapp(Panel.java:67)
09-12 07:30:34.346: E/Panel(8003): at aa.bb.cc.AndroidPaint.onOptionsItemSelected(AndroidPaint.java:94)
09-12 07:30:34.346: E/Panel(8003): at android.app.Activity.onMenuItemSelected(Activity.java:2170)
09-12 07:30:34.346: E/Panel(8003): at com.android.internal.policy.impl.PhoneWindow.onMenuItemSelected(PhoneWindow.java:730)
09-12 07:30:34.346: E/Panel(8003): at com.android.internal.view.menu.MenuItemImpl.invoke(MenuItemImpl.java:139)
09-12 07:30:34.346: E/Panel(8003): at com.android.internal.view.menu.MenuBuilder.performItemAction(MenuBuilder.java:855)
09-12 07:30:34.346: E/Panel(8003): at com.android.internal.view.menu.ExpandedMenuView.invokeItem(ExpandedMenuView.java:89)
09-12 07:30:34.346: E/Panel(8003): at com.android.internal.view.menu.ExpandedMenuView.onItemClick(ExpandedMenuView.java:93)
09-12 07:30:34.346: E/Panel(8003): at android.widget.AdapterView.performItemClick(AdapterView.java:284)
09-12 07:30:34.346: E/Panel(8003): at android.widget.ListView.performItemClick(ListView.java:3285)
09-12 07:30:34.346: E/Panel(8003): at android.widget.AbsListView$PerformClick.run(AbsListView.java:1640)
09-12 07:30:34.346: E/Panel(8003): at android.os.Handler.handleCallback(Handler.java:587)
09-12 07:30:34.346: E/Panel(8003): at android.os.Handler.dispatchMessage(Handler.java:92)
09-12 07:30:34.346: E/Panel(8003): at android.os.Looper.loop(Looper.java:123)
09-12 07:30:34.346: E/Panel(8003): at android.app.ActivityThread.main(ActivityThread.java:4363)
09-12 07:30:34.346: E/Panel(8003): at java.lang.reflect.Method.invokeNative(Native Method)
09-12 07:30:34.346: E/Panel(8003): at java.lang.reflect.Method.invoke(Method.java:521)
09-12 07:30:34.346: E/Panel(8003): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
09-12 07:30:34.346: E/Panel(8003): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
09-12 07:30:34.346: E/Panel(8003): at dalvik.system.NativeStart.main(Native Method)

Here is the code:

private Bitmap mBitmap;
private Canvas mCanvas;
private Bitmap tmpBitmap;
private Canvas tmpCanvas;
private DrawHandler mDrawHandler;
private Canvas tCanvas;
private String mImagePath = Environment.getExternalStorageDirectory() + "/anppp";
private File file;

public void saveapp() {
    Calendar currentDate = Calendar.getInstance();
    SimpleDateFormat formatter = new SimpleDateFormat("yyyyMMMddHmmss");
    String dateNow = formatter.format(currentDate.getTime());
    file = new File(mImagePath + "/" + dateNow + ".png");
    FileOutputStream fos;
    try {
        file.createNewFile();
        fos = new FileOutputStream(file);
        tmpBitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
        fos.close();
    } catch (FileNotFoundException e) {
        Log.e("Panel", "FileNotFoundException", e);
    } catch (IOException e) {
        Log.e("Panel", "IOEception", e);
    }
}

That's it. I do not know what could be wrong ;(

    Read the article

  • ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager

    - by rajbk
    This post walks you through creating a UI for paging, sorting and filtering a list of data items. It makes use of the excellent MVCContrib Grid and Pager Html UI helpers. A sample project is attached at the bottom. Our UI will eventually look like this. The application will make use of the Northwind database. The top portion of the page has a filter area region. The filter region is enclosed in a form tag. The select lists are wired up with jQuery to auto post back the form. The page has a pager region at the top and bottom of the product list. The product list has a link to display more details about a given product. The column headings are clickable for sorting and an icon shows the sort direction. Strongly Typed View Models The views are written to expect strongly typed objects. We suffix these strongly typed objects with ViewModel since they are designed specifically for passing data down to the view.  The following listing shows the ProductViewModel. This class will be used to hold information about a Product. We use attributes to specify if the property should be hidden and what its heading in the table should be. This metadata will be used by the MvcContrib Grid to render the table. Some of the properties are hidden from the UI ([ScaffoldColumn(false)) but are needed because we will be using those for filtering when writing our LINQ query. public ActionResult Index( string productName, int? supplierID, int? categoryID, GridSortOptions gridSortOptions, int? page) {   var productList = productRepository.GetProductsProjected();   // Set default sort column if (string.IsNullOrWhiteSpace(gridSortOptions.Column)) { gridSortOptions.Column = "ProductID"; }   // Filter on SupplierID if (supplierID.HasValue) { productList = productList.Where(a => a.SupplierID == supplierID); }   // Filter on CategoryID if (categoryID.HasValue) { productList = productList.Where(a => a.CategoryID == categoryID); }   // Filter on ProductName if (!string.IsNullOrWhiteSpace(productName)) { productList = productList.Where(a => a.ProductName.Contains(productName)); }   // Create all filter data and set current values if any // These values will be used to set the state of the select list and textbox // by sending it back to the view. var productFilterViewModel = new ProductFilterViewModel(); productFilterViewModel.SelectedCategoryID = categoryID ?? -1; productFilterViewModel.SelectedSupplierID = supplierID ?? -1; productFilterViewModel.Fill();   // Order and page the product list var productPagedList = productList .OrderBy(gridSortOptions.Column, gridSortOptions.Direction) .AsPagination(page ?? 1, 10);     var productListContainer = new ProductListContainerViewModel { ProductPagedList = productPagedList, ProductFilterViewModel = productFilterViewModel, GridSortOptions = gridSortOptions };   return View(productListContainer); } The following diagram shows the rest of the key ViewModels in our design. We have a container class called ProductListContainerViewModel which has nested classes. The ProductPagedList is of type IPagination<ProductViewModel>. The MvcContrib expects the IPagination<T> interface to determine the page number and page size of the collection we are working with. You convert any IEnumerable<T> into an IPagination<T> by calling the AsPagination extension method in the MvcContrib library. It also creates a paged set of type ProductViewModel. The ProductFilterViewModel class will hold information about the different select lists and the ProductName being searched on. 
It will also hold state of any previously selected item in the lists and the previous search criteria (you will recall that this type of state information was stored in Viewstate when working with WebForms). With MVC there is no state storage and so all state has to be fetched and passed back to the view. The GridSortOptions is a type defined in the MvcContrib library and is used by the Grid to determine the current column being sorted on and the current sort direction. The following shows the view and partial views used to render our UI. The Index view expects a type ProductListContainerViewModel which we described earlier. <%Html.RenderPartial("SearchFilters", Model.ProductFilterViewModel); %> <% Html.RenderPartial("Pager", Model.ProductPagedList); %> <% Html.RenderPartial("SearchResults", Model); %> <% Html.RenderPartial("Pager", Model.ProductPagedList); %> The View contains a partial view “SearchFilters” and passes it the ProductViewFilterContainer. The SearchFilter uses this Model to render all the search lists and textbox. The partial view “Pager” uses the ProductPageList which implements the interface IPagination. The “Pager” view contains the MvcContrib Pager helper used to render the paging information. This view is repeated twice since we want the pager UI to be available at the top and bottom of the product list. The Pager partial view is located in the Shared directory so that it can be reused across Views. The partial view “SearchResults” uses the ProductListContainer model. This partial view contains the MvcContrib Grid which needs both the ProdctPagedList and GridSortOptions to render itself. The Controller Action An example of a request like this: /Products?productName=test&supplierId=29&categoryId=4. The application receives this GET request and maps it to the Index method of the ProductController. Within the action we create an IQueryable<ProductViewModel> by calling the GetProductsProjected() method. /// <summary> /// This method takes in a filter list, paging/sort options and applies /// them to an IQueryable of type ProductViewModel /// </summary> /// <returns> /// The return object is a container that holds the sorted/paged list, /// state for the fiters and state about the current sorted column /// </returns> public ActionResult Index( string productName, int? supplierID, int? categoryID, GridSortOptions gridSortOptions, int? page) {   var productList = productRepository.GetProductsProjected();   // Set default sort column if (string.IsNullOrWhiteSpace(gridSortOptions.Column)) { gridSortOptions.Column = "ProductID"; }   // Filter on SupplierID if (supplierID.HasValue) { productList.Where(a => a.SupplierID == supplierID); }   // Filter on CategoryID if (categoryID.HasValue) { productList = productList.Where(a => a.CategoryID == categoryID); }   // Filter on ProductName if (!string.IsNullOrWhiteSpace(productName)) { productList = productList.Where(a => a.ProductName.Contains(productName)); }   // Create all filter data and set current values if any // These values will be used to set the state of the select list and textbox // by sending it back to the view. var productFilterViewModel = new ProductFilterViewModel(); productFilterViewModel.SelectedCategoryID = categoryID ?? -1; productFilterViewModel.SelectedSupplierID = supplierID ?? -1; productFilterViewModel.Fill();   // Order and page the product list var productPagedList = productList .OrderBy(gridSortOptions.Column, gridSortOptions.Direction) .AsPagination(page ?? 
1, 10);     var productListContainer = new ProductListContainerViewModel { ProductPagedList = productPagedList, ProductFilterViewModel = productFilterViewModel, GridSortOptions = gridSortOptions };   return View(productListContainer); } The supplier, category and productname filters are applied to this IQueryable if any are present in the request. The ProductPagedList class is created by applying a sort order and calling the AsPagination method. Finally the ProductListContainerViewModel class is created and returned to the view. You have seen how to use strongly typed views with the MvcContrib Grid and Pager to render a clean lightweight UI with strongly typed views. You also saw how to use partial views to get data from the strongly typed model passed to it from the parent view. The code also shows you how to use jQuery to auto post back. The sample is attached below. Don’t forget to change your connection string to point to the server containing the Northwind database. NorthwindSales_MvcContrib.zip My name is Kobayashi. I work for Keyser Soze.
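As an aside for readers following along: the ProductListContainerViewModel is only described in prose above, so here is an illustrative sketch of what such a container class might look like. The namespaces for IPagination and GridSortOptions are assumed from the MvcContrib library layout, and the exact shape of the class in the downloadable sample may differ.

using MvcContrib.Pagination;
using MvcContrib.UI.Grid;

public class ProductListContainerViewModel
{
    // Paged set of products rendered by the MvcContrib Grid and the repeated Pager partials.
    public IPagination<ProductViewModel> ProductPagedList { get; set; }

    // Select-list contents plus the currently selected supplier/category/product-name filters.
    public ProductFilterViewModel ProductFilterViewModel { get; set; }

    // Current sort column and direction, passed back to the Grid so it can render sort links.
    public GridSortOptions GridSortOptions { get; set; }
}

Keeping all three pieces in one container keeps the Index view strongly typed to a single model while the partial views each receive just the part they need.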

    Read the article

  • Run Windows in Ubuntu with VMware Player

    - by Matthew Guay
    Are you an enthusiast who loves their Ubuntu Linux experience but still needs to use Windows programs?  Here’s how you can get the full Windows experience on Ubuntu with the free VMware Player. Linux has become increasingly consumer friendly, but still, the wide majority of commercial software is only available for Windows and Macs.  Dual-booting between Windows and Linux has been a popular option for years, but this is a frustrating solution since you have to reboot into the other operating system each time you want to run a specific application.  With virtualization, you’ll never have to make this tradeoff.  VMware Player makes it quick and easy to install any edition of Windows in a virtual machine.  With VMware’s great integration tools, you can copy and paste between your Linux and Windows programs and even run native Windows applications side-by-side with Linux ones. Getting Started Download the latest version of VMware Player for Linux, and select either the 32-bit or 64-bit version, depending on your system.  VMware Player is a free download, but requires registration.  Sign in with your VMware account, or create a new one if you don’t already have one. VMware Player is fairly easy to install on Linux, but you will need to start out the installation from the terminal.  First, enter the following to make sure the installer is marked as executable, substituting version/build_number for the version number on the end of the file you downloaded. chmod +x ./VMware-Player-version/build_number.bundle Then, enter the following to start the install, again substituting your version number: gksudo bash ./VMware-Player-version/build_number.bundle You may have to enter your administrator password to start the installation, and then the VMware Player graphical installer will open.  Choose whether you want to check for product updates and submit usage data to VMware, and then proceed with the install as normal. VMware Player installed in only a few minutes in our tests, and was immediately ready to run, no reboot required.  You can now launch it from your Ubuntu menu: click Applications \ System Tools \ VMware Player. You’ll need to accept the license agreement the first time you run it. Welcome to VMware Player!  Now you can create new virtual machines and run pre-built ones on your Ubuntu desktop. Install Windows in VMware Player on Ubuntu Now that you’ve got VMware setup, it’s time to put it to work.  Click the Create a New Virtual Machine as above to start making a Windows virtual machine. In the dialog that opens, select your installer disk or ISO image file that you want to install Windows from.  In this example, we’re select a Windows 7 ISO.  VMware will automatically detect the operating system on the disk or image.  Click Next to continue. Enter your Windows product key, select the edition of Windows to install, and enter your name and password. You can leave the product key field blank and enter it later.  VMware will ask if you want to continue without a product key, so just click Yes to continue. Now enter a name for your virtual machine and select where you want to save it.  Note: This will take up at least 15Gb of space on your hard drive during the install, so make sure to save it on a drive with sufficient storage space. You can choose how large you want your virtual hard drive to be; the default is 40Gb, but you can choose a different size if you wish.  
The entire amount will not be used up on your hard drive initially, but the virtual drive will increase in size up to your maximum as you add files.  Additionally, you can choose if you want the virtual disk stored as a single file or as multiple files.  You will see the best performance by keeping the virtual disk as one file, but the virtual machine will be more portable if it is broken into smaller files, so choose the option that will work best for your needs. Finally, review your settings, and if everything looks good, click Finish to create the virtual machine. VMware will take over now, and install Windows without any further input using its Easy Install.  This is one of VMware’s best features, and is the main reason we find it the easiest desktop virtualization solution to use.   Installing VMware Tools VMware Player doesn’t include the VMware Tools by default; instead, it automatically downloads them for the operating system you’re installing.  Once you’ve downloaded them, it will use those tools anytime you install that OS.  If this is your first Windows virtual machine to install, you may be prompted to download and install them while Windows is installing.  Click Download and Install so your Easy Install will finish successfully. VMware will then download and install the tools.  You may need to enter your administrative password to complete the install. Other than this, you can leave your Windows install unattended; VMware will get everything installed and running on its own. Our test setup took about 30 minutes, and when it was done we were greeted with the Windows desktop ready to use, complete with drivers and the VMware tools.  The only thing missing was the Aero glass feature.  VMware Player is supposed to support the Aero glass effects in virtual machines, and although this works every time when we use VMware Player on Windows, we could not get it to work in Linux.  Other than that, Windows is fully ready to use.  You can copy and paste text, images, or files between Ubuntu and Windows, or simply drag-and-drop files between the two. Unity Mode Using Windows in a window is awkward, and makes your Windows programs feel out of place and hard to use.  This is where Unity mode comes in.  Click Virtual Machine in VMware’s menu, and select Enter Unity. Your Windows desktop will now disappear, and you’ll see a new Windows menu underneath your Ubuntu menu.  This works the same as your Windows Start Menu, and you can open your Windows applications and files directly from it. By default, programs from Windows will have a colored border and a VMware badge in the corner.  You can turn this off from the VMware settings pane.  Click Virtual Machine in VMware’s menu and select Virtual Machine Settings.  Select Unity under the Options tab, and uncheck the Show borders and Show badges boxes if you don’t want them. Unity makes your Windows programs feel at home in Ubuntu.  Here we have Word 2010 and IE8 open beside the Ubuntu Help application.  Notice that the Windows applications show up in the taskbar on the bottom just like the Linux programs.  If you’re using the Compiz graphics effects in Ubuntu, your Windows programs will use them too, including the popular wobbly windows effect. You can switch back to running Windows inside VMware Player’s window by clicking the Exit Unity button in the VMware window. Now, whenever you want to run Windows applications in Linux, you can quickly launch it from VMware Player. Conclusion VMware Player is a great way to run Windows on your Linux computer. 
 It makes it extremely easy to get Windows installed and running, lets you run your Windows programs seamlessly alongside your Linux ones.  VMware products work great in our experience, and VMware Player on Linux was no exception. If you’re a Windows user and you’d like to run Ubuntu on Windows, check out our article on how to Run Ubuntu in Windows with VMware Player. Link Download VMware Player 3 (Registration required) Download Windows 7 Enterprise 90-day trial Similar Articles Productive Geek Tips Enable Copy and Paste from Ubuntu VMware GuestInstall VMware Tools on Ubuntu Edgy EftRestart the Ubuntu Gnome User Interface QuicklyHow to Add a Program to the Ubuntu Startup List (After Login)How To Run Ubuntu in Windows 7 with VMware Player TouchFreeze Alternative in AutoHotkey The Icy Undertow Desktop Windows Home Server – Backup to LAN The Clear & Clean Desktop Use This Bookmarklet to Easily Get Albums Use AutoHotkey to Assign a Hotkey to a Specific Window Latest Software Reviews Tinyhacker Random Tips Xobni Plus for Outlook All My Movies 5.9 CloudBerry Online Backup 1.5 for Windows Home Server Snagit 10 Get a free copy of WinUtilities Pro 2010 World Cup Schedule Boot Snooze – Reboot and then Standby or Hibernate Customize Everything Related to Dates, Times, Currency and Measurement in Windows 7 Google Earth replacement Icon (Icons we like) Build Great Charts in Excel with Chart Advisor

    Read the article

  • NoSQL with MongoDB, NoRM and ASP.NET MVC

    - by shiju
     In this post, I will give an introduction to working with NoSQL and a document database using MongoDB, NoRM and ASP.NET MVC 2.

NoSQL and Document Databases
The NoSQL movement is getting big attention this year, and people are widely talking about document databases and NoSQL along with web application scalability. According to Wikipedia, "NoSQL is a movement promoting a loosely defined class of non-relational data stores that break with a long history of relational databases. These data stores may not require fixed table schemas, usually avoid join operations and typically scale horizontally. Academics and papers typically refer to these databases as structured storage". Document databases are schema free, so you can focus on the problem domain and don't have to worry about updating the schema as your domain evolves. This truly enables domain-driven development. One key pain point of relational databases is keeping the database schema synchronized with your domain entities as the domain evolves. There are lots of NoSQL implementations available, and both CouchDB and MongoDB got my attention. While evaluating CouchDB and MongoDB, I found that CouchDB can't perform dynamic queries, so I picked MongoDB over CouchDB. There are many .NET drivers available for the MongoDB document database.

MongoDB
MongoDB is an open source, scalable, high-performance, schema-free, document-oriented database written in the C++ programming language. It has been developed since October 2007 by 10gen. MongoDB stores your data in binary JSON (BSON) format. MongoDB has been getting a lot of attention, and you can see a list of production deployments at http://www.mongodb.org/display/DOCS/Production+Deployments

NoRM - C# driver for MongoDB
NoRM is a C# driver for MongoDB with LINQ support. The NoRM project is available on GitHub at http://github.com/atheken/NoRM.

Demo with ASP.NET MVC
I will show a simple demo with MongoDB, NoRM and ASP.NET MVC. To work with MongoDB and NoRM, do the following steps:
1) Download the MongoDB database. For Windows 32-bit, download http://downloads.mongodb.org/win32/mongodb-win32-i386-1.4.1.zip and for Windows 64-bit, download http://downloads.mongodb.org/win32/mongodb-win32-x86_64-1.4.1.zip. The zip contains mongod.exe to run the server and mongo.exe for the client.
2) Download the NoRM driver for MongoDB at http://github.com/atheken/NoRM
3) Create a directory called C:\data\db. This is the default location of the MongoDB database; you can override this behavior.
4) Run C:\Mongo\bin\mongod.exe. This will start the MongoDB server.

Now I am going to demonstrate how to program with MongoDB and NoRM in an ASP.NET MVC application. Let's write a domain class:

public class Category
{
    [MongoIdentifier]
    public ObjectId Id { get; set; }

    [Required(ErrorMessage = "Name Required")]
    [StringLength(25, ErrorMessage = "Must be less than 25 characters")]
    public string Name { get; set; }

    public string Description { get; set; }
}

ObjectId is a NoRM type that represents a MongoDB ObjectId. NoRM will automatically update the Id because it is decorated with the MongoIdentifier attribute. The next step is to create a MongoSession class. This will handle all interactions with MongoDB.
internal class MongoSession<TEntity> : IDisposable{    private readonly MongoQueryProvider provider;     public MongoSession()    {        this.provider = new MongoQueryProvider("Expense");    }     public IQueryable<TEntity> Queryable    {        get { return new MongoQuery<TEntity>(this.provider); }    }     public MongoQueryProvider Provider    {        get { return this.provider; }    }     public void Add<T>(T item) where T : class, new()    {        this.provider.DB.GetCollection<T>().Insert(item);    }     public void Dispose()    {        this.provider.Server.Dispose();     }    public void Delete<T>(T item) where T : class, new()    {        this.provider.DB.GetCollection<T>().Delete(item);    }     public void Drop<T>()    {        this.provider.DB.DropCollection(typeof(T).Name);    }     public void Save<T>(T item) where T : class,new()    {        this.provider.DB.GetCollection<T>().Save(item);                }  }    The MongoSession constrcutor will create an instance of MongoQueryProvider that supports the LINQ expression and also create a database with name "Expense". If database is exists, it will use existing database, otherwise it will create a new databse with name  "Expense". The Save method can be used for both Insert and Update operations. If the object is new one, it will create a new record and otherwise it will update the document with given ObjectId.  Let’s create ASP.NET MVC controller actions for CRUD operations for the domain class Category public class CategoryController : Controller{ //Index - Get the category listpublic ActionResult Index(){    using (var session = new MongoSession<Category>())    {        var categories = session.Queryable.AsEnumerable<Category>();        return View(categories);    }} //edit a single category[HttpGet]public ActionResult Edit(ObjectId id) {     using (var session = new MongoSession<Category>())    {        var category = session.Queryable              .Where(c => c.Id == id)              .FirstOrDefault();         return View("Save",category);    } }// GET: /Category/Create[HttpGet]public ActionResult Create(){    var category = new Category();    return View("Save", category);}//insert or update a category[HttpPost]public ActionResult Save(Category category){    if (!ModelState.IsValid)    {        return View("Save", category);    }    using (var session = new MongoSession<Category>())    {        session.Save(category);        return RedirectToAction("Index");    } }//Delete category[HttpPost]public ActionResult Delete(ObjectId Id){    using (var session = new MongoSession<Category>())    {        var category = session.Queryable              .Where(c => c.Id == Id)              .FirstOrDefault();        session.Delete(category);        var categories = session.Queryable.AsEnumerable<Category>();        return PartialView("CategoryList", categories);    } }        }  You can easily work on MongoDB with NoRM and can use with ASP.NET MVC applications. I have created a repository on CodePlex at http://mongomvc.codeplex.com and you can download the source code of the ASP.NET MVC application from here
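As a quick usage illustration of the MongoSession class shown above outside of a controller, something along these lines should work. The sample data values are made up, and the LINQ filter relies on the MongoQueryProvider described in the post; treat this as a sketch rather than code from the original sample.

using System;
using System.Linq;

class Program
{
    static void Main()
    {
        using (var session = new MongoSession<Category>())
        {
            // Insert a new document; NoRM fills in the ObjectId marked with [MongoIdentifier].
            session.Save(new Category { Name = "Books", Description = "Reading material" });

            // Query the collection with LINQ through the provider.
            var books = session.Queryable
                               .Where(c => c.Name == "Books")
                               .ToList();

            foreach (var c in books)
                Console.WriteLine(c.Id + " - " + c.Name);
        }
    }
}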

    Read the article

  • Entity Association Mapping with Code First Part 1: Mapping Complex Types

    - by mortezam
    Last week the CTP5 build of the new Entity Framework Code First has been released by data team at Microsoft. Entity Framework Code-First provides a pretty powerful code-centric way to work with the databases. When it comes to associations, it brings ultimate flexibility. I’m a big fan of the EF Code First approach and am planning to explain association mapping with code first in a series of blog posts and this one is dedicated to Complex Types. If you are new to Code First approach, you can find a great walkthrough here. In order to build a solid foundation for our discussion, we will start by learning about some of the core concepts around the relationship mapping.   What is Mapping?Mapping is the act of determining how objects and their relationships are persisted in permanent data storage, in our case, relational databases. What is Relationship mapping?A mapping that describes how to persist a relationship (association, aggregation, or composition) between two or more objects. Types of RelationshipsThere are two categories of object relationships that we need to be concerned with when mapping associations. The first category is based on multiplicity and it includes three types: One-to-one relationships: This is a relationship where the maximums of each of its multiplicities is one. One-to-many relationships: Also known as a many-to-one relationship, this occurs when the maximum of one multiplicity is one and the other is greater than one. Many-to-many relationships: This is a relationship where the maximum of both multiplicities is greater than one. The second category is based on directionality and it contains two types: Uni-directional relationships: when an object knows about the object(s) it is related to but the other object(s) do not know of the original object. To put this in EF terminology, when a navigation property exists only on one of the association ends and not on the both. Bi-directional relationships: When the objects on both end of the relationship know of each other (i.e. a navigation property defined on both ends). How Object Relationships Are Implemented in POCO domain models?When the multiplicity is one (e.g. 0..1 or 1) the relationship is implemented by defining a navigation property that reference the other object (e.g. an Address property on User class). When the multiplicity is many (e.g. 0..*, 1..*) the relationship is implemented via an ICollection of the type of other object. How Relational Database Relationships Are Implemented? Relationships in relational databases are maintained through the use of Foreign Keys. A foreign key is a data attribute(s) that appears in one table and must be the primary key or other candidate key in another table. With a one-to-one relationship the foreign key needs to be implemented by one of the tables. To implement a one-to-many relationship we implement a foreign key from the “one table” to the “many table”. We could also choose to implement a one-to-many relationship via an associative table (aka Join table), effectively making it a many-to-many relationship. Introducing the ModelNow, let's review the model that we are going to use in order to implement Complex Type with Code First. It's a simple object model which consist of two classes: User and Address. Each user could have one billing address. The Address information of a User is modeled as a separate class as you can see in the UML model below: In object-modeling terms, this association is a kind of aggregation—a part-of relationship. 
Aggregation is a strong form of association; it has some additional semantics with regard to the lifecycle of objects. In this case, we have an even stronger form, composition, where the lifecycle of the part is fully dependent upon the lifecycle of the whole. Fine-grained domain models The motivation behind this design was to achieve Fine-grained domain models. In crude terms, fine-grained means “more classes than tables”. For example, a user may have both a billing address and a home address. In the database, you may have a single User table with the columns BillingStreet, BillingCity, and BillingPostalCode along with HomeStreet, HomeCity, and HomePostalCode. There are good reasons to use this somewhat denormalized relational model (performance, for one). In our object model, we can use the same approach, representing the two addresses as six string-valued properties of the User class. But it’s much better to model this using an Address class, where User has the BillingAddress and HomeAddress properties. This object model achieves improved cohesion and greater code reuse and is more understandable. Complex Types: Splitting a Table Across Multiple Types Back to our model, there is no difference between this composition and other weaker styles of association when it comes to the actual C# implementation. But in the context of ORM, there is a big difference: A composed class is often a candidate Complex Type. But C# has no concept of composition—a class or property can’t be marked as a composition. The only difference is the object identifier: a complex type has no individual identity (i.e. no AddressId defined on Address class) which make sense because when it comes to the database everything is going to be saved into one single table. How to implement a Complex Types with Code First Code First has a concept of Complex Type Discovery that works based on a set of Conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type. Here is the implementation: public class User{    public int UserId { get; set; }    public string FirstName { get; set; }    public string LastName { get; set; }    public string Username { get; set; }    public Address Address { get; set; }} public class Address {     public string Street { get; set; }     public string City { get; set; }            public string PostalCode { get; set; }        }public class EntityMappingContext : DbContext {     public DbSet<User> Users { get; set; }        } With code first, this is all of the code we need to write to create a complex type, we do not need to configure any additional database schema mapping information through Data Annotations or the fluent API. Database SchemaThe mapping result for this object model is as follows: Limitations of this mappingThere are two important limitations to classes mapped as Complex Types: Shared references is not possible: The Address Complex Type doesn’t have its own database identity (primary key) and so can’t be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User Address). 
No elegant way to represent a null reference There is no elegant way to represent a null reference to an Address. When reading from database, EF Code First always initialize Address object even if values in all mapped columns of the complex type are null. This means that if you store a complex type object with all null property values, EF Code First returns a initialized complex type when the owning entity object is retrieved from the database. SummaryIn this post we learned about fine-grained domain models which complex type is just one example of it. Fine-grained is fully supported by EF Code First and is known as the most important requirement for a rich domain model. Complex type is usually the simplest way to represent one-to-one relationships and because the lifecycle is almost always dependent in such a case, it’s either an aggregation or a composition in UML. In the next posts we will revisit the same domain model and will learn about other ways to map a one-to-one association that does not have the limitations of the complex types. References ADO.NET team blog Mapping Objects to Relational Databases Java Persistence with Hibernate
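As a quick addendum, here is a small usage sketch against the model above to make the mapping concrete. The property values are illustrative only, and the exact column names Code First generates for the complex type may vary between CTP builds; the point is simply that the Address travels with the User row.

using System;
using System.Linq;

class Demo
{
    static void Main()
    {
        using (var context = new EntityMappingContext())
        {
            var user = new User
            {
                FirstName = "John",
                LastName = "Doe",
                Username = "jdoe",
                // Address has no key of its own; its scalar properties are stored
                // in extra columns of the Users table (Address_Street style names).
                Address = new Address { Street = "1 Main St", City = "Springfield", PostalCode = "12345" }
            };

            context.Users.Add(user);
            context.SaveChanges();

            // Reading the user back materializes the Address complex type from the same row.
            var reloaded = context.Users.First(u => u.Username == "jdoe");
            Console.WriteLine(reloaded.Address.City);
        }
    }
}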

    Read the article

  • Gone With the Wind?

    - by antony.reynolds
    Where Have All the Composites Gone? I was just asked to help out with an interesting problem at a customer.  All their composites had disappeared from the EM console, none of them showed as loading in the log files and there was an ominous error message in the logs. Symptoms After a server restart the customer noticed that none of his composites were available, they didn’t show in the EM console and in the log files they saw this error message: SEVERE: WLSFabricKernelInitializer.getCompositeList Error during parsing and processing of deployed-composites.xml file This indicates some sort of problem when parsing the deployed-composites.xml file.  This is very bad because the deployed-composites.xml file is basically the table of contents that tells SOA Infrastructure what composites to load and where to find them in MDS.  If you can’t read this file you can’t load any composites and your SOA Server now has all the utility of a chocolate teapot. Verification We can look at the deployed-composites.xml file from MDS either by connecting JDeveloper to MDS, exporting the file using WLST or exporting the whole soa-infra MDS partition by using EM->SOA->soa-infra->Administration->MDS Configuration.  Exporting via EM is probably the easiest because it then prepares you to fix the problem later.  After exporting the partition to local storage on the SOA Server I then ran an XSLT transform across the file deployed-composites/deployed-composites.xml. <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://www.w3.org/1999/xhtml">     <xsl:output indent="yes"/>     <xsl:template match="/">         <testResult>             <composite-series>                 <xsl:attribute name="elementCount"><xsl:value-of select="count(deployed-composites/composite-series)"/></xsl:attribute>                 <xsl:attribute name="nameAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series[@name])"/></xsl:attribute>                 <xsl:attribute name="defaultAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series[@default])"/></xsl:attribute>                 <composite-revision>                     <xsl:attribute name="elementCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision)"/></xsl:attribute>                     <xsl:attribute name="dnAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision[@dn])"/></xsl:attribute>                     <xsl:attribute name="stateAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision[@state])"/></xsl:attribute>                     <xsl:attribute name="modeAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision[@mode])"/></xsl:attribute>                     <xsl:attribute name="locationAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision[@location])"/></xsl:attribute>                     <composite>                         <xsl:attribute name="elementCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision/composite)"/></xsl:attribute>                         <xsl:attribute name="dnAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision/composite[@dn])"/></xsl:attribute>                         <xsl:attribute 
name="deployedTimeAttributeCount"><xsl:value-of select="count(deployed-composites/composite-series/composite-revision/composite[@deployedTime])"/></xsl:attribute>                     </composite>                 </composite-revision>                 <xsl:apply-templates select="deployed-composites/composite-series"/>             </composite-series>         </testResult>     </xsl:template>     <xsl:template match="composite-series">             <xsl:if test="not(@name) or not(@default) or composite-revision[not(@dn) or not(@state) or not(@mode) or not(@location)]">                 <ErrorNode>                     <xsl:attribute name="elementPos"><xsl:value-of select="position()"/></xsl:attribute>                     <xsl:copy-of select="."/>                 </ErrorNode>             </xsl:if>     </xsl:template> </xsl:stylesheet> The output from this is not pretty but it shows any <composite-series> tags that are missing expected attributes (name and default).  It also shows how many composites are in the file (111) and how many revisions of those composites (115). <?xml version="1.0" encoding="UTF-8"?> <testResult xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://www.w3.org/1999/xhtml">    <composite-series elementCount="111" nameAttributeCount="110" defaultAttributeCount="110">       <composite-revision elementCount="115" dnAttributeCount="114" stateAttributeCount="115"                           modeAttributeCount="115"                           locationAttributeCount="114">          <composite elementCount="115" dnAttributeCount="114" deployedTimeAttributeCount="115"/>       </composite-revision>       <ErrorNode elementPos="82">          <composite-series xmlns="">             <composite-revision state="on" mode="active">                <composite deployedTime="2010-12-15T11:50:16.067+01:00"/>             </composite-revision>          </composite-series>       </ErrorNode>    </composite-series> </testResult> From this I could see that one of the <composite-series> elements (number 82 of 111) seemed to be corrupt. Having found the problem I now needed to fix it. Fixing the Problem The solution was really quite easy.  First for safeties sake I took a backup of the exported MDS partition.  I then edited the deployed-composites/deployed-composites.xml file to remove the offending <composite-series> tag. Finally I restarted the SOA domain and was rewarded by seeing that the deployed composites were now visible. Summary One possible cause of not being able to see deployed composites after a SOA 11g system restart is a corrupt deployed-composites.xml file.  Retrieving this file from MDS, repairing it, and replacing it back into MDS can solve the problem.  This still leaves the problem of how did this file become corrupt!

    Read the article

  • Meet the New Windows Azure

    - by Leniel Macaferi
    Hoje estamos lançando um grande conjunto de melhorias para a Windows Azure. A seguir está um breve resumo de apenas algumas destas melhorias: Novo Portal de Administração e Ferramentas de Linha de Comando O lançamento de hoje vem com um novo portal para a Windows Azure, o qual lhe permitirá gerenciar todos os recursos e serviços oferecidos na Windows Azure de uma forma perfeitamente integrada. O portal é muito rápido e fluido, suporta filtragem e classificação dos dados (o que o torna muito fácil de usar em implantações/instalações de grande porte), funciona em todos os navegadores, e oferece um monte de ótimos e novos recursos - incluindo suporte nativo à VM (máquina virtual), Web site, Storage (armazenamento), e monitoramento de Serviços hospedados na Nuvem. O novo portal é construído em cima de uma API de gerenciamento baseada no modelo REST dentro da Windows Azure - e tudo o que você pode fazer através do portal também pode ser feito através de programação acessando esta Web API. Também estamos lançando hoje ferramentas de linha de comando (que, igualmente ao portal, chamam as APIs de Gerenciamento REST) para tornar ainda ainda mais fácil a criação de scripts e a automatização de suas tarefas de administração. Estamos oferecendo para download um conjunto de ferramentas para o Powershell (Windows) e Bash (Mac e Linux). Como nossos SDKs, o código destas ferramentas está hospedado no GitHub sob uma licença Apache 2. Máquinas Virtuais ( Virtual Machines [ VM ] ) A Windows Azure agora suporta a capacidade de implantar e executar VMs duráveis/permanentes ??na nuvem. Você pode criar facilmente essas VMs usando uma nova Galeria de Imagens embutida no novo Portal da Windows Azure ou, alternativamente, você pode fazer o upload e executar suas próprias imagens VHD customizadas. Máquinas virtuais são duráveis ??(o que significa que qualquer coisa que você instalar dentro delas persistirá entre as reinicializações) e você pode usar qualquer sistema operacional nelas. Nossa galeria de imagens nativa inclui imagens do Windows Server (incluindo o novo Windows Server 2012 RC), bem como imagens do Linux (incluindo Ubuntu, CentOS, e as distribuições SUSE). Depois de criar uma instância de uma VM você pode facilmente usar o Terminal Server ou SSH para acessá-las a fim de configurar e personalizar a máquina virtual da maneira como você quiser (e, opcionalmente, capturar uma snapshot (cópia instantânea da imagem atual) para usar ao criar novas instâncias de VMs). Isto te proporciona a flexibilidade de executar praticamente qualquer carga de trabalho dentro da plataforma Windows Azure.   A novo Portal da Windows Azure fornece um rico conjunto de recursos para o gerenciamento de Máquinas Virtuais - incluindo a capacidade de monitorar e controlar a utilização dos recursos dentro delas.  Nosso novo suporte à Máquinas Virtuais também permite a capacidade de facilmente conectar múltiplos discos nas VMs (os quais você pode então montar e formatar como unidades de disco). Opcionalmente, você pode ativar o suporte à replicação geográfica (geo-replication) para estes discos - o que fará com que a Windows Azure continuamente replique o seu armazenamento em um data center secundário (criando um backup), localizado a pelo menos 640 quilômetros de distância do seu data-center principal. 
Virtual Machines (VMs)

Windows Azure now supports the ability to deploy and run durable, persistent VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or you can upload and run your own custom VHD images. Virtual Machines are durable (meaning that anything you install inside them persists across reboots) and you can run any operating system on them. The built-in image gallery includes Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including the Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance, you can easily use Terminal Server or SSH to log into it and configure and customize the virtual machine however you want (and optionally capture a snapshot of the current image to use when creating new VM instances). This gives you the flexibility to run practically any workload on the Windows Azure platform.

The new Windows Azure Portal provides a rich set of features for managing Virtual Machines - including the ability to monitor and track resource utilization within them. The new Virtual Machine support also makes it easy to attach multiple disks to a VM (which you can then mount and format as drives). Optionally, you can enable geo-replication for these disks - which causes Windows Azure to continuously replicate your storage to a secondary data center (creating a backup) located at least 640 kilometers away from your primary data center.

We use the same VHD format that is supported with Windows virtualization today (which we have released as an open specification), so you can easily migrate existing workloads you have already virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which gives you the flexibility to easily migrate cloud-based VM workloads to an on-premises environment. All you need to do is download the VHD file and boot it locally - no import/export step is required.

Web Sites

Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js, and PHP web sites to a highly scalable cloud environment that lets you start small (and for free) and then scale your application as your traffic grows. You can create a new web site in Azure and have it ready to deploy to in under 10 seconds.

The new Windows Azure Portal provides built-in administration support for Web Sites, including the ability to monitor and track resource utilization in real time.

You can deploy to web sites in seconds using FTP, Git, TFS, and Web Deploy. We are also releasing tooling updates for Visual Studio and WebMatrix that make it easy for developers to publish ASP.NET applications to this new offering. The VS and WebMatrix publishing support includes the ability to deploy SQL databases as part of the web site deployment - as well as the ability to incrementally update the database schema with a later deployment.

You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so enables integration with our new online TFS service (which supports a full TFS workflow - including elastic build and test support), or you can create a Git repository and reference it as a remote to perform automatic deployments. Once you deploy using TFS or Git, the deployments tab tracks the deployments you make and lets you select an older (or newer) deployment so you can quickly roll your site back to an earlier state of your code. This provides a very powerful workflow experience.

Windows Azure now lets you deploy up to 10 web sites into a free, shared, multi-tenant hosting environment (where a site you deploy is one of several sites running on a shared set of server resources). This gives you an easy way to get started on projects at no cost.
You can optionally upgrade your sites to run in a "reserved mode" that isolates them, so that you are the only customer within a virtual machine. And you can elastically scale the amount of resources your sites use - letting you, for example, increase your reserved instance capacity as your traffic grows. Windows Azure automatically handles load balancing the traffic across VM instances, and you get the same super-fast deployment options (FTP, Git, TFS, and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis - which lets you scale your resources up and down to match only what you need.

Cloud Services and Distributed Caching

Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scaling to extremely large deployments. We previously referred to this capability as "hosted services" - with this week's release we are renaming it to "cloud services". We are also enabling a lot of new features with them.

Distributed Cache

One of the really cool new features being enabled with cloud services is a new distributed caching capability that lets you use and configure a low-latency, in-memory distributed cache within your applications. This cache is isolated for use only by your applications and has no throttling limits. The cache can grow and shrink dynamically and elastically (without you having to redeploy your application or make code changes), and it supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache, and more). In addition to the AppFabric Cache Server API, this new caching capability also supports the Memcached protocol - which lets you point code written against Memcached at the distributed cache (with no code changes required).

The new distributed cache can be set up to run in one of two ways:

1) Using a co-located cache approach. With this option you allocate a percentage of the memory of your existing web and worker roles to be used by the cache, and the cache then joins that memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application - regardless of whether the cached data is stored on that role or another one. The big benefit of the "co-located" cache option is that it is free (you do not have to pay anything to enable it) and it lets you use what would otherwise be unused memory within your application's VMs.

2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These are also joined into one large distributed cache ring that the other roles in your application can access. You can use these roles to cache tens or even hundreds of GBs of data in memory extremely effectively - and the cache can be elastically grown or shrunk at runtime within your application.
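To make the caching description above a little more concrete, here is a minimal C# sketch (not from the original post) that talks to the distributed cache through the AppFabric Caching API. It assumes the cache client has already been configured for the role (for example via a dataCacheClients configuration section), and the key and value used are purely illustrative.

    using Microsoft.ApplicationServer.Caching;

    public class CacheQuickStart
    {
        public void Run()
        {
            // The factory picks up the cache client settings configured for the role.
            var factory = new DataCacheFactory();
            DataCache cache = factory.GetDefaultCache();

            // Any role instance in the application can read what another instance wrote.
            cache.Put("greeting", "Hello from the distributed cache");
            string greeting = (string)cache.Get("greeting");
        }
    }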
New SDKs and Tooling Support

We have updated all of the Windows Azure SDKs with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of their source code is published under an Apache 2 license and maintained in repositories on GitHub. The .NET SDK for Azure in particular gets a lot of great improvements with today's release, and now includes tooling support for both VS 2010 and VS 2012 RC. We are also now shipping SDK downloads for Windows, Mac, and Linux in the languages offered on all of those systems - so developers can build Windows Azure applications using any operating system during development.

Much, Much More

The summary above is just a short list of some of the improvements shipping in preview or final form today - there is much more included in this release. Among the improvements are new Virtual Private Networking capabilities, a new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new data centers, significant upgrades to our storage and networking hardware, SQL Reporting Services, new identity features, support for more than 40 new countries and territories, and much, much more.

You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live presentation I will be giving today, June 7th, at 1pm PDT (5pm Brasília time), where I will walk through all of the new features. We will be opening up the new features mentioned above for public use a few hours after the presentation ends. We are really excited to see the great applications you will build with them.

Hope this helps,

- Scott

Translated from the original post by Leniel Macaferi.

    Read the article

  • Ajax Control Toolkit May 2012 Release

    - by Stephen.Walther
    I'm happy to announce the May 2012 release of the Ajax Control Toolkit. This newest release of the Ajax Control Toolkit includes a new file upload control which displays file upload progress. We've also added several significant enhancements to the existing HtmlEditorExtender control, such as support for uploading images and Source View.

You can download and start using the newest version of the Ajax Control Toolkit by entering the following command in the Library Package Manager console in Visual Studio:

    Install-Package AjaxControlToolkit

Alternatively, you can download the latest version of the Ajax Control Toolkit from CodePlex: http://AjaxControlToolkit.CodePlex.com

The New Ajax File Upload Control

The most requested new feature for the Ajax Control Toolkit (according to the CodePlex Issue Tracker) has been support for file upload with progress. We worked hard over the last few months to create an entirely new file upload control which displays upload progress. Here is a sample which illustrates how you can use the new AjaxFileUpload control:

    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="01_FileUpload.aspx.cs" Inherits="WebApplication1._01_FileUpload" %>
    <html>
    <head runat="server">
        <title>Simple File Upload</title>
    </head>
    <body>
        <form id="form1" runat="server">
        <div>
            <ajaxToolkit:ToolkitScriptManager runat="server" />
            <ajaxToolkit:AjaxFileUpload
                id="ajaxUpload1"
                OnUploadComplete="ajaxUpload1_OnUploadComplete"
                runat="server" />
        </div>
        </form>
    </body>
    </html>

The page above includes a ToolkitScriptManager control. This control is required to use any of the controls in the Ajax Control Toolkit because it is responsible for loading all of the scripts a control requires. The page also contains an AjaxFileUpload control. The UploadComplete event is handled in the code-behind for the page:

    namespace WebApplication1
    {
        public partial class _01_FileUpload : System.Web.UI.Page
        {
            protected void ajaxUpload1_OnUploadComplete(object sender, AjaxControlToolkit.AjaxFileUploadEventArgs e)
            {
                // Generate file path
                string filePath = "~/Images/" + e.FileName;

                // Save uploaded file to the file system
                ajaxUpload1.SaveAs(MapPath(filePath));
            }
        }
    }

The UploadComplete handler saves each uploaded file by calling the AjaxFileUpload control's SaveAs() method with a full file path. A video in the original post illustrates the process of uploading a file.

Warning: in order to write to the Images folder on a production IIS server, you need Write permissions on the Images folder. You need to grant the IIS Application Pool account permission to write to the Images folder. To learn more, see: http://learn.iis.net/page.aspx/624/application-pool-identities/

Showing File Upload Progress

The new AjaxFileUpload control takes advantage of HTML5 upload progress events (described in the XMLHttpRequest Level 2 standard). This standard is supported by Firefox 8+, Chrome 16+, Safari 5+, and Internet Explorer 10+. In other words, the standard is supported by the most recent versions of all browsers except Internet Explorer, which will support it with the release of Internet Explorer 10. The AjaxFileUpload control works with all browsers, even browsers which do not support the new XMLHttpRequest Level 2 standard. If you use the AjaxFileUpload control with a downlevel browser - such as Internet Explorer 9 - then you get a simple throbber image during a file upload instead of a progress indicator.
Here's how you specify a throbber image when declaring the AjaxFileUpload control:

    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="02_FileUpload.aspx.cs" Inherits="WebApplication1._02_FileUpload" %>
    <html>
    <head id="Head1" runat="server">
        <title>File Upload with Throbber</title>
    </head>
    <body>
        <form id="form1" runat="server">
        <div>
            <ajaxToolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" />
            <ajaxToolkit:AjaxFileUpload
                id="ajaxUpload1"
                OnUploadComplete="ajaxUpload1_OnUploadComplete"
                ThrobberID="MyThrobber"
                runat="server" />
            <asp:Image
                id="MyThrobber"
                ImageUrl="ajax-loader.gif"
                Style="display:None"
                runat="server" />
        </div>
        </form>
    </body>
    </html>

Notice that the page above includes an image with the Id MyThrobber. This image is displayed while files are being uploaded. I use the website http://AjaxLoad.info to generate animated busy wait images.

Drag-And-Drop File Upload

If you are using an uplevel browser, you can drag-and-drop the files which you want to upload onto the AjaxFileUpload control. A video in the original post illustrates how drag-and-drop works. Remember that drag-and-drop will not work on Internet Explorer 9 or older.

Accepting Multiple Files

By default, the AjaxFileUpload control enables you to upload multiple files at a time. When you open the file dialog, use the CTRL or SHIFT key to select multiple files. If you want to restrict the number of files that can be uploaded, use the MaximumNumberOfFiles property like this:

    <ajaxToolkit:AjaxFileUpload
        id="ajaxUpload1"
        OnUploadComplete="ajaxUpload1_OnUploadComplete"
        ThrobberID="throbber"
        MaximumNumberOfFiles="1"
        runat="server" />

In the code above, the maximum number of files which can be uploaded is restricted to a single file.

Restricting Uploaded File Types

You might want to allow only certain types of files to be uploaded. For example, you might want to accept only image uploads. In that case, you can use the AllowedFileTypes property to provide a list of allowed file types like this:

    <ajaxToolkit:AjaxFileUpload
        id="ajaxUpload1"
        OnUploadComplete="ajaxUpload1_OnUploadComplete"
        ThrobberID="throbber"
        AllowedFileTypes="jpg,jpeg,gif,png"
        runat="server" />

The code above prevents any files except jpeg, gif, and png files from being uploaded. (A server-side variation of this check is sketched below.)
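The AllowedFileTypes list above is declared on the control in markup. If you also want to enforce the same restriction on the server, regardless of what the browser submits, you can repeat the check in the UploadComplete handler. The following code-behind sketch is not from the original post: the class name and extension list are illustrative, and it assumes a page wired up like the first AjaxFileUpload sample.

    using System;
    using System.IO;
    using AjaxControlToolkit;

    public partial class SafeFileUpload : System.Web.UI.Page
    {
        // Illustrative whitelist matching the AllowedFileTypes example above.
        private static readonly string[] AllowedExtensions = { ".jpg", ".jpeg", ".gif", ".png" };

        protected void ajaxUpload1_OnUploadComplete(object sender, AjaxFileUploadEventArgs e)
        {
            // Keep only the file name portion of whatever the client sent.
            string fileName = Path.GetFileName(e.FileName);
            string extension = Path.GetExtension(fileName).ToLowerInvariant();

            if (Array.IndexOf(AllowedExtensions, extension) < 0)
            {
                return; // ignore anything that is not on the whitelist
            }

            var upload = (AjaxFileUpload)sender;
            upload.SaveAs(MapPath("~/Images/" + fileName));
        }
    }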
Enhancements to the HtmlEditorExtender

Over the past months, we spent a considerable amount of time making bug fixes and feature enhancements to the existing HtmlEditorExtender control. I want to focus on two of the most significant enhancements that we made to the control: support for Source View and support for uploading images.

Adding Source View Support to the HtmlEditorExtender

When you click the Source View tab, the HtmlEditorExtender changes modes and displays the HTML source of the contents contained in the TextBox being extended. You can use Source View to make fine-grained changes to the HTML before submitting it to the server. For reasons of backwards compatibility, the Source View tab is disabled by default. To enable Source View, you need to declare your HtmlEditorExtender with the DisplaySourceTab property like this:

    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="05_SourceView.aspx.cs" Inherits="WebApplication1._05_SourceView" %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html>
    <head id="Head1" runat="server">
        <title>HtmlEditorExtender with Source View</title>
    </head>
    <body>
        <form id="form1" runat="server">
        <div>
            <ajaxToolkit:ToolkitScriptManager ID="ToolkitScriptManager1" runat="server" />
            <asp:TextBox
                id="txtComments"
                TextMode="MultiLine"
                Columns="60"
                Rows="10"
                Runat="server" />
            <ajaxToolkit:HtmlEditorExtender
                id="HEE1"
                TargetControlID="txtComments"
                DisplaySourceTab="true"
                runat="server" />
        </div>
        </form>
    </body>
    </html>

The page above includes a ToolkitScriptManager, TextBox, and HtmlEditorExtender control. The HtmlEditorExtender extends the TextBox so that it supports rich text editing. Notice that the HtmlEditorExtender includes a DisplaySourceTab property. This property causes a button to appear at the bottom of the HtmlEditorExtender which enables you to switch to Source View.

Note: when using the HtmlEditorExtender, we recommend that you set the DOCTYPE for the document. Otherwise, you can encounter weird formatting issues.

Accepting Image Uploads

We also enhanced the HtmlEditorExtender to support image uploads (another very highly requested feature at CodePlex). A video in the original post illustrates the experience of adding an image to the editor. Once again, for backwards compatibility reasons, support for image uploads is disabled by default. Here's how you can declare the HtmlEditorExtender so that it supports image uploads:

    <ajaxToolkit:HtmlEditorExtender
        id="MyHtmlEditorExtender"
        TargetControlID="txtComments"
        OnImageUploadComplete="MyHtmlEditorExtender_ImageUploadComplete"
        DisplaySourceTab="true"
        runat="server">
        <Toolbar>
            <ajaxToolkit:Bold />
            <ajaxToolkit:Italic />
            <ajaxToolkit:Underline />
            <ajaxToolkit:InsertImage />
        </Toolbar>
    </ajaxToolkit:HtmlEditorExtender>

There are two things that you should notice about the code above. First, notice that an InsertImage toolbar button is added to the HtmlEditorExtender toolbar. This HtmlEditorExtender will render toolbar buttons for bold, italic, underline, and insert image. Second, notice that the HtmlEditorExtender includes an event handler for the ImageUploadComplete event. The code for this event handler is below:

    using System.Web.UI;
    using AjaxControlToolkit;

    namespace WebApplication1
    {
        public partial class _06_ImageUpload : System.Web.UI.Page
        {
            protected void MyHtmlEditorExtender_ImageUploadComplete(object sender, AjaxFileUploadEventArgs e)
            {
                // Generate file path
                string filePath = "~/Images/" + e.FileName;

                // Save uploaded file to the file system
                var ajaxFileUpload = (AjaxFileUpload)sender;
                ajaxFileUpload.SaveAs(MapPath(filePath));

                // Update client with saved image path
                e.PostedUrl = Page.ResolveUrl(filePath);
            }
        }
    }

Within the ImageUploadComplete event handler, you need to do two things: 1) save the uploaded image (for example, to the file system, a database, or Azure storage), and 2) provide the URL of the saved image so the image can be displayed within the HtmlEditorExtender. In the code above, the uploaded image is saved to the ~/Images folder. The path of the saved image is returned to the client by setting the AjaxFileUploadEventArgs PostedUrl property. Not surprisingly, under the covers, the HtmlEditorExtender uses the AjaxFileUpload control.
You can get a direct reference to the AjaxFileUpload control used by an HtmlEditorExtender with the following code:

    void Page_Load()
    {
        var ajaxFileUpload = MyHtmlEditorExtender.AjaxFileUpload;
        ajaxFileUpload.AllowedFileTypes = "jpg,jpeg";
    }

The code above illustrates how you can restrict the types of images that can be uploaded to the HtmlEditorExtender. This code prevents anything but jpeg images from being uploaded.

Summary

This was the most difficult release of the Ajax Control Toolkit to date. We iterated through several designs for the AjaxFileUpload control - with each iteration, the goal was to make the AjaxFileUpload control easier for developers to use. My hope is that we were able to create a control which Web Forms developers will find very intuitive. I want to thank the developers on the Superexpert.com team for their hard work on this release.

    Read the article
