Search Results

Search found 19975 results on 799 pages for 'disk queue length'.


  • How do I repair the corrupted files found by sfc /scannow? "Windows Resource Protection found corrupt files but was unable to fix some of them."

    - by galacticninja
    After running chkdsk C: /F /R and finding out that my hard disk has 24 KB in bad sectors (log is posted below), I decided to run Windows 7's System File Checker utility (sfc /scannow). SFC showed the ff. message after I ran it: "Windows Resource Protection found corrupt files but was unable to fix some of them. Details are included in the CBS.Log windir\Logs\CBS\CBS.log." Since the CBS.log file is too large, I ran findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log >"%userprofile%\Desktop\sfcdetails.txt" (as per Microsoft's KB 928228 article) to only get the log text pertaining to the corrupt files. (log is also posted below) How do I troubleshoot and repair the corrupted files mentioned by sfc /scannow? My OS is Windows 7, 64-bit. chkdsk log Checking file system on C: The type of the file system is NTFS. A disk check has been scheduled. Windows will now check the disk. CHKDSK is verifying files (stage 1 of 5)... 936192 file records processed. File verification completed. 25238 large file records processed. 0 bad file records processed. 4 EA records processed. 44 reparse records processed. CHKDSK is verifying indexes (stage 2 of 5)... 1051640 index entries processed. Index verification completed. 0 unindexed files scanned. 0 unindexed files recovered. CHKDSK is verifying security descriptors (stage 3 of 5)... 936192 file SDs/SIDs processed. Cleaning up 24 unused index entries from index $SII of file 0x9. Cleaning up 24 unused index entries from index $SDH of file 0x9. Cleaning up 24 unused security descriptors. Security descriptor verification completed. 57725 data files processed. CHKDSK is verifying Usn Journal... 36994248 USN bytes processed. Usn Journal verification completed. CHKDSK is verifying file data (stage 4 of 5)... 936176 files processed. File data verification completed. CHKDSK is verifying free space (stage 5 of 5)... 306238 free clusters processed. Free space verification is complete. Adding 1 bad clusters to the Bad Clusters File. Correcting errors in the Volume Bitmap. Windows has made corrections to the file system. 488282111 KB total disk space. 485595420 KB in 766458 files. 401856 KB in 57726 indexes. 24 KB in bad sectors. 1059863 KB in use by the system. 65536 KB occupied by the log file. 1224948 KB available on disk. 4096 bytes in each allocation unit. 122070527 total allocation units on disk. 306237 allocation units available on disk. Internal Info: 00 49 0e 00 81 93 0c 00 34 01 17 00 00 00 00 00 .I......4....... 6b 29 00 00 2c 00 00 00 00 00 00 00 00 00 00 00 k)..,........... 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ sfc /scannow log (through findstr /c:"[SR]" %windir%\Logs\CBS\CBS.log >"%userprofile%\Desktop\sfcdetails.txt") Note: The full log is at http://pastebin.com/raw.php?i=gTEGZmWj . I've only quoted parts of the full log below (mostly from the last part), as the full log won't fit within the character limit for questions. I've added it to serve as a preview. ... 2013-12-28 19:37:50, Info CSI00000542 [SR] Beginning Verify and Repair transaction 2013-12-28 19:37:55, Info CSI00000544 [SR] Verify complete 2013-12-28 19:37:56, Info CSI00000545 [SR] Verifying 95 (0x000000000000005f) components 2013-12-28 19:37:56, Info CSI00000546 [SR] Beginning Verify and Repair transaction 2013-12-28 19:38:03, Info CSI00000548 [SR] Verify complete 2013-12-28 19:38:03, Info CSI00000549 [SR] Repairing 43 (0x000000000000002b) components 2013-12-28 19:38:03, Info CSI0000054a [SR] Beginning Verify and Repair transaction ... 
2013-12-28 19:38:15, Info CSI00000730 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:62{31}]"GroupPolicy-Admin-Gpedit-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000733 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:30{15}]"frs-core-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000736 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:26{13}]"gpmgmt-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000739 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:74{37}]"MediaServer-ASPAdmin-Migration-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI0000073c [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:36{18}]"Ldap-Client-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI0000073f [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:38{19}]"iSNS_Service-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000742 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:76{38}]"MediaServer-Multicast-Migration-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000745 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:78{39}]"Kerberos-Key-Distribution-Center-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000748 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:86{43}]"GroupPolicy-CSE-SoftwareInstallation-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI0000074b [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:28{14}]"ieframe-dl.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI0000074e [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:76{38}]"GroupPolicy-Admin-Gpedit-Snapin-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000751 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:32{16}]"IPSec-Svc-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000754 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:22{11}]"HTTP-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI00000757 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:56{28}]"MediaServer-Migration-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI0000075a [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:26{13}]"GPBase-DL.man"; source file in store is also corrupted 2013-12-28 19:38:15, Info CSI0000075d [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:38{19}]"IasMigPlugin-DL.man"; source file in store is also 
corrupted 2013-12-28 19:38:15, Info CSI00000760 [SR] Could not reproject corrupted file [ml:520{260},l:84{42}]"\??\C:\Windows\System32\migwiz\dlmanifests"\[l:50{25}]"International-Core-DL.man"; source file in store is also corrupted 2013-12-28 19:38:16, Info CSI00000762 [SR] Cannot repair member file [l:24{12}]"wbemdisp.dll" of Microsoft-Windows-WMI-Scripting, Version = 6.1.7600.16385, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch 2013-12-28 19:38:16, Info CSI00000763 [SR] This component was referenced by [l:202{101}]"Microsoft-Windows-Foundation-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.WindowsFoundationDelivery" 2013-12-28 19:38:16, Info CSI00000766 [SR] Could not reproject corrupted file [ml:58{29},l:56{28}]"\??\C:\Windows\SysWOW64\wbem"\[l:24{12}]"wbemdisp.dll"; source file in store is also corrupted 2013-12-28 19:38:16, Info CSI00000768 [SR] Cannot repair member file [l:56{28}]"Microsoft.MediaCenter.UI.dll" of Microsoft.MediaCenter.UI, Version = 6.1.7601.17514, pA = PROCESSOR_ARCHITECTURE_MSIL (8), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch 2013-12-28 19:38:16, Info CSI00000769 [SR] This component was referenced by [l:176{88}]"Microsoft-Windows-MediaCenter-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.MediaCenter" 2013-12-28 19:38:16, Info CSI0000076c [SR] Could not reproject corrupted file [ml:520{260},l:40{20}]"\??\C:\Windows\ehome"\[l:56{28}]"Microsoft.MediaCenter.UI.dll"; source file in store is also corrupted 2013-12-28 19:38:16, Info CSI0000076e [SR] Cannot repair member file [l:24{12}]"ReAgentc.exe" of Microsoft-Windows-WinRE-RecoveryTools, Version = 6.1.7601.17514, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch 2013-12-28 19:38:16, Info CSI0000076f [SR] This component was referenced by [l:202{101}]"Microsoft-Windows-Foundation-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.WindowsFoundationDelivery" 2013-12-28 19:38:16, Info CSI00000772 [SR] Could not reproject corrupted file [ml:48{24},l:46{23}]"\??\C:\Windows\SysWOW64"\[l:24{12}]"ReAgentc.exe"; source file in store is also corrupted 2013-12-28 19:38:16, Info CSI00000774 [SR] Cannot repair member file [l:82{41}]"System.Management.Automation.dll-Help.xml" of Microsoft-Windows-PowerShell-PreLoc.Resources, Version = 6.1.7600.16385, pA = PROCESSOR_ARCHITECTURE_AMD64 (9), Culture = [l:10{5}]"en-US", VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch 2013-12-28 19:38:16, Info CSI00000775 [SR] This component was referenced by [l:266{133}]"Microsoft-Windows-Client-Features-Package~31bf3856ad364e35~amd64~en-US~6.1.7601.17514.Microsoft-Windows-Client-Features-Language-Pack" 2013-12-28 19:38:16, Info CSI00000778 [SR] Could not reproject corrupted file [ml:520{260},l:104{52}]"\??\C:\Windows\System32\WindowsPowerShell\v1.0\en-US"\[l:82{41}]"System.Management.Automation.dll-Help.xml"; source file in store is also corrupted 2013-12-28 19:38:16, Info CSI0000077a [SR] Cannot repair member file [l:18{9}]"hlink.dll" of Microsoft-Windows-HLink, Version = 6.1.7600.16385, pA = PROCESSOR_ARCHITECTURE_INTEL (0), Culture 
neutral, VersionScope = 1 nonSxS, PublicKeyToken = {l:8 b:31bf3856ad364e35}, Type neutral, TypeName neutral, PublicKey neutral in the store, hash mismatch 2013-12-28 19:38:16, Info CSI0000077b [SR] This component was referenced by [l:202{101}]"Microsoft-Windows-Foundation-Package~31bf3856ad364e35~amd64~~6.1.7601.17514.WindowsFoundationDelivery" 2013-12-28 19:38:16, Info CSI0000077e [SR] Could not reproject corrupted file [ml:48{24},l:46{23}]"\??\C:\Windows\SysWOW64"\[l:18{9}]"hlink.dll"; source file in store is also corrupted 2013-12-28 19:38:16, Info CSI00000780 [SR] Repair complete 2013-12-28 19:38:16, Info CSI00000781 [SR] Committing transaction 2013-12-28 19:38:19, Info CSI00000785 [SR] Verify and Repair Transaction completed. All files and registry keys listed in this transaction have been successfully repaired
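    Since the log shows the store copies are corrupted too, sfc /scannow alone cannot repair these. Two common follow-ups on Windows 7, sketched here with illustrative drive letters and one file taken from the log (adjust paths before use), are running SFC offline from the recovery environment, or taking ownership of a damaged file and replacing it with a known-good copy from a machine with the same Windows version, edition and architecture:

      REM From the Windows Recovery Environment (System Recovery Options > Command Prompt):
      sfc /scannow /offbootdir=D:\ /offwindir=D:\Windows

      REM Or replace a single file manually from a known-good source:
      takeown /f C:\Windows\SysWOW64\wbem\wbemdisp.dll
      icacls C:\Windows\SysWOW64\wbem\wbemdisp.dll /grant Administrators:F
      copy E:\known-good\wbemdisp.dll C:\Windows\SysWOW64\wbem\wbemdisp.dll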

  • MySQL query, 2 similar servers, 2 minute difference in execution times

    - by mr12086
    I had a similar question on stack overflow, but it seems to be more server/mysql setup related than coding. The queries below all execute instantly on our development server where as they can take upto 2 minutes 20 seconds. The query execution time seems to be affected by home ambiguous the LIKE string's are. If they closely match a country that has few matches it will take less time, and if you use something like 'ge' for germany - it will take longer to execute. But this doesn't always work out like that, at times its quite erratic. Sending data appears to be the culprit but why and what does that mean. Also memory on production looks to be quite low (free memory)? Production: Intel Quad Xeon E3-1220 3.1GHz 4GB DDR3 2x 1TB SATA in RAID1 Network speed 100Mb Ubuntu Development Intel Core i3-2100, 2C/4T, 3.10GHz 500 GB SATA - No RAID 4GB DDR3 UPDATE 2 : mysqltuner output: [prod] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.61-0ubuntu0.10.04.1 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 103M (Tables: 180) [--] Data in InnoDB tables: 491M (Tables: 19) [!!] Total fragmented tables: 38 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 77d 4h 6m 1s (53M q [7.968 qps], 14M conn, TX: 87B, RX: 12B) [--] Reads / Writes: 98% / 2% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (12K/53M) [OK] Highest usage of available connections: 22% (34/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/10.6M [OK] Key buffer hit rate: 98.7% (162M cached / 2M reads) [OK] Query cache efficiency: 20.7% (7M cached / 36M selects) [!!] Query cache prunes per day: 3934 [OK] Sorts requiring temporary tables: 1% (3K temp sorts / 230K sorts) [!!] Joins performed without indexes: 71068 [OK] Temporary tables created on disk: 24% (3M on disk / 13M total) [OK] Thread cache hit rate: 99% (690 created / 14M connections) [!!] Table cache hit rate: 0% (64 open / 85M opened) [OK] Open file limit used: 12% (128/1K) [OK] Table locks acquired immediately: 99% (16M immediate / 16M locks) [!!] InnoDB data size / buffer pool: 491.9M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Adjust your join queries to always utilize indexes Increase table_cache gradually to avoid file descriptor limits Variables to adjust: query_cache_size (> 16M) join_buffer_size (> 128.0K, or always use indexes with joins) table_cache (> 64) innodb_buffer_pool_size (>= 491M) [dev] -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.62-0ubuntu0.11.10.1 [!!] 
Switch to 64-bit OS - MySQL cannot currently use all of your RAM -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 185M (Tables: 632) [--] Data in InnoDB tables: 967M (Tables: 38) [!!] Total fragmented tables: 73 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 1d 2h 26m 9s (5K q [0.058 qps], 1K conn, TX: 4M, RX: 1M) [--] Reads / Writes: 99% / 1% [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads) [OK] Maximum possible memory usage: 463.8M (11% of installed RAM) [OK] Slow queries: 0% (0/5K) [OK] Highest usage of available connections: 1% (2/151) [OK] Key buffer size / total MyISAM indexes: 16.0M/18.6M [OK] Key buffer hit rate: 99.9% (60K cached / 36 reads) [OK] Query cache efficiency: 44.5% (1K cached / 2K selects) [OK] Query cache prunes per day: 0 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 44 sorts) [OK] Temporary tables created on disk: 24% (162 on disk / 666 total) [OK] Thread cache hit rate: 99% (2 created / 1K connections) [!!] Table cache hit rate: 1% (64 open / 4K opened) [OK] Open file limit used: 8% (88/1K) [OK] Table locks acquired immediately: 100% (1K immediate / 1K locks) [!!] InnoDB data size / buffer pool: 967.7M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: table_cache (> 64) innodb_buffer_pool_size (>= 967M) UPDATE 1: When testing the queries listed here there is usually no more than one other query taking place, and usually none. Because production is actually handling apache requests that development gets very few of as it's only myself and 1 other who accesses it - could the 4GB of RAM be getting exhausted by using the single machine for both apache and mysql server? Production: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 24872 MB in 2.00 seconds = 12450.72 MB/sec Timing buffered disk reads: 368 MB in 3.00 seconds = 122.49 MB/sec sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 24786 MB in 2.00 seconds = 12407.22 MB/sec Timing buffered disk reads: 350 MB in 3.00 seconds = 116.53 MB/sec Server version(mysql + ubuntu versions): 5.1.61-0ubuntu0.10.04.1 Development: sudo hdparm -tT /dev/sda /dev/sda: Timing cached reads: 10632 MB in 2.00 seconds = 5319.40 MB/sec Timing buffered disk reads: 400 MB in 3.01 seconds = 132.85 MB/sec Server version(mysql + ubuntu versions): 5.1.62-0ubuntu0.11.10.1 ORIGINAL DATA : This query is NOT the query in question but is related so ill post it. 
SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' And the explain plan for the above query is, run on both dev and production produce the same plan. +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ | 1 | SIMPLE | p2 | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index | | 1 | SIMPLE | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | const | 796 | Using where | | 1 | SIMPLE | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using index | | 1 | SIMPLE | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 1 | SIMPLE | f2 | ref | form_project_id | form_project_id | 4 | const | 15 | Using where | | 1 | SIMPLE | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | +----+-------------+-------+--------+----------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+-------------+ This query takes 2 minutes ~20 seconds to execute. 
The query that is ACTUALLY being run on the server is this one: SELECT COUNT(*) AS num_results FROM (SELECT f.form_question_has_answer_id FROM form_question_has_answer f INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29') AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%') AND f.form_question_has_answer_form_id = '174' GROUP BY f.form_question_has_answer_id;) dctrn_count_query; With explain plans (again same on dev and production): +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ | 1 | PRIMARY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Select tables optimized away | | 2 | DERIVED | p2 | const | PRIMARY | PRIMARY | 4 | | 1 | Using index | | 2 | DERIVED | f | ref | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | | 797 | Using where | | 2 | DERIVED | p | ref | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id,project_company_has_user_garbage_collection | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where | | 2 | DERIVED | f2 | ref | form_project_id | form_project_id | 4 | | 15 | Using where | | 2 | DERIVED | c | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where | | 2 | DERIVED | u | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_user_id | 1 | Using where; Using index | +----+-------------+-------+--------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+---------+----------------------------------------------------+------+------------------------------+ On the production server the information I have is as follows. 
Upon execution: +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (2 min 14.28 sec) Show profile: +--------------------------------+------------+ | Status | Duration | +--------------------------------+------------+ | starting | 0.000016 | | checking query cache for query | 0.000057 | | Opening tables | 0.004388 | | System lock | 0.000003 | | Table lock | 0.000036 | | init | 0.000030 | | optimizing | 0.000016 | | statistics | 0.000111 | | preparing | 0.000022 | | executing | 0.000004 | | Sorting result | 0.000002 | | Sending data | 136.213836 | | end | 0.000007 | | query end | 0.000002 | | freeing items | 0.004273 | | storing result in query cache | 0.000010 | | logging slow query | 0.000001 | | logging slow query | 0.000002 | | cleaning up | 0.000002 | +--------------------------------+------------+ On development the results are as follows. +-------------+ | num_results | +-------------+ | 3 | +-------------+ 1 row in set (0.08 sec) Again the profile for this query: +--------------------------------+----------+ | Status | Duration | +--------------------------------+----------+ | starting | 0.000022 | | checking query cache for query | 0.000148 | | Opening tables | 0.000025 | | System lock | 0.000008 | | Table lock | 0.000101 | | optimizing | 0.000035 | | statistics | 0.001019 | | preparing | 0.000047 | | executing | 0.000008 | | Sorting result | 0.000005 | | Sending data | 0.086565 | | init | 0.000015 | | optimizing | 0.000006 | | executing | 0.000020 | | end | 0.000004 | | query end | 0.000004 | | freeing items | 0.000028 | | storing result in query cache | 0.000005 | | removing tmp table | 0.000008 | | closing tables | 0.000008 | | logging slow query | 0.000002 | | cleaning up | 0.000005 | +--------------------------------+----------+ If i remove user and/or project innerjoins the query is reduced to 30s. Last bit of information I have: Mysqlserver and Apache are on the same box, there is only one box for production. Production output from top: before & after. top - 15:43:25 up 78 days, 12:11, 4 users, load average: 1.42, 0.99, 0.78 Tasks: 162 total, 2 running, 160 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 50.4%sy, 0.0%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3772580k used, 265288k free, 243704k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207944k cached top - 15:44:31 up 78 days, 12:13, 4 users, load average: 1.94, 1.23, 0.87 Tasks: 160 total, 2 running, 157 sleeping, 0 stopped, 1 zombie Cpu(s): 0.2%us, 50.6%sy, 0.0%ni, 49.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3834300k used, 203568k free, 243736k buffers Swap: 3905528k total, 265384k used, 3640144k free, 1207804k cached But this isn't a good representation of production's normal status so here is a grab of it from today outside of executing the queries. top - 11:04:58 up 79 days, 7:33, 4 users, load average: 0.39, 0.58, 0.76 Tasks: 156 total, 1 running, 155 sleeping, 0 stopped, 0 zombie Cpu(s): 3.3%us, 2.8%sy, 0.0%ni, 93.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4037868k total, 3676136k used, 361732k free, 271480k buffers Swap: 3905528k total, 268736k used, 3636792k free, 1063432k cached Development: This one doesn't change during or after. 
top - 15:47:07 up 110 days, 22:11, 7 users, load average: 0.17, 0.07, 0.06 Tasks: 210 total, 2 running, 208 sleeping, 0 stopped, 0 zombie Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 4111972k total, 1821100k used, 2290872k free, 238860k buffers Swap: 4183036k total, 66472k used, 4116564k free, 921072k cached
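    The mysqltuner output above already points at the most likely cause: roughly 491 MB of InnoDB data against an 8 MB buffer pool, so the long "Sending data" phase on production is mostly disk reads, while development's smaller working set stays cached. A my.cnf sketch, with values that are only illustrative for a 4 GB box shared with Apache (MySQL 5.1 needs a restart for these):

      [mysqld]
      innodb_buffer_pool_size = 1G     # big enough to hold the ~500 MB of InnoDB data
      table_cache             = 512    # tuner reports a 0% table cache hit rate
      join_buffer_size        = 1M     # or add indexes so joins stop scanning
      query_cache_size        = 32M

    Note that LIKE '%ge%' on LCASE(company_country) can never use an index, so a bigger buffer pool mostly hides that cost rather than removing it.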

  • DataContractSerializer and deserializing web service response types

    - by matra
    Hi, I am using calling web services and using WCF generated service-reference on the client. I have saved XML responses that are received from test service to disk (without SOAP envelope and body tags) I would like to load them from disk and create objects from them. Lets' take the following method from my web service: SomeMethodResponse SomeMethod(SomeMethodRequest req) I manually (through SOAP UI) save the response to disk to file, Sample response: < SomeMethodResponse xmlns="http://myNamespace"> <SomeMember1>value</SomeMember1> </SomeMethodResponse xmlns="http://myNamespace"> Then I try to deserialize the object from file using: DataContractSerializer dcs = new DataContractSerializer(typeof(SomeMethodResponse)) This fails – the serializer complains with the error, that it is expecting element in namespace 'http://schemas.datacontract.org/2004/07', but found element in 'http://myNamespace'. Question: Why does the DataContractSerializer not use the namespace, that is declared on SomeMethodResponseType with XmlTypeAttribute(Namespace="http://myNamespace")? I can work around this by explicitly providing the namespace and the root element to DataContractSerializer constructor. But then it fails with message similar to: Error in line X position Y (last line of the XMLdocument). 'EndElement' 'SomeMethodResponse from namespace 'httpmyNapespace’ is not expected. Expecting element 'someNameField'. SomeName is an element in the XSD that web service is using. It is also a property on the SomeMethodResponse type, backed by the private field called someNameField. It looks like DataContractSerializer is trying to deserialize the fields in addition to properties. How can I deserailize XML that I have saved from disk and get back the object of same type that SomeMethod is returning? Thanks, Matra
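    Because the generated response type carries XmlSerializer attributes such as XmlTypeAttribute, DataContractSerializer ignores them, which explains both the schemas.datacontract.org default namespace and the attempt to read private backing fields. A sketch using XmlSerializer instead, with type and file names taken from the question (the XmlRootAttribute override is only needed if the generated type lacks its own root attribute):

      using System.IO;
      using System.Xml.Serialization;

      var root = new XmlRootAttribute("SomeMethodResponse") { Namespace = "http://myNamespace" };
      var serializer = new XmlSerializer(typeof(SomeMethodResponse), root);

      using (var stream = File.OpenRead(@"C:\saved-responses\SomeMethodResponse.xml"))
      {
          var response = (SomeMethodResponse)serializer.Deserialize(stream);
          // response is the same type the SomeMethod proxy call would return.
      }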

  • Web service CopyIntoItems is not working to upload a file to SharePoint

    - by Joeri
    The following piece of C# is always failing with 1 Unknown Object reference not set to an instance of an object Anybody some idea what i am missing? try { //Copy WebService Settings String strUserName = "abc"; String strPassword = "abc"; String strDomain = "SVR03"; String FileName = "Filename.xls"; WebReference.Copy copyService = new WebReference.Copy(); copyService.Url = "http://192.168.11.253/_vti_bin/copy.asmx"; copyService.Credentials = new NetworkCredential (strUserName, strPassword, strDomain); // Filestream of attachment FileStream MyFile = new FileStream(@"C:\temp\28200.xls", FileMode.Open, FileAccess.Read); // Read the attachment in to a variable byte[] Contents = new byte[MyFile.Length]; MyFile.Read(Contents, 0, (int)MyFile.Length); MyFile.Close(); //Change file name if not exist then create new one String[] destinationUrl = { "http://192.168.11.253/Shared Documents/28200.xls" }; // Setup some SharePoint metadata fields WebReference.FieldInformation fieldInfo = new WebReference.FieldInformation(); WebReference.FieldInformation[] ListFields = { fieldInfo }; //Copy the document from Local to SharePoint WebReference.CopyResult[] result; uint NewListId = copyService.CopyIntoItems (FileName, destinationUrl, ListFields, Contents, out result); if (result.Length < 1) Console.WriteLine("Unable to create a document library item"); else { Console.WriteLine( result.Length ); Console.WriteLine( result[0].ErrorCode ); Console.WriteLine( result[0].ErrorMessage ); Console.WriteLine( result[0].DestinationUrl); } } catch (Exception ex) { Console.WriteLine("Exception: {0}", ex.Message); }
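    For reference, the two changes most often reported to clear the "Object reference not set" CopyResult from Copy.asmx are giving the FieldInformation real values and passing a full URL as the source (many examples simply reuse the destination URL when the source is a local file). A sketch against the code above; the field chosen is only an example:

      WebReference.FieldInformation fieldInfo = new WebReference.FieldInformation();
      fieldInfo.DisplayName  = "Title";
      fieldInfo.InternalName = "Title";
      fieldInfo.Type         = WebReference.FieldType.Text;
      fieldInfo.Value        = "28200";
      WebReference.FieldInformation[] listFields = { fieldInfo };

      WebReference.CopyResult[] result;
      uint copyResult = copyService.CopyIntoItems(
          destinationUrl[0],   // source URL: a full URL rather than just "Filename.xls"
          destinationUrl, listFields, Contents, out result);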

  • how to send image to remote server using webservices in android only save to byte array retrieve ima

    - by narasimha
    hi sir i am implemented this code public class ImageTest extends Activity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); ImageView image = (ImageView) findViewById(R.id.picview); EditText value=(EditText)findViewById(R.id.EditText01); FileInputStream in; BufferedInputStream buf; try { in = new FileInputStream("/sdcard/pictures/1.jpg"); buf = new BufferedInputStream(in,1070); System.out.println("1.................."+buf); byte[] bMapArray= new byte[buf.available()]; buf.read(bMapArray); System.out.println("2................."+buf.read(bMapArray)); Bitmap bMap = BitmapFactory.decodeByteArray(bMapArray, 0, bMapArray.length); for (int i = 0; i < bMapArray.length; i++) { System.out.print(bMapArray[i]); } System.out.println("3......................"+bMap); System.out.println("4........bitmaparray"+bMap.extractAlpha()); System.out.println("5......................"+bMapArray); System.out.println("6......................"+ bMapArray.length); image.setImageBitmap(bMap); value.setText(bMapArray.length); if (in != null) { in.close(); } if (buf != null) { buf.close(); } } catch (Exception e) { Log.e("Error reading file", e.toString()); } } } 04-14 11:46:16.543: INFO/System.out(736): 2.................-1 3......................android.graphics.Bitmap@435a2d98 4........bitmaparrayandroid.graphics.Bitmap@435a3310 5......................[B@435a2758 6......................1035
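    Note that the code above reads the stream twice (the second read inside the println is why "2..." prints -1) and sizes the buffer with available(), which is not guaranteed to be the file length. A sketch of reading the image in one pass and Base64-encoding it for a web-service call; android.util.Base64 needs API level 8 or later, older projects often bundle commons-codec instead:

      import java.io.DataInputStream;
      import java.io.File;
      import java.io.FileInputStream;
      import java.io.IOException;
      import android.util.Base64;

      private String readImageAsBase64(String path) throws IOException {
          File file = new File(path);
          byte[] imageBytes = new byte[(int) file.length()];   // size from the file, not available()
          DataInputStream in = new DataInputStream(new FileInputStream(file));
          try {
              in.readFully(imageBytes);                        // one complete read
          } finally {
              in.close();
          }
          return Base64.encodeToString(imageBytes, Base64.DEFAULT);  // text-safe for the service call
      }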

  • Failed sending byte array to JAX-WS web service on Axis

    - by user304309
    Hi I have made a small example to show my problem. Here is my web-service: package service; import javax.jws.WebMethod; import javax.jws.WebService; @WebService public class BytesService { @WebMethod public String redirectString(String string){ return string+" - is what you sended"; } @WebMethod public byte[] redirectBytes(byte[] bytes) { System.out.println("### redirectBytes"); System.out.println("### bytes lenght:" + bytes.length); System.out.println("### message" + new String(bytes)); return bytes; } @WebMethod public byte[] genBytes() { byte[] bytes = "Hello".getBytes(); return bytes; } } I pack it in jar file and store in "axis2-1.5.1/repository/servicejars" folder. Then I generate client Proxy using Eclipse for EE default utils. And use it in my code in the following way: BytesService service = new BytesServiceProxy(); System.out.println("Redirect string"); System.out.println(service.redirectString("Hello")); System.out.println("Redirect bytes"); byte[] param = { (byte)21, (byte)22, (byte)23 }; System.out.println(param.length); param = service.redirectBytes(param); System.out.println(param.length); System.out.println("Gen bytes"); param = service.genBytes(); System.out.println(param.length); And here is what my client prints: Redirect string Hello - is what you sended Redirect bytes 3 0 Gen bytes 5 And on server I have: ### redirectBytes ### bytes lenght:0 ### message So byte array can normally be transfered from service, but is not accepted from the client. And it works fine with strings. Now I use Base64Encoder, but I dislike this solution.
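    Until the client-side byte[] mapping is sorted out (enabling MTOM on both sides is the usual long-term answer), a workaround that survives any SOAP stack is to carry the data as a Base64 string. The method below is an illustrative addition, not part of the original service:

      import javax.jws.WebMethod;
      import javax.jws.WebService;
      import javax.xml.bind.DatatypeConverter;

      @WebService
      public class BytesService {
          @WebMethod
          public String redirectBytesBase64(String base64) {
              byte[] bytes = DatatypeConverter.parseBase64Binary(base64);   // decode on the server
              System.out.println("### bytes length: " + bytes.length);
              return DatatypeConverter.printBase64Binary(bytes);            // re-encode for the reply
          }
      }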

  • Tiff Analyzer

    - by Kevin
    I am writing a program to convert some data, mainly a bunch of Tiff images. Some of the Tiffs seems to have a minor problem with them. They show up fine in some viewers (Irfanview, client's old system) but not in others (Client's new system, Window's picture and fax viewer). I have manually looked at the binary data and all the tags seem ok. Can anyone recommend an app that can analyze it and tell me what, if anything, is wrong with it? Also, for clarity sake, I'm only converting the data about the images which is stored seperately in a database and copying the images, I'm not editting the images myself, so I'm pretty sure I'm not messing them up. UDPATE: For anyone interested, here are the tags from a good and bad file: BAD Tag Type Length Value 256 Image Width SHORT 1 1652 257 Image Length SHORT 1 704 258 Bits Per Sample SHORT 1 1 259 Compression SHORT 1 4 262 Photometric SHORT 1 0 266 Fill Order SHORT 1 1 273 Strip Offsets LONG 1 210 (d2 Hex) 274 Orientation SHORT 1 3 277 Samples Per Pixel SHORT 1 1 278 Rows Per Strip SHORT 1 450 279 Strip Byte Counts LONG 1 7264 (1c60 Hex) 282 X Resolution RATIONAL 1 <194 200 / 1 = 200.000 283 Y Resolution RATIONAL 1 <202 200 / 1 = 200.000 284 Planar Configuration SHORT 1 1 296 Resolution Unit SHORT 1 2 Good Tag Type Length Value 254 New Subfile Type LONG 1 0 (0 Hex) 256 Image Width SHORT 1 1193 257 Image Length SHORT 1 788 258 Bits Per Sample SHORT 1 1 259 Compression SHORT 1 4 262 Photometric SHORT 1 0 266 Fill Order SHORT 1 1 270 Image Description ASCII 45 256 273 Strip Offsets LONG 1 1118 (45e Hex) 274 Orientation SHORT 1 1 277 Samples Per Pixel SHORT 1 1 278 Rows Per Strip LONG 1 788 (314 Hex) 279 Strip Byte Counts LONG 1 496 (1f0 Hex) 280 Min Sample Value SHORT 1 0 281 Max Sample Value SHORT 1 1 282 X Resolution RATIONAL 1 <301 200 / 1 = 200.000 283 Y Resolution RATIONAL 1 <309 200 / 1 = 200.000 284 Planar Configuration SHORT 1 1 293 Group 4 Options LONG 1 0 (0 Hex) 296 Resolution Unit SHORT 1 2
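    For the analyzer part of the question, libtiff's command-line tools are a common starting point: they dump every IFD entry and warn about tags that stricter readers reject (in the listings above, the bad file's Orientation = 3 and its Rows Per Strip / Strip Byte Counts combination are the obvious suspects). For example:

      tiffinfo bad.tif               # prints the tags and warns about missing or invalid ones
      tiffdump bad.tif               # raw dump of each IFD entry, offsets included
      tiffcp bad.tif rewritten.tif   # rewriting through libtiff often normalizes marginal headers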

  • We have multiple app servers running against a single database. How do I ensure that each row in a q

    - by Dave
    We have about 7 app servers running .NET Windows services that poll a single SQL Server 2005 queue table and fetch a fixed number of records to process at fixed intervals. The number of records to process and the time between fetches are both configurable and are initially set to 100 records and 30 seconds. Currently, my queue table has an int status column which can be either "Ready, Processing, Complete, Error". The proc that fetches the records runs the following steps inside a SQL transaction: 1) Fetch x number of records into a temp table where the status is "Ready" (the select uses a HOLDLOCK hint); 2) Update the status on those records in the queue table to "Processing". The .NET services do some processing that may take seconds or even minutes per record. Another proc is called per record that simply updates the status to "Complete". The update proc has no explicit transaction, as I'm leaning on the implicit transaction of the update statement here. I don't know the traffic expectations for this, but I figure it will be under 10k records per day. Is this the best way to handle this scenario? If so, are there any details that I've left out, such as a hint here or there? Thanks! Dave
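    On SQL Server 2005 the usual way to guarantee that each queued row is claimed by exactly one server is to do the fetch-and-mark as a single atomic statement with UPDLOCK and READPAST, so concurrent pollers skip rows another service has already locked instead of blocking on them or reading them twice. A sketch in which the table, key column and numeric status values are assumptions standing in for the real schema:

      BEGIN TRAN;

      UPDATE TOP (100) q
      SET    q.Status = 1                  -- 1 = Processing
      OUTPUT inserted.QueueId              -- hand the claimed keys back to the service
      FROM   dbo.QueueTable AS q WITH (ROWLOCK, UPDLOCK, READPAST)
      WHERE  q.Status = 0;                 -- 0 = Ready

      COMMIT;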

  • DataTable.Select Behaves Strangely Using ISNULL Operator on NULL DateTime Column

    - by Paul Williams
    I have a DataTable with a DateTime column, "DateCol", that can be DBNull. The DataTable has one row in it with a NULL value in this column. I am trying to query rows that have either DBNull value in this column or a date that is greater than today's date. Today's date is 5/11/2010. I built a query to select the rows I want, but it did not work as expected. The query was: string query = "ISNULL(DateCol, '" + DateTime.MaxValue + "'") > "' + DateTime.Today "'" This results in the following query: "ISNULL(DateCol, '12/31/9999 11:59:59 PM') > '5/11/2010'" When I run this query, I get no results. It took me a while to figure out why. What follows is my investigation in the Visual Studio immediate window: > dt.Rows.Count 1 > dt.Rows[0]["DateCol"] {} > dt.Rows[0]["DateCol"] == DBNull.Value true > dt.Select("ISNULL(DateCol,'12/31/9999 11:59:59 PM') > '5/11/2010'").Length 0 <-- I expected 1 Trial and error showed a difference in the date checks at the following boundary: > dt.Select("ISNULL(DateCol, '12/31/9999 11:59:59 PM') > '2/1/2000'").Length 0 > dt.Select("ISNULL(DateCol, '12/31/9999 11:59:59 PM') > '1/31/2000'").Length 1 <-- this was the expected answer The query works fine if I wrap the DateTime field in # instead of quotes. > dt.Select("ISNULL(DateCol, #12/31/9999#) > #5/11/2010#").Length 1 My machine's regional settings is currently set to EN-US, and the short date format is M/d/yyyy. Why did the original query return the wrong results? Why would it work fine if the date was compared against 1/31/2000 but not against 2/1/2000?
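    The root cause is that quoted literals in DataColumn expressions are compared as strings, which is exactly why the result flips between '1/31/2000' and '2/1/2000' (the text "12/31/9999..." sorts above the first and below the second). Dates have to be wrapped in # delimiters, as the last test shows; a sketch of building the same filter that way:

      string filter = string.Format(
          System.Globalization.CultureInfo.InvariantCulture,
          "ISNULL(DateCol, #{0:MM/dd/yyyy}#) > #{1:MM/dd/yyyy}#",
          DateTime.MaxValue, DateTime.Today);

      DataRow[] rows = dt.Select(filter);   // now returns the DBNull row as expected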

  • How to use SoapClient in PHP

    - by Jin Yong
    I'm new in soapclient, I have tried to do some study online and also tried coding on soap, but seem this is still not working to me, just wandering anyone here can point out and perhaps give me some example how can I actually use the soapclint to get the feedback from the following web server? POST /webservices/tempconvert.asmx HTTP/1.1 Host: www.w3schools.com Content-Type: text/xml; charset=utf-8 Content-Length: length SOAPAction: "http://tempuri.org/CelsiusToFahrenheit" <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body> <CelsiusToFahrenheit xmlns="http://tempuri.org/"> <Celsius>string</Celsius> </CelsiusToFahrenheit> </soap:Body> </soap:Envelope> HTTP/1.1 200 OK Content-Type: text/xml; charset=utf-8 Content-Length: length <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body> <CelsiusToFahrenheitResponse xmlns="http://tempuri.org/"> <CelsiusToFahrenheitResult>string</CelsiusToFahrenheitResult> </CelsiusToFahrenheitResponse> </soap:Body> </soap:Envelope> <?php $url = "http://www.w3schools.com/webservices/tempconvert.asmx?WSDL"; $client = new SoapClient($url); ?> What should I do for the next steps so that I can get the respond ??
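    With the WSDL loaded, the operation can be called as a method on the client: parameters go in an associative array keyed by the element names from the request body, and the reply comes back as an object whose property matches the ...Result element. A sketch against the service above:

      <?php
      $client = new SoapClient("http://www.w3schools.com/webservices/tempconvert.asmx?WSDL");

      // Element name comes from the WSDL: CelsiusToFahrenheit takes <Celsius>.
      $response = $client->CelsiusToFahrenheit(array('Celsius' => '100'));

      echo $response->CelsiusToFahrenheitResult;   // "212"
      ?>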

  • When does a PHP <5.3.0 daemon script receive signals?

    - by MidnightLightning
    I've got a PHP script in the works that is a job worker; its main task is to check a database table for new jobs, and if there are any, to act on them. But jobs will be coming in in bursts, with long gaps in between, so I devised a sleep cycle like: while(true) { if ($jobs = get_new_jobs()) { // Act upon the jobs } else { // No new jobs now sleep(30); } } Good, but in some cases that means there might be a 30 second lag before a new job is acted upon. Since this is a daemon script, I figured I'd try the pcntl_signal hook to catch a SIGUSR1 signal to nudge the script to wake up, like: $_isAwake = true; function user_sig($signo) { global $_isAwake; daemon_log("Caught SIGUSR1"); $_isAwake = true; } pcntl_signal(SIGUSR1, 'user_sig'); while(true) { if ($jobs = get_new_jobs()) { // Act upon the jobs } else { // No new jobs now daemon_log("No new jobs, sleeping..."); $_isAwake = false; $ts = time(); while(time() < $ts+30) { sleep(1); if ($_isAwake) break; // Did a signal happen while we were sleeping? If so, stop sleeping } $_isAwake = true; } } I broke the sleep(30) up into smaller sleep bits, in case a signal doesn't interrupt a sleep() command, thinking that this would cause at most a one-second delay, but in the log file, I'm seeing that the SIGUSR1 isn't being caught until after the full 30 seconds has passed (and maybe the outer while loop resets). I found the pcntl_signal_dispatch command, but that's only for PHP 5.3 and higher. If I were using that version, I could stick a call to that command before the if ($_isAwake) call, but as it currently stands I'm on 5.2.13. On what sort of situations is the signals queue interpreted in PHP versions without the means to explicitly call the queue parsing? Could I put in some other useless command in that sleep loop that would trigger a signal queue parse within there?
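    Before 5.3 there is no pcntl_signal_dispatch(); handlers registered with pcntl_signal() only run when the engine checks for pending signals at a tick, so a long-running script has to opt in with declare(ticks=...). With ticks enabled, the one-second sleep loop above should see $_isAwake change within about a second of SIGUSR1 arriving. The relevant change:

      <?php
      // Must appear before the pcntl_signal() registration takes effect;
      // each low-level statement boundary now checks for pending signals.
      declare(ticks = 1);

      pcntl_signal(SIGUSR1, 'user_sig');
      ?>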

  • jQuery autocomplete: taking JSON input and setting multiple fields from single field

    - by Sly
    I am trying to get the jQuery autocomplete plugin to take a local JSON variable as input. Once the user has selected one option from the autocomplete list, I want the adjacent address fields to be autopopulated. Here's the JSON variable that declared as a global variable in the of the HTML file: var JSON_address={"1":{"origin":{"nametag":"Home","street":"Easy St","city":"Emerald City","state":"CA","zip":"9xxxx"},"destination":{"nametag":"Work","street":"Factory St","city":"San Francisco","state":"CA","zip":"94104"}},"2":{"origin":{"nametag":"Work","street":"Umpa Loompa St","city":"San Francisco","state":"CA","zip":"94104"},"destination":{"nametag":"Home","street":"Easy St","city":"Emerald City ","state":"CA","zip":"9xxxx"}}}</script> I want the first field to display a list of "origin" nametags: "Home", "Work". Then when "Home" is selected, adjacent fields are automatically populated with Street: Easy St, City: Emerald City, etc. Here's the code I have for the autocomplete: $(document).ready(function(){ $("#origin_nametag_id").autocomplete(JSON_address, { autoFill:true, minChars:0, dataType: 'json', parse: function(data) { var rows = new Array(); for (var i=0; i<=data.length; i++) { rows[rows.length] = { data:data[i], value:data[i].origin.nametag, result:data[i].origin.nametag }; } return rows; } }).change(function(){ $("#street_address_id").autocomplete({ dataType: 'json', parse: function(data) { var rows = new Array(); for (var i=0; i<=data.length; i++) { rows[rows.length] = { data:data[i], value:data[i].origin.street, result:data[i].origin.street }; } return rows; } }); }); }); So this question really has two subparts: 1) How do you get autocomplete to process the multi-dimensional JSON object? (When the field is clicked and text entered, nothing happens - no list) 2) How do you get the other fields (street, city, etc) to populate based upon the origin nametag sub-array? Thanks in advance!
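    One reading of the first problem, assuming this is the classic (bassistance-style) autocomplete plugin that takes autoFill/minChars options: it expects an array or a URL as its data source rather than a keyed object, so nothing ever matches, and field population is normally done in the plugin's result callback rather than in change(). The sketch below leans on that assumption and invents IDs for the city, state and zip inputs:

      // Flatten the keyed JSON object into an array the plugin can iterate.
      var origins = [];
      $.each(JSON_address, function (key, trip) { origins.push(trip.origin); });

      $("#origin_nametag_id").autocomplete(origins, {
          minChars: 0,
          autoFill: true,
          formatItem:   function (row) { return row.nametag; },
          formatMatch:  function (row) { return row.nametag; },
          formatResult: function (row) { return row.nametag; }
      }).result(function (event, row) {
          // Populate the adjacent address fields from the selected origin.
          $("#street_address_id").val(row.street);
          $("#city_id").val(row.city);
          $("#state_id").val(row.state);
          $("#zip_id").val(row.zip);
      });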

  • Visual Studio 2010 setup project problem

    - by Guru
    Hi there, I've made an application that uses .NET framework 3.5 SP1 and SQL Server 2008 Express. Application is fine and now i'm going to to make a setup project for this. When I first build my setup it was fine as all the prerequisites were not included in setup. But I want my setup to install .NET 3.5 SP1 and SQL SERVER 2008 Express also. So for this I've changed the options in setup project's properties from "Download prerequisites from following location" to "Download prerequisites from the same location as my application". In addition to that I've also checked the options above like .NET 3.5 SP1 and SQL Server 2008 Express etc. After doing all this I build my project again. This time I'm Getting 57 Errors. Error 1 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\aspnet.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup Error 2 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\aspnet_64.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup Error 3 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\clr.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup Error 4 The install location for prerequisites has not been set to 'component vendor's web site' and the file 'DotNetFX35SP1\dotNetFX20\clr_64.msp' in item '.NET Framework 3.5 SP1' can not be located on disk. See Help for more information. D:\MindStrike Setup\MindStrike Setup.vdproj MindStrike Setup As the question will become too large so I'm just pasting 3 errors but there are totally 57 errors. Please help me . Thanks in advance Guru

  • Record/Playback with AudioQueue on iPhone

    - by Biranchi
    Hi, I am currently using Audio Queues on the iPhone to record and playback audio. What I would like to be able to do is to record some audio, allow the user to pause the record queue, and to seek back and forward through the audio to select a position from where they can start recording from again. I have got over the seeking issue by making the playback AudioQueueBuffer sizes small enough so that the play audio queue callback happens at a rate that allows the user to use a slider control to hear the audio as they adjust the slider back and forth. I think I can achieve the recording at a new position by setting the inStartingPacket parameter of the AudioFileWritePackets function that I call from the Audio Recording Queue callback. The trouble is this only inserts audio over the previously recorded audio. The file length obviously doesn't change so if the user were to go backwards and record less audio than before, the old audio still remains after the end of the newly recorded audio. Is there a way I can get the AudioFile to truncate at the point the user starts to insert the new audio, is there some other way I can remove the old audio starting at the insert position or is there a better way about going about this task? Thanks

  • Seeking help with a MT design pattern

    - by SamG
    I have a queue of 1000 work items and an n-proc machine (assume n = 4). The main thread spawns n (= 4) worker threads at a time (250 outer iterations) and waits for all threads to complete before processing the next n (= 4) items, until the entire queue is processed: for (i = 0 to queue.Length / numprocs) for (j = 0 to numprocs) { CreateThread(WorkerThread, WorkItem) } WaitForMultipleObjects(threadHandle[]) The work done by each (worker) thread is not homogeneous. Therefore, in one batch (of n), if thread 1 spends 1000 s doing work and the other 3 threads only 1 s, the above design is inefficient, because after 1 s the other 3 processors are idling. Besides, there is no pooling - 1000 distinct threads are being created. How do I use the NT thread pool (I am not familiar enough with it - hence the long-winded question) and QueueUserWorkItem to achieve the above? The following constraints should hold: the main thread requires that all work items are processed before it can proceed, so I would think that a WaitAll-like construct as above is required; I want to create as many threads as processors (i.e. not 1000 threads at a time); and I don't want to create 1000 distinct events, pass them to the worker threads, and wait on all the events, whether using the QueueUserWorkItem API or otherwise. Existing code is in C++. I prefer C++ because I don't know C#.
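    One possible shape for this on the native NT pool: queue all 1000 items with QueueUserWorkItem, keep a single counter and one manual-reset event, and let whichever item finishes last signal the event, so the main thread waits exactly once and fast items never sit idle behind a slow batch-mate. In the sketch below, WorkItem and ProcessWorkItem stand in for the existing per-item code:

      #include <windows.h>

      struct Batch {
          volatile LONG remaining;   // items still outstanding
          HANDLE        done;        // signaled when remaining reaches zero
      };

      struct Job {
          Batch*    batch;
          WorkItem* item;            // placeholder for an entry in the existing queue
      };

      static DWORD WINAPI Worker(LPVOID p)
      {
          Job* job = static_cast<Job*>(p);
          ProcessWorkItem(job->item);                         // existing per-item work
          if (InterlockedDecrement(&job->batch->remaining) == 0)
              SetEvent(job->batch->done);                     // the last item signals completion
          delete job;
          return 0;
      }

      void ProcessQueue(WorkItem* items[], int count)
      {
          Batch batch;
          batch.remaining = count;
          batch.done = CreateEvent(NULL, TRUE, FALSE, NULL);  // manual reset, initially unsignaled

          for (int i = 0; i < count; ++i) {                   // the pool sizes itself to the CPUs
              Job* job = new Job;
              job->batch = &batch;
              job->item  = items[i];
              QueueUserWorkItem(&Worker, job, WT_EXECUTEDEFAULT);
          }

          WaitForSingleObject(batch.done, INFINITE);          // one wait, no per-batch barrier
          CloseHandle(batch.done);
      }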

  • Does GC guarantee that cleared References are enqueued to ReferenceQueue in topological order?

    - by Dimitris Andreou
    Say there are two objects, A and B, and there is a pointer A.x --> B, and we create, say, WeakReferences to both A and B, with an associated ReferenceQueue. Assume that both A and B become unreachable. Intuitively B cannot be considered unreachable before A is. In such a case, do we somehow get a guarantee that the respective references will be enqueued in the intuitive (topological when there are no cycles) order in the ReferenceQueue? I.e. ref(A) before ref(B). I don't know - what if the GC marked a bunch of objects as unreachable, and then enqueued them in no particular order? I was reviewing Finalizer.java of guava, seeing this snippet: private void cleanUp(Reference<?> reference) throws ShutDown { ... if (reference == frqReference) { /* * The client no longer has a reference to the * FinalizableReferenceQueue. We can stop. */ throw new ShutDown(); } frqReference is a PhantomReference to the used ReferenceQueue, so if this is GC'ed, no Finalizable{Weak, Soft, Phantom}References can be alive, since they reference the queue. So they have to be GC'ed before the queue itself can be GC'ed - but still, do we get the guarantee that these references will be enqueued to the ReferenceQueue at the order they get "garbage collected" (as if they get GC'ed one by one)? The code implies that there is some kind of guarantee, otherwise unprocessed references could theoretically remain in the queue. Thanks

  • (PL/SQL)Oracle stored procedure parameter size limit(VARCHAR2)

    - by Jude Lee
    Hello, there. I have an problem with my oracle stored procedure as following. codes : CREATE OR REPLACE PROCEDURE APP.pr_ap_gateway ( aiov_req IN OUT VARCHAR2, aov_rep01 OUT VARCHAR2, aov_rep02 OUT VARCHAR2, aov_rep03 OUT VARCHAR2, aov_rep04 OUT VARCHAR2, aov_rep05 OUT VARCHAR2 ) IS v_header VARCHAR (100); v_case VARCHAR (4); v_err_no_case VARCHAR (5) := '00004'; BEGIN DBMS_OUTPUT.ENABLE (1000000); aov_rep01 := lpad(' ', 190, ' '); dbms_output.put_line('>> ['||length(aov_rep01)||']'); aov_rep01 := lpad(' ', 199, ' '); dbms_output.put_line('>> ['||length(aov_rep01)||']'); aov_rep01 := lpad(' ', 200, ' '); dbms_output.put_line('>> ['||length(aov_rep01)||']'); aov_rep01 := lpad(' ', 201, ' '); dbms_output.put_line('>> ['||length(aov_rep01)||']'); END pr_ap_gateway; / results : >> [190] >> [199] >> [200] and then error 'buffer overflow' ORA-06502: PL/SQL: ?? ?? ? ??: ??? ??? ?? ???? I know that VARCHAR2 type can contain 32KB in PL/SQL. But, in my test, VARCHAR2 parameter contains only 200 Bytes. What's wrong with this situation? This procedure will called by java daemon program. So, There's no declaration of parameters size before calling procedure. Thanks in advance for your reply.
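    Nothing in PL/SQL itself breaks at 200 bytes, which makes the caller's bind buffer the prime suspect: if whatever drives the test declares the OUT variables as VARCHAR2(200) (SQL*Plus VARIABLEs, or an OCI/JDBC buffer sized the same way), the assignment inside the procedure succeeds and the overflow happens when the value is copied back out. A SQL*Plus sketch for ruling that out; it assumes the test is driven from SQL*Plus, which the question does not actually say:

      -- Size the client-side bind buffers for the worst case, not for 200 bytes.
      VARIABLE req    VARCHAR2(4000)
      VARIABLE rep01  VARCHAR2(4000)
      VARIABLE rep02  VARCHAR2(4000)
      VARIABLE rep03  VARCHAR2(4000)
      VARIABLE rep04  VARCHAR2(4000)
      VARIABLE rep05  VARCHAR2(4000)

      EXECUTE APP.pr_ap_gateway(:req, :rep01, :rep02, :rep03, :rep04, :rep05);
      PRINT rep01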

  • Using a handle to collect output from CreateProcess()

    - by Stef
    Hi I am using CreateProcess() to run an external console application in Windows from my GUI application. I would like to somehow gather the output to know whether there were errors. Now I know I have to do something with hStdOutput, but I fail to understand what. I am new to c++ and an inexperienced programmer and I actually don't know what to do with a handle or how to light a pipe. How do I get the output to some kind of variable (or file)? This is what I have a the moment: void email::run(string path,string cmd){ WCHAR * ppath=new(nothrow) WCHAR[path.length()*2]; memset(ppath,' ',path.length()*2); WCHAR * pcmd= new(nothrow) WCHAR[cmd.length()*2]; memset(pcmd,' ',cmd.length()*2); string tempstr; ToWCHAR(path,ppath); //creates WCHAR from my std::string ToWCHAR(cmd,pcmd); STARTUPINFO info={sizeof(info)}; info.dwFlags = STARTF_USESHOWWINDOW; //hide process PROCESS_INFORMATION processInfo; if (CreateProcess(ppath,pcmd, NULL, NULL, FALSE, 0, NULL, NULL, &info, &processInfo)) { ::WaitForSingleObject(processInfo.hProcess, INFINITE); CloseHandle(processInfo.hProcess); CloseHandle(processInfo.hThread); } delete[](ppath); delete[](pcmd); } This code probably makes any decent programmer scream, but (I shouldn't even say it:) It works ;-) The Question: How do I use hStdOutput to read the output to a file (for instance)?
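    The standard recipe is an anonymous pipe: make the handles inheritable, hand the child the write end through STARTF_USESTDHANDLES, and read the other end while (or after) the child runs. A sketch with error handling trimmed:

      #include <windows.h>
      #include <string>

      std::string RunAndCapture(LPWSTR commandLine)
      {
          SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };     // allow handle inheritance
          HANDLE readEnd = NULL, writeEnd = NULL;
          CreatePipe(&readEnd, &writeEnd, &sa, 0);
          SetHandleInformation(readEnd, HANDLE_FLAG_INHERIT, 0);   // child gets only the write end

          STARTUPINFO info = { sizeof(info) };
          info.dwFlags    = STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW;
          info.hStdOutput = writeEnd;
          info.hStdError  = writeEnd;

          PROCESS_INFORMATION processInfo;
          std::string output;
          if (CreateProcess(NULL, commandLine, NULL, NULL,
                            TRUE /* inherit handles */, 0, NULL, NULL, &info, &processInfo))
          {
              CloseHandle(writeEnd);                               // keep only the child's copy open
              char buffer[4096];
              DWORD bytesRead = 0;
              while (ReadFile(readEnd, buffer, sizeof(buffer), &bytesRead, NULL) && bytesRead > 0)
                  output.append(buffer, bytesRead);                // drains until the child closes stdout
              WaitForSingleObject(processInfo.hProcess, INFINITE);
              CloseHandle(processInfo.hProcess);
              CloseHandle(processInfo.hThread);
          }
          CloseHandle(readEnd);
          return output;                                           // child's stdout and stderr combined
      }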

  • How to use JQUERY to filter table rows dynamically using multiple form inputs

    - by tresstylez
    I'm displaying a table with multiple rows and columns. I'm using a JQUERY plugin called uiTableFilter which uses a text field input and filters (shows/hides) the table rows based on the input you provide. All you do is specify a column you want to filter on, and it will display only rows that have the text field input in that column. Simple and works fine. I want to add a SECOND text input field that will help me narrow the results down even further. So, for instance if I had a PETS table and one column was petType and one was petColor -- I could type in CAT into the first text field, to show ALL cats, and then in the 2nd text field, I could type black, and the resulting table would display only rows where BLACK CATS were found. Basically, a subset. Here is the JQUERY I'm using: $("#typeFilter").live('keyup', function() { if ($(this).val().length > 2 || $(this).val().length == 0) { var newTable = $('#pets'); $.uiTableFilter( theTable, this.value, "petType" ); } }) // end typefilter $("#colorFilter").live('keyup', function() { if ($(this).val().length > 2 || $(this).val().length == 0) { var newTable = $('#pets'); $.uiTableFilter( newTable, this.value, "petColor" ); } }) // end colorfilter Problem is, I can use one filter, and it will display the correct subset of table rows, but when I provide input for the other filter, it doesn't seem to recognize the visible table rows that are remaining from the previous column, but instead it appears that it does an entirely new filtering of the original table. If 10 rows are returned after applying one filter, the 2nd filter should only apply to THOSE 10 rows. I've tried LIVE and BIND, but not working. Can anyone shed some light on where I'm going wrong? Thanks!
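    The likely explanation is that each uiTableFilter call filters the full table again, so the second input wipes out the first filter's result. One way around it, without touching the plugin, is to evaluate both inputs together and show or hide rows with plain jQuery whenever either field changes; the column positions below are assumptions:

      function applyFilters() {
          var type  = $.trim($("#typeFilter").val()).toLowerCase();
          var color = $.trim($("#colorFilter").val()).toLowerCase();

          $("#pets tbody tr").each(function () {
              // Assumed layout: petType in the first column, petColor in the second.
              var petType  = $(this).find("td").eq(0).text().toLowerCase();
              var petColor = $(this).find("td").eq(1).text().toLowerCase();

              var visible = (type === ""  || petType.indexOf(type)   !== -1) &&
                            (color === "" || petColor.indexOf(color) !== -1);
              $(this).toggle(visible);
          });
      }

      $("#typeFilter, #colorFilter").live("keyup", applyFilters);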

  • Out of Core Implementation of a Quadtree

    - by Nima
    Hi, I am trying to build a Quadtree data structure(or let's just say a tree) on the secondary memory(Hard Disk). I have a C++ program to do so and I use fopen to create the files. Also, I am using tesseral coding to store each cell in a file named with its corresponding code to store it on the disk in one directory. The problem is that after creating about 1,100 files, fopen just returns NULL and stops creating new files. I can create further files manually in that directory, but using C++ it can not create any further files. I know about max limit of inode on ext3 filesystem which is (from Wikipedia) 32,000 but mine is way less than that, also note that I can create files manually on the disk; just not through fopen. Also, I really appreciate any idea regarding the best way to store a very dynamic quadtree on disk(I need the nodes to be in separate files and the quadtree might have a depth of 50). Using nested directories is one idea, but I think it will slow down the performance because of following the links on the filesystem to access the file. Thanks, Nima
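    fopen() starting to return NULL after roughly a thousand files is the classic sign of hitting the per-process open-file limit (commonly 1024 on Linux; streams count against it until fclose), not an ext3 inode problem, which also explains why creating files by hand still works. Checking errno confirms it (EMFILE), and closing each node's file right after writing, or raising the limit with ulimit -n / setrlimit, fixes it. For example:

      #include <cstdio>
      #include <cerrno>
      #include <cstring>

      bool write_node(const char* path, const void* data, size_t size)
      {
          FILE* f = std::fopen(path, "wb");
          if (!f) {
              // EMFILE here means the process ran out of file descriptors.
              std::fprintf(stderr, "fopen(%s) failed: %s\n", path, std::strerror(errno));
              return false;
          }
          std::fwrite(data, 1, size, f);
          std::fclose(f);          // release the descriptor before the next node
          return true;
      }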

    Read the article

  • In C#, what thread will Events be handled in?

    - by Ben
    Hi, I have attempted to implement a producer/consumer pattern in C#. I have a consumer thread that monitors a shared queue, and a producer thread that places items onto the shared queue. The producer thread is subscribed to receive data... that is, it has an event handler and just sits around and waits for an OnData event to fire (the data is being sent from a 3rd-party API). When it gets the data, it sticks it on the queue so the consumer can handle it. When the OnData event does fire in the producer, I had expected it to be handled by my producer thread. But that doesn't seem to be what is happening. The OnData event seems as if it's being handled on a new thread instead! Is this how .NET always works... are events handled on their own thread? Can I control which thread will handle events when they're raised? What if hundreds of events are raised near-simultaneously... would each have its own thread? Thanks in advance! Ben
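
    For what it's worth, .NET events have no thread affinity: a handler runs synchronously on whichever thread invokes the delegate, so when a 3rd-party API raises OnData from its own worker (or thread-pool) thread, that is the thread the handler runs on; the "producer thread" never sees it unless it is the one raising the event. A small hedged sketch illustrating this; the Feed class below is a stand-in for the third-party API, not real code from it:

        using System;
        using System.Threading;

        class Feed
        {
            public event Action<string> OnData;

            public void Start()
            {
                // Simulate a 3rd-party API that raises its event from its own worker thread.
                new Thread(() => OnData?.Invoke("tick")).Start();
            }
        }

        class Program
        {
            static void Main()
            {
                Console.WriteLine("Main thread: " + Thread.CurrentThread.ManagedThreadId);

                var feed = new Feed();
                feed.OnData += msg =>
                    Console.WriteLine("Handled '" + msg + "' on thread " +
                                      Thread.CurrentThread.ManagedThreadId);   // prints the worker's id
                feed.Start();
            }
        }

    If the processing has to happen on one dedicated thread, the usual pattern is exactly what the question describes: the handler only enqueues the data, and the consumer thread dequeues and does the work.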

    Read the article

  • Problem using custom HttpHandler to process requests for both .aspx and non-extension pages in IIS7

    - by Noel
    I am trying to process both ".aspx" and non-extension page requests (i.e. both contact.aspx and /contact/) using a custom HttpHandler in IIS7. My handler works just fine in either one case or the other, but as soon as I try to process both cases, it only works for one. Please see the handlers snippet from my web.config below. If I keep only the mapping to "*.aspx" then all .aspx requests are processed correctly, but obviously extensionless requests won't work:

        <add name="AllPages.ASPX" path="*.aspx" verb="*" type="Test.PageHandlerFactory, Test" preCondition="" />

    If I change the mapping to "*" then all extensionless requests are processed correctly, but ".aspx" requests that should still be handled by this handler stop working. Note that I added the StaticFiles entry in order to process files that are on disk, like images, css, js, etc.

        <add name="WebResource" path="WebResource.axd" verb="GET" type="System.Web.Handlers.AssemblyResourceLoader" />
        <add name="StaticFiles" verb="GET,HEAD" path="*.*" type="System.Web.StaticFileHandler" resourceType="File" />
        <add name="AllPages" path="*" verb="*" type="Test.PageHandlerFactory, Test" preCondition="" />

    The crazy thing is that when I load an ".aspx" request (with the 2nd configuration shown), IIS7 gives a 404 Not Found error. The error also says that the request is processed by the StaticFiles handler. But I made sure to add resourceType="File" to the StaticFileHandler in order to avoid this. According to MS this means the request is only for "physical files on disk". Am I misreading/misinterpreting the "on disk" part? My .aspx file isn't on disk; that's why I want to use the handler in the first place.
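
    One arrangement that often resolves this, sketched below with the type names taken from the question: handler mappings are matched in the order they appear, so the custom "*.aspx" mapping needs to sit above the "*.*" static-file entry (otherwise .aspx requests match path="*.*" first and, with resourceType="File", 404 when no physical file exists), while the extensionless catch-all stays last. Treat this as a hedged sketch rather than a definitive configuration.

        <handlers>
          <add name="WebResource" path="WebResource.axd" verb="GET"
               type="System.Web.Handlers.AssemblyResourceLoader" />
          <!-- custom handler first, so *.aspx never falls through to the static-file entry -->
          <add name="AllPages.ASPX" path="*.aspx" verb="*"
               type="Test.PageHandlerFactory, Test" preCondition="" />
          <!-- real files on disk (images, css, js) -->
          <add name="StaticFiles" path="*.*" verb="GET,HEAD"
               type="System.Web.StaticFileHandler" resourceType="File" />
          <!-- extensionless URLs such as /contact/ -->
          <add name="AllPages" path="*" verb="*"
               type="Test.PageHandlerFactory, Test" preCondition="" />
        </handlers>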

    Read the article

  • Semaphore race condition?

    - by poindexter12
    I have created a "Manager" class that contains a limited set of resources. The resources are stored in the "Manager" as a Queue. I initialize the Queue and a Semaphore to the same size, using the semaphore to block a thread if there are no resources available. I have multiple threads calling into this class to request a resource. Here is the pseudo code:

        public IResource RequestResource()
        {
            IResource resource = null;
            _semaphore.WaitOne();
            lock (_syncLock)
            {
                resource = _resources.Dequeue();
            }
            return resource;
        }

        public void ReleaseResource(IResource resource)
        {
            lock (_syncLock)
            {
                _resources.Enqueue(resource);
            }
            _semaphore.Release();
        }

    While running this application, it seems to run fine for a while. Then it seems like my Queue is giving out the same object. Does this seem like it's possible? I'm pulling my hair out over here, and any help would be greatly appreciated. Feel free to ask for more information if you need it. Thanks!
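
    The acquire/release pair shown keeps the semaphore count and the queue in step, so a common cause of "the same object comes out twice" is a caller releasing one resource twice (or holding on to it after release), which enqueues it a second time. As an aside, here is a hedged sketch of the same pool built on BlockingCollection<T>, which folds the queue, the lock and the semaphore into one primitive; the class and member names are illustrative, not from the question:

        using System.Collections.Concurrent;
        using System.Collections.Generic;

        public sealed class ResourcePool<T>
        {
            // A thread-safe FIFO: Take() blocks while the pool is empty.
            private readonly BlockingCollection<T> _resources =
                new BlockingCollection<T>(new ConcurrentQueue<T>());

            public ResourcePool(IEnumerable<T> initial)
            {
                foreach (T item in initial)
                {
                    _resources.Add(item);       // seed the pool with the fixed set of resources
                }
            }

            public T RequestResource()
            {
                return _resources.Take();       // blocks until a resource is available
            }

            public void ReleaseResource(T resource)
            {
                _resources.Add(resource);       // hand the resource back to the pool
            }
        }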

    Read the article

  • I'm getting an error in my Java code but I can't see whats wrong with it. Help?

    - by Fraz
    The error I'm getting is in the fillPayroll() method, in the while loop where it says payroll.add(employee). The error says I can't invoke add() on an array type Person[], but the Employee class inherits from Person, so I thought this would be possible. Can anyone clarify this for me?

        import java.io.*;
        import java.util.*;

        public class Payroll {
            private int monthlyPay, tax;
            private Person[] payroll = new Person[1];

            // Method adds person to payroll array
            public void add(Person person) {
                if (payroll[0] == null) { // If array is empty, fill first element with person
                    payroll[payroll.length - 1] = person;
                } else { // Creates copy of payroll with new person added
                    Person[] newPayroll = new Person[payroll.length + 1];
                    for (int i = 0; i < payroll.length; i++) {
                        newPayroll[i] = payroll[i];
                    }
                    newPayroll[newPayroll.length] = person;
                    payroll = newPayroll;
                }
            }

            public void fillPayroll() {
                try {
                    FileReader fromEmployee = new FileReader("EmployeeData.txt");
                    Scanner data = new Scanner(fromEmployee);
                    Employee employee = new Employee();
                    while (data.hasNextLine()) {
                        employee.readData(data.nextLine());
                        payroll.add(employee);
                    }
                } catch (FileNotFoundException e) {
                    System.out.println("Error: File Not Found");
                }
            }
        }
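
    For what it's worth, payroll is declared as a plain Person[] array, and Java arrays have no add() method regardless of what they hold; the add() being called is presumably the Payroll class's own method. A hedged sketch of the two methods with that change, plus two latent issues fixed (the new element is stored one slot past the end of the new array, and a single Employee instance is reused for every line of the file):

        public void add(Person person) {
            if (payroll[0] == null) {
                payroll[0] = person;
            } else {
                Person[] newPayroll = new Person[payroll.length + 1];
                for (int i = 0; i < payroll.length; i++) {
                    newPayroll[i] = payroll[i];
                }
                newPayroll[newPayroll.length - 1] = person;   // last valid index, not newPayroll.length
                payroll = newPayroll;
            }
        }

        public void fillPayroll() {
            try {
                Scanner data = new Scanner(new FileReader("EmployeeData.txt"));
                while (data.hasNextLine()) {
                    Employee employee = new Employee();       // a fresh object per line, not one shared instance
                    employee.readData(data.nextLine());
                    add(employee);                            // Payroll's own add(Person), not an array method
                }
                data.close();
            } catch (FileNotFoundException e) {
                System.out.println("Error: File Not Found");
            }
        }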

    Read the article

  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling to understand what a clustered index in SQL Server 2005 is. I read the MSDN article Clustered Index Structures (among other things) but I am still unsure whether I understand it correctly. The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index? The above-mentioned MSDN article states: The pages in the data chain and the rows in them are ordered on the value of the clustered index key. And Using Clustered Indexes for example states: For example, if a record is added to the table that is close to the beginning of the sequentially ordered list, any records in the table after that record will need to shift to allow the record to be inserted. Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows, literally all rows are physically shifted on disk? I cannot believe that. This would take ages, no? Or is it rather (as I suspect) that there are two scenarios, depending on how "full" the first data page is? A) If the page has enough free space to accommodate the record, it is placed into the existing data page and data might be (physically) reordered within that page. B) If the page does not have enough free space for the record, a new data page would be created (anywhere on the disk!) and "linked" to the front of the leaf level of the B-tree? This would then mean the "physical order" of the data is restricted to the "page level" (i.e. within a data page) but not to the pages residing on consecutive blocks on the physical hard drive. The data pages are then just linked together in the correct order. Or, formulated in an alternative way: if SQL Server needs to read the first N rows of a table that has a clustered index, it can read data pages sequentially (following the links), but these pages are not (necessarily) block-wise in sequence on disk (so the disk head has to move "randomly"). How close am I? :)
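
    For what the question calls scenarios (A) and (B), that is essentially how it is usually described: logical order is kept through the linked page chain, a full page is split rather than the whole table being rewritten, and the new page can live anywhere in the data file. A rough, hedged way to watch this happen; table and column names are illustrative, and this is a sketch rather than a definitive test:

        -- Wide rows so only a few fit per 8 KB page.
        CREATE TABLE dbo.Demo (
            Id      INT        NOT NULL,
            Payload CHAR(2000) NOT NULL,
            CONSTRAINT PK_Demo PRIMARY KEY CLUSTERED (Id)
        );

        -- Fill a handful of pages with "high" keys...
        INSERT INTO dbo.Demo (Id, Payload)
        SELECT number + 1000, REPLICATE('x', 2000)
        FROM master..spt_values
        WHERE type = 'P' AND number < 100;

        -- ...then insert one "low" key and inspect the index again:
        -- page_count should grow by roughly one page (a split), not be rewritten wholesale.
        INSERT INTO dbo.Demo (Id, Payload) VALUES (1, REPLICATE('x', 2000));

        SELECT index_level, page_count, avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Demo'), 1, NULL, 'DETAILED');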

    Read the article
