Search Results

Search found 10391 results on 416 pages for 'sys dm exec requests'.


  • SCD2 + Merge Statement + SQL Server

    - by Nev_Rahd
    I am trying to work out a MERGE statement to insert/update a Type 2 SCD dimension table. My source is a table variable to be merged with the dimension table. My MERGE statement is throwing this error:

    The target table 'DM.DATA_ERROR.ERROR_DIMENSION' of the INSERT statement cannot be on either side of a (primary key, foreign key) relationship when the FROM clause contains a nested INSERT, UPDATE, DELETE, or MERGE statement. Found reference constraint 'FK_ERROR_DIMENSION_to_AUDIT_CreatedBy'.

    My MERGE statement:

        DECLARE @DATAERROROBJECT AS [ERROR_DIMENSION]

        INSERT INTO DM.DATA_ERROR.ERROR_DIMENSION
        SELECT ERROR_CODE, DATA_STREAM_ID, [ERROR_SEVERITY], DATA_QUALITY_RATING,
               ERROR_LONG_DESCRIPTION, ERROR_DESCRIPTION, VALIDATION_RULE, ERROR_TYPE,
               ERROR_CLASS, VALID_FROM, VALID_TO, CURR_FLAG,
               CREATED_BY_AUDIT_SK, UPDATED_BY_AUDIT_SK
        FROM (
            MERGE DM.DATA_ERROR.ERROR_DIMENSION ED
            USING @DATAERROROBJECT OBJ
               ON (ED.ERROR_CODE = OBJ.ERROR_CODE AND ED.DATA_STREAM_ID = OBJ.DATA_STREAM_ID)
            WHEN NOT MATCHED THEN
                INSERT VALUES (
                    OBJ.ERROR_CODE
                    ,OBJ.DATA_STREAM_ID
                    ,OBJ.[ERROR_SEVERITY]
                    ,OBJ.DATA_QUALITY_RATING
                    ,OBJ.ERROR_LONG_DESCRIPTION
                    ,OBJ.ERROR_DESCRIPTION
                    ,OBJ.VALIDATION_RULE
                    ,OBJ.ERROR_TYPE
                    ,OBJ.ERROR_CLASS
                    ,GETDATE()
                    ,'9999-12-31'
                    ,'Y'
                    ,1
                    ,1 )
            WHEN MATCHED AND ED.CURR_FLAG = 'Y' AND (
                    ED.[ERROR_SEVERITY] <> OBJ.[ERROR_SEVERITY]
                 OR ED.[DATA_QUALITY_RATING] <> OBJ.[DATA_QUALITY_RATING]
                 OR ED.[ERROR_LONG_DESCRIPTION] <> OBJ.[ERROR_LONG_DESCRIPTION]
                 OR ED.[ERROR_DESCRIPTION] <> OBJ.[ERROR_DESCRIPTION]
                 OR ED.[VALIDATION_RULE] <> OBJ.[VALIDATION_RULE]
                 OR ED.[ERROR_TYPE] <> OBJ.[ERROR_TYPE]
                 OR ED.[ERROR_CLASS] <> OBJ.[ERROR_CLASS] ) THEN
                UPDATE SET ED.CURR_FLAG = 'N', ED.VALID_TO = GETDATE()
            OUTPUT $ACTION ACTION_OUT,
                   OBJ.ERROR_CODE ERROR_CODE,
                   OBJ.DATA_STREAM_ID DATA_STREAM_ID,
                   OBJ.[ERROR_SEVERITY] [ERROR_SEVERITY],
                   OBJ.DATA_QUALITY_RATING DATA_QUALITY_RATING,
                   OBJ.ERROR_LONG_DESCRIPTION ERROR_LONG_DESCRIPTION,
                   OBJ.ERROR_DESCRIPTION ERROR_DESCRIPTION,
                   OBJ.VALIDATION_RULE VALIDATION_RULE,
                   OBJ.ERROR_TYPE ERROR_TYPE,
                   OBJ.ERROR_CLASS ERROR_CLASS,
                   GETDATE() VALID_FROM,
                   '9999-12-31' VALID_TO,
                   'Y' CURR_FLAG,
                   555 CREATED_BY_AUDIT_SK,
                   555 UPDATED_BY_AUDIT_SK
        ) AS MERGE_OUT
        WHERE MERGE_OUT.ACTION_OUT = 'UPDATE';

    What am I doing wrong?
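    One commonly suggested workaround for this restriction (an editor's sketch of the usual pattern, not part of the original question, trimmed to two key columns for brevity) is to stop selecting from the MERGE directly: route the OUTPUT clause INTO an intermediate table variable, then insert the 'UPDATE' rows into the dimension in a separate statement, so the FK-constrained table is never the target of an INSERT whose FROM clause contains a MERGE:

        -- Sketch only: column list and types are assumptions; mirror ERROR_DIMENSION.
        DECLARE @MERGE_OUT TABLE (
            ACTION_OUT     NVARCHAR(10),
            ERROR_CODE     INT,
            DATA_STREAM_ID INT);

        MERGE DM.DATA_ERROR.ERROR_DIMENSION ED
        USING @DATAERROROBJECT OBJ
           ON (ED.ERROR_CODE = OBJ.ERROR_CODE AND ED.DATA_STREAM_ID = OBJ.DATA_STREAM_ID)
        WHEN NOT MATCHED THEN
            INSERT VALUES (OBJ.ERROR_CODE, OBJ.DATA_STREAM_ID /* , ...remaining columns as above */)
        WHEN MATCHED AND ED.CURR_FLAG = 'Y' /* ...plus the change-detection predicate as above */ THEN
            UPDATE SET ED.CURR_FLAG = 'N', ED.VALID_TO = GETDATE()
        -- Buffer the OUTPUT rows instead of feeding them straight into an INSERT
        OUTPUT $ACTION, OBJ.ERROR_CODE, OBJ.DATA_STREAM_ID
          INTO @MERGE_OUT (ACTION_OUT, ERROR_CODE, DATA_STREAM_ID);

        -- Step 2: re-insert current versions of the rows that were just expired.
        INSERT INTO DM.DATA_ERROR.ERROR_DIMENSION (ERROR_CODE, DATA_STREAM_ID /* , ... */)
        SELECT ERROR_CODE, DATA_STREAM_ID /* , ... */
        FROM @MERGE_OUT
        WHERE ACTION_OUT = 'UPDATE';

    Because the second INSERT is an ordinary statement rather than one wrapped around a nested MERGE, the foreign-key restriction no longer applies.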

    Read the article

  • SQL SERVER – Update Statistics are Sampled By Default

    - by pinaldave
    After reading my earlier post SQL SERVER – Create Primary Key with Specific Name when Creating Table on Statistics, I received another question from a blog reader. The question is as follows:

    Question: Are statistics sampled by default?

    Answer: Yes. The sampling rate can be specified by the user, anywhere from a very low value up to 100%.

    Let us run a small experiment to verify this, with auto update statistics left on. We will create a very large table and then check whether the statistics built for it by default are sampled or not.

        USE [AdventureWorks]
        GO
        -- Create Table
        CREATE TABLE [dbo].[StatsTest](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [varchar](100) NULL,
            [LastName] [varchar](100) NULL,
            [City] [varchar](100) NULL,
            CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC)
        ) ON [PRIMARY]
        GO
        -- Insert 1 Million Rows
        INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City)
        SELECT TOP 1000000
            'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Update the statistics
        UPDATE STATISTICS [dbo].[StatsTest]
        GO
        -- Show the statistics
        DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest)
        GO
        -- Clean up
        DROP TABLE [dbo].[StatsTest]
        GO

    Now let us observe the result of DBCC SHOW_STATISTICS. The result set shows that for a large dataset the statistics are indeed sampled. The percentage of sampling is based on the data distribution as well as the kind of data in the table. Before dropping the table, let us first check its size: it is 35 MB. Now, let us run the same code with a smaller number of rows.

        USE [AdventureWorks]
        GO
        -- Create Table
        CREATE TABLE [dbo].[StatsTest](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [FirstName] [varchar](100) NULL,
            [LastName] [varchar](100) NULL,
            [City] [varchar](100) NULL,
            CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC)
        ) ON [PRIMARY]
        GO
        -- Insert 1 Hundred Thousand Rows
        INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City)
        SELECT TOP 100000
            'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Update the statistics
        UPDATE STATISTICS [dbo].[StatsTest]
        GO
        -- Show the statistics
        DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest)
        GO
        -- Clean up
        DROP TABLE [dbo].[StatsTest]
        GO

    You can see that Rows Sampled is the same as the number of rows in the table; in this case, the sample rate is 100%. Before dropping the table, let us again check its size: it is less than 4 MB. Let us compare the two result sets for reference.

    Test 1: Total Rows: 1000000, Rows Sampled: 255420, Size of the Table: 35.516 MB
    Test 2: Total Rows: 100000, Rows Sampled: 100000, Size of the Table: 3.555 MB

    The reason the sample happens in Test 1 is that the data space is larger than 8 MB and therefore uses more than 1024 data pages. If the data space is smaller than 8 MB, and so uses fewer than 1024 data pages, sampling does not happen.
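    If you do not want to rely on the default sampling, UPDATE STATISTICS accepts an explicit sampling clause; here is a minimal sketch against the StatsTest table above (these are standard T-SQL options, not something from the original post):

        -- Read every row when building the statistics (most accurate, slowest)
        UPDATE STATISTICS [dbo].[StatsTest] WITH FULLSCAN
        GO
        -- Or ask for an explicit sample size instead
        UPDATE STATISTICS [dbo].[StatsTest] WITH SAMPLE 50 PERCENT
        GO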
    Sampling helps reduce excessive data scans; however, it can also reduce the accuracy of the statistics. Please note that this is just a quick test and in no way a benchmark; the results may differ across machines. There is a lot of other information that could be included on this subject, and I will write a detailed post covering it soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics

    Read the article

  • SQL SERVER – SOS_SCHEDULER_YIELD – Wait Type – Day 8 of 28

    - by pinaldave
    This is a very interesting wait type, quite often seen among the top wait types. Let us discuss it today.

    From Books Online: Occurs when a task voluntarily yields the scheduler for other tasks to execute. During this wait the task is waiting for its quantum to be renewed.

    SOS_SCHEDULER_YIELD Explanation: SQL Server has multiple threads, and the basic working principle of SQL Server is that it does not let any "runnable" thread starve. Now let us assume the SQL Server OS is very busy running threads on all the schedulers, and new threads that are ready to run (in other words, runnable) keep arriving. Thread management is decided by SQL Server, not by the operating system. SQL Server runs in non-preemptive mode most of the time, meaning the threads are cooperative and let other threads run from time to time by yielding. When a thread yields to another thread, it creates this wait. If many threads are queued up to run, it clearly indicates that the CPU is under pressure. You can run the following DMV query to see how many runnable tasks there are on your system.

        SELECT scheduler_id,
               current_tasks_count,
               runnable_tasks_count,
               work_queue_count,
               pending_disk_io_count
        FROM sys.dm_os_schedulers
        WHERE scheduler_id < 255
        GO

    If you notice a two-digit number in runnable_tasks_count continuously for a long time (not just once in a while), you will know that there is CPU pressure. A two-digit number is usually considered a bad sign; you can read the description of the above DMV over here. Additionally, there are several other counters (%Processor Time and other processor-related counters) that you can refer to in order to validate CPU pressure along with the method explained above.

    Reducing SOS_SCHEDULER_YIELD wait: This is the trickiest part of the procedure. As discussed, this particular wait type relates to CPU pressure. Adding more CPU is the solution in simple terms; however, it is not easy to implement. There are other things you can consider when this wait type is very high. Here is a query to find the most CPU-expensive queries from the cache. Note: a query that used lots of resources but is no longer cached will not be caught here.

        SELECT SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
                   ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(qt.TEXT)
                         ELSE qs.statement_end_offset
                     END - qs.statement_start_offset)/2)+1),
               qs.execution_count,
               qs.total_logical_reads,
               qs.last_logical_reads,
               qs.total_logical_writes,
               qs.last_logical_writes,
               qs.total_worker_time,
               qs.last_worker_time,
               qs.total_elapsed_time/1000000 total_elapsed_time_in_S,
               qs.last_elapsed_time/1000000 last_elapsed_time_in_S,
               qs.last_execution_time,
               qp.query_plan
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
        ORDER BY qs.total_worker_time DESC -- CPU time

    You can find the most expensive queries that are using lots of CPU (from the cache) and tune them accordingly. Moreover, you can find the longest-running queries and attempt to tune them if there is any processor-offending code. Additionally, pay attention to total_worker_time: if that is also consistently high, the CPU is under too much pressure. You can also check the perfmon counters for compilations, as compilations tend to use a good amount of CPU.
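    As a quick cross-check on CPU pressure, a signal-wait ratio query is often used alongside the DMV above; this generic sketch (not from the original post) shows what fraction of total wait time is spent by runnable tasks waiting for a scheduler:

        -- A consistently high signal-wait percentage suggests tasks are
        -- waiting on CPU rather than on I/O or locks.
        SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms)
                    AS DECIMAL(5,2)) AS signal_wait_percent
        FROM sys.dm_os_wait_stats;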
    Index rebuilds are also a CPU-intensive process, but we should not consider them the main cause here, because on a high-transaction OLTP system they are indeed needed to reduce fragmentation. Note: The information presented here is from my experience, and in no way do I claim it to be accurate. I suggest reading Books Online for further clarification. All of the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • SQL SERVER – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    After the excellent response to my quiz – Why SELECT * throws an error but SELECT COUNT(*) does not? – I have decided to ask another puzzling question. I am running this test on SQL Server 2008 R2. Here is the quick scenario of my setup:

    Create a table
    Insert 1,000 records
    Check the statistics
    Now insert 10 times more rows – 10,000 records
    Check the statistics – they will NOT be updated

    Note: Auto Update Statistics and Auto Create Statistics are TRUE for the database.

    Expected result – statistics should be updated – SQL SERVER – When are Statistics Updated – What triggers Statistics to Update

    Now the question is: why are the statistics not updated? The common answer is that we can update the statistics ourselves using UPDATE STATISTICS TableName WITH FULLSCAN, ALL. However, the solution I am looking for is one where the statistics are updated automatically, based on the algorithm mentioned here. Now the solution is to ____________________. Vinod Kumar is not allowed to participate here, as he is the one who helped me build this puzzle. I will publish the solution next week. Please leave a comment, and if your comment contains a valid answer, I will publish it with due credit. Here is the script to reproduce the scenario I described.

        -- Execution Plans Difference
        -- Create Sample Database
        CREATE DATABASE SampleDB
        GO
        USE SampleDB
        GO
        -- Create Table
        CREATE TABLE ExecTable (ID INT,
            FirstName VARCHAR(100),
            LastName VARCHAR(100),
            City VARCHAR(100))
        GO
        -- Insert One Thousand Records
        -- INSERT 1
        INSERT INTO ExecTable (ID,FirstName,LastName,City)
        SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
            'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Display statistics of the table - none listed
        sp_helpstats N'ExecTable', 'ALL'
        GO
        -- Select Statement
        SELECT FirstName, LastName, City
        FROM ExecTable
        WHERE City = 'New York'
        GO
        -- Display statistics of the table
        sp_helpstats N'ExecTable', 'ALL'
        GO
        -- Replace your Statistics over here
        -- NOTE: Replace your _WA_Sys with stats from above query
        DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7);
        GO
        --------------------------------------------------------------
        -- Round 2
        -- Insert Ten Thousand Records
        -- INSERT 2
        INSERT INTO ExecTable (ID,FirstName,LastName,City)
        SELECT TOP 10000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
            'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Select Statement
        SELECT FirstName, LastName, City
        FROM ExecTable
        WHERE City = 'New York'
        GO
        -- Display statistics of the table
        sp_helpstats N'ExecTable', 'ALL'
        GO
        -- Replace your Statistics over here
        -- NOTE: Replace your _WA_Sys with stats from above query
        DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7);
        GO
        -- You will notice that the statistics still show 1000 rows
        -- Clean up Database
        DROP TABLE ExecTable
        GO
        USE MASTER
        GO
        ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        GO
        DROP DATABASE SampleDB
        GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics, Statistics
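    For readers trying to reproduce this puzzle, it helps to see exactly when each statistics object on the table was last refreshed; here is a small generic helper (not part of the original post) built on STATS_DATE:

        -- Last update time of every statistics object on ExecTable
        SELECT s.name AS stats_name,
               STATS_DATE(s.[object_id], s.stats_id) AS last_updated
        FROM sys.stats AS s
        WHERE s.[object_id] = OBJECT_ID('dbo.ExecTable');

    Run it after each INSERT round to confirm whether the auto-created _WA_Sys statistics were refreshed.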

    Read the article

  • RPi and Java Embedded GPIO: Writing Java code to blink LED

    - by hinkmond
    So, you've followed the previous steps to install Java Embedded on your Raspberry Pi, you went to Fry's and picked up some jumper wires, LEDs, and resistors, you hooked up the wires, LED, and resistor to the correct pins, and now you want to start programming in Java on your RPi? Yes? OK, then... Here we go. You can use the following source code to blink your first LED on your RPi using Java. In the code you can see that I'm not using any complicated GPIO libraries like wiringpi or pi4j, and I'm not doing any low-level pin manipulation like you can in C. And I'm not using Python (hell no!). This is Java programming, so we keep it simpler (and more readable) than those other programming languages. See: Write Java code to do this. In the Java code, I'm opening up the RPi Debian Wheezy well-defined file handles to control the GPIO ports. First I'm resetting everything using the unexport/export file handles. (On the RPi, if you open the well-defined file handles and write certain ASCII text to them, you can drive your GPIO to perform certain operations. See this GPIO reference.) Next, I write a "1" then a "0" to the value file handle of the GPIO0 port (see the previous pinout diagram). That makes the LED blink. Then, I loop to infinity. Easy, huh?

        /*
         * Java Embedded Raspberry Pi GPIO app
         */
        package jerpigpio;

        import java.io.FileWriter;

        /**
         *
         * @author hinkmond
         */
        public class JerpiGPIO {
            static final String GPIO_OUT = "out";
            static final String GPIO_ON = "1";
            static final String GPIO_OFF = "0";
            static final String GPIO_CH00 = "0";

            /**
             * @param args the command line arguments
             */
            public static void main(String[] args) {
                FileWriter commandFile;
                try {
                    /*** Init GPIO port for output ***/
                    // Open file handles to GPIO port unexport and export controls
                    FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
                    FileWriter exportFile = new FileWriter("/sys/class/gpio/export");
                    // Reset the port
                    unexportFile.write(GPIO_CH00);
                    unexportFile.flush();
                    // Set the port for use
                    exportFile.write(GPIO_CH00);
                    exportFile.flush();
                    // Open file handle to port input/output control
                    FileWriter directionFile =
                        new FileWriter("/sys/class/gpio/gpio"+GPIO_CH00+"/direction");
                    // Set port for output
                    directionFile.write(GPIO_OUT);
                    directionFile.flush();

                    /*--- Send commands to GPIO port ---*/
                    // Open file handle to issue commands to GPIO port
                    commandFile = new FileWriter("/sys/class/gpio/gpio"+GPIO_CH00+"/value");
                    // Loop forever
                    while (true) {
                        // Set GPIO port ON
                        commandFile.write(GPIO_ON);
                        commandFile.flush();
                        // Wait for a while
                        java.lang.Thread.sleep(200);
                        // Set GPIO port OFF
                        commandFile.write(GPIO_OFF);
                        commandFile.flush();
                        // Wait for a while
                        java.lang.Thread.sleep(200);
                    }
                } catch (Exception exception) {
                    exception.printStackTrace();
                }
            }
        }

    Hinkmond

    Read the article

  • android Emulator always stops at "waiting for Home..."

    - by wuwupp
    Hi there, I did a fresh install of Eclipse, the JDK, and Android SDK 1.5 on WinXP, but when I run the "hello world" app, the emulator always stops at the "android" loading message. In the Eclipse console it shows "waiting for HOME..." and the DDMS LogCat shows the following messages, including some errors and warnings. So, what's wrong in my case? I have googled lots of results, but none of them helped. Please help me. Many thx 06-13 00:07:54.323: INFO/DEBUG(551): debuggerd: Jun 30 2009 17:00:51 06-13 00:07:54.383: INFO/vold(550): Android Volume Daemon version 2.0 06-13 00:07:54.724: ERROR/flash_image(556): can't find recovery partition 06-13 00:07:55.223: DEBUG/qemud(558): entering main loop 06-13 00:07:55.323: DEBUG/qemud(558): multiplexer_handle_control: unknown control message (18 bytes): 'ko:unknown command' 06-13 00:07:55.493: INFO/vold(550): New MMC card 'SU02G' (serial 1012966) added @ /devices/platform/goldfish_mmc.0/mmc_host/mmc0/mmc0:e118 06-13 00:07:55.773: INFO/vold(550): Disk (blkdev 179:0), 262144 secs (128 MB) 0 partitions 06-13 00:07:55.773: INFO/vold(550): New blkdev 179.0 on media SU02G, media path /devices/platform/goldfish_mmc.0/mmc_host/mmc0/mmc0:e118, Dpp 0 06-13 00:07:55.814: INFO/vold(550): Evaluating dev '/devices/platform/goldfish_mmc.0/mmc_host/mmc0/mmc0:e118/block/mmcblk0' for mountable filesystems for '/sdcard' 06-13 00:07:56.014: ERROR/vold(550): Error opening switch name path '/sys/class/switch/test2' (No such file or directory) 06-13 00:07:56.014: ERROR/vold(550): Error bootstrapping switch '/sys/class/switch/test2' (m) 06-13 00:07:56.073: ERROR/vold(550): Error opening switch name path '/sys/class/switch/test' (No such file or directory) 06-13 00:07:56.073: ERROR/vold(550): Error bootstrapping switch '/sys/class/switch/test' (m) 06-13 00:07:56.073: DEBUG/vold(550): Bootstrapping complete 06-13 00:07:56.743: INFO//system/bin/dosfsck(550): dosfsck 3.0.1 (23 Nov 2008) 06-13 00:07:56.753: INFO//system/bin/dosfsck(550): dosfsck 3.0.1, 23 Nov 2008, FAT32, LFN 06-13 00:07:56.783: INFO//system/bin/dosfsck(550): Checking we can access the last sector of the filesystem 06-13 00:07:56.893: INFO//system/bin/dosfsck(550): Boot sector contents: 06-13 00:07:56.924: INFO//system/bin/dosfsck(550): System ID "MSWIN4.1" 06-13 00:07:56.934: INFO//system/bin/dosfsck(550): Media byte 0xf8 (hard disk) 06-13 00:07:56.953: INFO//system/bin/dosfsck(550): 512 bytes per logical sector 06-13 00:07:56.974: INFO//system/bin/dosfsck(550): 512 bytes per cluster 06-13 00:07:57.005: INFO//system/bin/dosfsck(550): 32 reserved sectors 06-13 00:07:57.013: INFO//system/bin/dosfsck(550): First FAT starts at byte 16384 (sector 32) 06-13 00:07:57.013: INFO//system/bin/dosfsck(550): 2 FATs, 32 bit entries 06-13 00:07:57.023: INFO//system/bin/dosfsck(550): 1040384 bytes per FAT (= 2032 sectors) 06-13 00:07:57.043: INFO//system/bin/dosfsck(550): Root directory start at cluster 2 (arbitrary size) 06-13 00:07:57.043: INFO//system/bin/dosfsck(550): Data area starts at byte 2097152 (sector 4096) 06-13 00:07:57.043: INFO//system/bin/dosfsck(550): 258048 data clusters (132120576 bytes) 06-13 00:07:57.103: INFO//system/bin/dosfsck(550): 9 sectors/track, 2 heads 06-13 00:07:57.103: INFO//system/bin/dosfsck(550): 0 hidden sectors 06-13 00:07:57.123: INFO//system/bin/dosfsck(550): 262144 sectors total 06-13 00:07:57.313: DEBUG/qemud(558): fdhandler_accept_event: accepting on fd 10 06-13 00:07:57.313: DEBUG/qemud(558): created client 0xe078 listening on fd 8 06-13 00:07:57.313: DEBUG/qemud(558): fdhandler_event: disconnect on fd 8 06-13 
00:07:57.623: DEBUG/qemud(558): fdhandler_accept_event: accepting on fd 10 06-13 00:07:57.623: DEBUG/qemud(558): created client 0xf028 listening on fd 8 06-13 00:07:57.643: DEBUG/qemud(558): client_fd_receive: attempting registration for service 'gsm' 06-13 00:07:57.763: DEBUG/qemud(558): client_fd_receive: - received channel id 1 06-13 00:08:12.553: INFO//system/bin/dosfsck(550): Checking for unused clusters. 06-13 00:08:13.483: INFO//system/bin/dosfsck(550): Checking free cluster summary. 06-13 00:08:13.643: DEBUG/AndroidRuntime(553): AndroidRuntime START <<<<<<<<<<<<<< 06-13 00:08:13.705: DEBUG/AndroidRuntime(553): CheckJNI is ON 06-13 00:08:13.793: INFO//system/bin/dosfsck(550): /dev/block//vold/179:0: 0 files, 1/258048 clusters 06-13 00:08:14.063: INFO/logwrapper(550): /system/bin/dosfsck terminated by exit(0) 06-13 00:08:14.143: DEBUG/vold(550): Filesystem check completed OK 06-13 00:08:14.683: INFO/vold(550): Sucessfully mounted vfat filesystem 179:0 on /sdcard (safe-mode on) 06-13 00:08:17.023: INFO/(554): ServiceManager: 0xac38 06-13 00:08:17.883: INFO/AudioFlinger(554): AudioFlinger's thread ready to run for output 0 06-13 00:08:18.163: INFO/CameraService(554): CameraService started: pid=554 06-13 00:08:21.824: DEBUG/AndroidRuntime(553): --- registering native functions --- 06-13 00:08:27.813: INFO/Zygote(553): Preloading classes... 06-13 00:08:27.994: DEBUG/dalvikvm(553): GC freed 764 objects / 42216 bytes in 88ms 06-13 00:08:30.234: DEBUG/dalvikvm(553): GC freed 278 objects / 17160 bytes in 48ms 06-13 00:08:33.094: DEBUG/dalvikvm(553): GC freed 208 objects / 12696 bytes in 44ms 06-13 00:08:34.343: DEBUG/dalvikvm(553): Trying to load lib /system/lib/libmedia_jni.so 0x0 06-13 00:08:35.803: DEBUG/dalvikvm(553): Added shared lib /system/lib/libmedia_jni.so 0x0 06-13 00:08:35.903: DEBUG/dalvikvm(553): Trying to load lib /system/lib/libmedia_jni.so 0x0 06-13 00:08:35.903: DEBUG/dalvikvm(553): Shared lib '/system/lib/libmedia_jni.so' already loaded in same CL 0x0 06-13 00:08:36.003: DEBUG/dalvikvm(553): Trying to load lib /system/lib/libmedia_jni.so 0x0 06-13 00:08:36.003: DEBUG/dalvikvm(553): Shared lib '/system/lib/libmedia_jni.so' already loaded in same CL 0x0 06-13 00:08:36.215: DEBUG/dalvikvm(553): Trying to load lib /system/lib/libmedia_jni.so 0x0 06-13 00:08:36.244: DEBUG/dalvikvm(553): Shared lib '/system/lib/libmedia_jni.so' already loaded in same CL 0x0 06-13 00:08:36.455: DEBUG/dalvikvm(553): GC freed 462 objects / 29144 bytes in 70ms 06-13 00:08:44.123: DEBUG/dalvikvm(553): GC freed 3584 objects / 171648 bytes in 125ms 06-13 00:09:10.473: DEBUG/dalvikvm(553): GC freed 11329 objects / 400856 bytes in 196ms 06-13 00:09:17.373: DEBUG/dalvikvm(553): GC freed 10472 objects / 438272 bytes in 199ms 06-13 00:09:24.563: DEBUG/dalvikvm(553): GC freed 10975 objects / 459800 bytes in 202ms 06-13 00:09:46.403: DEBUG/dalvikvm(553): GC freed 14372 objects / 506896 bytes in 252ms 06-13 00:09:53.793: DEBUG/dalvikvm(553): GC freed 11314 objects / 481360 bytes in 215ms 06-13 00:09:57.743: DEBUG/dalvikvm(553): GC freed 5928 objects / 248640 bytes in 195ms 06-13 00:10:01.324: DEBUG/dalvikvm(553): GC freed 349 objects / 37032 bytes in 190ms 06-13 00:10:05.253: DEBUG/dalvikvm(553): GC freed 778 objects / 48376 bytes in 217ms 06-13 00:10:06.564: DEBUG/dalvikvm(553): GC freed 321 objects / 37288 bytes in 219ms 06-13 00:10:08.194: DEBUG/dalvikvm(553): GC freed 477 objects / 29584 bytes in 212ms 06-13 00:10:08.663: DEBUG/dalvikvm(553): Trying to load lib /system/lib/libwebcore.so 0x0 06-13 
00:10:09.743: DEBUG/dalvikvm(553): Added shared lib /system/lib/libwebcore.so 0x0 06-13 00:10:11.634: DEBUG/dalvikvm(553): GC freed 441 objects / 26224 bytes in 236ms 06-13 00:10:12.893: DEBUG/dalvikvm(553): GC freed 506 objects / 41464 bytes in 235ms 06-13 00:10:14.153: DEBUG/dalvikvm(553): GC freed 537 objects / 38832 bytes in 239ms 06-13 00:10:15.883: DEBUG/dalvikvm(553): GC freed 342 objects / 22552 bytes in 248ms 06-13 00:10:17.124: DEBUG/dalvikvm(553): GC freed 338 objects / 18736 bytes in 264ms 06-13 00:10:18.523: DEBUG/dalvikvm(553): GC freed 629 objects / 32136 bytes in 260ms 06-13 00:10:38.933: DEBUG/dalvikvm(553): GC freed 14257 objects / 497280 bytes in 368ms 06-13 00:10:46.453: DEBUG/dalvikvm(553): GC freed 11164 objects / 469576 bytes in 360ms 06-13 00:10:52.973: DEBUG/dalvikvm(553): GC freed 7134 objects / 311432 bytes in 339ms 06-13 00:10:55.595: DEBUG/dalvikvm(553): GC freed 752 objects / 43224 bytes in 520ms 06-13 00:10:56.863: DEBUG/dalvikvm(553): GC freed 598 objects / 31496 bytes in 307ms 06-13 00:10:58.543: DEBUG/dalvikvm(553): GC freed 413 objects / 26336 bytes in 355ms 06-13 00:10:59.263: INFO/Zygote(553): ...preloaded 1166 classes in 151403ms. 06-13 00:10:59.683: DEBUG/dalvikvm(553): GC freed 313 objects / 19952 bytes in 343ms 06-13 00:10:59.793: INFO/Zygote(553): Preloading resources... 06-13 00:11:00.683: DEBUG/dalvikvm(553): GC freed 54 objects / 11248 bytes in 340ms 06-13 00:11:05.723: DEBUG/dalvikvm(553): GC freed 337 objects / 15008 bytes in 317ms 06-13 00:11:08.703: DEBUG/dalvikvm(553): GC freed 280 objects / 11768 bytes in 312ms 06-13 00:11:09.303: INFO/Zygote(553): ...preloaded 48 resources in 9513ms. 06-13 00:11:09.795: INFO/Zygote(553): ...preloaded 15 resources in 454ms. 06-13 00:11:10.303: DEBUG/dalvikvm(553): GC freed 118 objects / 8616 bytes in 420ms 06-13 00:11:10.913: DEBUG/dalvikvm(553): GC freed 205 objects / 8104 bytes in 308ms 06-13 00:11:11.344: DEBUG/dalvikvm(553): GC freed 36 objects / 1400 bytes in 320ms 06-13 00:11:11.543: INFO/dalvikvm(553): Splitting out new zygote heap 06-13 00:11:12.973: INFO/dalvikvm(553): System server process 585 has been created 06-13 00:11:13.336: INFO/Zygote(553): Accepting command socket connections 06-13 00:11:14.963: INFO/jdwp(585): received file descriptor 10 from ADB 06-13 00:11:16.843: WARN/System.err(585): Can't dispatch DDM chunk 46454154: no handler defined 06-13 00:11:16.953: WARN/System.err(585): Can't dispatch DDM chunk 4d505251: no handler defined 06-13 00:11:17.763: DEBUG/dalvikvm(585): Trying to load lib /system/lib/libandroid_servers.so 0x0 06-13 00:11:19.714: DEBUG/dalvikvm(585): Added shared lib /system/lib/libandroid_servers.so 0x0 06-13 00:11:20.123: INFO/sysproc(585): Entered system_init() 06-13 00:11:20.223: INFO/sysproc(585): ServiceManager: 0x1017b8 06-13 00:11:20.359: INFO/SurfaceFlinger(585): SurfaceFlinger is starting 06-13 00:11:20.493: INFO/SurfaceFlinger(585): SurfaceFlinger's main thread ready to run. Initializing graphics H/W... 
06-13 00:11:20.634: ERROR/MemoryHeapBase(585): error opening /dev/pmem: No such file or directory 06-13 00:11:20.704: ERROR/SurfaceFlinger(585): Couldn't open /sys/power/wait_for_fb_sleep or /sys/power/wait_for_fb_wake 06-13 00:11:22.013: ERROR/GLLogger(585): couldn't load library (Cannot find library) 06-13 00:11:22.103: INFO/SurfaceFlinger(585): EGL informations: 06-13 00:11:22.113: INFO/SurfaceFlinger(585): # of configs : 6 06-13 00:11:22.123: INFO/SurfaceFlinger(585): vendor : Android 06-13 00:11:22.123: INFO/SurfaceFlinger(585): version : 1.31 Android META-EGL 06-13 00:11:22.134: INFO/SurfaceFlinger(585): extensions: 06-13 00:11:22.134: INFO/SurfaceFlinger(585): Client API: OpenGL ES 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): using (fd=22) 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): id = 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): xres = 320 px 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): yres = 480 px 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): xres_virtual = 320 px 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): yres_virtual = 960 px 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): bpp = 16 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): r = 11:5 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): g = 5:6 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): b = 0:5 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): width = 49 mm (165.877548 dpi) 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): height = 74 mm (164.756760 dpi) 06-13 00:11:22.193: INFO/EGLDisplaySurface(585): refresh rate = 60.00 Hz 06-13 00:11:22.533: WARN/HAL(585): load: module=/system/lib/hw/copybit.goldfish.so error=Cannot find library 06-13 00:11:22.543: WARN/HAL(585): load: module=/system/lib/hw/copybit.default.so error=Cannot find library 06-13 00:11:22.553: WARN/SurfaceFlinger(585): ro.sf.lcd_density not defined, using 160 dpi by default. 06-13 00:11:22.644: INFO/SurfaceFlinger(585): OpenGL informations: 06-13 00:11:22.654: INFO/SurfaceFlinger(585): vendor : Android 06-13 00:11:22.654: INFO/SurfaceFlinger(585): renderer : Android PixelFlinger 1.0 06-13 00:11:22.654: INFO/SurfaceFlinger(585): version : OpenGL ES-CM 1.0 06-13 00:11:22.654: INFO/SurfaceFlinger(585): extensions: GL_OES_byte_coordinates GL_OES_fixed_point GL_OES_single_precision GL_OES_read_format GL_OES_compressed_paletted_texture GL_OES_draw_texture GL_OES_matrix_get GL_OES_query_matrix GL_ARB_texture_compression GL_ARB_texture_non_power_of_two GL_ANDROID_direct_texture GL_ANDROID_user_clip_plane GL_ANDROID_vertex_buffer_object GL_ANDROID_generate_mipmap 06-13 00:11:22.673: WARN/HAL(585): load: module=/system/lib/hw/copybit.goldfish.so error=Cannot find library 06-13 00:11:22.683: WARN/HAL(585): load: module=/system/lib/hw/copybit.default.so error=Cannot find library 06-13 00:11:22.703: WARN/HAL(585): load: module=/system/lib/hw/overlay.goldfish.so error=Cannot find library 06-13 00:11:22.713: WARN/HAL(585): load: module=/system/lib/hw/overlay.default.so error=Cannot find library 06-13 00:11:23.663: INFO/sysproc(585): System server: starting Android runtime. 06-13 00:11:23.733: INFO/sysproc(585): System server: starting Android services. 06-13 00:11:23.953: INFO/SystemServer(585): Entered the Android system server! 06-13 00:11:24.303: INFO/sysproc(585): System server: entering thread pool. 
06-13 00:11:24.763: ERROR/GLLogger(585): couldn't load library (Cannot find library) 06-13 00:11:25.893: INFO/ARMAssembler(585): generated scanline__00000077:03545404_00000A01_00000000 [ 30 ipp] (51 ins) at [0x18f708:0x18f7d4] in 72796961 ns 06-13 00:11:26.193: INFO/SystemServer(585): Starting Power Manager. 06-13 00:11:26.953: INFO/SystemServer(585): Starting Activity Manager. 06-13 00:11:31.733: INFO/SystemServer(585): Starting telephony registry 06-13 00:11:32.054: INFO/SystemServer(585): Starting Package Manager. 06-13 00:11:32.553: INFO/Installer(585): connecting... 06-13 00:11:32.914: INFO/installd(555): new connection 06-13 00:11:35.193: INFO/PackageManager(585): Got library android.awt in /system/framework/android.awt.jar 06-13 00:11:35.313: INFO/PackageManager(585): Got library android.test.runner in /system/framework/android.test.runner.jar 06-13 00:11:35.324: INFO/PackageManager(585): Got library com.android.im.plugin in /system/framework/com.android.im.plugin.jar 06-13 00:11:44.643: DEBUG/PackageManager(585): Scanning app dir /system/framework 06-13 00:11:49.513: DEBUG/PackageManager(585): Scanning app dir /system/app 06-13 00:11:51.493: DEBUG/dalvikvm(585): GC freed 6088 objects / 251280 bytes in 1237ms 06-13 00:12:27.497: DEBUG/dalvikvm(585): GC freed 3435 objects / 216088 bytes in 792ms 06-13 00:12:29.213: DEBUG/PackageManager(585): Scanning app dir /data/app 06-13 00:12:30.223: DEBUG/PackageManager(585): Scanning app dir /data/app-private 06-13 00:12:30.425: INFO/PackageManager(585): Time to scan packages: 47.319 seconds 06-13 00:12:30.703: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.providers.contacts 06-13 00:12:30.803: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.cp in package com.android.providers.contacts 06-13 00:12:30.853: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.development 06-13 00:12:30.913: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.ALL_SERVICES in package com.android.development 06-13 00:12:31.133: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.YouTubeUser in package com.android.development 06-13 00:12:31.143: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.ACCESS_GOOGLE_PASSWORD in package com.android.development 06-13 00:12:31.234: WARN/PackageManager(585): Unknown permission com.google.android.providers.gmail.permission.WRITE_GMAIL in package com.android.settings 06-13 00:12:31.254: WARN/PackageManager(585): Unknown permission com.google.android.providers.gmail.permission.READ_GMAIL in package com.android.settings 06-13 00:12:31.303: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.settings 06-13 00:12:31.683: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH in package com.android.browser 06-13 00:12:31.803: WARN/PackageManager(585): Unknown permission com.google.android.googleapps.permission.GOOGLE_AUTH.mail in package com.android.contacts 06-13 00:12:34.603: DEBUG/dalvikvm(585): GC freed 2851 objects / 161304 bytes in 845ms 06-13 00:12:35.403: INFO/SystemServer(585): Starting Content Manager. 
06-13 00:12:39.954: WARN/ActivityManager(585): Unable to start service Intent { action=android.accounts.IAccountsService comp={com.google.android.googleapps/com.google.android.googleapps.GoogleLoginService} }: not found 06-13 00:12:40.063: WARN/AccountMonitor(585): Couldn't connect to Intent { action=android.accounts.IAccountsService comp={com.google.android.googleapps/com.google.android.googleapps.GoogleLoginService} } (Missing service?) 06-13 00:12:40.253: INFO/SystemServer(585): Starting System Content Providers. 06-13 00:12:40.553: INFO/ActivityThread(585): Publishing provider settings: com.android.providers.settings.SettingsProvider 06-13 00:12:41.433: INFO/ActivityThread(585): Publishing provider sync: android.content.SyncProvider 06-13 00:12:41.683: INFO/SystemServer(585): Starting Battery Service. 06-13 00:12:42.293: ERROR/BatteryService(585): Could not open '/sys/class/power_supply/usb/online' 06-13 00:12:42.433: ERROR/BatteryService(585): Could not open '/sys/class/power_supply/battery/batt_vol' 06-13 00:12:42.543: ERROR/BatteryService(585): Could not open '/sys/class/power_supply/battery/batt_temp' 06-13 00:12:42.933: INFO/SystemServer(585): Starting Hardware Service. 06-13 00:12:43.398: DEBUG/qemud(558): fdhandler_accept_event: accepting on fd 10 06-13 00:12:43.623: DEBUG/qemud(558): created client 0x10fd8 listening on fd 11 06-13 00:12:43.743: DEBUG/qemud(558): client_fd_receive: attempting registration for service 'hw-control' 06-13 00:12:43.873: DEBUG/qemud(558): client_fd_receive: - received channel id 2 06-13 00:15:20.695: WARN/SurfaceFlinger(585): executeScheduledBroadcasts() skipped, contention on the client. We'll try again later...

    Read the article

  • Running two WSGI applications on the same server: GDAL OGR exception with apache2/mod_wsgi

    - by monkut
    I'm trying to run two WSGI applications, one Django and the other TileStache, on the same server. The TileStache server goes through Django to query the db. In the process of serving tiles it performs a transform on the incoming bbox, and in this process hits the following error. The transform works without error for the same bbox polygon when run manually from the Python shell:

        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 325, in __call__
            mimetype, content = requestHandler(self.config, environ['PATH_INFO'], environ['QUERY_STRING'])
          File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 231, in requestHandler
            mimetype, content = getTile(layer, coord, extension)
          File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 84, in getTile
            tile = layer.render(coord, format)
          File "/usr/lib/python2.7/dist-packages/TileStache/Core.py", line 295, in render
            tile = provider.renderArea(width, height, srs, xmin, ymin, xmax, ymax, coord.zoom)
          File "/var/www/tileserver/providers.py", line 59, in renderArea
            bbox.transform(METERS_SRID)
          File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/geos/geometry.py", line 520, in transform
            g = gdal.OGRGeometry(self.wkb, srid)
          File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geometries.py", line 131, in __init__
            self.__class__ = GEO_CLASSES[self.geom_type.num]
          File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geometries.py", line 245, in geom_type
            return OGRGeomType(capi.get_geom_type(self.ptr))
          File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geomtype.py", line 43, in __init__
            raise OGRException('Invalid OGR Integer Type: %d' % type_input)
        OGRException: Invalid OGR Integer Type: 1987180391

    I think I've hit the non-thread-safe issue with GDAL mentioned on the Django site. Is there a way I could configure this so that it works?

    Apache Version: Apache/2.2.22 (Ubuntu) mod_wsgi/3.3 Python/2.7.3 configured

    Apache apache2/sites-available/default:

        <VirtualHost *:80>
            ServerAdmin ironman@localhost
            DocumentRoot /var/www/bin
            LogLevel warn
            WSGIDaemonProcess lbs processes=2 maximum-requests=500 threads=1
            WSGIProcessGroup lbs
            WSGIScriptAlias / /var/www/bin/apache/django.wsgi
            Alias /static /var/www/lbs/static/
        </VirtualHost>

        <VirtualHost *:8080>
            ServerAdmin ironman@localhost
            DocumentRoot /var/www/bin
            LogLevel warn
            WSGIDaemonProcess tilestache processes=1 maximum-requests=500 threads=1
            WSGIProcessGroup tilestache
            WSGIScriptAlias / /var/www/bin/tileserver/tilestache.wsgi
        </VirtualHost>

    Django Version: 1.4

    httpd.conf:

        Listen 8080
        NameVirtualHost *:8080

    UPDATE: I've added a test.wsgi script to determine if the global interpreter setting is correct, as mentioned by Graham and described here: http://code.google.com/p/modwsgi/wiki/CheckingYourInstallation#Sub_Interpreter_Being_Used It seems to show the expected result:

        [Tue Aug 14 10:32:01 2012] [notice] Apache/2.2.22 (Ubuntu) mod_wsgi/3.3 Python/2.7.3 configured -- resuming normal operations
        [Tue Aug 14 10:32:01 2012] [info] mod_wsgi (pid=29891): Attach interpreter ''.

    I've worked around the issue for now by changing the SRS used in the db so that the transform is unnecessary in the TileStache app. I don't understand why the transform() method works when called in the Django app but fails in the TileStache app.

    tilestache.wsgi:

        #!/usr/bin/python
        import os
        import time
        import sys
        import TileStache

        current_dir = os.path.abspath(os.path.dirname(__file__))
        project_dir = os.path.realpath(os.path.join(current_dir, "..", ".."))
        sys.path.append(project_dir)
        sys.path.append(current_dir)
        os.environ['DJANGO_SETTINGS_MODULE'] = 'bin.settings'
        sys.stdout = sys.stderr

        # wait for the apache django lbs server to start up,
        # --> in order to retrieve the tilestache cfg
        time.sleep(2)
        tilestache_config_url = "http://127.0.0.1/tilestache/config/"
        application = TileStache.WSGITileServer(tilestache_config_url)

    UPDATE 2: It turned out I did need to use a projection other than the Google (900913) one in the db, so my previous workaround failed. While I'd like to fix this issue properly, I decided to work around it this time by making a Django view that performs the needed transform. So now TileStache requests the data through the Django app and not internally.
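    For what it's worth, the usual first step for non-thread-safe C extensions such as GDAL under mod_wsgi is to force each daemon process to run the application in the main Python interpreter rather than a sub-interpreter. WSGIApplicationGroup is a real mod_wsgi directive, but whether it resolves this particular OGRException is an assumption:

        # Inside each <VirtualHost> block, alongside WSGIProcessGroup:
        # run the application in the first (main) interpreter, which is the
        # only interpreter many C extensions are safe in.
        WSGIApplicationGroup %{GLOBAL}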

    Read the article

  • Python script is exiting with no output and I have no idea why

    - by Adam Tuttle
    I'm attempting to debug a Subversion post-commit hook that calls some Python scripts. What I've been able to determine so far is that when I run post-commit.bat manually (I've created a wrapper for it to make it easier), everything succeeds, but when SVN runs it, one particular step doesn't work. We're using CollabNet svnserve, which I know from the documentation removes all environment variables. This had caused some problems earlier, but shouldn't be an issue now:

    "Before Subversion calls a hook script, it removes all variables - including $PATH on Unix, and %PATH% on Windows - from the environment. Therefore, your script can only run another program if you spell out that program's absolute name."

    The relevant portion of post-commit.bat is:

        echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log
        set SITENAME=staging
        set SVNPATH=branches/staging/wwwroot/
        "C:\Python3\python.exe" C:\svn-repos\company\hooks\svn2ftp.py ^
            --svnUser="svnusername" ^
            --svnPass="svnpassword" ^
            --ftp-user=ftpuser ^
            --ftp-password=ftppassword ^
            --ftp-remote-dir=/ ^
            --access-url=svn://10.0.100.6/company ^
            --status-file="C:\svn-repos\company\hooks\svn2ftp-%SITENAME%.dat" ^
            --project-directory=%SVNPATH% "staging.company.com" %1 %2 >> c:\svn-repos\company\hooks\svn2ftp.out.log
        echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log

    When I run post-commit.bat manually, for example post-commit c:\svn-repos\company 12345, I see output like the following in svn2ftp.out.log:

        --------------------------
        args1: c:\svn-repos\company
        args0: staging.company.com
        abspath: c:\svn-repos\company
        project_dir: branches/staging/wwwroot/
        local_repos_path: c:\svn-repos\company
        getting youngest revision...
        done, up-to-date
        --------------------------

    However, when I commit something to the repo and it runs automatically, the output is:

        --------------------------
        --------------------------

    svn2ftp.py is a bit long, so I apologize, but here goes. I'll have some notes/disclaimers about its contents below it.

    #!/usr/bin/env python
    """Usage: svn2ftp.py [OPTION...] FTP-HOST REPOS-PATH
    Upload to FTP-HOST changes committed to the Subversion repository at
    REPOS-PATH. Uses svn diff --summarize to only propagate the changed files.
    Options:
     -?, --help               Show this help message.
     -u, --ftp-user=USER      The username for the FTP server. Default: 'anonymous'
     -p, --ftp-password=P     The password for the FTP server. Default: '@'
     -P, --ftp-port=X         Port number for the FTP server. Default: 21
     -r, --ftp-remote-dir=DIR The remote directory that is expected to resemble the
                              repository project directory
     -a, --access-url=URL     This is the URL that should be used when trying to SVN
                              export files so that they can be uploaded to the FTP server
     -s, --status-file=PATH   Required. This script needs to store the last successful
                              revision that was transferred to the server. PATH is the
                              location of this file.
     -d, --project-directory=DIR If the project you are interested in sending to the FTP
                              server is not under the root of the repository (/), set this
                              parameter. Example: -d 'project1/trunk/' This should NOT
                              start with a '/'.
    2008.5.2 CKS Fixed possible Windows-related bug with tempfile, where the script didn't have permission to write to the tempfile. Replaced this with a open()-created file created in the CWD.
    2008.5.13 CKS Added error logging. Added exception for file-not-found errors when deleting files.
2008.5.14 CKS Change file open to 'rb' mode, to prevent Python's universal newline support from stripping CR characters, causing later comparisons between FTP and SVN to report changes. """ try: import sys, os import logging logging.basicConfig( level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s', filename='svn2ftp.debug.log', filemode='a' ) console = logging.StreamHandler() console.setLevel(logging.ERROR) logging.getLogger('').addHandler(console) import getopt, tempfile, smtplib, traceback, subprocess from io import StringIO import pysvn import ftplib import inspect except Exception as e: logging.error(e) #capture the location of the error frame = inspect.currentframe() stack_trace = traceback.format_stack(frame) logging.debug(stack_trace) print(stack_trace) #end capture sys.exit(1) #defaults host = "" user = "anonymous" password = "@" port = 21 repo_path = "" local_repos_path = "" status_file = "" project_directory = "" remote_base_directory = "" toAddrs = "[email protected]" youngest_revision = "" def email(toAddrs, message, subject, fromAddr='[email protected]'): headers = "From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n" % (fromAddr, toAddrs, subject) message = headers + message logging.info('sending email to %s...' % toAddrs) server = smtplib.SMTP('smtp.company.com') server.set_debuglevel(1) server.sendmail(fromAddr, toAddrs, message) server.quit() logging.info('email sent') def captureErrorMessage(e): sout = StringIO() traceback.print_exc(file=sout) errorMessage = '\n'+('*'*80)+('\n%s'%e)+('\n%s\n'%sout.getvalue())+('*'*80) return errorMessage def usage_and_exit(errmsg): """Print a usage message, plus an ERRMSG (if provided), then exit. If ERRMSG is provided, the usage message is printed to stderr and the script exits with a non-zero error code. Otherwise, the usage message goes to stdout, and the script exits with a zero errorcode.""" if errmsg is None: stream = sys.stdout else: stream = sys.stderr print(__doc__, file=stream) if errmsg: print("\nError: %s" % (errmsg), file=stream) sys.exit(2) sys.exit(0) def read_args(): global host global user global password global port global repo_path global local_repos_path global status_file global project_directory global remote_base_directory global youngest_revision try: opts, args = getopt.gnu_getopt(sys.argv[1:], "?u:p:P:r:a:s:d:SU:SP:", ["help", "ftp-user=", "ftp-password=", "ftp-port=", "ftp-remote-dir=", "access-url=", "status-file=", "project-directory=", "svnUser=", "svnPass=" ]) except getopt.GetoptError as msg: usage_and_exit(msg) for opt, arg in opts: if opt in ("-?", "--help"): usage_and_exit() elif opt in ("-u", "--ftp-user"): user = arg elif opt in ("-p", "--ftp-password"): password = arg elif opt in ("-SU", "--svnUser"): svnUser = arg elif opt in ("-SP", "--svnPass"): svnPass = arg elif opt in ("-P", "--ftp-port"): try: port = int(arg) except ValueError as msg: usage_and_exit("Invalid value '%s' for --ftp-port." 
% (arg)) if port < 1 or port > 65535: usage_and_exit("Value for --ftp-port must be a positive integer less than 65536.") elif opt in ("-r", "--ftp-remote-dir"): remote_base_directory = arg elif opt in ("-a", "--access-url"): repo_path = arg elif opt in ("-s", "--status-file"): status_file = os.path.abspath(arg) elif opt in ("-d", "--project-directory"): project_directory = arg if len(args) != 3: print(str(args)) usage_and_exit("host and/or local_repos_path not specified (" + len(args) + ")") host = args[0] print("args1: " + args[1]) print("args0: " + args[0]) print("abspath: " + os.path.abspath(args[1])) local_repos_path = os.path.abspath(args[1]) print('project_dir:',project_directory) youngest_revision = int(args[2]) if status_file == "" : usage_and_exit("No status file specified") def main(): global host global user global password global port global repo_path global local_repos_path global status_file global project_directory global remote_base_directory global youngest_revision read_args() #repository,fs_ptr #get youngest revision print("local_repos_path: " + local_repos_path) print('getting youngest revision...') #youngest_revision = fs.youngest_rev(fs_ptr) assert youngest_revision, "Unable to lookup youngest revision." last_sent_revision = get_last_revision() if youngest_revision == last_sent_revision: # no need to continue. we should be up to date. print('done, up-to-date') return if last_sent_revision or youngest_revision < 10: # Only compare revisions if the DAT file contains a valid # revision number. Otherwise we risk waiting forever while # we parse and uploading every revision in the repo in the case # where a repository is retroactively configured to sync with ftp. pysvn_client = pysvn.Client() pysvn_client.callback_get_login = get_login rev1 = pysvn.Revision(pysvn.opt_revision_kind.number, last_sent_revision) rev2 = pysvn.Revision(pysvn.opt_revision_kind.number, youngest_revision) summary = pysvn_client.diff_summarize(repo_path, rev1, repo_path, rev2, True, False) print('summary len:',len(summary)) if len(summary) > 0 : print('connecting to %s...' % host) ftp = FTPClient(host, user, password) print('connected to %s' % host) ftp.base_path = remote_base_directory print('set remote base directory to %s' % remote_base_directory) #iterate through all the differences between revisions for change in summary : #determine whether the path of the change is relevant to the path that is being sent, and modify the path as appropriate. print('change path:',change.path) ftp_relative_path = apply_basedir(change.path) print('ftp rel path:',ftp_relative_path) #only try to sync path if the path is in our project_directory if ftp_relative_path != "" : is_file = (change.node_kind == pysvn.node_kind.file) if str(change.summarize_kind) == "delete" : print("deleting: " + ftp_relative_path) try: ftp.delete_path("/" + ftp_relative_path, is_file) except ftplib.error_perm as e: if 'cannot find the' in str(e) or 'not found' in str(e): # Log, but otherwise ignore path-not-found errors # when deleting, since it's not a disaster if the file # we want to delete is already gone. 
logging.error(captureErrorMessage(e)) else: raise elif str(change.summarize_kind) == "added" or str(change.summarize_kind) == "modified" : local_file = "" if is_file : local_file = svn_export_temp(pysvn_client, repo_path, rev2, change.path) print("uploading file: " + ftp_relative_path) ftp.upload_path("/" + ftp_relative_path, is_file, local_file) if is_file : os.remove(local_file) elif str(change.summarize_kind) == "normal" : print("skipping 'normal' element: " + ftp_relative_path) else : raise str("Unknown change summarize kind: " + str(change.summarize_kind) + ", path: " + ftp_relative_path) ftp.close() #write back the last revision that was synced print("writing last revision: " + str(youngest_revision)) set_last_revision(youngest_revision) # todo: undo def get_login(a,b,c,d): #arguments don't matter, we're always going to return the same thing try: return True, "svnUsername", "svnPassword", True except Exception as e: logging.error(e) #capture the location of the error frame = inspect.currentframe() stack_trace = traceback.format_stack(frame) logging.debug(stack_trace) #end capture sys.exit(1) #functions for persisting the last successfully synced revision def get_last_revision(): if os.path.isfile(status_file) : f=open(status_file, 'r') line = f.readline() f.close() try: i = int(line) except ValueError: i = 0 else: i = 0 f = open(status_file, 'w') f.write(str(i)) f.close() return i def set_last_revision(rev) : f = open(status_file, 'w') f.write(str(rev)) f.close() #augmented ftp client class that can work off a base directory class FTPClient(ftplib.FTP) : def __init__(self, host, username, password) : self.base_path = "" self.current_path = "" ftplib.FTP.__init__(self, host, username, password) def cwd(self, path) : debug_path = path if self.current_path == "" : self.current_path = self.pwd() print("pwd: " + self.current_path) if not os.path.isabs(path) : debug_path = self.base_path + "<" + path path = os.path.join(self.current_path, path) elif self.base_path != "" : debug_path = self.base_path + ">" + path.lstrip("/") path = os.path.join(self.base_path, path.lstrip("/")) path = os.path.normpath(path) #by this point the path should be absolute. if path != self.current_path : print("change from " + self.current_path + " to " + debug_path) ftplib.FTP.cwd(self, path) self.current_path = path else : print("staying put : " + self.current_path) def cd_or_create(self, path) : assert os.path.isabs(path), "absolute path expected (" + path + ")" try: self.cwd(path) except ftplib.error_perm as e: for folder in path.split('/'): if folder == "" : self.cwd("/") continue try: self.cwd(folder) except: print("mkd: (" + path + "):" + folder) self.mkd(folder) self.cwd(folder) def upload_path(self, path, is_file, local_path) : if is_file: (path, filename) = os.path.split(path) self.cd_or_create(path) # Use read-binary to avoid universal newline support from stripping CR characters. f = open(local_path, 'rb') self.storbinary("STOR " + filename, f) f.close() else: self.cd_or_create(path) def delete_path(self, path, is_file) : (path, filename) = os.path.split(path) print("trying to delete: " + path + ", " + filename) self.cwd(path) try: if is_file : self.delete(filename) else: self.delete_path_recursive(filename) except ftplib.error_perm as e: if 'The system cannot find the' in str(e) or '550 File not found' in str(e): # Log, but otherwise ignore path-not-found errors # when deleting, since it's not a disaster if the file # we want to delete is already gone. 
logging.error(captureErrorMessage(e)) else: raise def delete_path_recursive(self, path): if path == "/" : raise "WARNING: trying to delete '/'!" for node in self.nlst(path) : if node == path : #it's a file. delete and return self.delete(path) return if node != "." and node != ".." : self.delete_path_recursive(os.path.join(path, node)) try: self.rmd(path) except ftplib.error_perm as msg : sys.stderr.write("Error deleting directory " + os.path.join(self.current_path, path) + " : " + str(msg)) # apply the project_directory setting def apply_basedir(path) : #remove any leading stuff (in this case, "trunk/") and decide whether file should be propagated if not path.startswith(project_directory) : return "" return path.replace(project_directory, "", 1) def svn_export_temp(pysvn_client, base_path, rev, path) : # Causes access denied error. Couldn't deduce Windows-perm issue. # It's possible Python isn't garbage-collecting the open file-handle in time for pysvn to re-open it. # Regardless, just generating a simple filename seems to work. #(fd, dest_path) = tempfile.mkstemp() dest_path = tmpName = '%s.tmp' % __file__ exportPath = os.path.join(base_path, path).replace('\\','/') print('exporting %s to %s' % (exportPath, dest_path)) pysvn_client.export( exportPath, dest_path, force=False, revision=rev, native_eol=None, ignore_externals=False, recurse=True, peg_revision=rev ) return dest_path if __name__ == "__main__": logging.info('svnftp.start') try: main() logging.info('svnftp.done') except Exception as e: # capture the location of the error for debug purposes frame = inspect.currentframe() stack_trace = traceback.format_stack(frame) logging.debug(stack_trace[:-1]) print(stack_trace) # end capture error_text = '\nFATAL EXCEPTION!!!\n'+captureErrorMessage(e) subject = "ALERT: SVN2FTP Error" message = """An Error occurred while trying to FTP an SVN commit. repo_path = %(repo_path)s\n local_repos_path = %(local_repos_path)s\n project_directory = %(project_directory)s\n remote_base_directory = %(remote_base_directory)s\n error_text = %(error_text)s """ % globals() email(toAddrs, message, subject) logging.error(e) Notes/Disclaimers: I have basically no python training so I'm learning as I go and spending lots of time reading docs to figure stuff out. The body of get_login is in a try block because I was getting strange errors saying there was an unhandled exception in callback_get_login. Never figured out why, but it seems fine now. Let sleeping dogs lie, right? The username and password for get_login are currently hard-coded (but correct) just to eliminate variables and try to change as little as possible at once. (I added the svnuser and svnpass arguments to the existing argument parsing.) So that's where I am. I can't figure out why on earth it's not printing anything into svn2ftp.out.log. If you're wondering, the output for one of these failed attempts in svn2ftp.debug.log is: 2012-09-06 15:18:12,496 INFO svnftp.start 2012-09-06 15:18:12,496 INFO svnftp.done And it's no different on a successful run. So there's nothing useful being logged. I'm lost. I've gone way down the rabbit hole on this one, and don't know where to go from here. Any ideas?
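    One detail worth checking in the batch file above (an observation about the redirection, not something from the original post): ">>" captures only stdout, so anything the script writes to stderr - including the usage_and_exit() message, which prints to stderr, and any uncaught traceback - is silently discarded when svnserve runs the hook. Appending "2>&1" to the existing redirection makes such failures visible in the log:

        "C:\Python3\python.exe" C:\svn-repos\company\hooks\svn2ftp.py ^
            ...same arguments as above... ^
            "staging.company.com" %1 %2 >> c:\svn-repos\company\hooks\svn2ftp.out.log 2>&1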


  • overriding new ubuntu installation

    - by tkoomzaaskz
I've got an Ubuntu 11.10 which lost its support in May 2013; now I'd like to reinstall up to the most up-to-date LTS, which is 12.04. My question is about my current partitions and doing backups. Is there a safe way to back up my data on some local partitions instead of copying files onto DVDs/external drives (which is very uncomfortable in my situation)? The following system commands show my disk:

$ lsblk
NAME   MAJ:MIN RM   SIZE RO MOUNTPOINT
sda      8:0    0 232,9G  0
+-sda1   8:1    0  48,8G  0
+-sda2   8:2    0    63G  0
+-sda3   8:3    0     1K  0
+-sda4   8:4    0  53,7G  0 /
+-sda5   8:5    0  18,6G  0
+-sda6   8:6    0  25,5G  0
+-sda7   8:7    0  23,3G  0 [SWAP]
sr0     11:0    1  1024M  0

and

$ sudo fdisk -l
[sudo] password for xyz:

Disk /dev/sda: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc3ffc3ff

   Device Boot      Start         End      Blocks  Id  System
/dev/sda1   *        2048   102402047    51200000   7  HPFS/NTFS/exFAT
/dev/sda2       215044096   347080703    66018304   7  HPFS/NTFS/exFAT
/dev/sda3       347082750   488392064    70654657+  5  Extended
/dev/sda4       102402048   215042047    56320000  83  Linux
/dev/sda5       395905923   434975939    19535008+ 83  Linux
/dev/sda6       434976003   488392064    26708031  83  Linux
/dev/sda7       347082752   395905023    24411136  82  Linux swap / Solaris

In the beginning I had Windows Vista pre-installed with the machine when it was bought (damn!) and I installed Linux (the one I have now). The Windows program in the master boot record has been overridden by GRUB, and now I can boot both Windows and Linux. This is the list of mounted devices:

$ mount
/dev/sda4 on / type ext4 (rw,errors=remount-ro,commit=0)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
gvfs-fuse-daemon on /home/tomasz/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=tomasz)

It's strange (I don't remember such a thing) that my current Linux uses only one partition (/dev/sda4). But, anyway, it seems to be so. My final question is: am I able to use one of the existing Linux partitions for a backup and install Ubuntu 12.04 without removing either Windows or Ubuntu 11.10? I mean, will GRUB automatically accept both the old Windows Vista and the two Linuxes (the old 11.10 and the "new" 12.04)? Is there any hidden operation done during installation that could harm my custom backup partition? My fstab file:

proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda4 during installation
UUID=d44e89f5-9da2-48eb-83b3-887652ec95d2 / ext4 errors=remount-ro 0 1
# swap was on /dev/sda7 during installation
UUID=bbe50535-ba57-434a-9272-211d859f0e00 none swap sw 0 0

sda5 and sda6 are trash partitions created during an unsuccessful Linux installation (the one before my current installation). I didn't delete those partitions, and I have access to them (so I can use them as backup partitions).
edit: second question is: why does lsblk show /dev/sda having 232,9G while fdisk shows that it has 250.1GB? Where does the difference come from?
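
As for the size mismatch in the edit: fdisk reports decimal gigabytes (10^9 bytes) while lsblk reports binary gibibytes (2^30 bytes), so the two tools agree on the byte count and only disagree on units. A quick check with the byte count from the fdisk output above:

    total_bytes = 250059350016      # from "Disk /dev/sda: ... bytes" above
    print(total_bytes / 10.0**9)    # ~250.06 -> fdisk's "250.1 GB" (decimal)
    print(total_bytes / 2.0**30)    # ~232.89 -> lsblk's "232,9G" (binary GiB)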


  • Linux software RAID6: rebuild slow

    - by Ole Tange
    I am trying to find the bottleneck in the rebuilding of a software raid6. ## Pause rebuilding when measuring raw I/O performance # echo 1 > /proc/sys/dev/raid/speed_limit_min # echo 1 > /proc/sys/dev/raid/speed_limit_max ## Drop caches so that does not interfere with measuring # sync ; echo 3 | tee /proc/sys/vm/drop_caches >/dev/null # time parallel -j0 "dd if=/dev/{} bs=256k count=4000 | cat >/dev/null" ::: sdbd sdbc sdbf sdbm sdbl sdbk sdbe sdbj sdbh sdbg 4000+0 records in 4000+0 records out 1048576000 bytes (1.0 GB) copied, 7.30336 s, 144 MB/s [... similar for each disk ...] # time parallel -j0 "dd if=/dev/{} skip=15000000 bs=256k count=4000 | cat >/dev/null" ::: sdbd sdbc sdbf sdbm sdbl sdbk sdbe sdbj sdbh sdbg 4000+0 records in 4000+0 records out 1048576000 bytes (1.0 GB) copied, 12.7991 s, 81.9 MB/s [... similar for each disk ...] So we can read sequentially at 140 MB/s in the outer tracks and 82 MB/s in the inner tracks on all the drives simultaneously. Sequential write performance is similar. This would lead me to expect a rebuild speed of 82 MB/s or more. # echo 800000 > /proc/sys/dev/raid/speed_limit_min # echo 800000 > /proc/sys/dev/raid/speed_limit_max # cat /proc/mdstat md2 : active raid6 sdbd[10](S) sdbc[9] sdbf[0] sdbm[8] sdbl[7] sdbk[6] sdbe[11] sdbj[4] sdbi[3](F) sdbh[2] sdbg[1] 27349121408 blocks super 1.2 level 6, 128k chunk, algorithm 2 [9/8] [UUU_UUUUU] [=========>...........] recovery = 47.3% (1849905884/3907017344) finish=855.9min speed=40054K/sec But we only get 40 MB/s. And often this drops to 30 MB/s. # iostat -dkx 1 sdbc 0.00 8023.00 0.00 329.00 0.00 33408.00 203.09 0.70 2.12 1.06 34.80 sdbd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 sdbe 13.00 0.00 8334.00 0.00 33388.00 0.00 8.01 0.65 0.08 0.06 47.20 sdbf 0.00 0.00 8348.00 0.00 33388.00 0.00 8.00 0.58 0.07 0.06 48.00 sdbg 16.00 0.00 8331.00 0.00 33388.00 0.00 8.02 0.71 0.09 0.06 48.80 sdbh 961.00 0.00 8314.00 0.00 37100.00 0.00 8.92 0.93 0.11 0.07 54.80 sdbj 70.00 0.00 8276.00 0.00 33384.00 0.00 8.07 0.78 0.10 0.06 48.40 sdbk 124.00 0.00 8221.00 0.00 33380.00 0.00 8.12 0.88 0.11 0.06 47.20 sdbl 83.00 0.00 8262.00 0.00 33380.00 0.00 8.08 0.96 0.12 0.06 47.60 sdbm 0.00 0.00 8344.00 0.00 33376.00 0.00 8.00 0.56 0.07 0.06 47.60 iostat says the disks are not 100% busy (but only 40-50%). This fits with the hypothesis that the max is around 80 MB/s. Since this is software raid the limiting factor could be CPU. top says: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 38520 root 20 0 0 0 0 R 64 0.0 2947:50 md2_raid6 6117 root 20 0 0 0 0 D 53 0.0 473:25.96 md2_resync So md2_raid6 and md2_resync are clearly busy taking up 64% and 53% of a CPU respectively, but not near 100%. The chunk size (128k) of the RAID was chosen after measuring which chunksize gave the least CPU penalty. If this speed is normal: What is the limiting factor? Can I measure that? If this speed is not normal: How can I find the limiting factor? Can I change that?
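
One knob commonly raised for md raid5/6 resync throughput, and not covered by the speed_limit settings above, is the stripe cache. A minimal sketch, assuming the array name md2 from the post; the value 8192 is illustrative rather than a recommendation, since each cache entry costs PAGE_SIZE times the number of devices in RAM:

    path = '/sys/block/md2/md/stripe_cache_size'

    with open(path) as f:            # read the current value
        print('current:', f.read().strip())

    with open(path, 'w') as f:       # writing requires root
        f.write('8192\n')

Whether this moves the 40 MB/s figure is exactly the kind of thing worth re-measuring with the iostat loop shown above.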


  • Wi-Fi Stick with ZD1211 chip refuses to work on Ubuntu >8.10. No clue.

    - by Benjamin Maus
    I have a machine running Ubuntu 9.10 (Karmic *x86_64*). Everything is running smooth so far, except for the Wi-Fi USB Stick. The same device worked perfectly in 8.10. The wireless device is a GW-US54GXS using the Zydas Zd1211 chipset. Dmesg output after plugging in: [ 196.303436] phy0: Selected rate control algorithm 'minstrel' [ 196.304209] zd1211rw 2-1:1.0: phy0 [ 196.304227] usbcore: registered new interface driver zd1211rw [ 196.334137] usb 2-1: firmware: requesting zd1211/zd1211b_ub [ 196.357463] usb 2-1: firmware: requesting zd1211/zd1211b_uphr [ 196.402643] zd1211rw 2-1:1.0: firmware version 4725 [ 196.442611] zd1211rw 2-1:1.0: zd1211b chip 2019:5303 v4810 high 00-90-cc AL2230_RF pa0 ---N- [ 196.463814] usb 2-1: firmware: requesting zd1211/zd1211b_ub [ 196.466823] usb 2-1: firmware: requesting zd1211/zd1211b_uphr Syslog output: Nov 5 11:20:24 somesystem kernel: [ 196.303436] phy0: Selected rate control algorithm 'minstrel' Nov 5 11:20:24 kierkegaard NetworkManager: <info> Found radio killswitch rfkill0 (at /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/ieee80211/phy0/rfkill0) (driver <unknown>) Nov 5 11:20:24 somesystem kernel: [ 196.304209] zd1211rw 2-1:1.0: phy0 Nov 5 11:20:24 somesystem kernel: [ 196.304227] usbcore: registered new interface driver zd1211rw Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wmaster0, iface: wmaster0) Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wmaster0, iface: wmaster0): no ifupdown configuration found. Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wlan0, iface: wlan0) Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wlan0, iface: wlan0): no ifupdown configuration found. Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): driver supports SSID scans (scan_capa 0x01). Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): new 802.11 WiFi device (driver: 'zd1211rw') Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/2 Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): now managed Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): device state change: 1 -> 2 (reason 2) Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): bringing up device. Nov 5 11:20:24 somesystem kernel: [ 196.334137] usb 2-1: firmware: requesting zd1211/zd1211b_ub Nov 5 11:20:24 somesystem kernel: [ 196.357463] usb 2-1: firmware: requesting zd1211/zd1211b_uphr Nov 5 11:20:24 somesystem kernel: [ 196.402643] zd1211rw 2-1:1.0: firmware version 4725 Nov 5 11:20:24 somesystem kernel: [ 196.442611] zd1211rw 2-1:1.0: zd1211b chip 2019:5303 v4810 high 00-90-cc AL2230_RF pa0 ---N- Nov 5 11:20:24 somesystem NetworkManager: <WARN> nm_device_hw_bring_up(): (wlan0): device not up after timeout! Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): deactivating device (reason: 2). 
Nov 5 11:20:24 somesystem kernel: [ 196.463814] usb 2-1: firmware: requesting zd1211/zd1211b_ub Nov 5 11:20:24 somesystem kernel: [ 196.466823] usb 2-1: firmware: requesting zd1211/zd1211b_uphr Nov 5 11:20:29 somesystem wpa_supplicant[978]: Could not set interface 'wlan0' UP Nov 5 11:20:29 somesystem wpa_supplicant[978]: Failed to initialize driver interface Nov 5 11:20:29 somesystem NetworkManager: <WARN> nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface. Gnome tells me in the network menu that the device was "not ready". It appears in iwconfig but not in ifconfig. The same symptoms appear when I boot from the live CD. How can I solve this dilemma?
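
Since NetworkManager reports a radio kill switch (rfkill0) immediately before the "device not up after timeout" failure, ruling out a soft or hard block is a cheap first test. A small sketch reading the older sysfs encoding (0 = soft blocked, 1 = unblocked, 2 = hard blocked), with paths as exposed on kernels of that era:

    import glob

    for dev in glob.glob('/sys/class/rfkill/rfkill*'):
        name = open(dev + '/name').read().strip()
        state = open(dev + '/state').read().strip()
        print(name, state)   # anything other than 1 would keep the interface down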


  • SQL Server Log File Won't Shrink due cause "log are pending replication" on non replicated DB?

    - by user796466
I have a non-mission-critical, 9am-5pm SQL Server database that I have set up to do nightly full backups and log backups every 30 minutes during business hours. The database is in full recovery, and normally I have no reason to truncate/shrink logs unless I do some heavy maintenance. Log backups manage the size with no issue. However, I have not been at this client for several weeks, and upon inspection I noticed that the log had grown to about 10 times the size of the .mdf file. I poked around: backups had been running, and I had not gotten any severity error alerts (SQL mail). I attempted to put the DB in simple recovery and shrink the log; this was no good. I proceeded to try a log backup and got:

The log was not truncated because records at the beginning of the log are pending replication or Change Data Capture. Ensure the Log Reader Agent or capture job is running or use sp_repldone to mark transactions as distributed or captured.

Restart SQL Server, rinse, repeat... same thing. I said ??? Replication is not, nor has ever been, set up on this DB or this server ??? So the log backups have not been flushing the .ldf. So I did a couple hours of research and I found:

http://www.sqlmonster.com/Uwe/Forum.aspx/sql-server/5445/Log-file-is-not-truncated-inspite-of-regular-log-backup
http://www.eggheadcafe.com/software/aspnet/30708322/the-log-was-not-truncated-because-records-at-the-beginning-of-the-log-are-pending-replication.aspx

It seems to be some kind of poorly documented bug?? The solution seems to have been to run exec sp_repldone, more precisely:

EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1

"This procedure can be used in emergency situations to allow truncation of the transaction log when transactions pending replication are present. Using this procedure prevents Microsoft SQL Server 2000 from replicating the database until the database is unpublished and republished." ~ MSDN

When I do that, I get the following:

Msg 18757, Level 16, State 1, Procedure sp_repldone, Line 1
Unable to execute procedure. The database is not published. Execute the procedure in a database that is published for replication.

Which makes sense, because the DB has never been published for replication. I have several questions:

A) First and foremost: WTF is going on? I am interested in knowing the why here. Is this genuinely a bug, or is there some aspect of the backup process that is malfunctioning and causing the DB to mimic a replicated state? Someone please edify me on this.

B) Second: do I really have to publish/replicate this DB just to exec this SP and fix this??? Sounds crazy. Or is there some T-SQL that can put it in a published state, exec the proc, and be on my way?

C) Third: if I do indeed have to publish this database to exec the SP, release this wrongly held log, and get my .ldf file and backups back on track, how do I publish the database without the online host that it is asking for??? I don't generally do this kind of database administration and need some guidance.

Sorry if this is too verbose, but just voicing the question helps me clarify it... Thank you in advance for your help.
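
Before publishing anything, it is worth asking the engine directly why it refuses to reuse the log: since SQL Server 2005, sys.databases exposes log_reuse_wait_desc, and a value of REPLICATION there would confirm what the backup error claims. A hedged sketch using pyodbc; the connection string is a placeholder:

    import pyodbc   # assumes the pyodbc driver is installed

    conn = pyodbc.connect('DRIVER={SQL Server};SERVER=myserver;Trusted_Connection=yes')
    cur = conn.cursor()
    cur.execute("SELECT name, log_reuse_wait_desc FROM sys.databases")
    for name, reason in cur.fetchall():
        print(name, reason)   # REPLICATION = log held for (supposedly) pending replication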


  • Ajax actionlinks straight from a DropDownList

    - by Ingó Vals
I've got a small link box on the side of my page that is rendered as a partial view. In it I have a DropDownList that should change the routing value of the links in the box, but I'm having difficulty doing so. My current plan is to call something similar to an Ajax.ActionLink to reload the partial view into its target element with a different parameter, based on the value of the dropdown selection. However, I'm having multiple problems with this; for example, as a novice in using drop-down lists, I have no idea how to get the selected value.

<%= Html.DropDownList("DropDownList1", new SelectList(Model, "ID", "Name"), "--Pick--", new { AutoPostBack = "true", onchange = "maybe something here" })%>

I tried putting the Sys.Mvc.AsyncHyperlink call into the onchange attribute, and that worked, except I don't know how to put in the route value for it:

Sys.Mvc.AsyncHyperlink.handleClick(this, new Sys.UI.DomEvent(event), { insertionMode: Sys.Mvc.InsertionMode.replace, updateTargetId: 'SmallMenu' }

Is there no straight Ajax drop-down list that fires events onchange? Any way this is possible? Later in the partial view I have the Ajax action links, but they need to have their ids updated by the value in the drop-down list; if I could do that some other way, I would appreciate a suggestion.


  • PyDev and Django: PyDev breaking Django shell?

    - by Rosarch
I've set up a new project, and populated it with simple models. (Essentially I'm following the tut.) When I run python manage.py shell on the command line, it works fine:

>python manage.py shell
Python 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> from mysite.myapp.models import School
>>> School.objects.all()
[]

Works great. Then, I try to do the same thing in Eclipse (using a Django project that is composed of the same files): right click on the mysite project, then Django > Shell with Django environment. This is the output from the PyDev Console:

>>> import sys; print('%s %s' % (sys.executable or sys.platform, sys.version))
C:\Python26\python.exe 2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)]
>>>
>>> from django.core import management;import mysite.settings as settings;management.setup_environ(settings)
'path\\to\\mysite'
>>> from mysite.myapp.models import School
>>> School.objects.all()
Traceback (most recent call last):
  File "<console>", line 1, in <module>
  File "C:\Python26\lib\site-packages\django\db\models\query.py", line 68, in __repr__
    data = list(self[:REPR_OUTPUT_SIZE + 1])
  File "C:\Python26\lib\site-packages\django\db\models\query.py", line 83, in __len__
    self._result_cache.extend(list(self._iter))
  File "C:\Python26\lib\site-packages\django\db\models\query.py", line 238, in iterator
    for row in self.query.results_iter():
  File "C:\Python26\lib\site-packages\django\db\models\sql\query.py", line 287, in results_iter
    for rows in self.execute_sql(MULTI):
  File "C:\Python26\lib\site-packages\django\db\models\sql\query.py", line 2368, in execute_sql
    cursor = self.connection.cursor()
  File "C:\Python26\lib\site-packages\django\db\backends\__init__.py", line 81, in cursor
    cursor = self._cursor()
  File "C:\Python26\lib\site-packages\django\db\backends\sqlite3\base.py", line 170, in _cursor
    self.connection = Database.connect(**kwargs)
OperationalError: unable to open database file

What am I doing wrong here?
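
One frequent cause of "unable to open database file" when the same project works from a terminal but not from an IDE is a relative sqlite path: it is resolved against the process's working directory, and Eclipse typically launches the shell from a different directory than manage.py does. A sketch of the usual fix in settings.py, assuming the old-style DATABASE_* keys that match the Django version in the traceback; 'dev.db' is a placeholder:

    import os

    PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

    DATABASE_ENGINE = 'sqlite3'
    DATABASE_NAME = os.path.join(PROJECT_ROOT, 'dev.db')  # absolute, so cwd no longer matters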


  • Telerik RadAlert back button cache problem

    - by Michael VS
Hi, I'm sure you have had this one before, so feel free to point me to something similar... I have server-side creation of a RadAlert window using the usual Sys.Application.remove_load and add_load procedure; however, the alert keeps popping up, as it seems to be cached when the user hits the back button after it has been activated. I have tried putting an onclick event on a button to clear the function using remove_load before it moves to the next page, but it still doesn't seem to clear it. It's used in validation, so it pops up if a user's input fails validation. If they then enter valid input, the page moves on to the next one. If they then use the back button, this is where it pops up again. Any ideas? Server side:

private void Page_Load(object sender, System.EventArgs e)
{
    if (!IsPostBack)
    {
        btnSearch.Attributes.Add("onclick", "Sys.Application.remove_load(f);");
    }
}

private void btnSearch_Click(object sender, System.EventArgs e)
{
    string radalertscript = "(function(){var f = function(){radalert('Welcome to RadWindow Prometheus!', 330, 210); Sys.Application.remove_load(f);};Sys.Application.add_load(f);})()";
    RadAjaxManager1.ResponseScripts.Add(radalertscript);
}

I've also tried using RadAjaxManager1.ResponseScripts.Clear(); before it moves on to the next page in the postback event.


  • QWebView not loading external resources

    - by Nick
Hi. I'm working on a kiosk web browser using Qt and PyQt4. QWebView seems to work quite well except for one quirk. If a URL fails to load for any reason, I want to redirect the user to a custom error page. I've done this using the loadFinished() signal to check the result, changing the URL to the custom page if necessary using QWebView.load(). However, any page I attempt to load here fails to pull in external resources like CSS or images. Using QWebView.load() to set the initial page at startup seems to work fine, and clicking any link on the custom error page will result in the destination page loading fine. It's just the error page that doesn't work. I'm really not sure where to go next. I've included the source for an app that will replicate the problem below. It takes a URL as a command line argument: a valid URL will display correctly, while a bad URL (e.g. DNS resolution fails) will redirect to Google, but with the logo missing.

import sys
from PyQt4 import QtGui, QtCore, QtWebKit

class MyWebView(QtWebKit.QWebView):
    def __init__(self, parent=None):
        QtWebKit.QWebView.__init__(self, parent)
        self.resize(800, 600)
        self.load(QtCore.QUrl(sys.argv[1]))
        self.connect(self, QtCore.SIGNAL('loadFinished(bool)'), self.checkLoadResult)

    def checkLoadResult(self, result):
        if (result == False):
            self.load(QtCore.QUrl('http://google.com'))

app = QtGui.QApplication(sys.argv)
main = MyWebView()
main.show()
sys.exit(app.exec_())

If anyone could offer some advice it would be greatly appreciated.
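
One variable worth eliminating is re-entrancy: if the error page itself fails (or finishes) and fires loadFinished again, load() ends up being called from inside its own handler. A guard-flag variant of the snippet above, offered as a sketch rather than a confirmed fix for the missing CSS/images:

    import sys
    from PyQt4 import QtGui, QtCore, QtWebKit

    class MyWebView(QtWebKit.QWebView):
        def __init__(self, parent=None):
            QtWebKit.QWebView.__init__(self, parent)
            self._showing_error = False
            self.resize(800, 600)
            self.loadFinished.connect(self.check_load_result)  # new-style connect
            self.load(QtCore.QUrl(sys.argv[1]))

        def check_load_result(self, ok):
            if not ok and not self._showing_error:
                self._showing_error = True             # redirect only once
                self.load(QtCore.QUrl('http://google.com'))
            else:
                self._showing_error = False

    app = QtGui.QApplication(sys.argv)
    main = MyWebView()
    main.show()
    sys.exit(app.exec_())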


  • Find does not work in Expect Send command

    - by Sharjeel Sayed
I run this bash command to display the contents of somefile.cf in a WebLogic domain directory:

find $(/usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print $2}' | sed -e 's/weblogic.policy//' -e 's/security\///' -e 's/dep\///' | awk -F'/' '{print "/"$2"/"$3"/"$4"/somefile.cf"}' | sort | uniq) 2> /dev/null -exec ls {} \; -exec cat {} \;

I tried incorporating this into an expect script and also escaped some special characters that would throw an error in expect, but it's still not working:

send "echo ; echo 'Weblogic somefile.cf:' ; find \$(/usr/ucb/ps auwwx | grep weblogic | tr ' ' '\n' | grep security.policy | grep domain | awk -F'=' '{print \$2}' | sed -e 's/weblogic.policy//' -e 's/security\\///' -e 's/dep\\///' | awk -F'/' '{print "/"\$2"/"\$3"/"\$4"/somefile.cf"}' | sort | uniq) 2> /dev/null -exec ls {} \\; -exec cat {} \\; ; echo\r"

I guess it needs some more escaping of special characters, or maybe I didn't escape the existing ones correctly. Any help would be appreciated.
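
A way to sidestep the escaping battle entirely is to park the pipeline in a small shell script on the target host, so the automation side only has to send the script's name. Sketched here with Python's pexpect instead of Tcl expect, purely for illustration; user@host, the prompt pattern, and the script path are placeholders:

    import pexpect  # third-party module

    child = pexpect.spawn('ssh user@host')
    child.expect(r'\$ ')
    # /tmp/show_somefile.sh holds the find pipeline verbatim,
    # so nothing needs Tcl-level escaping at all
    child.sendline('bash /tmp/show_somefile.sh')
    child.expect(r'\$ ')
    print(child.before)

The same trick works in plain expect: the send line shrinks to the script invocation, and all the quoting stays in bash where it was already correct.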


  • Give back full control to a user on a disk from another computer

    - by Foghorn
I have my friend's hard drive mounted externally. After messing with the permissions with TAKEOWN so I could fix some viruses, I have full control over their drive. The problem is, now it's stuck in an "autochk not found" reboot sequence. I think the problem is that the boot sector is now invisible to the system. So my question is: how can I use icacls to give back full ownership when the user I am giving it to is not on my machine? I ran the TAKEOWN command from my Windows 7 laptop; their machine is a Windows XP Professional box with three partitions, and I only altered the one that has the boot sector. Here are the permissions that icacls shows (where my computer is %System%, my username is ME, and the drive is E:\):

C:\Users\ME>icacls E:\*
E:\$RECYCLE.BIN %System%\ME:(OI)(CI)(F)
                Mandatory Label\Low Mandatory Level:(OI)(CI)(IO)(NW)
E:\ALLDATAW %System%\ME:(I)(OI)(CI)(F)
E:\alrt_200.data %System%\ME:(OI)(CI)(F)
E:\AUTOEXEC.BAT %System%\ME:(OI)(CI)(F)
E:\AZ Commercial %System%\ME:(I)(OI)(CI)(F)
E:\boot.ini %System%\ME:(OI)(CI)(F)
E:\Config.Msi %System%\ME:(I)(OI)(CI)(F)
E:\CONFIG.SYS %System%\ME:(OI)(CI)(F)
E:\Documents and Settings %System%\ME:(I)(OI)(CI)(F)
E:\IO.SYS %System%\ME:(OI)(CI)(F)
E:\Mitchell1 %System%\ME:(I)(OI)(CI)(F)
E:\MSDOS.SYS %System%\ME:(OI)(CI)(F)
E:\MSOCache %System%\ME:(I)(OI)(CI)(F)
E:\NTDClient.log %System%\ME:(OI)(CI)(F)
E:\NTDETECT.COM %System%\ME:(OI)(CI)(F)
E:\ntldr %System%\ME:(OI)(CI)(F)
E:\pagefile.sys %System%\ME:(OI)(CI)(F)
E:\Program Files %System%\ME:(I)(OI)(CI)(F)
E:\RECYCLER %System%\ME:(I)(OI)(CI)(F)
E:\RHDSetup.log %System%\ME:(OI)(CI)(F)
E:\System Volume Information %System%\ME:(I)(OI)(CI)(F)
E:\WINDOWS %System%\ME:(I)(OI)(CI)(F)

Successfully processed 22 files; Failed processing 0 files

C:\Users\ME>
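
Since the XP account cannot be resolved by name from the Windows 7 laptop, one avenue is to assign ownership by SID instead: icacls accepts SIDs prefixed with *, and S-1-5-32-544 is the well-known SID of the local Administrators group, which resolves on whichever machine the disk ends up in. A hedged sketch driving that from Python; untested against this exact situation, so treat it as a starting point:

    import subprocess

    # /setowner with a well-known SID; /T recurses, /C continues past errors.
    # Run from an elevated prompt with the drive still mounted as E:
    subprocess.check_call([
        'icacls', 'E:\\', '/setowner', '*S-1-5-32-544', '/T', '/C'
    ])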


  • Can't build pyxpcom on OS X 10.6

    - by Gj
    I've been following these instructions at https://developer.mozilla.org/en/Building_PyXPCOM but getting this: $ make make export make[2]: Nothing to be done for `export'. make[4]: Nothing to be done for `export'. make[4]: Nothing to be done for `export'. /opt/local/bin/python2.5 ../../../src/config/nsinstall.py -L /usr/local/pyxpcom/build/xpcom/src -m 644 ../../../src/xpcom/src/PyXPCOM.h ../../dist/include make[3]: Nothing to be done for `export'. /opt/local/bin/python2.5 ../../../../src/config/nsinstall.py -D ../../../dist/idl /opt/local/bin/python2.5 ../../../../src/config/nsinstall.py -D ../../../dist/idl make[4]: *** No rule to make target `_xpidlgen/py_test_component.h', needed by `export'. Stop. make[3]: *** [export] Error 2 make[2]: *** [export] Error 2 make[1]: *** [export] Error 2 make: *** [default] Error 2 Any ideas? An interesting anomaly is that despite me setting the PYTHON env variable to Python 2.6, the configure and make both seem to go after the 2.5... Thanks for any advice! PS here's the configure output: $ ../src/configure --with-libxul-sdk=/Users/me/xulrunner-sdk/ loading cache ./config.cache checking host system type... i386-apple-darwin10.3.0 checking target system type... i386-apple-darwin10.3.0 checking build system type... i386-apple-darwin10.3.0 checking for mawk... (cached) gawk checking for perl5... (cached) /opt/local/bin/perl5 checking for gcc... (cached) gcc checking whether the C compiler (gcc ) works... yes checking whether the C compiler (gcc ) is a cross-compiler... no checking whether we are using GNU C... (cached) yes checking whether gcc accepts -g... (cached) yes checking for c++... (cached) c++ checking whether the C++ compiler (c++ ) works... yes checking whether the C++ compiler (c++ ) is a cross-compiler... no checking whether we are using GNU C++... (cached) yes checking whether c++ accepts -g... (cached) yes checking for ranlib... (cached) ranlib checking for as... (cached) /usr/bin/as checking for ar... (cached) ar checking for ld... (cached) ld checking for strip... (cached) strip checking for windres... no checking whether gcc and cc understand -c and -o together... (cached) yes checking how to run the C preprocessor... (cached) gcc -E checking how to run the C++ preprocessor... (cached) c++ -E checking for a BSD compatible install... (cached) /usr/bin/install -c checking whether ln -s works... (cached) yes checking for minimum required perl version >= 5.006... 5.008009 checking for full perl installation... yes checking for /opt/local/bin/python... (cached) /opt/local/bin/python2.5 checking for doxygen... (cached) : checking for whoami... (cached) /usr/bin/whoami checking for autoconf... (cached) /opt/local/bin/autoconf checking for unzip... (cached) /usr/bin/unzip checking for zip... (cached) /usr/bin/zip checking for makedepend... (cached) /opt/local/bin/makedepend checking for xargs... (cached) /usr/bin/xargs checking for pbbuild... (cached) /usr/bin/xcodebuild checking for sdp... (cached) /usr/bin/sdp checking for gmake... (cached) /opt/local/bin/gmake checking for X... (cached) no checking whether the compiler supports -Wno-invalid-offsetof... yes checking whether ld has archive extraction flags... (cached) no checking that static assertion macros used in autoconf tests work... (cached) yes checking for 64-bit OS... yes checking for minimum required Python version >= 2.4... yes checking for -dead_strip option to ld... yes checking for ANSI C header files... (cached) yes checking for working const... 
(cached) yes checking for mode_t... (cached) yes checking for off_t... (cached) yes checking for pid_t... (cached) yes checking for size_t... (cached) yes checking for st_blksize in struct stat... (cached) yes checking for siginfo_t... (cached) yes checking for int16_t... (cached) yes checking for int32_t... (cached) yes checking for int64_t... (cached) yes checking for int64... (cached) no checking for uint... (cached) yes checking for uint_t... (cached) no checking for uint16_t... (cached) no checking for uname.domainname... (cached) no checking for uname.__domainname... (cached) no checking for usable char16_t (2 bytes, unsigned)... (cached) no checking for usable wchar_t (2 bytes, unsigned)... (cached) no checking for compiler -fshort-wchar option... (cached) yes checking for visibility(hidden) attribute... (cached) yes checking for visibility(default) attribute... (cached) yes checking for visibility pragma support... (cached) yes checking For gcc visibility bug with class-level attributes (GCC bug 26905)... (cached) yes checking For x86_64 gcc visibility bug with builtins (GCC bug 20297)... (cached) no checking for dirent.h that defines DIR... (cached) yes checking for opendir in -ldir... (cached) no checking for sys/byteorder.h... (cached) no checking for compat.h... (cached) no checking for getopt.h... (cached) yes checking for sys/bitypes.h... (cached) no checking for memory.h... (cached) yes checking for unistd.h... (cached) yes checking for gnu/libc-version.h... (cached) no checking for nl_types.h... (cached) yes checking for malloc.h... (cached) no checking for X11/XKBlib.h... (cached) yes checking for io.h... (cached) no checking for sys/statvfs.h... (cached) yes checking for sys/statfs.h... (cached) no checking for sys/vfs.h... (cached) no checking for sys/mount.h... (cached) yes checking for sys/quota.h... (cached) yes checking for mmintrin.h... (cached) yes checking for new... (cached) yes checking for sys/cdefs.h... (cached) yes checking for gethostbyname_r in -lc_r... (cached) no checking for dladdr... (cached) yes checking for socket in -lsocket... (cached) no checking whether mmap() sees write()s... yes checking whether gcc needs -traditional... (cached) no checking for 8-bit clean memcmp... (cached) yes checking for random... (cached) yes checking for strerror... (cached) yes checking for lchown... (cached) yes checking for fchmod... (cached) yes checking for snprintf... (cached) yes checking for statvfs... (cached) yes checking for memmove... (cached) yes checking for rint... (cached) yes checking for stat64... (cached) yes checking for lstat64... (cached) yes checking for truncate64... (cached) no checking for statvfs64... (cached) no checking for setbuf... (cached) yes checking for isatty... (cached) yes checking for flockfile... (cached) yes checking for getpagesize... (cached) yes checking for localtime_r... (cached) yes checking for strtok_r... (cached) yes checking for wcrtomb... (cached) yes checking for mbrtowc... (cached) yes checking for res_ninit()... (cached) no checking for gnu_get_libc_version()... (cached) no ../src/configure: line 9881: AM_LANGINFO_CODESET: command not found checking for an implementation of va_copy()... (cached) yes checking for an implementation of __va_copy()... (cached) yes checking whether va_lists can be copied by value... (cached) no checking for C++ exceptions flag... (cached) -fno-exceptions checking for gcc 3.0 ABI... (cached) yes checking for C++ "explicit" keyword... (cached) yes checking for C++ "typename" keyword... 
(cached) yes checking for modern C++ template specialization syntax support... (cached) yes checking whether partial template specialization works... (cached) yes checking whether operators must be re-defined for templates derived from templates... (cached) no checking whether we need to cast a derived template to pass as its base class... (cached) no checking whether the compiler can resolve const ambiguities for templates... (cached) yes checking whether the C++ "using" keyword can change access... (cached) yes checking whether the C++ "using" keyword resolves ambiguity... (cached) yes checking for "std::" namespace... (cached) yes checking whether standard template operator!=() is ambiguous... (cached) unambiguous checking for C++ reinterpret_cast... (cached) yes checking for C++ dynamic_cast to void*... (cached) yes checking whether C++ requires implementation of unused virtual methods... (cached) yes checking for trouble comparing to zero near std::operator!=()... (cached) no checking for LC_MESSAGES... (cached) yes checking for tar archiver... checking for gnutar... (cached) gnutar gnutar checking for wget... checking for wget... (cached) wget wget checking for valid optimization flags... yes checking for gcc -pipe support... yes checking whether compiler supports -Wno-long-long... yes checking whether C compiler supports -fprofile-generate... yes checking for correct temporary object destruction order... yes checking for correct overload resolution with const and templates... no Building Python extensions using python-2.5 from /opt/local/Library/Frameworks/Python.framework/Versions/2.5 creating ./config.status creating config/autoconf.mk creating Makefile creating xpcom/Makefile creating xpcom/src/Makefile creating xpcom/src/loader/Makefile creating xpcom/src/module/Makefile creating xpcom/components/Makefile creating xpcom/test/Makefile creating xpcom/test/test_component/Makefile creating dom/Makefile creating dom/src/Makefile creating dom/test/Makefile creating dom/test/pyxultest/Makefile creating dom/nsdom/Makefile creating dom/nsdom/test/Makefile


  • Java Runtime command line Process

    - by AEIOU
I have a class with the following code:

Process process = null;
try {
    process = Runtime.getRuntime().exec("gs -version");
    System.out.println(process.toString());
} catch (Exception e1) {
    e1.printStackTrace();
} finally {
    process.destroy();
}

I can run "gs -version" on my command line and get:

GPL Ghostscript 8.71 (2010-02-10)
Copyright (C) 2010 Artifex Software, Inc. All rights reserved.

So I know I have the path set somewhere, at least. I can run that class from the command line and it works. But when I run it using Eclipse I get the following error:

java.io.IOException: Cannot run program "gs": error=2, No such file or directory
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
    at java.lang.Runtime.exec(Runtime.java:593)
    at java.lang.Runtime.exec(Runtime.java:431)
    at java.lang.Runtime.exec(Runtime.java:328)
    at clris.batchdownloader.TestJDBC.main(TestJDBC.java:17)
Caused by: java.io.IOException: error=2, No such file or directory
    at java.lang.UNIXProcess.forkAndExec(Native Method)
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:53)
    at java.lang.ProcessImpl.start(ProcessImpl.java:91)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
    ... 4 more

In my program, I can replace "gs" with "java", "mvn", or "svn" and it works. But "gs" does not. It's only in Eclipse that I have this problem. Any ideas on what I need to do to resolve this issue?


  • Testing warnings with doctest

    - by Eli Courtwright
I'd like to use doctests to test the presence of certain warnings. For example, suppose I have the following module:

from warnings import warn

class Foo(object):
    """
    Instantiating Foo always gives a warning:

    >>> foo = Foo()
    testdocs.py:14: UserWarning: Boo!
      warn("Boo!", UserWarning)
    >>>
    """
    def __init__(self):
        warn("Boo!", UserWarning)

If I run python -m doctest testdocs.py to run the doctest in my class and make sure that the warning is printed, I get:

testdocs.py:14: UserWarning: Boo!
  warn("Boo!", UserWarning)
**********************************************************************
File "testdocs.py", line 7, in testdocs.Foo
Failed example:
    foo = Foo()
Expected:
    testdocs.py:14: UserWarning: Boo!
      warn("Boo!", UserWarning)
Got nothing
**********************************************************************
1 items had failures:
   1 of   1 in testdocs.Foo
***Test Failed*** 1 failures.

It looks like the warning is getting printed but not captured or noticed by doctest. I'm guessing that this is because warnings are printed to sys.stderr instead of sys.stdout. But this happens even when I say sys.stderr = sys.stdout at the end of my module. So is there any way to use doctests to test for warnings? I can find no mention of this one way or the other in the documentation or in my Google searching.
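
One way to keep the whole test inside the doctest, without fighting over stderr, is to record the warning instead of letting it print: warnings.catch_warnings(record=True) (Python 2.6+) collects emitted warnings into a list, and simplefilter("always") defeats the default once-per-location suppression. A sketch of the class above rewritten that way:

    from warnings import warn

    class Foo(object):
        """
        Instantiating Foo always gives a warning:

        >>> import warnings
        >>> with warnings.catch_warnings(record=True) as caught:
        ...     warnings.simplefilter("always")
        ...     foo = Foo()
        >>> len(caught)
        1
        >>> caught[0].category is UserWarning
        True
        >>> str(caught[0].message)
        'Boo!'
        """
        def __init__(self):
            warn("Boo!", UserWarning)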


  • Mercurial 1.5 pager on Windows

    - by alexandrul
I'm trying to set the pager used by Mercurial, but the output is empty, even if I specify the command in the [pager] section or as the PAGER environment variable. I noticed that the command provided is launched via cmd.exe. Is this the cause of the empty output, and if so, what is the right syntax?

Environment: Mercurial 1.5, Mercurial 1.4.3

hgrc:

[extensions]
pager =

[pager]
pager = d:\tools\less\less.exe

Sample command lines (from Process Explorer):

hg diff
c:\windows\system32\cmd.exe /c "d:\tools\less\less.exe 2> NUL:"
d:\tools\less\less.exe

UPDATE

In pager.py, by replacing:

sys.stderr = sys.stdout = util.popen(p, "wb")

with

sys.stderr = sys.stdout = subprocess.Popen(p, stdin = subprocess.PIPE, shell=False).stdin

I managed to obtain the desired output for hg status and hg diff. BUT, I'm sure it's wrong (or at least incomplete), and I have no control over the pager app (less.exe): the output is shown in the cmd.exe window, I can see the less prompt (:), but any further input is fed into cmd.exe. It seems that the pager app is still active in the background: after typing exit in the cmd.exe window, I regain control over the pager app and can terminate it normally. Also, it makes no difference what I choose as the pager app (more behaves the same).

UPDATE 2

Issue1677 - [PATCH] pager for "hg help" output on windows


  • python-xmpp and looping through list of recipients to receive and IM message

    - by David
I can't figure out the problem and want some input as to whether my Python code is incorrect, or whether this is an issue or design limitation of the Python XMPP library. I'm new to Python, by the way. Here are snippets of the code in question. What I'd like to do is read in a text file of IM recipients, one recipient per line, in XMPP/Jabber ID format. This is read into a Python list variable. I then instantiate an XMPP client session and loop through the list of recipients, sending a message to each one. Then I sleep some time and repeat the test. This is for load testing the IM clients of the recipients as well as the IM server. There is code to alternately handle the case of taking only one recipient from command-line input instead of from a file. What ends up happening is that Python does iterate/loop through the list, but only the last recipient in the list receives the message. (Switch the order of recipients to verify.) It kind of looks like the Python XMPP library is not sending it out right, or I'm missing a step with the library calls, because the debug print statements at runtime indicate the looping works correctly.

recipient = ""
delay = 60
useFile = False
recList = []
...
elif (sys.argv[i] == '-t'):
    recipient = sys.argv[i+1]
    useFile = False
elif (sys.argv[i] == '-tf'):
    fil = open(sys.argv[i+1], 'r')
    recList = fil.readlines()
    fil.close()
    useFile = True
...
# disable debug msgs
cnx = xmpp.Client(svr, debug=[])
cnx.connect(server=(svr, 5223))
cnx.auth(user, pwd, 'imbot')
cnx.sendInitPresence()
while (True):
    if useFile:
        for listUser in recList:
            cnx.send(xmpp.Message(listUser, msg+str(msgCounter)))
            print "sending to "+listUser+" msg = "+msg+str(msgCounter)
    else:
        cnx.send(xmpp.Message(recipient, msg+str(msgCounter)))
    msgCounter += 1
    time.sleep(delay)
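
Before suspecting the library, one detail worth ruling out: readlines() keeps the trailing newline on every line except, usually, the last one. That means every JID but the final one is handed to xmpp.Message as 'user@host\n', which is not a valid JID, and that would produce exactly the observed "only the last recipient receives it" behaviour. A sketch of the -tf branch with stripping; the filename stands in for sys.argv[i+1]:

    fil = open('recipients.txt', 'r')
    recList = [line.strip() for line in fil if line.strip()]  # drop '\n' and blank lines
    fil.close()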


  • "select * from table" vs "select colA,colB,etc from table" interesting behaviour in SqlServer2005

    - by kristof
Apologies for a lengthy post, but I needed to include some code to illustrate the problem. Inspired by the question "What is the reason not to use select *?" posted a few minutes ago, I decided to point out some observations of select * behaviour that I noticed some time ago. So let the code speak for itself:

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[starTest]') AND type in (N'U'))
DROP TABLE [dbo].[starTest]
CREATE TABLE [dbo].[starTest](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [A] [varchar](50) NULL,
    [B] [varchar](50) NULL,
    [C] [varchar](50) NULL
) ON [PRIMARY]
GO

insert into dbo.starTest(a,b,c)
select 'a1','b1','c1'
union all
select 'a2','b2','c2'
union all
select 'a3','b3','c3'
go

IF EXISTS (SELECT * FROM sys.views WHERE object_id = OBJECT_ID(N'[dbo].[vStartest]'))
DROP VIEW [dbo].[vStartest]
go
create view dbo.vStartest as
select * from dbo.starTest
go

IF EXISTS (SELECT * FROM sys.views WHERE object_id = OBJECT_ID(N'[dbo].[vExplicittest]'))
DROP VIEW [dbo].[vExplicittest]
go
create view dbo.[vExplicittest] as
select a,b,c from dbo.starTest
go

select a,b,c from dbo.vStartest
select a,b,c from dbo.vExplicitTest

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[starTest]') AND type in (N'U'))
DROP TABLE [dbo].[starTest]
CREATE TABLE [dbo].[starTest](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [A] [varchar](50) NULL,
    [B] [varchar](50) NULL,
    [D] [varchar](50) NULL,
    [C] [varchar](50) NULL
) ON [PRIMARY]
GO

insert into dbo.starTest(a,b,d,c)
select 'a1','b1','d1','c1'
union all
select 'a2','b2','d2','c2'
union all
select 'a3','b3','d3','c3'

select a,b,c from dbo.vExplicittest
select a,b,c from dbo.vStartest

If you execute the script and look at the results of the last two select statements, you will see the following:

select a,b,c from dbo.vExplicittest
a1 b1 c1
a2 b2 c2
a3 b3 c3

select a,b,c from dbo.vStartest
a1 b1 d1
a2 b2 d2
a3 b3 d3

As you can see in the results of select a,b,c from dbo.vStartest, the data of column c has been replaced with the data from column d. I believe this is related to the way the views are compiled; my understanding is that the columns are mapped by column index (1,2,3,4) as opposed to by name. I thought I would post it as a warning for people using select * in their SQL and experiencing unexpected behaviour. Note: if you rebuild the view that uses select * each time after you modify the table, it will work as expected.
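
The closing note (rebuild the view after altering the table) matches the documented remedy: a select * view's column list is bound once, at creation time, and sp_refreshview rebinds the metadata without dropping the view. A small sketch running it through pyodbc; the connection string is a placeholder:

    import pyodbc

    conn = pyodbc.connect('DRIVER={SQL Server};SERVER=myserver;'
                          'DATABASE=mydb;Trusted_Connection=yes',
                          autocommit=True)
    # Re-expands the view's select * against the table's current column list
    conn.execute("EXEC sp_refreshview 'dbo.vStartest'")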

