Search Results

Search found 9750 results on 390 pages for 'sys debug'.


  • Configuration problems with django and mod_wsgi

    - by Jimbo
    Hi, I'm having problems getting Django to work on Apache 2.2 with mod_wsgi. Django is installed, and so is mod_wsgi. I can even see a 404 page when accessing the path, and I can log in to the Django admin. But when I try to install the tagging module I get the following error:

        Traceback (most recent call last):
          File "setup.py", line 49, in <module>
            version_tuple = __import__('tagging').VERSION
          File "/home/jim/django-tagging/tagging/__init__.py", line 3, in <module>
            from tagging.managers import ModelTaggedItemManager, TagDescriptor
          File "/home/jim/django-tagging/tagging/managers.py", line 5, in <module>
            from django.contrib.contenttypes.models import ContentType
          File "/usr/lib/python2.5/site-packages/django/contrib/contenttypes/models.py", line 1, in <module>
            from django.db import models
          File "/usr/lib/python2.5/site-packages/django/db/__init__.py", line 10, in <module>
            if not settings.DATABASE_ENGINE:
          File "/usr/lib/python2.5/site-packages/django/utils/functional.py", line 269, in __getattr__
            self._setup()
          File "/usr/lib/python2.5/site-packages/django/conf/__init__.py", line 40, in _setup
            self._wrapped = Settings(settings_module)
          File "/usr/lib/python2.5/site-packages/django/conf/__init__.py", line 75, in __init__
            raise ImportError, "Could not import settings '%s' (Is it on sys.path? Does it have syntax errors?): %s" % (self.SETTINGS_MODULE, e)
        ImportError: Could not import settings 'mysite.settings' (Is it on sys.path? Does it have syntax errors?): No module named mysite.settings

    My httpd.conf:

        Alias /media/ /home/jim/django/mysite/media/
        <Directory /home/jim/django/mysite/media>
            Order deny,allow
            Allow from all
        </Directory>

        Alias /admin/media/ "/usr/lib/python2.5/site-packages/django/contrib/admin/media/"
        <Directory "/usr/lib/python2.5/site-packages/django/contrib/admin/media/">
            Order allow,deny
            Allow from all
        </Directory>

        WSGIScriptAlias /dj /home/jim/django/mysite/apache/django.wsgi
        <Directory /home/jim/django/mysite/apache>
            Order deny,allow
            Allow from all
        </Directory>

    My django.wsgi:

        import sys, os
        sys.path.append('/home/jim/django')
        sys.path.append('/home/jim/django/mysite')
        os.chdir('/home/jim/django/mysite')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    I have been trying to get this to work for a few days and have read several blogs and answers here on SO, but nothing has worked.
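
    Not part of the question, but a quick way to pin down this kind of ImportError is to replay the WSGI script's path setup in a throwaway script and attempt the same import with the same interpreter Apache uses (python2.5 here). The paths below are the ones from the django.wsgi above:

        # settings_check.py - diagnostic sketch, not from the question
        import sys
        import traceback

        # Reproduce the sys.path additions made by django.wsgi
        sys.path.append('/home/jim/django')          # parent of the 'mysite' package
        sys.path.append('/home/jim/django/mysite')

        try:
            import mysite.settings
            print('mysite.settings imported OK')
        except ImportError:
            traceback.print_exc()
            # Common causes: /home/jim/django/mysite has no __init__.py, or the
            # process that raised the error (e.g. "python setup.py" run for
            # django-tagging) inherited DJANGO_SETTINGS_MODULE from the shell
            # without /home/jim/django ever being added to its sys.path.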

    Read the article

  • Bidirectional FIFO

    - by nunos
    I would like to implement a bidirectional FIFO. The code below works, but it only communicates in one direction, so it is not a bidirectional FIFO. I have searched all over the internet but haven't found any good example... How can I do that? Thanks,

    WRITER.c:

        #include <stdio.h>
        #include <unistd.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <fcntl.h>

        #define MAXLINE 4096
        #define READ 0
        #define WRITE 1

        int main (int argc, char** argv)
        {
            int a, b, fd;
            do {
                fd = open("/tmp/myfifo", O_WRONLY);
                if (fd == -1) sleep(1);
            } while (fd == -1);
            while (1) {
                scanf("%d", &a);
                scanf("%d", &b);
                write(fd, &a, sizeof(int));
                write(fd, &b, sizeof(int));
                if (a == 0 && b == 0) {
                    break;
                }
            }
            close(fd);
            return 0;
        }

    READER.c:

        #include <stdio.h>
        #include <unistd.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <fcntl.h>
        #include <sys/stat.h>

        #define MAXLINE 4096
        #define READ 0
        #define WRITE 1

        int main(void)
        {
            int n1, n2;
            int fd;
            mkfifo("/tmp/myfifo", 0660);
            fd = open("/tmp/myfifo", O_RDONLY);
            while (read(fd, &n1, sizeof(int))) {
                read(fd, &n2, sizeof(int));
                if (n1 == 0 && n2 == 0) {
                    break;
                }
                printf("soma: %d\n", n1 + n2);
                printf("diferenca: %d\n", n1 - n2);
                printf("divisao: %f\n", n1 / (double)n2);
                printf("multiplicacao: %d\n", n1 * n2);
            }
            close(fd);
            return 0;
        }
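
    A note on the usual pattern (not from the question): a single named pipe only gives one direction, so the standard approach is two FIFOs, one per direction, with both processes opening them in the same order so the blocking open() calls rendezvous instead of deadlocking. Here is a minimal sketch of the idea in Python rather than C to keep it short; the /tmp paths and the int-pair framing are invented for the example:

        # bidir_fifo.py - sketch: two named FIFOs, one per direction.
        # Run "python bidir_fifo.py serve" in one terminal, then
        # "python bidir_fifo.py client" in another.
        import os
        import struct
        import sys

        REQ = '/tmp/fifo_req'   # client -> server
        REP = '/tmp/fifo_rep'   # server -> client

        def ensure_fifos():
            for path in (REQ, REP):
                if not os.path.exists(path):
                    os.mkfifo(path, 0o660)

        def serve():
            ensure_fifos()
            # Open order matters: both sides open REQ before REP, otherwise
            # the two blocking opens wait for each other forever.
            with open(REQ, 'rb') as rq, open(REP, 'wb') as rp:
                while True:
                    data = rq.read(8)              # two 4-byte ints per request
                    if len(data) < 8:
                        break
                    a, b = struct.unpack('ii', data)
                    if a == 0 and b == 0:
                        break                      # sentinel: stop serving
                    rp.write(struct.pack('i', a + b))  # reply on the second FIFO
                    rp.flush()

        def client():
            ensure_fifos()
            with open(REQ, 'wb') as rq, open(REP, 'rb') as rp:
                for a, b in [(2, 3), (10, 4)]:
                    rq.write(struct.pack('ii', a, b))
                    rq.flush()
                    (s,) = struct.unpack('i', rp.read(4))
                    print('sum: %d' % s)
                rq.write(struct.pack('ii', 0, 0))  # tell the server to stop

        if __name__ == '__main__':
            serve() if sys.argv[1:] == ['serve'] else client()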

    Read the article

  • Memory efficient int-int dict in Python

    - by Bolo
    Hi, I need a memory-efficient int-int dict in Python that would support the following operations in O(log n) time:

        d[k] = v  # replace if present
        v = d[k]  # None or a negative number if not present

    I need to hold ~250M pairs, so it really has to be tight. Do you happen to know a suitable implementation (Python 2.7)?

    EDIT: Removed impossible requirement and other nonsense. Thanks, Craig and Kylotan! To rephrase: here's a trivial int-int dictionary with 1M pairs:

        >>> import random, sys
        >>> from guppy import hpy
        >>> h = hpy()
        >>> h.setrelheap()
        >>> d = {}
        >>> for _ in xrange(1000000):
        ...     d[random.randint(0, sys.maxint)] = random.randint(0, sys.maxint)
        ...
        >>> h.heap()
        Partition of a set of 1999530 objects. Total size = 49161112 bytes.
         Index    Count    %      Size    %  Cumulative    %  Kind (class / dict of class)
             0        1    0  25165960   51    25165960   51  dict (no owner)
             1  1999521  100  23994252   49    49160212  100  int

    On average, a pair of integers uses 49 bytes. Here's an array of 2M integers:

        >>> import array, random, sys
        >>> from guppy import hpy
        >>> h = hpy()
        >>> h.setrelheap()
        >>> a = array.array('i')
        >>> for _ in xrange(2000000):
        ...     a.append(random.randint(0, sys.maxint))
        ...
        >>> h.heap()
        Partition of a set of 14 objects. Total size = 8001108 bytes.
         Index    Count    %      Size    %  Cumulative    %  Kind (class / dict of class)
             0        1    7   8000028  100     8000028  100  array.array

    On average, a pair of integers uses 8 bytes. I accept that 8 bytes/pair in a dictionary is rather hard to achieve in general. Rephrased question: is there a memory-efficient implementation of an int-int dictionary that uses considerably less than 49 bytes/pair?
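
    One classic way to approach the 8-bytes-per-pair target (a sketch of the idea, not a known library): keep keys and values in two parallel sorted arrays and binary-search the keys. Lookup and replace-if-present are O(log n), but this assumes the pairs can be bulk-loaded once, since inserting a brand-new key into a sorted array costs O(n):

        # Sorted-array "dict": roughly array-like footprint per int-int pair.
        import array
        import bisect

        class IntIntMap(object):
            def __init__(self, pairs):
                pairs = sorted(pairs)                      # bulk load, sort once
                self._keys = array.array('l', (k for k, _ in pairs))
                self._vals = array.array('l', (v for _, v in pairs))

            def get(self, key, default=-1):
                i = bisect.bisect_left(self._keys, key)    # O(log n)
                if i < len(self._keys) and self._keys[i] == key:
                    return self._vals[i]
                return default                             # "a negative number if not present"

            def replace(self, key, value):
                i = bisect.bisect_left(self._keys, key)
                if i < len(self._keys) and self._keys[i] == key:
                    self._vals[i] = value                  # replace-if-present, O(log n)
                    return True
                return False                               # a brand-new key would need an O(n) insert

        m = IntIntMap([(7, 70), (3, 30), (12, 120)])
        assert m.get(7) == 70 and m.get(5) == -1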

    Read the article

  • Searching for duplicate records within a text file where the duplicate is determined by only two fields

    - by plg
    First, Python newbie; be patient/kind. Next, once a month I receive a large text file (think 7 million records) to test for duplicate values. This is catalog information. I get 7 fields, but the two I'm interested in are a supplier code and a full orderable part number. To determine if the record is duplicated, I compress all special characters from the part number (except . and #) and create a compressed part number. The test for duplicates becomes the supplier code and compressed part number combination. This part is fairly straightforward. Currently, I am just copying the original file with 2 new columns (compressed part and duplicate indicator). If the part is a duplicate, I put a "YES" in the last field. Now that this is done, I want to be able to go back (or better yet, do it at the same time) to get the previous record where there was a supplier code/compressed part number match. So far, my code looks like this:

        # Compress Full Part to a Compressed Part and check for duplicates
        # on the Supplier Code / Compressed Part combination
        import sys
        import re
        import time

        start = time.time()

        try:
            file1 = open("C:\Accounting\May Accounting\May.txt", "r")
        except IOError:
            print >> sys.stderr, "Cannot Open Read File"
            sys.exit(1)
        try:
            file2 = open(file1.name[0:len(file1.name)-4] + "_" + "COMPRESSPN.txt", "a")
        except IOError:
            print >> sys.stderr, "Cannot Open Write File"
            sys.exit(1)

        hdrList = "CIGSUPPLIER|FULL_PART|PART_STATUS|ALIAS_FLAG|ACQUISITION_FLAG|COMPRESSED_PART|DUPLICATE_INDICATOR"
        file2.write(hdrList + chr(10))

        lines_seen = set()
        affirm = "YES"
        records = file1.readlines()
        for record in records:
            fields = record.split(chr(124))
            if fields[0] == "CIGSupplier":
                continue  # If the incoming file has a header line, skip it
            file2.write(fields[0] + "|")  # Supplier Code
            file2.write(fields[1] + "|")  # Full_Part
            file2.write(fields[2] + "|")  # Part Status
            file2.write(fields[3] + "|")  # Alias Flag
            file2.write(re.sub("[$\r\n]", "", fields[4]) + "|")  # Acquisition Flag
            file2.write(re.sub("[^0-9a-zA-Z.#]", "", fields[1]) + "|")  # Compressed_Part
            dupechk = fields[0] + "|" + re.sub("[^0-9a-zA-Z.#]", "", fields[1])
            if dupechk not in lines_seen:
                file2.write(chr(10))
                lines_seen.add(dupechk)
            else:
                file2.write(affirm + chr(10))

        print "it took", time.time() - start, "seconds."

        file2.close()
        file1.close()

    It runs in less than 6 minutes, so I am happy with this part, even if it is not elegant. Right now, when I get my results, I import them into Access and do a self join to locate the duplicates. Loading, querying, and exporting a file this size in Access takes around an hour, so I would like to be able to export the matched duplicates to another text file or an Excel file. Confusing enough? Thanks.
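
    To also capture the earlier record when a duplicate shows up (one possible approach, with hypothetical file names, not the poster's code): replace the set with a dict that maps the supplier/compressed-part key to the first record seen, and write both records out together on a match. For 7 million records this holds every record in memory, so it trades RAM for the Access round-trip:

        import re

        def find_duplicate_pairs(in_path, out_path):
            # Map supplier|compressed-part -> first record seen with that key.
            first_seen = {}
            with open(in_path, "r") as src, open(out_path, "w") as dupes:
                for record in src:
                    fields = record.rstrip("\r\n").split("|")
                    if fields[0] == "CIGSupplier":
                        continue  # header line
                    key = fields[0] + "|" + re.sub("[^0-9a-zA-Z.#]", "", fields[1])
                    if key in first_seen:
                        # Write the first occurrence and the duplicate side by side.
                        dupes.write(first_seen[key] + "|" + record.rstrip("\r\n") + "\n")
                    else:
                        first_seen[key] = record.rstrip("\r\n")

        find_duplicate_pairs("May.txt", "May_duplicate_pairs.txt")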

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.

    Test Data

    The two tables in this example share a common partition scheme. The partition function uses 41 equal-size partitions:

        CREATE PARTITION FUNCTION PFT (integer)
        AS RANGE RIGHT
        FOR VALUES
        (
            125000, 250000, 375000, 500000,
            625000, 750000, 875000, 1000000,
            1125000, 1250000, 1375000, 1500000,
            1625000, 1750000, 1875000, 2000000,
            2125000, 2250000, 2375000, 2500000,
            2625000, 2750000, 2875000, 3000000,
            3125000, 3250000, 3375000, 3500000,
            3625000, 3750000, 3875000, 4000000,
            4125000, 4250000, 4375000, 4500000,
            4625000, 4750000, 4875000, 5000000
        );
        GO
        CREATE PARTITION SCHEME PST
        AS PARTITION PFT
        ALL TO ([PRIMARY]);

    The two tables are:

        CREATE TABLE dbo.T1
        (
            TID integer NOT NULL IDENTITY(0,1),
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID)
        );

        CREATE TABLE dbo.T2
        (
            TID integer NOT NULL,
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID)
        );

    The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

        INSERT dbo.T1 WITH (TABLOCKX) (Column1)
        SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1
        FROM dbo.Numbers AS N
        WHERE n BETWEEN 1 AND 5000000;

    In case you don't already have an auxiliary table of numbers lying around, here's a script to create one with 10 million rows:

        CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

        WITH
            L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
            L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
            L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
            L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
            L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
            L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
            Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
        INSERT dbo.Numbers WITH (TABLOCKX)
        SELECT TOP (10000000) n
        FROM Nums
        ORDER BY n
        OPTION (MAXDOP 1);

    Table T1 contains data like this:

    Next we load data into table T2. The relationship between the two tables is that table 2 contains 'n' rows for each row in table 1, where 'n' is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

        INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1)
        SELECT T.TID, N.n
        FROM dbo.T1 AS T
        JOIN dbo.Numbers AS N
            ON N.n >= 1
            AND N.n <= T.Column1;

    Table T2 ends up containing about 15 million rows:

    The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

    Partition Distribution

    The following query shows the number of rows in each partition of table T1:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T1 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T2 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are roughly 375,000 rows in each partition (the rightmost partition is also empty). Ok, that's the test data done.
    Test Query and Execution Plan

    The task is to count the rows resulting from joining tables 1 and 2 on the TID column:

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    The optimizer chooses a plan using parallel hash join, and partial aggregation:

    The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image):

    With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched:

    Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms.

    Execution Plan Analysis

    The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads.

    For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value '1234' is placed in thread 5's hash table, the execution plan must guarantee that any rows from T2 that also have join key value '1234' probe thread 5's hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function, so rows from T1 and T2 with the same join key must end up on the same hash join thread.

    Expensive Exchanges

    This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        WHERE $PARTITION.PFT(T1.TID) = 1
        AND $PARTITION.PFT(T2.TID) = 1
        OPTION (MAXDOP 1);

    The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?
    Forcing a Merge Join

    Let's force the optimizer to use a merge join on the test query using a hint:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN);

    This is the execution plan selected by the optimizer:

    This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

    Parallel Merge Join

    We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN, QUERYTRACEON 8649);

    The execution plan is:

    This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge):

    A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro):

    CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving 'merging' exchanges (because merge join requires ordered inputs):

    Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning.

    Collocated Joins

    In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

    Costing and Plan Selection

    The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
    Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query:

        -- Pretend IOs are 50x cost temporarily
        DBCC SETIOWEIGHT(50);

        -- Co-located hash join
        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (RECOMPILE);

        -- Reset IO costing
        DBCC SETIOWEIGHT(1);

    Collocated Join Plan

    The estimated execution plan for the collocated join is:

    The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed.

    It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition:

    The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition 'n' is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

    CPU and Memory Efficiency Improvements

    The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

    Collocated Hash Join Performance

    The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single-partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows.
    This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb:

    A level 1 spill doesn't sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions).

    Collocated Merge Join

    We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb), but we do know:

    • Merge join does not require a memory grant; and
    • Merge join was the optimizer's preferred join option for a single-partition join

    Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us?

    CROSS APPLY sys.partitions

    We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

        SELECT row_count = SUM(Subtotals.cnt)
        FROM
        (
            -- Partition numbers
            SELECT p.partition_number
            FROM sys.partitions AS p
            WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
            AND p.index_id = 1
        ) AS P
        CROSS APPLY
        (
            -- Count per collocated join
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE $PARTITION.PFT(T1.TID) = p.partition_number
            AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

    The estimated plan is:

    The cardinality estimates aren't all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts.

    Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:

    Using a Temporary Table

    Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
    We can work around that by writing the partition numbers to a temporary table (or table variable):

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        CREATE TABLE #P
        (
            partition_number integer PRIMARY KEY
        );

        INSERT #P (partition_number)
        SELECT p.partition_number
        FROM sys.partitions AS p
        WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
        AND p.index_id = 1;

        SELECT row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE $PARTITION.PFT(T1.TID) = p.partition_number
            AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

        DROP TABLE #P;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn't choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to:

    In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time):

    Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer's cost model not reducing operator CPU costs on the inner side of a nested loops join. Don't get me started on that, we'll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

    Parallel Collocated Merge Join

    We can produce the desired parallel plan using trace flag 8649 again:

        SELECT row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE $PARTITION.PFT(T1.TID) = p.partition_number
            AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);

    The actual execution plan is:

    One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

    Performance

    The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest:

    • Collocated parallel merge join: 1350ms
    • Parallel hash join: 2600ms
    • Collocated serial merge join: 3500ms
    • Serial merge join: 5000ms
    • Parallel merge join: 8400ms
    • Collocated parallel hash join: 25,300ms (hash spill per partition)

    The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
    This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you.

    Parallel Collocated Merge Join with Demand Partitioning

    This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions):

        SELECT row_count = SUM(Subtotals.cnt)
        FROM
        (
            VALUES
                (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
                (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
                (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
                (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
        ) AS P (partition_number)
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE $PARTITION.PFT(T1.TID) = p.partition_number
            AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);

    The actual execution plan is:

    The parallel collocated hash join plan is reproduced below for comparison:

    The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer's collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.

    Final Words

    It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won't Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated – down from 569MB to 1.2MB.

    The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

    From a thread's point of view…

    If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let's look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators.

    Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).
    The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

    Related Reading

    • Understanding and Using Parallelism in SQL Server
    • Parallel Execution Plans Suck

    © 2013 Paul White – All Rights Reserved
    Twitter: @SQL_Kiwi
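
    Postscript: if the thread walkthrough above is easier to absorb as code, here is the same demand-partitioning loop in miniature, as a Python analogy (toy data; this illustrates the scheduling idea only, not SQL Server internals). Partition ids sit in a shared queue, each of eight workers pulls the next id when idle, joins that partition locally, and keeps a running subtotal:

        # Demand partitioning in miniature: workers pull partition ids from a
        # queue and each computes a per-partition join count independently.
        import threading
        from queue import Queue, Empty

        # Toy data: partition id -> join keys on each side of the join.
        t1 = {p: list(range(p * 10, p * 10 + 5)) for p in range(1, 42)}
        t2 = {p: [k for k in t1[p] for _ in range(3)] for p in range(1, 42)}

        work = Queue()
        for p in t1:                     # one unit of work per partition, 1..41
            work.put(p)

        subtotals = []

        def worker():
            count = 0
            while True:
                try:
                    p = work.get_nowait()        # "give me the next partition id"
                except Empty:
                    break
                keys = set(t1[p])                # build side: this partition only
                count += sum(1 for k in t2[p] if k in keys)  # probe side, same partition
            subtotals.append(count)              # per-thread subtotal

        threads = [threading.Thread(target=worker) for _ in range(8)]  # DOP 8
        for t in threads: t.start()
        for t in threads: t.join()
        print(sum(subtotals))  # 41 partitions * 5 keys * 3 rows = 615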

    Read the article

  • How can I get penetration depth from Minkowski Portal Refinement / Xenocollide?

    - by Raven Dreamer
    I recently got an implementation of Minkowski Portal Refinement (MPR) successfully detecting collisions. Even better, my implementation returns a good estimate (a local minimum) of the direction of minimum penetration depth. So I took a stab at adjusting the algorithm to return the penetration depth in an arbitrary direction, and was modestly successful: my altered method works splendidly for face-edge collision resolution! What it doesn't currently do is correctly provide the minimum penetration depth for edge-edge scenarios, such as the case on the right:

    What I perceive to be happening is that my current method returns the minimum penetration depth to the nearest vertex, which works fine when the collision is actually occurring on the plane of that vertex, but not when the collision happens along an edge. Is there a way I can alter my method to return the penetration depth to the point of collision, rather than the nearest vertex?

    Here's the method that's supposed to return the minimum penetration distance along a specific direction:

        public static Vector3 CalcMinDistance(List<Vector3> shape1, List<Vector3> shape2, Vector3 dir)
        {
            //holding variables
            Vector3 n = Vector3.zero;
            Vector3 swap = Vector3.zero;

            // v0 = center of Minkowski sum
            v0 = Vector3.zero;

            // Avoid case where centers overlap -- any direction is fine in this case
            //if (v0 == Vector3.zero) return Vector3.zero; //always pass in a valid direction.

            // v1 = support in direction of origin
            n = -dir;

            //get the difference of the Minkowski sum
            Vector3 v11 = GetSupport(shape1, -n);
            Vector3 v12 = GetSupport(shape2, n);
            v1 = v12 - v11;

            //if the support point is not in the direction of the origin
            if (v1.Dot(n) <= 0)
            {
                //Debug.Log("Could find no points this direction");
                return Vector3.zero;
            }

            // v2 - support perpendicular to v1,v0
            n = v1.Cross(v0);
            if (n == Vector3.zero)
            {
                //v1 and v0 are parallel, which means
                //the direction leads directly to an endpoint
                n = v1 - v0;
                //shortest distance is just n
                //Debug.Log("2 point return");
                return n;
            }

            //get the new support point
            Vector3 v21 = GetSupport(shape1, -n);
            Vector3 v22 = GetSupport(shape2, n);
            v2 = v22 - v21;
            if (v2.Dot(n) <= 0)
            {
                //can't reach the origin in this direction, ergo, no collision
                //Debug.Log("Could not reach edge?");
                return Vector2.zero;
            }

            // Determine whether origin is on + or - side of plane (v1,v0,v2)
            //tests line segments v0v1 and v0v2
            n = (v1 - v0).Cross(v2 - v0);
            float dist = n.Dot(v0);

            // If the origin is on the - side of the plane, reverse the direction of the plane
            if (dist > 0)
            {
                //swap the winding order of v1 and v2
                swap = v1; v1 = v2; v2 = swap;
                //swap the winding order of v11 and v12
                swap = v12; v12 = v11; v11 = swap;
                //swap the winding order of v21 and v22
                swap = v22; v22 = v21; v21 = swap;
                //and swap the plane normal
                n = -n;
            }

            ///
            // Phase One: Identify a portal
            while (true)
            {
                // Obtain the support point in a direction perpendicular to the existing plane
                // Note: This point is guaranteed to lie off the plane
                Vector3 v31 = GetSupport(shape1, -n);
                Vector3 v32 = GetSupport(shape2, n);
                v3 = v32 - v31;
                if (v3.Dot(n) <= 0)
                {
                    //can't enclose the origin within our tetrahedron
                    //Debug.Log("Could not reach edge after portal?");
                    return Vector3.zero;
                }

                // If origin is outside (v1,v0,v3), then eliminate v2 and loop
                if (v1.Cross(v3).Dot(v0) < 0)
                {
                    //failed to enclose the origin, adjust points;
                    v2 = v3; v21 = v31; v22 = v32;
                    n = (v1 - v0).Cross(v3 - v0);
                    continue;
                }

                // If origin is outside (v3,v0,v2), then eliminate v1 and loop
                if (v3.Cross(v2).Dot(v0) < 0)
                {
                    //failed to enclose the origin, adjust points;
                    v1 = v3; v11 = v31; v12 = v32;
                    n = (v3 - v0).Cross(v2 - v0);
                    continue;
                }

                bool hit = false;

                ///
                // Phase Two: Refine the portal
                int phase2 = 0;

                // We are now inside of a wedge...
                while (phase2 < 20)
                {
                    phase2++;

                    // Compute normal of the wedge face
                    n = (v2 - v1).Cross(v3 - v1);
                    n.Normalize();

                    // Compute distance from origin to wedge face
                    float d = n.Dot(v1);

                    // If the origin is inside the wedge, we have a hit
                    if (d > 0)
                    {
                        //Debug.Log("Do plane test here");
                        float T = n.Dot(v2) / n.Dot(dir);
                        Vector3 pointInPlane = (dir * T);
                        return pointInPlane;
                    }

                    // Find the support point in the direction of the wedge face
                    Vector3 v41 = GetSupport(shape1, -n);
                    Vector3 v42 = GetSupport(shape2, n);
                    v4 = v42 - v41;

                    float delta = (v4 - v3).Dot(n);
                    float separation = -(v4.Dot(n));
                    if (delta <= kCollideEpsilon || separation >= 0)
                    {
                        //Debug.Log("Non-convergence detected");
                        //Debug.Log("Do plane test here");
                        return Vector3.zero;
                    }

                    // Compute the tetrahedron dividing face (v4,v0,v1)
                    float d1 = v4.Cross(v1).Dot(v0);
                    // Compute the tetrahedron dividing face (v4,v0,v2)
                    float d2 = v4.Cross(v2).Dot(v0);
                    // Compute the tetrahedron dividing face (v4,v0,v3)
                    float d3 = v4.Cross(v3).Dot(v0);

                    if (d1 < 0)
                    {
                        if (d2 < 0)
                        {
                            // Inside d1 & inside d2 ==> eliminate v1
                            v1 = v4; v11 = v41; v12 = v42;
                        }
                        else
                        {
                            // Inside d1 & outside d2 ==> eliminate v3
                            v3 = v4; v31 = v41; v32 = v42;
                        }
                    }
                    else
                    {
                        if (d3 < 0)
                        {
                            // Outside d1 & inside d3 ==> eliminate v2
                            v2 = v4; v21 = v41; v22 = v42;
                        }
                        else
                        {
                            // Outside d1 & outside d3 ==> eliminate v1
                            v1 = v4; v11 = v41; v12 = v42;
                        }
                    }
                }
                return Vector3.zero;
            }
        }
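
    One way to handle the edge-edge case, common in the collision-detection literature and not specific to the asker's code: find the closest points between the two penetrating edges and measure depth along the segment joining them. Below is the standard clamped segment-segment closest-point routine, sketched in Python for brevity (assumes neither segment degenerates to a point):

        # Closest points between segments p1->q1 and p2->q2 in 3D.
        # Clamped least-squares solution; returns (point_on_seg1, point_on_seg2).
        def closest_points_on_segments(p1, q1, p2, q2, eps=1e-12):
            sub = lambda a, b: (a[0]-b[0], a[1]-b[1], a[2]-b[2])
            dot = lambda a, b: a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
            lerp = lambda p, d, t: (p[0]+d[0]*t, p[1]+d[1]*t, p[2]+d[2]*t)
            clamp01 = lambda x: max(0.0, min(1.0, x))

            d1, d2, r = sub(q1, p1), sub(q2, p2), sub(p1, p2)
            a, e, f = dot(d1, d1), dot(d2, d2), dot(d2, r)
            c, b = dot(d1, r), dot(d1, d2)
            denom = a * e - b * b        # >= 0; zero when the edges are parallel

            s = clamp01((b * f - c * e) / denom) if denom > eps else 0.0
            t = (b * s + f) / e if e > eps else 0.0

            # If t left [0,1], clamp it and recompute s for the clamped t.
            if t < 0.0:
                t = 0.0
                s = clamp01(-c / a) if a > eps else 0.0
            elif t > 1.0:
                t = 1.0
                s = clamp01((b - c) / a) if a > eps else 0.0

            return lerp(p1, d1, s), lerp(p2, d2, t)

        # Example: two skew edges; the vector between the returned points gives
        # the direction (and, for overlapping edges, the depth) of penetration.
        ca, cb = closest_points_on_segments((0, 0, 0), (1, 0, 0),
                                            (0.5, -1, 1), (0.5, 1, 1))
        print(ca, cb)  # (0.5, 0.0, 0.0) (0.5, 0.0, 1.0)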

    Read the article

  • Alert visualization recipe: Get out your blender, drop in some sp_send_dbmail, Google Charts API, add your favorite colors and sprinkle with html. Blend till it’s smooth and looks pretty enough to taste.

    - by Maria Zakourdaev
    I really like database monitoring. My email inbox has a constant flow of different types of alerts coming from our production servers with all kinds of information, sometimes more useful and sometimes less useful. Database alerts usually look really simple: a plain-text email saying "Prod1 Database data file on Server X is 80% used. You'd better grow it manually before some query triggers the AutoGrowth process".

    Imagine you could receive an email like the one below. In addition to the alert description, it could also include the database file growth chart over the past 6 months. Wouldn't that tell you much more about whether the data growth is natural or extreme? That's truly what data visualization is for. Believe it or not, I have sent the graph below from a SQL Server stored procedure without buying any additional data monitoring/visualization tool.

    Would you like to visualize your database alerts like I do? Then, like me, you'd love Google Charts. All you need is to know a little HTML and to have a mail profile configured on your SQL Server instance, regardless of the SQL Server version.

    First of all, I hope you know that the sp_send_dbmail procedure has a great parameter @body_format = 'HTML', which allows us to send rich and colorful messages instead of boring black and white ones. All that we need is to dynamically create HTML code. This is how, for instance, you can create a table and populate it with some data:

        DECLARE @html varchar(max)

        SET @html =
            '<html>' +
            '<H3><font id="Text" style="color: Green;">Top Databases: </H3>' +
            '<table border="1" bordercolor="#3300FF" style="background-color:#DDF8CC" width="70%" cellpadding="3" cellspacing="3">' +
            '<tr><font color="Green"><th>Database Name</th><th>Size</th><th>Physical Name</th></tr>' +
            CAST( (SELECT TOP 10
                          td = name, '',
                          td = size * 8/1024, '',
                          td = physical_name
                   FROM sys.master_files
                   ORDER BY size DESC
                   FOR XML PATH ('tr'), TYPE) AS VARCHAR(MAX)) +
            '</table>'

        EXEC msdb.dbo.sp_send_dbmail
            @recipients = '[email protected]',
            @subject = 'Top databases',
            @body = @html,
            @body_format = 'HTML'

    This is the result:

    If you want to add more visualization effects, you can use Google Charts Tools https://google-developers.appspot.com/chart/interactive/docs/index which is a free and rich library of data visualization charts; they're also easy to populate and embed. There are two versions of the Google Charts:

    Image-based charts: https://google-developers.appspot.com/chart/image/docs/gallery/chart_gall
    This is an old version; it's officially deprecated, although it will be up for the next few years or so. I really enjoy using this one because it can be viewed within the email body. For mobile devices you need to change the "Load remote images" property in your email application configuration.

    Charts based on JavaScript classes: https://google-developers.appspot.com/chart/interactive/docs/gallery
    This API is newer, with rich and highly interactive charts, and it's much easier to understand and configure. The only downside is that these charts cannot be viewed within the email body. Outlook, Gmail and many other email clients, as part of their security policy, do not run any JavaScript that's placed within the email body. However, you can still enjoy this API by sending the report as an email attachment.
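
    As a side note before the T-SQL examples: the image-chart URL is just a query string, so you can prototype it in any language before wiring it into a stored procedure. A small Python sketch (the chart.apis.google.com endpoint is the deprecated image API mentioned above, and the data here is invented, so treat this purely as an illustration of the parameter format):

        # Build a Google image-chart pie URL from name/size pairs.
        try:
            from urllib.parse import urlencode   # Python 3
        except ImportError:
            from urllib import urlencode         # Python 2

        def pie_chart_url(rows, title):
            # rows: list of (label, value) tuples, e.g. database name and MB used
            params = {
                'cht': 'p',                                        # chart type: pie
                'chs': '400x200',                                  # chart size
                'chd': 't:' + ','.join(str(v) for _, v in rows),   # data series
                'chl': '|'.join('%sMB' % v for _, v in rows),      # slice labels
                'chdl': '|'.join(name for name, _ in rows),        # legend
                'chtt': title,                                     # chart title
            }
            return 'http://chart.apis.google.com/chart?' + urlencode(params)

        print(pie_chart_url([('master', 5), ('msdb', 20), ('Prod1', 840)],
                            'Top databases'))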
    Here is an example of the old version of the Google Charts API, sending the same top-databases report as in the previous example, but instead of a simple table this script uses a pie chart, right from the T-SQL code:

        DECLARE @html varchar(8000)
        DECLARE @Series varchar(800), @Labels varchar(8000), @Legend varchar(8000);

        SET @Series = '';
        SET @Labels = '';
        SET @Legend = '';

        SELECT TOP 5
            @Series = @Series + CAST(size * 8/1024 as varchar) + ',',
            @Labels = @Labels + CAST(size * 8/1024 as varchar) + 'MB' + '|',
            @Legend = @Legend + name + '|'
        FROM sys.master_files
        ORDER BY size DESC

        SELECT @Series = SUBSTRING(@Series, 1, LEN(@Series)-1),
               @Labels = SUBSTRING(@Labels, 1, LEN(@Labels)-1),
               @Legend = SUBSTRING(@Legend, 1, LEN(@Legend)-1)

        SET @html =
            '<H3><font color="Green"> ' + @@ServerName + ' top 5 databases : </H3>' +
            '<br>' +
            '<img src="http://chart.apis.google.com/chart?' +
            'chf=bg,s,DDF8CC&' +
            'cht=p&' +
            'chs=400x200&' +
            'chco=3072F3|7777CC|FF9900|FF0000|4A8C26&' +
            'chd=t:' + @Series + '&' +
            'chl=' + @Labels + '&' +
            'chma=0,0,0,0&' +
            'chdl=' + @Legend + '&' +
            'chdlp=b"' +
            'alt="' + @@ServerName + ' top 5 databases" />'

        EXEC msdb.dbo.sp_send_dbmail
            @recipients = '[email protected]',
            @subject = 'Top databases',
            @body = @html,
            @body_format = 'HTML'

    This is what you get. Isn't it great?

    Chart parameters reference:

        chf      gradient fill (bg - background; s - solid)
        cht      chart type (p - pie)
        chs      chart size (width x height)
        chco     series colors
        chd      chart data string (1,2,3,2)
        chl      pie chart labels (a|b|c|d)
        chma     chart margins
        chdl     chart legend (a|b|c|d)
        chdlp    chart legend text (b - bottom of chart)

    Line graph implementation is also really easy and powerful:

        DECLARE @html varchar(max)
        DECLARE @Series varchar(max)
        DECLARE @HourList varchar(max)

        SET @Series = '';
        SET @HourList = '';

        SELECT @HourList = @HourList + SUBSTRING(CONVERT(varchar(13), last_execution_time, 121), 12, 2) + '|',
               @Series = @Series + CAST(COUNT(1) as varchar) + ','
        FROM sys.dm_exec_query_stats s
            CROSS APPLY sys.dm_exec_sql_text(plan_handle) t
        WHERE last_execution_time >= getdate()-1
        GROUP BY CONVERT(varchar(13), last_execution_time, 121)
        ORDER BY CONVERT(varchar(13), last_execution_time, 121)

        SET @Series = SUBSTRING(@Series, 1, LEN(@Series)-1)

        SET @html =
            '<img src="http://chart.apis.google.com/chart?' +
            'chco=CA3D05,87CEEB&' +
            'chd=t:' + @Series + '&' +
            'chds=1,350&' +
            'chdl= Proc executions from cache&' +
            'chf=bg,s,1F1D1D|c,lg,0,363433,1.0,2E2B2A,0.0&' +
            'chg=25.0,25.0,3,2&' +
            'chls=3|3&' +
            'chm=d,CA3D05,0,-1,12,0|d,FFFFFF,0,-1,8,0|d,87CEEB,1,-1,12,0|d,FFFFFF,1,-1,8,0&' +
            'chs=600x450&' +
            'cht=lc&' +
            'chts=FFFFFF,14&' +
            'chtt=Executions from ' +
            (SELECT CONVERT(varchar(16), min(last_execution_time), 121)
             FROM sys.dm_exec_query_stats
             WHERE last_execution_time >= getdate()-1) +
            ' till ' +
            (SELECT CONVERT(varchar(16), max(last_execution_time), 121)
             FROM sys.dm_exec_query_stats) + '&' +
            'chxp=1,50.0|4,50.0&' +
            'chxs=0,FFFFFF,12,0|1,FFFFFF,12,0|2,FFFFFF,12,0|3,FFFFFF,12,0|4,FFFFFF,14,0&' +
            'chxt=y,y,x,x,x&' +
            'chxl=0:|1|350|1:|N|2:|' + @HourList + '3:|Hour&' +
            'chma=55,120,0,0" alt="" />'

        EXEC msdb.dbo.sp_send_dbmail
            @recipients = '[email protected]',
            @subject = 'Daily number of executions',
            @body = @html,
            @body_format = 'HTML'

    Chart parameters reference:

        chco     series colors
        chd      series data
        chds     scale format
        chdl     chart legend
        chf      background fills
        chg      grid line
        chls     line style
        chm      line fill
        chs      chart size
        cht      chart type
        chts     chart style
        chtt     chart title
        chxp     axis label positions
        chxs     axis label styles
        chxt     axis tick mark styles
        chxl     axis labels
        chma     chart margins

    If you don't mind getting your charts as an email attachment, you can enjoy the JavaScript-based Google Charts, which are even easier to configure and have much more advanced graphics. In the example below, the sp_send_dbmail procedure uses the parameter @query, which will be executed at the time sp_send_dbmail runs, and the HTML result of this execution will be attached to the email.

        DECLARE @html varchar(max), @query varchar(max)
        DECLARE @SeriesDBusers varchar(800);
        SET @SeriesDBusers = '';

        SELECT @SeriesDBusers = @SeriesDBusers + ' ["' + DB_NAME(r.database_id) + '", ' + cast(count(1) as varchar) + '],'
        FROM sys.dm_exec_requests r
        GROUP BY DB_NAME(database_id)
        ORDER BY count(1) desc;

        SET @SeriesDBusers = SUBSTRING(@SeriesDBusers, 1, LEN(@SeriesDBusers)-1)

        SET @query = '
        PRINT ''
        <html>
          <head>
            <script type="text/javascript" src="https://www.google.com/jsapi"></script>
            <script type="text/javascript">
              google.load("visualization", "1", {packages:["corechart"]});
              google.setOnLoadCallback(drawChart);
              function drawChart() {
                var data = google.visualization.arrayToDataTable([
                  ["Database Name", "Active users"],
                  ' + @SeriesDBusers + '
                ]);
                var options = {
                  title: "Active users",
                  pieSliceText: "value"
                };
                var chart = new google.visualization.PieChart(document.getElementById("chart_div"));
                chart.draw(data, options);
              };
            </script>
          </head>
          <body>
            <table>
            <tr><td>
              <div id="chart_div" style="width: 800px; height: 300px;"></div>
            </td></tr>
            </table>
          </body>
        </html>
        '''

        EXEC msdb.dbo.sp_send_dbmail
            @recipients = '[email protected]',
            @subject = 'Active users',
            @body = @html,
            @body_format = 'HTML',
            @query = @Query,
            @attach_query_result_as_file = 1,
            @query_attachment_filename = 'Results.htm'

    After opening the email attachment in the browser you get this kind of report:

    In fact, the above is not only for database alerts. It can be used for applicative reports if you need high levels of customization that you cannot achieve using standard methods like SSRS.

    If you need more information on how to customize the charts, you can try the following:

    • Image Based Charts wizard: https://google-developers.appspot.com/chart/image/docs/chart_wizard
    • Live Image Charts Playground: https://google-developers.appspot.com/chart/image/docs/chart_playground
    • Image Based Charts Parameters List: https://google-developers.appspot.com/chart/image/docs/chart_params
    • JavaScript Charts Playground: https://code.google.com/apis/ajax/playground/?type=visualization

    Use the above examples as a starting point for your procedures and I'd be more than happy to hear of your implementations of the above techniques.

    Yours, Maria

    Read the article

  • How to deal with configuration style warnings occurring during TexLive 2012 installation?

    - by JJD
    I followed the advice of izx on how to install TexLive 2012 using the texlive-backports PPA. Before I started I removed all TexLive-related packages. The installation finished and everything seems to work fine. The only thing I noticed are some warnings in the output of the installer. Here is an excerpt of the output: Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg There are more of that kind in the rest of the output: $ sudo apt-get install texlive Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: latex-beamer latex-xcolor libgraphite3 libkpathsea6 libptexenc1 lmodern pgf prosper ps2eps tex-common tex-gyre texlive-base texlive-binaries texlive-common texlive-doc-base texlive-extra-utils texlive-font-utils texlive-fonts-recommended texlive-fonts-recommended-doc texlive-generic-recommended texlive-latex-base texlive-latex-base-doc texlive-latex-recommended texlive-latex-recommended-doc texlive-pstricks texlive-pstricks-doc tipa ttf-marvosym Suggested packages: texlive-doc-en purifyeps chktex latexmk dvipng xindy dvidvi fragmaster lacheck latexdiff t1utils The following NEW packages will be installed: latex-beamer latex-xcolor libgraphite3 libkpathsea6 libptexenc1 lmodern pgf prosper ps2eps tex-common tex-gyre texlive texlive-base texlive-binaries texlive-common texlive-doc-base texlive-extra-utils texlive-font-utils texlive-fonts-recommended texlive-fonts-recommended-doc texlive-generic-recommended texlive-latex-base texlive-latex-base-doc texlive-latex-recommended texlive-latex-recommended-doc texlive-pstricks texlive-pstricks-doc tipa ttf-marvosym 0 upgraded, 29 newly installed, 0 to remove and 17 not upgraded. Need to get 0 B/274 MB of archives. After this operation, 450 MB of additional disk space will be used. Do you want to continue [Y/n]? Preconfiguring packages ... Selecting previously unselected package tex-common. (Reading database ... 290206 files and directories currently installed.) Unpacking tex-common (from .../tex-common_3.13~ubuntu12.04.1_all.deb) ... Selecting previously unselected package lmodern. Unpacking lmodern (from .../lmodern_2.004.1-5~precise1_all.deb) ... Selecting previously unselected package tex-gyre. Unpacking tex-gyre (from .../tex-gyre_2.004.1-4~precise1_all.deb) ... Selecting previously unselected package libgraphite3. Unpacking libgraphite3 (from .../libgraphite3_1%3a2.3.1-0.2build1_amd64.deb) ... Selecting previously unselected package libkpathsea6. Unpacking libkpathsea6 (from .../libkpathsea6_2012.20120628-1~ubuntu12.04.1_amd64.deb) ... Selecting previously unselected package libptexenc1. Unpacking libptexenc1 (from .../libptexenc1_2012.20120628-1~ubuntu12.04.1_amd64.deb) ... Selecting previously unselected package texlive-common. Unpacking texlive-common (from .../texlive-common_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-binaries. Unpacking texlive-binaries (from .../texlive-binaries_2012.20120628-1~ubuntu12.04.1_amd64.deb) ... Selecting previously unselected package texlive-doc-base. Unpacking texlive-doc-base (from .../texlive-doc-base_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-base. 
Unpacking texlive-base (from .../texlive-base_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-base. Unpacking texlive-latex-base (from .../texlive-latex-base_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-recommended. Unpacking texlive-latex-recommended (from .../texlive-latex-recommended_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package latex-xcolor. Unpacking latex-xcolor (from .../latex-xcolor_2.11-1_all.deb) ... Selecting previously unselected package pgf. Unpacking pgf (from .../archives/pgf_2.10-1_all.deb) ... Selecting previously unselected package latex-beamer. Unpacking latex-beamer (from .../latex-beamer_3.10-1_all.deb) ... Selecting previously unselected package texlive-generic-recommended. Unpacking texlive-generic-recommended (from .../texlive-generic-recommended_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-pstricks. Unpacking texlive-pstricks (from .../texlive-pstricks_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package prosper. Unpacking prosper (from .../prosper_1.00.4+cvs.2007.05.01-4_all.deb) ... Selecting previously unselected package ps2eps. Unpacking ps2eps (from .../ps2eps_1.68-1_amd64.deb) ... Selecting previously unselected package ttf-marvosym. Unpacking ttf-marvosym (from .../ttf-marvosym_0.1+dfsg-2_all.deb) ... Selecting previously unselected package texlive-fonts-recommended. Unpacking texlive-fonts-recommended (from .../texlive-fonts-recommended_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive. Unpacking texlive (from .../texlive_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-extra-utils. Unpacking texlive-extra-utils (from .../texlive-extra-utils_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-font-utils. Unpacking texlive-font-utils (from .../texlive-font-utils_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-fonts-recommended-doc. Unpacking texlive-fonts-recommended-doc (from .../texlive-fonts-recommended-doc_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-base-doc. Unpacking texlive-latex-base-doc (from .../texlive-latex-base-doc_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-latex-recommended-doc. Unpacking texlive-latex-recommended-doc (from .../texlive-latex-recommended-doc_2012.20120611-3~ubuntu12.04.1_all.deb) ... Selecting previously unselected package texlive-pstricks-doc. Unpacking texlive-pstricks-doc (from .../texlive-pstricks-doc_2012.20120611-1~ubuntu12.04.1_all.deb) ... Selecting previously unselected package tipa. Unpacking tipa (from .../tipa_2%3a1.3-17~precise1_all.deb) ... Processing triggers for doc-base ... Processing 5 added doc-base files... Registering documents with scrollkeeper... Processing triggers for man-db ... Processing triggers for fontconfig ... Processing triggers for install-info ... Setting up tex-common (3.13~ubuntu12.04.1) ... Running mktexlsr. This may take some time... done. texlive-base is not ready, delaying updmap-sys call texlive-base is not ready, skipping fmtutil-sys --all call Setting up lmodern (2.004.1-5~precise1) ... 
Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up tex-gyre (2.004.1-4~precise1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up libgraphite3 (1:2.3.1-0.2build1) ... Setting up libkpathsea6 (2012.20120628-1~ubuntu12.04.1) ... Setting up libptexenc1 (2012.20120628-1~ubuntu12.04.1) ... Setting up texlive-common (2012.20120611-3~ubuntu12.04.1) ... Setting up texlive-binaries (2012.20120628-1~ubuntu12.04.1) ... update-alternatives: using /usr/bin/xdvi-xaw to provide /usr/bin/xdvi.bin (xdvi.bin) in auto mode. update-alternatives: using /usr/bin/bibtex.original to provide /usr/bin/bibtex (bibtex) in auto mode. mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. Building format(s) --refresh. This may take some time... done. Setting up texlive-doc-base (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up ps2eps (1.68-1) ... Setting up ttf-marvosym (0.1+dfsg-2) ... Setting up texlive-fonts-recommended-doc (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-base-doc (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-recommended-doc (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-pstricks-doc (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Processing triggers for tex-common ... 
Running mktexlsr. This may take some time... done. texlive-base is not ready, delaying updmap-sys call Setting up texlive-base (2012.20120611-3~ubuntu12.04.1) ... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. /usr/bin/tl-paper: setting paper size for dvips to a4. /usr/bin/tl-paper: setting paper size for dvipdfmx to a4. /usr/bin/tl-paper: setting paper size for xdvi to a4. /usr/bin/tl-paper: setting paper size for pdftex to a4. Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Running mktexlsr. This may take some time... done. Building format(s) --all. This may take some time... done. Processing triggers for tex-common ... Running updmap-sys. This may take some time... done. Running mktexlsr /var/lib/texmf ... done. Setting up texlive-generic-recommended (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-fonts-recommended (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-extra-utils (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-font-utils (2012.20120611-1~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-base (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Running mktexlsr. This may take some time... done. Building format(s) --all --cnffile /etc/texmf/fmt.d/10texlive-latex-base.cnf. This may take some time... done. Processing triggers for tex-common ... Running mktexlsr. This may take some time... done. Running updmap-sys. This may take some time... done. Running mktexlsr /var/lib/texmf ... done. Setting up texlive-pstricks (2012.20120611-1~ubuntu12.04.1) ... 
Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up tipa (2:1.3-17~precise1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Setting up texlive-latex-recommended (2012.20120611-3~ubuntu12.04.1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Processing triggers for tex-common ... Running mktexlsr. This may take some time... done. Running updmap-sys. This may take some time... done. Running mktexlsr /var/lib/texmf ... done. Setting up prosper (1.00.4+cvs.2007.05.01-4) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Running mktexlsr. This may take some time... done. Setting up texlive (2012.20120611-3~ubuntu12.04.1) ... Setting up latex-xcolor (2.11-1) ... mktexlsr: Updating /usr/local/share/texmf/ls-R... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. Setting up pgf (2.10-1) ... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg Processing triggers for tex-common ... Running mktexlsr. This may take some time... done. Setting up latex-beamer (3.10-1) ... mktexlsr: Updating /usr/local/share/texmf/ls-R... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEMAIN... mktexlsr: Updating /var/lib/texmf/ls-R-TEXLIVEDIST... mktexlsr: Updating /var/lib/texmf/ls-R-TEXMFMAIN... mktexlsr: Updating /var/lib/texmf/ls-R... mktexlsr: Done. Processing triggers for libc-bin ... ldconfig deferred processing now taking place

What exactly is 10lmodern.cfg good for? How can I prevent these warnings? Here is the output of sudo update-updmap:

$ sudo update-updmap Regenerating '/var/lib/texmf/updmap.cfg-DEBIAN'... Warning: Old configuration style found in /etc/texmf/updmap.d Warning: For now these files have been included, Warning: but expect inconsistencies. Warning: These packages should be rebuild with tex-common. Warning: Please see /usr/share/doc/tex-common/NEWS.Debian.gz Warning: found file: /etc/texmf/updmap.d/10lmodern.cfg done. Regenerating '/var/lib/texmf/updmap.cfg-TEXLIVEDIST'... done.
update-updmap has updated the following file(s): /var/lib/texmf/updmap.cfg-DEBIAN /var/lib/texmf/updmap.cfg-TEXLIVEDIST If you want to enable the map files with this new file, you should run updmap-sys or updmap.
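For background: 10lmodern.cfg appears to be an old-style updmap.d fragment that registers the Latin Modern font maps; current lmodern packages register their maps without it, which is why tex-common flags the file. If that is the case here (verify before deleting), the usual cleanup is a rough sketch like this:

    # Move the stale old-style fragment out of the way, then regenerate
    sudo mv /etc/texmf/updmap.d/10lmodern.cfg /root/10lmodern.cfg.bak
    sudo update-updmap
    # Re-enable the map files system-wide, as the update-updmap output suggests
    sudo updmap-sys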

    Read the article

  • Automatic Standby Recreation for Data Guard

    - by pablo.boixeda(at)oracle.com
Hi, unfortunately sometimes a standby instance needs to be recreated. This can happen for many reasons, such as lost archive logs, lost standby data files, a failover, among others. This is why we wanted to have one script to recreate standby instances in an easy way. This script recreates the standby considering some prereqs:
- Database version should be at least 11gR1
- Dummy instance started on the standby node (seeking to improve this so it won't be needed)
- Broker configuration hasn't been removed
- In our case we have two TNSNAMES files, one for the standby creation (using the SID) and the other one for production using service names (including the broker service name)
- Some environment variables set up by the environment db script (like ORACLE_HOME, PATH...)
- The directory tree should not have been modified on the standby host
We are currently using it on our 11gR2 Data Guard tests. Any improvements will be welcome!
#!/bin/ksh ###    NAME / VERSION ###       recrea_dg.sh   v.1.00 ### ###    DESCRIPTION ###       Recreation of the standby ### ###    RETURNS ###       0 STANDBY created successfully ###       1 Failure ### ###    NOTES ###       This shell script MUST NOT be modified. ###       All required variables and constants are taken from the environment. ### ###    MODIFIED BY:       DATE:         COMMENTS: ###    ---------------    ----------    ------------------------------------- ###      Oracle           15/02/2011    Created. ### ### ### Load environment ### V_ADMIN_DIR=`dirname $0` . ${V_ADMIN_DIR}/entorno_bd.sh 1>>/dev/null if [ $? -ne 0 ] then   echo "Error Loading the environment."   exit 1 fi V_RET=0 V_DATE=`/bin/date` V_DATE_F=`/bin/date +%Y%m%d_%H%M%S` V_LOGFILE=${V_TRAZAS}/recrea_dg_${V_DATE_F}.log exec 4>&1 tee ${V_FICH_LOG} >&4 |& exec 1>&p 2>&1 ### ### Variables for recreating the Data Guard standby ### V_DB_BR=`echo ${V_DB_NAME}|tr '[:lower:]' '[:upper:]'` if [ "${ORACLE_SID}" = "${V_DB_NAME}01" ] then         V_LOCAL_BR=${V_DB_BR}'01'         V_REMOTE_BR=${V_DB_BR}'02' else         V_LOCAL_BR=${V_DB_BR}'02'         V_REMOTE_BR=${V_DB_BR}'01' fi echo " Getting local instance ROLE ${ORACLE_SID} ..." sqlplus -s /nolog 1>>/dev/null 2>&1 <<-! whenever sqlerror exit 1 connect / as sysdba variable salida number declare   v_database_role v\$database.database_role%type; begin   select database_role into v_database_role from v\$database;   :salida := case v_database_role        when 'PRIMARY' then 2        when 'PHYSICAL STANDBY' then 3        else 4      end; end; / exit :salida ! case $? in 1) echo " ERROR: Cannot get instance ROLE ." | tee -a ${V_LOGFILE}   2>&1    V_RET=1 ;; 2) echo " Local Instance with PRIMARY role."
| tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=PRIMARY ;; 3) echo " Local Instance with PHYSICAL STANDBY role." | tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=STANDBY ;; *) echo " ERROR: UNKNOWN ROLE." | tee -a ${V_LOGFILE}   2>&1    V_RET=1 ;; esac if [ "${V_DB_ROLE_LCL}" = "PRIMARY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Recreating STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_REMOTE_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_LOCAL_BR}         V_STANDBY=${V_REMOTE_BR} fi if [ "${V_DB_ROLE_LCL}" = "STANDBY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Recreating STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_LOCAL_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_REMOTE_BR}         V_STANDBY=${V_LOCAL_BR} fi # Load the host variables PRY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_PRIMARY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` SBY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` echo "primary HOST is: ${PRY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "standby HOST is: ${SBY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 ## ## Stop the STANDBY instance ## V_DATE=`/bin/date` echo "${V_DATE} - Shutting down Standby instance" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Stop the STANDBY instance ## SBY_STATUS=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',status from v\\$instance; EOF` if [ ${SBY_STATUS} = 'STARTED' ] || [ ${SBY_STATUS} = 'MOUNTED' ] || [ ${SBY_STATUS} = 'OPEN' ] then         echo "${V_DATE} - Standby instance shutdown in progress..." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1         sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         shutdown abort         !
fi V_DATE=`/bin/date` echo "" echo "${V_DATE} - Standby instance stopped" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Remove the database files ## V_SBY_SID=`echo ${V_STANDBY}|tr '[:upper:]' '[:lower:]'` V_PRY_SID=`echo ${V_PRIMARY}|tr '[:upper:]' '[:lower:]'` ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/data/*.dbf ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch/*.arc ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.rdo ## ## Startup nomount stby instance ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting  DUMMY Standby Instance " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${SBY_HOST} touch /home/oracle/init_dg.ora ssh ${SBY_HOST} 'echo "DB_NAME='${V_DB_NAME}'">>/home/oracle/init_dg.ora' ssh ${SBY_HOST} touch /home/oracle/start_dummy.sh ssh ${SBY_HOST} 'echo "ORACLE_HOME=/opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_HOME">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "PATH=\$ORACLE_HOME/bin:\$PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "ORACLE_SID='${V_SBY_SID}'">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_SID">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "sqlplus -s /nolog <<-!" >>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      whenever sqlerror exit 1 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      connect / as sysdba ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      startup nomount pfile='\''/home/oracle/init_dg.ora'\''">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "! ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'chmod 744 /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'sh /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/init_dg.ora' ## ## TNSNAMES change, specific for RMAN duplicate ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Setting up TNSNAMES in PRIMARY host " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.inst  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting STANDBY creation with RMAN.. " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 rman<<-!
>>${V_LOGFILE} connect target sys/${V_DB_PWD}@${V_PRIMARY} connect auxiliary sys/${V_DB_PWD}@${V_STANDBY} run { allocate channel prmy1 type disk; allocate channel prmy2 type disk; allocate channel prmy3 type disk; allocate channel prmy4 type disk; allocate auxiliary channel stby type disk; duplicate target database for standby from active database dorecover spfile parameter_value_convert '${V_PRY_SID}','${V_SBY_SID}' set control_files='/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/control01.ctl','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/control02.ctl' set db_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set log_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set 'db_unique_name'='${V_SBY_SID}' set log_archive_config='DG_CONFIG=(${V_PRIMARY},${V_STANDBY})' set fal_client='${V_STANDBY}' set fal_server='${V_PRIMARY}' set log_archive_dest_1='LOCATION=/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch DB_UNIQUE_NAME=${V_SBY_SID} MANDATORY VALID_FOR=(ALL_LOGFILES,ALL_ROLES)' set log_archive_dest_2='SERVICE="${V_PRIMARY}"','SYNC AFFIRM DB_UNIQUE_NAME=${V_PRY_SID} DELAY=0 MAX_FAILURE=0 REOPEN=300 REGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' nofilenamecheck ; } ! V_DATE=`/bin/date` if [ $? -ne 0 ] then         echo ""         echo "${V_DATE} - Error creating STANDBY instance"         echo ""         echo "********************************************************************************" else         echo ""         echo "${V_DATE} - STANDBY instance created SUCCESSFULLY "         echo ""         echo "********************************************************************************" fi sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=${SBY_HOST})(PORT=1544))' scope=both;         alter system set service_names='${V_DB_NAME}.eu.roca.net,${V_SBY_SID}.eu.roca.net,${V_SBY_SID}_DGMGRL.eu.roca.net' scope=both;         alter database recover managed standby database using current logfile disconnect from session;         alter system set dg_broker_start=true scope=both; ! ## ## TNSNAMES change, back to Production Mode ## V_DATE=`/bin/date` echo " " | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Restoring TNSNAMES in PRIMARY "  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.prod  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} -  Waiting for media recovery before check the DATA GUARD Broker"  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 sleep 200 dgmgrl <<-! | grep SUCCESS 1>/dev/null 2>&1     connect ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY}     show configuration verbose; ! if [ $? 
-ne 0 ] ; then         echo "       ERROR: Broker status is not SUCCESS" | tee -a ${V_LOGFILE}   2>&1 ;         V_RET=1 else          echo "      DATA GUARD OK " | tee -a ${V_LOGFILE}   2>&1 ;         V_RET=0 fi Hope it helps.

    Read the article

  • App cannot start at all in Android 2.2 (Froyo)

    - by Roland Lim
    Dear fellow Android developers & Google Engineers, My app has been running okay until the recent Froyo update. After installing the Android 2.2 SDK, I can compile my code without any errors. However, when I run it, it just force closes: Here's the log: 05-23 10:15:13.463: DEBUG/AndroidRuntime(423): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<< 05-23 10:15:13.463: DEBUG/AndroidRuntime(423): CheckJNI is ON 05-23 10:15:14.193: DEBUG/AndroidRuntime(423): --- registering native functions --- 05-23 10:15:15.293: DEBUG/AndroidRuntime(423): Shutting down VM 05-23 10:15:15.303: DEBUG/dalvikvm(423): Debugger has detached; object registry had 1 entries 05-23 10:15:15.333: INFO/AndroidRuntime(423): NOTE: attach of thread 'Binder Thread #3' failed 05-23 10:15:16.003: DEBUG/AndroidRuntime(431): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<< 05-23 10:15:16.013: DEBUG/AndroidRuntime(431): CheckJNI is ON 05-23 10:15:16.273: DEBUG/AndroidRuntime(431): --- registering native functions --- 05-23 10:15:17.392: INFO/ActivityManager(59): Starting activity: Intent { act=android.intent.action.MAIN cat= [android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.handyapps.easymoney/.EasyMoney } 05-23 10:15:17.602: DEBUG/AndroidRuntime(431): Shutting down VM 05-23 10:15:17.662: DEBUG/dalvikvm(431): Debugger has detached; object registry had 1 entries 05-23 10:15:17.742: INFO/AndroidRuntime(431): NOTE: attach of thread 'Binder Thread #3' failed 05-23 10:15:17.912: INFO/ActivityManager(59): Start proc com.handyapps.easymoney for activity com.handyapps.easymoney/.EasyMoney: pid=438 uid=10035 gids={1006, 1015} 05-23 10:15:19.032: DEBUG/AndroidRuntime(438): Shutting down VM 05-23 10:15:19.032: WARN/dalvikvm(438): threadid=1: thread exiting with uncaught exception (group=0x4001d800) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): FATAL EXCEPTION: main 05-23 10:15:19.062: ERROR/AndroidRuntime(438): java.lang.RuntimeException: Unable to instantiate application com.handyapps.easymoney.EasyMoney: java.lang.ClassCastException: com.handyapps.easymoney.EasyMoney 05-23 10:15:19.062: ERROR/AndroidRuntime(438): at android.app.ActivityThread$PackageInfo.makeApplication (ActivityThread.java:649) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): at android.app.ActivityThread.handleBindApplication (ActivityThread.java:4232) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): at android.app.ActivityThread.access$3000(ActivityThread.java:125) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2071) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): at android.os.Handler.dispatchMessage(Handler.java:99) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): at android.os.Looper.loop(Looper.java:123) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): at android.app.ActivityThread.main(ActivityThread.java:4627) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): at java.lang.reflect.Method.invokeNative(Native Method) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): at java.lang.reflect.Method.invoke(Method.java:521) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run (ZygoteInit.java:868) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): at dalvik.system.NativeStart.main(Native Method) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): Caused by: java.lang.ClassCastException: com.handyapps.easymoney.EasyMoney 05-23 10:15:19.062: 
ERROR/AndroidRuntime(438): at android.app.Instrumentation.newApplication(Instrumentation.java:957) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): at android.app.Instrumentation.newApplication(Instrumentation.java:942) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): at android.app.ActivityThread$PackageInfo.makeApplication (ActivityThread.java:644) 05-23 10:15:19.062: ERROR/AndroidRuntime(438): ... 11 more 05-23 10:15:19.082: WARN/ActivityManager(59): Force finishing activity com.handyapps.easymoney/.EasyMoney 05-23 10:15:19.592: WARN/ActivityManager(59): Activity pause timeout for HistoryRecord{450018f0 com.handyapps.easymoney/.EasyMoney} //////////////THE ANDROID MANIFEST FILE//// <uses-permission android:name="android.permission.READ_PHONE_STATE"/> <uses-permission android:name="android.permission.CAMERA"/> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /> <uses-feature android:name="android.hardware.camera" /> <uses-sdk android:minSdkVersion="3" android:targetSdkVersion="4" /> <application android:icon="@drawable/icon" android:name="@string/app_name" android:label="@string/app_name" android:debuggable="false"> <activity android:name=".EasyMoney" android:label="@string/app_name" android:theme="@android:style/Theme.NoTitleBar" android:launchMode="singleTask" android:clearTaskOnLaunch="true"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:name=".TranList" android:label="@string/app_name" android:theme="@android:style/Theme.Light.NoTitleBar"/> <activity android:name=".TranEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/> <activity android:name=".BillReminderEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/> <activity android:name=".BillReminderList" android:launchMode="singleTop" android:theme="@android:style/Theme.Light.NoTitleBar"/> <activity android:name=".BudgetList" android:theme="@android:style/Theme.Light.NoTitleBar"/> <activity android:name=".BudgetEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/> <activity android:name=".Search" android:theme="@style/CustomDialogTheme" android:windowSoftInputMode="stateAlwaysHidden"/> <activity android:name=".PasscodeEntry" android:theme="@style/CustomDialogTheme" android:windowSoftInputMode="stateAlwaysHidden" android:screenOrientation="portrait"/> <activity android:name=".AccountList" android:theme="@android:style/Theme.Light.NoTitleBar"> </activity> <activity android:name=".AccountEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/> <activity android:name=".UserSettingsEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/> <activity android:name=".CurrencySettingsEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/> <activity android:name=".DisplaySettingsEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/> <activity android:name=".BackupSettingsEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/> <activity android:name=".CategoryList" android:theme="@android:style/Theme.Light.NoTitleBar" /> <activity android:name=".CategoryEdit" 
android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/> <activity android:name=".ExpenseByCategory" android:theme="@android:style/Theme.Light.NoTitleBar"/> <activity android:name=".BalanceReport" android:theme="@android:style/Theme.Light.NoTitleBar"/> <activity android:name=".MonthlyExpenseReport" android:theme="@android:style/Theme.Light.NoTitleBar"/> <activity android:name=".MonthlyIncomeReport" android:theme="@android:style/Theme.Light.NoTitleBar"/> <activity android:name=".MonthlyCashflowReport" android:theme="@android:style/Theme.Light.NoTitleBar"/> <activity android:name=".PhotoList" android:theme="@android:style/Theme.Light.NoTitleBar" /> <activity android:name=".ExpenseByPayee" android:theme="@android:style/Theme.Light.NoTitleBar"/> <activity android:name=".ExpenseBySubCategory" android:theme="@android:style/Theme.Light.NoTitleBar"/> <service android:name="StartAlarm_Service"> <intent-filter> <action android:name="com.handyapps.easymoney.StartAlarm_Service" /> </intent-filter> </service> <service android:name=".AlarmService_Service" android:process=":remote" /> <receiver android:name="StartupIntentReceiver"> <intent-filter> <action android:name="android.intent.action.BOOT_COMPLETED" /> <category android:name="android.intent.category.HOME" /> </intent-filter> </receiver> <receiver android:name=".WidgetProvider" android:label="@string/widget_name"> <intent-filter> <action android:name="android.appwidget.action.APPWIDGET_UPDATE" /> </intent-filter> <meta-data android:name="android.appwidget.provider" android:resource="@xml/widget" /> </receiver> <receiver android:name=".WidgetProvider" android:label="@string/widget_name"> <intent-filter> <action android:name="android.appwidget.action.APPWIDGET_UPDATE" /> <data android:scheme="easymoney_widget" /> </intent-filter> <meta-data android:name="android.appwidget.provider" android:resource="@xml/widget" /> </receiver> <receiver android:name=".WidgetProvider"> <intent-filter> <action android:name="com.handyapps.easymoney.WIDGET_CONTROL" /> <data android:scheme="easymoney_widget" /> </intent-filter> </receiver> </application> The main startup class is com.handyapps.easymoney.EasyMoney. I placed a breakpoint at the start of the onCreate() method but I discovered it didn't even reach there. Somehow, the application just couldn't be loaded in Android 2.2... but it works perfectly fine for all the previous Android versions. Been trying to find the cause for the past 2 days but am totally stumped!! Any help will be greatly appreciated!!!! Thanks!! Roland
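One likely culprit, judging from the stack trace ("Unable to instantiate application com.handyapps.easymoney.EasyMoney ... ClassCastException"): on the <application> element, android:name must point to an android.app.Application subclass, but here it is set to "@string/app_name", which Froyo resolves to the EasyMoney activity class and then fails to cast. A hedged sketch of the manifest fix, assuming EasyMoney is an Activity and not an Application subclass:

    <!-- Hypothetical fix: drop android:name (reserved for an Application
         subclass) and use android:label for the display name instead -->
    <application android:icon="@drawable/icon"
                 android:label="@string/app_name"
                 android:debuggable="false">

Earlier Android versions were more lenient about this attribute, which would explain why the app ran fine before 2.2.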

    Read the article

  • SPP Socket createRfcommSocketToServiceRecord will not connect

    - by philDev
Hello, I want to use Android 2.1 to connect to an external Bluetooth device, which is offering an SPP port to me. In this case it is an external GPS unit. When I try to connect, I can't connect an established socket while being in the "client" role. Then if I try to set up a socket (being in the server role) to RECEIVE text from my PC, everything works just fine. The computer can connect as the client to the socket on the phone via SPP, using the SPP UUID or some random UUID. So the problem is not that I'm using the wrong UUID. But the other way around, calling connect() on a client socket established with createRfcommSocketToServiceRecord(UUID uuid), just doesn't work. Sadly I don't have the time to inspect the problem further. It would be great if somebody could point me in the right direction. The problem has to be in the following part of the log file. Greets PhilDev P.S. I'm going to be present during the Office hours. Here is the log file: 03-21 03:10:52.020: DEBUG/BluetoothSocket.cpp(4643): initSocketFromFdNative 03-21 03:10:52.025: DEBUG/BluetoothSocket(4643): connect 03-21 03:10:52.025: DEBUG/BluetoothSocket(4643): doSdp 03-21 03:10:52.050: DEBUG/ADAPTER(2132): create_device(01:00:00:7F:B5:B3) 03-21 03:10:52.050: DEBUG/ADAPTER(2132): adapter_create_device(01:00:00:7F:B5:B3) 03-21 03:10:52.055: DEBUG/DEVICE(2132): Creating device [address = 01:00:00:7F:B5:B3] /org/bluez/2132/hci0/dev_01_00_00_7F_B5_B3 [name = ] 03-21 03:10:52.055: DEBUG/DEVICE(2132): btd_device_ref(0x10c18): ref=1 03-21 03:10:52.065: INFO/BluetoothEventLoop.cpp(1914): event_filter: Received signal org.bluez.Adapter:DeviceCreated from /org/bluez/2132/hci0 03-21 03:10:52.065: INFO/BluetoothService.cpp(1914): ... Object Path = /org/bluez/2132/hci0/dev_01_00_00_7F_B5_B3 03-21 03:10:52.065: INFO/BluetoothService.cpp(1914): ...
Pattern = 00001101-0000-1000-8000-00805f9b34fb, strlen = 36 03-21 03:10:52.070: DEBUG/DEVICE(2132): *************DiscoverServices******** 03-21 03:10:52.070: INFO/DTUN_HCID(2132): dtun_client_get_remote_svc_channel: starting discovery on (uuid16=0x0011) 03-21 03:10:52.070: INFO/DTUN_HCID(2132): bdaddr=01:00:00:7F:B5:B3 03-21 03:10:52.070: INFO/DTUN_CLNT(2132): Client calling DTUN_METHOD_DM_GET_REMOTE_SERVICE_CHANNEL (id 4) 03-21 03:10:52.070: INFO/(2106): DTUN_ReceiveCtrlMsg: [DTUN] Received message [BTLIF_DTUN_METHOD_CALL] 4354 03-21 03:10:52.070: INFO/(2106): handle_method_call: handle_method_call :: received DTUN_METHOD_DM_GET_REMOTE_SERVICE_CHANNEL (id 4), len 134 03-21 03:10:52.075: ERROR/BTLD(2106): ****************search UUID = 1101*********** 03-21 03:10:52.075: INFO//system/bin/btld(2103): btapp_dm_GetRemoteServiceChannel() 03-21 03:10:52.120: DEBUG/BluetoothService(1914): updateDeviceServiceChannelCache(01:00:00:7F:B5:B3) 03-21 03:10:52.120: DEBUG/BluetoothEventLoop(1914): ClassValue: null for remote device: 01:00:00:7F:B5:B3 is null 03-21 03:10:52.120: INFO/BluetoothEventLoop.cpp(1914): event_filter: Received signal org.bluez.Adapter:PropertyChanged from /org/bluez/2132/hci0 03-21 03:10:52.305: WARN/BTLD(2106): bta_dm_check_av:0 03-21 03:10:56.395: DEBUG/WifiService(1914): ACTION_BATTERY_CHANGED pluggedType: 2 03-21 03:10:57.440: WARN/BTLD(2106): SDP - Rcvd conn cnf with error: 0x4 CID 0x43 03-21 03:10:57.440: INFO/BTL-IFS(2106): send_ctrl_msg: [BTL_IFS CTRL] send BTLIF_DTUN_SIGNAL_EVT (CTRL) 13 pbytes (hdl 10) 03-21 03:10:57.445: INFO/DTUN_CLNT(2132): dtun-rx signal [DTUN_SIG_DM_RMT_SERVICE_CHANNEL] (id 42) len 15 03-21 03:10:57.445: INFO/DTUN_HCID(2132): dtun_dm_sig_rmt_service_channel: success=1, service=00000000 03-21 03:10:57.445: ERROR/DTUN_HCID(2132): discovery unsuccessful! package de.phil_dev.android.BT; import java.io.BufferedInputStream; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.util.UUID; import android.app.Activity; import android.bluetooth.BluetoothAdapter; import android.bluetooth.BluetoothClass; import android.bluetooth.BluetoothDevice; import android.bluetooth.BluetoothServerSocket; import android.bluetooth.BluetoothSocket; import android.content.Intent; import android.os.Bundle; import android.util.Log; import android.widget.Toast; public class ThinBTClient extends Activity { private static final String TAG = "THINBTCLIENT"; private static final boolean D = true; private BluetoothAdapter mBluetoothAdapter = null; private BluetoothSocket btSocket = null; private BufferedInputStream inStream = null; private BluetoothServerSocket myServerSocket; private ConnectThread myConnection; private ServerThread myServer; // Well known SPP UUID (will *probably* map to // RFCOMM channel 1 (default) if not in use); // see comments in onResume(). private static final UUID MY_UUID = UUID .fromString("00001101-0000-1000-8000-00805F9B34FB"); // .fromString("94f39d29-7d6d-437d-973b-fba39e49d4ee"); // ==> hardcode your slaves MAC address here <== // PC // private static String address = "00:09:DD:50:86:A0"; // GPS private static String address = "00:0B:0D:8E:D4:33"; /** Called when the activity is first created. 
*/ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); if (D) Log.e(TAG, "+++ ON CREATE +++"); mBluetoothAdapter = BluetoothAdapter.getDefaultAdapter(); if (mBluetoothAdapter == null) { Toast.makeText(this, "Bluetooth is not available.", Toast.LENGTH_LONG).show(); finish(); return; } if (!mBluetoothAdapter.isEnabled()) { Toast.makeText(this, "Please enable your BT and re-run this program.", Toast.LENGTH_LONG).show(); finish(); return; } if (D) Log.e(TAG, "+++ DONE IN ON CREATE, GOT LOCAL BT ADAPTER +++"); } @Override public void onStart() { super.onStart(); if (D) Log.e(TAG, "++ ON START ++"); } @Override public void onResume() { super.onResume(); if (D) { Log.e(TAG, "+ ON RESUME +"); Log.e(TAG, "+ ABOUT TO ATTEMPT CLIENT CONNECT +"); } // Make the phone discoverable // When this returns, it will 'know' about the server, // via it's MAC address. // mBluetoothAdapter.startDiscovery(); BluetoothDevice device = mBluetoothAdapter.getRemoteDevice(address); Log.e(TAG, device.getName() + " connected"); // myServer = new ServerThread(); // myServer.start(); myConnection = new ConnectThread(device); myConnection.start(); } @Override public void onPause() { super.onPause(); if (D) Log.e(TAG, "- ON PAUSE -"); try { btSocket.close(); } catch (IOException e2) { Log.e(TAG, "ON PAUSE: Unable to close socket.", e2); } } @Override public void onStop() { super.onStop(); if (D) Log.e(TAG, "-- ON STOP --"); } @Override public void onDestroy() { super.onDestroy(); if (D) Log.e(TAG, "--- ON DESTROY ---"); } private class ServerThread extends Thread { private final BluetoothServerSocket myServSocket; public ServerThread() { BluetoothServerSocket tmp = null; // create listening socket try { tmp = mBluetoothAdapter .listenUsingRfcommWithServiceRecord( "myServer", MY_UUID); } catch (IOException e) { Log.e(TAG, "Server establishing failed"); } myServSocket = tmp; } public void run() { Log.e(TAG, "Beginn waiting for connection"); BluetoothSocket connectSocket = null; InputStream inStream = null; byte[] buffer = new byte[1024]; int bytes; while (true) { try { connectSocket = myServSocket.accept(); } catch (IOException e) { Log.e(TAG, "Connection failed"); break; } Log.e(TAG, "ALL THE WAY AROUND"); try { connectSocket = connectSocket.getRemoteDevice() .createRfcommSocketToServiceRecord(MY_UUID); connectSocket.connect(); } catch (IOException e1) { Log.e(TAG, "DIDNT WORK"); } // handle Connection try { inStream = connectSocket.getInputStream(); while (true) { try { bytes = inStream.read(buffer); Log.e(TAG, "Received: " + buffer.toString()); } catch (IOException e3) { Log.e(TAG, "disconnected"); break; } } } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); break; } } } void cancel() { } } private class ConnectThread extends Thread { private final BluetoothSocket mySocket; private final BluetoothDevice myDevice; public ConnectThread(BluetoothDevice device) { myDevice = device; BluetoothSocket tmp = null; try { tmp = device.createRfcommSocketToServiceRecord(MY_UUID); } catch (IOException e) { Log.e(TAG, "CONNECTION IN THREAD DIDNT WORK"); } mySocket = tmp; } public void run() { Log.e(TAG, "STARTING TO CONNECT THE SOCKET"); setName("My Connection Thread"); InputStream inStream = null; boolean run = false; //mBluetoothAdapter.cancelDiscovery(); try { mySocket.connect(); run = true; } catch (IOException e) { run = false; Log.e(TAG, this.getName() + ": CONN DIDNT WORK, Try closing socket"); try { mySocket.close(); } catch 
(IOException e1) { Log.e(TAG, this.getName() + ": COULD CLOSE SOCKET", e1); this.destroy(); } } synchronized (ThinBTClient.this) { myConnection = null; } byte[] buffer = new byte[1024]; int bytes; // handle Connection try { inStream = mySocket.getInputStream(); while (run) { try { bytes = inStream.read(buffer); Log.e(TAG, "Received: " + buffer.toString()); } catch (IOException e3) { Log.e(TAG, "disconnected"); } } } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } // starting connected thread (handling there in and output } public void cancel() { try { mySocket.close(); } catch (IOException e) { Log.e(TAG, this.getName() + " SOCKET NOT CLOSED"); } } } }
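Two workarounds are commonly suggested for this symptom; both are assumptions here, not confirmed fixes for this particular GPS unit. First, always call cancelDiscovery() before connect(), since an ongoing inquiry scan starves RFCOMM connections (the call is commented out in the code above). Second, if the SDP lookup keeps failing ("discovery unsuccessful!" in the log), fall back to the hidden createRfcommSocket(int channel) method via reflection to target RFCOMM channel 1 directly. A sketch, using local names device/socket for illustration and requiring java.lang.reflect.Method:

    BluetoothSocket socket = device.createRfcommSocketToServiceRecord(MY_UUID);
    mBluetoothAdapter.cancelDiscovery(); // inquiry scans block RFCOMM connects
    try {
        socket.connect();
    } catch (IOException e) {
        try {
            // hidden API: skips the failing SDP query, connects to channel 1
            Method m = device.getClass().getMethod("createRfcommSocket", int.class);
            socket = (BluetoothSocket) m.invoke(device, 1);
            socket.connect();
        } catch (Exception reflectFail) {
            Log.e(TAG, "fallback connect failed", reflectFail);
        }
    }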

    Read the article

  • SQL SERVER – Update Statistics are Sampled By Default

    - by pinaldave
After reading my earlier post SQL SERVER – Create Primary Key with Specific Name when Creating Table on Statistics, I received another question from a blog reader. The question is as follows: Question: Are the statistics sampled by default? Answer: Yes. The sampling rate can be specified by the user, and it can be anywhere from a very low value to 100%. Let us do a small experiment to verify this: create a very large table, let the statistics be created by default, and examine whether the statistics are sampled or not. USE [AdventureWorks] GO -- Create Table CREATE TABLE [dbo].[StatsTest]( [ID] [int] IDENTITY(1,1) NOT NULL, [FirstName] [varchar](100) NULL, [LastName] [varchar](100) NULL, [City] [varchar](100) NULL, CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC) ) ON [PRIMARY] GO -- Insert 1 Million Rows INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City) SELECT TOP 1000000 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Update the statistics UPDATE STATISTICS [dbo].[StatsTest] GO -- Shows the statistics DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest) GO -- Clean up DROP TABLE [dbo].[StatsTest] GO Now let us observe the result of the DBCC SHOW_STATISTICS. The resultset shows that sampling definitely occurs for a large dataset. The percentage of sampling is based on data distribution as well as the kind of data in the table. Before dropping the table, let us first check its size. The size of the table is 35 MB. Now, let us run the above code with a smaller number of rows. USE [AdventureWorks] GO -- Create Table CREATE TABLE [dbo].[StatsTest]( [ID] [int] IDENTITY(1,1) NOT NULL, [FirstName] [varchar](100) NULL, [LastName] [varchar](100) NULL, [City] [varchar](100) NULL, CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC) ) ON [PRIMARY] GO -- Insert 1 Hundred Thousand Rows INSERT INTO [dbo].[StatsTest] (FirstName,LastName,City) SELECT TOP 100000 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Update the statistics UPDATE STATISTICS [dbo].[StatsTest] GO -- Shows the statistics DBCC SHOW_STATISTICS ("StatsTest", PK_StatsTest) GO -- Clean up DROP TABLE [dbo].[StatsTest] GO You can see that Rows Sampled is now the same as the Rows of the table. In this case, the sample rate is 100%. Before dropping the table, let us also check its size. The size of the table is less than 4 MB. Let us compare the result sets for reference. Test 1: Total Rows: 1000000, Rows Sampled: 255420, Size of the Table: 35.516 MB Test 2: Total Rows: 100000, Rows Sampled: 100000, Size of the Table: 3.555 MB The reason behind the sampling in Test 1 is that the data space is larger than 8 MB, and therefore it uses more than 1024 data pages. If the data space is smaller than 8 MB and uses fewer than 1024 data pages, then the sampling does not happen.
Sampling aids in reducing excessive data scans; however, sometimes it reduces the accuracy of the statistics as well. Please note that this is just a sample test and there is no way it can be claimed as a benchmark test. The result can be dissimilar on different machines. There is a lot of other information that could be included on this subject. I will write a detailed post covering all of it very soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics
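To check the sampling rate on your own tables without scrolling through the full histogram, the documented STAT_HEADER option limits DBCC SHOW_STATISTICS to the header row; a small sketch, reusing the table and statistic names from the test above:

    -- Rows vs. Rows Sampled in a single row of output
    DBCC SHOW_STATISTICS ('dbo.StatsTest', PK_StatsTest) WITH STAT_HEADER;
    -- A sampled statistic shows Rows Sampled < Rows; a fullscan shows them equal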

    Read the article

  • SQL SERVER – SOS_SCHEDULER_YIELD – Wait Type – Day 8 of 28

    - by pinaldave
This is a very interesting wait type and quite often seen as one of the top wait types. Let us discuss this today. From Book On-Line: Occurs when a task voluntarily yields the scheduler for other tasks to execute. During this wait the task is waiting for its quantum to be renewed. SOS_SCHEDULER_YIELD Explanation: SQL Server has multiple threads, and the basic working methodology for SQL Server is that it does not let any “runnable” thread starve. Now let us assume the SQL Server OS is very busy running threads on all the schedulers. There are always new threads coming up which are ready to run (in other words, runnable). Thread management of SQL Server is decided by SQL Server and not the operating system. SQL Server runs in non-preemptive mode most of the time, meaning the threads are cooperative and let other threads run from time to time by yielding. When any thread yields itself for another thread, it creates this wait. If there are many such threads, it clearly indicates that the CPU is under pressure. You can run the following DMV to see how many runnable tasks there are in your system. SELECT scheduler_id, current_tasks_count, runnable_tasks_count, work_queue_count, pending_disk_io_count FROM sys.dm_os_schedulers WHERE scheduler_id < 255 GO If you notice a two-digit number in runnable_tasks_count continuously for a long time (not once in a while), you will know that there is CPU pressure. A two-digit number is usually considered a bad thing; you can read the description of the above DMV over here. Additionally, there are several other counters (%Processor Time and other processor-related counters) that you can refer to in order to validate CPU pressure, along with the method explained above. Reducing SOS_SCHEDULER_YIELD wait: This is the trickiest part of this procedure. As discussed, this particular wait type relates to CPU pressure. Adding more CPU is the solution in simple terms; however, it is not easy to implement. There are other things that you can consider when this wait type is very high. Here is a query to find the most expensive CPU-related queries from the cache. Note: Queries that used lots of resources but are not cached will not be caught here. SELECT SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1, ((CASE qs.statement_end_offset WHEN -1 THEN DATALENGTH(qt.TEXT) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2)+1), qs.execution_count, qs.total_logical_reads, qs.last_logical_reads, qs.total_logical_writes, qs.last_logical_writes, qs.total_worker_time, qs.last_worker_time, qs.total_elapsed_time/1000000 total_elapsed_time_in_S, qs.last_elapsed_time/1000000 last_elapsed_time_in_S, qs.last_execution_time, qp.query_plan FROM sys.dm_exec_query_stats qs CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp ORDER BY qs.total_worker_time DESC -- CPU time You can find the most expensive queries that are utilizing lots of CPU (from the cache) and tune them accordingly. Moreover, you can find the longest-running queries and attempt to tune them if there is any processor-offending code. Additionally, pay attention to total_worker_time, because if that is also consistently high, the CPU is under too much pressure. You can also check the perfmon counters for compilations, as compilations tend to use a good amount of CPU.
An index rebuild is also a CPU-intensive process, but we should not consider it the main culprit for this wait type, because on a high-transaction OLTP system it is genuinely needed to reduce fragmentation. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All of the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
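As a quick way to look at the compilation counters mentioned above from inside SQL Server (rather than through perfmon), a sketch like this can be used; note the values are cumulative since instance startup, so sample them twice and compare the difference:

    -- Compilation counters exposed through the performance-counter DMV
    SELECT counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN ('SQL Compilations/sec', 'SQL Re-Compilations/sec');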

    Read the article

  • SQL SERVER – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
After the excellent response to my quiz – Why SELECT * throws an error but SELECT COUNT(*) does not? – I have decided to ask another puzzling question to all of you. I am running this test on SQL Server 2008 R2. Here is the quick scenario of my setup. Create a table. Insert 1,000 records. Check the statistics. Now insert 10 times more rows (10,000 records). Check the statistics – they will NOT be updated. Note: Auto Update Statistics and Auto Create Statistics for the database are TRUE. Expected result – statistics should be updated – SQL SERVER – When are Statistics Updated – What triggers Statistics to Update. Now the question is: why are the statistics not updated? The common answer is – we can update the statistics ourselves using UPDATE STATISTICS TableName WITH FULLSCAN, ALL. However, the solution I am looking for is one where the statistics are updated automatically, based on the algorithm mentioned here. Now the solution is to ____________________. Vinod Kumar is not allowed to participate here, as he is the one who helped me build this puzzle. I will publish the solution next week. Please leave a comment, and if your comment contains a valid answer, I will publish it with due credit. Here is the script to reproduce the scenario I mentioned. -- Execution Plans Difference -- Create Sample Database CREATE DATABASE SampleDB GO USE SampleDB GO -- Create Table CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO -- Insert One Thousand Records -- INSERT 1 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Display statistics of the table - none listed sp_helpstats N'ExecTable', 'ALL' GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City  = 'New York' GO -- Display statistics of the table sp_helpstats N'ExecTable', 'ALL' GO -- Replace your Statistics over here -- NOTE: Replace your _WA_Sys with stats from above query DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7); GO -------------------------------------------------------------- -- Round 2 -- Insert Ten Thousand Records -- INSERT 2 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 10000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN  ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City  = 'New York' GO -- Display statistics of the table sp_helpstats
N'ExecTable', 'ALL' GO -- Replace your Statistics over here -- NOTE: Replace your _WA_Sys with stats from above query DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7); GO -- You will notice that the statistics still reflect only the first 1000 rows -- Clean up Database DROP TABLE ExecTable GO USE MASTER GO ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE; GO DROP DATABASE SampleDB GO Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics, Statistics

    Read the article

  • RPi and Java Embedded GPIO: Writing Java code to blink LED

    - by hinkmond
So, you've followed the previous steps to install Java Embedded on your Raspberry Pi, you went to Fry's and picked up some jumper wires, LEDs, and resistors, you hooked up the wires, LED, and resistor to the correct pins, and now you want to start programming in Java on your RPi? Yes? OK, then... Here we go. You can use the following source code to blink your first LED on your RPi using Java. In the code you can see that I'm not using any complicated gpio libraries like wiringpi or pi4j, and I'm not doing any low-level pin manipulation like you can in C. And, I'm not using python (hell no!). This is Java programming, so we keep it simpler (and more readable) than those other programming languages. See: Write Java code to do this In the Java code, I'm opening up the RPi Debian Wheezy well-defined file handles to control the GPIO ports. First I'm resetting everything using the unexport/export file handles. (On the RPi, if you open the well-defined file handles and write certain ASCII text to them, you can drive your GPIO to perform certain operations. See this GPIO reference). Next, I write a "1" then "0" to the value file handle of the GPIO0 port (see the previous pinout diagram). That makes the LED blink. Then, I loop to infinity. Easy, huh? /* * Java Embedded Raspberry Pi GPIO app */ package jerpigpio; import java.io.FileWriter; /** * * @author hinkmond */ public class JerpiGPIO { static final String GPIO_OUT = "out"; static final String GPIO_ON = "1"; static final String GPIO_OFF = "0"; static final String GPIO_CH00="0"; /** * @param args the command line arguments */ public static void main(String[] args) { FileWriter commandFile; try { /*** Init GPIO port for output ***/ // Open file handles to GPIO port unexport and export controls FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport"); FileWriter exportFile = new FileWriter("/sys/class/gpio/export"); // Reset the port unexportFile.write(GPIO_CH00); unexportFile.flush(); // Set the port for use exportFile.write(GPIO_CH00); exportFile.flush(); // Open file handle to port input/output control FileWriter directionFile = new FileWriter("/sys/class/gpio/gpio"+GPIO_CH00+"/direction"); // Set port for output directionFile.write(GPIO_OUT); directionFile.flush(); /*--- Send commands to GPIO port ---*/ // Open file handle to issue commands to GPIO port commandFile = new FileWriter("/sys/class/gpio/gpio"+GPIO_CH00+"/value"); // Loop forever while (true) { // Set GPIO port ON commandFile.write(GPIO_ON); commandFile.flush(); // Wait for a while java.lang.Thread.sleep(200); // Set GPIO port OFF commandFile.write(GPIO_OFF); commandFile.flush(); // Wait for a while java.lang.Thread.sleep(200); } } catch (Exception exception) { exception.printStackTrace(); } } } Hinkmond
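Before running the Java version, it can help to sanity-check the wiring by driving the same sysfs interface from a shell on the RPi; a rough sketch (run as root, with GPIO0 wired up as in the previous steps):

    echo 0 > /sys/class/gpio/export          # claim GPIO0 for userspace
    echo out > /sys/class/gpio/gpio0/direction
    echo 1 > /sys/class/gpio/gpio0/value     # LED on
    sleep 1
    echo 0 > /sys/class/gpio/gpio0/value     # LED off
    echo 0 > /sys/class/gpio/unexport        # release the port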

    Read the article

  • Running two wsgi applications on the same server gdal org exception with apache2/modwsgi

    - by monkut
I'm trying to run two wsgi applications, one django and the other tilestache, using the same server. The tilestache server accesses the db via django to query it. In the process of serving tiles it performs a transform on the incoming bbox, and in this process hits the following error. The transform works without error for the specific bbox polygon when run manually from the python shell: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 325, in __call__ mimetype, content = requestHandler(self.config, environ['PATH_INFO'], environ['QUERY_STRING']) File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 231, in requestHandler mimetype, content = getTile(layer, coord, extension) File "/usr/lib/python2.7/dist-packages/TileStache/__init__.py", line 84, in getTile tile = layer.render(coord, format) File "/usr/lib/python2.7/dist-packages/TileStache/Core.py", line 295, in render tile = provider.renderArea(width, height, srs, xmin, ymin, xmax, ymax, coord.zoom) File "/var/www/tileserver/providers.py", line 59, in renderArea bbox.transform(METERS_SRID) File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/geos/geometry.py", line 520, in transform g = gdal.OGRGeometry(self.wkb, srid) File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geometries.py", line 131, in __init__ self.__class__ = GEO_CLASSES[self.geom_type.num] File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geometries.py", line 245, in geom_type return OGRGeomType(capi.get_geom_type(self.ptr)) File "/usr/local/lib/python2.7/dist-packages/django/contrib/gis/gdal/geomtype.py", line 43, in __init__ raise OGRException('Invalid OGR Integer Type: %d' % type_input) OGRException: Invalid OGR Integer Type: 1987180391 I think I've hit the non-thread-safe issue with GDAL, mentioned on the django site. Is there a way I could configure this so that it would work? Apache Version: Apache/2.2.22 (Ubuntu) mod_wsgi/3.3 Python/2.7.3 configured Apache apache2/sites-available/default: <VirtualHost *:80> ServerAdmin ironman@localhost DocumentRoot /var/www/bin LogLevel warn WSGIDaemonProcess lbs processes=2 maximum-requests=500 threads=1 WSGIProcessGroup lbs WSGIScriptAlias / /var/www/bin/apache/django.wsgi Alias /static /var/www/lbs/static/ </VirtualHost> <VirtualHost *:8080> ServerAdmin ironman@localhost DocumentRoot /var/www/bin LogLevel warn WSGIDaemonProcess tilestache processes=1 maximum-requests=500 threads=1 WSGIProcessGroup tilestache WSGIScriptAlias / /var/www/bin/tileserver/tilestache.wsgi </VirtualHost> Django Version: 1.4 httpd.conf: Listen 8080 NameVirtualHost *:8080 UPDATE I've added a test.wsgi script to determine if the GLOBAL interpreter setting is correct, as mentioned by Graham and described here: http://code.google.com/p/modwsgi/wiki/CheckingYourInstallation#Sub_Interpreter_Being_Used It seems to show the expected result: [Tue Aug 14 10:32:01 2012] [notice] Apache/2.2.22 (Ubuntu) mod_wsgi/3.3 Python/2.7.3 configured -- resuming normal operations [Tue Aug 14 10:32:01 2012] [info] mod_wsgi (pid=29891): Attach interpreter ''. I've worked around the issue for now by changing the srs used in the db so that the transform is unnecessary in the tilestache app. I don't understand why the transform() method works when called in the django app but fails in the tilestache app.
tilestache.wsgi:

#!/usr/bin/python
import os
import time
import sys
import TileStache

current_dir = os.path.abspath(os.path.dirname(__file__))
project_dir = os.path.realpath(os.path.join(current_dir, "..", ".."))
sys.path.append(project_dir)
sys.path.append(current_dir)
os.environ['DJANGO_SETTINGS_MODULE'] = 'bin.settings'
sys.stdout = sys.stderr

# wait for the apache django lbs server to start up,
# --> in order to retrieve the tilestache cfg
time.sleep(2)
tilestache_config_url = "http://127.0.0.1/tilestache/config/"
application = TileStache.WSGITileServer(tilestache_config_url)

UPDATE 2 So it turned out I did need to use a projection other than the google (900913) one in the db, so my previous workaround failed. While I'd still like to fix the underlying issue, this time I worked around it by adding a django view that performs the needed transform. So now tilestache requests the data through the django app instead of transforming internally.
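    A minimal sketch of such a workaround view (the view name, URL parameters, and the input SRID default are assumptions, not taken from the original post; METERS_SRID mirrors the constant in the provider traceback):

import json

from django.contrib.gis.geos import Polygon
from django.http import HttpResponse

METERS_SRID = 900913  # assumed target SRID, matching the provider code


def transformed_bbox(request):
    # Parse the bbox from the querystring, e.g. ?xmin=..&ymin=..&xmax=..&ymax=..
    xmin, ymin, xmax, ymax = (float(request.GET[k])
                              for k in ("xmin", "ymin", "xmax", "ymax"))
    bbox = Polygon.from_bbox((xmin, ymin, xmax, ymax))
    bbox.srid = int(request.GET.get("srid", 4326))  # assumed input SRID
    bbox.transform(METERS_SRID)  # the same call that fails under TileStache
    return HttpResponse(json.dumps({"extent": bbox.extent}),
                        content_type="application/json")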

    Read the article

  • Python script is exiting with no output and I have no idea why

    - by Adam Tuttle
    I'm attempting to debug a Subversion post-commit hook that calls some python scripts. What I've been able to determine so far is that when I run post-commit.bat manually (I've created a wrapper for it to make it easier) everything succeeds, but when SVN runs it, one particular step doesn't work. We're using CollabNet SVNServe, which I know from the documentation removes all environment variables. This had caused some problems earlier, but shouldn't be an issue now:

Before Subversion calls a hook script, it removes all variables - including $PATH on Unix, and %PATH% on Windows - from the environment. Therefore, your script can only run another program if you spell out that program's absolute name.

The relevant portion of post-commit.bat is:

echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log
set SITENAME=staging
set SVNPATH=branches/staging/wwwroot/
"C:\Python3\python.exe" C:\svn-repos\company\hooks\svn2ftp.py ^
    --svnUser="svnusername" ^
    --svnPass="svnpassword" ^
    --ftp-user=ftpuser ^
    --ftp-password=ftppassword ^
    --ftp-remote-dir=/ ^
    --access-url=svn://10.0.100.6/company ^
    --status-file="C:\svn-repos\company\hooks\svn2ftp-%SITENAME%.dat" ^
    --project-directory=%SVNPATH% "staging.company.com" %1 %2 >> c:\svn-repos\company\hooks\svn2ftp.out.log
echo -------------------------- >> c:\svn-repos\company\hooks\svn2ftp.out.log

When I run post-commit.bat manually, for example: post-commit c:\svn-repos\company 12345, I see output like the following in svn2ftp.out.log:

--------------------------
args1: c:\svn-repos\company
args0: staging.company.com
abspath: c:\svn-repos\company
project_dir: branches/staging/wwwroot/
local_repos_path: c:\svn-repos\company
getting youngest revision...
done, up-to-date
--------------------------

However, when I commit something to the repo and it runs automatically, the output is:

--------------------------
--------------------------

svn2ftp.py is a bit long, so I apologize, but here goes. I'll have some notes/disclaimers about its contents below it.

#!/usr/bin/env python
"""Usage: svn2ftp.py [OPTION...] FTP-HOST REPOS-PATH

Upload to FTP-HOST changes committed to the Subversion repository at
REPOS-PATH. Uses svn diff --summarize to only propagate the changed files.

Options:

 -?, --help                  Show this help message.
 -u, --ftp-user=USER         The username for the FTP server. Default: 'anonymous'
 -p, --ftp-password=P        The password for the FTP server. Default: '@'
 -P, --ftp-port=X            Port number for the FTP server. Default: 21
 -r, --ftp-remote-dir=DIR    The remote directory that is expected to resemble
                             the repository project directory
 -a, --access-url=URL        This is the URL that should be used when trying to
                             SVN export files so that they can be uploaded to
                             the FTP server
 -s, --status-file=PATH      Required. This script needs to store the last
                             successful revision that was transferred to the
                             server. PATH is the location of this file.
 -d, --project-directory=DIR If the project you are interested in sending to
                             the FTP server is not under the root of the
                             repository (/), set this parameter.
                             Example: -d 'project1/trunk/'
                             This should NOT start with a '/'.

2008.5.2 CKS
  Fixed possible Windows-related bug with tempfile, where the script didn't
  have permission to write to the tempfile. Replaced this with a open()-created
  file created in the CWD.
2008.5.13 CKS
  Added error logging. Added exception for file-not-found errors when deleting
  files.
2008.5.14 CKS
  Change file open to 'rb' mode, to prevent Python's universal newline support
  from stripping CR characters, causing later comparisons between FTP and SVN
  to report changes.
"""

try:
    import sys, os
    import logging
    logging.basicConfig(
        level=logging.DEBUG,
        format='%(asctime)s %(levelname)s %(message)s',
        filename='svn2ftp.debug.log',
        filemode='a'
    )
    console = logging.StreamHandler()
    console.setLevel(logging.ERROR)
    logging.getLogger('').addHandler(console)

    import getopt, tempfile, smtplib, traceback, subprocess
    from io import StringIO
    import pysvn
    import ftplib
    import inspect
except Exception as e:
    logging.error(e)
    # capture the location of the error
    frame = inspect.currentframe()
    stack_trace = traceback.format_stack(frame)
    logging.debug(stack_trace)
    print(stack_trace)
    # end capture
    sys.exit(1)

# defaults
host = ""
user = "anonymous"
password = "@"
port = 21
repo_path = ""
local_repos_path = ""
status_file = ""
project_directory = ""
remote_base_directory = ""
toAddrs = "[email protected]"
youngest_revision = ""

def email(toAddrs, message, subject, fromAddr='[email protected]'):
    headers = "From: %s\r\nTo: %s\r\nSubject: %s\r\n\r\n" % (fromAddr, toAddrs, subject)
    message = headers + message
    logging.info('sending email to %s...' % toAddrs)
    server = smtplib.SMTP('smtp.company.com')
    server.set_debuglevel(1)
    server.sendmail(fromAddr, toAddrs, message)
    server.quit()
    logging.info('email sent')

def captureErrorMessage(e):
    sout = StringIO()
    traceback.print_exc(file=sout)
    errorMessage = '\n'+('*'*80)+('\n%s'%e)+('\n%s\n'%sout.getvalue())+('*'*80)
    return errorMessage

def usage_and_exit(errmsg):
    """Print a usage message, plus an ERRMSG (if provided), then exit.
    If ERRMSG is provided, the usage message is printed to stderr and
    the script exits with a non-zero error code. Otherwise, the usage
    message goes to stdout, and the script exits with a zero errorcode."""
    if errmsg is None:
        stream = sys.stdout
    else:
        stream = sys.stderr
    print(__doc__, file=stream)
    if errmsg:
        print("\nError: %s" % (errmsg), file=stream)
        sys.exit(2)
    sys.exit(0)

def read_args():
    global host
    global user
    global password
    global port
    global repo_path
    global local_repos_path
    global status_file
    global project_directory
    global remote_base_directory
    global youngest_revision

    try:
        opts, args = getopt.gnu_getopt(sys.argv[1:], "?u:p:P:r:a:s:d:SU:SP:",
                                       ["help",
                                        "ftp-user=",
                                        "ftp-password=",
                                        "ftp-port=",
                                        "ftp-remote-dir=",
                                        "access-url=",
                                        "status-file=",
                                        "project-directory=",
                                        "svnUser=",
                                        "svnPass="
                                        ])
    except getopt.GetoptError as msg:
        usage_and_exit(msg)

    for opt, arg in opts:
        if opt in ("-?", "--help"):
            usage_and_exit()
        elif opt in ("-u", "--ftp-user"):
            user = arg
        elif opt in ("-p", "--ftp-password"):
            password = arg
        elif opt in ("-SU", "--svnUser"):
            svnUser = arg
        elif opt in ("-SP", "--svnPass"):
            svnPass = arg
        elif opt in ("-P", "--ftp-port"):
            try:
                port = int(arg)
            except ValueError as msg:
                usage_and_exit("Invalid value '%s' for --ftp-port." % (arg))
            if port < 1 or port > 65535:
                usage_and_exit("Value for --ftp-port must be a positive integer less than 65536.")
        elif opt in ("-r", "--ftp-remote-dir"):
            remote_base_directory = arg
        elif opt in ("-a", "--access-url"):
            repo_path = arg
        elif opt in ("-s", "--status-file"):
            status_file = os.path.abspath(arg)
        elif opt in ("-d", "--project-directory"):
            project_directory = arg

    if len(args) != 3:
        print(str(args))
        usage_and_exit("host and/or local_repos_path not specified (%d)" % len(args))
    host = args[0]
    print("args1: " + args[1])
    print("args0: " + args[0])
    print("abspath: " + os.path.abspath(args[1]))
    local_repos_path = os.path.abspath(args[1])
    print('project_dir:', project_directory)
    youngest_revision = int(args[2])
    if status_file == "":
        usage_and_exit("No status file specified")

def main():
    global host
    global user
    global password
    global port
    global repo_path
    global local_repos_path
    global status_file
    global project_directory
    global remote_base_directory
    global youngest_revision

    read_args()
    # repository, fs_ptr
    # get youngest revision
    print("local_repos_path: " + local_repos_path)
    print('getting youngest revision...')
    # youngest_revision = fs.youngest_rev(fs_ptr)
    assert youngest_revision, "Unable to lookup youngest revision."
    last_sent_revision = get_last_revision()

    if youngest_revision == last_sent_revision:
        # no need to continue. we should be up to date.
        print('done, up-to-date')
        return

    if last_sent_revision or youngest_revision < 10:
        # Only compare revisions if the DAT file contains a valid
        # revision number. Otherwise we risk waiting forever while
        # we parse and uploading every revision in the repo in the case
        # where a repository is retroactively configured to sync with ftp.
        pysvn_client = pysvn.Client()
        pysvn_client.callback_get_login = get_login
        rev1 = pysvn.Revision(pysvn.opt_revision_kind.number, last_sent_revision)
        rev2 = pysvn.Revision(pysvn.opt_revision_kind.number, youngest_revision)
        summary = pysvn_client.diff_summarize(repo_path, rev1, repo_path, rev2, True, False)
        print('summary len:', len(summary))
        if len(summary) > 0:
            print('connecting to %s...' % host)
            ftp = FTPClient(host, user, password)
            print('connected to %s' % host)
            ftp.base_path = remote_base_directory
            print('set remote base directory to %s' % remote_base_directory)
            # iterate through all the differences between revisions
            for change in summary:
                # determine whether the path of the change is relevant to the path
                # that is being sent, and modify the path as appropriate.
                print('change path:', change.path)
                ftp_relative_path = apply_basedir(change.path)
                print('ftp rel path:', ftp_relative_path)
                # only try to sync path if the path is in our project_directory
                if ftp_relative_path != "":
                    is_file = (change.node_kind == pysvn.node_kind.file)
                    if str(change.summarize_kind) == "delete":
                        print("deleting: " + ftp_relative_path)
                        try:
                            ftp.delete_path("/" + ftp_relative_path, is_file)
                        except ftplib.error_perm as e:
                            if 'cannot find the' in str(e) or 'not found' in str(e):
                                # Log, but otherwise ignore path-not-found errors
                                # when deleting, since it's not a disaster if the file
                                # we want to delete is already gone.
                                logging.error(captureErrorMessage(e))
                            else:
                                raise
                    elif str(change.summarize_kind) == "added" or str(change.summarize_kind) == "modified":
                        local_file = ""
                        if is_file:
                            local_file = svn_export_temp(pysvn_client, repo_path, rev2, change.path)
                        print("uploading file: " + ftp_relative_path)
                        ftp.upload_path("/" + ftp_relative_path, is_file, local_file)
                        if is_file:
                            os.remove(local_file)
                    elif str(change.summarize_kind) == "normal":
                        print("skipping 'normal' element: " + ftp_relative_path)
                    else:
                        raise Exception("Unknown change summarize kind: " + str(change.summarize_kind) + ", path: " + ftp_relative_path)
            ftp.close()
    # write back the last revision that was synced
    print("writing last revision: " + str(youngest_revision))
    set_last_revision(youngest_revision)  # todo: undo

def get_login(a, b, c, d):
    # arguments don't matter, we're always going to return the same thing
    try:
        return True, "svnUsername", "svnPassword", True
    except Exception as e:
        logging.error(e)
        # capture the location of the error
        frame = inspect.currentframe()
        stack_trace = traceback.format_stack(frame)
        logging.debug(stack_trace)
        # end capture
        sys.exit(1)

# functions for persisting the last successfully synced revision
def get_last_revision():
    if os.path.isfile(status_file):
        f = open(status_file, 'r')
        line = f.readline()
        f.close()
        try:
            i = int(line)
        except ValueError:
            i = 0
    else:
        i = 0
        f = open(status_file, 'w')
        f.write(str(i))
        f.close()
    return i

def set_last_revision(rev):
    f = open(status_file, 'w')
    f.write(str(rev))
    f.close()

# augmented ftp client class that can work off a base directory
class FTPClient(ftplib.FTP):

    def __init__(self, host, username, password):
        self.base_path = ""
        self.current_path = ""
        ftplib.FTP.__init__(self, host, username, password)

    def cwd(self, path):
        debug_path = path
        if self.current_path == "":
            self.current_path = self.pwd()
            print("pwd: " + self.current_path)
        if not os.path.isabs(path):
            debug_path = self.base_path + "<" + path
            path = os.path.join(self.current_path, path)
        elif self.base_path != "":
            debug_path = self.base_path + ">" + path.lstrip("/")
            path = os.path.join(self.base_path, path.lstrip("/"))
        path = os.path.normpath(path)
        # by this point the path should be absolute.
        if path != self.current_path:
            print("change from " + self.current_path + " to " + debug_path)
            ftplib.FTP.cwd(self, path)
            self.current_path = path
        else:
            print("staying put : " + self.current_path)

    def cd_or_create(self, path):
        assert os.path.isabs(path), "absolute path expected (" + path + ")"
        try:
            self.cwd(path)
        except ftplib.error_perm as e:
            for folder in path.split('/'):
                if folder == "":
                    self.cwd("/")
                    continue
                try:
                    self.cwd(folder)
                except:
                    print("mkd: (" + path + "):" + folder)
                    self.mkd(folder)
                    self.cwd(folder)

    def upload_path(self, path, is_file, local_path):
        if is_file:
            (path, filename) = os.path.split(path)
            self.cd_or_create(path)
            # Use read-binary to avoid universal newline support from stripping CR characters.
            f = open(local_path, 'rb')
            self.storbinary("STOR " + filename, f)
            f.close()
        else:
            self.cd_or_create(path)

    def delete_path(self, path, is_file):
        (path, filename) = os.path.split(path)
        print("trying to delete: " + path + ", " + filename)
        self.cwd(path)
        try:
            if is_file:
                self.delete(filename)
            else:
                self.delete_path_recursive(filename)
        except ftplib.error_perm as e:
            if 'The system cannot find the' in str(e) or '550 File not found' in str(e):
                # Log, but otherwise ignore path-not-found errors
                # when deleting, since it's not a disaster if the file
                # we want to delete is already gone.
                logging.error(captureErrorMessage(e))
            else:
                raise

    def delete_path_recursive(self, path):
        if path == "/":
            raise Exception("WARNING: trying to delete '/'!")
        for node in self.nlst(path):
            if node == path:
                # it's a file. delete and return
                self.delete(path)
                return
            if node != "." and node != "..":
                self.delete_path_recursive(os.path.join(path, node))
        try:
            self.rmd(path)
        except ftplib.error_perm as msg:
            sys.stderr.write("Error deleting directory " + os.path.join(self.current_path, path) + " : " + str(msg))

# apply the project_directory setting
def apply_basedir(path):
    # remove any leading stuff (in this case, "trunk/") and decide whether
    # the file should be propagated
    if not path.startswith(project_directory):
        return ""
    return path.replace(project_directory, "", 1)

def svn_export_temp(pysvn_client, base_path, rev, path):
    # Causes access denied error. Couldn't deduce Windows-perm issue.
    # It's possible Python isn't garbage-collecting the open file-handle in
    # time for pysvn to re-open it. Regardless, just generating a simple
    # filename seems to work.
    # (fd, dest_path) = tempfile.mkstemp()
    dest_path = tmpName = '%s.tmp' % __file__
    exportPath = os.path.join(base_path, path).replace('\\', '/')
    print('exporting %s to %s' % (exportPath, dest_path))
    pysvn_client.export(exportPath,
                        dest_path,
                        force=False,
                        revision=rev,
                        native_eol=None,
                        ignore_externals=False,
                        recurse=True,
                        peg_revision=rev)
    return dest_path

if __name__ == "__main__":
    logging.info('svnftp.start')
    try:
        main()
        logging.info('svnftp.done')
    except Exception as e:
        # capture the location of the error for debug purposes
        frame = inspect.currentframe()
        stack_trace = traceback.format_stack(frame)
        logging.debug(stack_trace[:-1])
        print(stack_trace)
        # end capture
        error_text = '\nFATAL EXCEPTION!!!\n' + captureErrorMessage(e)
        subject = "ALERT: SVN2FTP Error"
        message = """An Error occurred while trying to FTP an SVN commit.

repo_path = %(repo_path)s
local_repos_path = %(local_repos_path)s
project_directory = %(project_directory)s
remote_base_directory = %(remote_base_directory)s
error_text = %(error_text)s
""" % globals()
        email(toAddrs, message, subject)
        logging.error(e)

Notes/Disclaimers:

I have basically no python training, so I'm learning as I go and spending lots of time reading docs to figure stuff out.
The body of get_login is in a try block because I was getting strange errors saying there was an unhandled exception in callback_get_login. Never figured out why, but it seems fine now. Let sleeping dogs lie, right?
The username and password for get_login are currently hard-coded (but correct) just to eliminate variables and change as little as possible at once. (I added the svnUser and svnPass arguments to the existing argument parsing.)

So that's where I am. I can't figure out why on earth it's not printing anything into svn2ftp.out.log. If you're wondering, the output for one of these failed attempts in svn2ftp.debug.log is:

2012-09-06 15:18:12,496 INFO svnftp.start
2012-09-06 15:18:12,496 INFO svnftp.done

And it's no different on a successful run. So there's nothing useful being logged. I'm lost. I've gone way down the rabbit hole on this one and don't know where to go from here. Any ideas?
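    One low-tech way to see why the hook dies under svnserve but not from a console — a sketch; the trace-log filename below is made up — is to claim stdout/stderr at the very top of svn2ftp.py, before any import or logging call can fail silently:

import sys

# Hypothetical trace log; line-buffered so output survives a hard exit.
log = open(r"C:\svn-repos\company\hooks\svn2ftp.trace.log", "a", buffering=1)
sys.stdout = log
sys.stderr = log
print("hook started, argv=%r" % (sys.argv,))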

    Read the article

  • overriding new ubuntu installation

    - by tkoomzaaskz
    I've got a ubuntu 11.10, which lost its support in May 2013, and now I'd like to reinstall up to the most up-to-date LTS, which is 12.04. My question is regarding my current partitions and doing backups. Is there a safe way to back up my data on some local partitions instead of copying files onto DVDs/external drives (which would be very uncomfortable in my situation)? The following system commands show my disk:

$ lsblk
NAME   MAJ:MIN RM   SIZE RO MOUNTPOINT
sda      8:0    0 232,9G  0
+-sda1   8:1    0  48,8G  0
+-sda2   8:2    0    63G  0
+-sda3   8:3    0     1K  0
+-sda4   8:4    0  53,7G  0 /
+-sda5   8:5    0  18,6G  0
+-sda6   8:6    0  25,5G  0
+-sda7   8:7    0  23,3G  0 [SWAP]
sr0     11:0    1  1024M  0

and

$ sudo fdisk -l
[sudo] password for xyz:

Disk /dev/sda: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc3ffc3ff

   Device Boot  Beginning        End     Blocks  ID  System
/dev/sda1   *        2048  102402047   51200000   7  HPFS/NTFS/exFAT
/dev/sda2       215044096  347080703   66018304   7  HPFS/NTFS/exFAT
/dev/sda3       347082750  488392064  70654657+   5  Extended
/dev/sda4       102402048  215042047   56320000  83  Linux
/dev/sda5       395905923  434975939  19535008+  83  Linux
/dev/sda6       434976003  488392064   26708031  83  Linux
/dev/sda7       347082752  395905023   24411136  82  Linux swap / Solaris

In the beginning I had Windows Vista pre-installed on the machine when it was bought (damn!), and then I installed the linux I have now. The windows boot program in the master boot record has been overridden by grub, and now I can boot both Windows and Linux. This is the list of mounted devices:

$ mount
/dev/sda4 on / type ext4 (rw,errors=remount-ro,commit=0)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
gvfs-fuse-daemon on /home/tomasz/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=tomasz)

It's strange (I don't remember setting it up that way) that my current linux uses only one partition (/dev/sda4), but anyway, that seems to be the case. My final question is: can I use one of the existing linux partitions for a backup and install ubuntu 12.04 without removing either windows or the old ubuntu 11.10? I mean, will grub automatically accept all three: the old windows vista and the two linuxes (the old 11.10 and the "new" 12.04)? Is there any hidden operation done during installation that could harm my custom backup partition?

my fstab file:

proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda4 during installation
UUID=d44e89f5-9da2-48eb-83b3-887652ec95d2 / ext4 errors=remount-ro 0 1
# swap was on /dev/sda7 during installation
UUID=bbe50535-ba57-434a-9272-211d859f0e00 none swap sw 0 0

sda5 and sda6 are leftover partitions from an earlier unsuccessful linux installation (the one before my current installation). I didn't delete those partitions, I still have access to them, and I could use them as backup partitions.
edit: second question is: why does lsblk show /dev/sda having 232,9G while fdisk shows that it has 250.1GB? Where does the difference come from?
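    The difference is just units: fdisk reports decimal gigabytes (10^9 bytes) while lsblk reports binary gibibytes (2^30 bytes); both describe the same disk. A quick check of the arithmetic:

size = 250059350016      # bytes, from the fdisk output above
print(size / 10**9)      # 250.059350016 -> fdisk's "250.1 GB" (decimal GB)
print(size / 2**30)      # 232.885...    -> lsblk's "232,9G" (binary GiB)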

    Read the article

  • Linux software RAID6: rebuild slow

    - by Ole Tange
    I am trying to find the bottleneck in the rebuilding of a software raid6.

## Pause rebuilding when measuring raw I/O performance
# echo 1 > /proc/sys/dev/raid/speed_limit_min
# echo 1 > /proc/sys/dev/raid/speed_limit_max
## Drop caches so that does not interfere with measuring
# sync ; echo 3 | tee /proc/sys/vm/drop_caches >/dev/null
# time parallel -j0 "dd if=/dev/{} bs=256k count=4000 | cat >/dev/null" ::: sdbd sdbc sdbf sdbm sdbl sdbk sdbe sdbj sdbh sdbg
4000+0 records in
4000+0 records out
1048576000 bytes (1.0 GB) copied, 7.30336 s, 144 MB/s
[... similar for each disk ...]
# time parallel -j0 "dd if=/dev/{} skip=15000000 bs=256k count=4000 | cat >/dev/null" ::: sdbd sdbc sdbf sdbm sdbl sdbk sdbe sdbj sdbh sdbg
4000+0 records in
4000+0 records out
1048576000 bytes (1.0 GB) copied, 12.7991 s, 81.9 MB/s
[... similar for each disk ...]

So we can read sequentially at 144 MB/s on the outer tracks and 82 MB/s on the inner tracks of all the drives simultaneously. Sequential write performance is similar. This would lead me to expect a rebuild speed of 82 MB/s or more.

# echo 800000 > /proc/sys/dev/raid/speed_limit_min
# echo 800000 > /proc/sys/dev/raid/speed_limit_max
# cat /proc/mdstat
md2 : active raid6 sdbd[10](S) sdbc[9] sdbf[0] sdbm[8] sdbl[7] sdbk[6] sdbe[11] sdbj[4] sdbi[3](F) sdbh[2] sdbg[1]
      27349121408 blocks super 1.2 level 6, 128k chunk, algorithm 2 [9/8] [UUU_UUUUU]
      [=========>...........]  recovery = 47.3% (1849905884/3907017344) finish=855.9min speed=40054K/sec

But we only get 40 MB/s, and often this drops to 30 MB/s.

# iostat -dkx 1
sdbc   0.00 8023.00    0.00 329.00     0.00 33408.00 203.09 0.70 2.12 1.06 34.80
sdbd   0.00    0.00    0.00   0.00     0.00     0.00   0.00 0.00 0.00 0.00  0.00
sdbe  13.00    0.00 8334.00   0.00 33388.00     0.00   8.01 0.65 0.08 0.06 47.20
sdbf   0.00    0.00 8348.00   0.00 33388.00     0.00   8.00 0.58 0.07 0.06 48.00
sdbg  16.00    0.00 8331.00   0.00 33388.00     0.00   8.02 0.71 0.09 0.06 48.80
sdbh 961.00    0.00 8314.00   0.00 37100.00     0.00   8.92 0.93 0.11 0.07 54.80
sdbj  70.00    0.00 8276.00   0.00 33384.00     0.00   8.07 0.78 0.10 0.06 48.40
sdbk 124.00    0.00 8221.00   0.00 33380.00     0.00   8.12 0.88 0.11 0.06 47.20
sdbl  83.00    0.00 8262.00   0.00 33380.00     0.00   8.08 0.96 0.12 0.06 47.60
sdbm   0.00    0.00 8344.00   0.00 33376.00     0.00   8.00 0.56 0.07 0.06 47.60

iostat says the disks are not 100% busy (only 40-50%). This fits the hypothesis that the max is around 80 MB/s. Since this is software raid, the limiting factor could be CPU. top says:

  PID USER  PR NI VIRT RES SHR S %CPU %MEM     TIME+ COMMAND
38520 root  20  0    0   0   0 R   64  0.0   2947:50 md2_raid6
 6117 root  20  0    0   0   0 D   53  0.0 473:25.96 md2_resync

So md2_raid6 and md2_resync are clearly busy, taking up 64% and 53% of a CPU respectively, but not near 100%. The chunk size (128k) of the RAID was chosen after measuring which chunk size gave the least CPU penalty.

If this speed is normal: what is the limiting factor? Can I measure it?
If this speed is not normal: how can I find the limiting factor? Can I change it?
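    One knob worth testing before blaming the CPU is md's stripe cache, which frequently caps RAID5/6 resync speed well below what the member disks can sustain. A sketch under assumptions — the md device name comes from this post, the value 8192 is only a starting point:

# Hedged sketch: enlarge md's stripe cache and watch whether the resync
# speed in /proc/mdstat changes. RAM cost is roughly
# stripe_cache_size * 4 KiB * number_of_member_disks.
with open("/sys/block/md2/md/stripe_cache_size") as f:
    print("current stripe_cache_size:", f.read().strip())
with open("/sys/block/md2/md/stripe_cache_size", "w") as f:
    f.write("8192\n")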

    Read the article

  • Wi-Fi Stick with ZD1211 chip refuses to work on Ubuntu >8.10. No clue.

    - by Benjamin Maus
    I have a machine running Ubuntu 9.10 (Karmic, x86_64). Everything is running smoothly so far, except for the Wi-Fi USB stick. The same device worked perfectly in 8.10. The wireless device is a GW-US54GXS using the Zydas ZD1211 chipset.

Dmesg output after plugging in:

[ 196.303436] phy0: Selected rate control algorithm 'minstrel'
[ 196.304209] zd1211rw 2-1:1.0: phy0
[ 196.304227] usbcore: registered new interface driver zd1211rw
[ 196.334137] usb 2-1: firmware: requesting zd1211/zd1211b_ub
[ 196.357463] usb 2-1: firmware: requesting zd1211/zd1211b_uphr
[ 196.402643] zd1211rw 2-1:1.0: firmware version 4725
[ 196.442611] zd1211rw 2-1:1.0: zd1211b chip 2019:5303 v4810 high 00-90-cc AL2230_RF pa0 ---N-
[ 196.463814] usb 2-1: firmware: requesting zd1211/zd1211b_ub
[ 196.466823] usb 2-1: firmware: requesting zd1211/zd1211b_uphr

Syslog output:

Nov 5 11:20:24 somesystem kernel: [ 196.303436] phy0: Selected rate control algorithm 'minstrel'
Nov 5 11:20:24 kierkegaard NetworkManager: <info> Found radio killswitch rfkill0 (at /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/ieee80211/phy0/rfkill0) (driver <unknown>)
Nov 5 11:20:24 somesystem kernel: [ 196.304209] zd1211rw 2-1:1.0: phy0
Nov 5 11:20:24 somesystem kernel: [ 196.304227] usbcore: registered new interface driver zd1211rw
Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wmaster0, iface: wmaster0)
Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wmaster0, iface: wmaster0): no ifupdown configuration found.
Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wlan0, iface: wlan0)
Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wlan0, iface: wlan0): no ifupdown configuration found.
Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): driver supports SSID scans (scan_capa 0x01).
Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): new 802.11 WiFi device (driver: 'zd1211rw')
Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/2
Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): now managed
Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): device state change: 1 -> 2 (reason 2)
Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): bringing up device.
Nov 5 11:20:24 somesystem kernel: [ 196.334137] usb 2-1: firmware: requesting zd1211/zd1211b_ub
Nov 5 11:20:24 somesystem kernel: [ 196.357463] usb 2-1: firmware: requesting zd1211/zd1211b_uphr
Nov 5 11:20:24 somesystem kernel: [ 196.402643] zd1211rw 2-1:1.0: firmware version 4725
Nov 5 11:20:24 somesystem kernel: [ 196.442611] zd1211rw 2-1:1.0: zd1211b chip 2019:5303 v4810 high 00-90-cc AL2230_RF pa0 ---N-
Nov 5 11:20:24 somesystem NetworkManager: <WARN> nm_device_hw_bring_up(): (wlan0): device not up after timeout!
Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): deactivating device (reason: 2).
Nov 5 11:20:24 somesystem kernel: [ 196.463814] usb 2-1: firmware: requesting zd1211/zd1211b_ub
Nov 5 11:20:24 somesystem kernel: [ 196.466823] usb 2-1: firmware: requesting zd1211/zd1211b_uphr
Nov 5 11:20:29 somesystem wpa_supplicant[978]: Could not set interface 'wlan0' UP
Nov 5 11:20:29 somesystem wpa_supplicant[978]: Failed to initialize driver interface
Nov 5 11:20:29 somesystem NetworkManager: <WARN> nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface.

Gnome tells me in the network menu that the device is "not ready". It appears in iwconfig but not in ifconfig. The same symptoms appear when I boot from the live CD. How can I solve this dilemma?
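    Since the syslog shows NetworkManager finding a killswitch (rfkill0) right before the "device not up after timeout" failure, one quick check — a sketch using the standard rfkill sysfs layout, untested against this exact machine — is whether the radio is soft- or hard-blocked:

# Print each rfkill switch and its state:
# 0 = soft blocked, 1 = unblocked, 2 = hard blocked.
import glob

for state_file in glob.glob("/sys/class/rfkill/rfkill*/state"):
    name = open(state_file.replace("/state", "/name")).read().strip()
    state = open(state_file).read().strip()
    print(name, state)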

    Read the article

  • XNA Screen Manager problem with transitions

    - by NexAddo
    I'm having issues using the game state management example in the game I am developing. I have no issues with my first three screens transitioning between one another. I have a main menu screen, a splash screen and a high score screen that cycle:

mainMenuScreen -> splashScreen -> highScoreScreen -> mainMenuScreen

The screens change every 15 seconds.

Transition times:

public MainMenuScreen()
{
    TransitionOnTime = TimeSpan.FromSeconds(0.5);
    TransitionOffTime = TimeSpan.FromSeconds(0.0);
    currentCreditAmount = Global.CurrentCredits;
}

public SplashScreen()
{
    TransitionOnTime = TimeSpan.FromSeconds(0.5);
    TransitionOffTime = TimeSpan.FromSeconds(0.5);
}

public HighScoreScreen()
{
    TransitionOnTime = TimeSpan.FromSeconds(0.5);
    TransitionOffTime = TimeSpan.FromSeconds(0.5);
}

public GamePlayScreen()
{
    TransitionOnTime = TimeSpan.FromSeconds(0.5);
    TransitionOffTime = TimeSpan.FromSeconds(0.5);
}

When a user inserts credits they can play the game after pressing start:

mainMenuScreen -> splashScreen -> highScoreScreen -> (loops forever)
      ||              ||              ||
      ========== Credits In ==========
                 || Start
                 \/
           LoadingScreen
                 ||
                 \/
           GamePlayScreen

During each of these transitions the same code is used, which exits (removes) all current active screens while respecting transitions, then adds the new screen to the screen manager:

foreach (GameScreen screen in ScreenManager.GetScreens())
    screen.ExitScreen();

// AddScreen takes a new screen to manage and the controlling player
ScreenManager.AddScreen(new NameOfScreenHere(), null);

Each screen is removed from the ScreenManager with ExitScreen(), and with this approach each screen's transition is respected. The problem I am having is with my GamePlayScreen. When the current game is finished and the transition of the GamePlayScreen is complete, it should be removed and the next screens added to the ScreenManager.

GamePlayScreen code snippet:

private void FinishCurrentGame()
{
    AudioManager.StopSounds();
    this.UnloadContent();

    if (Global.SaveDevice.IsReady)
        Stats.Save();

    if (HighScoreScreen.IsInHighscores(timeLimit))
    {
        foreach (GameScreen screen in ScreenManager.GetScreens())
            screen.ExitScreen();

        Global.TimeRemaining = timeLimit;
        ScreenManager.AddScreen(new BackgroundScreen(), null);
        ScreenManager.AddScreen(new MessageBoxScreen("Enter your Initials", true), null);
    }
    else
    {
        foreach (GameScreen screen in ScreenManager.GetScreens())
            screen.ExitScreen();

        ScreenManager.AddScreen(new BackgroundScreen(), null);
        ScreenManager.AddScreen(new MainMenuScreen(), null);
    }
}

The problem is that when isExiting is set to true by screen.ExitScreen() for the GamePlayScreen, the transition never completes, so the screen is never removed from the ScreenManager. Every other screen that I add and remove with this same technique fully transitions on/off and is removed from the ScreenManager at the appropriate time, but not my GamePlayScreen. Has anyone who has used the GameStateManagement example experienced this issue, or can someone see the mistake I am making?

EDIT This is what I tracked down. When the game is done, I call:

foreach (GameScreen screen in ScreenManager.GetScreens())
    screen.ExitScreen();

to start the transition-off process for the gameplay screen. At this point there is only 1 screen on the ScreenManager stack. The gameplay screen gets isExiting set to true and starts to transition off.
Right after the above call to ExitScreen() I add a background screen and menu screen to the ScreenManager:

ScreenManager.AddScreen(new background(), null);
ScreenManager.AddScreen(new Menu(), null);

The count of the ScreenManager is now 3. What I noticed while stepping through the updates for GameScreen and ScreenManager is that the gameplay screen never gets to the point where the transition process finishes, so the ScreenManager can never remove it from the stack. This anomaly does not happen to any of my other screens when I switch between them.

Screen Manager code:

#region File Description
//-----------------------------------------------------------------------------
// ScreenManager.cs
//
// Microsoft XNA Community Game Platform
// Copyright (C) Microsoft Corporation. All rights reserved.
//-----------------------------------------------------------------------------
#endregion

#define DEMO

#region Using Statements
using System;
using System.Diagnostics;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using PerformanceUtility.GameDebugTools;
#endregion

namespace GameStateManagement
{
    /// <summary>
    /// The screen manager is a component which manages one or more GameScreen
    /// instances. It maintains a stack of screens, calls their Update and Draw
    /// methods at the appropriate times, and automatically routes input to the
    /// topmost active screen.
    /// </summary>
    public class ScreenManager : DrawableGameComponent
    {
        #region Fields

        List<GameScreen> screens = new List<GameScreen>();
        List<GameScreen> screensToUpdate = new List<GameScreen>();

        InputState input = new InputState();

        SpriteBatch spriteBatch;
        SpriteFont font;
        Texture2D blankTexture;

        bool isInitialized;
        bool getOut;

        bool traceEnabled;

#if DEBUG
        DebugSystem debugSystem;
        Stopwatch stopwatch = new Stopwatch();
        bool debugTextEnabled;
#endif

        #endregion

        #region Properties

        /// <summary>
        /// A default SpriteBatch shared by all the screens. This saves
        /// each screen having to bother creating their own local instance.
        /// </summary>
        public SpriteBatch SpriteBatch
        {
            get { return spriteBatch; }
        }

        /// <summary>
        /// A default font shared by all the screens. This saves
        /// each screen having to bother loading their own local copy.
        /// </summary>
        public SpriteFont Font
        {
            get { return font; }
        }

        public Rectangle ScreenRectangle
        {
            get
            {
                return new Rectangle(0, 0, GraphicsDevice.Viewport.Width,
                                           GraphicsDevice.Viewport.Height);
            }
        }

        /// <summary>
        /// If true, the manager prints out a list of all the screens
        /// each time it is updated. This can be useful for making sure
        /// everything is being added and removed at the right times.
        /// </summary>
        public bool TraceEnabled
        {
            get { return traceEnabled; }
            set { traceEnabled = value; }
        }

#if DEBUG
        public bool DebugTextEnabled
        {
            get { return debugTextEnabled; }
            set { debugTextEnabled = value; }
        }

        public DebugSystem DebugSystem
        {
            get { return debugSystem; }
        }
#endif

        #endregion

        #region Initialization

        /// <summary>
        /// Constructs a new screen manager component.
        /// </summary>
        public ScreenManager(Game game)
            : base(game)
        {
            // we must set EnabledGestures before we can query for them, but
            // we don't assume the game wants to read them.
            //TouchPanel.EnabledGestures = GestureType.None;
        }

        /// <summary>
        /// Initializes the screen manager component.
        /// </summary>
        public override void Initialize()
        {
            base.Initialize();

#if DEBUG
            debugSystem = DebugSystem.Initialize(Game, "Fonts/MenuFont");
#endif

            isInitialized = true;
        }

        /// <summary>
        /// Load your graphics content.
        /// </summary>
        protected override void LoadContent()
        {
            // Load content belonging to the screen manager.
            ContentManager content = Game.Content;

            spriteBatch = new SpriteBatch(GraphicsDevice);
            font = content.Load<SpriteFont>(@"Fonts\menufont");
            blankTexture = content.Load<Texture2D>(@"Textures\Backgrounds\blank");

            // Tell each of the screens to load their content.
            foreach (GameScreen screen in screens)
            {
                screen.LoadContent();
            }
        }

        /// <summary>
        /// Unload your graphics content.
        /// </summary>
        protected override void UnloadContent()
        {
            // Tell each of the screens to unload their content.
            foreach (GameScreen screen in screens)
            {
                screen.UnloadContent();
            }
        }

        #endregion

        #region Update and Draw

        /// <summary>
        /// Allows each screen to run logic.
        /// </summary>
        public override void Update(GameTime gameTime)
        {
#if DEBUG
            debugSystem.TimeRuler.StartFrame();
            debugSystem.TimeRuler.BeginMark("Update", Color.Blue);

            if (debugTextEnabled && getOut == false)
            {
                debugSystem.FpsCounter.Visible = true;
                debugSystem.TimeRuler.Visible = true;
                debugSystem.TimeRuler.ShowLog = true;
                getOut = true;
            }
            else if (debugTextEnabled == false)
            {
                getOut = false;
                debugSystem.FpsCounter.Visible = false;
                debugSystem.TimeRuler.Visible = false;
                debugSystem.TimeRuler.ShowLog = false;
            }
#endif

            // Read the keyboard and gamepad.
            input.Update();

            // Make a copy of the master screen list, to avoid confusion if
            // the process of updating one screen adds or removes others.
            screensToUpdate.Clear();

            foreach (GameScreen screen in screens)
                screensToUpdate.Add(screen);

            bool otherScreenHasFocus = !Game.IsActive;
            bool coveredByOtherScreen = false;

            // Loop as long as there are screens waiting to be updated.
            while (screensToUpdate.Count > 0)
            {
                // Pop the topmost screen off the waiting list.
                GameScreen screen = screensToUpdate[screensToUpdate.Count - 1];

                screensToUpdate.RemoveAt(screensToUpdate.Count - 1);

                // Update the screen.
                screen.Update(gameTime, otherScreenHasFocus, coveredByOtherScreen);

                if (screen.ScreenState == ScreenState.TransitionOn ||
                    screen.ScreenState == ScreenState.Active)
                {
                    // If this is the first active screen we came across,
                    // give it a chance to handle input.
                    if (!otherScreenHasFocus)
                    {
                        screen.HandleInput(input);

                        otherScreenHasFocus = true;
                    }

                    // If this is an active non-popup, inform any subsequent
                    // screens that they are covered by it.
                    if (!screen.IsPopup)
                        coveredByOtherScreen = true;
                }
            }

            // Print debug trace?
            if (traceEnabled)
                TraceScreens();

#if DEBUG
            debugSystem.TimeRuler.EndMark("Update");
#endif
        }

        /// <summary>
        /// Prints a list of all the screens, for debugging.
        /// </summary>
        void TraceScreens()
        {
            List<string> screenNames = new List<string>();

            foreach (GameScreen screen in screens)
                screenNames.Add(screen.GetType().Name);

            Debug.WriteLine(string.Join(", ", screenNames.ToArray()));
        }

        /// <summary>
        /// Tells each screen to draw itself.
        /// </summary>
        public override void Draw(GameTime gameTime)
        {
#if DEBUG
            debugSystem.TimeRuler.StartFrame();
            debugSystem.TimeRuler.BeginMark("Draw", Color.Yellow);
#endif

            foreach (GameScreen screen in screens)
            {
                if (screen.ScreenState == ScreenState.Hidden)
                    continue;

                screen.Draw(gameTime);
            }

#if DEBUG
            debugSystem.TimeRuler.EndMark("Draw");
#endif

#if DEMO
            SpriteBatch.Begin();
            SpriteBatch.DrawString(font, "DEMO - NOT FOR RESALE",
                                   new Vector2(20, 80), Color.White);
            SpriteBatch.End();
#endif
        }

        #endregion

        #region Public Methods

        /// <summary>
        /// Adds a new screen to the screen manager.
        /// </summary>
        public void AddScreen(GameScreen screen, PlayerIndex? controllingPlayer)
        {
            screen.ControllingPlayer = controllingPlayer;
            screen.ScreenManager = this;
            screen.IsExiting = false;

            // If we have a graphics device, tell the screen to load content.
            if (isInitialized)
            {
                screen.LoadContent();
            }

            screens.Add(screen);
        }

        /// <summary>
        /// Removes a screen from the screen manager. You should normally
        /// use GameScreen.ExitScreen instead of calling this directly, so
        /// the screen can gradually transition off rather than just being
        /// instantly removed.
        /// </summary>
        public void RemoveScreen(GameScreen screen)
        {
            // If we have a graphics device, tell the screen to unload content.
            if (isInitialized)
            {
                screen.UnloadContent();
            }

            screens.Remove(screen);
            screensToUpdate.Remove(screen);
        }

        /// <summary>
        /// Expose an array holding all the screens. We return a copy rather
        /// than the real master list, because screens should only ever be added
        /// or removed using the AddScreen and RemoveScreen methods.
        /// </summary>
        public GameScreen[] GetScreens()
        {
            return screens.ToArray();
        }

        /// <summary>
        /// Helper draws a translucent black fullscreen sprite, used for fading
        /// screens in and out, and for darkening the background behind popups.
        /// </summary>
        public void FadeBackBufferToBlack(float alpha)
        {
            Viewport viewport = GraphicsDevice.Viewport;

            spriteBatch.Begin();

            spriteBatch.Draw(blankTexture,
                             new Rectangle(0, 0, viewport.Width, viewport.Height),
                             Color.Black * alpha);

            spriteBatch.End();
        }

        #endregion
    }
}

GameScreen (parent of GamePlayScreen):

#region File Description
//-----------------------------------------------------------------------------
// GameScreen.cs
//
// Microsoft XNA Community Game Platform
// Copyright (C) Microsoft Corporation. All rights reserved.
//-----------------------------------------------------------------------------
#endregion

#region Using Statements
using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input;
//using Microsoft.Xna.Framework.Input.Touch;
using System.IO;
#endregion

namespace GameStateManagement
{
    /// <summary>
    /// Enum describes the screen transition state.
    /// </summary>
    public enum ScreenState
    {
        TransitionOn,
        Active,
        TransitionOff,
        Hidden,
    }

    /// <summary>
    /// A screen is a single layer that has update and draw logic, and which
    /// can be combined with other layers to build up a complex menu system.
    /// For instance the main menu, the options menu, the "are you sure you
    /// want to quit" message box, and the main game itself are all implemented
    /// as screens.
    /// </summary>
    public abstract class GameScreen
    {
        #region Properties

        /// <summary>
        /// Normally when one screen is brought up over the top of another,
        /// the first screen will transition off to make room for the new
        /// one. This property indicates whether the screen is only a small
        /// popup, in which case screens underneath it do not need to bother
        /// transitioning off.
        /// </summary>
        public bool IsPopup
        {
            get { return isPopup; }
            protected set { isPopup = value; }
        }

        bool isPopup = false;

        /// <summary>
        /// Indicates how long the screen takes to
        /// transition on when it is activated.
        /// </summary>
        public TimeSpan TransitionOnTime
        {
            get { return transitionOnTime; }
            protected set { transitionOnTime = value; }
        }

        TimeSpan transitionOnTime = TimeSpan.Zero;

        /// <summary>
        /// Indicates how long the screen takes to
        /// transition off when it is deactivated.
        /// </summary>
        public TimeSpan TransitionOffTime
        {
            get { return transitionOffTime; }
            protected set { transitionOffTime = value; }
        }

        TimeSpan transitionOffTime = TimeSpan.Zero;

        /// <summary>
        /// Gets the current position of the screen transition, ranging
        /// from zero (fully active, no transition) to one (transitioned
        /// fully off to nothing).
        /// </summary>
        public float TransitionPosition
        {
            get { return transitionPosition; }
            protected set { transitionPosition = value; }
        }

        float transitionPosition = 1;

        /// <summary>
        /// Gets the current alpha of the screen transition, ranging
        /// from 1 (fully active, no transition) to 0 (transitioned
        /// fully off to nothing).
        /// </summary>
        public float TransitionAlpha
        {
            get { return 1f - TransitionPosition; }
        }

        /// <summary>
        /// Gets the current screen transition state.
        /// </summary>
        public ScreenState ScreenState
        {
            get { return screenState; }
            protected set { screenState = value; }
        }

        ScreenState screenState = ScreenState.TransitionOn;

        /// <summary>
        /// There are two possible reasons why a screen might be transitioning
        /// off. It could be temporarily going away to make room for another
        /// screen that is on top of it, or it could be going away for good.
        /// This property indicates whether the screen is exiting for real:
        /// if set, the screen will automatically remove itself as soon as the
        /// transition finishes.
        /// </summary>
        public bool IsExiting
        {
            get { return isExiting; }
            protected internal set { isExiting = value; }
        }

        bool isExiting = false;

        /// <summary>
        /// Checks whether this screen is active and can respond to user input.
        /// </summary>
        public bool IsActive
        {
            get
            {
                return !otherScreenHasFocus &&
                       (screenState == ScreenState.TransitionOn ||
                        screenState == ScreenState.Active);
            }
        }

        bool otherScreenHasFocus;

        /// <summary>
        /// Gets the manager that this screen belongs to.
        /// </summary>
        public ScreenManager ScreenManager
        {
            get { return screenManager; }
            internal set { screenManager = value; }
        }

        ScreenManager screenManager;

        public KeyboardState KeyboardState
        {
            get { return Keyboard.GetState(); }
        }

        /// <summary>
        /// Gets the index of the player who is currently controlling this screen,
        /// or null if it is accepting input from any player. This is used to lock
        /// the game to a specific player profile. The main menu responds to input
        /// from any connected gamepad, but whichever player makes a selection from
        /// this menu is given control over all subsequent screens, so other gamepads
        /// are inactive until the controlling player returns to the main menu.
        /// </summary>
        public PlayerIndex? ControllingPlayer
        {
            get { return controllingPlayer; }
            internal set { controllingPlayer = value; }
        }

        PlayerIndex? controllingPlayer;

        /// <summary>
        /// Gets whether or not this screen is serializable. If this is true,
        /// the screen will be recorded into the screen manager's state and
        /// its Serialize and Deserialize methods will be called as appropriate.
        /// If this is false, the screen will be ignored during serialization.
        /// By default, all screens are assumed to be serializable.
        /// </summary>
        public bool IsSerializable
        {
            get { return isSerializable; }
            protected set { isSerializable = value; }
        }

        bool isSerializable = true;

        #endregion

        #region Initialization

        /// <summary>
        /// Load graphics content for the screen.
        /// </summary>
        public virtual void LoadContent() { }

        /// <summary>
        /// Unload content for the screen.
        /// </summary>
        public virtual void UnloadContent() { }

        #endregion

        #region Update and Draw

        /// <summary>
        /// Allows the screen to run logic, such as updating the transition position.
        /// Unlike HandleInput, this method is called regardless of whether the screen
        /// is active, hidden, or in the middle of a transition.
        /// </summary>
        public virtual void Update(GameTime gameTime, bool otherScreenHasFocus,
                                   bool coveredByOtherScreen)
        {
            this.otherScreenHasFocus = otherScreenHasFocus;

            if (isExiting)
            {
                // If the screen is going away to die, it should transition off.
                screenState = ScreenState.TransitionOff;

                if (!UpdateTransition(gameTime, transitionOffTime, 1))
                {
                    // When the transition finishes, remove the screen.
                    ScreenManager.RemoveScreen(this);
                }
            }
            else if (coveredByOtherScreen)
            {
                // If the screen is covered by another, it should transition off.
                if (UpdateTransition(gameTime, transitionOffTime, 1))
                {
                    // Still busy transitioning.
                    screenState = ScreenState.TransitionOff;
                }
                else
                {
                    // Transition finished!
                    screenState = ScreenState.Hidden;
                }
            }
            else
            {
                // Otherwise the screen should transition on and become active.
                if (UpdateTransition(gameTime, transitionOnTime, -1))
                {
                    // Still busy transitioning.
                    screenState = ScreenState.TransitionOn;
                }
                else
                {
                    // Transition finished!
                    screenState = ScreenState.Active;
                }
            }
        }

        /// <summary>
        /// Helper for updating the screen transition position.
        /// </summary>
        bool UpdateTransition(GameTime gameTime, TimeSpan time, int direction)
        {
            // How much should we move by?
            float transitionDelta;

            if (time == TimeSpan.Zero)
                transitionDelta = 1;
            else
                transitionDelta = (float)(gameTime.ElapsedGameTime.TotalMilliseconds /
                                          time.TotalMilliseconds);

            // Update the transition position.
            transitionPosition += transitionDelta * direction;

            // Did we reach the end of the transition?
            if (((direction < 0) && (transitionPosition <= 0)) ||
                ((direction > 0) && (transitionPosition >= 1)))
            {
                transitionPosition = MathHelper.Clamp(transitionPosition, 0, 1);
                return false;
            }

            // Otherwise we are still busy transitioning.
            return true;
        }

        /// <summary>
        /// Allows the screen to handle user input. Unlike Update, this method
        /// is only called when the screen is active, and not when some other
        /// screen has taken the focus.
        /// </summary>
        public virtual void HandleInput(InputState input) { }

        public KeyboardState currentKeyState;
        public KeyboardState lastKeyState;

        public bool IsKeyHit(Keys key)
        {
            if (currentKeyState.IsKeyDown(key) && lastKeyState.IsKeyUp(key))
                return true;
            return false;
        }

        /// <summary>
        /// This is called when the screen should draw itself.
        /// </summary>
        public virtual void Draw(GameTime gameTime) { }

        #endregion

        #region Public Methods

        /// <summary>
        /// Tells the screen to serialize its state into the given stream.
        /// </summary>
        public virtual void Serialize(Stream stream) { }

        /// <summary>
        /// Tells the screen to deserialize its state from the given stream.
        /// </summary>
        public virtual void Deserialize(Stream stream) { }

        /// <summary>
        /// Tells the screen to go away. Unlike ScreenManager.RemoveScreen, which
        /// instantly kills the screen, this method respects the transition timings
        /// and will give the screen a chance to gradually transition off.
        /// </summary>
        public void ExitScreen()
        {
            if (TransitionOffTime == TimeSpan.Zero)
            {
                // If the screen has a zero transition time, remove it immediately.
                ScreenManager.RemoveScreen(this);
            }
            else
            {
                // Otherwise flag that it should transition off and then exit.
                isExiting = true;
            }
        }

        #endregion

        #region Helper Methods

        /// <summary>
        /// A helper method which loads assets using the screen manager's
        /// associated game content loader.
        /// </summary>
        /// <typeparam name="T">Type of asset.</typeparam>
        /// <param name="assetName">Asset name, relative to the loader root
        /// directory, and not including the .xnb extension.</param>
        /// <returns></returns>
        public T Load<T>(string assetName)
        {
            return ScreenManager.Game.Content.Load<T>(assetName);
        }

        #endregion
    }
}

    Read the article

  • Long pause in pxe/preseed install

    - by Bo Kersey
    I'm encountering a long pause during an unattended (preseeded) install of precise server via pxe. The install eventually works, but it appears to just sit there for 20 or 30 minutes after it authenticates the mirror. There is nothing in the logs during this time, even with full debug. Log excerpt:

Aug 2 13:29:59 net-retriever: Signature made Fri Aug 2 06:05:28 2013 UTC using DSA key ID 437D05B5
Aug 2 13:29:59 net-retriever: gpgv:
Aug 2 13:29:59 net-retriever: Good signature from "Ubuntu Archive Automatic Signing Key <[email protected]>"
Aug 2 13:29:59 net-retriever:
Aug 2 13:50:10 anna[5072]: DEBUG: resolver (ext2-modules): package doesn't exist (ignored)
Aug 2 13:50:10 anna[5072]: DEBUG: resolver (efi-modules): package doesn't exist (ignored)

There is a bug filed that relates to this problem, but it is not being worked. I'm surprised that no one else is complaining about this. Or is there some other way to do an unattended install that everyone else is using?

    Read the article

  • Debugging "Premature end of script headers" - WSGI/Django [migrated]

    - by Marcin
    I have recently deployed an app to a shared host (webfaction), and for no apparent reason my site will not load at all (it worked until today). It is a django app, but the django.log is not even created; the only clue is that in one of the logs I get the error message "Premature end of script headers", identifying my wsgi file as the source. I've tried to add logging to my wsgi file, but I can't find any log created for it. Is there any recommended way to debug this error? I am on the point of tearing my hair out. My WSGI file:

import os
import sys
from django.core.handlers.wsgi import WSGIHandler
import logging

logger = logging.getLogger(__name__)

os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
os.environ['CELERY_LOADER'] = 'django'

virtenv = os.path.expanduser("~/webapps/django/oneclickcosvirt/")
activate_this = virtenv + "bin/activate_this.py"
execfile(activate_this, dict(__file__=activate_this))
# if 'VIRTUAL_ENV' not in os.environ:
#     os.environ['VIRTUAL_ENV'] = virtenv

sys.path.append(os.path.dirname(virtenv + 'oneclickcos/'))

logger.debug('About to run WSGIHandler')
try:
    application = WSGIHandler()
except (Exception,), e:
    logger.debug('Exception starting wsgihandler: %s' % e)
    raise e
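    One reason those logger.debug() calls never show up anywhere: logging.getLogger(__name__) has no handler configured here, so debug records are simply dropped. A minimal sketch of a fix — the log path is an assumption; use any file the process running the WSGI app can write — is to configure logging at the very top of the wsgi file:

import logging

# Hypothetical log location; must be writable by the WSGI process.
logging.basicConfig(
    filename='/home/yourusername/logs/wsgi-debug.log',
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s',
)
logger = logging.getLogger(__name__)
logger.debug('wsgi file imported')  # should appear even if WSGIHandler() later fails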

    Read the article

  • Having a hard time having consecutive animations for an attack

    - by Kelby Styler
    So I've been trying to figure this out for about 8 hours now... It's driving me nuts, because I am pretty sure it is something dead simple that I am just not understanding. I had everything working fine when I was just cycling through the animations Idle - Attack - Attack 1 - Attack 2 in an infinite loop. The problem now is that I want it to go: Attack - check whether x time passes - if ctrl is pressed before x passes, move to Attack 1, otherwise move back to Idle - then either Attack 1 or Idle, depending on how long has passed. I've almost gotten it a few times, but something always happens where it falls apart if I press ctrl too fast or after multiple cycles of the animation. Any help would be appreciated; I'm just at my wits' end on this one. I've been looking at this so long that I just don't know where to go anymore. Code is below; here is the controller:

using UnityEngine;
using System.Collections;

public class MeleeAttack : MonoBehaviour
{
    public int damage;
    public bool Attack;
    public bool Attack1;
    public bool Attack2;
    public bool Idle;

    private Animator animator;
    private int attnum = 0;
    private float count = 2f;
    private float timeLeft;

    // Gives value to damage output
    void MAttackDmg()
    {
        if (Input.GetKeyDown(KeyCode.RightControl) || Input.GetKeyDown(KeyCode.LeftControl))
        {
            switch (attnum)
            {
                case (0):
                    Attack = true;
                    damage = 2;
                    animator.SetBool("Attack", Attack);
                    attnum++;
                    Idle = false;
                    animator.SetBool("Idle", Idle);
                    timeLeft = count;
                    break;
                case (1):
                    Attack1 = true;
                    damage = 2;
                    animator.SetBool("Attack1", Attack1);
                    attnum++;
                    Idle = false;
                    animator.SetBool("Idle", Idle);
                    timeLeft = count;
                    break;
                case (2):
                    Attack2 = true;
                    damage = 2;
                    animator.SetBool("Attack2", Attack2);
                    attnum = 0;
                    Idle = false;
                    animator.SetBool("Idle", Idle);
                    timeLeft = count;
                    break;
            }
        }

        if (Input.GetKeyUp(KeyCode.RightControl) || Input.GetKeyUp(KeyCode.LeftControl))
        {
            switch (attnum)
            {
                case (0):
                    Debug.Log("false");
                    damage = 0;
                    if (timeLeft <= 0f)
                    {
                        Attack2 = false;
                        animator.SetBool("Attack2", Attack2);
                        Debug.Log("t1");
                        Idle = true;
                        animator.SetBool("Idle", Idle);
                        attnum = 0;
                        timeLeft = count;
                    }
                    break;
                case (1):
                    Debug.Log("false1");
                    damage = 0;
                    if (timeLeft <= 0f)
                    {
                        Debug.Log("t2");
                        Attack = false;
                        animator.SetBool("Attack", Attack);
                        Idle = true;
                        animator.SetBool("Idle", Idle);
                        attnum = 0;
                        timeLeft = count;
                    }
                    break;
                case (2):
                    Debug.Log("false2");
                    damage = 0;
                    if (timeLeft <= 0f)
                    {
                        Attack1 = false;
                        animator.SetBool("Attack1", Attack1);
                        Debug.Log("t3");
                        Idle = true;
                        animator.SetBool("Idle", Idle);
                        attnum = 0;
                        timeLeft = count;
                    }
                    break;
            }
        }
    }

    // Use this for initialization
    void Awake()
    {
        animator = GetComponent<Animator>();
    }

    // Update is called once per frame
    void Update()
    {
        timeLeft -= Time.deltaTime;
        MAttackDmg();
    }

    void Start()
    {
        timeLeft = count;
    }
}

    Read the article

  • Errors compiling XNA project Windows 8?

    - by ChocoMan
    I'm using Visual Studio 2012 and have just installed Windows 8 on my computer, then tried to compile a game I'm working on in XNA. When the game tried to build, I got the following errors:

Error 12 Could not copy the file "C:\Users\Computer\Documents\Visual Studio 2012\Projects\WindowsGame1\WindowsGame1\WindowsGame1\bin\x86\Debug\Content\SkyDome\skycirrus01.xnb" because it was not found.
Error 13 Could not copy the file "C:\Users\Computer\Documents\Visual Studio 2012\Projects\WindowsGame1\WindowsGame1\WindowsGame1\bin\x86\Debug\Content\Fonts\Arial.xnb" because it was not found.
Error 14 Could not copy the file "C:\Users\Computer\Documents\Visual Studio 2012\Projects\WindowsGame1\WindowsGame1\WindowsGame1\bin\x86\Debug\Content\Fonts\ISOCP2.xnb" because it was not found.

skycirrus01.xnb is built from an .fbx model, and Arial.xnb and ISOCP2.xnb are the spritefonts within my project. Prior to installing Windows 8 (store bought), my project compiled. Does anyone know how to convert these to .xnb files? I'm assuming that will make them compatible.

    Read the article

< Previous Page | 71 72 73 74 75 76 77 78 79 80 81 82  | Next Page >