Search Results

Search found 5122 results on 205 pages for 'max'.


  • Shrinking image by 57% and centering inside css structure

    - by Johua
    Hi, I'm really stuck. I'll go step by step and try to keep it short. This is the HTML structure: <li class="FAVwithimage"> <a href=""> <img src="pics/Joshua.png"> <span class="name">Joshua</span> <span class="comment">Developer</span> <span class="arrow"></span> </a> </li> Before I paste the CSS classes, some info about the exact goal to accomplish: Resize the picture (img) by 57%. If it cannot be done with CSS, then a jQuery/JavaScript solution. For example: the original pic is 240x240px and I need to resize it by 57%; that means that a pic of 400x400 would be bigger after resizing. After resizing, the picture needs to be centered vertically and horizontally inside 68x90 boundaries. So you have an LI element, which has an A element, and inside A we have an IMG; the IMG is resized by 57% and centered, where the maximum width can of course be 68px and the maximum height 90px. Now for that to work I was adding a SPAN element around the IMG. This is what I was thinking: <li class="FAVwithimage"> <a href=""> <span class="picHolder"><img src="pics/Joshua.png"></span> <span class="name">Joshua</span> <span class="comment">Developer</span> <span class="arrow"></span> </a> </li> Then I would give the span element display:block and w=68px, h=90px. But unfortunately that didn't work. I know it's a long post but I've done my best to describe it simply. Below are the CSS classes and a picture showing what I need. li.FAVwithimage { height: 90px!important; } li.FAVwithimage a, li.FAVwithimage:hover a { height: 81px!important; } That's all that's relevant; I have not included the classes for name, comment, and arrow. And now the classes that are incomplete and refer to the IMG. li.FAVwithimage a span.picHolder{ /*put the picHolder to the beginning of the LI element*/ position: absolute; left: 0; top: 0; width: 68px; height: 90px; diplay:block; border:1px solid #F00; } The border is used just temporarily to show the actual picHolder. It is now at the beginning of the LI, and width and height are set. li.FAVwithimage span.picHolder img { max-width:68px!important; max-height:90px!important; } This is the class which should shrink the pic by 57% and center it inside picHolder. Here is a drawing describing what I need:
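    One way this could be tackled (a sketch, not from the original post): plain CSS cannot read an image's natural size, so a small jQuery pass can do the 57% scaling and the centering inside the 68x90 picHolder. Note also that the quoted rule contains diplay:block, which browsers silently ignore, so the span never actually becomes a block. The selector names and numbers below come from the markup above; everything else is illustrative:

        // Run once the images have loaded (e.g. inside $(window).load(...)).
        $('li.FAVwithimage span.picHolder img').each(function () {
            // Scale to 57%, but never beyond the 68x90 box, keeping the aspect ratio.
            var scale = Math.min(0.57, 68 / this.naturalWidth, 90 / this.naturalHeight);
            var w = Math.round(this.naturalWidth * scale);
            var h = Math.round(this.naturalHeight * scale);
            $(this).css({
                width: w + 'px',
                height: h + 'px',
                position: 'absolute',      // picHolder is position:absolute, so it
                left: '50%',               // acts as the containing block
                top: '50%',
                'margin-left': -(w / 2) + 'px',
                'margin-top': -(h / 2) + 'px'
            });
        });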

    Read the article

  • C - Error with read() of a file, storage in an array, and printing output properly

    - by ns1
    I am new to C, so I am not exactly sure where my error is. However, I do know that the great portion of the issue lies either in how I am storing the doubles in the d_buffer (double) array or the way I am printing it. Specifically, my output keeps printing extremely large numbers (with around 10-12 digits before the decimal point and a trail of zeros after it. Additionally, this is an adaptation of an older program to allow for double inputs, so I only really added the two if statements (in the "read" for loop and the "printf" for loop) and the d_buffer declaration. I would appreciate any input whatsoever as I have spent several hours on this error. #include <stdio.h> #include <fcntl.h> #include <sys/types.h> #include <unistd.h> #include <string.h> struct DataDescription { char fieldname[30]; char fieldtype; int fieldsize; }; /* ----------------------------------------------- eof(fd): returns 1 if file `fd' is out of data ----------------------------------------------- */ int eof(int fd) { char c; if ( read(fd, &c, 1) != 1 ) return(1); else { lseek(fd, -1, SEEK_CUR); return(0); } } void main() { FILE *fp; /* Used to access meta data */ int fd; /* Used to access user data */ /* ---------------------------------------------------------------- Variables to hold the description of the data - max 10 fields ---------------------------------------------------------------- */ struct DataDescription DataDes[10]; /* Holds data descriptions for upto 10 fields */ int n_fields; /* Actual # fields */ /* ------------------------------------------------------ Variables to hold the data - max 10 fields.... ------------------------------------------------------ */ char c_buffer[10][100]; /* For character data */ int i_buffer[10]; /* For integer data */ double d_buffer[10]; int i, j; int found; printf("Program for searching a mini database:\n"); /* ============================= Read in meta information ============================= */ fp = fopen("db-description", "r"); n_fields = 0; while ( fscanf(fp, "%s %c %d", DataDes[n_fields].fieldname, &DataDes[n_fields].fieldtype, &DataDes[n_fields].fieldsize) > 0 ) n_fields++; /* --- Prints meta information --- */ printf("\nThe database consists of these fields:\n"); for (i = 0; i < n_fields; i++) printf("Index %d: Fieldname `%s',\ttype = %c,\tsize = %d\n", i, DataDes[i].fieldname, DataDes[i].fieldtype, DataDes[i].fieldsize); printf("\n\n"); /* --- Open database file --- */ fd = open("db-data", O_RDONLY); /* --- Print content of the database file --- */ printf("\nThe database content is:\n"); while ( ! eof(fd) ) { /* ------------------ Read next record ------------------ */ for (j = 0; j < n_fields; j++) { if ( DataDes[j].fieldtype == 'I' ) read(fd, &i_buffer[j], DataDes[j].fieldsize); if ( DataDes[j].fieldtype == 'F' ) read(fd, &d_buffer[j], DataDes[j].fieldsize); if ( DataDes[j].fieldtype == 'C' ) read(fd, &c_buffer[j], DataDes[j].fieldsize); } double d; /* ------------------ Print it... 
------------------ */ for (j = 0; j < n_fields; j++) { if ( DataDes[j].fieldtype == 'I' ) printf("%d ", i_buffer[j]); if ( DataDes[j].fieldtype == 'F' ) d = d_buffer[j]; printf("%lf ", d); if ( DataDes[j].fieldtype == 'C' ) printf("%s ", c_buffer[j]); } printf("\n"); } printf("\n"); printf("\n"); } Post edits output: 16777216 0.000000 107245694331284094976.000000 107245694331284094976.000000 Pi 33554432 107245694331284094976.000000 2954938175610156848888276006519501238173891974277081114627768841840801736306392481516295906896346039950625609765296207682724801406770458881439696544971142710292689518104183685723154223544599940711614138798312668264956190761622328617992192.000000 2954938175610156848888276006519501238173891974277081114627768841840801736306392481516295906896346039950625609765296207682724801406770458881439696544971142710292689518104183685723154223544599940711614138798312668264956190761622328617992192.000000 Secret Key 50331648 2954938175610156848888276006519501238173891974277081114627768841840801736306392481516295906896346039950625609765296207682724801406770458881439696544971142710292689518104183685723154223544599940711614138798312668264956190761622328617992192.000000 -0.000000 -0.000000 The number E Expected Output: 3 rows of data ending with the number "e = 2.18281828" To reproduce the problem, the following two files need to be in the same directory as the lookup-data.c file: - db-data - db-description
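    Two things stand out in the quoted code (a sketch of the likely fix, not a tested answer). First, in the print loop the printf("%lf ", d) is not inside the 'F' branch, so the often-uninitialized double d is printed for every field of every record. Second, if db-description declares the floating-point fields with a size of 4, then read(fd, &d_buffer[j], 4) fills only half of a double and produces exactly the kind of huge garbage values shown above; in that case the bytes should be read into a float and then widened. With braces added, the print loop would look like this:

        for (j = 0; j < n_fields; j++) {
            if (DataDes[j].fieldtype == 'I')
                printf("%d ", i_buffer[j]);
            if (DataDes[j].fieldtype == 'F') {   /* braces keep the printf inside the branch */
                d = d_buffer[j];
                printf("%lf ", d);
            }
            if (DataDes[j].fieldtype == 'C')
                printf("%s ", c_buffer[j]);
        }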

    Read the article

  • MaxConnections: is it a Client or Request count?

    - by ctacke
    IIS allows me to set a MaxConnections value in the configuration. What is unclear (to me anyway) is what that number means exactly, and I can't find a definitive documented answer. Is this the maximum number of clients that can connect simultaneously, or is it the maximum number of requests that can be running concurrently? For example, if I have a page that loads up some images, CSS files, scripts and content, each of those could reasonably be on a separate request socket with a modern browser. Is each of those sockets counted against the "max", or is it counted only as 1 since they are all requests from the same client address?

    Read the article

  • Permissions problem running Apache ActiveMQ

    - by Edd
    I'm wanting to use Apache ActiveMQ on Ubuntu 12.04 LTS, but am running into what looks like a permissions problem when I try to run it as follows: edd:~$ sudo activemq --version INFO: Loading '/usr/share/activemq/activemq-options' INFO: Using java '/usr/lib/jvm/java-6-openjdk//bin/java' INFO: changing to user 'activemq' to invoke java Java Runtime: Sun Microsystems Inc. 1.6.0_24 /usr/lib/jvm/java-6-openjdk-amd64/jre Heap sizes: current=502464k free=499842k max=502464k JVM args: -Xms512M -Xmx512M -Dorg.apache.activemq.UseDedicatedTaskRunner=true -Dactivemq.classpath=/var/lib/activemq//conf;; -Dactivemq.home=/usr/share/activemq -Dactivemq.base=/var/lib/activemq/ ACTIVEMQ_HOME: /usr/share/activemq ACTIVEMQ_BASE: /var/lib/activemq ActiveMQ 5.5.0 For help or more information please see: http://activemq.apache.org edd:~$ sudo activemq start INFO: Loading '/usr/share/activemq/activemq-options' INFO: Using java '/usr/lib/jvm/java-6-openjdk//bin/java' INFO: Starting - inspect logfiles specified in logging.properties and log4j.properties to get details INFO: changing to user 'activemq' to invoke java -su: line 2: /var/run/activemq.pid: Permission denied INFO: pidfile created : '/var/run/activemq.pid' (pid '7811') edd:~$ sudo activemq status INFO: Loading '/usr/share/activemq/activemq-options' INFO: Using java '/usr/lib/jvm/java-6-openjdk//bin/java' ActiveMQ not running edd:~$ ps ax | grep 'activemq' 8040 pts/0 S+ 0:00 grep --color=auto activemq I installed ActiveMQ using sudo apt-get install activemq. Apologies if there's any additional information missing - I'm fairly new to Linux as you may well have guessed!
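    Not part of the original post, but the transcript points at the pidfile: the init script switches to the activemq user and then cannot write /var/run/activemq.pid, which a root-owned stale file (or a root-only /var/run) would explain. A couple of things worth checking, assuming the path and user name shown in the output above:

        # Who owns the pidfile (if it exists), and can the activemq user write it?
        ls -l /var/run/activemq.pid
        sudo -u activemq touch /var/run/activemq.pid

        # If the file is there but owned by root, hand it to the service user and retry
        sudo chown activemq:activemq /var/run/activemq.pid
        sudo activemq start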

    Read the article

  • Sublinear Extra Space MergeSort

    - by hulkmeister
    I am reviewing basic algorithms from a book called Algorithms by Robert Sedgewick, and I came across a problem in MergeSort that I am, sad to say, having difficulty solving. The problem is below: Sublinear Extra Space. Develop a merge implementation that reduces the extra space requirement to max(M, N/M), based on the following idea: Divide the array into N/M blocks of size M (for simplicity in this description, assume that N is a multiple of M). Then, (i) considering the blocks as items with their first key as the sort key, sort them using selection sort; and (ii) run through the array merging the first block with the second, then the second block with the third, and so forth. My problem with the exercise is that, based on the idea Sedgewick recommends, the following set of blocks will not end up sorted: {0, 10, 12}, {3, 9, 11}, {5, 8, 13}. The algorithm I use is the following: Divide the full array into subarrays of size M. Run Selection Sort on each of the subarrays. Merge each of the subarrays using the method Sedgewick recommends in (ii). (This is where I encounter the problem of where to store the results after the merge.) This leads to wanting to increase the size of the auxiliary space needed to handle at least two subarrays at a time (for merging), but based on the specifications of the problem, that is not allowed. I have also considered using the original array as space for one subarray and using the auxiliary space for the second subarray. However, I can't envision a solution that does not end up overwriting the entries of the first subarray. Any ideas on other ways this can be done? NOTE: If this is supposed to be on StackOverflow.com, please let me know how I can move it. I posted here because the question is academic.
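    Not from the original question, but since the sticking point is where to put the results of each merge: step (ii) can be done with only M extra slots by copying the left block into an auxiliary buffer and merging it with the right block back into place. A minimal Java sketch under that assumption (names are illustrative):

        class BlockMerge {
            /** Merge a[lo..mid-1] (exactly M elements) with a[mid..hi-1],
             *  using an auxiliary buffer of length >= M as the only extra space. */
            static void mergeWithBuffer(int[] a, int lo, int mid, int hi, int[] buf) {
                int m = mid - lo;
                System.arraycopy(a, lo, buf, 0, m);        // move the left block aside
                int i = 0, j = mid, k = lo;
                while (i < m && j < hi)
                    a[k++] = (buf[i] <= a[j]) ? buf[i++] : a[j++];
                while (i < m)                              // drain what is left of the buffer;
                    a[k++] = buf[i++];                     // anything still in a[j..hi-1] is already placed
            }
        }

    As the {0, 10, 12}, {3, 9, 11}, {5, 8, 13} example in the question shows, a single left-to-right pass of these pairwise merges does not necessarily sort the whole array, so either the merging pass has to be repeated or the exercise's description has to be read more generously.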

    Read the article

  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
    Parallel LINQ and the Task Parallel Library contain many options for configuration.  Although the default configuration options are often ideal, there are times when customizing the behavior is desirable.  Both frameworks provide full configuration support. When working with Data Parallelism, there is one primary configuration option we often need to control – the number of threads we want the system to use when parallelizing our routine.  By default, PLINQ and the TPL both use the ThreadPool to schedule tasks.  Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal.  However, there are times that the default behavior is not appropriate.  For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within both threads, you might want to consider restricting each parallel operation to using a subset of the processing cores of the system.  Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches. In the Task Parallel Library, configuration is handled via the ParallelOptions class.  All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument. We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property.  For example, let’s revisit one of the simple data parallel examples from Part 2: Parallel.For(0, pixelData.GetUpperBound(0), row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Here, we’re looping through an image, and calling a method on each pixel in the image.  If this was being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we likely would want to restrict this to using half of the cores on the system.  This could be accomplished easily by doing: var options = new ParallelOptions(); options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1); Parallel.For(0, pixelData.GetUpperBound(0), options, row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Now, we’re restricting this routine to using no more than half the cores in our system.  Note that I included a check to prevent a single core system from supplying zero; without this check, we’d potentially cause an exception.  I also did not hard code a specific value for the MaxDegreeOfParallelism property.  One of our goals when parallelizing a routine is allowing it to scale on better hardware.  Specifying a hard-coded value would contradict that goal. Parallel LINQ also supports configuration, and in fact, has quite a few more options for configuring the system.  
The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads.  In PLINQ, this is done via a new extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism. Let’s revisit our declarative data parallelism sample from Part 6: double min = collection.AsParallel().Min(item => item.PerformComputation()); Here, we’re performing a computation on each element in the collection, and saving the minimum value of this operation.  If we wanted to restrict this to a limited number of threads, we would add our new extension method: int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1); double min = collection .AsParallel() .WithDegreeOfParallelism(maxThreads) .Min(item => item.PerformComputation()); This automatically restricts the PLINQ query to half of the threads on the system. PLINQ provides some additional configuration options.  By default, PLINQ will occasionally revert to processing a query serially.  This occurs because many queries, if parallelized, would typically cause an overall slowdown compared to a serial processing equivalent.  By analyzing the “shape” of the query, PLINQ often decides to run a query serially instead of in parallel.  This can occur for (taken from MSDN): Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices. Queries that contain a Take, TakeWhile, Skip, SkipWhile operator and where indices in the source sequence are not in the original order. Queries that contain Zip or SequenceEquals, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList(T)). Queries that contain Concat, unless it is applied to indexable data sources. Queries that contain Reverse, unless applied to an indexable data source. If the specific query follows these rules, PLINQ will run the query on a single thread.  However, none of these rules look at the specific work being done in the delegates, only at the “shape” of the query.  There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly.  In these cases, you can override the default behavior by using the WithExecutionMode extension method.  This would be done like so: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .Select(i => i.PerformComputation()) .Reverse(); Here, the default behavior would be to not parallelize the query unless collection implemented IList<T>.  We can force this to run in parallel by adding the WithExecutionMode extension method in the method chain. Finally, PLINQ has the ability to configure how results are returned.  When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result.  For example, the method above returns a new, reversed collection.  In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread. This streaming introduces overhead.  IEnumerable<T> isn’t designed with thread safety in mind, so the system needs to handle merging the parallel processes back into a single stream, which introduces synchronization issues.  
There are two extremes of how this could be accomplished, but both extremes have disadvantages. The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller.  This would mean that the calling thread would have access to the data as soon as data is available, which is the benefit of this approach.  However, it also means that every item is introducing synchronization overhead, since each item needs to be merged individually. On the other extreme, the system could wait until all of the results from all of the threads were ready, then push all of the results back to the calling thread in one shot.  The advantage here is that the least amount of synchronization is added to the system, which means the query will, on the whole, run the fastest.  However, the calling thread will have to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned. The default behavior in PLINQ is actually between these two extremes.  By default, PLINQ maintains an internal buffer, and chooses an optimal buffer size to maintain.  Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks.  This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios. However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes.  This can be done by using the WithMergeOptions extension method.  For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they are available, with no buffering.  This can be done by changing our above routine to: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .WithMergeOptions(ParallelMergeOptions.NotBuffered) .Select(i => i.PerformComputation()) .Reverse(); On the other hand, if we are already on a background thread, and we want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .WithMergeOptions(ParallelMergeOptions.FullyBuffered) .Select(i => i.PerformComputation()) .Reverse(); Notice, also, that you can specify multiple configuration options in a parallel query.  By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.

    Read the article

  • Recommendation for Regex editor?

    - by Tim
    I asked for recommendations for Regex editors on stackoverflow a while ago. Following is one of the replies: What is "good" depends on what is most useful to you. For me, though, these are the key features for a good regex editor (besides the ability to test and create regular expressions, of course, which is a prerequisite to be called a "regex editor" :-) : Displays matches hierarchically with captured groups. Explains/analyzes an entered regex in plain English, showing a hierarchical tree. Translates your regex into code for a language of your choice. RegexBuddy, as @Max mentioned, does all these, but there is also a free alternative, Expresso, that also does them very well. These two utilities are the only ones I have found with the crucial ability to explain a regex. The features sound very attractive to me. But I later found that both are Windows-only. I tried to install Expresso, the free one, via Wine, but ran into some trouble, about which I asked in another post. So I was wondering whether there are any applications for Ubuntu comparable to RegexBuddy and Expresso? If installing Expresso requires the .NET Framework, is it still worth installing on Ubuntu? Thanks and regards!

    Read the article

  • TOTD #166: Using NoSQL database in your Java EE 6 Applications on GlassFish - MongoDB for now!

    - by arungupta
    The Java EE 6 platform includes the Java Persistence API to work with RDBMS. The JPA specification defines a comprehensive API that covers, but is not restricted to, how a database table can be mapped to a POJO and vice versa, and it provides mechanisms for how a PersistenceContext can be injected into a @Stateless bean and then used for performing different operations on the database table and writing typesafe queries. There are several well known advantages of RDBMS, but the NoSQL movement has gained traction over the past couple of years. The NoSQL databases are not intended to be a replacement for the mainstream RDBMS. As Philosophy of NoSQL explains, a NoSQL database is designed for casual use where all the features typically provided by an RDBMS are not required. The name "NoSQL" describes a category of databases that is better known for what it is not than for what it is. The basic principles of NoSQL databases are: No need to have a pre-defined schema, which makes them schema-less databases. Addition of new properties to existing objects is easy and does not require ALTER TABLE. The unstructured data gives flexibility to change the format of data any time without downtime or reduced service levels. Also there are no joins happening on the server because there is no structure and thus no relation between the data. Scalability and performance are more important than the entire set of functionality typically provided by an RDBMS. This set of databases provides eventual consistency and/or transactions restricted to single items, with more focus on CRUD. Not restricted to SQL to access the information stored in the backing database. Designed to scale out (horizontal) instead of scale up (vertical). This is important knowing that databases, and everything else as well, are moving into the cloud. An RDBMS can scale out using sharding, but that requires complex management and is not for the faint of heart. Unlike RDBMSs, which require a separate caching tier, most of the NoSQL databases come with integrated caching. Designed for less management; simpler data models lead to lower administration overhead as well. There are primarily three types of NoSQL databases: Key-Value stores (e.g. Cassandra and Riak), Document databases (MongoDB or CouchDB), and Graph databases (Neo4J). You may think NoSQL is a panacea, but as I mentioned above they are not meant to replace the mainstream databases, and here is why: RDBMSs have been around for many years, are very stable, and are functionally rich. This is something CIOs and CTOs can bet their money on without much worry. There is a reason 98% of Fortune 100 companies run Oracle :-) NoSQL is cutting edge and brings excitement to developers, but enterprises are cautious about it. Commercial databases like Oracle are well supported by the backing enterprises in terms of providing support resources on a global scale. There is a full ecosystem built around these commercial databases providing training, performance tuning, architecture guidance, and everything else. NoSQL is fairly new and typically backed by a single company not able to meet the scale of these big enterprises. NoSQL databases are good for CRUDing operations, but business intelligence is extremely important for enterprises to stay competitive. RDBMS provide extensive tooling to generate this data; that was not the original intention of NoSQL databases, which are lacking in that area. Generating any meaningful information other than CRUDing requires extensive programming. 
Not suited for complex transactions such as banking systems or other highly transactional applications requiring 2-phase commit. SQL cannot be used with NoSQL databases and writing simple queries can be involving. Enough talking, lets take a look at some code. This blog has published multiple blogs on how to access a RDBMS using JPA in a Java EE 6 application. This Tip Of The Day (TOTD) will show you can use MongoDB (a document-oriented database) with a typical 3-tier Java EE 6 application. Lets get started! The complete source code of this project can be downloaded here. Download MongoDB for your platform from here (1.8.2 as of this writing) and start the server as: arun@ArunUbuntu:~/tools/mongodb-linux-x86_64-1.8.2/bin$./mongod./mongod --help for help and startup optionsSun Jun 26 20:41:11 [initandlisten] MongoDB starting : pid=11210port=27017 dbpath=/data/db/ 64-bit Sun Jun 26 20:41:11 [initandlisten] db version v1.8.2, pdfile version4.5Sun Jun 26 20:41:11 [initandlisten] git version:433bbaa14aaba6860da15bd4de8edf600f56501bSun Jun 26 20:41:11 [initandlisten] build sys info: Linuxbs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 2017:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41Sun Jun 26 20:41:11 [initandlisten] waiting for connections on port 27017Sun Jun 26 20:41:11 [websvr] web admin interface listening on port 28017 The default directory for the database is /data/db and needs to be created as: sudo mkdir -p /data/db/sudo chown `id -u` /data/db You can specify a different directory using "--dbpath" option. Refer to Quickstart for your specific platform. Using NetBeans, create a Java EE 6 project and make sure to enable CDI and add JavaServer Faces framework. Download MongoDB Java Driver (2.6.3 of this writing) and add it to the project library by selecting "Properties", "LIbraries", "Add Library...", creating a new library by specifying the location of the JAR file, and adding the library to the created project. Edit the generated "index.xhtml" such that it looks like: <h1>Add a new movie</h1><h:form> Name: <h:inputText value="#{movie.name}" size="20"/><br/> Year: <h:inputText value="#{movie.year}" size="6"/><br/> Language: <h:inputText value="#{movie.language}" size="20"/><br/> <h:commandButton actionListener="#{movieSessionBean.createMovie}" action="show" title="Add" value="submit"/></h:form> This page has a simple HTML form with three text boxes and a submit button. The text boxes take name, year, and language of a movie and the submit button invokes the "createMovie" method of "movieSessionBean" and then render "show.xhtml". Create "show.xhtml" ("New" -> "Other..." -> "Other" -> "XHTML File") such that it looks like: <head> <title><h1>List of movies</h1></title> </head> <body> <h:form> <h:dataTable value="#{movieSessionBean.movies}" var="m" > <h:column><f:facet name="header">Name</f:facet>#{m.name}</h:column> <h:column><f:facet name="header">Year</f:facet>#{m.year}</h:column> <h:column><f:facet name="header">Language</f:facet>#{m.language}</h:column> </h:dataTable> </h:form> This page shows the name, year, and language of all movies stored in the database so far. The list of movies is returned by "movieSessionBean.movies" property. 
Now create the "Movie" class such that it looks like: import com.mongodb.BasicDBObject;import com.mongodb.BasicDBObject;import com.mongodb.DBObject;import javax.enterprise.inject.Model;import javax.validation.constraints.Size;/** * @author arun */@Modelpublic class Movie { @Size(min=1, max=20) private String name; @Size(min=1, max=20) private String language; private int year; // getters and setters for "name", "year", "language" public BasicDBObject toDBObject() { BasicDBObject doc = new BasicDBObject(); doc.put("name", name); doc.put("year", year); doc.put("language", language); return doc; } public static Movie fromDBObject(DBObject doc) { Movie m = new Movie(); m.name = (String)doc.get("name"); m.year = (int)doc.get("year"); m.language = (String)doc.get("language"); return m; } @Override public String toString() { return name + ", " + year + ", " + language; }} Other than the usual boilerplate code, the key methods here are "toDBObject" and "fromDBObject". These methods provide a conversion from "Movie" -> "DBObject" and vice versa. The "DBObject" is a MongoDB class that comes as part of the mongo-2.6.3.jar file and which we added to our project earlier.  The complete javadoc for 2.6.3 can be seen here. Notice, this class also uses Bean Validation constraints and will be honored by the JSF layer. Finally, create "MovieSessionBean" stateless EJB with all the business logic such that it looks like: package org.glassfish.samples;import com.mongodb.BasicDBObject;import com.mongodb.DB;import com.mongodb.DBCollection;import com.mongodb.DBCursor;import com.mongodb.DBObject;import com.mongodb.Mongo;import java.net.UnknownHostException;import java.util.ArrayList;import java.util.List;import javax.annotation.PostConstruct;import javax.ejb.Stateless;import javax.inject.Inject;import javax.inject.Named;/** * @author arun */@Stateless@Namedpublic class MovieSessionBean { @Inject Movie movie; DBCollection movieColl; @PostConstruct private void initDB() throws UnknownHostException { Mongo m = new Mongo(); DB db = m.getDB("movieDB"); movieColl = db.getCollection("movies"); if (movieColl == null) { movieColl = db.createCollection("movies", null); } } public void createMovie() { BasicDBObject doc = movie.toDBObject(); movieColl.insert(doc); } public List<Movie> getMovies() { List<Movie> movies = new ArrayList(); DBCursor cur = movieColl.find(); System.out.println("getMovies: Found " + cur.size() + " movie(s)"); for (DBObject dbo : cur.toArray()) { movies.add(Movie.fromDBObject(dbo)); } return movies; }} The database is initialized in @PostConstruct. Instead of a working with a database table, NoSQL databases work with a schema-less document. The "Movie" class is the document in our case and stored in the collection "movies". The collection allows us to perform query functions on all movies. The "getMovies" method invokes "find" method on the collection which is equivalent to the SQL query "select * from movies" and then returns a List<Movie>. Also notice that there is no "persistence.xml" in the project. Right-click and run the project to see the output as: Enter some values in the text box and click on enter to see the result as: If you reached here then you've successfully used MongoDB in your Java EE 6 application, congratulations! Some food for thought and further play ... SQL to MongoDB mapping shows mapping between traditional SQL -> Mongo query language. Tutorial shows fun things you can do with MongoDB. 
Try the interactive online shell  The cookbook provides common ways of using MongoDB In terms of this project, here are some tasks that can be tried: Encapsulate database management in a JPA persistence provider. Is it even worth it because the capabilities are going to be very different ? MongoDB uses "BSonObject" class for JSON representation, add @XmlRootElement on a POJO and how a compatible JSON representation can be generated. This will make the fromXXX and toXXX methods redundant.

    Read the article


  • Add collision detection to enemy sprites?

    - by xBroak
    i'd like to add the same collision detection used by the player sprite to the enemy sprites or 'creeps' ive added all the relevant code I can see yet collisons are still not being detected and handled, please find below the class, I have no idea what is wrong currently, the list of walls to collide with is 'wall_list' import pygame import pauseScreen as dm import re from pygame.sprite import Sprite from pygame import Rect, Color from random import randint, choice from vec2d import vec2d from simpleanimation import SimpleAnimation import displattxt black = (0,0,0) white = (255,255,255) blue = (0,0,255) green = (101,194,151) global currentEditTool currentEditTool = "Tree" global editMap editMap = False open('MapMaker.txt', 'w').close() def draw_background(screen, tile_img): screen.fill(black) img_rect = tile_img.get_rect() global rect rect = img_rect nrows = int(screen.get_height() / img_rect.height) + 1 ncols = int(screen.get_width() / img_rect.width) + 1 for y in range(nrows): for x in range(ncols): img_rect.topleft = (x * img_rect.width, y * img_rect.height) screen.blit(tile_img, img_rect) def changeTool(): if currentEditTool == "Tree": None elif currentEditTool == "Rock": None def pauseGame(): red = 255, 0, 0 green = 0,255, 0 blue = 0, 0,255 screen.fill(black) pygame.display.update() if editMap == False: choose = dm.dumbmenu(screen, [ 'Resume', 'Enable Map Editor', 'Quit Game'], 64,64,None,32,1.4,green,red) if choose == 0: print("hi") elif choose ==1: global editMap editMap = True elif choose ==2: print("bob") elif choose ==3: print("bob") elif choose ==4: print("bob") else: None else: choose = dm.dumbmenu(screen, [ 'Resume', 'Disable Map Editor', 'Quit Game'], 64,64,None,32,1.4,green,red) if choose == 0: print("Resume") elif choose ==1: print("Dis ME") global editMap editMap = False elif choose ==2: print("bob") elif choose ==3: print("bob") elif choose ==4: print("bob") else: None class Wall(pygame.sprite.Sprite): # Constructor function def __init__(self,x,y,width,height): pygame.sprite.Sprite.__init__(self) self.image = pygame.Surface([width, height]) self.image.fill(green) self.rect = self.image.get_rect() self.rect.y = y self.rect.x = x class insertTree(pygame.sprite.Sprite): def __init__(self,x,y,width,height, typ): pygame.sprite.Sprite.__init__(self) self.image = pygame.image.load("images/map/tree.png").convert() self.image.set_colorkey(white) self.rect = self.image.get_rect() self.rect.y = y self.rect.x = x class insertRock(pygame.sprite.Sprite): def __init__(self,x,y,width,height, typ): pygame.sprite.Sprite.__init__(self) self.image = pygame.image.load("images/map/rock.png").convert() self.image.set_colorkey(white) self.rect = self.image.get_rect() self.rect.y = y self.rect.x = x class Creep(pygame.sprite.Sprite): """ A creep sprite that bounces off walls and changes its direction from time to time. """ change_x=0 change_y=0 def __init__( self, screen, creep_image, explosion_images, field, init_position, init_direction, speed): """ Create a new Creep. screen: The screen on which the creep lives (must be a pygame Surface object, such as pygame.display) creep_image: Image (surface) object for the creep explosion_images: A list of image objects for the explosion animation. field: A Rect specifying the 'playing field' boundaries. The Creep will bounce off the 'walls' of this field. init_position: A vec2d or a pair specifying the initial position of the creep on the screen. init_direction: A vec2d or a pair specifying the initial direction of the creep. 
Must have an angle that is a multiple of 45 degres. speed: Creep speed, in pixels/millisecond (px/ms) """ Sprite.__init__(self) self.screen = screen self.speed = speed self.field = field self.rect = creep_image.get_rect() # base_image holds the original image, positioned to # angle 0. # image will be rotated. # self.base_image = creep_image self.image = self.base_image self.explosion_images = explosion_images # A vector specifying the creep's position on the screen # self.pos = vec2d(init_position) # The direction is a normalized vector # self.direction = vec2d(init_direction).normalized() self.state = Creep.ALIVE self.health = 15 def is_alive(self): return self.state in (Creep.ALIVE, Creep.EXPLODING) def changespeed(self,x,y): self.change_x+=x self.change_y+=y def update(self, time_passed, walls): """ Update the creep. time_passed: The time passed (in ms) since the previous update. """ if self.state == Creep.ALIVE: # Maybe it's time to change the direction ? # self._change_direction(time_passed) # Make the creep point in the correct direction. # Since our direction vector is in screen coordinates # (i.e. right bottom is 1, 1), and rotate() rotates # counter-clockwise, the angle must be inverted to # work correctly. # self.image = pygame.transform.rotate( self.base_image, -self.direction.angle) # Compute and apply the displacement to the position # vector. The displacement is a vector, having the angle # of self.direction (which is normalized to not affect # the magnitude of the displacement) # displacement = vec2d( self.direction.x * self.speed * time_passed, self.direction.y * self.speed * time_passed) self.pos += displacement # When the image is rotated, its size is changed. # We must take the size into account for detecting # collisions with the walls. # self.image_w, self.image_h = self.image.get_size() bounds_rect = self.field.inflate( -self.image_w, -self.image_h) if self.pos.x < bounds_rect.left: self.pos.x = bounds_rect.left self.direction.x *= -1 elif self.pos.x > bounds_rect.right: self.pos.x = bounds_rect.right self.direction.x *= -1 elif self.pos.y < bounds_rect.top: self.pos.y = bounds_rect.top self.direction.y *= -1 elif self.pos.y > bounds_rect.bottom: self.pos.y = bounds_rect.bottom self.direction.y *= -1 # collision detection old_x=bounds_rect.left new_x=old_x+self.direction.x bounds_rect.left = new_x # hit a wall? collide = pygame.sprite.spritecollide(self, walls, False) if collide: # yes bounds_rect.left=old_x old_y=self.pos.y new_y=old_y+self.direction.y self.pos.y = new_y collide = pygame.sprite.spritecollide(self, walls, False) if collide: # yes self.pos.y=old_y elif self.state == Creep.EXPLODING: if self.explode_animation.active: self.explode_animation.update(time_passed) else: self.state = Creep.DEAD self.kill() elif self.state == Creep.DEAD: pass #------------------ PRIVATE PARTS ------------------# # States the creep can be in. # # ALIVE: The creep is roaming around the screen # EXPLODING: # The creep is now exploding, just a moment before dying. # DEAD: The creep is dead and inactive # (ALIVE, EXPLODING, DEAD) = range(3) _counter = 0 def _change_direction(self, time_passed): """ Turn by 45 degrees in a random direction once per 0.4 to 0.5 seconds. """ self._counter += time_passed if self._counter > randint(400, 500): self.direction.rotate(45 * randint(-1, 1)) self._counter = 0 def _point_is_inside(self, point): """ Is the point (given as a vec2d) inside our creep's body? 
""" img_point = point - vec2d( int(self.pos.x - self.image_w / 2), int(self.pos.y - self.image_h / 2)) try: pix = self.image.get_at(img_point) return pix[3] > 0 except IndexError: return False def _decrease_health(self, n): """ Decrease my health by n (or to 0, if it's currently less than n) """ self.health = max(0, self.health - n) if self.health == 0: self._explode() def _explode(self): """ Starts the explosion animation that ends the Creep's life. """ self.state = Creep.EXPLODING pos = ( self.pos.x - self.explosion_images[0].get_width() / 2, self.pos.y - self.explosion_images[0].get_height() / 2) self.explode_animation = SimpleAnimation( self.screen, pos, self.explosion_images, 100, 300) global remainingCreeps remainingCreeps-=1 if remainingCreeps == 0: print("all dead") def draw(self): """ Blit the creep onto the screen that was provided in the constructor. """ if self.state == Creep.ALIVE: # The creep image is placed at self.pos. To allow for # smooth movement even when the creep rotates and the # image size changes, its placement is always # centered. # self.draw_rect = self.image.get_rect().move( self.pos.x - self.image_w / 2, self.pos.y - self.image_h / 2) self.screen.blit(self.image, self.draw_rect) # The health bar is 15x4 px. # health_bar_x = self.pos.x - 7 health_bar_y = self.pos.y - self.image_h / 2 - 6 self.screen.fill( Color('red'), (health_bar_x, health_bar_y, 15, 4)) self.screen.fill( Color('green'), ( health_bar_x, health_bar_y, self.health, 4)) elif self.state == Creep.EXPLODING: self.explode_animation.draw() elif self.state == Creep.DEAD: pass def mouse_click_event(self, pos): """ The mouse was clicked in pos. """ if self._point_is_inside(vec2d(pos)): self._decrease_health(3) #begin new player class Player(pygame.sprite.Sprite): change_x=0 change_y=0 frame = 0 def __init__(self,x,y): pygame.sprite.Sprite.__init__(self) # LOAD PLATER IMAGES # Set height, width self.images = [] for i in range(1,17): img = pygame.image.load("images/player/" + str(i)+".png").convert() #player images img.set_colorkey(white) self.images.append(img) self.image = self.images[0] self.rect = self.image.get_rect() self.rect.y = y self.rect.x = x self.health = 15 self.image_w, self.image_h = self.image.get_size() health_bar_x = self.rect.x - 7 health_bar_y = self.rect.y - self.image_h / 2 - 6 screen.fill( Color('red'), (health_bar_x, health_bar_y, 15, 4)) screen.fill( Color('green'), ( health_bar_x, health_bar_y, self.health, 4)) def changespeed(self,x,y): self.change_x+=x self.change_y+=y def _decrease_health(self, n): """ Decrease my health by n (or to 0, if it's currently less than n) """ self.health = max(0, self.health - n) if self.health == 0: self._explode() def update(self,walls): # collision detection old_x=self.rect.x new_x=old_x+self.change_x self.rect.x = new_x # hit a wall? collide = pygame.sprite.spritecollide(self, walls, False) if collide: # yes self.rect.x=old_x old_y=self.rect.y new_y=old_y+self.change_y self.rect.y = new_y collide = pygame.sprite.spritecollide(self, walls, False) if collide: # yes self.rect.y=old_y # right to left if self.change_x < 0: self.frame += 1 if self.frame > 3*4: self.frame = 0 # Grab the image, divide by 4 # every 4 frames. self.image = self.images[self.frame//4] # Move left to right. # images 4...7 instead of 0...3. 
if self.change_x > 0: self.frame += 1 if self.frame > 3*4: self.frame = 0 self.image = self.images[self.frame//4+4] if self.change_y > 0: self.frame += 1 if self.frame > 3*4: self.frame = 0 self.image = self.images[self.frame//4+4+4] if self.change_y < 0: self.frame += 1 if self.frame > 3*4: self.frame = 0 self.image = self.images[self.frame//4+4+4+4] score = 0 # initialize pyGame pygame.init() # 800x600 sized screen global screen screen = pygame.display.set_mode([800, 600]) screen.fill(black) #bg_tile_img = pygame.image.load('images/map/grass.png').convert_alpha() #draw_background(screen, bg_tile_img) #pygame.display.flip() # Set title pygame.display.set_caption('Test') #background = pygame.Surface(screen.get_size()) #background = background.convert() #background.fill(black) # Create the player player = Player( 50,50 ) player.rect.x=50 player.rect.y=50 movingsprites = pygame.sprite.RenderPlain() movingsprites.add(player) # Make the walls. (x_pos, y_pos, width, height) global wall_list wall_list=pygame.sprite.RenderPlain() wall=Wall(0,0,10,600) # left wall wall_list.add(wall) wall=Wall(10,0,790,10) # top wall wall_list.add(wall) #wall=Wall(10,200,100,10) # poke wall wall_list.add(wall) wall=Wall(790,0,10,600) #(x,y,thickness, height) wall_list.add(wall) wall=Wall(10,590,790,10) #(x,y,thickness, height) wall_list.add(wall) f = open('MapMaker.txt') num_lines = sum(1 for line in f) print(num_lines) lineCount = 0 with open("MapMaker.txt") as infile: for line in infile: f = open('MapMaker.txt') print(line) coords = line.split(',') #print(coords[0]) #print(coords[1]) #print(coords[2]) #print(coords[3]) #print(coords[4]) if "tree" in line: print("tree in") wall=insertTree(int(coords[0]),int(coords[1]), int(coords[2]),int(coords[3]),coords[4]) wall_list.add(wall) elif "rock" in line: print("rock in") wall=insertRock(int(coords[0]),int(coords[1]), int(coords[2]),int(coords[3]),coords[4] ) wall_list.add(wall) width = 20 height = 540 height = height - 48 for i in range(0,23): width = width + 32 name = insertTree(width,540,790,10,"tree") #wall_list.add(name) name = insertTree(width,height,690,10,"tree") #wall_list.add(name) CREEP_SPAWN_TIME = 200 # frames creep_spawn = CREEP_SPAWN_TIME clock = pygame.time.Clock() bg_tile_img = pygame.image.load('images/map/grass.png').convert() img_rect = bg_tile_img FIELD_RECT = Rect(50, 50, 700, 500) CREEP_FILENAMES = [ 'images/player/1.png', 'images/player/1.png', 'images/player/1.png'] N_CREEPS = 3 creep_images = [ pygame.image.load(filename).convert_alpha() for filename in CREEP_FILENAMES] explosion_img = pygame.image.load('images/map/tree.png').convert_alpha() explosion_images = [ explosion_img, pygame.transform.rotate(explosion_img, 90)] creeps = pygame.sprite.RenderPlain() done = False #bg_tile_img = pygame.image.load('images/map/grass.png').convert() #draw_background(screen, bg_tile_img) totalCreeps = 0 remainingCreeps = 3 while done == False: creep_images = pygame.image.load("images/player/1.png").convert() creep_images.set_colorkey(white) draw_background(screen, bg_tile_img) if len(creeps) != N_CREEPS: if totalCreeps < N_CREEPS: totalCreeps = totalCreeps + 1 print(totalCreeps) creeps.add( Creep( screen=screen, creep_image=creep_images, explosion_images=explosion_images, field=FIELD_RECT, init_position=( randint(FIELD_RECT.left, FIELD_RECT.right), randint(FIELD_RECT.top, FIELD_RECT.bottom)), init_direction=(choice([-1, 1]), choice([-1, 1])), speed=0.01)) for creep in creeps: creep.update(60,wall_list) creep.draw() for event in pygame.event.get(): if 
event.type == pygame.QUIT: done=True if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: player.changespeed(-2,0) creep.changespeed(-2,0) if event.key == pygame.K_RIGHT: player.changespeed(2,0) creep.changespeed(2,0) if event.key == pygame.K_UP: player.changespeed(0,-2) creep.changespeed(0,-2) if event.key == pygame.K_DOWN: player.changespeed(0,2) creep.changespeed(0,2) if event.key == pygame.K_ESCAPE: pauseGame() if event.key == pygame.K_1: global currentEditTool currentEditTool = "Tree" changeTool() if event.key == pygame.K_2: global currentEditTool currentEditTool = "Rock" changeTool() if event.type == pygame.KEYUP: if event.key == pygame.K_LEFT: player.changespeed(2,0) creep.changespeed(2,0) if event.key == pygame.K_RIGHT: player.changespeed(-2,0) creep.changespeed(-2,0) if event.key == pygame.K_UP: player.changespeed(0,2) creep.changespeed(0,2) if event.key == pygame.K_DOWN: player.changespeed(0,-2) creep.changespeed(0,-2) if event.type == pygame.MOUSEBUTTONDOWN and pygame.mouse.get_pressed()[0]: for creep in creeps: creep.mouse_click_event(pygame.mouse.get_pos()) if editMap == True: x,y = pygame.mouse.get_pos() if currentEditTool == "Tree": name = insertTree(x-10,y-25, 10 , 10, "tree") wall_list.add(name) wall_list.draw(screen) f = open('MapMaker.txt', "a+") image = pygame.image.load("images/map/tree.png").convert() screen.blit(image, (30,10)) pygame.display.flip() f.write(str(x) + "," + str(y) + ",790,10, tree\n") #f.write("wall=insertTree(" + str(x) + "," + str(y) + ",790,10)\nwall_list.add(wall)\n") elif currentEditTool == "Rock": name = insertRock(x-10,y-25, 10 , 10,"rock") wall_list.add(name) wall_list.draw(screen) f = open('MapMaker.txt', "a+") f.write(str(x) + "," + str(y) + ",790,10,rock\n") #f.write("wall=insertRock(" + str(x) + "," + str(y) + ",790,10)\nwall_list.add(wall)\n") else: None #pygame.display.flip() player.update(wall_list) movingsprites.draw(screen) wall_list.draw(screen) pygame.display.flip() clock.tick(60) pygame.quit()

    Read the article

  • Querying the SSIS Catalog? Here’s a handy query!

    - by jamiet
    I’ve been working on a SQL Server Integration Services (SSIS) solution for about 6 months now and I’ve learnt many many things that I intend to share on this blog just as soon as I get the time. Here’s a very short starter-for-ten… I’ve found the following query to be utterly invaluable when interrogating the SSIS Catalog to discover what is going on in my executions:
        SELECT event_message_id, MESSAGE, package_name, event_name, message_source_name,
               package_path, execution_path, message_type, message_source_type
        FROM (
               SELECT em.*
               FROM   SSISDB.catalog.event_messages em
               WHERE  em.operation_id = (SELECT MAX(execution_id) FROM SSISDB.catalog.executions)
                  AND event_name NOT LIKE '%Validate%'
             ) q
        /* Put in whatever WHERE predicates you might like */
        --WHERE event_name = 'OnError'
        --WHERE package_name = 'Package.dtsx'
        --WHERE execution_path LIKE '%<some executable>%'
        ORDER BY message_time DESC
    Know it. Learn it. Love it. @jamiet
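    The query above always targets the most recent execution. If you want to interrogate a specific run instead, a companion query along these lines is handy for finding its execution_id first (a sketch against the standard SSISDB catalog views; the TOP 20 limit is an arbitrary choice):
        SELECT TOP 20 execution_id, folder_name, project_name, package_name, status, start_time, end_time
        FROM   SSISDB.catalog.executions
        ORDER BY execution_id DESC
    You can then swap the MAX(execution_id) subquery for the execution_id you picked.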

    Read the article

  • exporting bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    I'm having a hard time trying to understand how exactly Blender's concept of bone transforms maps to the usual math of skinning (which I'm implementing in an OpenGL-based engine of sorts). Or I'm missing out something in the math.. It's gonna be long, but here's as much background as I can think of. First, a few notes and assumptions: I'm using column-major order and multiply from right to left. So for instance, vertex v transformed by matrix A and then further transformed by matrix B would be: v' = BAv. This also means whenever I export a matrix from blender through python, I export it (in text format) in 4 lines, each representing a column. This is so I can then I can read them back into my engine like this: if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[0], &skeleton.joints[currentJointIndex].inverseBindTransform.m[1], &skeleton.joints[currentJointIndex].inverseBindTransform.m[2], &skeleton.joints[currentJointIndex].inverseBindTransform.m[3])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[4], &skeleton.joints[currentJointIndex].inverseBindTransform.m[5], &skeleton.joints[currentJointIndex].inverseBindTransform.m[6], &skeleton.joints[currentJointIndex].inverseBindTransform.m[7])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[8], &skeleton.joints[currentJointIndex].inverseBindTransform.m[9], &skeleton.joints[currentJointIndex].inverseBindTransform.m[10], &skeleton.joints[currentJointIndex].inverseBindTransform.m[11])) { if (fscanf(fileHandle, "%f %f %f %f", &skeleton.joints[currentJointIndex].inverseBindTransform.m[12], &skeleton.joints[currentJointIndex].inverseBindTransform.m[13], &skeleton.joints[currentJointIndex].inverseBindTransform.m[14], &skeleton.joints[currentJointIndex].inverseBindTransform.m[15])) { I'm simplifying the code I show because otherwise it would make things unnecessarily harder (in the context of my question) to explain / follow. Please refrain from making remarks related to optimizations. This is not final code. Having said that, if I understand correctly, the basic idea of skinning/animation is: I have a a mesh made up of vertices I have the mesh model-world transform W I have my joints, which are really just transforms from each joint's space to its parent's space. I'll call these transforms Bj meaning matrix which takes from joint j's bind pose to joint j-1's bind pose. For each of these, I actually import their inverse to the engine, Bj^-1. I have keyframes each containing a set of current poses Cj for each joint J. These are initially imported to my engine in TQS format but after (S)LERPING them I compose them into Cj matrices which are equivalent to the Bjs (not the Bj^-1 ones) only that for the current spacial configurations of each joint at that frame. Given the above, the "skeletal animation algorithm is" On each frame: check how much time has elpased and compute the resulting current time in the animation, from 0 meaning frame 0 to 1, meaning the end of the animation. (Oh and I'm looping forever so the time is mod(total duration)) for each joint: 1 -calculate its world inverse bind pose, that is Bj_w^-1 = Bj^-1 Bj-1^-1 ... B0^-1 2 -use the current animation time to LERP the componets of the TQS and come up with an interpolated current pose matrix Cj which should transform from the joints current configuration space to world space. 
Similar to what I did to get the world version of the inverse bind poses, I come up with the joint's world current pose, Cj_w = C0 C1 ... Cj 3 -now that I have world versions of Bj and Cj, I store this joint's world- skinning matrix K_wj = Cj_w Bj_w^-1. The above is roughly implemented like so: - (void)update:(NSTimeInterval)elapsedTime { static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.jointPoses[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.jointPoses[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeTranslation(LERPTranslation.x, LERPTranslation.y, LERPTranslation.z)); currentTransform = GLKMatrix4Multiply(currentTransform, GLKMatrix4MakeScale(LERPScaling.x, LERPScaling.y, LERPScaling.z)); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = aSkeleton.joints[i].inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(aSkeleton.joints[i].inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } } At this point, I should have my skinning palette. So on each frame in my vertex shader, I do: uniform mat4 modelMatrix; uniform mat4 projectionMatrix; uniform mat3 normalMatrix; uniform mat4 skinningPalette[6]; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { vec3 eyeNormal = normalize(normalMatrix * normal); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } gl_Position = projectionMatrix * modelMatrix * skinnedVertexPosition; } The result: The mesh parts that are supposed to animate do animate and follow the expected motion, however, the rotations are messed up in terms of orientations. That is, the mesh is not translated somewhere else or scaled in any way, but the orientations of rotations seem to be off. 
So a few observations: In the above shader notice I actually did not multiply the vertices by the mesh modelMatrix (the one which would take them to model or world or global space, whichever you prefer, since there is no parent to the mesh itself other than "the world") until after skinning. This is contrary to what I implied in the theory: if my skinning matrix takes vertices from model to joint and back to model space, I'd think the vertices should already be premultiplied by the mesh transform. But if I do so, I just get a black screen. As far as exporting the joints from Blender, my python script exports for each armature bone in bind pose, it's matrix in this way: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: poseJoint = skeleton.pose.bones[joint.name] jointTransform = poseJoint.matrix.inverted() file.write('Joint ' + joint.name + ' Transform {\n') for col in jointTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') And for current / keyframe poses (assuming I'm in the right keyframe): def exportAnimations(filepath): # Only one skeleton per scene objList = [object for object in bpy.context.scene.objects if object.type == 'ARMATURE'] if len(objList) == 0: return elif len(objList) > 1: return #raise exception? dialog box? skeleton = objList[0] jointNames = [bone.name for bone in skeleton.data.bones] for action in bpy.data.actions: # One animation clip per action in Blender, named as the action animationClipFilePath = filepath[0 : filepath.rindex('/') + 1] + action.name + ".aClip" file = open(animationClipFilePath, 'w') file.write('target skeleton: ' + skeleton.name + '\n') file.write('joints count: {:d}'.format(len(jointNames)) + '\n') skeleton.animation_data.action = action keyframeNum = max([len(fcurve.keyframe_points) for fcurve in action.fcurves]) keyframes = [] for fcurve in action.fcurves: for keyframe in fcurve.keyframe_points: keyframes.append(keyframe.co[0]) keyframes = set(keyframes) keyframes = [kf for kf in keyframes] keyframes.sort() file.write('keyframes count: {:d}'.format(len(keyframes)) + '\n') for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) joint = skeleton.pose.bones[i] jointCurrentPoseTransform = joint.matrix translationV = jointCurrentPoseTransform.to_translation() rotationQ = jointCurrentPoseTransform.to_3x3().to_quaternion() scaleV = jointCurrentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') file.close() Which I believe follow the theory explained at the beginning of my question. But then I checked out Blender's directX .x exporter for reference.. 
and what threw me off was that in the .x script they are exporting bind poses like so (transcribed using the same variable names I used so you can compare): if joint.parent: jointTransform = poseJoint.parent.matrix.inverted() else: jointTransform = Matrix() jointTransform *= poseJoint.matrix and exporting current keyframe poses like this: if joint.parent: jointCurrentPoseTransform = joint.parent.matrix.inverted() else: jointCurrentPoseTransform = Matrix() jointCurrentPoseTransform *= joint.matrix why are they using the parent's transform instead of the joint in question's? isn't the join transform assumed to exist in the context of a parent transform since after all it transforms from this joint's space to its parent's? Why are they concatenating in the same order for both bind poses and keyframe poses? If these two are then supposed to be concatenated with each other to cancel out the change of basis? Anyway, any ideas are appreciated.
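    For reference, the parent-relative (local) pose matrix that the DirectX exporter snippets above are effectively computing can be written as a small helper in Blender's 2.6x-era Python API (a sketch only, not part of either exporter; the function name is my own):
        def local_pose_matrix(pose_bone):
            # Pose matrix expressed relative to the parent bone,
            # or relative to the armature when there is no parent.
            if pose_bone.parent:
                return pose_bone.parent.matrix.inverted() * pose_bone.matrix
            return pose_bone.matrix.copy()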

    Read the article

  • NASA’s can alert you when Space Station is visible from your backyard

    - by Gopinath
    NASA’s International Space Station (ISS) is the third brightest object visible in the sky, after the Sun and the Moon. If you know exactly when to look up, you can spot the Space Station with the naked eye; it looks like a bright star moving across the sky.  On the occasion of the 12th anniversary of astronauts living on the space station, NASA started a free service dubbed Spot The Station that alerts you when the Space Station is visible from your backyard. Those who sign up for the free service by providing their location details will get email & text alerts a couple of hours in advance so that they can catch a glimpse of the space station. Here is a sample alert sent to registered users: SpotTheStation! Time: Wed Apr 25 7:45 PM, Visible: 4 min, Max Height: 66 degrees, Appears: WSW, Disappears NE. The space station is typically visible in the early morning or evening, when the Moon is often the only brighter object in the sky. The service is available worldwide, and almost 90 percent of the population on Earth should be able to see the station clearly without any fancy equipment. Follow the link spotthestation.nasa.gov to register for alerts. Flickr cc image: slideshow bob

    Read the article

  • Weblogic domain scale up using EM Grid Control 11gR1

    - by dmitry.nefedkin(at)oracle.com
    As you know, a weblogic domain consists of a set of servers running independently or in cluster mode, sharing distributed resources. In most environments a weblogic cluster consists of multiple managed servers running simultaneously and working together to provide increased scalability and reliability.  These servers can run on the same machine, or be located on different machines.  It's a common task to increase a cluster's capacity by adding new machines to the cluster to host new server instances.  You can do it by manually installing the weblogic binaries on the new host and using the pack/unpack commands to add a managed server to that host.  But with Enterprise Manager Grid Control 11gR1 (EMGC) there is another way: the Fusion Middleware Domain Scale Up procedure. I'm going to show you how it works.
    Here is a picture of my medrec_oradb weblogic domain, which is registered in EMGC. It contains an admin server and a cluster MedRecCluster with a single managed server MS1. Both the admin and managed servers are on the same host oel46-vmware, a virtual machine with OEL 4.6 that runs inside our Oracle VM infrastructure.  And here are the application deployments; note that a couple of applications are deployed to the cluster.
    First of all I have to prepare a new machine that will host the new managed server of my cluster. I created a new VM with OEL 5.4 using the corresponding Oracle VM template available on the Oracle E-Delivery site for Oracle Linux and Oracle VM, and named it wls1032. The next step is to install the Oracle EM Grid Control 11gR1 Agent on this new host.  You can download it from the OTN page and install it manually, or you can use the Agent Installation Deployment procedure available in EMGC (Deployments->Agent Installation->Install Agent). Either way, when your agent is up and running on the new machine, you will see it in the EMGC Console in the Targets->Hosts subtab.
    Now we are ready to scale up our weblogic domain. Click the Deployments tab in Oracle Enterprise Manager Grid Control, and then click Deployment Procedure. Select the Fusion Middleware Domain Scale Up procedure from the list, and click Schedule Deployment. The first page of the FMW Domain Scale Up Wizard is displayed and you can proceed with the deployment process.
    Select the domain from the list, enter the working directory on the admin server host, and fill in the weblogic credentials for the administration server console and the OS credentials for the admin server host.  Click the Next button.  The next step allows you to configure your domain; to add a new managed server to the cluster, select the cluster in the tree and click the Add Server button. Select the newly added server in the tree, choose the target host and enter the configuration details of your managed server. You can also add new machine and node manager details.  Please note that you cannot change the values in the Domain Location and Fusion Middleware Home fields, so these locations on the target host will be the same as on the admin server host.   The working directory on the target host should have enough free space to store the FMW home binaries and domain configuration files; in my experience the working directories should have at least 3 GB of free space.  The last thing to fill in is the OS credentials for the target host. The next step allows you to schedule the execution of the procedure; it is started immediately in my example. The last step is just a review of the configuration for the domain scale up. Click Submit to launch the process.
You can track the status of the procedure execution by selecting Deployments->Deployment Procedures->Procedure Completion Status in the EMGC Console.As you can see in the picture below, the procedure consists of the many steps, and I'm going to share my experience about the issues that I had at some of the steps. Please keep in mind that you can always continue the execution from the last successfully completed step by clicking Retry button.Check OUI Prerequisites  step may fail if the target host does  not pass prerequisites checks for Weblogic Server installation such as amount of RAM, linux packages installed, etc. Create FMW Clone Archive step may fail if you do not have enough free space in the working directory on the administration server host.Transfer cloning archive to targets  step  may fail if the EMGC agents on the admin server host or on target host are not secured.   You should secure the agent by issuing ./emctl secure agent  command from $AGENT_HOME/bin directory and entering the agent registration password.Both Transfer cloning archive to targets and Apply Clone at target hosts steps may fail if you do not have enough free space in the working directory on the target host. The most complicated issue I had on the Run Inventory Collection  step. The step failed and I noticed that the agent on the target server is also failed with the following error in the $AGENT_HOME/sysman/log/emagent.trc  log file:2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: Failed to upload file A0000008.xml: Fatal Error.Response received: 500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain)is not consistent with the timezone (America/Los_Angeles) reported by other agents.2010-12-28 11:50:34,310 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up2010-12-28 11:50:35,552 Thread-2838952848 WARN  upload: FxferSend: received fatal error in header from repository: https://oel46-vmware:1159/em/uploadFATAL_ERROR::500|ORA-20603: The timezone of the multiagent target (/Farm_Localhost_MedRec_medrec_oradb/medrec_oradb,weblogic_domain)is not consistent with the timezone (America/Los_Angeles) reported by other agents.2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: number of fatal error exceeds the limit 32010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: agent will shutdown now2010-12-28 11:50:35,552 Thread-2838952848 ERROR : Signalled to Exit with status 55. Too many fatal upload failures2010-12-28 11:50:35,552 Thread-2838952848 ERROR upload: 1 Failure(s) in a row or XML error for A0000008.xml, retcode = -6, we give up2010-12-28 11:50:35,552 Thread-3044607680 ERROR main: EMAgent abnormal terminatingI checked the timezone of my domain target inside EMGC repositoryselect timezone_regionfrom mgmt_targets where target_type = 'weblogic_domain'  and display_name = 'medrec_oradb'"TIMEZONE_REGION""America/Los_Angeles"Then checked the timezone of my agents and indeed, they differedselect target_name, timezone_region from mgmt_targets where type_display_name = 'Agent'"TARGET_NAME"    "TIMEZONE_REGION""oel46-vmware:3872"    "America/Los_Angeles""wls1032.imc.fors.ru:3872"    "America/New_York"So I had to change the timezone on the wls1032 host and propagate this changes to the agent and to the EMGC repository. 
    Here were the steps:
    - issued the system-config-date command on wls1032.imc.fors.ru and set the timezone to "America/Los_Angeles"
    - propagated the change to the agent by executing the ./emctl resetTZ agent command from the $AGENT_HOME/bin directory
    - connected to the EMGC repository as sysman and executed the following PL/SQL block:
       begin
          mgmt_target.set_agent_tzrgn('wls1032.imc.fors.ru:3872','America/Los_Angeles');
          commit;
       end;
    After that I had to clear the pending uploads on wls1032.imc.fors.ru:
      rm -r $AGENT_HOME/sysman/emd/state/*
      rm -r $AGENT_HOME/sysman/emd/collection/*
      rm -r $AGENT_HOME/sysman/emd/upload/*
      rm $AGENT_HOME/sysman/emd/lastupld.xml
      rm $AGENT_HOME/sysman/emd/agntstmp.txt
      $AGENT_HOME/bin/emctl start agent
      $AGENT_HOME/bin/emctl clearstate agent
    The last part of this solution was to resync the agent in the EMGC console by clicking the Agent Resynchronization button (please leave the "Unblock agent on successful completion of agent resynchronization" checkbox checked in the next screen). After that I issued the ./emctl upload command from $AGENT_HOME/bin on the wls1032 host, and my previous error disappeared, but I caught another one: EMD upload error: Failed to upload file A0000004.xml: HTTP error.Response received: ERROR-400|Data will be rejected for upload from agent 'https://wls1032.imc.fors.ru:3872/emd/main/', max size limit for direct load exceeded [7544731/5242880]
    So the uploaded XML file size was 7 MB, and the limit on the OMS was 5 MB.  To increase the max file size limit to 20 MB I had to connect to the OMS host and execute the following commands from the $OMS_HOME/bin directory:
      ./emctl set property -name em.loader.maxDirectLoadFileSz -value 20971520 -module emoms
      ./emctl stop oms
      ./emctl start oms
    After that I issued the ./emctl upload command from $AGENT_HOME/bin on wls1032 one more time and it completed successfully.   The agent uploaded the configuration information to the EMGC repository and I was able to see the results of my weblogic domain scale-up in the EMGC Console. So, now the weblogic cluster contains 2 managed servers located on different hosts. This powerful feature of Enterprise Manager Grid Control is part of the WebLogic Server Management Pack Enterprise Edition.

    Read the article

  • BPMN is dead, long live BPEL!

    - by JuergenKress
    “BPMN is dead, long live BPEL” was the title of our panel discussion during the SOA & BPM Integration Days 2011. My summary of the discussion was just published at JAXenter (in German). If you want to learn more about SOA & BPM, make sure you register for our upcoming conference on October 12th & 13th, 2011 in Düsseldorf. The speakers include the top SOA and BPM experts in Germany: Thilo Frotscher & Kornelius Fuhrer & Björn Hardegen & Nicolai Josuttis & Michael Kopp & Dr. Dirk Krafzig & Jürgen Kress & Frank Leymann & Berthold Maier & Hajo Normann & Max J. Pucher & Bernd Rücker & Dr. Gregor Scheithauer & Danilo Schmiedel & Guido Schmutz & Dirk Slama & Heiko Spindler & Volker Stiehl & Bernd Trops & Clemens Utschig-Utschig & Tammo van Lessen & Dr. Hendrik Voigt & Torsten Winterberg  For details please become a member of the SOA Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required)

    Read the article

  • SQL SERVER – Identify Most Resource Intensive Queries – SQL in Sixty Seconds #029 – Video

    - by pinaldave
    There are a few questions I often get asked. I find it interesting that in our daily life all of us often need the same kind of information at the same time. Here are examples of such similar questions: How many user-created tables are there in the database? How many non-clustered indexes does each of the tables in the database have? Is a table a heap, or does it have a clustered index on it? How many rows does each of the tables in the database contain? I finally wrote down a very quick script (in less than sixty seconds when I originally wrote it) which can answer the above questions. I also created a very quick video to explain the results and how to execute the script. Here is the complete script which I have used in the SQL in Sixty Seconds video:
        SELECT [schema_name] = s.name,
               table_name = o.name,
               MAX(i1.type_desc) ClusteredIndexorHeap,
               COUNT(i.TYPE) NoOfNonClusteredIndex,
               p.rows
        FROM sys.indexes i
        INNER JOIN sys.objects o ON i.[object_id] = o.[object_id]
        INNER JOIN sys.schemas s ON o.[schema_id] = s.[schema_id]
        LEFT JOIN sys.partitions p ON p.OBJECT_ID = o.OBJECT_ID AND p.index_id IN (0,1)
        LEFT JOIN sys.indexes i1 ON i.OBJECT_ID = i1.OBJECT_ID AND i1.TYPE IN (0,1)
        WHERE o.TYPE IN ('U') AND i.TYPE = 2
        GROUP BY s.name, o.name, p.rows
        ORDER BY schema_name, table_name
    Related Tips in SQL in Sixty Seconds: Find Row Count in Table – Find Largest Table in Database Find Row Count in Table – Find Largest Table in Database – T-SQL Identify Numbers of Non Clustered Index on Tables for Entire Database Index Levels, Page Count, Record Count and DMV – sys.dm_db_index_physical_stats Index Levels and Delete Operations – Page Level Observation What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • How to cross-reference many character encodings with ASCII OR UTFx?

    - by Garet Claborn
    I'm working with a binary structure whose goal is to index the significance of specific bits for any character encoding, so that we may trigger events while doing specific checks against the profile. Each character encoding scheme has an associated system record. The record's leading value is a C++ unsigned long long binary value and signifies the length, in bits, of encoded characters. Following the length are three values, each a bit field of that length:
    offset_mask - defines the occurrence of non-printable characters within the min,max of print_mask
    range_mask - defines the occurrence of the most popular 50% of printable characters
    print_mask - defines the occurrence value of printable characters
    The structure of the profiles has changed since the original version of this question. Most likely I will try to factorize or compress these values in the long term instead of starting out with ranges, after reading more. I have to write some of the core functionality myself for these main reasons: it has to fit into a particular event architecture we are using; I want a better understanding of character encoding (I'm about to need it); and integrating into a non-linear design excludes many libraries without special hooks. I'm unsure if there is already a standard, cross-encoding mechanism for communicating such data. I'm just starting to look into how chardet might do profiling, as suggested by @amon. The Unicode BOM would easily be enough (for my current project) if all encodings were Unicode. Of course, ideally one would like to support all encodings, but I'm not asking about implementation, only the general case. How can these profiles be efficiently populated, to produce a set of bitmasks which we can use to match strings with common characters in multiple languages? If you have any editing suggestions please feel free; I am a lightweight when it comes to localization, which is why I'm trying to reach out to the more experienced. Any caveats you may be able to help with will be appreciated.
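    To make the shape of such a record concrete, here is a minimal C++ sketch; the struct layout, field names and 64-bit word storage are my own assumptions, not part of the question:
        #include <cstdint>
        #include <vector>

        // One profile per character encoding scheme.
        struct EncodingProfile {
            unsigned long long bits_per_char = 0;   // length, in bits, of encoded characters
            std::vector<std::uint64_t> offset_mask; // non-printable occurrences inside the printable range
            std::vector<std::uint64_t> range_mask;  // most popular 50% of printable characters
            std::vector<std::uint64_t> print_mask;  // printable characters
        };

        // Test whether bit 'index' is set in a mask stored as 64-bit words.
        inline bool bit_is_set(const std::vector<std::uint64_t>& mask, std::uint64_t index) {
            std::uint64_t word = index / 64, bit = index % 64;
            return word < mask.size() && ((mask[word] >> bit) & 1ull);
        }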

    Read the article

  • Homepage 301 Redirect to SSL Homepage

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from http to https. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our webmaster account I notice that we are not being provided with any webmaster information (search queries, backlinks) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is:
    HTTP/1.1 301 Moved Permanently
    Date: Fri, 08 Nov 2013 17:26:24 GMT
    Server: Apache/2.2.16 (Debian)
    Location: https://mysite.com/
    Vary: Accept-Encoding
    Content-Encoding: gzip
    Content-Length: 242
    Keep-Alive: timeout=15, max=100
    Connection: Keep-Alive
    Content-Type: text/html; charset=iso-8859-1
    <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>301 Moved Permanently</title>
    </head><body>
    <h1>Moved Permanently</h1>
    <p>The document has moved <a href="https://mysite.com/">here</a>.</p>
    <hr>
    <address>Apache/2.2.16 (Debian) Server at mysite.com</address>
    </body></html>
    I am worried that Fetch as Google is not getting the correct title tags and meta information from our homepage and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google is correctly indexing all our pages and is able to flow from the https to the http pages without issues. Does anybody have any advice on how we can correctly set this up, or how to be sure that Google is fetching the correct information?
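    For what it's worth, limiting the permanent redirect to the homepage only (leaving deeper http URLs untouched) is typically done with a couple of mod_rewrite lines in the port-80 virtual host on Apache 2.2; this is only a sketch, and mysite.com is the placeholder hostname from the question:
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^/?$ https://mysite.com/ [R=301,L]
    It is also worth remembering that search engines treat http://mysite.com/ and https://mysite.com/ as two different URLs, so the https homepage is the one that should appear in the sitemap and be verified in the webmaster account.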

    Read the article

  • Procedural Generation of tile-based 2d World

    - by Matthias
    I am writing a 2d game that uses tile-based top-down graphics to build the world (i.e. the ground plane). Built manually, this works fine. Now I want to generate the ground plane procedurally at run time. In other words: I want to place the tiles (their textures) randomly on the fly. Of course I cannot create an endless ground plane, so I need to restrict how far from the player character (on which the camera focuses) I procedurally generate the ground floor. My approach would be like this: I have a 2d grid that stores all tiles of the floor at their correct x/y coordinates within the game world. When the player moves the character, and therefore also the camera, I constantly check whether there are empty locations in my x/y map within a max. distance from the character, i.e. cells in my virtual grid that have no tile set. In such a case I place a new tile there, so the player would always see the ground plane without gaps or empty spots. I guess that would work, but I am not sure whether that is the best approach. Is there a better alternative, maybe even a best practice for my case?
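    As a rough illustration of the approach described above (a sketch only; the tile types, fill radius and random choice are placeholder assumptions, not from the question):
        import random

        TILE_TYPES = ["grass", "dirt", "stone"]   # placeholder texture ids
        FILL_RADIUS = 12                          # how many tiles around the player to keep populated

        tiles = {}  # sparse grid: (x, y) -> tile type

        def ensure_tiles_around(player_x, player_y):
            """Fill any empty cells within FILL_RADIUS of the player's tile."""
            px, py = int(player_x), int(player_y)
            for x in range(px - FILL_RADIUS, px + FILL_RADIUS + 1):
                for y in range(py - FILL_RADIUS, py + FILL_RADIUS + 1):
                    if (x, y) not in tiles:
                        tiles[(x, y)] = random.choice(TILE_TYPES)
    Because the grid is a dictionary keyed by coordinates, cells far from the player simply stay absent; a common refinement is to generate whole fixed-size chunks instead of single cells and to derive each tile from a seeded noise function so revisited areas look the same.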

    Read the article

  • LINQ and Aggregate function

    - by vik20000in
    LINQ also provides important aggregate functions. Aggregate functions are applied over a sequence and return only one value, such as Average, Count, Sum, Maximum, etc. Below are some of the aggregate functions provided with LINQ and examples of their use.
    Count
        int[] primeFactorsOf300 = { 2, 2, 3, 5, 5 };
        int uniqueFactors = primeFactorsOf300.Distinct().Count();
    The example below counts only the odd numbers.
        int[] primeFactorsOf300 = { 2, 2, 3, 5, 5 };
        int uniqueFactors = primeFactorsOf300.Distinct().Count(n => n % 2 == 1);
    Sum
        int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
        double numSum = numbers.Sum();
    Minimum
        int minNum = numbers.Min();
    Maximum
        int maxNum = numbers.Max();
    Average
        double averageNum = numbers.Average();
    Aggregate
        double[] doubles = { 1.7, 2.3, 1.9, 4.1, 2.9 };
        double product = doubles.Aggregate((runningProduct, nextFactor) => runningProduct * nextFactor);
    Vikram
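    As a small addition to the original tip, Aggregate also has an overload that takes an explicit seed, which is handy when the sequence might be empty:
        double[] doubles = { 1.7, 2.3, 1.9, 4.1, 2.9 };
        // Start from 1.0 so an empty sequence yields 1.0 instead of throwing.
        double product = doubles.Aggregate(1.0, (runningProduct, nextFactor) => runningProduct * nextFactor);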

    Read the article

  • Use depth bias for shadows in deferred shading

    - by cubrman
    We are building a deferred shading engine and we have a problem with shadows. To add shadows we use two maps: the first one stores the depth of the scene captured by the player's camera and the second one stores the depth of the scene captured by the light's camera. We then run a shader that analyzes the two maps and outputs a third one with the shadowed areas ready for the current frame. The problem we face is a classic one: self-shadowing. A standard way to solve this is to use the slope-scale depth bias and depth offsets; however, as we are doing things in a deferred way we cannot employ this algorithm. Any attempts to set a depth bias when capturing the light's view depth produced no results or unsatisfying ones. So here is my question: the MSDN article has a convoluted explanation of the slope-scale: bias = (m × SlopeScaleDepthBias) + DepthBias Where m is the maximum depth slope of the triangle being rendered, defined as: m = max( abs(delta z / delta x), abs(delta z / delta y) ) Could you explain how I can implement this algorithm manually in a shader? Maybe there are better ways to fix this problem for deferred shadows?
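    One common way to approximate the slope-scale term in the shadow-resolve pass itself, rather than at rasterization time, is to scale the bias by the angle between the surface normal and the light direction. This GLSL sketch is my own illustration, not the MSDN algorithm verbatim; the constants, function name and the inputs (G-buffer normal, light direction, the receiver depth in light space and the shadow-map sample) are all assumptions:
        // Returns 1.0 when lit, 0.0 when in shadow.
        float shadowFactor(vec3 normalWS, vec3 lightDirWS, float receiverDepth, float occluderDepth)
        {
            float nDotL = clamp(dot(normalWS, lightDirWS), 0.0, 1.0);
            float slope = sqrt(1.0 - nDotL * nDotL) / max(nDotL, 1e-4); // tan of the angle between N and L
            float bias  = clamp(0.0005 + 0.002 * slope, 0.0, 0.01);     // DepthBias + m * SlopeScaleDepthBias
            return (receiverDepth - bias > occluderDepth) ? 0.0 : 1.0;
        }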

    Read the article

  • Performance-Driven Development

    - by BuckWoody
    I was reading a blog yesterday about the evils of SELECT *. The author pointed out that it's almost always a bad idea to use SELECT * for a query, but in the case of SQL Azure (or any cloud database, for that matter) it's especially bad, since you're paying for each transmission that comes down the line. A very good point indeed. This got me to thinking - shouldn't we treat ALL programming that way? In other words, wouldn't it make sense to pretend that we are paying for every chunk of data - a little less for a bit, a lot more for a BLOB or VARCHAR(MAX), that sort of thing? In effect, we really are paying for that. Which led me to the thought of Performance-Driven Development, or the act of programming with the goal of having the fastest code from the very outset. This isn't an original title, since a quick Bing-search shows me a couple of offerings from Forrester and a professional in Israel who already used that title, but the general idea I'm thinking of is assigning a "cost" to each code round-trip, be it network, storage, trip time and other variables, and then rewarding the developers that come up with the fastest code. I wonder what kind of throughput and round-trip times you could get if your developers were paid on a scale of how fast the application performed...
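    To make the "pay per byte" mindset concrete with a throwaway example (the table and column names below are invented, not from the post): both statements return the same row, but the first also drags any BLOB or VARCHAR(MAX) columns across the wire.
        -- Paying for every column, including large ones the caller never shows:
        SELECT * FROM dbo.Articles WHERE ArticleID = 42;
        -- Paying only for what is actually needed:
        SELECT ArticleID, Title, PublishedOn FROM dbo.Articles WHERE ArticleID = 42;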

    Read the article

  • Getting SQL table row counts via sysindexes vs. sys.indexes

    - by Bill Osuch
    Among the many useful SQL snippets I regularly use is this little bit that will return row counts in a table: SELECT so.name as TableName, MAX(si.rows) as [RowCount] FROM sysobjects so JOIN sysindexes si ON si.id = OBJECT_ID(so.name) WHERE so.xtype = 'U' GROUP BY so.name ORDER BY [RowCount] DESC This is handy to find tables that have grown wildly, zero-row tables that could (possibly) be dropped, or other clues into the data. Right off the bat you may spot some "non-ideal" code - I'm using sysobjects rather than sys.objects. What's the difference? In SQL Server 2005 and later, sysobjects is no longer a table, but a "compatibility view", meant for backward compatibility only. SELECT * from each and you'll see the different data that each returns. Microsoft advises that sysindexes could be removed in a future version of SQL Server, but this has never really been an issue for me since my company is still using SQL 2000. However, there are murmurs that we may actually migrate to 2008 some year, so I might as well go ahead and start using an updated version of this snippet on the servers that can handle it: SELECT so.name as TableName, ddps.row_count as [RowCount] FROM sys.objects so JOIN sys.indexes si ON si.OBJECT_ID = so.OBJECT_ID JOIN sys.dm_db_partition_stats AS ddps ON si.OBJECT_ID = ddps.OBJECT_ID  AND si.index_id = ddps.index_id WHERE si.index_id < 2  AND so.is_ms_shipped = 0 ORDER BY ddps.row_count DESC

    Read the article

  • SATA errors reported during boot: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0

    - by digby280
    I have noticed some error during the Linux boot. They seem to continue to occur after the boot adding lines to the log every few seconds. Once booted this normally does not appear to be causing any problems. However, around 1 in 10 boots results in a kernel panic and the computer has on two or three occasions suddenly rebooted after being powered on for a number of hours. I presume the cause of the reboot is a kernel panic as well. I am running Ubuntu 11.10 and I have had Ubuntu installed on the computer for around a year. I have googled around and not found anything useful. I have provided the kernel log lines and the output of smartctl. Can anyone explain exactly what these errors mean, or better still how to resolve them? Apr 2 16:51:27 dell580 kernel: [ 19.831140] EXT4-fs (sdb2): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0 Apr 2 16:51:27 dell580 kernel: [ 19.934194] tg3 0000:03:00.0: eth0: Link is down Apr 2 16:51:28 dell580 kernel: [ 20.929468] tg3 0000:03:00.0: eth0: Link is up at 100 Mbps, full duplex Apr 2 16:51:28 dell580 kernel: [ 20.929471] tg3 0000:03:00.0: eth0: Flow control is on for TX and on for RX Apr 2 16:51:28 dell580 kernel: [ 20.929727] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Apr 2 16:51:29 dell580 kernel: [ 21.609381] EXT4-fs (sdb2): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0 Apr 2 16:51:29 dell580 kernel: [ 21.616515] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:29 dell580 kernel: [ 21.616519] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:29 dell580 kernel: [ 21.616525] ata2.00: hard resetting link Apr 2 16:51:29 dell580 kernel: [ 21.934036] ata2.01: hard resetting link Apr 2 16:51:29 dell580 kernel: [ 22.408890] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:29 dell580 kernel: [ 22.408907] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:29 dell580 kernel: [ 22.440934] ata2.00: configured for UDMA/100 Apr 2 16:51:29 dell580 kernel: [ 22.449040] ata2.01: configured for UDMA/133 Apr 2 16:51:29 dell580 kernel: [ 22.449818] ata2: EH complete Apr 2 16:51:33 dell580 kernel: [ 26.122664] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:33 dell580 kernel: [ 26.122670] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:33 dell580 kernel: [ 26.122677] ata2.00: hard resetting link Apr 2 16:51:33 dell580 kernel: [ 26.442684] ata2.01: hard resetting link Apr 2 16:51:34 dell580 kernel: [ 26.925545] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:34 dell580 kernel: [ 26.925561] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:34 dell580 kernel: [ 26.961542] ata2.00: configured for UDMA/100 Apr 2 16:51:34 dell580 kernel: [ 26.969616] ata2.01: configured for UDMA/133 Apr 2 16:51:34 dell580 kernel: [ 26.970400] ata2: EH complete Apr 2 16:51:35 dell580 kernel: [ 28.111180] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:35 dell580 kernel: [ 28.111184] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:35 dell580 kernel: [ 28.111191] ata2.00: hard resetting link Apr 2 16:51:35 dell580 kernel: [ 28.429674] ata2.01: hard resetting link Apr 2 16:51:36 dell580 kernel: [ 28.904557] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:36 dell580 kernel: [ 28.904572] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:36 dell580 kernel: [ 28.936609] ata2.00: configured for UDMA/100 Apr 2 16:51:36 dell580 kernel: [ 28.944692] ata2.01: configured for UDMA/133 
Apr 2 16:51:36 dell580 kernel: [ 28.945464] ata2: EH complete Apr 2 16:51:38 dell580 kernel: [ 31.581756] eth0: no IPv6 routers present Apr 2 16:51:38 dell580 kernel: [ 32.103066] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:38 dell580 kernel: [ 32.103074] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:38 dell580 kernel: [ 32.103085] ata2.00: hard resetting link Apr 2 16:51:38 dell580 kernel: [ 32.419669] ata2.01: hard resetting link Apr 2 16:51:39 dell580 kernel: [ 32.894518] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:39 dell580 kernel: [ 32.894533] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:39 dell580 kernel: [ 32.926536] ata2.00: configured for UDMA/100 Apr 2 16:51:39 dell580 kernel: [ 32.934715] ata2.01: configured for UDMA/133 Apr 2 16:51:39 dell580 kernel: [ 32.935578] ata2: EH complete Here's the output of smartctl for the drive. smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.0.0-17-generic] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: SAMSUNG SpinPoint F1 DT Device Model: SAMSUNG HD103UJ Serial Number: S13PJ90QC19706 LU WWN Device Id: 5 0000f0 00b1c7960 Firmware Version: 1AA01113 User Capacity: 1,000,204,886,016 bytes [1.00 TB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: ATA-8-ACS revision 3b Local Time is: Mon Apr 2 17:13:48 2012 BST SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x00) Offline data collection activity was never started. Auto Offline Data Collection: Disabled. Self-test execution status: ( 41) The self-test routine was interrupted by the host with a hard or soft reset. Total time to complete Offline data collection: (11772) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 197) minutes. Conveyance self-test routine recommended polling time: ( 21) minutes. SCT capabilities: (0x003f) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 100 100 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0007 076 076 011 Pre-fail Always - 7940 4 Start_Stop_Count 0x0032 099 099 000 Old_age Always - 521 5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0 7 Seek_Error_Rate 0x000f 253 253 051 Pre-fail Always - 0 8 Seek_Time_Performance 0x0025 100 100 015 Pre-fail Offline - 0 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 642 10 Spin_Retry_Count 0x0033 100 100 051 Pre-fail Always - 0 11 Calibration_Retry_Count 0x0012 100 100 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 482 13 Read_Soft_Error_Rate 0x000e 100 100 000 Old_age Always - 0 183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 759 184 End-to-End_Error 0x0033 100 100 000 Pre-fail Always - 0 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0 188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0 190 Airflow_Temperature_Cel 0x0022 073 069 000 Old_age Always - 27 (Min/Max 16/27) 194 Temperature_Celsius 0x0022 073 067 000 Old_age Always - 27 (Min/Max 16/28) 195 Hardware_ECC_Recovered 0x001a 100 100 000 Old_age Always - 320028 196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 099 099 000 Old_age Always - 1494 200 Multi_Zone_Error_Rate 0x000a 100 100 000 Old_age Always - 0 201 Soft_Read_Error_Rate 0x000a 253 253 000 Old_age Always - 0 SMART Error Log Version: 1 ATA Error Count: 211 (device log contains only the most recent five errors) CR = Command Register [HEX] FR = Features Register [HEX] SC = Sector Count Register [HEX] SN = Sector Number Register [HEX] CL = Cylinder Low Register [HEX] CH = Cylinder High Register [HEX] DH = Device/Head Register [HEX] DC = Device Command Register [HEX] ER = Error register [HEX] ST = Status register [HEX] Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. Error 211 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 0f 31 63 8f e1 Error: ICRC, ABRT 15 sectors at LBA = 0x018f6331 = 26174257 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 00 40 62 8f e1 08 00:01:00.460 READ DMA c8 00 20 00 7c 30 e0 08 00:01:00.450 READ DMA c8 00 00 10 49 8f e1 08 00:01:00.440 READ DMA c8 00 e0 20 d0 30 e0 08 00:01:00.420 READ DMA c8 00 00 c0 59 90 e1 08 00:01:00.400 READ DMA Error 210 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 cf e9 cf 66 e0 Error: ICRC, ABRT 207 sectors at LBA = 0x0066cfe9 = 6737897 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 00 b8 cf 66 e0 08 00:08:29.780 READ DMA c8 00 60 60 c9 18 e0 08 00:08:29.770 READ DMA c8 00 40 20 c9 18 e0 08 00:08:29.770 READ DMA c8 00 20 00 c9 18 e0 08 00:08:29.760 READ DMA c8 00 20 98 cf 66 e0 08 00:08:29.750 READ DMA Error 209 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 2f d1 74 e0 e0 Error: ICRC, ABRT 47 sectors at LBA = 0x00e074d1 = 14709969 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 00 00 74 e0 e0 08 00:00:30.940 READ DMA c8 00 20 18 36 de e0 08 00:00:30.930 READ DMA c8 00 08 48 f1 dd e0 08 00:00:30.930 READ DMA c8 00 08 a8 f0 dd e0 08 00:00:30.930 READ DMA c8 00 08 90 f0 dd e0 08 00:00:30.930 READ DMA Error 208 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 7f 21 88 9d e0 Error: ICRC, ABRT 127 sectors at LBA = 0x009d8821 = 10324001 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 a0 00 88 9d e0 08 00:00:27.610 READ DMA c8 00 58 a8 e7 9c e0 08 00:00:27.610 READ DMA c8 00 00 28 e6 9c e0 08 00:00:27.610 READ DMA c8 00 00 e0 e4 9c e0 08 00:00:27.610 READ DMA c8 00 00 90 e0 9c e0 08 00:00:27.600 READ DMA Error 207 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 04 51 26 6a 6a c3 e0 Error: ABRT at LBA = 0x00c36a6a = 12806762 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- ca 00 00 90 69 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 90 68 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 50 65 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 d0 64 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 90 63 c3 e0 08 00:29:39.350 WRITE DMA SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Interrupted (host reset) 90% 638 - # 2 Short offline Interrupted (host reset) 90% 638 - # 3 Extended offline Interrupted (host reset) 90% 638 - # 4 Short offline Interrupted (host reset) 90% 638 - # 5 Extended offline Interrupted (host reset) 90% 638 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. 
If Selective self-test is pending on power-up, resume after 0 minute delay.

    Read the article

  • How do I classify using GLCM and SVM Classifier in Matlab?

    - by Gomathi
    I'm on a project of liver tumor segmentation and classification. I used Region Growing and FCM for liver and tumor segmentation respectively. Then, I used Gray Level Co-occurence matrix for texture feature extraction. I have to use Support Vector Machine for Classification. But I don't know how to normalize the feature vectors. Can anyone tell how to program it in Matlab? To the GLCM program, I gave the tumor segmented image as input. Was I correct? If so, I think, then, my output will also be correct. My glcm coding, as far as I have tried is, I = imread('fzliver3.jpg'); GLCM = graycomatrix(I,'Offset',[2 0;0 2]); stats = graycoprops(GLCM,'all') t1= struct2array(stats) I2 = imread('fzliver4.jpg'); GLCM2 = graycomatrix(I2,'Offset',[2 0;0 2]); stats2 = graycoprops(GLCM2,'all') t2= struct2array(stats2) I3 = imread('fzliver5.jpg'); GLCM3 = graycomatrix(I3,'Offset',[2 0;0 2]); stats3 = graycoprops(GLCM3,'all') t3= struct2array(stats3) t=[t1;t2;t3] xmin = min(t); xmax = max(t); scale = xmax-xmin; tf=(x-xmin)/scale Was this a correct implementation? Also, I get an error at the last line. My output is: stats = Contrast: [0.0510 0.0503] Correlation: [0.9513 0.9519] Energy: [0.8988 0.8988] Homogeneity: [0.9930 0.9935] t1 = Columns 1 through 6 0.0510 0.0503 0.9513 0.9519 0.8988 0.8988 Columns 7 through 8 0.9930 0.9935 stats2 = Contrast: [0.0345 0.0339] Correlation: [0.8223 0.8255] Energy: [0.9616 0.9617] Homogeneity: [0.9957 0.9957] t2 = Columns 1 through 6 0.0345 0.0339 0.8223 0.8255 0.9616 0.9617 Columns 7 through 8 0.9957 0.9957 stats3 = Contrast: [0.0230 0.0246] Correlation: [0.7450 0.7270] Energy: [0.9815 0.9813] Homogeneity: [0.9971 0.9970] t3 = Columns 1 through 6 0.0230 0.0246 0.7450 0.7270 0.9815 0.9813 Columns 7 through 8 0.9971 0.9970 t = Columns 1 through 6 0.0510 0.0503 0.9513 0.9519 0.8988 0.8988 0.0345 0.0339 0.8223 0.8255 0.9616 0.9617 0.0230 0.0246 0.7450 0.7270 0.9815 0.9813 Columns 7 through 8 0.9930 0.9935 0.9957 0.9957 0.9971 0.9970 ??? Error using ==> minus Matrix dimensions must agree. The images are:
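    As a side note, the last line of the code above subtracts xmin from a variable x that is not defined anywhere earlier in the snippet. A per-column min–max normalization over the feature matrix t itself is usually written so the subtraction broadcasts over every row, for example with bsxfun (a sketch only, keeping the question's variable names):
        xmin  = min(t);            % per-column minima (1-by-8 here)
        xmax  = max(t);
        scale = xmax - xmin;
        tf = bsxfun(@rdivide, bsxfun(@minus, t, xmin), scale);  % each feature scaled to [0,1]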

    Read the article

< Previous Page | 114 115 116 117 118 119 120 121 122 123 124 125  | Next Page >