Search Results

Search found 28818 results on 1153 pages for 'main loop'.


  • How to include an external jar in a GWT module?

    - by Sergio del Amo
    I would like to use the org.apache.commons.validator.GenericValidator class in a view class of my GWT web app. I have read that I have to implicitely tell that I intend to use this external library. I thought adding the next line into my App.gwt.xml would work. <inherits name='org.apache.commons.validator.GenericValidator'/> I get the next error: Loading inherited module 'org.apache.commons.validator.GenericValidator' [ERROR] Unable to find 'org/apache/commons/validator/GenericValidator.gwt.xml' on your classpath; could be a typo, or maybe you forgot to include a classpath entry for source? [ERROR] Line 13: Unexpected exception while processing element 'inherits' com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:239) at com.google.gwt.dev.cfg.ModuleDefSchema$BodySchema.__inherits_begin(ModuleDefSchema.java:354) at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.google.gwt.dev.util.xml.HandlerMethod.invokeBegin(HandlerMethod.java:223) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.startElement(ReflectiveParser.java:270) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1339) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:327) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159) [ERROR] Failure while parsing XML com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at 
com.google.gwt.dev.util.xml.DefaultSchema.onHandlerException(DefaultSchema.java:56) at com.google.gwt.dev.util.xml.Schema.onHandlerException(Schema.java:66) at com.google.gwt.dev.util.xml.Schema.onHandlerException(Schema.java:66) at com.google.gwt.dev.util.xml.HandlerMethod.invokeBegin(HandlerMethod.java:233) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.startElement(ReflectiveParser.java:270) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.startElement(AbstractSAXParser.java:501) at com.sun.org.apache.xerces.internal.parsers.AbstractXMLDocumentParser.emptyElement(AbstractXMLDocumentParser.java:179) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanStartElement(XMLDocumentFragmentScannerImpl.java:1339) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2747) at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648) at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807) at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737) at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107) at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205) at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:327) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at com.google.gwt.dev.Compiler.main(Compiler.java:159) [ERROR] Unexpected error while processing XML com.google.gwt.core.ext.UnableToCompleteException: (see previous log entries) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.parse(ReflectiveParser.java:351) at com.google.gwt.dev.util.xml.ReflectiveParser$Impl.access$100(ReflectiveParser.java:48) at com.google.gwt.dev.util.xml.ReflectiveParser.parse(ReflectiveParser.java:398) at com.google.gwt.dev.cfg.ModuleDefLoader.nestedLoad(ModuleDefLoader.java:257) at com.google.gwt.dev.cfg.ModuleDefLoader$1.load(ModuleDefLoader.java:169) at com.google.gwt.dev.cfg.ModuleDefLoader.doLoadModule(ModuleDefLoader.java:283) at com.google.gwt.dev.cfg.ModuleDefLoader.loadFromClassPath(ModuleDefLoader.java:141) at com.google.gwt.dev.Compiler.run(Compiler.java:184) at com.google.gwt.dev.Compiler$1.run(Compiler.java:152) at com.google.gwt.dev.CompileTaskRunner.doRun(CompileTaskRunner.java:87) at com.google.gwt.dev.CompileTaskRunner.runWithAppropriateLogger(CompileTaskRunner.java:81) at 
com.google.gwt.dev.Compiler.main(Compiler.java:159) I have commons.validator-1.3.1.jar in war/WEB-INF/lib, and I am using Eclipse with the Google Plugin. Does anyone know how to make this work?
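
    The error happens because <inherits> expects the name of a GWT module (a *.gwt.xml file on the classpath), not a Java class, and anything inherited must also ship its Java source so the GWT compiler can translate it to JavaScript. A minimal sketch of what a working module descriptor looks like is shown below; the com.example.validation.Validation module is purely hypothetical and would resolve only if the jar really contained com/example/validation/Validation.gwt.xml plus the source for the classes used from client code.

        <!-- App.gwt.xml (sketch): <inherits> references module XML files, not classes.
             'com.example.validation.Validation' is a hypothetical module name used for
             illustration; it is not part of commons-validator. -->
        <module>
          <inherits name='com.google.gwt.user.User'/>
          <inherits name='com.example.validation.Validation'/>
          <entry-point class='com.example.client.App'/>
          <source path='client'/>
        </module>

    Because commons-validator ships neither a .gwt.xml module nor GWT-translatable source, GenericValidator generally has to be used on the server side (for example behind a GWT-RPC call) rather than in client-side code.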


  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem.  Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element.  This process is called partitioning. Partitioning our tasks is a challenging feat.  There are opposing forces at work here: too many partitions adds overhead, too few partitions leaves processors idle.  Trying to work the perfect balance between the two extremes is the goal for which we should aim.  Luckily, the Task Parallel Library automatically handles much of this process.  However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to making better decisions. First off, I’d like to say that this is a more advanced topic.  It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place.  The default behavior in the Task Parallel Library is very well-behaved, even for unusual work loads, and should rarely be adjusted.  I have found few situations where the default partitioning behavior in the TPL is not as good or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid using them.  However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL. I indirectly mentioned partitioning while discussing aggregation.  Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions.  For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading.  This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system. In order to fully exploit this power, we need to partition our work into Tasks.  A task is a simple set of instructions that can be run on a PE.  Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle.  A naive implementation would be to just take our data, and partition it with one element in our collection being treated as one task.  When we loop through our collection in parallel, using this approach, we’d just process one item at a time, then reuse that thread to process the next, etc.  There’s a flaw in this approach, however.  It will tend to be slower than necessary, often slower than processing the data serially. The problem is that there is overhead associated with each task.  When we take a simple foreach loop body and implement it using the TPL, we add overhead.  First, we change the body from a simple statement to a delegate, which must be invoked.  In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool’s current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it.  If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance. The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.  
By adding a partitioning step, we can break our total work into small enough tasks to keep our processors busy, but large enough tasks to avoid overburdening the ThreadPool.  There are two clear, opposing goals here: Always try to keep each processor working, but also try to keep the individual partitions as large as possible. When using Parallel.For, the partitioning is always handled automatically.  At first, partitioning here seems simple.  A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor.  Many hand-written partitioning schemes work in this exactly manner.  This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element.  However, this is rarely the case.  Often, the length of time required to process an element grows as we progress through the collection, especially if we’re doing numerical computations.  In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish.  Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations.  In this situation, the first chunks will be working far longer than the last chunks.  In order to balance the workload, many implementations create many small chunks, and reuse threads.  This adds overhead, but does provide better load balancing, which in turn improves performance. The Task Parallel Library handles this more elaborately.  Chunks are determined at runtime, and start small.  They grow slowly over time, getting larger and larger.  This tends to lead to a near optimum load balancing, even in odd cases such as increasing or decreasing workloads.  Parallel.ForEach is a bit more complicated, however. When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime.  In addition, since we don’t have direct access to each element, the scheduler must enumerate the collection to process it.  Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out.  By default, it uses a partitioning method similar to the one described above.  We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming.  When we run the sample, with four cores and the default, Load Balancing partitioning scheme, we see this: The colored bands represent each processing core.  You can see that, when we started (at the top), we begin with very small bands of color.  As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen by larger and larger stripes). Most of the time, this is fantastic behavior, and most likely will out perform any custom written partitioning.  However, if your routine is not scaling well, it may be due to a failure in the default partitioning to handle your specific case.  With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner. There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance.  The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.  
By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available.  If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances.  These are then used by the Parallel class to split work up amongst processors.  When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition.  If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately. The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project.  This provides example code for implementing your own, custom allocation strategies, including a static allocator of a given chunk size.  Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice.  The default behavior of the TPL is very good, often better than any hand written partitioning strategy.
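
    To make this concrete, here is a minimal sketch (not from the original post) assuming .NET 4: Partitioner.Create(fromInclusive, toExclusive, rangeSize) builds a fixed-size range partitioner, trading the TPL's adaptive load balancing for lower per-element overhead.

        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class PartitioningSketch
        {
            static void Process(double item)
            {
                // Stand-in for the real per-element work.
            }

            static void Main()
            {
                double[] data = new double[1000000];

                // Default partitioning: the TPL starts with small chunks and grows them.
                Parallel.ForEach(data, item => Process(item));

                // Explicit range partitioning: each task receives a fixed block of
                // 10,000 indices, reducing per-element overhead at the cost of
                // load balancing.
                Parallel.ForEach(Partitioner.Create(0, data.Length, 10000), range =>
                {
                    for (int i = range.Item1; i < range.Item2; i++)
                    {
                        Process(data[i]);
                    }
                });
            }
        }

    The chunk size of 10,000 is arbitrary; as the post argues, prefer the default partitioning unless profiling shows a measurable win from an explicit scheme.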


  • Can not execute Sonar

    - by senzacionale
    my pom.xml <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>RCC</groupId> <artifactId>tmp</artifactId> <name>tmp</name> <version>1.0</version> <build> <sourceDirectory>C:\Projekti\KIS\Model\src </sourceDirectory> <outputDirectory>C:\Projekti\KIS\Model\classes</outputDirectory> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.5</source> <target>1.5</target> <excludes> <exclude>**/*.*</exclude> </excludes> </configuration> </plugin> </plugins> </build> <properties> <sonar.dynamicAnalysis>false</sonar.dynamicAnalysis> </properties> </project> running sonnar C:\Projekti\Metrics>mvn sonar:sonar -e + Error stacktraces are turned on. [INFO] Scanning for projects... [INFO] Searching repository for plugin with prefix: 'sonar'. [INFO] ------------------------------------------------------------------------ [INFO] Building tmp [INFO] task-segment: [sonar:sonar] (aggregator-style) [INFO] ------------------------------------------------------------------------ [INFO] [sonar:sonar {execution: default-cli}] [INFO] Sonar host: http://localhost:9000 [INFO] Sonar version: 2.1.2 [INFO] [sonar-core:internal {execution: default-internal}] [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Can not execute Sonar Embedded error: Can not analyze the project org.apache.derby.jdbc.ClientDriver [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.lifecycle.LifecycleExecutionException: Can not execute Sonar at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:284) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.MojoExecutionException: Can not execute Sonar at org.codehaus.mojo.sonar.Bootstraper.executeMojo(Bootstraper.java:87) at 
org.codehaus.mojo.sonar.Bootstraper.start(Bootstraper.java:65) at org.codehaus.mojo.sonar.SonarMojo.execute(SonarMojo.java:117) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) ... 17 more Caused by: org.apache.maven.plugin.MojoExecutionException: Can not analyze the project at org.sonar.maven2.BatchMojo.executeBatch(BatchMojo.java:152) at org.sonar.maven2.BatchMojo.execute(BatchMojo.java:131) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.codehaus.mojo.sonar.Bootstraper.executeMojo(Bootstraper.java:82) ... 21 more Caused by: org.picocontainer.PicoLifecycleException: PicoLifecycleException: method 'public void org.sonar.api.database.AbstractDatabaseConnector.start()', instance 'org.sonar.api.database.DriverDatabaseConnector@c87621, java.lang.RuntimeException: wrapper at org.picocontainer.monitors.NullComponentMonitor.lifecycleInvocationFailed(NullComponentMonitor.java:77) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.monitorAndThrowReflectionLifecycleException(ReflectionLifecycleStrategy.java:132) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:115) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.start(ReflectionLifecycleStrategy.java:89) at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.start(AbstractInjectionFactory.java:84) at org.picocontainer.behaviors.AbstractBehavior.start(AbstractBehavior.java:169) at org.picocontainer.behaviors.Stored$RealComponentLifecycle.start(Stored.java:132) at org.picocontainer.behaviors.Stored.start(Stored.java:110) at org.picocontainer.DefaultPicoContainer.potentiallyStartAdapter(DefaultPicoContainer.java:996) at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:989) at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:746) at org.sonar.batch.AggregatorBatch.execute(AggregatorBatch.java:84) at org.sonar.maven2.BatchMojo.executeBatch(BatchMojo.java:149) ... 24 more Caused by: java.lang.RuntimeException: wrapper at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.monitorAndThrowReflectionLifecycleException(ReflectionLifecycleStrategy.java:130) ... 35 more Caused by: org.sonar.api.database.DatabaseException: Cannot open connection to database: SQL driver not found org.apache.derby.jdbc.ClientDriver at org.sonar.api.database.AbstractDatabaseConnector.testConnection(AbstractDatabaseConnector.java:182) at org.sonar.api.database.AbstractDatabaseConnector.start(AbstractDatabaseConnector.java:94) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:110) ... 34 more Caused by: java.sql.SQLException: SQL driver not found org.apache.derby.jdbc.ClientDriver at org.sonar.api.database.DriverDatabaseConnector.getConnection(DriverDatabaseConnector.java:70) at org.sonar.api.database.AbstractDatabaseConnector.testConnection(AbstractDatabaseConnector.java:178) ... 
40 more Caused by: java.lang.ClassNotFoundException: org.apache.derby.jdbc.ClientDriver at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at org.codehaus.classworlds.RealmClassLoader.loadClassDirect(RealmClassLoader.java:195) at org.codehaus.classworlds.DefaultClassRealm.loadClass(DefaultClassRealm.java:255) at org.codehaus.classworlds.DefaultClassRealm.loadClass(DefaultClassRealm.java:274) at org.codehaus.classworlds.RealmClassLoader.loadClass(RealmClassLoader.java:214) at java.lang.ClassLoader.loadClass(ClassLoader.java:251) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:164) at org.sonar.api.database.DriverDatabaseConnector.getConnection(DriverDatabaseConnector.java:68) ... 41 more [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3 seconds [INFO] Finished at: Wed Jun 09 12:12:55 CEST 2010 [INFO] Final Memory: 12M/23M [INFO] ------------------------------------------------------------------------ any idea why not working Sonar with maven?
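
    The root cause at the bottom of the trace is java.lang.ClassNotFoundException: org.apache.derby.jdbc.ClientDriver: the build is trying to reach Sonar's Derby database but cannot load the Derby client driver, which usually means the Sonar server (which normally provides the database and the connection settings) is not running, or the JDBC properties are misconfigured. Below is a hedged sketch of the Sonar 2.x-style JDBC properties that would typically sit in a Maven profile; the property names, URL and port are assumptions to verify against conf/sonar.properties on your Sonar server.

        <!-- Sketch only: Sonar 2.x-era JDBC settings for a Maven profile.
             Property names, URL and port are assumptions; check them against
             the Sonar server's conf/sonar.properties. -->
        <profile>
          <id>sonar</id>
          <properties>
            <sonar.host.url>http://localhost:9000</sonar.host.url>
            <sonar.jdbc.url>jdbc:derby://localhost:1527/sonar;create=true</sonar.jdbc.url>
            <sonar.jdbc.driver>org.apache.derby.jdbc.ClientDriver</sonar.jdbc.driver>
            <sonar.jdbc.username>sonar</sonar.jdbc.username>
            <sonar.jdbc.password>sonar</sonar.jdbc.password>
          </properties>
        </profile>

    Also make sure the Sonar server is actually started before running mvn sonar:sonar, since with the default embedded Derby setup the server is what hosts the database.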


  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 3 – Table per Concrete Type (TPC) and Choosing Strategy Guidelines

    - by mortezam
    This is the third (and last) post in a series that explains different approaches to map an inheritance hierarchy with EF Code First. I've described these strategies in previous posts: Part 1 – Table per Hierarchy (TPH) Part 2 – Table per Type (TPT)In today’s blog post I am going to discuss Table per Concrete Type (TPC) which completes the inheritance mapping strategies supported by EF Code First. At the end of this post I will provide some guidelines to choose an inheritance strategy mainly based on what we've learned in this series. TPC and Entity Framework in the Past Table per Concrete type is somehow the simplest approach suggested, yet using TPC with EF is one of those concepts that has not been covered very well so far and I've seen in some resources that it was even discouraged. The reason for that is just because Entity Data Model Designer in VS2010 doesn't support TPC (even though the EF runtime does). That basically means if you are following EF's Database-First or Model-First approaches then configuring TPC requires manually writing XML in the EDMX file which is not considered to be a fun practice. Well, no more. You'll see that with Code First, creating TPC is perfectly possible with fluent API just like other strategies and you don't need to avoid TPC due to the lack of designer support as you would probably do in other EF approaches. Table per Concrete Type (TPC)In Table per Concrete type (aka Table per Concrete class) we use exactly one table for each (nonabstract) class. All properties of a class, including inherited properties, can be mapped to columns of this table, as shown in the following figure: As you can see, the SQL schema is not aware of the inheritance; effectively, we’ve mapped two unrelated tables to a more expressive class structure. If the base class was concrete, then an additional table would be needed to hold instances of that class. I have to emphasize that there is no relationship between the database tables, except for the fact that they share some similar columns. TPC Implementation in Code First Just like the TPT implementation, we need to specify a separate table for each of the subclasses. We also need to tell Code First that we want all of the inherited properties to be mapped as part of this table. In CTP5, there is a new helper method on EntityMappingConfiguration class called MapInheritedProperties that exactly does this for us. 
Here is the complete object model as well as the fluent API to create a TPC mapping: public abstract class BillingDetail {     public int BillingDetailId { get; set; }     public string Owner { get; set; }     public string Number { get; set; } }          public class BankAccount : BillingDetail {     public string BankName { get; set; }     public string Swift { get; set; } }          public class CreditCard : BillingDetail {     public int CardType { get; set; }     public string ExpiryMonth { get; set; }     public string ExpiryYear { get; set; } }      public class InheritanceMappingContext : DbContext {     public DbSet<BillingDetail> BillingDetails { get; set; }              protected override void OnModelCreating(ModelBuilder modelBuilder)     {         modelBuilder.Entity<BankAccount>().Map(m =>         {             m.MapInheritedProperties();             m.ToTable("BankAccounts");         });         modelBuilder.Entity<CreditCard>().Map(m =>         {             m.MapInheritedProperties();             m.ToTable("CreditCards");         });                 } } The Importance of EntityMappingConfiguration ClassAs a side note, it worth mentioning that EntityMappingConfiguration class turns out to be a key type for inheritance mapping in Code First. Here is an snapshot of this class: namespace System.Data.Entity.ModelConfiguration.Configuration.Mapping {     public class EntityMappingConfiguration<TEntityType> where TEntityType : class     {         public ValueConditionConfiguration Requires(string discriminator);         public void ToTable(string tableName);         public void MapInheritedProperties();     } } As you have seen so far, we used its Requires method to customize TPH. We also used its ToTable method to create a TPT and now we are using its MapInheritedProperties along with ToTable method to create our TPC mapping. TPC Configuration is Not Done Yet!We are not quite done with our TPC configuration and there is more into this story even though the fluent API we saw perfectly created a TPC mapping for us in the database. To see why, let's start working with our object model. For example, the following code creates two new objects of BankAccount and CreditCard types and tries to add them to the database: using (var context = new InheritanceMappingContext()) {     BankAccount bankAccount = new BankAccount();     CreditCard creditCard = new CreditCard() { CardType = 1 };                      context.BillingDetails.Add(bankAccount);     context.BillingDetails.Add(creditCard);     context.SaveChanges(); } Running this code throws an InvalidOperationException with this message: The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges. The reason we got this exception is because DbContext.SaveChanges() internally invokes SaveChanges method of its internal ObjectContext. ObjectContext's SaveChanges method on its turn by default calls AcceptAllChanges after it has performed the database modifications. AcceptAllChanges method merely iterates over all entries in ObjectStateManager and invokes AcceptChanges on each of them. 
Since the entities are in Added state, AcceptChanges method replaces their temporary EntityKey with a regular EntityKey based on the primary key values (i.e. BillingDetailId) that come back from the database and that's where the problem occurs since both the entities have been assigned the same value for their primary key by the database (i.e. on both BillingDetailId = 1) and the problem is that ObjectStateManager cannot track objects of the same type (i.e. BillingDetail) with the same EntityKey value hence it throws. If you take a closer look at the TPC's SQL schema above, you'll see why the database generated the same values for the primary keys: the BillingDetailId column in both BankAccounts and CreditCards table has been marked as identity. How to Solve The Identity Problem in TPC As you saw, using SQL Server’s int identity columns doesn't work very well together with TPC since there will be duplicate entity keys when inserting in subclasses tables with all having the same identity seed. Therefore, to solve this, either a spread seed (where each table has its own initial seed value) will be needed, or a mechanism other than SQL Server’s int identity should be used. Some other RDBMSes have other mechanisms allowing a sequence (identity) to be shared by multiple tables, and something similar can be achieved with GUID keys in SQL Server. While using GUID keys, or int identity keys with different starting seeds will solve the problem but yet another solution would be to completely switch off identity on the primary key property. As a result, we need to take the responsibility of providing unique keys when inserting records to the database. We will go with this solution since it works regardless of which database engine is used. Switching Off Identity in Code First We can switch off identity simply by placing DatabaseGenerated attribute on the primary key property and pass DatabaseGenerationOption.None to its constructor. DatabaseGenerated attribute is a new data annotation which has been added to System.ComponentModel.DataAnnotations namespace in CTP5: public abstract class BillingDetail {     [DatabaseGenerated(DatabaseGenerationOption.None)]     public int BillingDetailId { get; set; }     public string Owner { get; set; }     public string Number { get; set; } } As always, we can achieve the same result by using fluent API, if you prefer that: modelBuilder.Entity<BillingDetail>()             .Property(p => p.BillingDetailId)             .HasDatabaseGenerationOption(DatabaseGenerationOption.None); Working With The Object Model Our TPC mapping is ready and we can try adding new records to the database. But, like I said, now we need to take care of providing unique keys when creating new objects: using (var context = new InheritanceMappingContext()) {     BankAccount bankAccount = new BankAccount()      {          BillingDetailId = 1                          };     CreditCard creditCard = new CreditCard()      {          BillingDetailId = 2,         CardType = 1     };                      context.BillingDetails.Add(bankAccount);     context.BillingDetails.Add(creditCard);     context.SaveChanges(); } Polymorphic Associations with TPC is Problematic The main problem with this approach is that it doesn’t support Polymorphic Associations very well. 
After all, in the database, associations are represented as foreign key relationships and in TPC, the subclasses are all mapped to different tables so a polymorphic association to their base class (abstract BillingDetail in our example) cannot be represented as a simple foreign key relationship. For example, consider the the domain model we introduced here where User has a polymorphic association with BillingDetail. This would be problematic in our TPC Schema, because if User has a many-to-one relationship with BillingDetail, the Users table would need a single foreign key column, which would have to refer both concrete subclass tables. This isn’t possible with regular foreign key constraints. Schema Evolution with TPC is Complex A further conceptual problem with this mapping strategy is that several different columns, of different tables, share exactly the same semantics. This makes schema evolution more complex. For example, a change to a base class property results in changes to multiple columns. It also makes it much more difficult to implement database integrity constraints that apply to all subclasses. Generated SQLLet's examine SQL output for polymorphic queries in TPC mapping. For example, consider this polymorphic query for all BillingDetails and the resulting SQL statements that being executed in the database: var query = from b in context.BillingDetails select b; Just like the SQL query generated by TPT mapping, the CASE statements that you see in the beginning of the query is merely to ensure columns that are irrelevant for a particular row have NULL values in the returning flattened table. (e.g. BankName for a row that represents a CreditCard type). TPC's SQL Queries are Union Based As you can see in the above screenshot, the first SELECT uses a FROM-clause subquery (which is selected with a red rectangle) to retrieve all instances of BillingDetails from all concrete class tables. The tables are combined with a UNION operator, and a literal (in this case, 0 and 1) is inserted into the intermediate result; (look at the lines highlighted in yellow.) EF reads this to instantiate the correct class given the data from a particular row. A union requires that the queries that are combined, project over the same columns; hence, EF has to pad and fill up nonexistent columns with NULL. This query will really perform well since here we can let the database optimizer find the best execution plan to combine rows from several tables. There is also no Joins involved so it has a better performance than the SQL queries generated by TPT where a Join is required between the base and subclasses tables. Choosing Strategy GuidelinesBefore we get into this discussion, I want to emphasize that there is no one single "best strategy fits all scenarios" exists. As you saw, each of the approaches have their own advantages and drawbacks. Here are some rules of thumb to identify the best strategy in a particular scenario: If you don’t require polymorphic associations or queries, lean toward TPC—in other words, if you never or rarely query for BillingDetails and you have no class that has an association to BillingDetail base class. I recommend TPC (only) for the top level of your class hierarchy, where polymorphism isn’t usually required, and when modification of the base class in the future is unlikely. If you do require polymorphic associations or queries, and subclasses declare relatively few properties (particularly if the main difference between subclasses is in their behavior), lean toward TPH. 
Your goal is to minimize the number of nullable columns and to convince yourself (and your DBA) that a denormalized schema won’t create problems in the long run. If you do require polymorphic associations or queries, and subclasses declare many properties (subclasses differ mainly by the data they hold), lean toward TPT. Or, depending on the width and depth of your inheritance hierarchy and the possible cost of joins versus unions, use TPC. By default, choose TPH only for simple problems. For more complex cases (or when you’re overruled by a data modeler insisting on the importance of nullability constraints and normalization), you should consider the TPT strategy. But at that point, ask yourself whether it may not be better to remodel inheritance as delegation in the object model (delegation is a way of making composition as powerful for reuse as inheritance). Complex inheritance is often best avoided for all sorts of reasons unrelated to persistence or ORM. EF acts as a buffer between the domain and relational models, but that doesn’t mean you can ignore persistence concerns when designing your classes. Summary: In this series, we focused on one of the main structural aspects of the object/relational paradigm mismatch, inheritance, and discussed how EF solves this problem as an ORM solution. We learned about the three well-known inheritance mapping strategies and their implementations in EF Code First. Hopefully it gives you better insight into mapping inheritance hierarchies, as well as choosing the best strategy for your particular scenario. Happy New Year and Happy Code-Firsting! References: the ADO.NET team blog and the book Java Persistence with Hibernate.
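
    To round out the polymorphic-association discussion above with something concrete, here is a hedged sketch (the User type is assumed from earlier posts in the series and is not defined in this excerpt) of the kind of association TPC cannot express as a single foreign key, because BankAccount and CreditCard rows live in different tables.

        // Sketch: a polymorphic association that TPC cannot map to one foreign key.
        // The User type is assumed from earlier posts in this series.
        public class User
        {
            public int UserId { get; set; }
            public string Name { get; set; }

            // Under TPC the referenced row lives in either the BankAccounts table or
            // the CreditCards table, so no single FK column on the Users table can
            // point at "the" BillingDetail table.
            public virtual BillingDetail DefaultBillingDetail { get; set; }
        }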


  • Can't connect to WCF service on Android

    - by illvm
    I am trying to connect to a .NET WCF service in Android using kSOAP2 (v 2.1.2), but I keep getting a fatal exception whenever I try to make the service call. I'm having a bit of difficulty tracking down the error and can't seem to figure out why it's happening. The code I am using is below: package org.example.android; import org.ksoap2.SoapEnvelope; import org.ksoap2.serialization.PropertyInfo; import org.ksoap2.serialization.SoapObject; import org.ksoap2.serialization.SoapSerializationEnvelope; import org.ksoap2.transport.HttpTransport; import android.app.Activity; import android.app.AlertDialog; import android.content.DialogInterface; import android.content.Intent; import android.os.Bundle; public class ValidateUser extends Activity { private static final String SOAP_ACTION = "http://tempuri.org/mobile/ValidateUser"; private static final String METHOD_NAME = "ValidateUser"; private static final String NAMESPACE = "http://tempuri.org/mobile/"; private static final String URL = "http://192.168.1.2:8002/WebService.Mobile.svc"; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.validate_user); Intent intent = getIntent(); Bundle extras = intent.getExtras(); if (extras == null) { this.finish(); } String username = extras.getString("username"); String password = extras.getString("password"); Boolean validUser = false; try { SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME); PropertyInfo uName = new PropertyInfo(); uName.name = "userName"; PropertyInfo pWord = new PropertyInfo(); pWord.name = "passWord"; request.addProperty(uName, username); request.addProperty(pWord, password); SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); envelope.dotNet = true; envelope.setOutputSoapObject(request); HttpTransport androidHttpTransport = new HttpTransport(URL); androidHttpTransport.call(SOAP_ACTION, envelope); // error occurs here Integer userId = (Integer)envelope.getResponse(); validUser = (userId != 0); } catch (Exception ex) { } } private void exit () { this.finish(); } } EDIT: Remove the old stack traces. In summary, the first problem was the program being unable to open a connection due to missing methods or libraries due to using vanilla kSOAP2 rather than a modified library for Android (kSOAP2-Android). The second issue was a settings issue. In the Manifest I did not add the following setting: <uses-permission android:name="android.permission.INTERNET" /> I am now having an issue with the XMLPullParser which I need to figure out. 
12-23 10:58:06.480: ERROR/SOCKETLOG(210): add_recv_stats recv 0 12-23 10:58:06.710: WARN/System.err(210): org.xmlpull.v1.XmlPullParserException: unexpected type (position:END_DOCUMENT null@1:0 in java.io.InputStreamReader@433fb070) 12-23 10:58:06.710: WARN/System.err(210): at org.kxml2.io.KXmlParser.exception(KXmlParser.java:243) 12-23 10:58:06.720: WARN/System.err(210): at org.kxml2.io.KXmlParser.nextTag(KXmlParser.java:1363) 12-23 10:58:06.720: WARN/System.err(210): at org.ksoap2.SoapEnvelope.parse(SoapEnvelope.java:126) 12-23 10:58:06.720: WARN/System.err(210): at org.ksoap2.transport.Transport.parseResponse(Transport.java:63) 12-23 10:58:06.720: WARN/System.err(210): at org.ksoap2.transport.HttpTransportSE.call(HttpTransportSE.java:100) 12-23 10:58:06.730: WARN/System.err(210): at org.example.android.ValidateUser.onCreate(ValidateUser.java:68) 12-23 10:58:06.730: WARN/System.err(210): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1122) 12-23 10:58:06.730: WARN/System.err(210): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2104) 12-23 10:58:06.730: WARN/System.err(210): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2157) 12-23 10:58:06.730: WARN/System.err(210): at android.app.ActivityThread.access$1800(ActivityThread.java:112) 12-23 10:58:06.730: WARN/System.err(210): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1581) 12-23 10:58:06.730: WARN/System.err(210): at android.os.Handler.dispatchMessage(Handler.java:88) 12-23 10:58:06.730: WARN/System.err(210): at android.os.Looper.loop(Looper.java:123) 12-23 10:58:06.730: WARN/System.err(210): at android.app.ActivityThread.main(ActivityThread.java:3739) 12-23 10:58:06.730: WARN/System.err(210): at java.lang.reflect.Method.invokeNative(Native Method) 12-23 10:58:06.730: WARN/System.err(210): at java.lang.reflect.Method.invoke(Method.java:515) 12-23 10:58:06.730: WARN/System.err(210): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:739) 12-23 10:58:06.730: WARN/System.err(210): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:497) 12-23 10:58:06.730: WARN/System.err(210): at dalvik.system.NativeStart.main(Native Method)
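
    The XmlPullParserException at END_DOCUMENT usually means the HTTP response body was empty or was not SOAP at all (for example a WCF error page or a binding mismatch; ksoap2's SOAP 1.1 envelope expects something like basicHttpBinding). A small sketch using the ksoap2-android HttpTransportSE class already visible in the stack trace, with URL, SOAP_ACTION and envelope taken from the code above (org.ksoap2.transport.HttpTransportSE and android.util.Log assumed imported); requestDump and responseDump are only populated when debug is enabled.

        // Sketch: capture the raw SOAP request/response to see what the WCF
        // service actually returned before the parser failed.
        HttpTransportSE transport = new HttpTransportSE(URL);
        transport.debug = true;  // required for requestDump / responseDump

        try {
            transport.call(SOAP_ACTION, envelope);
            Object response = envelope.getResponse();
            Log.d("ValidateUser", "Response: " + response);
        } catch (Exception ex) {
            Log.e("ValidateUser", "SOAP call failed", ex);
        } finally {
            Log.d("ValidateUser", "Request XML: " + transport.requestDump);
            Log.d("ValidateUser", "Response XML: " + transport.responseDump);
        }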


  • Parallelism in .NET – Part 6, Declarative Data Parallelism

    - by Reed
    When working with a problem that can be decomposed by data, we have a collection, and some operation being performed upon the collection. I’ve demonstrated how this can be parallelized using the Task Parallel Library and imperative data parallelism via the Parallel class. While this provides a huge step forward in terms of power and capabilities, in many cases, special care must still be given for relatively common scenarios. C# 3.0 and Visual Basic 9.0 introduced a new, declarative programming model to .NET via the LINQ Project. When working with collections, we can now write software that describes what we want to occur without having to explicitly state how the program should accomplish the task. By taking advantage of LINQ, many operations become much shorter, more elegant, and easier to understand and maintain. Version 4.0 of the .NET Framework extends this concept into the parallel computation space by introducing Parallel LINQ. Before we delve into PLINQ, let’s begin with a short discussion of LINQ. LINQ, the extensions to the .NET Framework which implement language integrated query, set, and transform operations, is implemented in many flavors. For our purposes, we are interested in LINQ to Objects. When parallelizing a routine, we are typically dealing with in-memory data storage. More data-access oriented LINQ variants, such as LINQ to SQL and LINQ to Entities in the Entity Framework, fall outside of our concern, since the parallelism there is the concern of the database engine processing the query itself. LINQ (LINQ to Objects in particular) works by implementing a series of extension methods, most of which work on IEnumerable<T>. The language enhancements use these extension methods to create a very concise, readable alternative to the traditional foreach statement. For example, let’s revisit our minimum aggregation routine we wrote in Part 4: double min = double.MaxValue; foreach(var item in collection) { double value = item.PerformComputation(); min = System.Math.Min(min, value); } Here, we’re doing a very simple computation, but writing this in an imperative style. This can be loosely translated to English as: Create a very large number, and save it in min. Loop through each item in the collection. For every item: perform some computation and save the result; if the result is less than min, set min to that result. Although this is fairly easy to follow, it’s quite a few lines of code, and it requires us to read through the code, step by step, line by line, in order to understand the intention of the developer. We can rework this same statement, using LINQ: double min = collection.Min(item => item.PerformComputation()); Here, we’re after the same information.  However, this is written using a declarative programming style.  
When we see this code, we’d naturally translate this to English as: Save the Min value of collection, determined via calling item.PerformComputation() That’s it – instead of multiple logical steps, we have one single, declarative request.  This makes the developer’s intentions very clear, and very easy to follow.  The system is free to implement this using whatever method required. Parallel LINQ (PLINQ) extends LINQ to Objects to support parallel operations.  This is a perfect fit in many cases when you have a problem that can be decomposed by data.  To show this, let’s again refer to our minimum aggregation routine from Part 4, but this time, let’s review our final, parallelized version: // Safe, and fast! double min = double.MaxValue; // Make a "lock" object object syncObject = new object(); Parallel.ForEach( collection, // First, we provide a local state initialization delegate. () => double.MaxValue, // Next, we supply the body, which takes the original item, loop state, // and local state, and returns a new local state (item, loopState, localState) => { double value = item.PerformComputation(); return System.Math.Min(localState, value); }, // Finally, we provide an Action<TLocal>, to "merge" results together localState => { // This requires locking, but it's only once per used thread lock(syncObj) min = System.Math.Min(min, localState); } ); Here, we’re doing the same computation as above, but fully parallelized.  Describing this in English becomes quite a feat: Create a very large number, and save it in min Create a temporary object we can use for locking Call Parallel.ForEach, specifying three delegates For the first delegate: Initialize a local variable to hold the local state to a very large number For the second delegate: For each item in the collection, perform some computation, save the result If the result is less than our local state, save the result in local state For the final delegate: Take a lock on our temporary object to protect our min variable Save the min of our min and local state variables Although this solves our problem, and does it in a very efficient way, we’ve created a set of code that is quite a bit more difficult to understand and maintain. PLINQ provides us with a very nice alternative.  In order to use PLINQ, we need to learn one new extension method that works on IEnumerable<T> – ParallelEnumerable.AsParallel(). That’s all we need to learn in order to use PLINQ: one single method.  We can write our minimum aggregation in PLINQ very simply: double min = collection.AsParallel().Min(item => item.PerformComputation()); By simply adding “.AsParallel()” to our LINQ to Objects query, we converted this to using PLINQ and running this computation in parallel!  This can be loosely translated into English easily, as well: Process the collection in parallel Get the Minimum value, determined by calling PerformComputation on each item Here, our intention is very clear and easy to understand.  We just want to perform the same operation we did in serial, but run it “as parallel”.  PLINQ completely extends LINQ to Objects: the entire functionality of LINQ to Objects is available.  By simply adding a call to AsParallel(), we can specify that a collection should be processed in parallel.  This is simple, safe, and incredibly useful.
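
    Because PLINQ exposes the full LINQ to Objects surface, the same AsParallel() call composes with filtering and projection as well. A short sketch, reusing the collection and PerformComputation() from the post's earlier examples:

        // Sketch: AsParallel() composes with the rest of the LINQ operators.
        // 'collection' and PerformComputation() are the ones from the post above.
        double min = collection
            .AsParallel()
            .Select(item => item.PerformComputation())
            .Where(value => !double.IsNaN(value))   // any ordinary LINQ operator can follow
            .Min();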

    Read the article

  • Rails 3 / RVM - Acts_as_list compiled locally - Why Can't Ruby See This Gem?

    - by rabbit on rails
    I cannot figure out why rails/ruby cannot see this gem, despite each telling me that the gem is visible. I compiled this gem locally from a github branch since the main version seems to be broken in Rails 3. Or perhaps I am missing something else entirely. Ovid:lightserve dlipa$ gem list *** LOCAL GEMS *** .. acts_as_list (0.2.1) .. And Ovid:lightserve dlipa$ cat Gemfile ... gem "acts_as_list", "0.2.1" ... And Ovid:lightserve dlipa$ bundle install ... Using acts_as_list (0.2.1) Your bundle is updated! Use `bundle show [gemname]` to see where a bundled gem is installed But Ovid:lightserve dlipa$ r c RubyGems Environment: - RUBYGEMS VERSION: 1.6.1 - RUBY VERSION: 1.9.2 (2011-02-18 patchlevel 180) [x86_64-darwin10.6.0] - INSTALLATION DIRECTORY: /Users/dlipa/.rvm/gems/ruby-1.9.2-p180 - RUBY EXECUTABLE: /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/bin/ruby - EXECUTABLE DIRECTORY: /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/bin - RUBYGEMS PLATFORMS: - ruby - x86_64-darwin-10 - GEM PATHS: - /Users/dlipa/.rvm/gems/ruby-1.9.2-p180 - /Users/dlipa/.rvm/gems/ruby-1.9.2-p180@global - GEM CONFIGURATION: - :update_sources => true - :verbose => true - :benchmark => false - :backtrace => false - :bulk_threshold => 1000 - :sources => ["http://rubygems.org/", "http://gems.github.com"] - REMOTE SOURCES: - http://rubygems.org/ - http://gems.github.com Loading development environment (Rails 3.0.5) ruby-1.9.2-p180 :001 > require 'acts_as_list' LoadError: no such file to load -- acts_as_list from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:239:in `require' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:239:in `block in require' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:225:in `block in load_dependency' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:596:in `new_constants_in' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:225:in `load_dependency' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:239:in `require' from (irb):1 from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/railties-3.0.5/lib/rails/commands/console.rb:44:in `start' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/railties-3.0.5/lib/rails/commands/console.rb:8:in `start' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/railties-3.0.5/lib/rails/commands.rb:23:in `<top (required)>' from script/rails:6:in `require' from script/rails:6:in `<main>' ruby-1.9.2-p180 :002 > And Ovid:lightserve dlipa$ irb ruby-1.9.2-p180 :001 > require 'acts_as_list' LoadError: no such file to load -- acts_as_list from /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require' from /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require' from (irb):1 from /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/bin/irb:16:in `<main>' ruby-1.9.2-p180 :002 > Can anyone explain why this might be happening? I'd really appreciate it! ** UPDATE -- Response to Andrew Marshall's suggestion** I changed Gemfile to read the gem directly from git, but it did not resolve the problem. Does this mean that there is a problem with this gem? 
The error message is not very helpful ;-) Removed: Ovid:lightserve dlipa$ bundle show acts_as_list Could not find gem 'acts_as_list' in the current bundle. Then added back via: gem "acts_as_list", :git => "git://github.com/vpereira/acts_as_list.git" Ovid:lightserve dlipa$ bundle install Updating git://github.com/vpereira/acts_as_list.git ... Same problem even though bundle show matches the commit on that page: Ovid:lightserve dlipa$ bundle show acts_as_list /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/bundler/gems/acts_as_list-4cb76a8b198c Ovid:lightserve dlipa$ irb ruby-1.9.2-p180 :001 > require 'acts_as_list' LoadError: no such file to load -- acts_as_list from /Users/dlipa/.rvm/rubies/ruby-1.9.2-.. I just looked in the gem and it appears there is no file called 'acts_as_list' in the gem. So it appears to be idiosyncratic, albeit poorly reported by Rails/Ruby. The API appears to have changed to: ruby-1.9.2-p180 :003 > require 'active_record/acts/list' => nil ruby-1.9.2-p180 :004 > ActiveRecord::Acts::List => ActiveRecord::Acts::List

    Read the article

  • wss4j: - Cannot find key for alias: monit

    - by feiroox
    Hi I'm using axis1.4 and wss4j. When I define in client-config.wsdd for WSDoAllSender and WSDoAllReceiver different signaturePropFiles where I have different key stores defined with different certificates, I'm able to have different certificates for sending and receiving. But when I use the same signaturePropFiles' with the same keystore. I get this message when I try to send a message: org.apache.ws.security.components.crypto.CryptoBase -- Cannot find key for alias: [monit] in keystore of type [jks] from provider [SUN version 1.5] with size [2] and aliases: {other, monit} - Error during Signature: ; nested exception is: org.apache.ws.security.WSSecurityException: Signature creation failed; nested exception is: java.lang.Exception: Cannot find key for alias: [monit] org.apache.ws.security.WSSecurityException: Error during Signature: ; nested exception is: org.apache.ws.security.WSSecurityException: Signature creation failed; nested exception is: java.lang.Exception: Cannot find key for alias: [monit] at org.apache.ws.security.action.SignatureAction.execute(SignatureAction.java:60) at org.apache.ws.security.handler.WSHandler.doSenderAction(WSHandler.java:202) at org.apache.ws.axis.security.WSDoAllSender.invoke(WSDoAllSender.java:168) at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32) at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118) at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83) at org.apache.axis.client.AxisClient.invoke(AxisClient.java:127) at org.apache.axis.client.Call.invokeEngine(Call.java:2784) at org.apache.axis.client.Call.invoke(Call.java:2767) at org.apache.axis.client.Call.invoke(Call.java:2443) at org.apache.axis.client.Call.invoke(Call.java:2366) at org.apache.axis.client.Call.invoke(Call.java:1812) at cz.ing.oopf.model.wsclient.ModelWebServiceSoapBindingStub.getStatus(ModelWebServiceSoapBindingStub.java:213) at cz.ing.oopf.wsgemonitor.monitor.util.MonitorUtil.checkStatus(MonitorUtil.java:18) at cz.ing.oopf.wsgemonitor.monitor.Test02WsMonitor.runTest(Test02WsMonitor.java:23) at cz.ing.oopf.wsgemonitor.Main.main(Main.java:75) Caused by: org.apache.ws.security.WSSecurityException: Signature creation failed; nested exception is: java.lang.Exception: Cannot find key for alias: [monit] at org.apache.ws.security.message.WSSecSignature.computeSignature(WSSecSignature.java:721) at org.apache.ws.security.message.WSSecSignature.build(WSSecSignature.java:780) at org.apache.ws.security.action.SignatureAction.execute(SignatureAction.java:57) ... 15 more Caused by: java.lang.Exception: Cannot find key for alias: [monit] at org.apache.ws.security.components.crypto.CryptoBase.getPrivateKey(CryptoBase.java:214) at org.apache.ws.security.message.WSSecSignature.computeSignature(WSSecSignature.java:713) ... 17 more How to have two certificates for wss4j in the same keystore? why it cannot find my certificate there when i have two certificates in one keystore. 
I have the same password for both certificates regarding PWCallback (CallbackHandler) My properties file: org.apache.ws.security.crypto.provider=org.apache.ws.security.components.crypto.Merlin org.apache.ws.security.crypto.merlin.keystore.type=jks org.apache.ws.security.crypto.merlin.keystore.password=keystore org.apache.ws.security.crypto.merlin.keystore.alias=monit org.apache.ws.security.crypto.merlin.alias.password=*** org.apache.ws.security.crypto.merlin.file=key.jks My client-config.wsdd: <deployment xmlns="http://xml.apache.org/axis/wsdd/" xmlns:java="http://xml.apache.org/axis/wsdd/providers/java"> <globalConfiguration> <requestFlow> <handler name="WSSecurity" type="java:org.apache.ws.axis.security.WSDoAllSender"> <parameter name="user" value="monit"/> <parameter name="passwordCallbackClass" value="cz.ing.oopf.common.ws.PWCallback"/> <parameter name="action" value="Signature"/> <parameter name="signaturePropFile" value="monit.properties"/> <parameter name="signatureKeyIdentifier" value="DirectReference" /> <parameter name="mustUnderstand" value="0"/> </handler> <handler type="java:org.apache.axis.handlers.JWSHandler"> <parameter name="scope" value="session"/> </handler> <handler type="java:org.apache.axis.handlers.JWSHandler"> <parameter name="scope" value="request"/> <parameter name="extension" value=".jwr"/> </handler> </requestFlow> <responseFlow> <handler name="DoSecurityReceiver" type="java:org.apache.ws.axis.security.WSDoAllReceiver"> <parameter name="user" value="other"/> <parameter name="passwordCallbackClass" value="cz.ing.oopf.common.ws.PWCallback"/> <parameter name="action" value="Signature"/> <parameter name="signaturePropFile" value="other.properties"/> <parameter name="signatureKeyIdentifier" value="DirectReference" /> </handler> </responseFlow> </globalConfiguration> <transport name="http" pivot="java:org.apache.axis.transport.http.HTTPSender"> </transport> </deployment> Listing from keytool: keytool -keystore monit-key.jks -v -list Enter keystore password: Keystore type: JKS Keystore provider: SUN Your keystore contains 2 entries Alias name: other Creation date: Jul 22, 2009 Entry type: PrivateKeyEntry Certificate chain length: 1 Certificate[1]: .... Alias name: monit Creation date: Oct 19, 2009 Entry type: trustedCertEntry

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft’s offices in London Victoria. As usual it was both a fun and informative evening and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now. Scene setting A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS lookup component (when using FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow’s execution which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there’s an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have then the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible. General tips Here is a simple tick list you can follow in order to tune your lookups: Use a SQL statement to charge your cache, don’t just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?) Only pick the columns that you need, ignore everything else Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length. Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR so if you can use DT_STR, consider doing so. Same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8. Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause! Thinking outside the box It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little and here are a few ideas to get your creative juices flowing: There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow. Know your data. 
If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset and the remaining data that doesn’t find a match could be passed to a second lookup that perhaps uses a NoCache lookup thus negating the need to pull all of that least-used lookup data into memory. Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you’ll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop. Taking the previous data partitioning idea further … a dataflow can contain more than one data path so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition. Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set then you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow. Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond. Above all, experiment and be creative with different combinations. You may be surprised at what works. Final thoughts If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT then that’s a lot of work that wouldn’t have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps! @Jamiet

    Read the article

  • How can I get penetration depth from Minkowski Portal Refinement / Xenocollide?

    - by Raven Dreamer
    I recently got an implementation of Minkowski Portal Refinement (MPR) successfully detecting collision. Even better, my implementation returns a good estimate (local minimum) direction for the minimum penetration depth. So I took a stab at adjusting the algorithm to return the penetration depth in an arbitrary direction, and was modestly successful - my altered method works splendidly for face-edge collision resolution! What it doesn't currently do, is correctly provide the minimum penetration depth for edge-edge scenarios, such as the case on the right: What I perceive to be happening, is that my current method returns the minimum penetration depth to the nearest vertex - which works fine when the collision is actually occurring on the plane of that vertex, but not when the collision happens along an edge. Is there a way I can alter my method to return the penetration depth to the point of collision, rather than the nearest vertex? Here's the method that's supposed to return the minimum penetration distance along a specific direction: public static Vector3 CalcMinDistance(List<Vector3> shape1, List<Vector3> shape2, Vector3 dir) { //holding variables Vector3 n = Vector3.zero; Vector3 swap = Vector3.zero; // v0 = center of Minkowski sum v0 = Vector3.zero; // Avoid case where centers overlap -- any direction is fine in this case //if (v0 == Vector3.zero) return Vector3.zero; //always pass in a valid direction. // v1 = support in direction of origin n = -dir; //get the differnce of the minkowski sum Vector3 v11 = GetSupport(shape1, -n); Vector3 v12 = GetSupport(shape2, n); v1 = v12 - v11; //if the support point is not in the direction of the origin if (v1.Dot(n) <= 0) { //Debug.Log("Could find no points this direction"); return Vector3.zero; } // v2 - support perpendicular to v1,v0 n = v1.Cross(v0); if (n == Vector3.zero) { //v1 and v0 are parallel, which means //the direction leads directly to an endpoint n = v1 - v0; //shortest distance is just n //Debug.Log("2 point return"); return n; } //get the new support point Vector3 v21 = GetSupport(shape1, -n); Vector3 v22 = GetSupport(shape2, n); v2 = v22 - v21; if (v2.Dot(n) <= 0) { //can't reach the origin in this direction, ergo, no collision //Debug.Log("Could not reach edge?"); return Vector2.zero; } // Determine whether origin is on + or - side of plane (v1,v0,v2) //tests linesegments v0v1 and v0v2 n = (v1 - v0).Cross(v2 - v0); float dist = n.Dot(v0); // If the origin is on the - side of the plane, reverse the direction of the plane if (dist > 0) { //swap the winding order of v1 and v2 swap = v1; v1 = v2; v2 = swap; //swap the winding order of v11 and v12 swap = v12; v12 = v11; v11 = swap; //swap the winding order of v11 and v12 swap = v22; v22 = v21; v21 = swap; //and swap the plane normal n = -n; } /// // Phase One: Identify a portal while (true) { // Obtain the support point in a direction perpendicular to the existing plane // Note: This point is guaranteed to lie off the plane Vector3 v31 = GetSupport(shape1, -n); Vector3 v32 = GetSupport(shape2, n); v3 = v32 - v31; if (v3.Dot(n) <= 0) { //can't enclose the origin within our tetrahedron //Debug.Log("Could not reach edge after portal?"); return Vector3.zero; } // If origin is outside (v1,v0,v3), then eliminate v2 and loop if (v1.Cross(v3).Dot(v0) < 0) { //failed to enclose the origin, adjust points; v2 = v3; v21 = v31; v22 = v32; n = (v1 - v0).Cross(v3 - v0); continue; } // If origin is outside (v3,v0,v2), then eliminate v1 and loop if (v3.Cross(v2).Dot(v0) < 0) { //failed to enclose 
the origin, adjust points; v1 = v3; v11 = v31; v12 = v32; n = (v3 - v0).Cross(v2 - v0); continue; } bool hit = false; /// // Phase Two: Refine the portal int phase2 = 0; // We are now inside of a wedge... while (phase2 < 20) { phase2++; // Compute normal of the wedge face n = (v2 - v1).Cross(v3 - v1); n.Normalize(); // Compute distance from origin to wedge face float d = n.Dot(v1); // If the origin is inside the wedge, we have a hit if (d > 0 ) { //Debug.Log("Do plane test here"); float T = n.Dot(v2) / n.Dot(dir); Vector3 pointInPlane = (dir * T); return pointInPlane; } // Find the support point in the direction of the wedge face Vector3 v41 = GetSupport(shape1, -n); Vector3 v42 = GetSupport(shape2, n); v4 = v42 - v41; float delta = (v4 - v3).Dot(n); float separation = -(v4.Dot(n)); if (delta <= kCollideEpsilon || separation >= 0) { //Debug.Log("Non-convergance detected"); //Debug.Log("Do plane test here"); return Vector3.zero; } // Compute the tetrahedron dividing face (v4,v0,v1) float d1 = v4.Cross(v1).Dot(v0); // Compute the tetrahedron dividing face (v4,v0,v2) float d2 = v4.Cross(v2).Dot(v0); // Compute the tetrahedron dividing face (v4,v0,v3) float d3 = v4.Cross(v3).Dot(v0); if (d1 < 0) { if (d2 < 0) { // Inside d1 & inside d2 ==> eliminate v1 v1 = v4; v11 = v41; v12 = v42; } else { // Inside d1 & outside d2 ==> eliminate v3 v3 = v4; v31 = v41; v32 = v42; } } else { if (d3 < 0) { // Outside d1 & inside d3 ==> eliminate v2 v2 = v4; v21 = v41; v22 = v42; } else { // Outside d1 & outside d3 ==> eliminate v1 v1 = v4; v11 = v41; v12 = v42; } } } return Vector3.zero; } }

    Read the article

  • UITableView issue when using separate delegate/dataSource

    - by Adam Alexander
    General Description: To start with what works, I have a UITableView which has been placed onto an Xcode-generated view using Interface Builder. The view's File Owner is set to an Xcode-generated subclass of UIViewController. To this subclass I have added working implementations of numberOfSectionsInTableView: tableView:numberOfRowsInSection: and tableView:cellForRowAtIndexPath: and the Table View's dataSource and delegate are connected to this class via the File Owner in Interface Builder. The above configuration works with no problems. The issue occurs when I want to move this Table View's dataSource and delegate implementations out to a separate class, most likely because there are other controls on the View besides the Table View and I'd like to move the Table View-related code out to its own class. To accomplish this, I try the following: Create a new subclass of UITableViewController in Xcode Move the known-good implementations of numberOfSectionsInTableView: tableView:numberOfRowsInSection: and tableView:cellForRowAtIndexPath: to the new subclass Drag a Table View Controller to the top level of the existing XIB in InterfaceBuilder, delete the View/TableView that are automatically created for this Table View Controller, then set the Table View Controller's class to match the new subclass Remove the previously-working Table View's existing dataSource and delegate connections and connect them to the new Table View Controller When complete, I do not have a working Table View. I end up with one of three outcomes which can seemingly happen at random: When the Table View loads, I get a runtime error indicating I am sending tableView:cellForRowAtIndexPath: to an object which does not recognize it When the Table View loads, the project breaks into the debugger without error There is no error, but the Table View does not appear With some debugging and having created a basic project just to reproduce this issue, I am usually seeing the 3rd option above (no error but no visible table view). I added some NSLog calls and found that although numberOfSectionsInTableView and numberOfRowsInSection are both getting called, cellForRowAtIndexPath is not. I am convinced I'm missing something really simple and was hoping the answer may be obvious to someone with more experience than I have. If this doesn't turn out to be an easy answer I would be happy to update with some code or a sample project. Thanks for your time! Complete steps to reproduce: Create a new iPhone OS, View-Based Application in Xcode and call it TableTest Open TableTestViewController.xib in Interface Builder and drag a Table View onto the provided view surface. Connect the Table View's dataSource and delegate outlets to File's Owner, which should already represent the TableTestViewController class. 
Save your changes Back in Xcode, add the following code to TableTestViewController.m: - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { NSLog(@"Returning num sections"); return 1; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { NSLog(@"Returning num rows"); return 1; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { NSLog(@"Trying to return cell"); static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:CellIdentifier] autorelease]; } cell.text = @"Hello"; NSLog(@"Returning cell"); return cell; } Build and Go, and you should see the word Hello appear in the TableView Now to attempt to move this TableView's logic out to a separate class, first create a new file in Xcode, choosing UITableViewController subclass and calling the class TableTestTableViewController Remove the above code snippet from TableTestViewController.m and place it into TableTestTableViewController.m, replacing the default implementation of these three methods with ours. Back in Interface Builder within the same TableTestViewController.xib file, drag a Table View Controller into the main IB window and delete the new Table View object that automatically came with it Set the class for this new Table View Controller to TableTestTableViewController Remove the dataSource and delegate bindings from the existing, previously-working Table View and reconnect the same two bindings to the new Table Test Table View Controller we created. Save changes, Build and Go, and if you're getting the results I'm getting, note the Table View no longer functions properly Solution: With some more troubleshooting and some assistance from the iPhone Developer Forums at https://devforums.apple.com/message/5453, I've documented a solution! The main UIViewController subclass of the project needs an outlet pointing to the UITableViewController instance. To accomplish this, simply add the following to the primary view's header (TableTestViewController.h): #import "TableTestTableViewController.h" and IBOutlet TableTestTableViewController *myTableViewController; Then, in Interface Builder, connect the new outlet from File's Owner to Table Test Table View Controller in the main IB window. No changes are necessary in the UI part of the XIB. Simply having this outlet in place, even though no user code directly uses it, resolves the problem completely. Thanks to those who've helped and credit goes to BaldEagle on the iPhone Developer Forums for finding the solution.

    Read the article

  • Enterprise Process Maps: A Process Picture worth a Million Words

    - by raul.goycoolea
    Getting Started with Business Transformations A well-known proverb states that "A picture is worth a thousand words." In relation to Business Process Management (BPM), a credible analyst might have a few questions. What if the picture was taken from some particular angle, like directly overhead? What if it was taken from only an inch away or a mile away? What if the photographer did not focus the camera correctly? Does the value of the picture depend on who is looking at it? Enterprise Process Maps are analogous in this sense of relative value. Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should begin with a clear understanding of the business environment, from the biggest picture representations down to the lowest level required or desired for the particular project type, scope and objectives. The Enterprise Process Map serves as an entry point for the process architecture and is defined as the single highest level of process mapping for an organization. It is constructed and evaluated during the Strategy Phase of the Business Process Management Lifecycle. (see Figure 1) Fig. 1: Business Process Management Lifecycle Many organizations view such maps as visual abstractions, constructed for the single purpose of process categorization. This, in turn, results in a lesser focus on the inherent intricacies of the Enterprise Process view, which are explored in the course of this paper. With the main focus of a large scale process documentation effort usually underlying an ERP or other system implementation, it is common for the work to be driven by the desire to "get to the details," and to the type of modeling that will derive near-term tangible results. For instance, a project in American Pharmaceutical Company X is driven by the Director of IT. With 120+ systems in place, and a lack of standardized processes across the United States, he and the VP of IT have decided to embark on a long-term ERP implementation. At the forefront for both are questions such as: How does my application architecture map to the business? What are each application's functionalities, and where do the business processes utilize them? Where can we retire legacy systems? Well-developed BPM methodologies prescribe numerous model types to capture such information and allow for thorough analysis in these areas. Process to application maps, Event Driven Process Chains, etc. provide this level of detail and facilitate the completion of such project-specific questions. These models and such analysis are appropriately carried out at a relatively low level of process detail. (see figure 2) Fig. 2: The Level Concept, Generic Process Hierarchy Some of the questions remaining are ones of documentation longevity, the continuation of BPM practice in the organization, process governance and ownership, process transparency and clarity in business process objectives and strategy. The Level Concept in Brief Figure 2 shows a generic, four-level process hierarchy depicting the breakdown of a "Process Area" into progressively more detailed process classifications. 
The number of levels and the names of these levels are flexible, and can be fit to the standards of the organization's chosen terminology or any other chosen reference model that makes logical sense for both short and long term process description. It is at Level 1 (in this case the Process Area level), that the Enterprise Process Map is created. This map and its contained objects become the foundation for a top-down approach to subsequent mapping, object relationship development, and analysis of the organization's processes and its supporting infrastructure. Additionally, this picture serves as a communication device, at an executive level, describing the design of the business in its service to a customer. It seems, then, imperative that the process development effort, and this map, start off on the right foot. Figuring out just what that right foot is, however, is critical and trend-setting in an evolving organization. Key Considerations Enterprise Process Maps are usually not as living and breathing as other process maps. Just as it would be an extremely difficult task to change the foundation of the Sears Tower or a city plan for the entire city of Chicago, the Enterprise Process view of an organization usually remains unchanged once developed (unless, of course, an organization is at a stage where it is capable of true, high-level process innovation). Regardless, the Enterprise Process map is a key first step, and one that must be taken in a precise way. What makes this groundwork solid depends on not only the materials used to construct it (process areas), but also the layout plan and knowledge base of what will be built (the entire process architecture). It seems reasonable that care and consideration are required to create this critical high level map... but what are the important factors? Does the process modeler need to worry about how many process areas there are? About who is looking at it? Should he only use the color pink because it's his boss' favorite color? Interestingly, and perhaps surprisingly, these are all valid considerations that may just require a bit of structure. Below are Three Key Factors to consider when building an Enterprise Process Map: Company Strategic Focus Process Categorization: Customer is Core End-to-end versus Functional Processes Company Strategic Focus As mentioned above, the Enterprise Process Map is created during the Strategy Phase of the Business Process Management Lifecycle. From Oracle Business Process Management methodology for business transformation, it is apparent that business processes exist for the purpose of achieving the strategic objectives of an organization. In a prescribed, top-down approach to process development, it must be ensured that each process fulfills its objectives, and in an aggregated manner, drives fulfillment of the strategic objectives of the company, whether for particular business segments or in a broader sense. This is a crucial point, as the strategic messages of the company must therefore resound in its process maps, in particular one that spans the processes of the complete business: the Enterprise Process Map. One simple example from Company X is shown below (see figure 3). Fig. 3: Company X Enterprise Process Map In reviewing Company X's Enterprise Process Map, one can immediately begin to understand the general strategic mindset of the organization. It shows that Company X is focused on its customers, defining 10 of its process areas belonging to customer-focused categories. 
Additionally, the organization views these end-customer-oriented process areas as part of customer-fulfilling value chains, while support process areas do not provide as much contiguous value. However, by including both support and strategic process categorizations, it becomes apparent that all processes are considered vital to the success of the customer-oriented focus processes. Below is an example from Company Y (see figure 4). Fig. 4: Company Y Enterprise Process Map Company Y, although also a customer-oriented company, sends a differently focused message with its depiction of the Enterprise Process Map. Along the top of the map is the company's product tree, overarching the process areas, which when executed deliver the products themselves. This indicates one strategic objective of excellence in product quality. Additionally, the view represents a less linear value chain, with strong overlaps of the various process areas. Marketing and quality management are seen as key support processes, as they span the process lifecycle. Often, companies may incorporate graphics, logos and symbols representing customers and suppliers, and other objects to truly send the strategic message to the business. Other times, Enterprise Process Maps may show high-level responsibility assignments to organizational units, or the application types that support the process areas. It is possible that hundreds of formats and focuses can be applied to an Enterprise Process Map. What is of vital importance, however, is which formats and focuses are chosen to truly represent the direction of the company, and serve as a driver for focusing the business on the strategic objectives that have been set forth. Process Categorization: Customer is Core In the previous two examples, processes were grouped using differing categories and techniques. Company X showed one support and three customer process categorizations using encompassing chevron objects; Company Y achieved a less distinct categorization using a gradual color scheme. Either way, and in general, modeling of the process areas becomes even more valuable and easily understood within the context of business categorization, be it strategic or otherwise. But how one categorizes their processes is typically more complex than simply choosing object shapes and colors. Previously, it was stated that the ideal is a prescribed top-down approach to developing processes, to make certain linkages all the way back up to corporate strategy. But what about external influences? What forces push and pull corporate strategy? Industry maturity, product lifecycle, market profitability, competition, etc. can all drive the critical success factors of a particular business segment, or the company as a whole, in addition to previous corporate strategy. This may seem to be turning into a discussion of theory, but that is far from the case. In fact, in years of recent study and evolution of the way businesses operate, cross-industry and across the globe, one invariable has surfaced with such strength as to make it undeniable in the game plan of any strategy fit for survival. That constant is the customer. Many of a company's critical success factors, in any business segment, relate to the customer: customer retention, satisfaction, loyalty, etc. Businesses serve customers, and so do a business's processes, mapped or unmapped. The most effective way to categorize processes is in a manner that visualizes convergence to what is core for a company. 
It is the value chain, beginning with the customer in mind, and ending with the fulfillment of that customer, that becomes the core or the centerpiece of the Enterprise Process Map. (See figure 5) Fig. 5: Company Z Enterprise Process Map Company Z has what may be viewed as several different perspectives or "cuts" baked into their Enterprise Process Map. It has divided its processes into three main categories (top, middle, and bottom) of Management Processes, the Core Value Chain and Supporting Processes. The Core category begins with Corporate Marketing (which contains the activities of beginning to engage customers) and ends with Customer Service Management. Within the value chain, this company has divided it into the focus areas of their two primary business lines, Foods and Beverages. Does this mean that areas such as Strategy, Information Management or Project Management are not as important as those in the Core category? No! In some cases, though, depending on the organization's understanding of high-level BPM concepts, use of category names, such as "Core," "Management" or "Support," can be a touchy subject. What is important to understand is that no matter the nomenclature chosen, the Core processes are those that drive directly to customer value, Support processes are those which make the Core processes possible to execute, and Management Processes are those which steer and influence the Core. Some common terms for these three basic categorizations are Core, Customer Fulfillment, Customer Relationship Management, Governing, Controlling, Enabling, Support, etc. End-to-end versus Functional Processes Every high and low level of process: function, task, activity, process/work step (whatever an organization calls it), should add value to the flow of business in an organization. Suppose that within the process "Deliver package," there is a documented task titled "Stop for ice cream." It doesn't take a process expert to deduce the room for improvement. Though stopping for ice cream may create gain for the one person performing it, it likely benefits neither the organization nor, more importantly, the customer. In most cases, "Stop for ice cream" wouldn't make it past the first pass of To-Be process development. What would make the cut, however, would be a flow of tasks that, each having their own value add, build up to greater and greater levels of process objective. In this case, those tasks would combine to achieve a status of "package delivered." Figure 6 shows a simple example: Just as the package cannot be delivered (the outcome of the process) without first being retrieved, loaded, and the travel destination reached (the outcomes of the process steps), some higher level of process "Play Practical Joke" (e.g., main process or process area) cannot be completed until a package is delivered. It seems that isolated or functionally separated processes, such as "Deliver Package" (shown in Figure 6), are necessary, but are always part of a bigger value chain. Each of these individual processes must be analyzed within the context of that value chain in order to ensure successful end-to-end process performance. For example, this company's "Create Joke Package" process could be operating flawlessly and efficiently, but if a joke is never developed, it cannot be created, so the end-to-end process breaks. Fig. 6: End to End Process Construction That being recognized, it is clear that processes must be viewed as end-to-end, customer-to-customer, and in the context of company strategy. 
But as can also be seen from the previous example, these vital end-to-end processes cannot be built without the functionally oriented building blocks. Without one, the other cannot be had, or at least not in a complete and organized fashion. As it turns out, but not discussed in depth here, the process modeling effort, BPM organizational development, and comprehensive coverage cannot be fully realized without a semi-functional, process-oriented approach. An Enterprise Process Map, then, should be concerned with both views: the building blocks, and access points to the business-critical end-to-end processes which they construct. Without the functional building blocks, all streams of work needed for any business transformation would be a lost mess of process disorganization. End-to-end views are essential for optimization in context, understanding customer impacts, base-lining all project phases and aligning objectives. Including both views on an Enterprise Process Map allows management to understand the functional orientation of the company's processes, while still providing access to end-to-end processes, which are most valuable to them. (See figures 7 and 8). Fig. 7: Simplified Enterprise Process Map with end-to-end Access Point The above examples show two unique ways to achieve a successful Enterprise Process Map. The first example is a simple map that shows a high level set of process areas and a separate section with the end-to-end processes of concern for the organization. This particular map is filtered to show just one vital end-to-end process for a project-specific focus. Fig. 8: Detailed Enterprise Process Map showing connected Functional Processes The second example shows a more complex arrangement and categorization of functional processes (the names of the process areas have been removed). The end-to-end perspective is achieved at this level through the connections (interfaces at lower levels) between these functional process areas. An important point to note is that the organization of these two views of the Enterprise Process Map is dependent, in large part, on the orientation of its audience, and the complexity of the landscape at the highest level. If both are not apparent, the Enterprise Process Map is missing an opportunity to serve as a holistic, high-level view. Conclusion In the world of BPM, and specifically regarding Enterprise Process Maps, a picture can be worth as many words as the thought and effort that is put into it. Enterprise Process Maps alone cannot change an organization, but they serve more purposes than initially meet the eye, and therefore must be designed in a way that enables a BPM mindset, business process understanding and business transformation efforts. Every Enterprise Process Map will and should be different when looking across organizations. Its design will be driven by company strategy, a level of customer focus, and functional versus end-to-end orientations. This high-level description of the considerations of the Enterprise Process Maps is not a prescriptive "how to" guide. However, a company attempting to create one may not have the practical BPM experience to truly explore its options or impacts to the coming work of business process transformation. The biggest takeaway is that process modeling, at all levels, is a science and an art, and art is open to interpretation. It is critical that the modeler of the highest level of process mapping be a cognoscente of the message he is delivering and the factors at hand. 
Without sufficient focus on the design of the Enterprise Process Map, an entire BPM effort may suffer. For additional information please check: Oracle Business Process Management.

    Read the article

  • ASP.NET MVC 3: Layouts and Sections with Razor

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features: Introducing Razor (July 2nd) New @model keyword in Razor (Oct 19th) Layouts with Razor (Oct 22nd) Server-Side Comments with Razor (Nov 12th) Razor’s @: and <text> syntax (Dec 15th) Implicit and Explicit code nuggets with Razor (Dec 16th) Layouts and Sections with Razor (Today) In today’s post I’m going to go into more details about how Layout pages work with Razor.  In particular, I’m going to cover how you can have multiple, non-contiguous, replaceable “sections” within a layout file – and enable views based on layouts to optionally “fill in” these different sections at runtime.  The Razor syntax for doing this is clean and concise. I’ll also show how you can dynamically check at runtime whether a particular layout section has been defined, and how you can provide alternate content (or even an alternate layout) in the event that a section isn’t specified within a view template.  This provides a powerful and easy way to customize the UI of your site and make it clean and DRY from an implementation perspective. What are Layouts? You typically want to maintain a consistent look and feel across all of the pages within your web-site/application.  ASP.NET 2.0 introduced the concept of “master pages” which helps enable this when using .aspx based pages or templates.  Razor also supports this concept with a feature called “layouts” – which allow you to define a common site template, and then inherit its look and feel across all the views/pages on your site. I previously discussed the basics of how layout files work with Razor in my ASP.NET MVC 3: Layouts with Razor blog post.  Today’s post will go deeper and discuss how you can define multiple, non-contiguous, replaceable regions within a layout file that you can then optionally “fill in” at runtime. Site Layout Scenario Let’s look at how we can implement a common site layout scenario with ASP.NET MVC 3 and Razor.  Specifically, we’ll implement some site UI where we have a common header and footer on all of our pages.  We’ll also add a “sidebar” section to the right of our common site layout.  On some pages we’ll customize the SideBar to contain content specific to the page it is included on: And on other pages (that do not have custom sidebar content) we will fall back and provide some “default content” to the sidebar: We’ll use ASP.NET MVC 3 and Razor to enable this customization in a nice, clean way.  Below are some step-by-step tutorial instructions on how to build the above site with ASP.NET MVC 3 and Razor. Part 1: Create a New Project with a Layout for the “Body” section We’ll begin by using the “File->New Project” menu command within Visual Studio to create a new ASP.NET MVC 3 Project.  We’ll create the new project using the “Empty” template option: This will create a new project that has no default controllers in it: Creating a HomeController We will then right-click on the “Controllers” folder of our newly created project and choose the “Add->Controller” context menu command.  This will bring up the “Add Controller” dialog: We’ll name the new controller we create “HomeController”.  When we click the “Add” button Visual Studio will add a HomeController class to our project with a default “Index” action method that returns a view: We won’t need to write any Controller logic to implement this sample – so we’ll leave the default code as-is.  
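For reference (the post's screenshots are not reproduced in this text), the controller that Visual Studio generates is essentially the following minimal sketch; the namespace is assumed since the sample project's name isn't given:

using System.Web.Mvc;

namespace MvcLayoutsSample.Controllers   // hypothetical project/namespace name
{
    public class HomeController : Controller
    {
        // Default action added by the "Add Controller" dialog
        public ActionResult Index()
        {
            return View();
        }
    }
}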
Creating a View Template Our next step will be to implement the view template associated with the HomeController’s Index action method.  To implement the view template, we will right-click within the “HomeController.Index()” method and select the “Add View” command to create a view template for our home page: This will bring up the “Add View” dialog within Visual Studio.  We do not need to change any of the default settings within the above dialog (the name of the template was auto-populated to Index because we invoked the “Add View” context menu command within the Index method).  When we click the “Add” Button within the dialog, a Razor-based “Index.cshtml” view template will be added to the \Views\Home\ folder within our project.  Let’s add some simple default static content to it: Notice above how we don’t have an <html> or <body> section defined within our view template.  This is because we are going to rely on a layout template to supply these elements and use it to define the common site layout and structure for our site (ensuring that it is consistent across all pages and URLs within the site).  Customizing our Layout File Let’s open and customize the default “_Layout.cshtml” file that was automatically added to the \Views\Shared folder when we created our new project: The default layout file (shown above) is pretty basic and simply outputs a title (if specified in either the Controller or the View template) and adds links to a stylesheet and jQuery.  The call to “RenderBody()” indicates where the main body content of our Index.cshtml file will be merged into the output sent back to the browser. Let’s modify the Layout template to add a common header, footer and sidebar to the site: We’ll then edit the “Site.css” file within the \Content folder of our project and add 4 CSS rules to it: And now when we run the project and browse to the home “/” URL of our project we’ll see a page like below: Notice how the content of the HomeController’s Index view template and the site’s Shared Layout template have been merged together into a single HTML response.  Below is what the HTML sent back from the server looks like: Part 2: Adding a “SideBar” Section Our site so far has a layout template that has only one “section” in it – what we call the main “body” section of the response.  Razor also supports the ability to add additional “named sections” to layout templates.  These sections can be defined anywhere in the layout file (including within the <head> section of the HTML), and allow you to output dynamic content to multiple, non-contiguous, regions of the final response. Defining the “SideBar” section in our Layout Let’s update our Layout template to define an additional “SideBar” section of content that will be rendered within the <div id=”sidebar”> region of our HTML.  We can do this by calling the RenderSection(string sectionName, bool required) helper method within our Layout.cshtml file like below:   The first parameter to the “RenderSection()” helper method specifies the name of the section we want to render at that location in the layout template.  The second parameter is optional, and allows us to define whether the section we are rendering is required or not.  If a section is “required”, then Razor will throw an error at runtime if that section is not implemented within a view template that is based on the layout file (which can make it easier to track down content errors).  
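Since the layout screenshot isn't reproduced in this text, here is a rough sketch of what the sidebar region of _Layout.cshtml might look like after this change (the div id matches the walkthrough; the surrounding markup is assumed):

<div id="sidebar">
    @RenderSection("SideBar", required: false)
</div>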
If a section is not required, then its presence within a view template is optional, and the above RenderSection() code will render nothing at runtime if it isn’t defined. Now that we’ve made the above change to our layout file, let’s hit refresh in our browser and see what our Home page now looks like: Notice how we currently have no content within our SideBar <div> – that is because the Index.cshtml view template doesn’t implement our new “SideBar” section yet. Implementing the “SideBar” Section in our View Template Let’s change our home-page so that it has a SideBar section that outputs some custom content.  We can do that by opening up the Index.cshtml view template, and by adding a new “SideBar” section to it.  We’ll do this using Razor’s @section SectionName { } syntax: We could have put our SideBar @section declaration anywhere within the view template.  I think it looks cleaner when defined at the top or bottom of the file – but that is simply personal preference.  You can include any content or code you want within @section declarations.  Notice above how I have a C# code nugget that outputs the current time at the bottom of the SideBar section.  I could have also written code that used ASP.NET MVC’s HTML/AJAX helper methods and/or accessed any strongly-typed model objects passed to the Index.cshtml view template. Now that we’ve made the above template changes, when we hit refresh in our browser again we’ll see that our SideBar content – that is specific to the Home Page of our site – is now included in the page response sent back from the server: The SideBar section content has been merged into the proper location of the HTML response: Part 3: Conditionally Detecting if a Layout Section Has Been Implemented Razor provides the ability for you to conditionally check (from within a layout file) whether a section has been defined within a view template, and enables you to output an alternative response in the event that the section has not been defined.  This provides a convenient way to specify default UI for optional layout sections.  Let’s modify our Layout file to take advantage of this capability.  Below we are conditionally checking whether the “SideBar” section has been defined within the view template being rendered (using the IsSectionDefined() method), and if so we render the section.  If the section has not been defined, then we now instead render some default content for the SideBar:  Note: You want to make sure you prefix calls to the RenderSection() helper method with a @ character – which will tell Razor to execute the HelperResult it returns and merge in the section content in the appropriate place of the output.  Notice how we wrote @RenderSection(“SideBar”) above instead of just RenderSection(“SideBar”).  Otherwise you’ll get an error. Above we are simply rendering an inline static string (<p>Default SideBar Content</p>) if the section is not defined.  A real-world site would more likely refactor this default content to be stored within a separate partial template (which we’d render using the Html.RenderPartial() helper method within the else block) or alternatively use the Html.Action() helper method within the else block to encapsulate both the logic and rendering of the default sidebar. When we hit refresh on our home-page, we will still see the same custom SideBar content we had before.  
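Since the corresponding screenshots are also not reproduced here, a minimal sketch of the two pieces just described – the @section declaration in Index.cshtml and the conditional check in _Layout.cshtml – might look roughly like this (the sidebar text is illustrative only):

@* Index.cshtml - declaring the optional section *@
@section SideBar {
    <p>Custom sidebar content for the home page.</p>
    <p>Generated at @DateTime.Now</p>
}

@* _Layout.cshtml - rendering the section, falling back to default content *@
<div id="sidebar">
    @if (IsSectionDefined("SideBar")) {
        @RenderSection("SideBar")
    }
    else {
        <p>Default SideBar Content</p>
    }
</div>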
This is because we implemented the SideBar section within our Index.cshtml view template (and so our Layout rendered it): Let’s now implement a “/Home/About” URL for our site by adding a new “About” action method to our HomeController: The About() action method above simply renders a view back to the client when invoked.  We can implement the corresponding view template for this action by right-clicking within the “About()” method and using the “Add View” menu command (like before) to create a new About.cshtml view template.  We’ll implement the About.cshtml view template like below. Notice that we are not defining a “SideBar” section within it: When we browse the /Home/About URL we’ll see the content we supplied above in the main body section of our response, and the default SideBar content will be rendered: The layout file determined at runtime that a custom SideBar section wasn’t present in the About.cshtml view template, and instead rendered the default sidebar content. One Last Tweak… Let’s suppose that at a later point we decide that instead of rendering default side-bar content, we just want to hide the side-bar entirely from pages that don’t have any custom sidebar content defined.  We could implement this change simply by making a small modification to our layout so that the sidebar content (and its surrounding HTML chrome) is only rendered if the SideBar section is defined.  The code to do this is below: Razor is flexible enough so that we can make changes like this and not have to modify any of our view templates (nor make any Controller logic changes) to accommodate this.  We can instead make just this one modification to our Layout file and the rest happens cleanly.  This type of flexibility makes Razor incredibly powerful and productive. Summary Razor’s layout capability enables you to define a common site template, and then inherit its look and feel across all the views/pages on your site. Razor enables you to define multiple, non-contiguous, “sections” within layout templates that can be “filled-in” by view templates.  The @section {} syntax for doing this is clean and concise.  Razor also supports the ability to dynamically check at runtime whether a particular section has been defined, and to provide alternate content (or even an alternate layout) in the event that it isn’t specified.  This provides a powerful and easy way to customize the UI of your site - and make it clean and DRY from an implementation perspective. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • ODI 12c - Parallel Table Load

    - by David Allan
In this post we will look at the ODI 12c capability of parallel table load from the aspect of the mapping developer and the knowledge module developer - two quite different viewpoints. This is about parallel table loading, which isn't to be confused with loading multiple targets per se. It supports the ability for ODI mappings to be executed concurrently, especially if there is an overlap of the datastores that they access, so that any temporary resources created can be given unique names by ODI. Temporary objects can be anything basically - common examples are staging tables, indexes, views, directories - anything in the ETL to help the data integration flow do its job. In ODI 11g users found a few workarounds (such as changing the technology prefixes - see here) to build unique temporary names, but it was more of a challenge in error cases. ODI 12c mappings by default operate exactly as they did in ODI 11g with respect to these temporary names (this is also true for upgraded interfaces and scenarios) but can be configured to support the uniqueness capabilities. We will look at this feature from two aspects: that of a mapping developer and that of a developer (of procedures or KMs).
1. Firstly as a Mapping Developer.....
1.1 Control when uniqueness is enabled A new property is available to set unique name generation on/off. When unique names have been enabled for a mapping, all temporary names used by the collection and integration objects will be generated using unique names. This property is presented as a check-box in the Property Inspector for a deployment specification.
1.2 Handle cleanup after successful execution Provided that all temporary objects that are created have a corresponding drop statement, all of the temporary objects should be removed during a successful execution. This should be the case with the KMs developed by Oracle.
1.3 Handle cleanup after unsuccessful execution If an execution failed in ODI 11g then temporary tables would have been left around and cleaned up in the subsequent run. In ODI 12c, KM tasks can now have a cleanup-type task which is executed even after a failure in the main tasks. These cleanup tasks will be executed even on failure if the property 'Remove Temporary Objects on Error' is set. If the agent were to crash and not be able to execute this task, then there is an ODI tool (OdiRemoveTemporaryObjects here) you can invoke to clean up the tables - it supports date ranges and the like. That's all there is to it from the aspect of the mapping developer; it's much, much simpler and more straightforward. You can now execute the same mapping concurrently, or execute many mappings using the same resource concurrently, without worrying about conflict.
2. Secondly as a Procedure or KM Developer.....
In the ODI Operator the executed code shows the actual name that is generated - you can also see the runtime code prior to execution (introduced in 11.1.1.7); for example, below in the code type I selected 'Pre-executed Code', which lets you see the code about to be processed, and you can also see the executed code (which is the default view). References to the collection (C$) and integration (I$) names will be automatically made unique by using the odiRef APIs - these objects will have unique names whenever concurrency has been enabled for a particular mapping deployment specification. It's also possible to use name uniqueness functions in procedures and your own KMs.
2.1 New uniqueness tags
You can also make your own temporary objects have unique names by explicitly including either %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG in the name passed to calls to the odiRef APIs. Such names would always include the unique tag regardless of the concurrency setting. To illustrate, let's look at the getObjectName() method. At <% expansion time, this API will append %UNIQUE_STEP_TAG to the object name for collection and integration tables. The name parameter passed to this API may contain %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. This API always expands to the <? version of getObjectName(). At execution time this API will replace the unique tag macros with a string that is unique to the current execution scope. The returned name will conform to the name-length restriction for the target technology, and its pattern for the unique tag. Any necessary truncation will be performed against the initial name for the object and any other fixed text that may have been specified. Examples are: <?=odiRef.getObjectName("L", "%COL_PRFEMP%UNIQUE_STEP_TAG", "D")?> SCOTT.C$_EABH7QI1BR1EQI3M76PG9SIMBQQ <?=odiRef.getObjectName("L", "EMP%UNIQUE_STEP_TAG_AE", "D")?> SCOTT.EMPAO96Q2JEKO0FTHQP77TMSAIOSR_ Methods which have this kind of support include getFrom, getTableName, getTable, getObjectShortName and getTemporaryIndex. There are also APIs for retrieving this tag info; the getInfo API has been extended with the following properties (the UNIQUE* properties can also be used in ODI procedures): UNIQUE_STEP_TAG - Returns the unique value for the current step scope, e.g. 5rvmd8hOIy7OU2o1FhsF61 Note that this will be a different value for each loop-iteration when the step is in a loop. UNIQUE_SESSION_TAG - Returns the unique value for the current session scope, e.g. 6N38vXLrgjwUwT5MseHHY9 IS_CONCURRENT - Returns info about the current mapping, will return 0 or 1 (only in % phase) GUID_SRC_SET - Returns the UUID for the current source set/execution unit (only in % phase) The getPop API has been extended with the IS_CONCURRENT property, which returns info about a mapping and will return 0 or 1.
2.2 Additional APIs
Some new APIs are provided, including getFormattedName, which allows KM developers to construct a name from fixed text or ODI symbols that can be optionally truncated to a max length and use a specific encoding for the unique tag. It has the syntax getFormattedName(String pName[, String pTechnologyCode]). This API is available at both the % and the ? phase.  The format string can contain the ODI prefixes that are available for getObjectName(), e.g. %INT_PRF, %COL_PRF, %ERR_PRF, %IDX_PRF, along with %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. The latter tags will be expanded into a unique string according to the specified technology. Calls to this API within the same execution context are guaranteed to return the same unique name provided that the same parameters are passed to the call. e.g. <%=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")%> <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")?> C$_MY_TAB7wDiBe80vBog1auacS1xB_AE <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG.log", "FILE")?> C2_MY_TAB7wDiBe80vBog1auacS1xB.log
2.3 Name length generation
As part of name generation, the length of the generated name will be compared with the maximum length for the target technology and truncation may need to be applied.
When a unique tag is included in the generated string it is important that uniqueness is not compromised by truncation of the unique tag. When a unique tag is NOT part of the generated name, the name will be truncated by removing characters from the end - this is the existing 11g algorithm. When a unique tag is included, the algorithm will first truncate the <postfix> and, if necessary, the <prefix>. It is recommended that users ensure there is sufficient uniqueness in the <prefix> section to ensure uniqueness of the final resultant name.
SUMMARY
To summarize, ODI 12c makes it much simpler to use mappings in concurrent cases and provides APIs that help you develop procedures or custom knowledge modules in such a way that they can be used in highly concurrent, parallel scenarios.
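As a rough illustration of the KM/procedure side described above (the staging table name, its columns, and the choice of %UNIQUE_SESSION_TAG are invented for this example), a pair of tasks that create and later drop a uniquely named work table using getFormattedName might look like:

CREATE TABLE <?=odiRef.getFormattedName("%COL_PRFSTG_SALES%UNIQUE_SESSION_TAG", "ORACLE")?>
(
  SALE_ID NUMBER,
  AMOUNT  NUMBER
)

DROP TABLE <?=odiRef.getFormattedName("%COL_PRFSTG_SALES%UNIQUE_SESSION_TAG", "ORACLE")?>

Because %UNIQUE_SESSION_TAG resolves to the same value throughout the session, both statements refer to the same table; marking the drop as a cleanup-type task (with 'Remove Temporary Objects on Error' set) means it also runs if the main tasks fail.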

    Read the article

  • SSIS - XML Source Script

    - by simonsabin
The XML Source in SSIS is great if you have a 1 to 1 mapping between entity and table. You can do more complex mapping but it becomes very messy and won't perform. What other options do you have? The challenge with XML processing is to not need a huge amount of memory. I remember using the early versions of Biztalk, which loaded the whole document into memory to map from one document type to another. This was fine for small documents but was an absolute killer for large documents. You therefore need a streaming approach. For flexibility, however, you want to be able to generate your rows easily, and if you've ever used the XmlReader you will know it's ugly code to write. That brings me on to LINQ. There is an implementation of LINQ over XML which is really nice. You can write nice LINQ queries instead of the XmlReader stuff. The downside is that by default LINQ to XML requires a whole XML document to work with. No streaming. Your code would look like this. We create an XDocument and then enumerate over a set of anonymous types we generate from our LINQ statement: XDocument x = XDocument.Load("C:\\TEMP\\CustomerOrders-Attribute.xml");   foreach (var xdata in (from customer in x.Elements("OrderInterface").Elements("Customer")  from order in customer.Elements("Orders").Elements("Order")  select new { Account = customer.Attribute("AccountNumber").Value, OrderDate = order.Attribute("OrderDate").Value } )) {     Output0Buffer.AddRow();     Output0Buffer.AccountNumber = xdata.Account;     Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate); } As I said, the downside to this is that you are loading the whole document into memory. I did some googling and came across some helpful videos from a nice UK DPE, Mike Taulty: http://www.microsoft.com/uk/msdn/screencasts/screencast/289/LINQ-to-XML-Streaming-In-Large-Documents.aspx, which show you how you can combine LINQ and the XmlReader to get a semi-streaming approach. I took what he did and implemented it in SSIS. What I found odd was that when I ran it I got different numbers between using the streamed and non-streamed versions. I found the cause was a little bug in Mike's code that causes the pointer in the XmlReader to progress past the start of the element and thus miss some elements. Here is the streamed version I ended up with: foreach (var xdata in (from customer in StreamReader("C:\\TEMP\\CustomerOrders-Attribute.xml","Customer")  from order in customer.Elements("Orders").Elements("Order")  select new { Account = customer.Attribute("AccountNumber").Value, OrderDate = order.Attribute("OrderDate").Value } ))         {             Output0Buffer.AddRow();             Output0Buffer.AccountNumber = xdata.Account;             Output0Buffer.OrderDate = Convert.ToDateTime(xdata.OrderDate);         } These look very similar, and they are; the key element is the method we are calling, StreamReader. This method is what gives us streaming: it returns an enumerable list of elements, and because of the way that LINQ works this results in the data being streamed in.
static IEnumerable<XElement> StreamReader(String filename, string elementName) {     using (XmlReader xr = XmlReader.Create(filename))     {         xr.MoveToContent();         while (xr.Read()) //Reads the first element         {             while (xr.NodeType == XmlNodeType.Element && xr.Name == elementName)             {                 XElement node = (XElement)XElement.ReadFrom(xr);                   yield return node;             }         }         xr.Close();     } } This code is specifically designed to return a list of the elements with a specific name. The first Read reads the root element and then the inner while loop checks to see if the current element is the type we want. If not we do the xr.Read() again until we find the element type we want. We then use the neat function XElement.ReadFrom to read an element and all its sub-elements into an XElement. This is what is returned and can be consumed by the LINQ statement. Essentially, once one element has been read we need to check if we are still on the same element type and name (the inner loop). This was Mike's mistake: if we called .Read again we would advance the XmlReader beyond the start of the element and so the ReadFrom method wouldn't work. So with the code above you can use whatever LINQ statement you like to flatten your XML into the rowsets you want. You could even have multiple outputs and generate your own surrogate keys.

    Read the article

  • trying to override getView in a SimpleCursorAdapter gives NullPointerException

    - by Dimitry Hristov
    Would very much appreciate any help or hint on were to go next. I'm trying to change the content of a row in ListView programmatically. In one row there are 3 TextView and a ProgressBar. I want to animate the ProgressBar if the 'result' column of the current row is zero. After reading some tutorials and docs, I came to the conclusion that LayoutInflater has to be used and getView() - overriden. Maybe I am wrong on this. If I return row = inflater.inflate(R.layout.row, null); from the function, it gives NullPointerException. Here is the code: private final class mySimpleCursorAdapter extends SimpleCursorAdapter { private Cursor localCursor; private Context localContext; public mySimpleCursorAdapter(Context context, int layout, Cursor c, String[] from, int[] to) { super(context, layout, c, from, to); this.localCursor = c; this.localContext = context; } /** * 1. ListView asks adapter "give me a view" (getView) for each item of the list * 2. A new View is returned and displayed */ public View getView(int position, View convertView, ViewGroup parent) { View row = super.getView(position, convertView, parent); LayoutInflater inflater = (LayoutInflater)localContext.getSystemService(Context.LAYOUT_INFLATER_SERVICE); String result = localCursor.getString(2); int resInt = Integer.parseInt(result); Log.d(TAG, "row " + row); // if 'result' column form the TABLE is 0, do something useful: if(resInt == 0) { ProgressBar progress = (ProgressBar) row.findViewById(R.id.update_progress); progress.setIndeterminate(true); TextView edit1 = (TextView)row.findViewById(R.id.row_id); TextView edit2 = (TextView)row.findViewById(R.id.request); TextView edit3 = (TextView)row.findViewById(R.id.result); edit1.setText("1"); edit2.setText("2"); edit3.setText("3"); row = inflater.inflate(R.layout.row, null); } return row; } here is the Stack Trace: 03-08 03:15:29.639: ERROR/AndroidRuntime(619): java.lang.NullPointerException 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.SimpleCursorAdapter.bindView(SimpleCursorAdapter.java:149) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.CursorAdapter.getView(CursorAdapter.java:186) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at com.dhristov.test1.test1$mySimpleCursorAdapter.getView(test1.java:105) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.AbsListView.obtainView(AbsListView.java:1256) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.ListView.makeAndAddView(ListView.java:1668) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.ListView.fillDown(ListView.java:637) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.ListView.fillSpecific(ListView.java:1224) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.ListView.layoutChildren(ListView.java:1499) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.AbsListView.onLayout(AbsListView.java:1113) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.View.layout(View.java:6830) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1119) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.layoutVertical(LinearLayout.java:998) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.onLayout(LinearLayout.java:918) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.View.layout(View.java:6830) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at 
android.widget.LinearLayout.setChildFrame(LinearLayout.java:1119) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.layoutVertical(LinearLayout.java:998) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.onLayout(LinearLayout.java:918) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.View.layout(View.java:6830) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.FrameLayout.onLayout(FrameLayout.java:333) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.View.layout(View.java:6830) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1119) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.layoutVertical(LinearLayout.java:998) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.onLayout(LinearLayout.java:918) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.View.layout(View.java:6830) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.FrameLayout.onLayout(FrameLayout.java:333) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.View.layout(View.java:6830) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.ViewRoot.performTraversals(ViewRoot.java:996) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.ViewRoot.handleMessage(ViewRoot.java:1633) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.os.Handler.dispatchMessage(Handler.java:99) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.os.Looper.loop(Looper.java:123) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.app.ActivityThread.main(ActivityThread.java:4363) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at java.lang.reflect.Method.invokeNative(Native Method) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at java.lang.reflect.Method.invoke(Method.java:521) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at dalvik.system.NativeStart.main(Native Method)
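For comparison, here is a minimal sketch of a getView() override that reuses the row returned by super.getView() (and positions the adapter's cursor for the requested row) rather than inflating a second, unpopulated row afterwards. This is offered as one possible direction, not a verified fix for the exact crash above:

@Override
public View getView(int position, View convertView, ViewGroup parent) {
    // Let SimpleCursorAdapter inflate (or recycle) and bind the row first
    View row = super.getView(position, convertView, parent);

    // Read this row's data from the adapter's cursor, positioned at this row,
    // rather than from a cached cursor reference
    Cursor cursor = getCursor();
    cursor.moveToPosition(position);
    int resInt = Integer.parseInt(cursor.getString(2));

    if (resInt == 0) {
        ProgressBar progress = (ProgressBar) row.findViewById(R.id.update_progress);
        progress.setIndeterminate(true);
        // The row's TextViews can be updated here in the same way if needed
    }

    return row; // return the row super.getView() already bound; no second inflate
}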

    Read the article

  • Using a "white list" for extracting terms for Text Mining

    - by [email protected]
In Part 1 of my post on "Generating cluster names from a document clustering model" (part 1, part 2, part 3), I showed how to build a clustering model from text documents using Oracle Data Miner, which automates preparing data for text mining. In this process we specified a custom stoplist and lexer and relied on Oracle Text to identify important terms.  However, there is an alternative approach, the white list, which uses a thesaurus object with the Oracle Text CTXRULE index to allow you to specify the important terms.
INTRODUCTION
A stoplist is used to exclude, i.e., black list, specific words in your documents from being indexed. For example, words like a, if, and, or, and but normally add no value when text mining. Other words can also be excluded if they do not help to differentiate documents, e.g., the word Oracle is ubiquitous in the Oracle product literature. One problem with stoplists is determining which words to specify. This usually requires inspecting the terms that are extracted, manually identifying which ones you don't want, and then re-indexing the documents to determine if you missed any. Since a corpus of documents could contain thousands of words, this could be a tedious exercise. Moreover, since every word is considered as an individual token, a term excluded in one context may be needed to help identify a term in another context. For example, in our Oracle product literature example, the words "Oracle Data Mining" taken individually are not particularly helpful. The term "Oracle" may be found in nearly all documents, as with the term "Data." The term "Mining" is more unique, but could also refer to the Mining industry. If we exclude "Oracle" and "Data" by specifying them in the stoplist, we lose valuable information. But if we include them, they may introduce too much noise. Still, when you have a broad vocabulary or don't have a list of specific terms of interest, you rely on the text engine to identify important terms, often by computing the term frequency - inverse document frequency metric. (This is effectively a weight associated with each term indicating its relative importance in a document within a collection of documents. We'll revisit this later.) The results using this technique are often quite valuable. As noted above, an alternative to the subtractive nature of the stoplist is to specify a white list, or a list of terms--perhaps multi-word--that we want to extract and use for data mining. The obvious downside to this approach is the need to specify the set of terms of interest. However, this may not be as daunting a task as it seems. For example, in a given domain (Oracle product literature), there is often a recognized glossary, or a list of keywords and phrases (Oracle product names, industry names, product categories, etc.). Being able to identify multi-word terms, e.g., "Oracle Data Mining" or "Customer Relationship Management", as a single token can greatly increase the quality of the data mining results. The remainder of this post and subsequent posts will focus on how to produce a dataset that contains white list terms, suitable for mining.
CREATING A WHITE LIST
We'll leverage the thesaurus capability of Oracle Text. Using a thesaurus, we create a set of rules that are in effect our mapping from single and multi-word terms to the tokens used to represent those terms. For example, "Oracle Data Mining" becomes "ORACLEDATAMINING." First, we'll create and populate a mapping table called my_term_token_map.
All text has been converted to upper case and values in the TERM column are intended to be mapped to the token in the TOKEN column. TERM                                TOKEN DATA MINING                         DATAMINING ORACLE DATA MINING                  ORACLEDATAMINING 11G                                 ORACLE11G JAVA                                JAVA CRM                                 CRM CUSTOMER RELATIONSHIP MANAGEMENT    CRM ... Next, we'll create a thesaurus object my_thesaurus and a rules table my_thesaurus_rules: CTX_THES.CREATE_THESAURUS('my_thesaurus', FALSE); CREATE TABLE my_thesaurus_rules (main_term     VARCHAR2(100),                                  query_string  VARCHAR2(400)); We next populate the thesaurus object and rules table using the term token map. A cursor is defined over my_term_token_map. As we iterate over  the rows, we insert a synonym relationship 'SYN' into the thesaurus. We also insert into the table my_thesaurus_rules the main term, and the corresponding query string, which specifies synonyms for the token in the thesaurus. DECLARE   cursor c2 is     select token, term     from my_term_token_map; BEGIN   for r_c2 in c2 loop     CTX_THES.CREATE_RELATION('my_thesaurus',r_c2.token,'SYN',r_c2.term);     EXECUTE IMMEDIATE 'insert into my_thesaurus_rules values                        (:1,''SYN(' || r_c2.token || ', my_thesaurus)'')'     using r_c2.token;   end loop; END; We are effectively inserting the token to return and the corresponding query that will look up synonyms in our thesaurus into the my_thesaurus_rules table, for example:     'ORACLEDATAMINING'        SYN ('ORACLEDATAMINING', my_thesaurus)At this point, we create a CTXRULE index on the my_thesaurus_rules table: create index my_thesaurus_rules_idx on        my_thesaurus_rules(query_string)        indextype is ctxsys.ctxrule; In my next post, this index will be used to extract the tokens that match each of the rules specified. We'll then compute the tf-idf weights for each of the terms and create a nested table suitable for mining.
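The author walks through using this index in the next post; purely as a hedged sketch of the general CTXRULE pattern (the bind variable name is made up for illustration), matching a piece of document text against the rules table looks roughly like:

SELECT main_term
  FROM my_thesaurus_rules
 WHERE MATCHES(query_string, :document_text) > 0;

Each row returned is a token whose synonym rule matched the supplied text, which is what allows multi-word terms such as "ORACLE DATA MINING" to be extracted as a single token.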

    Read the article

  • Thread scheduling Round Robin / scheduling dispatch

    - by MRP
    #include <pthread.h> #include <stdio.h> #include <stdlib.h> #include <semaphore.h> #define NUM_THREADS 4 #define COUNT_LIMIT 13 int done = 0; int count = 0; int quantum = 2; int thread_ids[4] = {0,1,2,3}; int thread_runtime[4] = {0,5,4,7}; pthread_mutex_t count_mutex; pthread_cond_t count_threshold_cv; void * inc_count(void * arg); static sem_t count_sem; int quit = 0; ///////// Inc_Count//////////////// void *inc_count(void *t) { long my_id = (long)t; int i; sem_wait(&count_sem); /////////////CRIT SECTION////////////////////////////////// printf("run_thread = %d\n",my_id); printf("%d \n",thread_runtime[my_id]); for( i=0; i < thread_runtime[my_id];i++) { printf("runtime= %d\n",thread_runtime[my_id]); pthread_mutex_lock(&count_mutex); count++; if (count == COUNT_LIMIT) { pthread_cond_signal(&count_threshold_cv); printf("inc_count(): thread %ld, count = %d Threshold reached.\n", my_id, count); } printf("inc_count(): thread %ld, count = %d, unlocking mutex\n",my_id, count); pthread_mutex_unlock(&count_mutex); sleep(1) ; }//End For sem_post(&count_sem); // Next Thread Enters Crit Section pthread_exit(NULL); } /////////// Count_Watch //////////////// void *watch_count(void *t) { long my_id = (long)t; printf("Starting watch_count(): thread %ld\n", my_id); pthread_mutex_lock(&count_mutex); if (count<COUNT_LIMIT) { pthread_cond_wait(&count_threshold_cv, &count_mutex); printf("watch_count(): thread %ld Condition signal received.\n", my_id); printf("watch_count(): thread %ld count now = %d.\n", my_id, count); } pthread_mutex_unlock(&count_mutex); pthread_exit(NULL); } ////////////////// Main //////////////// int main (int argc, char *argv[]) { int i; long t1=0, t2=1, t3=2, t4=3; pthread_t threads[4]; pthread_attr_t attr; sem_init(&count_sem, 0, 1); /* Initialize mutex and condition variable objects */ pthread_mutex_init(&count_mutex, NULL); pthread_cond_init (&count_threshold_cv, NULL); /* For portability, explicitly create threads in a joinable state */ pthread_attr_init(&attr); pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE); pthread_create(&threads[0], &attr, watch_count, (void *)t1); pthread_create(&threads[1], &attr, inc_count, (void *)t2); pthread_create(&threads[2], &attr, inc_count, (void *)t3); pthread_create(&threads[3], &attr, inc_count, (void *)t4); /* Wait for all threads to complete */ for (i=0; i<NUM_THREADS; i++) { pthread_join(threads[i], NULL); } printf ("Main(): Waited on %d threads. Done.\n", NUM_THREADS); /* Clean up and exit */ pthread_attr_destroy(&attr); pthread_mutex_destroy(&count_mutex); pthread_cond_destroy(&count_threshold_cv); pthread_exit(NULL); } I am trying to learn thread scheduling, there is a lot of technical coding that I don't know. I do know in theory how it should work, but having trouble getting started in code... I know, at least I think, this program is not real time and its not meant to be. Some how I need to create a scheduler dispatch to control the threads in the order they should run... RR FCFS SJF ect. Right now I don't have a dispatcher. What I do have is semaphores/ mutex to control the threads. This code does run FCFS... and I have been trying to use semaphores to create a RR.. but having a lot of trouble. I believe it would be easier to create a dispatcher but I dont know how. I need help, I am not looking for answers just direction.. some sample code will help to understand a bit more. Thank you.
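As a starting point rather than a full answer, one common pattern is a tiny dispatcher: a "whose turn is it" variable protected by a mutex and condition variable; each worker waits for its turn, consumes one quantum, then hands the turn to the next runnable worker. The sketch below uses invented names and simplified runtimes (it is not a drop-in replacement for the program above):

#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 3

static pthread_mutex_t sched_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  sched_cond  = PTHREAD_COND_INITIALIZER;
static int current = 0;                          /* id of the worker allowed to run */
static int runtime_left[NUM_WORKERS] = {5, 4, 7}; /* all must start > 0 in this sketch */
static const int quantum = 2;

/* Pick the next worker that still has time left; caller must hold sched_mutex. */
static void advance_turn(void)
{
    for (int i = 1; i <= NUM_WORKERS; i++) {
        int next = (current + i) % NUM_WORKERS;
        if (runtime_left[next] > 0) { current = next; break; }
    }
    pthread_cond_broadcast(&sched_cond);         /* wake everyone so they re-check */
}

static void *worker(void *arg)
{
    int id = (int)(long)arg;
    pthread_mutex_lock(&sched_mutex);
    while (runtime_left[id] > 0) {
        while (current != id)                    /* wait for my turn (releases mutex) */
            pthread_cond_wait(&sched_cond, &sched_mutex);
        int slice = runtime_left[id] < quantum ? runtime_left[id] : quantum;
        runtime_left[id] -= slice;               /* "do work" for one quantum */
        printf("thread %d ran for %d unit(s), %d remaining\n", id, slice, runtime_left[id]);
        advance_turn();                          /* round robin: pass the turn on */
    }
    pthread_mutex_unlock(&sched_mutex);
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_WORKERS];
    for (long i = 0; i < NUM_WORKERS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Swapping advance_turn() for a different selection rule (shortest remaining time, FCFS over a ready queue, etc.) is how the same skeleton can model other policies.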

    Read the article

  • Dynamic XAP loading in Task-It - Part 1

Download Source Code NOTE 1: The source code provided is running against the RC versions of Silverlight 4 and Visual Studio 2010, so you will need to update to those bits to run it. NOTE 2: After downloading the source, be sure to set the .Web project as the StartUp Project, and Default.aspx as the Start Page. In my MEF intro post, MEF to the rescue in Task-It, I outlined a couple of issues I was facing and explained why I chose MEF (the Managed Extensibility Framework) to solve these issues. Other posts to check out There are a few other resources out there around dynamic XAP loading that you may want to review (by the way, Glenn Block is the main dude when it comes to MEF): Glenn Block's 3-part series on a dynamically loaded dashboard Glenn and John Papa's Silverlight TV video on dynamic XAP loading These provide some great info, but didn't exactly cover the scenario I wanted to achieve in Task-It - and that is dynamically loading each of the app's pages the first time the user enters a page. The code In the code I provided for download above, I created a simple solution that shows the technique I used for dynamic XAP loading in Task-It, but without all of the other code that surrounds it. Taking all that other stuff away should make it easier to grasp. Having said that, there is still a fair amount of code involved. I am always looking for ways to make things simpler, and to achieve the desired result with as little code as possible, so if I find a better/simpler way I will blog about it, but for now this technique works for me. When I created this solution I started by creating a new Silverlight Navigation Application called DynamicXAPLoading. I then added the following line to my UriMappings in MainPage.xaml: <uriMapper:UriMapping Uri="/{assemblyName};component/{path}" MappedUri="/{assemblyName};component/{path}"/> In the section of MainPage.xaml that produces the page links in the upper right, I kept the Home link, but added a couple of new ones (page 1 and page 2). These are the pages that will be dynamically (lazy) loaded: <StackPanel x:Name="LinksStackPanel" Style="{StaticResource LinksStackPanelStyle}">      <HyperlinkButton Style="{StaticResource LinkStyle}" NavigateUri="/Home" TargetName="ContentFrame" Content="home"/>      <Rectangle Style="{StaticResource DividerStyle}"/>      <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 1" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage1}"/>      <Rectangle Style="{StaticResource DividerStyle}"/>      <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 2" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage2}"/>  </StackPanel> In App.xaml.cs I added a bit of MEF code. In Application_Startup I call a method called InitializeContainer, which creates a PackageCatalog (a MEF thing), then I create a CompositionContainer and pass it to the CompositionHost.Initialize method. This is boiler-plate MEF stuff that allows you to do 'composition' and import 'packages'. You're welcome to do a bit more MEF research on what is happening here if you'd like, but for the purpose of this example you can just trust that it works. :-)
private void Application_Startup(object sender, StartupEventArgs e) {     InitializeContainer();     this.RootVisual = new MainPage(); }   private static void InitializeContainer() {     var catalog = new PackageCatalog();     catalog.AddPackage(Package.Current);     var container = new CompositionContainer(catalog);     container.ComposeExportedValue(catalog);     CompositionHost.Initialize(container); } Infrastructure In the sample code you'll notice that there is a project in the solution called DynamicXAPLoading.Infrastructure. This is simply a Silverlight Class Library project that I created just to move stuff I considered application 'infrastructure' code into a separate place, rather than cluttering the main Silverlight project (DynamicXapLoading). I did this same thing in Task-It, as the amount of this type of code was starting to clutter up the Silverlight project, and it just seemed to make sense to move things like Enums, Constants and the like off to a separate place. In the DynamicXapLoading.Infrastructure project you'll see 3 classes: Enums - There is only one enum in here called ModuleEnum. We'll use these later. PageMetadata - We will use this class later to add metadata to a new dynamically loaded project. ViewModelBase - This is simply a base class for view models that we will use in this, as well as future samples. As mentioned in my MVVM post, I will be using the MVVM pattern throughout my code for reasons detailed in the post. By the way, the ViewModelExtension class in there allows me to do strongly-typed property changed notification, so rather than OnPropertyChanged("MyProperty"), I can do this.OnPropertyChanged(p => p.MyProperty). It's just a less error-prone approach, because if you don't spell "MyProperty" correctly using the first method, nothing will break, it just won't work. Adding a new page We currently have a couple of pages that are being dynamically (lazy) loaded, but now let's add a third page. 1. First, create a new Silverlight Application project: In this example I call it Page3. In the future you may prefer to use a different name, like DynamicXAPLoading.Page3, or even DynamicXAPLoading.Modules.Page3. It can be whatever you want. In my Task-It application I used the latter approach (with 'Modules' in the name). I do think of these applications as 'modules', but Prism uses the same term, so some folks may not like that. Use whichever naming convention you feel is appropriate, but for now Page3 will do. When you change the name to Page3 and click OK, you will be presented with the Add New Project dialog: It is important that you leave the 'Host the Silverlight application in a new or existing Web site in the solution' checked, and the .Web project will be selected in the dropdown below. This will create the .xap file for this project under ClientBin in the .Web project, which is where we want it. 2. Uncheck the 'Add a test page that references the application' checkbox, and leave everything else as is. 3. Once the project is created, you can delete App.xaml and MainPage.xaml. 4. You will need to add references in your new project to the following: DynamicXAPLoading.Infrastructure.dll (this is a Project reference) DynamicNavigation.dll (this is in the Libs directory under the DynamicXAPLoading project) System.ComponentModel.Composition.dll System.ComponentModel.Composition.Initialization.dll System.Windows.Controls.Navigation.dll If you have installed the latest RC bits you will find the last 3 dll's under the .NET tab in the Add Reference dialog.
They live in the following location, or if you are on a 64-bit machine like me, it will be Program Files (x86).       C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Libraries\Client Now let's create some UI for our new project. 5. First, create a new Silverlight User Control called Page3.dyn.xaml 6. Paste the following code into the xaml: <dyn:DynamicPageShim xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"     xmlns:my="clr-namespace:Page3;assembly=Page3">     <my:Page3Host /> </dyn:DynamicPageShim> This is just a 'shim', part of David Poll's technique for dynamic loading. 7. Expand the icon next to Page3.dyn.xaml and delete the code-behind file (Page3.dyn.xaml.cs). 8. Next we will create a control that will 'host' our page. Create another Silverlight User Control called Page3Host.xaml and paste in the following XAML: <dyn:DynamicPage x:Class="Page3.Page3Host"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"     xmlns:Views="clr-namespace:Page3.Views"      mc:Ignorable="d"     d:DesignHeight="300" d:DesignWidth="400"     Title="Page 3">       <Views:Page3/>   </dyn:DynamicPage> 9. Now paste the following code into the code-behind for this control: using DynamicXAPLoading.Infrastructure;   namespace Page3 {     [PageMetadata(NavigateUri = "/Page3;component/Page3.dyn.xaml", Module = Enums.Page3)]     public partial class Page3Host     {         public Page3Host()         {             InitializeComponent();         }     } } Notice that we are now using that PageMetadata custom attribute class that we created in the Infrastructure project, and setting its two properties. NavigateUri - This tells it that the assembly is called Page3 (with a slash beforehand), and the page we want to load is Page3.dyn.xaml...our 'shim'. That line we added to the UriMapper in MainPage.xaml will use this information to load the page. Module - This goes back to that ModuleEnum class in our Infrastructure project. However, setting the Module to ModuleEnum.Page3 will cause a compilation error, so... 10. Go back to that Enums.cs under the Infrastructure project and add a 3rd entry for Page3: public enum ModuleEnum {     Page1,     Page2,     Page3 } 11. Now right-click on the Page3 project and add a folder called Views. 12. Right-click on the Views folder and create a new Silverlight User Control called Page3.xaml. We won't bother creating a view model for this User Control as I did in the Page 1 and Page 2 projects, just for the sake of simplicity. Feel free to add one if you'd like though, and copy the code from one of those other projects. Right now those view models aren't really doing anything anyway...though they will in my next post. :-) 13. 
Now let's replace the xaml for Page3.xaml with the following: <dyn:DynamicPage x:Class="Page3.Views.Page3"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:dyn="clr-namespace:DynamicNavigation;assembly=DynamicNavigation"     mc:Ignorable="d"     d:DesignHeight="300" d:DesignWidth="400"     Style="{StaticResource PageStyle}">       <Grid x:Name="LayoutRoot">         <ScrollViewer x:Name="PageScrollViewer" Style="{StaticResource PageScrollViewerStyle}">             <StackPanel x:Name="ContentStackPanel">                 <TextBlock x:Name="HeaderText" Style="{StaticResource HeaderTextStyle}" Text="Page 3"/>                 <TextBlock x:Name="ContentText" Style="{StaticResource ContentTextStyle}" Text="Page 3 content"/>             </StackPanel>         </ScrollViewer>     </Grid>   </dyn:DynamicPage> 14. And in the code-behind remove the inheritance from UserControl, so it should look like this: namespace Page3.Views {     public partial class Page3     {         public Page3()         {             InitializeComponent();         }     } } One thing you may have noticed is that the base class for the last two User Controls we created is DynamicPage. Once again, we are using the infrastructure that David Poll created. 15. OK, a few last things. We need a link on our main page so that we can access our new page. In MainPage.xaml let's update our links to look like this: <StackPanel x:Name="LinksStackPanel" Style="{StaticResource LinksStackPanelStyle}">     <HyperlinkButton Style="{StaticResource LinkStyle}" NavigateUri="/Home" TargetName="ContentFrame" Content="home"/>     <Rectangle Style="{StaticResource DividerStyle}"/>     <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 1" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage1}"/>     <Rectangle Style="{StaticResource DividerStyle}"/>     <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 2" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage2}"/>     <Rectangle Style="{StaticResource DividerStyle}"/>     <HyperlinkButton Style="{StaticResource LinkStyle}" Content="page 3" Command="{Binding NavigateCommand}" CommandParameter="{Binding ModulePage3}"/> </StackPanel> 16. Next, we need to add the following at the bottom of MainPageViewModel in the ViewModels directory of our DynamicXAPLoading project: public ModuleEnum ModulePage3 {     get { return ModuleEnum.Page3; } } 17. And at last, we need to add a case for our new page to the switch statement in MainPageViewModel: switch (module) {     case ModuleEnum.Page1:         DownloadPackage("Page1.xap");         break;     case ModuleEnum.Page2:         DownloadPackage("Page2.xap");         break;     case ModuleEnum.Page3:         DownloadPackage("Page3.xap");         break;     default:         break; } Now fire up the application and click the page 1, page 2 and page 3 links. What you'll notice is that there is a 2-second delay the first time you hit each page. That is because I added the following line to the Navigate method in MainPageViewModel: Thread.Sleep(2000); // Simulate a 2 second initial loading delay The reason I put this in there is that I wanted to simulate a delay the first time the page loads (as the .xap is being downloaded from the server). 
You'll notice that after the first hit to the page though that there is no delay...that's because the .xap has already been downloaded. Feel free to comment out this 2-second delay, or remove it if you'd like. I just wanted to show how subsequent hits to the page would be quicker than the initial one. By the way, you may want to display some sort of BusyIndicator while the .xap is loading. I have that in my Task-It appplication, but for the sake of simplicity I did not include it here. In the future I'll blog about how I show and hide the BusyIndicator using events (I'm currently using the eventing framework in Prism for that, but may move to the one in the MVVM Light Toolkit some time soon). Whew, that felt like a lot of steps, but it does work quite nicely. As I mentioned earlier, I'll try to find ways to simplify the code (I'd like to get away from having things like hard-coded .xap file names) and will blog about it in the future if I find a better way. In my next post, I'll talk more about what is actually happening with the code that makes this all work.Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Problem with room/screen/menu controller in python game: old rooms are not removed from memory

    - by Jordan Magnuson
    I'm literally banging my head against a wall here (as in, yes, physically, at my current location, I am damaging my cranium). Basically, I've got a Python/Pygame game with some typical game "rooms", or "screens." EG title screen, high scores screen, and the actual game room. Something bad is happening when I switch between rooms: the old room (and its various items) are not removed from memory, or from my event listener. Not only that, but every time I go back to a certain room, my number of event listeners increases, as well as the RAM being consumed! (So if I go back and forth between the title screen and the "game room", for instance, the number of event listeners and the memory usage just keep going up and up. The main issue is that all the event listeners start to add up and really drain the CPU. I'm new to Python, and don't know if I'm doing something obviously wrong here, or what. I will love you so much if you can help me with this! Below is the relevant source code. Complete source code at http://www.necessarygames.com/my_games/betraveled/betraveled_src0328.zip MAIN.PY class RoomController(object): """Controls which room is currently active (eg Title Screen)""" def __init__(self, screen, ev_manager): self.room = None self.screen = screen self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.room = self.set_room(config.room) def set_room(self, room_const): #Unregister old room from ev_manager if self.room: self.room.ev_manager.unregister_listener(self.room) self.room = None #Set new room based on const if room_const == config.TITLE_SCREEN: return rooms.TitleScreen(self.screen, self.ev_manager) elif room_const == config.GAME_MODE_ROOM: return rooms.GameModeRoom(self.screen, self.ev_manager) elif room_const == config.GAME_ROOM: return rooms.GameRoom(self.screen, self.ev_manager) elif room_const == config.HIGH_SCORES_ROOM: return rooms.HighScoresRoom(self.screen, self.ev_manager) def notify(self, event): if isinstance(event, ChangeRoomRequest): if event.game_mode: config.game_mode = event.game_mode self.room = self.set_room(event.new_room) #Run game def main(): pygame.init() screen = pygame.display.set_mode(config.screen_size) ev_manager = EventManager() spinner = CPUSpinnerController(ev_manager) room_controller = RoomController(screen, ev_manager) pygame_event_controller = PyGameEventController(ev_manager) spinner.run() EVENT_MANAGER.PY class EventManager: #This object is responsible for coordinating most communication #between the Model, View, and Controller. def __init__(self): from weakref import WeakKeyDictionary self.last_listeners = {} self.listeners = WeakKeyDictionary() self.eventQueue= [] self.gui_app = None #---------------------------------------------------------------------- def register_listener(self, listener): self.listeners[listener] = 1 #---------------------------------------------------------------------- def unregister_listener(self, listener): if listener in self.listeners: del self.listeners[listener] #---------------------------------------------------------------------- def clear(self): del self.listeners[:] #---------------------------------------------------------------------- def post(self, event): # if isinstance(event, MouseButtonLeftEvent): # debug(event.name) #NOTE: copying the list like this before iterating over it, EVERY tick, is highly inefficient, #but currently has to be done because of how new listeners are added to the queue while it is running #(eg when popping cards from a deck). Should be changed. 
See: http://dr0id.homepage.bluewin.ch/pygame_tutorial08.html #and search for "Watch the iteration" print 'Number of listeners: ' + str(len(self.listeners)) for listener in list(self.listeners): #NOTE: If the weakref has died, it will be #automatically removed, so we don't have #to worry about it. listener.notify(event) def notify(self, event): pass #------------------------------------------------------------------------------ class PyGameEventController: """...""" def __init__(self, ev_manager): self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.input_freeze = False #---------------------------------------------------------------------- def notify(self, incoming_event): if isinstance(incoming_event, UserInputFreeze): self.input_freeze = True elif isinstance(incoming_event, UserInputUnFreeze): self.input_freeze = False elif isinstance(incoming_event, TickEvent) or isinstance(incoming_event, BoardCreationTick): #Share some time with other processes, so we don't hog the cpu pygame.time.wait(5) #Handle Pygame Events for event in pygame.event.get(): #If this event manager has an associated PGU GUI app, notify it of the event if self.ev_manager.gui_app: self.ev_manager.gui_app.event(event) #Standard event handling for everything else ev = None if event.type == QUIT: ev = QuitEvent() elif event.type == pygame.MOUSEBUTTONDOWN and not self.input_freeze: if event.button == 1: #Button 1 pos = pygame.mouse.get_pos() ev = MouseButtonLeftEvent(pos) elif event.type == pygame.MOUSEBUTTONDOWN and not self.input_freeze: if event.button == 2: #Button 2 pos = pygame.mouse.get_pos() ev = MouseButtonRightEvent(pos) elif event.type == pygame.MOUSEBUTTONUP and not self.input_freeze: if event.button == 2: #Button 2 Release pos = pygame.mouse.get_pos() ev = MouseButtonRightReleaseEvent(pos) elif event.type == pygame.MOUSEMOTION: pos = pygame.mouse.get_pos() ev = MouseMoveEvent(pos) #Post event to event manager if ev: self.ev_manager.post(ev) # elif isinstance(event, BoardCreationTick): # #Share some time with other processes, so we don't hog the cpu # pygame.time.wait(5) # # #If this event manager has an associated PGU GUI app, notify it of the event # if self.ev_manager.gui_app: # self.ev_manager.gui_app.event(event) #------------------------------------------------------------------------------ class CPUSpinnerController: def __init__(self, ev_manager): self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.clock = pygame.time.Clock() self.cumu_time = 0 self.keep_going = True #---------------------------------------------------------------------- def run(self): if not self.keep_going: raise Exception('dead spinner') while self.keep_going: time_passed = self.clock.tick() fps = self.clock.get_fps() self.cumu_time += time_passed self.ev_manager.post(TickEvent(time_passed, fps)) if self.cumu_time >= 1000: self.cumu_time = 0 self.ev_manager.post(SecondEvent(fps=fps)) pygame.quit() #---------------------------------------------------------------------- def notify(self, event): if isinstance(event, QuitEvent): #this will stop the while loop from running self.keep_going = False EXAMPLE CLASS USING EVENT MANAGER class Timer(object): def __init__(self, ev_manager, time_left): self.ev_manager = ev_manager self.ev_manager.register_listener(self) self.time_left = time_left self.paused = False def __repr__(self): return str(self.time_left) def pause(self): self.paused = True def unpause(self): self.paused = False def notify(self, event): #Pause Event if isinstance(event, Pause): 
self.pause() #Unpause Event elif isinstance(event, Unpause): self.unpause() #Second Event elif isinstance(event, SecondEvent): if not self.paused: self.time_left -= 1
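One direction worth trying (offered as a hedged sketch, not a confirmed diagnosis) is to make tear-down explicit instead of relying on the WeakKeyDictionary: have every room remember the listeners it and its children (such as Timer objects) register with the EventManager, and unregister them all before the RoomController swaps rooms. Class and method names below are invented for illustration:

class Room(object):
    """Base room that can tear itself down explicitly."""
    def __init__(self, ev_manager):
        self.ev_manager = ev_manager
        self.owned_listeners = []
        self.ev_manager.register_listener(self)
        self.owned_listeners.append(self)

    def own(self, listener):
        """Remember a child listener (e.g. a Timer) so close() can unregister it."""
        self.owned_listeners.append(listener)
        return listener

    def close(self):
        """Unregister this room and everything it owns from the event manager."""
        for listener in self.owned_listeners:
            self.ev_manager.unregister_listener(listener)
        del self.owned_listeners[:]

    def notify(self, event):
        pass

# In a concrete room, children are created through own(), e.g.:
#     self.timer = self.own(Timer(self.ev_manager, 60))
#
# In RoomController.set_room, the old room is closed before the new one is built:
#     if self.room:
#         self.room.close()
#         self.room = None

With this pattern the listener count should stay flat when switching back and forth between rooms; if it still climbs, something else (for example objects captured by pending events) is keeping references alive.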

    Read the article

  • VS 2012 Code Review &ndash; Before Check In OR After Check In?

    - by Tarun Arora
"Is Code Review Important and Effective?" There is a consensus across the industry that code review is an effective and practical way to catch code inconsistencies and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are: bugs are found faster; developers are forced to write readable code (code that can be read without explanation or introduction!); optimization methods/tricks/productive programs spread faster; programmers as specialists "evolve" faster; and it's fun. "Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections." Wikipedia Nowhere does the definition mention whether it's better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the approaches independently.
Code Review Before Check In or Code Review After Check In?
Approach 1 – Code Review before Check in
Developer completes the code and feels the code quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality. Image 1 – code review before check in
Pros: Everything that gets committed to source control is reviewed. Minimizes the chances of smelly code making its way into the code base. Decreases the cost of fixing bugs - remember, the earlier you find them, the lesser the pain in fixing them.
Cons: Development code freeze - since the changes aren't in source control yet, further development can only be done offline. The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards. Inconsistent! Cumbersome to track the actual code review process.  Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.
Approach 2 – Code Review after Check in
Developer checks in, and random code reviews are performed on the checked-in code. Image 2 – Code review after check in
Pros: The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct the developer to ensure zero FxCop, StyleCop and static code analysis warnings before check-in. Code is cleaner and smell free even before the code review. No offline development; developers can continue to develop against source control.
Cons: Bad code can easily make its way into the code base. Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.
Approach 3 – Hybrid Approach
The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient. Image 3 – Hybrid Approach 1. Code review high impact check-ins.
It is not possible to review everything; by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build either. 2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate the manual code review of individuals who keep making it to this list of shame more often. 3. During Merge. I would prefer eliminating some of the other code issues during merge from the Main branch to the release branch. In a scrum project this is still easier because cherry picking the merges is a possibility and the size of code being reviewed is still limited. Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build, then ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put till the review is complete. What do the experts say? So I asked a few experts what they thought of "Code Review quality gate before Checking in code?" Terje Sandstrom | Microsoft ALM MVP You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even been through a CI build, and a green CI build being the main criteria for going further, f.e. to the review state. I would not like code laying around with no checkin's. Having a requirement that code is checked in small pieces, 4-8 hours work max, and AT LEAST daily checkins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release.  But that would all be on checked-in code.  Branching is absolutely one way to ease the pain.   Another way we are using is automatic quality builds, running metrics, coverage, static code analysis.  Unfortunately it takes some time, would be great to be on CI's – but…., so it's done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews.  If a person seems to be very popular on these top 10 lists, we subject every check in from that person to a review for a period. That normally helps.   None of the clients I have can afford to have every checkin reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises. David V. Corbin | Visual Studio ALM Ranger I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics)   - The "pre-checkin" is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
- The “post-checkin” may very well not be at the changeset level, but correlates to a review at the “user story” level.   Again, this depends on team dynamics in play…. Robert MacLean | Microsoft ALM MVP I do not think there is no right answer for the industry as a whole. In short the question is why do you do reviews? Your question implies risk mitigation, so in low risk areas you can get away with it after check in while in high risk you need to do it before check in. An example is those new to a team or juniors need it much earlier (maybe that is before checkin, maybe that is soon after) than seniors who have shipped twenty sprints on the team. Abhimanyu Singhal | Visual Studio ALM Ranger Depends on per scenario basis. We recommend post check-in reviews when: 1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all. 2. We need to trace all changes and track history. 3. We have a code promotion strategy/process in place. For risk mitigation, post checkin code can be promoted to Accepted branches. Or can be rejected. Pre Checkin Reviews are used when 1. There is a high risk factor associated 2. Reviewers are generally (most of times) have immediate availability. 3. Team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project. Thomas Schissler | Visual Studio ALM Ranger This is an interesting discussion, I’m right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well, I’d like to point out. 1.) If you do reviews per check in this is not very practical as a hard rule because this will disturb the flow of the team very often or it will lead to reduce the checkin frequency of the devs which I would not accept. 2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associate with the PBI, but then you might review code which has been changed with a later checkin and the dev maybe has already fixed the issue. Or you review the diff of the latest changeset of the PBI with the first but then you might also review changes of other PBIs. Jakob Leander | Sr. Director, Avanade In my experience, manual code review: 1. Does not get done and at the very least does not get redone after changes (regardless of intentions at start of project) 2. When a project actually do it, they often do not do it right away = errors pile up 3. Requires a lot of time discussing/defining the standard and for the team to learn it However code review is very important since e.g. even small memory leaks in a high volume web solution have big consequences In the last years I have advocated following approach for code review - Architects up front do “at least one best practice example” of each type of component and tell the team. Copy from this one. This should include error handling, logging, security etc. - Dev lead on project continuously browse code to validate that the best practices are used. Especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated it is unlikely to “go bad” even during later code changes Agree with customer to rely on static code analysis from Visual Studio as the one and only coding standard. 
This has huge benefits:
- You can easily tweak it to reach the level you desire together with the customer.
- It is easy to measure for both developers and management.
- It is 100% consistent across the code base.
- It gets validated all the time, so you never end up getting hammered by a customer review at the end.
- It is easy to tell the developer that you do not want code back unless it has zero errors = minimized communication.
You need to track this at least during nightly builds and make sure the team sees the total number of issues. Do not allow the number of issues to grow uncontrolled. On the projects I run I require code analysis to have been run on the code before check-in (check-in rule). This means:
- You have to have a clean compile (or CA won't run), so as an extra benefit there are very few broken builds.
- You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues, which you REALLY do not want in your app.
Tip: place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS). Some may argue that CA is not as good as manual inspection, but since manual inspection in reality suffers from the three issues listed at the start, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.
Tirthankar Dutta | Director, Avanade
I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire codebase. Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code; it scales very well with the need to review less, by containment and by imposing architectural restrictions to emphasise the design.
Bruno Capuano | Microsoft ALM MVP
For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx
David Jobling | Global Sr. Director, Avanade
Peer review is the only way to scale, and it's a great practice for everyone in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention accordingly.
Mikkel Toudal Kristiansen | Manager, Avanade
If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution.
Jesse Houwing | Visual Studio ALM Ranger
I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but the slides are in English): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html
I'd like to add a few more points:
- Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer reviews, there's no need to enforce a before-check-in policy. Peer programming and regular feedback during development can take care of most of the review requirements as long as the team isn't under stress.
- Under stress, enforce pre-check-in reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues get created or pile up.
- Use tools to catch the most common errors; Code Analysis/FxCop was already mentioned. HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of third-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule; it's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
- If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad-quality code doesn't make it into the important branch.
So my philosophy:
- Use tooling as much as possible.
- Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click "suppress all" to ignore the warnings.
- Under stress, tighten the process; it's under stress that the problems of late reviews really surface.
- Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in/post-check-in doesn't really matter, as long as the review is done before the code is released. It will just be much more expensive to fix any review outcomes the later you find them.
---
I would love to hear what you think!
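To illustrate the Code Analysis tip mentioned above (a few rules promoted from warnings to errors, with the rule set checked in alongside the solution so its path stays relative when branching), here is a minimal sketch of a Visual Studio rule set file. This is an editorial example assuming the standard .ruleset format, not a configuration taken from the article; the two rule IDs shown are only illustrative picks for dispose-related issues.

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Example .ruleset kept inside the solution folder so branching keeps the relative path valid -->
    <RuleSet Name="Team Rules" Description="Code Analysis rules promoted to errors" ToolsVersion="10.0">
      <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
        <!-- CA2000: Dispose objects before losing scope - fail the build instead of warning -->
        <Rule Id="CA2000" Action="Error" />
        <!-- CA1001: Types that own disposable fields should be disposable -->
        <Rule Id="CA1001" Action="Error" />
      </Rules>
    </RuleSet>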

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
Authored by Vijay Aggarwal and Hichem Sellami.
A typical data warehouse contains a Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information, including amounts; for example, Order Amount, Invoice Amount, etc. With the truly global nature of business nowadays, end users want to view reports in their own currency or in a global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts at converted rates, either by pre-storing them or by doing on-the-fly conversions while displaying the reports to the users.
Source Systems
OBIA caters to various source systems like EBS, PSFT, Sebl, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting them to the OLTP users. For example, EBS stores conversion rates between currencies classified by rate type, such as Corporate rate, Spot rate, Period rate, etc., while Siebel stores exchange rates by rate types like Daily. EBS/Fusion store the conversion rates for each day, whereas PSFT/Siebel store them for a range of days. PSFT has a Rate Multiplication Factor and a Rate Division Factor and we need to calculate the rate from them, whereas the other source systems store the currency exchange rate directly.
OBIA Design
Consolidating data from various disparate source systems poses the challenge of conforming the various currencies, rate types, exchange rates, etc., and of designing the best way to present the amounts to the users without affecting performance. When consolidating the data for reporting in OBIA, we have designed mechanisms in the Common Dimension to allow users to report based on their required currencies. OBIA facts store amounts in various currencies:
Document Currency: the currency of the actual transaction. For a multinational company, this can be in various currencies.
Local Currency: the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company.
Global Currencies: OBIA provides five Global Currencies. Three are used across all modules; the last two are for CRM only. A Global Currency is very useful when creating reports where the data is viewed enterprise-wide. For example, a US-based multinational would want to see the reports in USD, so the company will choose USD as one of its global currencies. OBIA allows users to define up to five global currencies during the initial implementation.
The term Currency Preference is used to designate the set of values Document Currency, Local Currency, Global Currency 1, Global Currency 2 and Global Currency 3, which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5, which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency as the currency preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt, so it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section Changing Currency Preferences in the Dashboard.
Design Logic
When extracting the fact data, the OOTB mappings extract and load the document amount and the local amount into the target tables. They also load the exchange rates required to convert the document amount into the corresponding global amounts.
If the source system only provides the document amount of the transaction, the extract mapping does a lookup to get the local currency code and the local exchange rate. The load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates.
The lookup of exchange rates is done via the Exchange Rate Dimension, provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables, W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G, are used to provide the lookups and conversions between currencies. The data is loaded from the source system's Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. W_GLOBAL_EXCH_RATE_G, on the other hand, stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide to use one or both tables to do the conversion. The currency design in OBIA also taps into the MLS and Domain architecture, allowing users to map the currencies to a universal Domain at implementation time. This is especially important for companies deploying and using OBIA with multiple source adapters.
Some Gotchas to Look For
It is necessary to think through the currencies during the initial implementation.
1) Identify the various types of currencies that are used by your business. Understand what your Local (or Base) and Document currency will be. Identify the various global currencies in which your users will want to look at the reports; this will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, may cause full data loads and hence lost time.
2) If you have a multi-source system, make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager have the corresponding source-specific counterparts. In other words, make sure that for every DW-specific value chosen for Currency Code or Rate Type, a source Domain mapping has already been done.
Technical Section
This section briefly describes the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the currency rate information, as explained in the previous sections: W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G. W_EXCH_RATE_G stores all the currency conversions present in the source system and captures data for a date range. W_GLOBAL_EXCH_RATE_G has Global Currency conversions stored at a daily level; the challenge here is to store all five Global Currency exchange rates in a single record for each From Currency. Let's dig further into the source system extraction logic for each of these tables and understand the flow briefly.
EBS
In EBS, currency data is stored in the GL_DAILY_RATES table. As the name indicates, GL_DAILY_RATES has data at a daily level. In our warehouse, however, we store the data with a date range and insert a new range record only when the exchange rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process:
Step 0 (incremental flow only): clean up the data in W_EXCH_RATE_G. Delete the records which have a Start Date greater than the minimum conversion date, and update the End Date of the existing records.
Step 1: Compress the daily data from the GL_DAILY_RATES table into range records. The incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter.
Step 2: Generate the Previous Rate, Previous Date and Next Date for each daily record from the OLTP.
Step 3: Filter out the records whose conversion rate is the same as the previous rate, or whose conversion date lies within a single-day range. Mark the records as 'Keep' or 'Filter' and also determine the final End Date for the single range record (unique combination of From Date, To Date, Rate and Conversion Date).
Step 4: Filter the records marked as 'Filter' in the INFA map.
The steps above load W_EXCH_RATE_GS; Step 0 updates/deletes W_EXCH_RATE_G directly. The SIL map then inserts/updates the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES into range records in W_EXCH_RATE_G.
We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G, a table where we store data at a daily granularity. However, we need to pivot the data, because data present in multiple rows in the source tables needs to be stored in different columns of the same row in the DW. We use GROUP BY and CASE logic to achieve this.
Fusion
Fusion has extraction logic very similar to EBS. The only difference is that the cleanup logic mentioned in Step 0 above does not use the $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring in all the exchange rates in the incremental run as well and do the cleanup; the SIL then takes care of inserts/updates accordingly.
PeopleSoft
PeopleSoft does not have From Date and To Date explicitly in the source tables. Let's look at an example (note that this is achieved from PS1 onwards only):
1 Jan 2010 – USD to INR – 45
31 Jan 2010 – USD to INR – 46
PSFT stores records in the above fashion. This means that the exchange rate of 45 for USD to INR is applicable from 1 Jan 2010 to 30 Jan 2010, and we need to store the data in this fashion in the DW. PSFT also stores the exchange rate as RATE_MULT and RATE_DIV, and we need to compute RATE_MULT/RATE_DIV to get the correct exchange rate. We generate the From Date and To Date while extracting data from the source, and this has certain assumptions: if a record gets updated or inserted in the source, it will be extracted in the incremental run. Also, if this updated/inserted record lies between other dates, we extract the preceding and succeeding records (based on dates) of this record as well. This is required because we need to generate range records and we now have three records whose ranges have changed. Taking the same example as above, if a new record gets inserted on 15 Jan 2010, the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan, and 31 Jan to the next available date. Even though the 1 Jan and 31 Jan records have not changed, we still extract them because their ranges are affected. Similar logic is used for the Global Exchange Rate extraction: we create the range records and load them into a temporary table, then join to the Day Dimension, create individual records, and pivot the data to get the five Global Exchange Rates for each From Currency, Date and Rate Type.
Siebel
Siebel facts depend heavily on Global Exchange Rates, and almost none of them really use individual exchange rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from the PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is no longer needed for present and future currency exchanges.
However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Because of this, there are multiple extraction flows in SEBL, i.e. EUR to EMU, EUR to Non-EMU, EUR to DMC and so on. We load W_EXCH_RATE_G with this data through multiple flows; this has been kept the same as in previous versions of OBIA. W_GLOBAL_EXCH_RATE_G, being a new table, does not have such needs. However, like PSFT, SEBL does not have From Date and To Date columns in the source tables, so we use extraction logic similar to that explained in the PeopleSoft section.
What if all five configured Global Currencies are the same?
As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in the W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves pivoting data from multiple rows into a single row with five Global Exchange Rates in five columns; as mentioned above, we use CASE and GROUP BY logic to achieve this. This approach poses a unique problem when all five chosen Global Currencies are the same. For example, if the user configures all five Global Currencies as 'USD', the extract logic will not be able to generate a record for From Currency = USD, because not all source systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case: they generate a record with Conversion Rate = 1.
Reusable Lookups
Before PS1, we had a Mapplet for currency conversions. In PS1, we only have reusable lookups: LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in the various fact mappings. Anyone who wants to do a lookup on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G must use these lookups; a direct join or lookup on the tables might lead to wrong data being returned.
Changing Currency Preferences in the Dashboard
In the 7.9.6.x series, all amount metrics in OBIA showed the Global1 amount, and customers needed to change the metric definitions to show them in another currency preference. Project Analytics, though, has supported currency preferences since the 7.9.6 release, and published a Tech Note for customers of other modules on adding toggling between currency preferences to the solution.
List of Currency Preferences
Starting from the 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the XML file userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under: <home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml
This file contains the list of currency preferences, like "Local Currency", "Global Currency 1", ..., which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic.
In Static mode, all users will see the full list as defined in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build some security into the RPD so that the list of currency preferences is based on the user's roles. BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles.
Adding Currency to an Amount Field
When the user selects one of the items from the currency prompt, all the amounts on that page will show in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data will show in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields will show in the currency of the Business Unit selected in the BU filter of the same page. If no particular Business Unit is selected in that filter, and the data selected by the query contains amounts in more than one currency (for example, one BU has USD as its functional currency and another has EUR), then subtotals will not be available (USD and EUR amounts cannot be added in one field), and depending on the setup (see the next paragraph), the user may receive an error.
There are two ways to add the currency field to an amount metric:
- In the form of a currency code, like USD or EUR. For this, the user needs to add the field "Apps Common Currency Code" to the report. This field is in every subject area, usually under the table "Currency Tag" or "Currency Code".
- In the form of a currency symbol ($ for USD, € for EUR, ...). For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in the Column Actions drop-down list. Typically this column should be the "BI Common Currency Code" available in every subject area. Select the Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom under "Treat Number As". Enter the following syntax under Custom Number Format: [$:currencyTagColumn=Subjectarea.table.column], where the column is the "BI Common Currency Code" defined to take the currency code value based on the currency preference chosen by the user in the currency preference prompt.
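The daily-to-range compression described in the EBS section above boils down to opening a new range record only when the rate (or the currency/rate-type key) changes from one day to the next. As a rough, hedged sketch of that idea only (this is not the OOTB Informatica mapping; class and field names here are invented for illustration), the logic could look like this in Java:

    import java.time.LocalDate;
    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only: compress daily exchange-rate records into date ranges,
    // starting a new range only when the rate changes for a given
    // (from currency, to currency, rate type) combination.
    class DailyRate {
        final LocalDate date;
        final String from, to, rateType;
        final double rate;
        DailyRate(LocalDate date, String from, String to, String rateType, double rate) {
            this.date = date; this.from = from; this.to = to;
            this.rateType = rateType; this.rate = rate;
        }
    }

    class RangeRate {
        LocalDate startDate, endDate;
        final String from, to, rateType;
        final double rate;
        RangeRate(DailyRate d) {
            this.startDate = d.date; this.endDate = d.date;
            this.from = d.from; this.to = d.to;
            this.rateType = d.rateType; this.rate = d.rate;
        }
    }

    public class RateRangeCompressor {
        // Input must be sorted by (from, to, rateType, date).
        public static List<RangeRate> compress(List<DailyRate> daily) {
            List<RangeRate> ranges = new ArrayList<>();
            RangeRate current = null;
            for (DailyRate d : daily) {
                boolean sameKey = current != null
                        && current.from.equals(d.from)
                        && current.to.equals(d.to)
                        && current.rateType.equals(d.rateType);
                if (sameKey && current.rate == d.rate) {
                    current.endDate = d.date;       // same rate: extend the open range
                } else {
                    if (current != null) ranges.add(current);
                    current = new RangeRate(d);     // key or rate changed: start a new range
                }
            }
            if (current != null) ranges.add(current);
            return ranges;
        }
    }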

    Read the article

  • Why do I get a null pointer exception from TabWidget?

    - by rushinge
I'm writing an Android program in which I have an activity that uses tabs.

The Activity:

    public class UnitActivity extends TabActivity {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            TabHost tabHost = getTabHost();
            TabSpec spec;
            Resources res = getResources();

            LayoutInflater.from(this).inflate(R.layout.unit_view, tabHost.getTabContentView(), true);

            spec = tabHost.newTabSpec("controls");
            spec.setIndicator("Control", res.getDrawable(R.drawable.ic_tab_equalizer));
            spec.setContent(R.id.txtview);
            tabHost.addTab(spec);
        }
    }

The XML referenced by R.layout.unit_view:

    <?xml version="1.0" encoding="utf-8"?>
    <TabHost xmlns:android="http://schemas.android.com/apk/res/android"
        android:id="@android:id/tabhost"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent">
        <LinearLayout
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:padding="5dp">
            <TabWidget
                android:id="@android:id/tabs"
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"/>
            <FrameLayout
                android:id="@android:id/tabcontent"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:padding="5dp">
                <TextView
                    android:id="@+id/txtview"
                    android:layout_width="fill_parent"
                    android:layout_height="fill_parent"
                    android:gravity="bottom"
                    android:text="nullpointer this!" />
            </FrameLayout>
        </LinearLayout>
    </TabHost>

As far as I can see, I'm doing the same thing I see in the Tabs1 API sample from the Android SDK. I've tried getLayoutInflater() instead of LayoutInflater.from(this) with the same result. If I replace the LayoutInflater line with setContentView(R.layout.unit_view), my program doesn't crash with a null pointer exception, but my content is completely blank and empty; I get the tab and that's it. I've checked to make sure R.layout.unit_view and tabHost are not null when the LayoutInflater line runs, and they seem to be fine; they're definitely not null. I've also checked that LayoutInflater.from(this) returns a valid layout inflater object, and it does.
The logcat output indicating the error says:

    E/AndroidRuntime( 541): java.lang.NullPointerException
    E/AndroidRuntime( 541): at android.widget.TabWidget.dispatchDraw(TabWidget.java:206)
    E/AndroidRuntime( 541): at android.view.ViewGroup.drawChild(ViewGroup.java:1529)
    E/AndroidRuntime( 541): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258)
    E/AndroidRuntime( 541): at android.view.ViewGroup.drawChild(ViewGroup.java:1529)
    E/AndroidRuntime( 541): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258)
    E/AndroidRuntime( 541): at android.view.ViewGroup.drawChild(ViewGroup.java:1529)
    E/AndroidRuntime( 541): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258)
    E/AndroidRuntime( 541): at android.view.ViewGroup.drawChild(ViewGroup.java:1529)
    E/AndroidRuntime( 541): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258)
    E/AndroidRuntime( 541): at android.view.ViewGroup.drawChild(ViewGroup.java:1529)
    E/AndroidRuntime( 541): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258)
    E/AndroidRuntime( 541): at android.view.ViewGroup.drawChild(ViewGroup.java:1529)
    E/AndroidRuntime( 541): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258)
    E/AndroidRuntime( 541): at android.view.View.draw(View.java:6538)
    E/AndroidRuntime( 541): at android.widget.FrameLayout.draw(FrameLayout.java:352)
    E/AndroidRuntime( 541): at android.view.ViewGroup.drawChild(ViewGroup.java:1531)
    E/AndroidRuntime( 541): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258)
    E/AndroidRuntime( 541): at android.view.ViewGroup.drawChild(ViewGroup.java:1529)
    E/AndroidRuntime( 541): at android.view.ViewGroup.dispatchDraw(ViewGroup.java:1258)
    E/AndroidRuntime( 541): at android.view.View.draw(View.java:6538)
    E/AndroidRuntime( 541): at android.widget.FrameLayout.draw(FrameLayout.java:352)
    E/AndroidRuntime( 541): at com.android.internal.policy.impl.PhoneWindow$DecorView.draw(PhoneWindow.java:1830)
    E/AndroidRuntime( 541): at android.view.ViewRoot.draw(ViewRoot.java:1349)
    E/AndroidRuntime( 541): at android.view.ViewRoot.performTraversals(ViewRoot.java:1114)
    E/AndroidRuntime( 541): at android.view.ViewRoot.handleMessage(ViewRoot.java:1633)
    E/AndroidRuntime( 541): at android.os.Handler.dispatchMessage(Handler.java:99)
    E/AndroidRuntime( 541): at android.os.Looper.loop(Looper.java:123)
    E/AndroidRuntime( 541): at android.app.ActivityThread.main(ActivityThread.java:4363)
    E/AndroidRuntime( 541): at java.lang.reflect.Method.invokeNative(Native Method)
    E/AndroidRuntime( 541): at java.lang.reflect.Method.invoke(Method.java:521)
    E/AndroidRuntime( 541): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
    E/AndroidRuntime( 541): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
    E/AndroidRuntime( 541): at dalvik.system.NativeStart.main(Native Method)
    I/Process ( 61): Sending signal. PID: 541 SIG: 3
    I/dalvikvm( 541): threadid=7: reacting to signal 3
    I/dalvikvm( 541): Wrote stack trace to '/data/anr/traces.txt'

Does anybody have any idea how I can get this content into a tab without crashing my application? My actual program is more complex and has more than one tab, but I simplified it down to this in an attempt to find out why it's crashing; it still crashes and I don't know why. If I don't use the LayoutInflater, my program doesn't crash, but I don't get any content either, just tabs.
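A hedged suggestion that is not part of the original question (the root cause isn't confirmed in this excerpt): a NullPointerException in TabWidget.dispatchDraw is typically what you see when a TabWidget with no tabs ends up being drawn. Since unit_view.xml declares its own TabHost/TabWidget and is inflated into tabHost.getTabContentView(), a second, empty TabWidget gets nested inside the tab content. One possible direction is to keep only the tab content in the inflated layout (a hypothetical unit_content.xml containing just the TextView, without the TabHost/TabWidget wrapper) and let the TabActivity own the tabs, roughly like this:

    import android.app.TabActivity;
    import android.os.Bundle;
    import android.view.LayoutInflater;
    import android.widget.TabHost;

    public class UnitActivity extends TabActivity {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            TabHost tabHost = getTabHost();

            // unit_content.xml is hypothetical: it would contain only the TextView
            // with android:id="@+id/txtview", with no TabHost/TabWidget around it,
            // so no empty TabWidget is nested inside the tab content.
            LayoutInflater.from(this).inflate(R.layout.unit_content,
                    tabHost.getTabContentView(), true);

            TabHost.TabSpec spec = tabHost.newTabSpec("controls");
            spec.setIndicator("Control",
                    getResources().getDrawable(R.drawable.ic_tab_equalizer));
            spec.setContent(R.id.txtview);
            tabHost.addTab(spec);
        }
    }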

    Read the article

  • Injection with google guice does not work anymore after obfuscation with proguard

    - by sme
Has anyone ever tried to combine the use of Google Guice with obfuscation (in particular ProGuard)? The obfuscated version of my code does not work with Google Guice, as Guice complains about missing type parameters. This information seems to be erased by the transformation step that ProGuard does, even when the relevant classes are excluded from the obfuscation. The stack trace looks like this:

    com.google.inject.CreationException: Guice creation errors:
    1) Cannot inject a Provider that has no type parameter
      while locating com.google.inject.Provider
        for parameter 0 at de.repower.lvs.client.admin.user.administration.AdminUserCommonPanel.setPasswordPanelProvider(SourceFile:499)
      at de.repower.lvs.client.admin.user.administration.AdminUserCommonPanel.setPasswordPanelProvider(SourceFile:499)
      while locating de.repower.lvs.client.admin.user.administration.AdminUserCommonPanel
        for parameter 0 at de.repower.lvs.client.admin.user.administration.b.k.setParentPanel(SourceFile:65)
      at de.repower.lvs.client.admin.user.administration.b.k.setParentPanel(SourceFile:65)
      at de.repower.lvs.client.admin.user.administration.o.a(SourceFile:38)
    2) Cannot inject a Provider that has no type parameter
      while locating com.google.inject.Provider
        for parameter 0 at de.repower.lvs.client.admin.user.administration.AdminUserCommonPanel.setWindTurbineAccessGroupProvider(SourceFile:509)
      at de.repower.lvs.client.admin.user.administration.AdminUserCommonPanel.setWindTurbineAccessGroupProvider(SourceFile:509)
      while locating de.repower.lvs.client.admin.user.administration.AdminUserCommonPanel
        for parameter 0 at de.repower.lvs.client.admin.user.administration.b.k.setParentPanel(SourceFile:65)
      at de.repower.lvs.client.admin.user.administration.b.k.setParentPanel(SourceFile:65)
      at de.repower.lvs.client.admin.user.administration.o.a(SourceFile:38)
    2 errors
    at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:354)
    at com.google.inject.InjectorBuilder.initializeStatically(InjectorBuilder.java:152)
    at com.google.inject.InjectorBuilder.build(InjectorBuilder.java:105)
    at com.google.inject.Guice.createInjector(Guice.java:92)
    at com.google.inject.Guice.createInjector(Guice.java:69)
    at com.google.inject.Guice.createInjector(Guice.java:59)

I tried to create a small example (without using Guice) that seems to reproduce the problem:

    package de.repower.common;

    import java.lang.reflect.Method;
    import java.lang.reflect.ParameterizedType;
    import java.lang.reflect.Type;

    class SomeClass<S> {
    }

    public class ParameterizedTypeTest {

        public void someMethod(SomeClass<Integer> param) {
            System.out.println("value: " + param);
            System.setProperty("my.dummmy.property", "hallo");
        }

        private static void checkParameterizedMethod(ParameterizedTypeTest testObject) {
            System.out.println("checking parameterized method ...");
            Method[] methods = testObject.getClass().getMethods();
            for (Method method : methods) {
                if (method.getName().equals("someMethod")) {
                    System.out.println("Found method " + method.getName());
                    Type[] types = method.getGenericParameterTypes();
                    Type parameterType = types[0];
                    if (parameterType instanceof ParameterizedType) {
                        Type parameterizedType = ((ParameterizedType) parameterType).getActualTypeArguments()[0];
                        System.out.println("Parameter: " + parameterizedType);
                        System.out.println("Class: " + ((Class) parameterizedType).getName());
                    } else {
                        System.out.println("Failed: type is not an instance of ParameterizedType");
                    }
                }
            }
        }

        public static void main(String[] args) {
            System.out.println("Starting ...");
            try {
                ParameterizedTypeTest someInstance = new ParameterizedTypeTest();
                checkParameterizedMethod(someInstance);
            } catch (SecurityException e) {
                e.printStackTrace();
            }
        }
    }

If you run this code unobfuscated, the output looks like this:

    Starting ...
    checking parameterized method ...
    Found method someMethod
    Parameter: class java.lang.Integer
    Class: java.lang.Integer

But running the version obfuscated with ProGuard yields:

    Starting ...
    checking parameterized method ...
    Found method someMethod
    Failed: type is not an instance of ParameterizedType

These are the options I used for obfuscation:

    -injars classes_eclipse\methodTest.jar
    -outjars classes_eclipse\methodTestObfuscated.jar
    -libraryjars 'C:\Program Files\Java\jre6\lib\rt.jar'
    -dontskipnonpubliclibraryclasses
    -dontskipnonpubliclibraryclassmembers
    -dontshrink
    -printusage classes_eclipse\shrink.txt
    -dontoptimize
    -dontpreverify
    -verbose

    -keep class **.ParameterizedTypeTest.class { <fields>; <methods>; }
    -keep class ** { <fields>; <methods>; }

    # Keep - Applications. Keep all application classes, along with their 'main' methods.
    -keepclasseswithmembers public class * { public static void main(java.lang.String[]); }

    # Also keep - Enumerations. Keep the special static methods that are required in enumeration classes.
    -keepclassmembers enum * { public static **[] values(); public static ** valueOf(java.lang.String); }

    # Also keep - Database drivers. Keep all implementations of java.sql.Driver.
    -keep class * extends java.sql.Driver

    # Also keep - Swing UI L&F. Keep all extensions of javax.swing.plaf.ComponentUI, along with the special 'createUI' method.
    -keep class * extends javax.swing.plaf.ComponentUI { public static javax.swing.plaf.ComponentUI createUI(javax.swing.JComponent); }

    # Keep names - Native method names. Keep all native class/method names.
    -keepclasseswithmembers,allowshrinking class * { native <methods>; }

    # Keep names - _class method names. Keep all .class method names. This may be
    # useful for libraries that will be obfuscated again with different obfuscators.
    -keepclassmembers,allowshrinking class * { java.lang.Class class$(java.lang.String); java.lang.Class class$(java.lang.String,boolean); }

Does anyone have an idea of how to solve this (apart from the obvious workaround of putting the relevant files into a separate jar and not obfuscating it)?

Best regards,
Stefan
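A hedged pointer that is not part of the original question: ProGuard removes optional class-file attributes, including the generic Signature attribute, unless it is told to keep them, and Guice relies on those signatures (and on runtime annotations such as @Inject) when resolving injections. Assuming that is the cause here, adding something like the following to the configuration usually preserves the information that is being lost; the exact attribute list needed may vary per project.

    # Keep generic type signatures (needed to see Provider<T> etc. via reflection),
    # plus runtime annotations and enclosing-class information used by DI frameworks.
    -keepattributes Signature,*Annotation*,InnerClasses,EnclosingMethod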

    Read the article
