Search Results

Search found 20265 results on 811 pages for 'oracle bi 11g scorecards dashboards strategy execution'.

Page 521/811

  • Tip #19 Module Private Visibility in OSGi

    - by ByronNevins
    I hate public and protected methods and classes. It requires so much work to change them in a huge project like GlassFish, not to mention that you may well have to support those APIs forever. They are highly overused in GlassFish. In fact I'd bet that > 95% of classes are marked as public for no good reason. It's just (bad) habit is my guess.

    private and default visibility (I call it package-private) is easier to maintain. It is much, much easier to change such classes and methods around. If you have ANY public method or public class in GlassFish you'll need to grep through a tremendous amount of source code to find all callers, and even that isn't theoretically reliable. What if a caller is using reflection to access public methods? You may never find such usages. If you have package-private methods, it's easy: simply grep through all the code in that one package. As long as that package compiles OK you're all set. There can't be any compile errors anywhere else, so it's a waste of time to even look around or build the "outside" world.

    So you may be thinking: "Aha! I'll just make my module have one giant package with all the java files. Then I can use the default visibility and maintenance will be much easier." But there's a problem. You are wasting a very nice feature of Java -- organizing code into separate packages, which also makes the code much more encapsulated. Unfortunately, to share code between the packages you have no choice but to declare public visibility. What happens in practice is that a module ends up having tons of public classes and methods that are used exclusively inside the module. Which finally brings me to the point of this blog:

    If Only There Was A Module-Private Visibility Available

    Well, surprise! There is such a mechanism -- if your project is running under OSGi, that is. Like GlassFish does! With this mechanism you can easily add another level of visibility by telling OSGi exactly which publics you want to be exposed outside of the module. You get the best of both worlds:

    - Better encapsulation of your code, so that maintenance is easier and productivity is increased.
    - Usage of public visibility inside the module, so that you can still encapsulate within the module using packages.

    How I do this in GlassFish:

    1. Carefully plan out at least one package that will contain "true" publics. This is the package that will be exported by OSGi. I recommend just one package.
    2. Tell OSGi to use it -- edit osgi.bundle like so:

       -exportcontents: org.glassfish.mymodule.truepublics; version=${project.osgi.version}

    Now all publics declared in any other packages will be visible module-wide but not outside the module. There is one caveat: accessing "module-private" items outside of the module is controlled at run time, not compile time. The compiler has no clue that a public in a dependent module isn't really public; it will happily compile it. At runtime you will definitely see fireworks. The good news is that you don't have to wait for the code path that tries to use the "module-private" items to fire: OSGi will complain loudly when that module gets loaded, and it will refuse to load it.

    You will see an error like this: remote failure: Error while loading FOO: Exception while adding the new configuration : Error occurred during deployment: Exception while loading the app : org.osgi.framework.BundleException: Unresolved constraint in bundle com.oracle.glassfish.miscreant.code [115]: Unable to resolve 115.0: missing requirement [115.0] osgi.wiring.package; (osgi.wiring.package=org.glassfish.mymodule.unexported). Please see server.log for more details. That is, if you accidentally change code in module B to use a public that is really a "module-private" in module A, you will see the error immediately when you try to test whatever you were changing in module B.
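    To make the mechanics concrete, here is a minimal two-file sketch of the idea (the class names are hypothetical, not actual GlassFish code): both classes are declared public so they can call each other across packages inside the bundle, but only the first package is listed in -exportcontents, so only Facade is resolvable from other bundles.

        // File: org/glassfish/mymodule/truepublics/Facade.java
        // (org.glassfish.mymodule.truepublics is the one package listed in -exportcontents)
        package org.glassfish.mymodule.truepublics;

        import org.glassfish.mymodule.unexported.Worker;

        public class Facade {                   // a "true" public: visible to other bundles
            public String doWork() {
                return new Worker().work();     // free to use any module-wide public
            }
        }

        // File: org/glassfish/mymodule/unexported/Worker.java
        // (org.glassfish.mymodule.unexported is deliberately NOT exported)
        package org.glassfish.mymodule.unexported;

        public class Worker {                   // public inside the bundle, "module-private" outside it
            public String work() {
                return "done";
            }
        }

    A bundle that imports org.glassfish.mymodule.unexported anyway will still compile, but OSGi will refuse to wire it at load time, which is exactly the failure shown in the error message above.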

    Read the article

  • Unity.ResolutionFailedException - Resolution of the dependency failed

    - by Anibas
    I have the following code: public static IEngine CreateEngine() { UnityContainer container = Unity.LoadUnityContainer(DefaultStrategiesContainerName); IEnumerable<IStrategy> strategies = container.ResolveAll<IStrategy>(); ITraderProvider provider = container.Resolve<ITraderProvider>(); return new Engine(provider, new List<IStrategy>(strategies)); } and the config: <unity> <typeAliases> <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity" /> <typeAlias alias="weakRef" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity" /> <typeAlias alias="Strategy" type="ADTrader.Core.Contracts.IStrategy, ADTrader.Core" /> <typeAlias alias="Trader" type="ADTrader.Core.Contracts.ITraderProvider, ADTrader.Core" /> </typeAliases> <containers> <container name="strategies"> <types> <type type="Strategy" mapTo="ADTrader.Strategies.ThreeTurningStrategy, ADTrader.Strategies" name="1" /> <type type="Trader" mapTo="ADTrader.MbTradingProvider.MBTradingProvider, ADTrader.MbTradingProvider" /> </types> </container> </containers></unity> I am getting the following exception: Microsoft.Practices.Unity.ResolutionFailedException: Resolution of the dependency failed, type = "ADTrader.Core.Contracts.ITraderProvider", name = "". Exception message is: The current build operation (build key Build Key[ADTrader.MbTradingProvider.MBTradingProvider, null]) failed: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. (Strategy type BuildPlanStrategy, index 3) --- Microsoft.Practices.ObjectBuilder2.BuildFailedException: The current build operation (build key Build Key[ADTrader.MbTradingProvider.MBTradingProvider, null]) failed: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. (Strategy type BuildPlanStrategy, index 3) --- System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. at MBTCOMLib.MbtComMgrClass.EnableSplash(Boolean bEnable) at ADTrader.MbTradingProvider.MBTradingProvider..ctor() at BuildUp_ADTrader.MbTradingProvider.MBTradingProvider(IBuilderContext ) at Microsoft.Practices.ObjectBuilder2.DynamicMethodBuildPlan.BuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.BuildPlanStrategy.PreBuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) --- End of inner exception stack trace --- at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.Builder.BuildUp(IReadWriteLocator locator, ILifetimeContainer lifetime, IPolicyList policies, IStrategyChain strategies, Object buildKey, Object existing) at Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name) --- End of inner exception stack trace --- at Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name) at Microsoft.Practices.Unity.UnityContainer.Resolve(Type t, String name) at Microsoft.Practices.Unity.UnityContainerBase.ResolveT at ADTrader.Engine.EngineFactory.CreateEngine() Any idea?

    Read the article

  • Java Deflater strategies - DEFAULT_STRATEGY, FILTERED and HUFFMAN_ONLY

    - by Keyur
    I'm trying to find a balance between performance and degree of compression when gzipping a Java webapp response. Looking at the Deflater class, I can set a level and a strategy. The levels are self-explanatory - BEST_SPEED to BEST_COMPRESSION. I'm not sure about the strategies - DEFAULT_STRATEGY, FILTERED and HUFFMAN_ONLY. I can make some sense of them from the Javadoc, but I was wondering if someone has used a specific strategy in their apps and whether you saw any difference in terms of performance / degree of compression. Thanks, Keyur
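    One way to get a feel for the trade-off is to compress the same payload with each strategy and compare the output sizes; the sketch below does that at BEST_SPEED (the class name, sample data and buffer size are arbitrary). Per the Javadoc, FILTERED forces more Huffman coding and less string matching (intended for data consisting mostly of small, somewhat random values), while HUFFMAN_ONLY skips string matching entirely, which generally makes it faster but less compact on repetitive, text-like output such as HTML.

        import java.nio.charset.StandardCharsets;
        import java.util.zip.Deflater;

        public class DeflaterStrategyDemo {

            // Compress the input with the given strategy and return the compressed size in bytes.
            static int compressedSize(byte[] input, int strategy) {
                Deflater deflater = new Deflater(Deflater.BEST_SPEED);
                deflater.setStrategy(strategy);
                deflater.setInput(input);
                deflater.finish();
                byte[] buffer = new byte[8192];
                int total = 0;
                while (!deflater.finished()) {
                    total += deflater.deflate(buffer);   // only counting bytes, so reusing the buffer is fine
                }
                deflater.end();
                return total;
            }

            public static void main(String[] args) {
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < 2000; i++) {
                    sb.append("<item id=\"").append(i).append("\">some repetitive value</item>\n");
                }
                byte[] data = sb.toString().getBytes(StandardCharsets.UTF_8);

                System.out.println("input size:       " + data.length);
                System.out.println("DEFAULT_STRATEGY: " + compressedSize(data, Deflater.DEFAULT_STRATEGY));
                System.out.println("FILTERED:         " + compressedSize(data, Deflater.FILTERED));
                System.out.println("HUFFMAN_ONLY:     " + compressedSize(data, Deflater.HUFFMAN_ONLY));
            }
        }

    For a typical HTML or JSON response, DEFAULT_STRATEGY usually wins on compressed size; running a similar comparison against a captured copy of the real response is the most reliable way to decide.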

    Read the article

  • Hibernate @DiscriminatorColumn and @DiscriminatorValue issue

    - by user224270
    I have this parent class: @Entity @Inheritance(strategy = InheritanceType.JOINED) @Table(name = "BSEntity") @DiscriminatorColumn(name = "resourceType", discriminatorType = DiscriminatorType.STRING, length = 32) public abstract class BaseEntity { and the subclass @Entity @Table(name = "BSCategory") @DiscriminatorValue("Category") public class CategoryEntity extends BaseEntity { But when I run the program, I get the following error: 2010-06-03 10:13:54,222 [main] WARN (org.hibernate.util.JDBCExceptionReporter:100) - SQL Error: 1364, SQLState: HY000 2010-06-03 10:13:54,222 [main] ERROR (org.hibernate.util.JDBCExceptionReporter:101) - Field 'resourceType' doesn't have a default value Any thoughts? Updated: Database is MySQL. I have also changed inheritance strategy to JOINED instead of SINGLE_TABLE. Help

    Read the article

  • must have tools for better quality code

    - by leon
    I just started my real development career and I want to know what set of tools/strategies the community is using to write better-quality code. To start, I use:

    - astyle to format my code
    - doxygen to document my code
    - gcc -Wall -Wextra -pedantic and clang -Wall -Wextra -pedantic to check all warnings

    What tools/strategies do you use to write better code? This question is open to all languages and all platforms.

    Read the article

  • BPA scan did not complete for one or more servers

    - by Hossein Aarabi
    In Windows Server 2012 RTM, I am trying to run the BPA. It fails, saying "BPA scan did not complete for one or more servers" (tries #1 and #2, screenshots omitted). So I decided to enable "Turn on Script Execution" (with "Allow all scripts") in the Local Group Policy Editor; now I get a very nice exception message :) Clicking the Ignore button, BPA logs an error message (try #3, screenshot omitted). So I decided to go ahead and set the execution policy for all scopes in PowerShell to Unrestricted. Again no luck. What is going on?

    Read the article

  • How to load powershell profile from cygwin bash?

    - by Jon Erickson
    In cygwin bash I am able to type "powershell" to bring up a PowerShell prompt, but it won't load my PowerShell profile.ps1 because it isn't allowed to execute scripts, and I can't set the execution policy from within this prompt... So I tried running the default PowerShell prompt (as administrator) and was able to set the execution policy to RemoteSigned, but it doesn't affect the PowerShell running inside bash. What am I missing?

    Read the article

  • What are incentives (if any) to use WinRT instead of .Net?

    - by Ark-kun
    Let's compare WinRT with .Net.

    .Net

    - .Net is the 13+ years evolution of COM. Three main parts of .Net are the execution environment, standard libraries and supported languages.
    - CLR is the native-code execution environment based on COM.
    - .Net Framework has a big set of standard libraries (implemented using managed and native code) that can be used from all .Net languages. There are .Net classes that allow using OS APIs. WPF or Silverlight provide a XAML-based UI framework.
    - .Net can be used with C++, C#, Javascript, Python, Ruby, VB, LISP, Scheme and many other languages. C++/.Net is a variation of the C++ language that allows interaction with .Net objects.
    - .Net supports inheritance, generics, operator and method overloading and many other features.
    - .Net allows creating apps that run on Windows (XP, 7, 8 Pro (Desktop and Metro), RT, CE, etc), Mac OS, Linux (+ other *nix); iOS, Android, Windows Phone (7, 8); Internet Explorer, Chrome, Firefox; XBox 360, Playstation Suite; raw microprocessors. There is support for creating games (2D/3D) using any managed language or C++.
    - Created by the Developer Division.

    WinRT

    - WinRT is based on COM. Three main parts of WinRT are the execution environment, standard libraries and supported languages.
    - WinRT has a native-code execution environment based on COM.
    - WinRT has a set of standard libraries that more or less can be used from WinRT languages. There are WinRT classes that allow using OS APIs. An unnamed Silverlight clone provides a XAML-based UI framework.
    - WinRT can be used with C++, C#, Javascript, VB. C++/CX is a variation of the C++ language that allows interaction with WinRT objects.
    - Custom WinRT components don't support inheritance (classes must be sealed), generics, operator overloading and many other features.
    - WinRT allows creating apps that run on Windows 8 Pro and RT (Metro only) and Windows Phone 8 (limited). There is support for creating games (2D/3D) using C++ only.
    - Ordered by the Windows Team.

    I think that all the aspects except the last ones are very important for developers. On the other hand it seems that the most important aspect for Microsoft is the last one. So, given the above comparison of conceptually identical technologies, what are the incentives (if any) to use WinRT instead of .Net?

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
    SHA-1 is a "hash" or "digest" operation that produces a 160-bit (20-byte) checksum value on arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via libmd(3LIB), the industry-standard libpkcs11(3LIB) library, and the Solaris kernel module sha1. The optimized code is used automatically on systems with an x86 CPU supporting SSSE3 (Intel Supplemental SSE3). Intel microprocessor architectures that support SSSE3 include the Nehalem, Westmere, and Sandy Bridge families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge).

    Although SHA-1 is considered obsolete because of weaknesses found in the algorithm (NIST recommends using at least SHA-256), SHA-1 is still widely used and will be with us for a while longer. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is still used heavily in SSL/TLS, for example, and it is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS.

    Optimizations Review

    SHA-1 operates by reading an arbitrary amount of data. The data is read in 512-bit (64-byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64-byte block has 80 "rounds" of calculations (consisting of a mixture of ROTATE-LEFT, AND, and XOR) applied to it. Each round produces a 32-bit intermediate result, called W[i]. Here's how each round operates:

    1. The first 16 rounds, rounds 0 to 15, read the 512-bit block 32 bits at a time. These 32 bits are used as input to the round.
    2. The remaining rounds, rounds 16 to 79, use the results from the previous rounds as input. Specifically, for round i the algorithm XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit.
    3. The remaining calculations for the round are a series of AND, XOR, and ROTATE-LEFT operations on the 32-bit input and some constants. The 32-bit result is saved as W[i] for round i.

    After the 80 rounds of the final block have been applied, the accumulated 160-bit state is the SHA-1 checksum.

    Optimization: Vectorization

    The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. As for the remaining rounds: because of step 2 above, computing round i depends on the result of round i-3, W[i-3], so one can only vectorize 3 rounds at a time. Max Locktyukhin found through simple factoring, explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing the intermediate result W[i] with:

        W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1

    initialize W[i] as follows:

        W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2

    That means that 6 rounds can be vectorized at once, with no additional calculations, instead of just 3! This optimization is independent of Intel or any other microprocessor architecture (although the microprocessor has to support vectorization to use it) and exploits one of the weaknesses of SHA-1.

    Optimization: SSSE3

    Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide.
    The 4 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article, converted to ATT assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions:

        movdqa  W_minus_04, W_TMP
        pxor    W_minus_28, W          // W equals W[i-32:i-29] before XOR
                                       // W = W[i-32:i-29] ^ W[i-28:i-25]
        palignr $8, W_minus_08, W_TMP  // W_TMP = W[i-6:i-3], combined from
                                       // W[i-4:i-1] and W[i-8:i-5] vectors
        pxor    W_minus_16, W          // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
        pxor    W_TMP, W               // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
        movdqa  W, W_TMP               // 4 dwords in W are rotated left by 2
        psrld   $30, W                 // rotate left by 2: W = (W >> 30) | (W << 2)
        pslld   $2, W_TMP
        por     W, W_TMP
        movdqa  W_TMP, W               // four new W values W[i:i+3] are now calculated
        paddd   (K_XMM), W_TMP         // adding 4 current round's values of K
        movdqa  W_TMP, (WK(i))         // storing for downstream GPR instructions to read

    A window of the 32 previous results, W[i-1] to W[i-32], is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, computing the rounds is like this (each "R" represents 1 round of SHA-1 computation):

        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR

    With vectorization, 4 rounds can be computed in parallel:

        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR

    Optimization: AVX

    The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, an input and an output. AVX allows three operands: two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above:

        pxor W_minus_16, W             // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
        pxor W_TMP, W                  // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]

    With AVX they can be combined into one instruction:

        vpxor W_minus_16, W, W_TMP     // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]

    This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader: AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of the 128-bit %xmm0 - %xmm15). Can the %ymm registers be used to parallelize the code even more?

    Optimization: Solaris-specific

    In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code:

    - Increased the digest(1) and mac(1) commands' buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1). This size is well suited for ZFS file systems, but helps for other file systems as well.
    - Optimized the encode functions, which byte-swap the input and output data, to copy/byte-swap 4 or 8 bytes at a time instead of 1 byte at a time.
    - Enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (mdb "$x" command). Previously they only displayed the first 8 that are available in 32-bit mode. Can't optimize if you can't debug :-).
    - Changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes (64-bits).

    Performance

    I measured performance on a Sun Ultra 27 (which has a Nehalem-class Xeon 5500 Intel W3570 microprocessor @3.2GHz). Turbo mode is disabled for consistent performance measurement.
    Graphs are better than words and numbers, so here they are:

    The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB). I ran the digest command on a half-GByte file in swapfs (/tmp) and execution time decreased from 1.35 seconds to 0.98 seconds.

    The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128-byte buffer with 10,000 iterations. The results show operations increased from 320,000 to 416,000 operations per second.

    Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module. The operations are on a 64-Kbyte buffer with 100 iterations. The results show that for 1 kernel thread, operations increased from 410 to 600 MBytes/second. For 8 kernel threads, operations increased from 1540 to 1940 MBytes/second.

    Availability

    This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and is in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, the Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example:

        nehalem $ isainfo -v
        64-bit amd64 applications
                sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
                sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

    If the output also shows "avx", Solaris executes the even-more-optimized 3-operand AVX instructions for SHA-1 mentioned above:

        sandybridge $ isainfo -v
        64-bit amd64 applications
                avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
                avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

    No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine whether they are running on an SSSE3- or AVX-capable machine and execute the correctly-tuned code for that microprocessor.

    Summary

    The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporated a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, it comes automatically "under the hood" with the current Solaris release.

    References

    - "Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010) - the source for these SHA-1 optimizations used in Solaris
    - "SHA-1", Wikipedia - good overview of SHA-1
    - FIPS 180-1, SHA-1 standard (FIPS, 1995)
    - NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006)
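    To see that the refactored recurrence really is equivalent, here is a small scalar sketch (plain Java, not the Solaris or Intel code; the class name is made up) that computes the 80-entry message schedule for one random block with the standard W[i-3]/W[i-8]/W[i-14]/W[i-16] rule, then recomputes rounds 32 to 79 with the W[i-6]/W[i-16]/W[i-28]/W[i-32] rule and checks that the two agree. The refactored form only applies once i >= 32, which is why the first 32 entries are copied over:

        import java.util.Random;

        public class Sha1ScheduleCheck {
            public static void main(String[] args) {
                Random rnd = new Random(42);

                // Standard SHA-1 message schedule for one 512-bit block of random data.
                int[] std = new int[80];
                for (int i = 0; i < 16; i++) {
                    std[i] = rnd.nextInt();
                }
                for (int i = 16; i < 80; i++) {
                    std[i] = Integer.rotateLeft(std[i - 3] ^ std[i - 8] ^ std[i - 14] ^ std[i - 16], 1);
                }

                // Refactored recurrence: valid for i >= 32, so seed the first 32 entries from the standard rule.
                int[] alt = new int[80];
                System.arraycopy(std, 0, alt, 0, 32);
                for (int i = 32; i < 80; i++) {
                    alt[i] = Integer.rotateLeft(alt[i - 6] ^ alt[i - 16] ^ alt[i - 28] ^ alt[i - 32], 2);
                }

                for (int i = 0; i < 80; i++) {
                    if (std[i] != alt[i]) {
                        throw new AssertionError("mismatch at round " + i);
                    }
                }
                System.out.println("Both recurrences produce the same 80-entry message schedule.");
            }
        }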

    Read the article

  • Spring AOP Pointcut syntax for AND, OR and NOT

    - by ixlepixle
    I'm having trouble with a pointcut definition in Spring (version 2.5.6). I'm trying to intercept all method calls to a class, except for a given method (someMethod in the example below). <aop:config> <aop:advisor pointcut="execution(* x.y.z.ClassName.*(..)) AND NOT execution(* x.y.x.ClassName.someMethod(..))" /> </aop:config> However, the interceptor is invoked for someMethod as well. Then I tried this: <aop:config> <aop:advisor pointcut="execution(* x.y.z.ClassName.(* AND NOT someMethod)(..)) )" /> </aop:config> But this does not compile as it is not valid syntax (I get a BeanCreationException). Can anybody give any tips?

    Read the article

  • How to publish multiple jar files to maven on a clean install

    - by Abhijit Hukkeri
    I have a used the maven assembly plugin to create multiple jar from one jar now the problem is that I have to publish these jar to the local repo, just like other maven jars publish by them self when they are built maven clean install how will I be able to do this here is my pom file <project> <parent> <groupId>parent.common.bundles</groupId> <version>1.0</version> <artifactId>child-bundle</artifactId> </parent> <modelVersion>4.0.0</modelVersion> <groupId>common.dataobject</groupId> <artifactId>common-dataobject</artifactId> <packaging>jar</packaging> <name>common-dataobject</name> <version>1.0</version> <dependencies> </dependencies> <build> <plugins> <plugin> <groupId>org.jibx</groupId> <artifactId>maven-jibx-plugin</artifactId> <version>1.2.1</version> <configuration> <directory>src/main/resources/jibx_mapping</directory> <includes> <includes>binding.xml</includes> </includes> <verbose>false</verbose> </configuration> <executions> <execution> <goals> <goal>bind</goal> </goals> </execution> </executions> </plugin> <plugin> <artifactId>maven-assembly-plugin</artifactId> <executions> <execution> <id>make-business-assembly</id> <phase>package</phase> <goals> <goal>single</goal> </goals> <configuration> <appendAssemblyId>false</appendAssemblyId> <finalName>flight-dto</finalName> <descriptors> <descriptor>src/main/assembly/car-assembly.xml</descriptor> </descriptors> <attach>true</attach> </configuration> </execution> <execution> <id>make-gui-assembly</id> <phase>package</phase> <goals> <goal>single</goal> </goals> <configuration> <appendAssemblyId>false</appendAssemblyId> <finalName>app_gui</finalName> <descriptors> <descriptor>src/main/assembly/bike-assembly.xml</descriptor> </descriptors> <attach>true</attach> </configuration> </execution> </executions> </plugin> </plugins> </build> </project> Here is my assembly file <assembly> <id>app_business</id> <formats> <format>jar</format> </formats> <baseDirectory>target</baseDirectory> <includeBaseDirectory>false</includeBaseDirectory> <fileSets> <fileSet> <directory>${project.build.outputDirectory}</directory> <outputDirectory></outputDirectory> <includes> <include>com/dataobjects/**</include> </includes> </fileSet> </fileSets> </assembly>

    Read the article

  • How do asynchronous methods work

    - by Polaris878
    Hello, I'm wondering if anyone can help me understand some asynchronous javascript concepts... Say I make an asynch ajax call like so: xmlhttp=new XMLHttpRequest(); xmlhttp.onreadystatechange= myFoo; xmlhttp.open("GET",url,true); Here is my callback function: function myFoo() { if (xmlhttp.readyState==4) { if (xmlhttp.status==200) { // Success message } else { // some error message } } } Now, where and when does the execution path start again? Once I make the call to open(), does execution continue directly below the open() and another "thread" enters the asynch function once the ajax request has been completed? Or, does the browser wait for the request to complete, make the Asynch call, and then execution continues right after the open? Thanks!

    Read the article

  • Maven Release Plugin with JAXB issues

    - by Wysawyg
    Hiya, We've got a project set up to use the Maven Release Plugin which includes a phase that unpacks a JAR of XML schemas pulled from Artifactory and a phase that generates XJC classes. We're on maven release 2.2.1. Unfortunately the latter phase is executing before the former which means that it isn't generating the XJC classes for the schema. A partial POM.XML looks like: <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.6</source> <target>1.6</target> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-dependency-plugin</artifactId> <executions> <execution> <id>unpack</id> <!-- phase>generate-sources</phase --> <goals> <goal>unpack</goal> <goal>copy</goal> </goals> <configuration> <artifactItems> <artifactItem> <groupId>ourgroupid</groupId> <artifactId>ourschemas</artifactId> <version>5.1</version> <outputDirectory>${project.basedir}/src/main/webapp/xsd</outputDirectory> <excludes>META-INF/</excludes> <overWrite>true</overWrite> </artifactItem> </artifactItems> </configuration> </execution> </executions> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>maven-buildnumber-plugin</artifactId> <version>0.9.6</version> <executions> <execution> <phase>validate</phase> <goals> <goal>create</goal> </goals> </execution> </executions> <configuration> <doCheck>true</doCheck> <doUpdate>true</doUpdate> </configuration> </plugin> <plugin> <groupId>org.jvnet.jaxb2.maven2</groupId> <artifactId>maven-jaxb2-plugin</artifactId> <configuration> <schemaDirectory>${project.basedir}/src/main/webapp/xsd</schemaDirectory> <schemaIncludes> <include>*.xsd</include> <include>*/*.xsd</include> </schemaIncludes> <verbose>true</verbose> <!-- args> <arg>-Djavax.xml.validation.SchemaFactory:http://www.w3.org/2001/XMLSchema=org.apache.xerces.jaxp.validation.XMLSchemaFactory</arg> </args--> </configuration> <executions> <execution> <goals> <goal>generate</goal> </goals> </execution> </executions> </plugin> I've tried googling for it, unfortunately I ended up with a case of thousands of links none of which were actually relevant so I'd be very grateful if someone knew how to configure the order of the release plugin steps to ensure a was fully executed before it did b. Thanks

    Read the article

  • How to publish the jars to repository after creating multiple jars from single jar using maven assem

    - by Abhijit Hukkeri
    Hi I have a used the maven assembly plugin to create multiple jar from one jar now the problem is that I have to publish these jar to the local repo, just like other maven jars publish by them self when they are built maven clean install how will I be able to do this here is my pom file <project> <parent> <groupId>parent.common.bundles</groupId> <version>1.0</version> <artifactId>child-bundle</artifactId> </parent> <modelVersion>4.0.0</modelVersion> <groupId>common.dataobject</groupId> <artifactId>common-dataobject</artifactId> <packaging>jar</packaging> <name>common-dataobject</name> <version>1.0</version> <dependencies> </dependencies> <build> <plugins> <plugin> <groupId>org.jibx</groupId> <artifactId>maven-jibx-plugin</artifactId> <version>1.2.1</version> <configuration> <directory>src/main/resources/jibx_mapping</directory> <includes> <includes>binding.xml</includes> </includes> <verbose>false</verbose> </configuration> <executions> <execution> <goals> <goal>bind</goal> </goals> </execution> </executions> </plugin> <plugin> <artifactId>maven-assembly-plugin</artifactId> <executions> <execution> <id>make-business-assembly</id> <phase>package</phase> <goals> <goal>single</goal> </goals> <configuration> <appendAssemblyId>false</appendAssemblyId> <finalName>flight-dto</finalName> <descriptors> <descriptor>src/main/assembly/car-assembly.xml</descriptor> </descriptors> <attach>true</attach> </configuration> </execution> <execution> <id>make-gui-assembly</id> <phase>package</phase> <goals> <goal>single</goal> </goals> <configuration> <appendAssemblyId>false</appendAssemblyId> <finalName>app_gui</finalName> <descriptors> <descriptor>src/main/assembly/bike-assembly.xml</descriptor> </descriptors> <attach>true</attach> </configuration> </execution> </executions> </plugin> </plugins> </build> </project> Here is my assembly file <assembly> <id>app_business</id> <formats> <format>jar</format> </formats> <baseDirectory>target</baseDirectory> <includeBaseDirectory>false</includeBaseDirectory> <fileSets> <fileSet> <directory>${project.build.outputDirectory}</directory> <outputDirectory></outputDirectory> <includes> <include>com/dataobjects/**</include> </includes> </fileSet> </fileSets> </assembly>

    Read the article

  • Javascript ajax asynchronous question...

    - by Polaris878
    Hello, I'm wondering if anyone can help me understand some asynchronous javascript concepts... Say I make an asynch ajax call like so: xmlhttp=new XMLHttpRequest(); xmlhttp.onreadystatechange= myFoo; xmlhttp.open("GET",url,true); Here is my callback function: function myFoo() { if (xmlhttp.readyState==4) { if (xmlhttp.status==200) { // Success message } else { // some error message } } } Now, where and when does the execution path start again? Once I make the call to open(), does execution continue directly below the open() and another "thread" enters the asynch function once the ajax request has been completed? Or, does the browser wait for the request to complete, make the Asynch call, and then execution continues right after the open? Thanks!

    Read the article

  • Throughput measurements

    - by dotsid
    I wrote a simple load-testing tool for testing the performance of Java modules. One problem I faced is the algorithm for throughput measurement. Tests are executed in several threads (the client configures how many times the test should be repeated), and the execution time is logged. So, when the tests are finished we have the following history:

        4 test executions, 2 threads, 36ms overall time    (- idle, * test execution)

            5ms      9ms     4ms       13ms
        T1 |-*****-*********-****-*************-|
            3ms   6ms     7ms      11ms
        T2 |-***-******-*******-***********-----|
           <-----------------36ms--------------->

    For the moment I calculate throughput (per second) in the following way: 1000 / overallTime * threadCount. But there is a problem. What if one thread completes its own tests more quickly (for whatever reason):

            3ms  3ms  3ms  3ms
        T1 |-***-***-***-***----------------|
            3ms   6ms     7ms      11ms
        T2 |-***-******-*******-***********-|
           <--------------32ms-------------->

    In this case the actual throughput is much better, but the measured throughput is bounded by the slowest thread. So, my question is: how should I measure the throughput of code execution in a multithreaded environment?
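    One common approach (a minimal sketch, not a drop-in for the tool described above; the class and method names are invented) is to count only the work that actually completed and divide it by the wall-clock time from the first start to the last finish, rather than multiplying a per-thread rate by the thread count. A thread that finishes early then simply contributes fewer completed executions instead of skewing the formula:

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.atomic.AtomicLong;

        public class ThroughputDemo {

            // Placeholder for the module under test; the sleep stands in for real work.
            static void runTestOnce() {
                try {
                    Thread.sleep(5);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }

            public static void main(String[] args) throws InterruptedException {
                final int threads = 2;
                final int repetitionsPerThread = 4;
                final AtomicLong completed = new AtomicLong();
                final CountDownLatch done = new CountDownLatch(threads);
                ExecutorService pool = Executors.newFixedThreadPool(threads);

                long start = System.nanoTime();
                for (int t = 0; t < threads; t++) {
                    pool.execute(() -> {
                        for (int i = 0; i < repetitionsPerThread; i++) {
                            runTestOnce();
                            completed.incrementAndGet();   // count work that actually finished
                        }
                        done.countDown();
                    });
                }
                done.await();                              // wall clock stops when the slowest thread finishes
                double elapsedSeconds = (System.nanoTime() - start) / 1_000_000_000.0;
                pool.shutdown();

                System.out.printf("throughput: %.1f executions/second%n", completed.get() / elapsedSeconds);
            }
        }

    If a figure that ignores end-of-run idle time is also wanted, each thread's busy time can be summed separately and reported alongside, but the wall-clock number is usually what "throughput" means to whoever reads the report.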

    Read the article

  • Sql server indexed view

    - by Jose
    OK, I'm confused about SQL Server indexed views (using 2008). I've got an indexed view called AssignmentDetail. When I look at the execution plan for select * from AssignmentDetail, it shows the execution plan of all the underlying indexes of all the other tables that the indexed view is supposed to abstract away. I would think that the execution plan would simply be a clustered index scan of PK_AssignmentDetail (the name of the clustered index for my view), but it isn't. There seems to be no performance gain with this indexed view; what am I supposed to do? Should I also create a non-clustered index with all of the columns so that it doesn't have to hit all the other indexes? Any insight would be greatly appreciated.

    Read the article

  • Logging into table in MS SQL trigger

    - by Martin
    I am coding an MS SQL 2005 trigger. I want to do some logging during trigger execution, using an INSERT statement into my log table. When an error occurs during execution, I want to raise an error and cancel the action that caused the trigger to fire, but not lose the log records. What is the best way to achieve this? Right now my trigger logs everything except the situation where there is an error - because of the ROLLBACK. The RAISERROR statement is needed in order to inform the calling program about the error. Currently, my error handling code looks like this:

        if (@err = 1)
        begin
            INSERT INTO dbo.log(date, entry)
            SELECT getdate(), 'ERROR: ' + out from #output
            RAISERROR (@msg, 16, 1)
            rollback transaction
            return
        end

    Read the article

  • Best Automation Framework design

    - by Vijay Prasath
    Between using the NUnit framework and creating Visual Studio test projects, which is the better way to save time and automate effectively? Right now I am using Selenium IDE to script as much of my application as possible to reduce execution time (I feel IDE execution is faster than test project execution), using gotoif, while, regexp, etc., and I would use Selenium RC only for data-driven methods and the events that the IDE cannot handle. Please advise: am I on the right track? I am at the beginning stage of automating my applications and am asking this question early so I can correct course if needed.

    Read the article

  • Adding java source (.java files) to test jar in Maven

    - by user320550
    Hi all, I'm making use of my pom.xml and am was able to generate the jar for src/main/java (say app.jar) as well as for src/test/java (say app-test.jar). I was also able to include my java sources as part of the app.jar (i.e. have both my .class as well as my .java files in the jar). However for my app-test.jar, i'm not able to include my .java files in it. This is my pom.xml: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.mycompany.app</groupId> <artifactId>my-app</artifactId> <version>1.0-SNAPSHOT</version> <packaging>jar</packaging> <name>my-app</name> <url>http://maven.apache.org</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> </properties> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.8.1</version> <scope>test</scope> </dependency> </dependencies> <build> <resources> <resource> <directory>src/main/java</directory> </resource> </resources> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <version>2.3.1</version> <executions> <execution> <phase>package</phase> <goals> <goal>test-jar</goal> </goals> <configuration> <includes> <include>src/test/java</include> </includes> </configuration> </execution> </executions> </plugin> </plugins> </build> </project> Any help would be appreciated. Thanks. Update on post on Whaley's suggestion: Tried the maven-antrun-plugin, but rt now after running mvn package all i'm getting inside my tests.jar is the META-INF folder. .java and .class are not getting included: This is the part of the pom.xml <build> <resources> <resource> <directory>src/main/java</directory> </resource> </resources> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <executions> <execution> <phase>package</phase> <goals> <goal>test-jar</goal> </goals> <configuration> <includes> <include>src/test/java</include> </includes> </configuration> </execution> </executions> </plugin> <plugin> <artifactId>maven-antrun-plugin</artifactId> <executions> <execution> <id>${project.artifactId}-include-sources</id> <phase>process-resources</phase> <goals> <goal>run</goal> </goals> <configuration> <tasks> <copy todir="${project.build.testOutputDirectory}"> <fileset dir="${project.build.testSourceDirectory}"/> </copy> </tasks> </configuration> </execution> </executions> </plugin> </plugins> </build> Thanks.

    Read the article

  • java.util.Observable, will clients complete executing their update() before continuing

    - by jax
    When I call setChanged(); notifyObservers() on a java.util.Observable class, will all the listening Observer objects complete execution of their update() methods - assuming we are running in the same thread - before the java.util.Observable class continues running? This is important because I will be sending a few messages through the notifyObservers(Object o) method in quick succession, and each Observer must have finished handling one message before the next one arrives. I understand that the order of execution across the different Observer instances may vary when we call notifyObservers() - it is just important that the method executions for each individual instance happen in order.
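    For what it's worth, java.util.Observable.notifyObservers() invokes each registered observer's update() directly on the calling thread, so it does not return until every update() call has returned (unless an observer hands the work off to another thread itself). A small sketch of that ordering, with made-up class names:

        import java.util.Observable;
        import java.util.Observer;

        public class NotifyOrderDemo {

            static class MessageSource extends Observable {
                void send(String message) {
                    setChanged();
                    notifyObservers(message);   // synchronous: every update() has returned by the next line
                    System.out.println("notifyObservers returned for: " + message);
                }
            }

            public static void main(String[] args) {
                MessageSource source = new MessageSource();
                Observer observer = (observable, arg) ->
                        System.out.println("update handled: " + arg);
                source.addObserver(observer);

                source.send("first");
                source.send("second");   // "update handled: first" is guaranteed to have printed already
            }
        }

    (Observable and Observer were later deprecated in Java 9, but the synchronous delivery behavior described here is unchanged.)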

    Read the article

  • Are there any subversion "dashboard" web applications that can show me a list of recent commits from all my repositories?

    - by Joe
    I am looking for something like a Subversion dashboard that, at the very least, can show me commits from across a group of repositories. Is there anything like this available? Since it could just as well be dead simple and I can't find anything immediately, I am thinking of just scratching my own itch here, but I am hoping someone has wanted this before. Are there any Subversion "dashboards" that can show me even a simple Twitter-like list of commits from across my repositories?

    Read the article

  • Microsoft Declares the Future of ASP.NET is Web API

    - by sbwalker
    Sitting on a plane on my way home from Tech Ed 2012 in Orlando, I thought it would be a good time to jot down some key takeaways from this year’s conference. Some of these items I have known since the Microsoft MVP Summit which occurred in Redmond in late February ( but due to NDA restrictions I could not share them with the developer community at large ) and some of them are a result of insightful conversations with a wide variety of industry insiders and Microsoft employees at the conference. First, let’s travel back in time 4 years to the Microsoft MVP Summit in 2008. Microsoft was facing some heat from market newcomer Ruby on Rails and responded with a new web development framework of its own, ASP.NET MVC. At the Summit they estimated that MVC would only be applicable for ~10% of all new web development projects. Based on that prediction I questioned why they were investing such considerable resources for such a relative edge case, but my guess is that they felt it was an important edge case at the time as some of the more vocal .NET evangelists as well as some very high profile start-ups ( ie. Twitter ) had publicly announced their intent to use Rails. Microsoft made a lot of noise about MVC. In fact, they focused so much of their messaging and marketing hype around MVC that it appeared that WebForms was essentially dead. Yes, it may have been true that Microsoft continued to invest in WebForms, but from an outside perspective it really appeared that MVC was the only framework getting any real attention. As a result, MVC started to gain market share. An inside source at Microsoft told me that MVC usage has grown at a rate of about 5% per year and now sits at ~30%. Essentially by focusing so much marketing effort on MVC, Microsoft actually created a larger market demand for it.  This is because in the Microsoft ecosystem there is somewhat of a bandwagon mentality amongst developers. If Microsoft spends a lot of time talking about a specific technology, developers get the perception that it must be really important. So rather than choosing the right tool for the job, they often choose the tool with the most marketing hype and then try to sell it to the customer. In 2010, I blogged about the fact that MVC did not make any business sense for the DotNetNuke platform. This was because our ecosystem relied on third party extensions which were dependent on the WebForms model. If we migrated the core to MVC it would mean that all of the third party extensions would no longer be compatible, which would be an irresponsible business decision for us to make at the expense of our users and customers. However, this did not stop the debate from continuing to occur in our ecosystem. Clearly some developers had drunk Microsoft’s Kool-Aid about MVC and were of the mindset, to paraphrase an old Scottish saying, “If its not MVC, it’s crap”. Now, this is a rather ignorant position to take as most of the benefits of MVC can be achieved in WebForms with solid architecture and responsible coding practices. Clean separation of concerns, unit testing, and direct control over page output are all possible in the WebForms model – it just requires diligence and discipline. So over the past few years some horror stories have begun to bubble to the surface of software development projects focused on ground-up rewrites of web applications for the sole purpose of migrating from WebForms to MVC. 
These large scale rewrites were typically initiated by engineering teams with only a single argument driving the business decision, that Microsoft was promoting MVC as “the future”. These ill-fated rewrites offered no benefit to end users or customers and in fact resulted in a less stable, less scalable and more complicated systems – basically taking one step forward and two full steps back. A case in point is the announcement earlier this week that a popular open source .NET CMS provider has decided to pull the plug on their new MVC product which has been under active development for more than 18 months and revert back to WebForms. The availability of multiple server-side development models has deeply fragmented the Microsoft developer community. Some folks like to compare it to the age-old VB vs. C# language debate. However, the VB vs. C# language debate was ultimately more of a religious war because at least the two dominant programming languages were compatible with one another and could be used interchangeably. The issue with WebForms vs. MVC is much more challenging. This is because the messaging from Microsoft has positioned the two solutions as being incompatible with one another and as a result web developers feel like they are forced to choose one path or another. Yes, it is true that it has always been technically possible to use WebForms and MVC in the same project, but the tooling support has always made this feel “dirty”. The fragmentation has also made it difficult to attract newcomers as the perceived barrier to entry for learning ASP.NET has become higher. As a result many new software developers entering the market are gravitating to environments where the development model seems more simple and intuitive ( ie. PHP or Ruby ). At the same time that the Web Platform team was busy promoting ASP.NET MVC, the Microsoft Office team has been promoting Sharepoint as a platform for building internal enterprise web applications. Sharepoint has great penetration in the enterprise and over time has been enhanced with improved extensibility capabilities for software developers. But, like many other mature enterprise ASP.NET web applications, it is built on the WebForms development model. Similar to DotNetNuke, Sharepoint leverages a rich third party ecosystem for both generic web controls and more specialized WebParts – both of which rely on WebForms. So basically this resulted in a situation where the Web Platform group had headed off in one direction and the Office team had gone in another direction, and the end customer was stuck in the middle trying to figure out what to do with their existing investments in Microsoft technology. It really emphasized the perception that the left hand was not speaking to the right hand, as strategically speaking there did not seem to be any high level plan from Microsoft to ensure consistency and continuity across the different product lines. With the introduction of ASP.NET MVC, it also made some of the third party control vendors scratch their heads, and wonder what the heck Microsoft was thinking. The original value proposition of ASP.NET over Classic ASP was the ability for web developers to emulate the highly productive desktop development model by using abstract components for creating rich, interactive web interfaces. Web control vendors like Telerik, Infragistics, DevExpress, and ComponentArt had all built sizable businesses offering powerful user interface components to WebForms developers. 
And even after MVC was introduced these vendors continued to improve their products, offering greater productivity and a superior user experience via AJAX to what was possible in MVC. And since many developers were comfortable and satisfied with these third party solutions, the demand remained strong and the third party web control market continued to prosper despite the availability of MVC. While all of this was going on in the Microsoft ecosystem, there has also been a fundamental shift in the general software development industry. Driven by the explosion of Internet-enabled devices, the focus has now centered on service-oriented architecture (SOA). Service-oriented architecture is all about defining a public API for your product that any client can consume; whether it’s a native application running on a smart phone or tablet, a web browser taking advantage of HTML5 and Javascript, or a rich desktop application running on a PC. REST-based services which utilize the less verbose characteristics of JSON as a transport mechanism, have become the preferred approach over older, more bloated SOAP-based techniques. SOA also has the benefit of producing a cross-platform API, as every major technology stack is able to interact with standard REST-based web services. And for web applications, more and more developers are turning to robust Javascript libraries like JQuery and Knockout for browser-based client-side development techniques for calling web services and rendering content to end users. In fact, traditional server-side page rendering has largely fallen out of favor, resulting in decreased demand for server-side frameworks like Ruby on Rails, WebForms, and (gasp) MVC. In response to these new industry trends, Microsoft did what it always does – it immediately poured some resources into developing a solution which will ensure they remain relevant and competitive in the web space. This work culminated in a new framework which was branded as Web API. It is convention-based and designed to embrace native HTTP standards without copious layers of abstraction. This framework is designed to be the ultimate replacement for both the REST aspects of WCF and ASP.NET MVC Web Services. And since it was developed out of band with a dependency only on ASP.NET 4.0, it means that it can be used immediately in a variety of production scenarios. So at Tech Ed 2012 it was made abundantly clear in numerous sessions that Microsoft views Web API as the “Future of ASP.NET”. In fact, one Microsoft PM even went as far as to say that if we look 3-4 years into the future, that all ASP.NET web applications will be developed using the Web API approach. This is a fairly bold prediction and clearly telegraphs where Microsoft plans to allocate its resources going forward. Currently Web API is being delivered as part of the MVC4 package, but this is only temporary for the sake of convenience. It also sounds like there are still internal discussions going on in terms of how to brand the various aspects of ASP.NET going forward – perhaps the moniker of “ASP.NET Web Stack” coined a couple years ago by Scott Hanselman and utilized as part of the open source release of ASP.NET bits on Codeplex a few months back will eventually stick. Web API is being positioned as the unification of ASP.NET – the glue that is able to pull this fragmented mess back together again. The  “One ASP.NET” strategy will promote the use of all frameworks - WebForms, MVC, and Web API, even within the same web project. 
Basically the message is utilize the appropriate aspects of each framework to solve your business problems. Instead of navigating developers to a fork in the road, the plan is to educate them that “hybrid” applications are a great strategy for delivering solutions to customers. In addition, the service-oriented approach coupled with client-side development promoted by Web API can effectively be used in both WebForms and MVC applications. So this means it is also relevant to application platforms like DotNetNuke and Sharepoint, which means that it starts to create a unified development strategy across all ASP.NET product lines once again. And so what about MVC? There have actually been rumors floated that MVC has reached a stage of maturity where, similar to WebForms, it will be treated more as a maintenance product line going forward ( MVC4 may in fact be the last significant iteration of this framework ). This may sound alarming to some folks who have recently adopted MVC but it really shouldn’t, as both WebForms and MVC will continue to play a vital role in delivering solutions to customers. They will just not be the primary area where Microsoft is spending the majority of its R&D resources. That distinction will obviously go to Web API. And when the question comes up of why not enhance MVC to make it work with Web API, you must take a step back and look at this from the higher level to see that it really makes no sense. MVC is a server-side page compositing framework; whereas, Web API promotes client-side page compositing with a heavy focus on web services. In order to make MVC work well with Web API, would require a complete rewrite of MVC and at the end of the day, there would be no upgrade path for existing MVC applications. So it really does not make much business sense. So what does this have to do with DotNetNuke? Well, around 8-12 months ago we recognized the software industry trends towards web services and client-side development. We decided to utilize a “hybrid” model which would provide compatibility for existing modules while at the same time provide a bridge for developers who wanted to utilize more modern web techniques. Customers who like the productivity and familiarity of WebForms can continue to build custom modules using the traditional approach. However, in DotNetNuke 6.2 we also introduced a new Service Framework which is actually built on top of MVC2 ( we chose to leverage MVC because it had the most intuitive, light-weight REST implementation in the .NET stack ). The Services Framework allowed us to build some rich interactive features in DotNetNuke 6.2, including the Messaging and Notification Center and Activity Feed. But based on where we know Microsoft is heading, it makes sense for the next major version of DotNetNuke ( which is expected to be released in Q4 2012 ) to migrate from MVC2 to Web API. This will likely result in some breaking changes in the Services Framework but we feel it is the best approach for ensuring the platform remains highly modern and relevant. The fact that our development strategy is perfectly aligned with the “One ASP.NET” strategy from Microsoft means that our customers and developer community can be confident in their current and future investments in the DotNetNuke platform.

    Read the article
