Search Results

Search found 7291 results on 292 pages for 'runtime exec'.


  • Understanding JVM memory usage - Java SE Embedded 8

    - by kshimizu-Oracle
    The memory a Java process consumes on the OS can be divided into roughly three areas: HEAP, NON-HEAP, and the JVM's own working memory.
    HEAP: the area where Java objects are allocated.
    NON-HEAP: memory the JVM manages outside the heap. It consists mainly of two areas, the Code Cache and Metaspace.
    Code Cache: holds the native code generated by the JIT compiler.
    Metaspace: holds class metadata, outside the heap.
    JVM working memory: memory the VM itself uses internally, such as thread stacks.
    HEAP usage can be monitored with Java Mission Control, and Java SE also provides standard APIs for querying it, so an application can check its own heap at runtime (Mission Control itself builds on those APIs). The maximum heap size, set with the "-Xmx" VM option, is returned by java.lang.Runtime.maxMemory(). The current heap size, which starts from the "-Xms" VM option and can grow up to "-Xmx", is returned by java.lang.Runtime.totalMemory(). The unused portion of the current heap is returned by java.lang.Runtime.freeMemory(). NON-HEAP usage is not exposed through the Runtime API; use Java Mission Control, where it appears under "NON_HEAP".
    Note that HEAP plus NON-HEAP is not the process's total footprint: the Java VM itself consumes additional memory, such as thread stacks, that is not included in either figure. To measure total consumption, check at the OS level; on Linux, read the Java process's entry in procfs (VmHWM or VmRSS) and compare it with the HEAP/NON-HEAP numbers.
    Memory consumption also depends on which JVM you run. The Oracle JVMs for Embedded are designed to trade CPU performance against memory footprint, and the Minimal/Client/Server JVM variants and the Compact profiles differ considerably in how much memory they need. See chapters 2 - 3 of the Concept Guide (http://docs.oracle.com/javase/8/embedded/embedded-concepts/basic-concepts.htm) for details.
    Finally, the main VM options for tuning memory use:
    -Xms: initial heap size. A smaller value reduces the initial footprint, but the heap will then be resized as the application allocates, which costs time.
    -Xmx: maximum heap size.
    -XX:ReservedCodeCacheSize: size of the Code Cache. For example, if the JIT compiler is disabled, the Code Cache is not used and this can be set to 0.
    -Xint: run in interpreter-only mode, disabling the JIT compiler. Execution is slower, but no Code Cache is consumed.
    -Xss: per-thread stack size. Reducing it saves memory per thread, but too small a value risks stack overflows.
    -XX:CompileThreshold: the number of invocations after which a method is JIT-compiled. A larger value means fewer methods get compiled, reducing Code Cache usage at some cost in speed.
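    For reference, a minimal sketch of querying the HEAP and NON-HEAP figures discussed above from inside an application. The heap numbers come from java.lang.Runtime as described; the non-heap figure assumes the java.lang.management API is available on the target profile, which is not the case on the smallest compact profiles:

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryMXBean;

        public class MemoryReport {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                System.out.println("max heap (-Xmx): " + rt.maxMemory());    // heap ceiling
                System.out.println("current heap:    " + rt.totalMemory());  // between -Xms and -Xmx
                System.out.println("free heap:       " + rt.freeMemory());   // unused part of current heap

                // NON-HEAP (Code Cache + Metaspace) via the management API --
                // an assumption here, since Runtime itself does not expose it.
                MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
                System.out.println("non-heap used:   " + mem.getNonHeapMemoryUsage().getUsed());
            }
        }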


  • Installing VSTO 4.0 Causes VSTO 3.0 Addin to quit working

    - by Jacob Adams
    I just installed Visual Studio 2010 yesterday. As part of that I installed VSTO 4.0. Now when I run any Office application, my VSTO 3.0 addins fail to load. The error in the event log is:
    Customization URI: file:///H:/PathToMyAddin/MyAddin.vsto
    Exception: Customization does not have the permissions required to create an application domain.
    ***** Exception Text *****
    Microsoft.VisualStudio.Tools.Applications.Runtime.CannotCreateCustomizationDomainException: Customization does not have the permissions required to create an application domain. ---> System.Security.SecurityException: Customized functionality in this application will not work because the administrator has listed file:///H:/PathToMyAddin/MyAddin.vsto as untrusted. Contact your administrator for further assistance.
       at Microsoft.VisualStudio.Tools.Office.Runtime.RuntimeUtilities.VerifySolutionUri(Uri uri)
       at Microsoft.VisualStudio.Tools.Office.Runtime.DomainCreator.CreateCustomizationDomainInternal(String solutionLocation, String manifestName, String documentName, Boolean showUIDuringDeployment, IntPtr hostServiceProvider, IntPtr& executor)
    The Zone of the assembly that failed was: MyComputer
    It seems like maybe this is due to it trying to load a different version of .NET in the same process/AppDomain. However, the error would indicate it's some sort of permissions issue.


  • Can I set a <probing> privatePath in ~/subdir/web.config without access to the root application .config?

    - by Bago
    In ASP.NET, is it possible to set a probing path from a web.config in a subdir that is not set up as a virtual directory? In other words, can I reference a private assembly in the ~/subdir/bin folder? Take a look at my setup below, and let me know if I'm doing something wrong, please. Let me explain why I'm doing this:
    - my page is set up in ~/subdir, and I don't have write access to the root
    - I only have FTP access to the server (i.e., I am not an IIS admin and I can't log in to the machine)
    - I am trying to use FCKeditor in my subdir application
    Here is my folder structure:
    /
    |- subdir
    |  |- Bin
    |  |  * FredCK.FCKeditorV2.dll
    |  * Default.aspx
    |  * web.config
    Here is the <runtime> section of ~/subdir/web.config:
    <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
        <probing privatePath="subdir;subdir/bin" />
      </assemblyBinding>
    </runtime>
    I've tried all sorts of things to get it to work. However, upon checking the Fusion logs, my subdir never shows up in the probing paths.


  • Why were namespaces removed from ECMAScript consideration?

    - by Bob
    Namespaces were once a consideration for ECMAScript (the old ECMAScript 4) but were taken out. As Brendan Eich says in this message: One of the use-cases for namespaces in ES4 was early binding (use namespace intrinsic), both for performance and for programmer comprehension -- no chance of runtime name binding disagreeing with any earlier binding. But early binding in any dynamic code loading scenario like the web requires a prioritization or reservation mechanism to avoid early versus late binding conflicts. Plus, as some JS implementors have noted with concern, multiple open namespaces impose runtime cost unless an implementation works significantly harder. For these reasons, namespaces and early binding (like packages before them, this past April) must go. But I'm not sure I understand all of that. What exactly is a prioritization or reservation mechanism and why would either of those be needed? Also, must early binding and namespaces go hand-in-hand? For some reason I can't wrap my head around the issues involved. Can anyone attempt a more fleshed out explanation? Also, why would namespaces impose runtime costs? In my mind I can't help but see little difference in concept between a namespace and a function using closures. For instance, Yahoo and Google both have YAHOO and google objects that "act like" namespaces in that they contain all of their public and private variables, functions, and objects within a single access point. So why, then, would a namespace be so significantly different in implementation? Maybe I just have a misconception as to what a namespace is exactly.


  • Error when installing plugin in Eclipse

    - by Derk
    When I try to install a plugin in Eclipse I get these error messages:
    Registry event dispatcher
    Error notifying registry change listener.
    Error notifying registry change listener. Invalid registry object
    Error notifying registry change listener. Invalid registry object
    Error notifying registry change listener. Invalid registry object
    Error notifying registry change listener. Invalid registry object
    Error notifying registry change listener. Invalid registry object
    Does anyone have an idea what the cause of this problem could be? Thanks
    Edit: I see the Eclipse .log file also has a lot of new stack traces. The first one is:
    java.vendor=Sun Microsystems Inc.
    BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=nl_NL
    Framework arguments: -product org.eclipse.epp.package.jee.product
    Command-line arguments: -os win32 -ws win32 -arch x86_64 -product org.eclipse.epp.package.jee.product
    !ENTRY org.eclipse.equinox.registry 4 2 2010-05-06 21:04:31.236
    !MESSAGE Problems occurred when invoking code from plug-in: "org.eclipse.equinox.registry".
    !STACK 0
    org.eclipse.core.runtime.InvalidRegistryObjectException: Invalid registry object
        at org.eclipse.core.internal.registry.TemporaryObjectManager.getObject(TemporaryObjectManager.java:98)
        at org.eclipse.core.internal.registry.BaseExtensionPointHandle.getExtensionPoint(BaseExtensionPointHandle.java:106)
        at org.eclipse.core.internal.registry.BaseExtensionPointHandle.getContributor(BaseExtensionPointHandle.java:45)
        at org.eclipse.core.internal.registry.BaseExtensionPointHandle.getNamespace(BaseExtensionPointHandle.java:37)
        at org.eclipse.ui.internal.PopupMenuExtender.registryChanged(PopupMenuExtender.java:520)
        at org.eclipse.core.internal.registry.ExtensionRegistry$2.run(ExtensionRegistry.java:921)
        at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
        at org.eclipse.core.internal.registry.ExtensionRegistry.processChangeEvent(ExtensionRegistry.java:919)
        at org.eclipse.core.runtime.spi.RegistryStrategy.processChangeEvent(RegistryStrategy.java:260)
        at org.eclipse.core.internal.registry.osgi.ExtensionEventDispatcherJob.run(ExtensionEventDispatcherJob.java:50)
        at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)


  • ServerIdentity memory leak with IHttpAsyncHandler

    - by Anton
    I have a .NET web application that consists of a single HTTP handler class that implements IHttpAsyncHandler. All requests to this handler are handled asynchronously, though some requests are short-lived and some are long-lived (nothing over a few seconds). The problem is that memory consumption grows over time as requests are handled. All profiling results point to an unbounded growth of String objects held by instances of System.Runtime.Remoting.ServerIdentity. Every String value is different, but they all look similar to:
    /dd41c00e_1566_4702_b660_c81cdea18a43/vigefresi5pfv8n0ekddg57z_1154.rem
    There is nothing in my application that uses ServerIdentity directly, and unless I am mistaken, the ServerIdentity instances are proportional to the number of incoming requests. If this is an internal .NET structure, it looks like the CLR is not cleaning up after itself. What could be causing the leak?
    UPDATE: A little less than half of the String objects are being held by System.Runtime.Remoting. The remaining String objects are being held by System.Runtime.Serialization and look similar to:
    +1sgess5rjcrgbmp3kqr6bmv_3474.rem
    Also, the problem only seems to occur when lots of simultaneous HTTP web requests arrive.


  • Delegates in .NET: how are they constructed?

    - by Saulius
    While inspecting delegates in C# and .NET in general, I noticed some interesting facts. Creating a delegate in C# creates a class derived from MulticastDelegate with a constructor:
    .method public hidebysig specialname rtspecialname instance void .ctor(object 'object', native int 'method') runtime managed { }
    Meaning that it expects the instance and a pointer to the method. Yet the syntax of constructing a delegate in C# suggests that it has a constructor new MyDelegate(int () target), where I can recognise int () as a function instance (int *target() would be a function pointer in C++). So obviously the C# compiler picks out the correct method from the method group defined by the function name and constructs the delegate. So the first question would be: where does the C# compiler (or Visual Studio, to be precise) pick this constructor signature from? I did not notice any special attributes or anything else that would make a distinction. Is this some sort of compiler/Visual Studio magic? If not, is the T (args) target construction valid in C#? I did not manage to get anything with it to compile, e.g.:
    int () target = MyMethod;
    is invalid, and so is doing anything with MyMethod, e.g. calling .ToString() on it (well, this does make some sense, since that is technically a method group, but I imagine it should be possible to explicitly pick out a method by casting, e.g. (int())MyFunction). So is all of this purely compiler magic? Looking at the construction through Reflector reveals yet another syntax:
    Func CS$1$0000 = new Func(null, (IntPtr) Foo);
    This is consistent with the disassembled constructor signature, yet this does not compile! One final interesting note is that the classes Delegate and MulticastDelegate have yet other sets of constructors:
    .method family hidebysig specialname rtspecialname instance void .ctor(class System.Type target, string 'method') cil managed
    Where does the transition from an instance and method pointer to a type and a string method name occur? Can this be explained by the runtime managed keywords in the custom delegate constructor signature, i.e. does the runtime do its job here?


  • Quartz compositions created in Snow Leopard (10.6) don't work in Leopard (10.5) despite testing in Runtime

    - by adib
    Hi, I have a reasonably advanced (many patches and subpatches) Quartz composition that was created in Snow Leopard but doesn't run well (many elements are not rendered) in Leopard. The composition tested OK via Quartz Composer's Test in Runtime option and works fine for both Leopard 32-bit and Leopard 64-bit (menu item "File | Test in Runtime | Leopard 32-bits"). On an actual Leopard (32-bit) system, a lot of elements are not rendered in the quartz composition. Below is the log file excerpt when the composition is run in QuickTime Player under Leopard:
    QuickTime Player[134] *** <QCNodeManager | namespace = "com.apple.QuartzComposer" | 335 nodes>: Patch with name "/units to pixels" is missing
    QuickTime Player[134] *** Message from <QCPatch = 0x06D82880 "(null)">: Cannot create node of class "/units to pixels" and identifier "(null)"
    QuickTime Player[134] *** Message from <QCPatch = 0x06D7C130 "(null)">: Cannot create node of class "/resize image to target" and identifier "(null)"
    QuickTime Player[134] *** Message from <QCPatch = 0x06D7C130 "(null)">: Cannot create connection from ["outputValue" @ "Math_1"] to ["Target_Pixels" @ "Patch_2"]
    The patch "units to pixels" is a system patch in Snow Leopard, whereas the patch "resize image to target" is a custom virtual patch located in my home directory. It seems that we can cross out problems in which the composition references a missing virtual patch: I have tested the composition under another user's account and it ran fine, which shows that it already embeds the "resize image to target" virtual patch that is located in my home directory. I'm really puzzled why the composition passes the Leopard runtime test but yet fails to run on an actual Leopard OS. Is there a post-processing step that I need to run on the composition file? Is there any way to make this patch more compatible with Leopard? Thanks in advance.


  • How can I add a border to a ListViewItem? (ListView in GridView mode)

    - by Andrew
    Hello! I want to have a border around a ListViewItem (a row, in my case). The ListView source and columns are generated at runtime. In XAML I have this structure:
    <ListView Name="listViewRaw">
      <ListView.View>
        <GridView>
        </GridView>
      </ListView.View>
    </ListView>
    At runtime I bind the ListView to a DataTable, adding the necessary columns and bindings:
    var view = (listView.View as GridView);
    view.Columns.Clear();
    for (int i = 0; i < table.Columns.Count; i++)
    {
        GridViewColumn col = new GridViewColumn();
        col.Header = table.Columns[i].ColumnName;
        col.DisplayMemberBinding = new Binding(string.Format("[{0}]", i.ToString()));
        view.Columns.Add(col);
    }
    listView.CoerceValue(ListView.ItemsSourceProperty);
    listView.DataContext = table;
    listView.SetBinding(ListView.ItemsSourceProperty, new Binding());
    So I want to add a border around each row and set the border behavior (color etc.) with DataTriggers (for example, if the value in the 1st column = "Visible", set the border color to black). Can I put the border in through a DataTemplate in ItemTemplate? I know a solution where you manipulate CellTemplates, but I don't really like it. I want something like this, if it is even possible:
    <DataTemplate>
      <Border Name="Border" BorderBrush="Transparent" BorderThickness="2">
        <ListViewItemRow><!-- Put my row here, but I'll know about the table structure only at runtime --></ListViewItemRow>
      </Border>
    </DataTemplate>


  • Load DLLs from the environment variable Path from a service

    - by Paulo Manuel Santos
    We install the Matlab Runtime on a machine, then we restart a .NET Windows service that invokes methods from the Matlab Runtime. The problem is that we receive TypeInitializationException errors until we restart Windows. We think this happens because environment variables are not refreshed for services until restart, and Matlab uses the %Path% variable to reference its core DLLs. My question is: do you think I can change the %Path% variable so that Matlab will use it when referencing the core DLLs for its engine? Or is it possible to add a directory to the runtime DLL loading mechanism of .NET so that those Matlab core DLLs would be referenced correctly without restarting the machine? Here is the exception we get:
    System.TypeInitializationException: The type initializer for 'MatlabCalculation.Calculation' threw an exception. ---> System.TypeInitializationException: The type initializer for 'MathWorks.MATLAB.NET.Utility.MWMCR' threw an exception. ---> System.DllNotFoundException: Unable to load DLL 'mclmcrrt710.dll': Kan opgegeven module niet vinden. (Exception from HRESULT: 0x8007007E)
       at MathWorks.MATLAB.NET.Utility.MWMCR.mclmcrInitialize()
       at MathWorks.MATLAB.NET.Utility.MWMCR..cctor()
       --- End of inner exception stack trace ---
       at MatlabCalculation.Calculation..cctor()
       --- End of inner exception stack trace ---
       at MatlabCalculation.Calculation.Finalize()
    ("Kan opgegeven module niet vinden" = "The specified module could not be found")


  • N-Tier Architecture - Structure with multiple projects in VB.NET

    - by focus.nz
    I would like some advice on the best approach to use in the following situation. I will have a Windows application and a web application (presentation layers); these will both access a common business layer. The business layer will look at a configuration file to find the name of the DLL (data layer) which it will create a reference to at runtime (is this the best approach?). The reason for creating the reference to the data access layer at runtime is that the application will interface with a different 3rd-party accounting system depending on what the client is using. So I would have a separate data access layer to support each accounting system. These could be separate setup projects; each client would use one or the other, they wouldn't need to switch between the two. Projects:
    MyCompany.Common.dll - contains interfaces; all other projects have a reference to this one.
    MyCompany.Windows.dll - Windows Forms project, references MyCompany.Business.dll
    MyCompany.Web.dll - website project, references MyCompany.Business.dll
    MyCompany.Business.dll - business layer, references MyCompany.Data.* (at runtime)
    MyCompany.Data.AccountingSys1.dll - data layer for accounting system 1
    MyCompany.Data.AccountingSys2.dll - data layer for accounting system 2
    The project MyCompany.Common.dll would contain all the interfaces; each other project would have a reference to this one.
    Public Interface ICompany
        ReadOnly Property Id() As Integer
        Property Name() As String
        Sub Save()
    End Interface
    Public Interface ICompanyFactory
        Function CreateCompany() As ICompany
    End Interface
    The projects MyCompany.Data.AccountingSys1.dll and MyCompany.Data.AccountingSys2.dll would contain classes like the following:
    Public Class Company
        Implements ICompany
        Protected _id As Integer
        Protected _name As String
        Public ReadOnly Property Id As Integer Implements MyCompany.Common.ICompany.Id
            Get
                Return _id
            End Get
        End Property
        Public Property Name As String Implements MyCompany.Common.ICompany.Name
            Get
                Return _name
            End Get
            Set(ByVal value As String)
                _name = value
            End Set
        End Property
        Public Sub Save() Implements MyCompany.Common.ICompany.Save
            Throw New NotImplementedException()
        End Sub
    End Class
    Public Class CompanyFactory
        Implements ICompanyFactory
        Public Function CreateCompany() As ICompany Implements MyCompany.Common.ICompanyFactory.CreateCompany
            Return New Company()
        End Function
    End Class
    The project MyCompany.Business.dll would provide the business rules and retrieve data from the data layer:
    Public Class Companies
        Public Shared Function CreateCompany() As ICompany
            Dim factory As New MyCompany.Data.CompanyFactory
            Return factory.CreateCompany()
        End Function
    End Class
    Any opinions/suggestions would be greatly appreciated.


  • Annotation to make a generic type available

    - by mdma
    Given a generic interface like
    interface DomainObjectDAO<T> {
        T newInstance();
        void add(T t);
        void remove(T t);
        T findById(int id);
        // etc...
    }
    I'd like to create a subinterface that specifies the type parameter:
    interface CustomerDAO extends DomainObjectDAO<Customer> {
        // customer-specific queries - incidental.
    }
    The implementation needs to know the actual template parameter type, but of course type erasure means it isn't available at runtime. Is there some annotation that I could include to declare the interface type? Something like:
    @GenericParameter(Customer.class)
    interface CustomerDAO extends DomainObjectDAO<Customer> {
    }
    The implementation could then fetch this annotation from the interface and use it as a substitute for runtime generic type access. Some background: this interface is implemented using JDK dynamic proxies, as outlined here. The non-generic version of this interface has been working well, but it would be nicer to use generics and not have to create a subinterface for each domain object type. The actual type is needed at runtime to implement the newInstance method, amongst others.
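    As a side note, when the subinterface pins the type parameter the way CustomerDAO does, the actual argument survives erasure in the interface's class metadata and can be read back reflectively, without any annotation. A minimal sketch (the resolveDomainType helper is hypothetical, not part of any library):

        import java.lang.reflect.ParameterizedType;
        import java.lang.reflect.Type;

        final class DaoTypes {
            // Hypothetical helper: finds the T in "CustomerDAO extends DomainObjectDAO<T>".
            static Class<?> resolveDomainType(Class<?> daoInterface) {
                for (Type t : daoInterface.getGenericInterfaces()) {
                    if (t instanceof ParameterizedType) {
                        ParameterizedType pt = (ParameterizedType) t;
                        if (pt.getRawType() == DomainObjectDAO.class) {
                            return (Class<?>) pt.getActualTypeArguments()[0];
                        }
                    }
                }
                throw new IllegalArgumentException(daoInterface + " does not fix DomainObjectDAO<T>");
            }
        }

        // resolveDomainType(CustomerDAO.class) would return Customer.class,
        // which the dynamic proxy could use to implement newInstance().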


  • Type error while trying to implement the (>>=) function in order to create a custom monad transformer

    - by CharlieP
    Hello, I'm trying to create a monad transformer for a future project, but unfortunately my implementation of the Monad typeclass's (>>=) function doesn't work. First of all, here is the underlying monad's implementation:
    newtype Runtime a = R { unR :: State EInfo a } deriving (Monad)
    Here, the implementation of the Monad typeclass is done automatically by GHC (using the GeneralizedNewtypeDeriving language pragma). The monad transformer is defined as so:
    newtype RuntimeT m a = RuntimeT { runRuntimeT :: m (Runtime a) }
    The problem comes from the way I instantiate the (>>=) function of the Monad typeclass:
    instance (Monad m) => Monad (RuntimeT m) where
        return a = RuntimeT $ (return . return) a
        x >>= f = runRuntimeT x >>= id >>= f
    The way I see it, the first (>>=) runs in the underlying m monad. Thus, runRuntimeT x >>= returns a value of type Runtime a (right?). Then, the following code, id >>=, should return a value of type a. This value is then passed on to the function f, of type f :: (Monad m) => a -> RuntimeT m b. And here comes the type problem: the f function's type doesn't match the type required by the (>>=) function. How can I make this coherent? I can see why this doesn't work, but I can't manage to turn it into something functional. Thank you for your help, and do not hesitate to correct any flaws in my message. Charlie P.


  • How to implement a counter when using golang's goroutines?

    - by MrROY
    I'm trying to make a queue struct that has push and pop functions. I need to use 10 threads to push and another 10 threads to pop data, just like I did in the code below. Questions:
    1. I need to print out how much I have pushed/popped, but I don't know how to do that.
    2. Is there any way to speed up my code? The code is too slow for me.
    package main

    import (
        "runtime"
        "time"
    )

    const (
        DATA_SIZE_PER_THREAD = 10000000
    )

    type Queue struct {
        records string
    }

    func (self Queue) push(record chan interface{}) {
        // need push counter
        record <- time.Now()
    }

    func (self Queue) pop(record chan interface{}) {
        // need pop counter
        <-record
    }

    func main() {
        runtime.GOMAXPROCS(runtime.NumCPU())
        // record chan
        record := make(chan interface{}, 1000000)
        // finish flag chan
        finish := make(chan bool)
        queue := new(Queue)
        for i := 0; i < 10; i++ {
            go func() {
                for j := 0; j < DATA_SIZE_PER_THREAD; j++ {
                    queue.push(record)
                }
                finish <- true
            }()
        }
        for i := 0; i < 10; i++ {
            go func() {
                for j := 0; j < DATA_SIZE_PER_THREAD; j++ {
                    queue.pop(record)
                }
                finish <- true
            }()
        }
        for i := 0; i < 20; i++ {
            <-finish
        }
    }


  • How can I get the correct DisplayMetrics from an AppWidget in Android?

    - by Gary
    I need to determine the screen density at runtime in an Android AppWidget. I've set up an HDPI emulator device (avd). If I set up a regular executable project and insert this code into the onCreate method:
    DisplayMetrics dm = getResources().getDisplayMetrics();
    Log.d("MyTag", "screen density " + dm.densityDpi);
    it outputs "screen density 240" as expected. However, if I set up an AppWidget project and insert this code into the onUpdate method:
    DisplayMetrics dm = context.getResources().getDisplayMetrics();
    Log.d("MyTag", "screen density " + dm.densityDpi);
    it outputs "screen density 160". I noticed, hooking up the debugger, that the mDefaultDisplay member of the Resources object here is null in the AppWidget case. Similarly, if I get a resource at runtime using the Resources object obtained from context.getResources() in the AppWidget, it returns the wrong resource based on screen density. For instance, I have a 60x60px drawable for mdpi, and an 80x80 drawable for hdpi. If I get this Drawable object using context.getResources().getDrawable(...), it returns the 60x60 version. Is there any way to correctly deal with resources at runtime from the context of an AppWidget? Thanks!
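    One workaround worth trying — an untested sketch, assuming the window service is reachable from the AppWidget's broadcast context — is to ask the default display for its metrics directly instead of going through Resources:

        import android.content.Context;
        import android.util.DisplayMetrics;
        import android.util.Log;
        import android.view.WindowManager;

        // Inside onUpdate(Context context, AppWidgetManager mgr, int[] ids):
        WindowManager wm = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
        DisplayMetrics dm = new DisplayMetrics();
        wm.getDefaultDisplay().getMetrics(dm); // fills dm from the display itself
        Log.d("MyTag", "screen density " + dm.densityDpi);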


  • JVM version for WebSphere 6.1.0.23 on Solaris

    - by dr jerry
    Hi, I'm at a big financial institution and we have an application running on WebSphere 6.1 on Solaris. Due to MQ connectivity we had to install fixpack 6.1.0.23. Unfortunately this broke an EJB (1.1) which is still there as legacy (testing missed it).
    [3/23/10 11:33:18:703 CET] 00000055 EJBContainerI E WSVR0068E: Attempt to start EnterpriseBean EventRisk_1.0.0#EventRiskEJB.jar#PolicyDataManager failed with exception: java.lang.NoSuchMethodError: com.ibm.ejs.csi.ResRefListImpl.<init>(Lorg/eclipse/jst/j2ee/ejb/EnterpriseBean;Lcom/ibm/ejs/models/base/bindings/ejbbnd/EnterpriseBeanBinding;Lcom/ibm/ejs/models/base/extensions/ejbext/EnterpriseBeanExtension;)V
        at com.ibm.ws.metadata.ejb.EJBMDOrchestrator.finishBMDInit(EJBMDOrchestrator.java:1364)
        at com.ibm.ws.runtime.component.EJBContainerImpl.finishDeferredBeanMetaData(EJBContainerImpl.java:4829)
        at com.ibm.ws.runtime.component.EJBContainerImpl$3.run(EJBContainerImpl.java:4631)
        at java.security.AccessController.doPrivileged(Native Method)
        at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:125)
        at com.ibm.ws.runtime.component.EJBContainerImpl.initializeDeferredEJB(EJBContainerImpl.java:4627)
        at com.ibm.ejs.container.HomeOfHomes.getHome(HomeOfHomes.java:390)
        at com.ibm.ejs.container.HomeOfHomes.internalCreateWrapper(HomeOfHomes.java:938)
        at com.ibm.ejs.container.EJSContainer.createWrapper(EJSContainer.java:4783)
        at com.ibm.ejs.container.WrapperManager.faultOnKey(WrapperManager.java:545)
        at com.ibm.ejs.util.cache.Cache.findAndFault(Cache.java:498)
        at com.ibm.ejs.container.WrapperManager.keyToObject(WrapperManager.java:489)
    We cannot reproduce the issue on our desktop boxes (it all works fine there) and we do not have direct access to the Solaris machines (we depend on the deployment department). We do suspect a discrepancy in the JVM, but we're not sure. My question is twofold:
    1. Can you confirm IBM's statement that fixpack 6.1.0.23 for Solaris indeed runs on JVM 1.5.0_17b04? Our installation tells us:
    ./java -version
    java version "1.5.0_13"
    But the deployment department is not eager to investigate.
    2. Do you see some other solution, apart from hiring Big Blue's con$ultancy?
    Kind regards, Jeroen.
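    For what it's worth, one low-friction way to see which JVM the application server itself runs on (rather than whatever ./java happens to resolve to) is to log the JVM's own system properties from inside a deployed class — a sketch, with the class name and deployment vehicle left as assumptions:

        public class JvmInfo {
            public static void main(String[] args) {
                // The same three lines could go into a servlet or JSP on the server
                // to report on the exact JVM WebSphere runs the application on.
                System.out.println("java.version    = " + System.getProperty("java.version"));
                System.out.println("java.vm.version = " + System.getProperty("java.vm.version"));
                System.out.println("java.vendor     = " + System.getProperty("java.vendor"));
            }
        }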


  • Trying to use authlogic-connect as a plugin in place of a gem - server doesn't start

    - by Arkid
    I am trying to use authlogic-connect as a plugin in Rails 3 in place of a gem. I have made an entry in the Gemfile as:
    gem "authlogic-connect", :require => "authlogic-connect", :path => "localgems"
    Now when I run bundle install, it runs fine. When I try to start the server I get the error:
    Could not find gem 'authlogic-connect (>= 0, runtime)' in source at localgems.
    Source does not contain any versions of 'authlogic-connect (>= 0, runtime)'
    Try running `bundle install`.
    I have placed the unzipped gem, renamed to authlogic-connect, in the localgems folder. What is the problem? Here is what I get on using rails plugin install:
    arkidmitra$ rails plugin install git://github.com/viatropos/authlogic-connect.git
    Usage:
      rails new APP_PATH [options]
    Options:
      [--skip-gemfile]            # Don't create a Gemfile
      -d, [--database=DATABASE]   # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db)
                                  # Default: sqlite3
      -O, [--skip-active-record]  # Skip Active Record files
      [--dev]                     # Setup the application with Gemfile pointing to your Rails checkout
      -J, [--skip-prototype]      # Skip Prototype files
      -T, [--skip-test-unit]      # Skip Test::Unit files
      -G, [--skip-git]            # Skip Git ignores and keeps
      -r, [--ruby=PATH]           # Path to the Ruby binary of your choice
                                  # Default: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
      -m, [--template=TEMPLATE]   # Path to an application template (can be a filesystem path or URL)
      -b, [--builder=BUILDER]     # Path to an application builder (can be a filesystem path or URL)
      [--edge]                    # Setup the application with Gemfile pointing to Rails repository
    Runtime options:
      -q, [--quiet]    # Supress status output
      -s, [--skip]     # Skip files that already exist
      -f, [--force]    # Overwrite files that already exist
      -p, [--pretend]  # Run but do not make any changes
    Rails options:
      -h, [--help]     # Show this help message and quit
      -v, [--version]  # Show Rails version number and quit
    Description:
      The 'rails new' command creates a new Rails application with a default directory structure and configuration at the path you specify.
    Example:
      rails new ~/Code/Ruby/weblog
      This generates a skeletal Rails installation in ~/Code/Ruby/weblog. See the README in the newly created application to get going.


  • JVM segmentation faults due to "Invalid memory access of location"

    - by Dan
    I have a small project written in Scala 2.9.2 with unit tests written using ScalaTest. I use SBT for compiling and running my tests. Running sbt test on my project makes the JVM segfault regularly, but just compiling and running my project from SBT works fine. Here is the exact error message:
    Invalid memory access of location 0x8 rip=0x10959f3c9
    [1] 11925 segmentation fault sbt
    I cannot locate a core dump anywhere, but would be happy to provide it if it can be obtained. Running java -version results in this:
    java version "1.6.0_37"
    Java(TM) SE Runtime Environment (build 1.6.0_37-b06-434-11M3909)
    Java HotSpot(TM) 64-Bit Server VM (build 20.12-b01-434, mixed mode)
    But I've also got Java 7 installed (though I was never able to actually run a Java program with it, afaik). Another issue that may be related: some of my test cases contain titles including parentheses like ( and ). SBT or ScalaTest (not sure) will consequently insert square parens in the middle of the output. For example, a test case with the name (..)..(..) might suddenly look like (..[)..](..). Any help resolving these issues is much appreciated :-)
    EDIT: I installed the Java 7 JDK, so now java -version shows the right thing:
    java version "1.7.0_07"
    Java(TM) SE Runtime Environment (build 1.7.0_07-b10)
    Java HotSpot(TM) 64-Bit Server VM (build 23.3-b01, mixed mode)
    This also means that I now get a more detailed segfault error and a core dump:
    #
    # A fatal error has been detected by the Java Runtime Environment:
    #
    #  SIGSEGV (0xb) at pc=0x000000010a71a3e3, pid=16830, tid=19459
    #
    # JRE version: 7.0_07-b10
    # Java VM: Java HotSpot(TM) 64-Bit Server VM (23.3-b01 mixed mode bsd-amd64 compressed oops)
    # Problematic frame:
    # V  [libjvm.dylib+0x3cd3e3]
    And the dump.


  • Why does Java's invokevirtual need to resolve the called method's compile-time class?

    - by Chris
    Consider this simple Java class:
    class MyClass {
        public void bar(MyClass c) {
            c.foo();
        }
    }
    I want to discuss what happens on the line c.foo(). At the bytecode level, the meat of c.foo() will be the invokevirtual opcode, and, according to the documentation for invokevirtual, more or less the following will happen:
    1. Look up the foo method defined in compile-time class MyClass. (This involves first resolving MyClass.)
    2. Do some checks, including: verify that c is not an initialization method, and verify that calling MyClass.foo wouldn't violate any protected modifiers.
    3. Figure out which method to actually call. In particular, look up c's runtime type. If that type has foo(), call that method and return. If not, look up c's runtime type's superclass; if that type has foo, call that method and return. If not, look up c's runtime type's superclass's superclass; if that type has foo, call that method and return. Etc. If no suitable method can be found, then error.
    Step #3 alone seems adequate for figuring out which method to call and verifying that said method has the correct argument/return types. So my question is why step #1 gets performed in the first place. Possible answers seem to be:
    - You don't have enough information to perform step #3 until step #1 is complete. (This seems implausible at first glance, so please explain.)
    - The linking or access modifier checks done in #1 and #2 are essential to prevent certain bad things from happening, and those checks must be performed based on the compile-time type, rather than the run-time type hierarchy. (Please explain.)
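    To make step #3 concrete, here is a small illustration; the Sub class and the foo bodies are hypothetical additions, not part of the question's class:

        class MyClass {
            public void foo() { System.out.println("MyClass.foo"); }
            public void bar(MyClass c) { c.foo(); } // compiles against MyClass.foo
        }

        class Sub extends MyClass {
            @Override public void foo() { System.out.println("Sub.foo"); }
        }

        public class Demo {
            public static void main(String[] args) {
                // The invokevirtual in bar names MyClass.foo at compile time
                // (steps #1-#2 resolve against the compile-time type), but
                // dispatch begins at the receiver's runtime type (step #3):
                new MyClass().bar(new Sub()); // prints "Sub.foo"
            }
        }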


  • Java errors on Lotus Domino Designer Client 8.5.1

    - by ajcooper
    I have a clean install of Lotus Notes 8.5.1 (now with FP3) and I'm getting the following errors in Designer. This is with a new database with a couple of forms and views. I'm finding this is typical across all databases. Is there something I need to install/configure etc.? I'm not new to Notes, but I'm new to 8.5. Thanks, Aidan
    Description                                                 Resource    Path           Location  Type
    Cannot resolve plug-in: org.eclipse.core.runtime            plugin.xml  TestAgent.nsf  line 9    Plug-in Problem
    Cannot resolve plug-in: org.eclipse.ui                      plugin.xml  TestAgent.nsf  line 8    Plug-in Problem
    Cannot resolve plug-in: com.ibm.commons                     plugin.xml  TestAgent.nsf  line 10   Plug-in Problem
    Cannot resolve plug-in: com.ibm.commons.vfs                 plugin.xml  TestAgent.nsf  line 12   Plug-in Problem
    Cannot resolve plug-in: com.ibm.commons.xml                 plugin.xml  TestAgent.nsf  line 11   Plug-in Problem
    Cannot resolve plug-in: com.ibm.designer.runtime            plugin.xml  TestAgent.nsf  line 15   Plug-in Problem
    Cannot resolve plug-in: com.ibm.designer.runtime.directory  plugin.xml  TestAgent.nsf  line 14   Plug-in Problem
    Cannot resolve plug-in: com.ibm.jscript                     plugin.xml  TestAgent.nsf  line 13   Plug-in Problem
    Cannot resolve plug-in: com.ibm.notes.java.api              plugin.xml  TestAgent.nsf  line 20   Plug-in Problem
    Cannot resolve plug-in: com.ibm.xsp.core                    plugin.xml  TestAgent.nsf  line 16   Plug-in Problem
    Cannot resolve plug-in: com.ibm.xsp.core                    plugin.xml  TestAgent.nsf  line 21   Plug-in Problem
    Cannot resolve plug-in: com.ibm.xsp.designer                plugin.xml  TestAgent.nsf  line 18   Plug-in Problem
    Cannot resolve plug-in: com.ibm.xsp.designer                plugin.xml  TestAgent.nsf  line 22   Plug-in Problem
    Cannot resolve plug-in: com.ibm.xsp.domino                  plugin.xml  TestAgent.nsf  line 19   Plug-in Problem
    Cannot resolve plug-in: com.ibm.xsp.domino                  plugin.xml  TestAgent.nsf  line 23   Plug-in Problem
    Cannot resolve plug-in: com.ibm.xsp.extsn                   plugin.xml  TestAgent.nsf  line 17   Plug-in Problem
    Cannot resolve plug-in: com.ibm.xsp.extsn                   plugin.xml  TestAgent.nsf  line 24   Plug-in Problem
    Cannot resolve plug-in: com.ibm.xsp.rcp                     plugin.xml  TestAgent.nsf  line 25   Plug-in Problem


  • Getting minimum - Min() - for DateTime column in a DataTable using LINQ to DataSets?

    - by Jay Stevens
    I need to get the minimum DateTime value of a column in a DataTable. The DataTable is generated dynamically from a CSV file, therefore I don't know the name of that column until runtime. Here is code I've got that doesn't work:
    private DateTime GetStartDateFromCSV(string inputFile, string date_attr)
    {
        EnumerableRowCollection<DataRow> table =
            CsvStreamReader.GetDataTableFromCSV(inputFile, "input", true).AsEnumerable();
        DateTime dt = table.Select(record => record.Field<DateTime>(date_attr)).Min();
        return dt;
    }
    The variable table is broken out just for clarity. I basically need to find the minimum value as a DateTime for one of the columns (to be chosen at runtime and represented by date_attr). I have tried several solutions from SO (most deal with known columns and/or non-DateTime fields). What I've got throws an error at runtime telling me that it can't do the DateTime conversion (that seems to be a problem with LINQ?). I've confirmed that the data for the column name that is in the string date_attr is a date value.


  • Mixed Mode C++ DLL function call failure when app launched from network share. Called from unmanaged code

    - by Steve
    A mixed-mode DLL called from a native C application fails to load:
    An unhandled exception of type 'System.IO.FileLoadException' occurred in Unknown Module.
    Additional information: Could not load file or assembly 'XXSharePoint, Version=0.0.0.0, Culture=neutral, PublicKeyToken=e0fbc95fd73fff47' or one of its dependencies. Failed to grant minimum permission requests. (Exception from HRESULT: 0x80131417)
    My environment is: a native C application calling a mixed-mode C++ DLL, which then loads a C# DLL. This works correctly when loaded from a local drive, but when launched from a network drive, it fails with the above messages. The call to LoadLibrary succeeds, as does the GetProcAddress. The load error happens when I call the function. I have digitally signed the C application, and I've performed "strong name" signing on the 2 DLLs. The PublicKeyToken in the message above does match the named DLL. I have also issued the CASPOL commands on my client to grant FullTrust to that strong name keytoken. When that failed to work, I tried the CASPOL command to grant FullTrust to the URL of the network drive (including the path to my application's directory); no change in results. I tried removing all dependencies, so that there was just the initial mixed-mode DLL: I replaced the bodies of all the functions with just a return of a "success" integer value. Results unchanged. Only when I changed it from Mixed Mode to Win32, and changed Configuration Properties > General > Common Language Runtime Support from "Common Language Runtime Support" to "No Common Language Runtime Support", did calling the DLL produce the expected result (it just returned the "success" integer return value).


  • Reading Excel using Scriptom for Groovy, producing XML

    - by john
    Dear friends, I got a program from http://kousenit.wordpress.com/2007/03/27/groovyness-with-excel-and-xml but I got some very strange results:
    1) I can still print XML, but two records are not readable.
    2) I got an exception suggesting something is missing. Could some experts enlighten me about what might be going wrong?
    I copied the program and result below. Thanks!
    import org.codehaus.groovy.scriptom.ActiveXObject

    def addresses = new File('addresses1.xls').canonicalPath
    def xls = new ActiveXObject('Excel.Application')
    def workbooks = xls.Workbooks
    def workbook = workbooks.Open(addresses)

    // select the active sheet
    def sheet = workbook.ActiveSheet
    sheet.Visible = true

    // get the XML builder ready
    def builder = new groovy.xml.MarkupBuilder()
    builder.people {
        for (row in 2..1000) {
            def ID = sheet.Range("A${row}").Value.value
            if (!ID) break
            // use the builder to write out each person
            person (id: ID) {
                name {
                    firstName sheet.Range("B${row}").Value.value
                    lastName sheet.Range("C${row}").Value.value
                }
                address {
                    street sheet.Range("D${row}").Value.value
                    city sheet.Range("E${row}").Value.value
                    state sheet.Range("F${row}").Value.value
                    zip sheet.Range("G${row}").Value.value
                }
            }
        }
    }

    // close the workbook without asking for saving the file
    workbook.Close(false, null, false)
    // quits excel
    xls.Quit()
    xls.release()
    However, I got the following results:
    <people>
      <person id='1234.0'>
        <name>
          <firstName>[C@128a25</firstName>
          <lastName>[C@5e45</lastName>
        </name>
        <address>
          <street>[C@179ef7c</street>
          <city>[C@12f95de</city>
          <state>[C@138b554</state>
          <zip>12345.0</zip>
        </address>
      </person>
    </person>
    Exception thrown
    May 12, 2010 4:07:15 AM org.codehaus.groovy.runtime.StackTraceUtils sanitize
    WARNING: Sanitizing stacktrace:
    java.lang.NullPointerException
        at org.codehaus.groovy.runtime.callsite.GetEffectivePojoFieldSite.acceptGetProperty(GetEffectivePojoFieldSite.java:43)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGetProperty(AbstractCallSite.java:237)
        at sriptom4_excel$_run_closure1.doCall(sriptom4_excel.groovy:18)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
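    For context on the "[C@128a25"-style values in the output: that is how a Java char[] renders through Object.toString() (array type tag plus hash code), which suggests Value.value is returning char arrays rather than Strings for those cells. A tiny Java illustration (the array contents are made up):

        public class CharArrayDemo {
            public static void main(String[] args) {
                char[] name = {'J', 'o', 'h', 'n'};
                // Object.toString() on an array prints "[C@<hashcode>", like the output above.
                System.out.println(name.toString());
                // Wrapping the array in a String recovers the readable text.
                System.out.println(new String(name));
            }
        }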


  • DataGridViewColumn.DataPropertyName to an array element?

    - by unknown
    Hi there, I'm using a DataGridView, binding its DataSource to a List and specifying the properties for each column. An example would be:
    DataGridViewTextBoxColumn colConcept = new DataGridViewTextBoxColumn();
    DataGridViewCell cell4 = new DataGridViewTextBoxCell();
    colConcept.CellTemplate = cell4;
    colConcept.Name = "concept";
    colConcept.HeaderText = "Concept";
    colConcept.DataPropertyName = "Concept";
    colConcept.Width = 200;
    this.dataGridViewBills.Columns.Add(colConcept);
    {... assign other columns ...}
    And finally:
    this.dataGridViewBills.DataSource = billslist; // billslist is List<Bill>
    Obviously class Bill has a property called Concept, as well as one property for each column. Well, now my problem is that Bill should have an Array/List/whatever dynamic-size container of strings called Years. Let's assume that every Bill will have the same Years.Count, but this is only known at runtime. Thus, I can't specify properties like Bill.FirstYear to obtain Bill.Years[0], Bill.SecondYear to obtain Bill.Years[1], etc., and bind one to each column. The idea is that now I want to have a grid with a dynamic number of columns (known at runtime), with each column filled with a string from the Bill.Years list. I can make a loop to add columns to the grid at runtime depending on Bill.Years.Count, but is it possible to bind them to each of the strings that the Bill.Years list contains? I'm not sure if I'm clear enough. The result ideally would be something like this, for 2 bills on the list and 3 years for each bill:
    --------------------------------------GRID HEADER-------------------------------
    NAME     CONCEPT         YEAR1            YEAR2            YEAR3
    --------------------------------------GRID VALUES-------------------------------
    Bill1    Bill1.Concept   Bill1.Years[0]   Bill1.Years[1]   Bill1.Years[2]
    Bill2    Bill2.Concept   Bill2.Years[0]   Bill2.Years[1]   Bill2.Years[2]
    I can always forget the DataSource and write each cell manually, like the MSFlexGrid used to, but if possible, I would like to use the binding capabilities of the DataGridView. Any ideas? Thanks a lot.


  • Getting size of a specific byte array from an array of pointers to bytes

    - by Pat James
    In the following example C code, used in an Arduino project, I am looking for the ability to get the size of a specific byte array within an array of pointers to bytes. For example:
    void setup() {
      Serial.begin(9600); // for debugging
      byte zero[] = {8, 169, 8, 128, 2,171,145,155,141,177,187,187,2,152,2,8,134,199};
      byte one[] = {8, 179, 138, 138, 177 ,2,146, 8, 134, 8, 194,2,1,14,199,7, 145, 8,131, 8,158,8,187,187,191};
      byte two[] = {29,7,1,8, 169, 8, 128, 2,171,145,155,141,177,187,187,2,152,2,8,134,199, 2, 2, 8, 179, 138, 138, 177 ,2,146, 8, 134, 8, 194,2,1,14,199,7, 145, 8,131, 8,158,8,187,187,191};
      byte* numbers[3] = {zero, one, two};

      function(numbers[1], sizeof(numbers[1])/sizeof(byte)); // doesn't work as desired, always passes 2 as the length
      function(numbers[1], 25); // this works
    }

    void loop() {
    }

    void function(byte arr[], int len) {
      Serial.print("length: ");
      Serial.println(len);
      for (int i = 0; i < len; i++) {
        Serial.print("array element ");
        Serial.print(i);
        Serial.print(" has value ");
        Serial.println((int)arr[i]);
      }
    }
    In this code, I understand that sizeof(numbers[1])/sizeof(byte) doesn't work because numbers[1] is a pointer and not the byte array value. Is there a way in this example that I can, at runtime, get at the length of a specific (runtime-determined) byte array within an array of pointers to bytes? Understand that I am limited to developing in C (or assembly) for an Arduino environment. I am also open to other suggestions rather than the array of pointers to bytes. The overall objective is to organize lists of bytes which can be retrieved, with length, at runtime.

