Search Results

Search found 11479 results on 460 pages for 'resource usage'.

Page 417/460

  • Marshalling polymorphic objects in JAX-WS

    - by pkchukiss
    I'm creating a JAX-WS type webservice, with operations that return an object WebServiceReply. The class WebServiceReply itself contains a field of type Object. The individual operations would populate that field with a few different data types, depending on the operation. Publishing the WSDL (I'm using NetBeans 6.7) and getting an ASP.NET application to retrieve and parse the WSDL was fine, but when I tried to call an operation, I would receive the following exception: javax.xml.ws.WebServiceException: javax.xml.bind.MarshalException - with linked exception: [javax.xml.bind.JAXBException: class [LDataObject.Patient; nor any of its super class is known to this context.] How do I mark the annotations in the DataObject.Patient class, as well as the WebServiceReply class, to get it to work? I haven't been able to find a definitive resource on marshalling based upon annotations within the target classes either, so it would be great if anybody could point me to that too. WebServiceReply.java @XmlRootElement(name="WebServiceReply") public class WebServiceReply { private Object returnedObject; private String returnedType; private String message; private String errorMessage; .......... // Getters and setters follow } DataObject.Patient.java @XmlRootElement(name="Patient") public class Patient { private int uid; private Date versionDateTime; private String name; private String identityNumber; private List<Address> addressList; private List<ContactNumber> contactNumberList; private List<Appointment> appointmentList; private List<Case> caseList; } Solution (thanks to Gregory Mostizky for his answer): I edited the WebServiceReply class so that all the possible return objects extend from a new class ReturnValueBase, and added the annotations using @XmlSeeAlso to ReturnValueBase. JAXB worked properly after that! Nonetheless, I'm still learning about JAXB marshalling in JAX-WS, so it would be great if anyone can still post any tutorial on this. Gregory: you might want to add to your answer that the return objects need to subclass ReturnValueBase. Thanks a lot for your help! I had been going bonkers over this problem for so long!
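
    A minimal sketch of the @XmlSeeAlso arrangement described in the solution above, assuming a shared base class named ReturnValueBase; the field and class names are taken from the question, but the exact layout here is illustrative rather than the original project's code:

```java
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlSeeAlso;

// Common base class for everything the service can hand back. Listing the
// concrete subclasses in @XmlSeeAlso is what lets JAXB register them in its context.
@XmlSeeAlso({Patient.class /* , other return types as needed */})
public abstract class ReturnValueBase {
}

@XmlRootElement(name = "Patient")
class Patient extends ReturnValueBase {
    private int uid;
    private String name;
    // remaining fields, getters and setters omitted
}

@XmlRootElement(name = "WebServiceReply")
class WebServiceReply {
    private ReturnValueBase returnedObject; // typed as the base class rather than Object
    private String returnedType;
    private String message;
    private String errorMessage;
    // getters and setters omitted
}
```

    With returnedObject typed as ReturnValueBase and the concrete subclasses listed in @XmlSeeAlso, JAXB knows about every class that can appear in that field, which is what the "nor any of its super class is known to this context" error is complaining about.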

    Read the article

  • Computation on db data then list them using either SimpleCursorAdapter or ArrayAdapter

    - by kc2uno
    Hi all, I just started programming in Android a few weeks ago, so I am not entirely sure how to deal with listing values. Please help me out! I have some questions regarding displaying data sets from the db in a list. Currently I have a cursor returned by my db that points to a list of rows, and I want to display 2 columns' values in a single row of the list. The row xml looks like this: <TextView android:id="@+id/text1" android:textSize="16sp" android:textStyle="bold" android:layout_width="fill_parent" android:layout_height="wrap_content"/> <TextView android:id="@+id/text2" android:textSize="14sp" android:layout_width="fill_parent" android:layout_height="wrap_content"/> so I was thinking of using SimpleCursorAdapter, which supposedly makes my life easier by displaying the data in a list. However, that is only true if I want to display the raw data. For the purpose of my program I need to do some computations on the raw data sets, then display them. I am not sure how to do that using SimpleCursorAdapter. Here's how I display the raw data: String[] from = new String[]{BtDbAdapter.KEY_EX_TYPE,BtDbAdapter.KEY_EX_TIMESTAMP}; int[] to = new int[]{R.id.text1, R.id.text2}; // Now create a simple cursor adapter and set it to display SimpleCursorAdapter records = new SimpleCursorAdapter(this, R.layout.exset_row, mExsetCursor, from, to); setListAdapter(records); Is there a way to do computation on the data in those rows before I bind it with the SimpleCursorAdapter? I was trying an alternative way of doing this using ArrayList and ArrayAdapter, but that way I don't know how to achieve displaying 2 items in a single row. This is my code for using ArrayAdapter, which only displays 1 text view in a row instead of 2: //fill in the array timestamp_arr = new ArrayList<String>(); type_arr = new ArrayList<String>(); fillRecord(); Log.d(TAG,"setting now in recordlist"); setListAdapter(new ArrayAdapter<String>(this, R.layout.list_item,timestamp_arr)); setListAdapter(new ArrayAdapter<String>(this, R.layout.list_item2,type_arr)); It's very obvious that it only displays one TextView in a row, because the second ArrayAdapter overwrites the first one! I was trying to use R.id.text1 and R.id.text2 for them, but it gave me some errors saying 04-23 01:40:58.658: ERROR/AndroidRuntime(3309): android.content.res.Resources$NotFoundException: Resource ID #0x7f070008 type #0x12 is not valid I believe the second method can achieve this, but I'm not sure how to deal with the layout problems, so if you have any suggestions, please post them. Thank you!!
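
    One approach that seems to fit here, sketched below using the names from the question, is SimpleCursorAdapter.ViewBinder: it intercepts a column just before it is bound to its view, so the raw value can be transformed without giving up the cursor adapter. The formatTimestamp helper is hypothetical and stands in for whatever computation is actually needed; this assumes the timestamp column is the one to transform:

```java
import android.database.Cursor;
import android.view.View;
import android.widget.SimpleCursorAdapter;
import android.widget.TextView;

// Drop-in for the existing ListActivity, once mExsetCursor has been created.
private void showComputedRecords() {
    String[] from = {BtDbAdapter.KEY_EX_TYPE, BtDbAdapter.KEY_EX_TIMESTAMP};
    int[] to = {R.id.text1, R.id.text2};
    SimpleCursorAdapter records =
            new SimpleCursorAdapter(this, R.layout.exset_row, mExsetCursor, from, to);

    // Intercept the timestamp column and replace the raw value with a computed one.
    records.setViewBinder(new SimpleCursorAdapter.ViewBinder() {
        @Override
        public boolean setViewValue(View view, Cursor cursor, int columnIndex) {
            if (view.getId() == R.id.text2) {
                long rawValue = cursor.getLong(columnIndex);
                ((TextView) view).setText(formatTimestamp(rawValue)); // hypothetical helper
                return true;   // this column has been handled here
            }
            return false;      // let the adapter bind the remaining column normally
        }
    });
    setListAdapter(records);
}
```

    Returning true tells the adapter the value has already been set, so only the columns that need computation require special handling while the rest are bound as usual.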

    Read the article

  • Asynchronous vs Synchronous vs Threading in an iPhone App

    - by Coocoo4Cocoa
    I'm in the design stage for an app which will utilize a REST web service and sort of have a dilemma in as far as using asynchronous vs synchronous vs threading. Here's the scenario. Say you have three options to drill down into, each one having its own REST-based resource. I can either lazily load each one with a synchronous request, but that'll block the UI and prevent the user from hitting a back navigation button while data is retrieved. This case applies almost anywhere except for when your application requires a login screen. I can't see any reason to use synchronous HTTP requests vs asynchronous because of that reason alone. The only time it makes sense is to have a worker thread make your synchronous request, and notify the main thread when the request is done. This will prevent the block. The question then is bench marking your code and seeing which has more overhead, a threaded synchronous request or an asynchronous request. The problem with asynchronous requests is you need to either setup a smart notification or delegate system as you can have multiple requests for multiple resources happening at any given time. The other problem with them is if I have a class, say a singleton which is handling all of my data, I can't use asynchronous requests in a getter method. Meaning the following won't go: - (NSArray *)users { if(users == nil) users = do_async_request // NO GOOD return users; } whereas the following: - (NSArray *)users { if(users == nil) users == do_sync_request // OK. return users; } You also might have priority. What I mean by priority is if you look at Apple's Mail application on the iPhone, you'll notice they first suck down your entire POP/IMAP tree before making a second request to retrieve the first 2 lines (the default) of your message. I suppose my question to you experts is this. When are you using asynchronous, synchronous, threads -- and when are you using either async/sync in a thread? What kind of delegation system do you have setup to know what to do when a async request completes? Are you prioritizing your async requests? There's a gamut of solutions to this all too common problem. It's simple to hack something out. The problem is, I don't want to hack and I want to have something that's simple and easy to maintain.

    Read the article

  • PNGException "crc corruption" when attempting to create ImageIcon objects from ZIP archive

    - by Nathan Strong
    I've got a ZIP file containing a number of PNG images that I am trying to load into my Java application as ImageIcon resources directly from the archive. Here's my code: import java.io.*; import java.util.Enumeration; import java.util.zip.*; import javax.swing.ImageIcon; public class Test { public static void main( String[] args ) { if( args.length == 0 ) { System.out.println("usage: java Test.java file.zip"); return; } File archive = new File( args[0] ); if( !archive.exists() || !archive.canRead() ) { System.err.printf("Unable to find/access %s.\n", archive); return; } try { ZipFile zip = new ZipFile(archive); Enumeration <? extends ZipEntry>e = zip.entries(); while( e.hasMoreElements() ) { ZipEntry entry = (ZipEntry) e.nextElement(); int size = (int) entry.getSize(); int count = (size % 1024 == 0) ? size / 1024 : (size / 1024)+1; int offset = 0; int nread, toRead; byte[] buffer = new byte[size]; for( int i = 0; i < count; i++ ) { offset = 1024*i; toRead = (size-offset > 1024) ? 1024 : size-offset; nread = zip.getInputStream(entry).read(buffer, offset, toRead); } ImageIcon icon = new ImageIcon(buffer); // boom -- why? } zip.close(); } catch( IOException ex ) { System.err.println(ex.getMessage()); } } } The sizes reported by entry.getSize() match the uncompressed size of the PNG files, and I am able to read the data out of the archive without any exceptions, but the creation of the ImageIcon blows up. The stacktrace: sun.awt.image.PNGImageDecoder$PNGException: crc corruption at sun.awt.image.PNGImageDecoder.getChunk(PNGImageDecoder.java:699) at sun.awt.image.PNGImageDecoder.getData(PNGImageDecoder.java:707) at sun.awt.image.PNGImageDecoder.produceImage(PNGImageDecoder.java:234) at sun.awt.image.InputStreamImageSource.doFetch(InputStreamImageSource.java:246) at sun.awt.image.ImageFetcher.fetchloop(ImageFetcher.java:172) at sun.awt.image.ImageFetcher.run(ImageFetcher.java:136) sun.awt.image.PNGImageDecoder$PNGException: crc corruption at sun.awt.image.PNGImageDecoder.getChunk(PNGImageDecoder.java:699) at sun.awt.image.PNGImageDecoder.getData(PNGImageDecoder.java:707) at sun.awt.image.PNGImageDecoder.produceImage(PNGImageDecoder.java:234) at sun.awt.image.InputStreamImageSource.doFetch(InputStreamImageSource.java:246) at sun.awt.image.ImageFetcher.fetchloop(ImageFetcher.java:172) at sun.awt.image.ImageFetcher.run(ImageFetcher.java:136) Can anyone shed some light on it? Google hasn't turned up any useful information.
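
    One thing worth checking, sketched below: the loop calls zip.getInputStream(entry) on every iteration, and each call returns a fresh stream positioned at the start of the entry, so every 1024-byte chunk after the first is read from offset 0 again and the rest of the buffer never receives valid PNG data, which would explain the CRC failure. A sketch that opens the stream once and drains it (class and method names are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import javax.swing.ImageIcon;

public class ZipIconLoader {
    // Open the entry's stream once and read it to the end, rather than asking the
    // ZipFile for a new InputStream per 1024-byte chunk.
    static ImageIcon loadIcon(ZipFile zip, ZipEntry entry) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        InputStream in = zip.getInputStream(entry);
        try {
            int nread;
            while ((nread = in.read(chunk)) != -1) {
                out.write(chunk, 0, nread);
            }
        } finally {
            in.close();
        }
        return new ImageIcon(out.toByteArray());
    }
}
```

    Reading until read() returns -1 also sidesteps the fact that ZipEntry.getSize() is allowed to return -1 when the size is unknown.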

    Read the article

  • Two way binding settings problem.

    - by Jamie
    Hi, I am having a problem using two way binding with a listpicker. I am able to set the value using c# but not in the SelectedItem=".." in xaml. The binding is returning the correct value (and is a value in the listpicker) as i have texted it by assigning the text to a textblock. When the page loads, the binding used on the listpicker causes a System.ArgumentOutOfRangeException The code i am using to set it is: // Update a setting value. If the setting does not exist, add the setting. public bool AddOrUpdateValue(string key, Object value) { bool valueChanged = false; try { // If new value is different, set the new value if (settingsStorage[key] != value) { settingsStorage[key] = value; valueChanged = true; } } catch (KeyNotFoundException) { settingsStorage.Add(key, value); valueChanged = true; } catch (ArgumentException) { settingsStorage.Add(key, value); valueChanged = true; } catch (Exception e) { Console.WriteLine("Exception occured whilst using IsolatedStorageSettings: " + e.ToString()); } return valueChanged; } // Get the current value of the setting, if not found, set the setting to default value. public valueType GetValueOrDefault<valueType>(string key, valueType defaultValue) { valueType value; try { value = (valueType)settingsStorage[key]; } catch (KeyNotFoundException) { value = defaultValue; } catch (ArgumentException) { value = defaultValue; } return value; } public string WeekBeginsSetting { get { return GetValueOrDefault<string>(WeekBeginsSettingKeyName, WeekBeginsSettingDefault); } set { AddOrUpdateValue(WeekBeginsSettingKeyName, value); Save(); } } And in the xaml: <toolkit:ListPicker x:Name="WeekStartDay" Header="Week begins on" SelectedItem="{Binding Source={StaticResource AppSettings}, Path=WeekBeginsSetting, Mode=TwoWay}"> <sys:String>monday</sys:String> <sys:String>sunday</sys:String> </toolkit:ListPicker> The StaticResource AppSettings is a resource from a seperate .cs file. <phone:PhoneApplicationPage.Resources> <local:ApplicationSettings x:Key="AppSettings"></local:ApplicationSettings> </phone:PhoneApplicationPage.Resources> Thanks in advance

    Read the article

  • How can I call VC# webservice methods without ArgumentException?

    - by Zarius
    Currently, I'm trying to write a small tray application that will show the status and provide control of a server-side application exposed over webservice. The webservice only has 3 operations: start, stop and status. When I call any of these operations in code, they throw an ArgumentException citing "An item with the same key has already been added". I am compiling the webservice on Visual C# Express 2008, and .NET 3.5. The Code: private TelnetConnClient Conn { get { return new TelnetConnClient(); } } private bool Connected //call webservice operations { get { return Conn.Status(); } set { if(value) Conn.Start(); else Conn.Stop(); } } The Stacktrace: A first chance exception of type 'System.ArgumentException' occurred in mscorlib.dll at System.ThrowHelper.ThrowArgumentException(ExceptionResource resource) at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add) at System.ServiceModel.TransactionFlowAttribute.ApplyBehavior(OperationDescription description, BindingParameterCollection parameters) at System.ServiceModel.TransactionFlowAttribute.System.ServiceModel.Description.IOperationBehavior.AddBindingParameters(OperationDescription description, BindingParameterCollection parameters) at System.ServiceModel.Description.DispatcherBuilder.AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection parameters) at System.ServiceModel.Description.DispatcherBuilder.BuildProxyBehavior(ServiceEndpoint serviceEndpoint, BindingParameterCollection& parameters) at System.ServiceModel.Channels.ServiceChannelFactory.BuildChannelFactory(ServiceEndpoint serviceEndpoint) at System.ServiceModel.ChannelFactory.CreateFactory() at System.ServiceModel.ChannelFactory.OnOpening() at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout) at System.ServiceModel.ChannelFactory.EnsureOpened() at System.ServiceModel.ChannelFactory`1.CreateChannel(EndpointAddress address, Uri via) at System.ServiceModel.ChannelFactory`1.CreateChannel() at System.ServiceModel.ClientBase`1.CreateChannel() at System.ServiceModel.ClientBase`1.CreateChannelInternal() at System.ServiceModel.ClientBase`1.get_Channel() at KordiaConnect.ferries.TelnetConnClient.Start() in C:\My Dropbox\Coding\RTF\KordiaConnect\KordiaConnect\Service References\ferries\Reference.cs:line 86 at coldshark.ferries.Main.set_Connected(Boolean value) in C:\My Dropbox\Coding\RTF\KordiaConnect\KordiaConnect\Main.cs:line 22 at coldshark.ferries.Main.<.ctor>b__0(Object sender, EventArgs e) in C:\My Dropbox\Coding\RTF\KordiaConnect\KordiaConnect\Main.cs:line 43 at System.Windows.Forms.NotifyIcon.OnClick(EventArgs e) at System.Windows.Forms.NotifyIcon.WmMouseUp(Message& m, MouseButtons button) at System.Windows.Forms.NotifyIcon.WndProc(Message& msg) at System.Windows.Forms.NotifyIcon.NotifyIconNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.UnsafeNativeMethods.PeekMessage(MSG& msg, HandleRef hwnd, Int32 msgMin, Int32 msgMax, Int32 remove) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.Run() at 
coldshark.ferries.Main..ctor() in C:\My Dropbox\Coding\RTF\KordiaConnect\KordiaConnect\Main.cs:line 55 I can just call the webservice from the web interface, but this application will give me a handy status notification icon, and I'd really love to know why the out-of-the-box auto-generated code fails for no particular reason.

    Read the article

  • Debugging site written mainly in JScript with AJAX code injection

    - by blumidoo
    Hello, I have a legacy code to maintain and while trying to understand the logic behind the code, I have run into lots of annoying issues. The application is written mainly in Java Script, with extensive usage of jQuery + different plugins, especially Accordion. It creates a wizard-like flow, where client code for the next step is downloaded in the background by injecting a result of a remote AJAX request. It also uses callbacks a lot and pretty complicated "by convention" programming style (lots of events handlers are created on the fly based on certain object names - e.g. current page name, current step name). Adding to that, the code is very messy and there is no obvious inner structure - the functions are scattered in the code, file names do not reflect the business role of the code, lots of functions and code snippets are most likely not used at all etc. PROBLEM: How to approach this code base, so that the inner flow of the code can be sort-of "reverse engineered" using a suite of smart debugging tools. Ideally, I would like to be able to attach to the running application and step through the code, breaking on each new function call. Also, it would be nice to be able to create a "diagram of calls" in the application (i.e. in order to run a particular page logic, this particular flow of function calls was executed in a particular order). Not to mention to be able to run a coverage analysis, identifying potentially orphaned code fragments. I would like to stress out once more, that it is impossible to understand the inner logic of the application just by looking at the code itself, unless you have LOTS of spare time and beer crates, which I unfortunately do not have :/ (shame...) An IDE of some sort that would aid in extending that code would be also great, but I am currently looking into possibility to use Visual Studio 2010 to do the job, as the site itself is a mix of Classic ASP and ASP.NET (I'd say - 70% Java Script with jQuery, 30% ASP). I have obviously tried FireBug, but I was unable to find a way to define a breakpoint or step into the code, which is "injected" into the client JS using AJAX calls (i.e. the application retrieves the code by invoking an URL and injects it to the client local code). Venkman debugger had similar issues. Any hints would be welcome. Feel free to ask additional questions.

    Read the article

  • Jboss logging issue

    - by balaji
    I'm Working as deployer and server administrator. We use Jboss 4.0x AS to deploy our applications. The issue I'm facing is, Whenever we redeploy/restart the server, server.log is getting created but after sometime the logging goes off. Yes it is not at all updating the server.log file. Due to this, we could not trace the other critical issues we have. Actually we have two separate nodes and we do deploy/restarting the server separately on two nodes. We are facing the issue in both of our test and production environment. I could not trace out where exactly the issue is. Could you please help me in resolving the issue? If we have any other issues, we can check the log files. If log itself is not getting updated/logged, how can we move further in analyzing the issues without the recent/updated logs? Below are the logs found in the stdout.log: 18:55:50,303 INFO [Server] Core system initialized 18:55:52,296 INFO [WebService] Using RMI server codebase: http://kl121tez.is.klmcorp.net:8083/ 18:55:52,313 INFO [Log4jService$URLWatchTimerTask] Configuring from URL: resource:log4j.xml 18:55:52,860 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasRBPFTraceLogger without a class name. 18:55:52,860 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasRBPFMessageLogger without a class name. 18:55:54,273 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasCacheTraceLogger without a class name. 18:55:54,274 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasCacheMessageLogger without a class name. 18:55:54,334 ERROR [STDERR] LOG0026E The Log Manager cannot create the object JACCTraceLogger without a class name. 18:55:54,334 ERROR [STDERR] LOG0026E The Log Manager cannot create the object JACCMessageLogger without a class name. 18:55:56,059 INFO [ServiceEndpointManager] WebServices: jbossws-1.0.3.SP1 (date=200609291417) 18:55:56,635 INFO [Embedded] Catalina naming disabled 18:55:56,671 INFO [ClusterRuleSetFactory] Unable to find a cluster rule set in the classpath. Will load the default rule set. 18:55:56,672 INFO [ClusterRuleSetFactory] Unable to find a cluster rule set in the classpath. Will load the default rule set. 18:55:56,843 INFO [Http11BaseProtocol] Initializing Coyote HTTP/1.1 on http-0.0.0.0-8180 18:55:56,844 INFO [Catalina] Initialization processed in 172 ms 18:55:56,844 INFO [StandardService] Starting service jboss.web

    Read the article

  • Lifetime issue of IDisposable unmanaged resources in a complex object graph?

    - by stakx
    This question is about dealing with unmanaged resources (COM interop) and making sure there won't be any resource leaks. I'd appreciate feedback on whether I seem to do things the right way. Background: Let's say I've got two classes: A class LimitedComResource which is a wrapper around a COM object (received via some API). There can only be a limited number of those COM objects, therefore my class implements the IDisposable interface which will be responsible for releasing a COM object when it's no longer needed. Objects of another type ManagedObject are temporarily created to perform some work on a LimitedComResource. They are not IDisposable. To summarize the above in a diagram, my classes might look like this: +---------------+ +--------------------+ | ManagedObject | <>------> | LimitedComResource | +---------------+ +--------------------+ | o IDisposable (I'll provide example code for these two classes in just a moment.) Question: Since my temporary ManagedObject objects are not disposable, I obviously have no control over how long they'll be around. However, in the meantime I might have Disposed the LimitedComObject that a ManagedObject is referring to. How can I make sure that a ManagedObject won't access a LimitedComResource that's no longer there? +---------------+ +--------------------+ | managedObject | <>------> | (dead object) | +---------------+ +--------------------+ I've currently implemented this with a mix of weak references and a flag in LimitedResource which signals whether an object has already been disposed. Is there any better way? Example code (what I've currently got): LimitedComResource: class LimitedComResource : IDisposable { private readonly IUnknown comObject; // <-- set in constructor ... void Dispose(bool notFromFinalizer) { if (!this.isDisposed) { Marshal.FinalReleaseComObject(comObject); } this.isDisposed = true; } internal bool isDisposed = false; } ManagedObject: class ManagedObject { private readonly WeakReference limitedComResource; // <-- set in constructor ... public void DoSomeWork() { if (!limitedComResource.IsAlive()) { throw new ObjectDisposedException(); // ^^^^^^^^^^^^^^^^^^^^^^^ // is there a more suitable exception class? } var ur = (LimitedComResource)limitedComResource.Target; if (ur.isDisposed) { throw new ObjectDisposedException(); } ... // <-- do something sensible here! } }

    Read the article

  • Permutations of Varying Size

    - by waiwai933
    I'm trying to write a function in PHP that gets all permutations of all possible sizes. I think an example would be the best way to start off: $my_array = array(1,1,2,3); Possible permutations of varying size: 1 1 // * See Note 2 3 1,1 1,2 1,3 // And so forth, for all the sets of size 2 1,1,2 1,1,3 1,2,1 // And so forth, for all the sets of size 3 1,1,2,3 1,1,3,2 // And so forth, for all the sets of size 4 Note: I don't care if there's a duplicate or not. For the purposes of this example, all future duplicates have been omitted. What I have so far in PHP: function getPermutations($my_array){ $permutation_length = 1; $keep_going = true; while($keep_going){ while($there_are_still_permutations_with_this_length){ // Generate the next permutation and return it into an array // Of course, the actual important part of the code is what I'm having trouble with. } $permutation_length++; if($permutation_length>count($my_array)){ $keep_going = false; } else{ $keep_going = true; } } return $return_array; } The closest thing I can think of is shuffling the array, picking the first n elements, seeing if it's already in the results array, and if it's not, add it in, and then stop when there are mathematically no more possible permutations for that length. But it's ugly and resource-inefficient. Any pseudocode algorithms would be greatly appreciated. Also, for super-duper (worthless) bonus points, is there a way to get just 1 permutation with the function but make it so that it doesn't have to recalculate all previous permutations to get the next? For example, I pass it a parameter 3, which means it's already done 3 permutations, and it just generates number 4 without redoing the previous 3? (Passing it the parameter is not necessary, it could keep track in a global or static). The reason I ask this is because as the array grows, so does the number of possible combinations. Suffice it to say that one small data set with only a dozen elements grows quickly into the trillions of possible combinations and I don't want to task PHP with holding trillions of permutations in its memory at once.
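
    The question is PHP, but since pseudocode was welcomed, here is a sketch of the usual recursive approach in Java purely to illustrate the algorithm: for each length k from 1 to n, extend a partial permutation with every unused element in turn and backtrack. The class and method names are made up for the example, and duplicates are kept, as in the question:

```java
import java.util.ArrayList;
import java.util.List;

public class Permutations {
    // All permutations of every length 1..n, duplicates included (as in the question).
    static List<List<Integer>> allSizes(int[] items) {
        List<List<Integer>> results = new ArrayList<List<Integer>>();
        for (int k = 1; k <= items.length; k++) {
            permute(items, new boolean[items.length], new ArrayList<Integer>(), k, results);
        }
        return results;
    }

    // Extend the current partial permutation with each unused element in turn.
    private static void permute(int[] items, boolean[] used, List<Integer> current,
                                int k, List<List<Integer>> results) {
        if (current.size() == k) {
            results.add(new ArrayList<Integer>(current));
            return;
        }
        for (int i = 0; i < items.length; i++) {
            if (used[i]) continue;
            used[i] = true;
            current.add(items[i]);
            permute(items, used, current, k, results);
            current.remove(current.size() - 1);   // backtrack
            used[i] = false;
        }
    }

    public static void main(String[] args) {
        System.out.println(allSizes(new int[] {1, 1, 2, 3}));   // mirrors array(1,1,2,3)
    }
}
```

    The same recursion translates directly to PHP; it avoids the shuffle-and-check approach and generates each permutation exactly once per ordering, though it does not by itself answer the "resume at permutation number N" bonus question.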

    Read the article

  • Error creating bean with name 'sessionFactory'

    - by Sunny Mate
    hi i am getting the following exception while running my application and my applicationContext.xml is <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd"> <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"> <property name="driverClassName" value="com.mysql.jdbc.Driver"> </property> <property name="url" value="jdbc:mysql://localhost/SureshDB"></property> <property name="username" value="root"></property> <property name="password" value="root"></property> </bean> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource"> <ref bean="dataSource" /> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect"> org.hibernate.dialect.MySQLDialect </prop> </props> </property> <property name="mappingResources"> <list> <value>com/jsfcompref/register/UserTa.hbm.xml</value></list> </property></bean> <bean id="UserTaDAO" class="com.jsfcompref.register.UserTaDAO"> <property name="sessionFactory"> <ref bean="sessionFactory" /> </property> </bean> <bean id="UserTaService" class="com.jsfcompref.register.UserTaServiceImpl"> <property name="userTaDao"> <ref bean="UserTaDAO"/> </property> </bean> </beans> Error creating bean with name 'sessionFactory' defined in class path resource [applicationContext.xml]: Invocation of init method failed; nested exception is java.lang.NoSuchMethodError: org.objectweb.asm.ClassVisitor.visit(IILjava/lang/String;Ljava/lang/String;[Ljava/lang/String;Ljava/lang/String;)V any suggestion would be heplful

    Read the article

  • Should Application_End fire on an automatic App Pool Recycle?

    - by Laramie
    I have read this, this, this and this plus a dozen other posts/blogs. I have an ASP.Net app in shared hosting that is frequently recycling. We use NLog and have the following code in global.asax void Application_Start(object sender, EventArgs e) { NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger(); logger.Debug("\r\n\r\nAPPLICATION STARTING\r\n\r\n"); } protected void Application_OnEnd(Object sender, EventArgs e) { NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger(); logger.Debug("\r\n\r\nAPPLICATION_OnEnd\r\n\r\n"); } void Application_End(object sender, EventArgs e) { HttpRuntime runtime = (HttpRuntime)typeof(System.Web.HttpRuntime).InvokeMember("_theRuntime", BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.GetField, null, null, null); if (runtime == null) return; string shutDownMessage = (string)runtime.GetType().InvokeMember("_shutDownMessage", BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField, null, runtime, null); string shutDownStack = (string)runtime.GetType().InvokeMember("_shutDownStack", BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField, null, runtime, null); ApplicationShutdownReason shutdownReason = System.Web.Hosting.HostingEnvironment.ShutdownReason; NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger(); logger.Debug(String.Format("\r\n\r\nAPPLICATION END\r\n\r\n_shutDownReason = {2}\r\n\r\n _shutDownMessage = {0}\r\n\r\n_shutDownStack = {1}\r\n\r\n", shutDownMessage, shutDownStack, shutdownReason)); } void Application_Error(object sender, EventArgs e) { NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger(); logger.Debug("\r\n\r\nApplication_Error\r\n\r\n"); } Our log file is littered with "APPLICATION STARTING" entries, but neither Application_OnEnd, Application_End, nor Application_Error are ever fired during these spontaneous restarts. I know they are working because there are entries for touching the web.config or /bin files. We also ran a memory overload test and can trigger an OutOfMemoryException which is caught in Application_Error. We are trying to determine whether the virtual memory limit is causing the recycling. We have added GC.GetTotalMemory(false) throughout the code, but this is for all of .Net, not just our App´s pool, correct? We've also tried var oPerfCounter = new PerformanceCounter(); oPerfCounter.CategoryName = "Process"; oPerfCounter.CounterName = "Virtual Bytes"; oPerfCounter.InstanceName = "iisExpress"; logger.Debug("Virtual Bytes: " + oPerfCounter.RawValue + " bytes"); but don't have permission in shared hosting. I've monitored the app on a dev server with the same requests that caused the recycles in production with ANTS Memory Profiler attached and can't seem to find a culprit. We have also run it with a debugger attached in dev to check for uncaught exceptions in spawned threads that might cause the app to abort. My questions are these: How can I effectively monitor memory usage in shared hosting to tell how much my application is consuming prior to an application recycle? Why are the Application_[End/OnEnd/Error] handlers in global.asax not being called? How else can I determine what is causing these recycles? Thanks.

    Read the article

  • Boggling Direct3D9 dynamic vertex buffer Lock crash/post-lock failure on Intel GMA X3100.

    - by nj
    Hi, For starters I'm a fairly seasoned graphics programmer but as wel all know, everyone makes mistakes. Unfortunately the codebase is a bit too large to start throwing sensible snippets here and re-creating the whole situation in an isolated CPP/codebase is too tall an order -- for which I am sorry, do not have the time. I'll do my best to explain. B.t.w, I will of course supply specific pieces of code if someone wonders how I'm handling this-or-that! As with all resources in the D3DPOOL_DEFAULT pool, when the device context is taken away from you you'll sooner or later will have to reset your resources. I've built a mechanism to handle this for all relevant resources that's been working for years; but that fact nothingwithstanding I've of course checked, asserted and doubted any assumption since this bug came to light. What happens is as follows: I have a rather large dynamic vertex buffer, exact size 18874368 bytes. This buffer is locked (and discarded fully using the D3DLOCK_DISCARD flag) each frame prior to generating dynamic geometry (isosurface-related, f.y.i) to it. This works fine, until, of course, I start to reset. It might take 1 time, it might take 2 or it might take 5 resets to set off a bug that causes an access violation either on the pointer returned by the Lock() operation on the renewed resource or a plain crash -- regarding a somewhat similar address, but without the offset that it has tacked on to it in the first case because in that case we're somewhere halfway writing -- iside the D3D9 dll Lock() call. I've tested this on other hardware, upgraded my GMA X3100 drivers (using a MacBook with BootCamp) to the latest ones, but I can't reproduce it on any other machine and I'm at a loss about what's wrong here. I have tried to reproduce a similar situation with a similar buffer (I've got a large scratch pad of the same type I filled with quads) and beyond a certain amount of bytes it started to behave likewise. I'm not asking for a solution here but I'm very interested if there are other developers here who have battled with the same foe or maybe some who can point me in some insightful direction, maybe ask some questions that might shed a light on what I may or may not be overlooking. Another interesting artifact is that the vertex buffer starts to bug if I supply both D3DLOCK_DISCARD and D3DLOCK_NOOVERWRITE together which, even though not very logical (you're not going to overwrite if you've just discarded all), gives graphics glitches. Thanks and any corrections are more than welcome. Niels p.s - A friend of mine raised the valid point that it is a huge buffer for onboard video RAM and it's being at least double or triple buffered internally due to it's dynamic nature. On the other hand, the debug output (D3D9 debug DLL + max. warning output) remains silent. p.s 2 - Had it tested on more machines and still works -- it's probably a matter of circumstance: the huge dynamic, internally double/trippled buffered buffer, not a lot of memory and drivers that don't complain when they should.. Unless someone has a better suggestion; I'd still love to hear it :)

    Read the article

  • custom view with layout

    - by user270811
    ok, what i am trying to do is to embed a custom view in the default layout main.xml: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent"> <com.lam.customview.CustomDisplayView android:id="@+id/custom_display_view1" android:layout_width="fill_parent" android:layout_height="fill_parent" /> <LinearLayout android:orientation="horizontal" android:layout_width="fill_parent" android:layout_height="wrap_content"> <Button android:id="@+id/prev" android:layout_width="0dip" android:layout_height="wrap_content" android:layout_weight="50" android:textAppearance="?android:attr/textAppearanceSmall" android:text="@string/prev" /> </LinearLayout> </LinearLayout> as you can see the class is called com.lam.customview.CustomDisplayView, with the id of custom_display_view1. now in the com.lam.customview.CustomDisplayView class, i want to use another layout called custom_display_view.xml because i don't want to programmatically create controls/widgets. custom_display_view.xml is just a button and an image, the content of which i want to change based on certain conditions: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" > <TextView android:id="@+id/display_text_view1" android:layout_width="fill_parent" android:layout_height="wrap_content" android:text="@string/hello" /> <ImageView android:id="@+id/display_image_view1" android:layout_width="wrap_content" android:layout_height="wrap_content"> </ImageView> </LinearLayout> i tried to do: 1) public CustomDisplayView(Context context, AttributeSet attrs) { super(context, attrs); try { // register our interest in hearing about changes to our surface SurfaceHolder holder = getHolder(); holder.addCallback(this); View.inflate(context, R.layout.custom_display_view, null); ... but got this error, "03-08 20:33:15.711: ERROR/onCreate(10879): Binary XML file line #8: Error inflating class java.lang.reflect.Constructor ". 2) public CustomDisplayView(Context context, AttributeSet attrs) { super(context, attrs); try { // register our interest in hearing about changes to our surface SurfaceHolder holder = getHolder(); holder.addCallback(this); View.inflate(context, R.id.custom_display_view1, null); ... but got this error, "03-08 20:28:47.401: ERROR/CustomDisplayView(10806): Resource ID #0x7f050002 type #0x12 is not valid " also, if i do it this way, as someone has suggested, it's not clear to me how the custom_display_view.xml is associated with the custom view class. thanks.
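
    If the view does not actually need to be a SurfaceView, one common pattern, sketched below with illustrative names, is to extend a ViewGroup such as LinearLayout and inflate custom_display_view.xml into it with this as the root. Note also that View.inflate expects a layout id (R.layout.custom_display_view); passing a view id such as R.id.custom_display_view1 is typically what produces the "type #0x12 is not valid" error. The showMessage helper is hypothetical:

```java
import android.content.Context;
import android.util.AttributeSet;
import android.view.LayoutInflater;
import android.widget.ImageView;
import android.widget.LinearLayout;
import android.widget.TextView;

// Sketch: a custom view that owns the text + image defined in custom_display_view.xml.
public class CustomDisplayView extends LinearLayout {

    private TextView displayText;
    private ImageView displayImage;

    public CustomDisplayView(Context context, AttributeSet attrs) {
        super(context, attrs);
        // Inflate the layout into this ViewGroup; "true" attaches the children here.
        LayoutInflater.from(context).inflate(R.layout.custom_display_view, this, true);
        displayText = (TextView) findViewById(R.id.display_text_view1);
        displayImage = (ImageView) findViewById(R.id.display_image_view1);
    }

    // Hypothetical helper showing how the content could be swapped based on conditions.
    public void showMessage(CharSequence text, int imageResId) {
        displayText.setText(text);
        displayImage.setImageResource(imageResId);
    }
}
```

    If the SurfaceView behaviour really is required, the layout cannot be inflated into it at all, since SurfaceView is not a ViewGroup; in that case the usual route is to wrap the SurfaceView and the other widgets together in a parent layout instead.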

    Read the article

  • How to build an offline web app using Flask?

    - by Rafael Alencar
    I'm prototyping an idea for a website that will use the HTML5 offline application cache for certain purposes. The website will be built with Python and Flask and that's where my main problem comes from: I'm working with those two for the first time, so I'm having a hard time getting the manifest file to work as expected. The issue is that I'm getting 404's from the static files included in the manifest file. The manifest itself seems to be downloaded correctly, but the files that it points to are not. This is what is spit out in the console when loading the page: Creating Application Cache with manifest http://127.0.0.1:5000/static/manifest.appcache offline-app:1 Application Cache Checking event offline-app:1 Application Cache Downloading event offline-app:1 Application Cache Progress event (0 of 2) http://127.0.0.1:5000/style.css offline-app:1 Application Cache Error event: Resource fetch failed (404) http://127.0.0.1:5000/style.css The error is in the last line. When the appcache fails even once, it stops the process completely and the offline cache doesn't work. This is how my files are structured: sandbox offline-app offline-app.py static manifest.appcache script.js style.css templates offline-app.html This is the content of offline-app.py: from flask import Flask, render_template app = Flask(__name__) @app.route('/offline-app') def offline_app(): return render_template('offline-app.html') if __name__ == '__main__': app.run(host='0.0.0.0', debug=True) This is what I have in offline-app.html: <!DOCTYPE html> <html manifest="{{ url_for('static', filename='manifest.appcache') }}"> <head> <title>Offline App Sandbox - main page</title> </head> <body> <h1>Welcome to the main page for the Offline App Sandbox!</h1> <p>Some placeholder text</p> </body> </html> This is my manifest.appcache file: CACHE MANIFEST /style.css /script.js I've tried having the manifest file in all different ways I could think of: CACHE MANIFEST /static/style.css /static/script.js or CACHE MANIFEST /offline-app/static/style.css /offline-app/static/script.js None of these worked. The same error was returned every time. I'm certain the issue here is how the server is serving up the files listed in the manifest. Those files are probably being looked up in the wrong place, I guess. I either should place them somewhere else or I need something different in the cache manifest, but I have no idea what. I couldn't find anything online about having HTML5 offline applications with Flask. Is anyone able to help me out?

    Read the article

  • Running out of memory with UIImage creation on an offscreen Bitmap Context by NSOperation

    - by sigsegv
    I have an app with multiple UIView subclasses that acts as pages for a UIScrollView. UIViews are moved back and forth to provide a seamless experience to the user. Since the content of the views is rather slow to draw, it's rendered on a single shared CGBitmapContext guarded by locks by NSOperation subclasses - executed one at once in an NSOperationQueue - wrapped up in an UIImage and then used by the main thread to update the content of the views. -(void)main { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc]init]; if([self isCancelled]) { return; } if(nil == data) { return; } // Buffer is the shared instance of a CG Bitmap Context wrapper class // data is a dictionary CGImageRef img = [buffer imageCreateWithData:data]; UIImage * image = [[UIImage alloc]initWithCGImage:img]; CGImageRelease(img); if([self isCancelled]) { [image release]; return; } NSDictionary * result = [[NSDictionary alloc]initWithObjectsAndKeys:image,@"image",id,@"id",nil]; // target is the instance of the UIView subclass that will use // the image [target performSelectorOnMainThread:@selector(updateContentWithData:) withObject:result waitUntilDone:NO]; [result release]; [image release]; [pool release]; } The updateContentWithData: of the UIView subclass performed on the main thread is just as simple -(void)updateContentWithData:(NSDictionary *)someData { NSDictionary * data = [someData retain]; if([[data valueForKey:@"id"]isEqualToString:[self pendingRequestId]]) { UIImage * image = [data valueForKey:@"image"]; [self setCurrentImage:image]; [self setNeedsDisplay]; } // If the image has not been retained, it should be released together // with the dictionary retaining it [data release]; } The drawLayer:inContext: method of the subclass will just get the CGImage from the UIImage and use it to update the backing layer or part of it. No retain or release is involved in the process. The problem is that after a while I run out of memory. The number of the UIViews is static. CGImageRef and UIImage are created, retained and released correctly (or so it seems to me). Instruments does not show any leaks, just the free memory available dip constantly, rise a few times, and then dip even lower until the application is terminated. The app cycles through about 2-300 of the aforementioned pages before that, but I would expect to have the memory usage reach a more or less stable level of used memory after a bunch of pages have been already skimmed at fast speed or, since the images are up to 3MB in size, deplete way earlier. Any suggestion will be greatly appreciated.

    Read the article

  • server|configuration problem, a php script just die with no error log & no reason

    - by Roberto
    Hi (first of all, thanks for your attention, and sorry for my bad English). I don't think this is a programming error; I think it is a problem with some configuration of the server or something else, but I don't know what. I have a PHP script (running as a Linux process, not through the web browser) that sends SMS via SMPP on port 2055 (using sockets in PHP) and then inserts roughly 10,000 rows into a MySQL database; the script gets its data from an XML file. At first it ran on a shared server (Hostgator is our hosting provider) and worked fine, with no trouble, but 5 months later an error appeared: the process just dies for no reason. The script had only sent and inserted 700 rows into the table, the process showed no warning or error, nothing appears in the error logs, and I hadn't made any change to the script. Hostgator never helped us, so we decided to move the script from the shared server to a dedicated server; I thought it was a memory problem or something like that, but on the dedicated server the problem only got worse: the script dies after sending and inserting just 40 to 50 rows. Some information about this error: the shared server runs Red Hat 4.1.2-46 and the dedicated server runs CentOS 5.4. I have commented out the line that sends the SMS, and the problem remains on the shared server; at the beginning the script was fine, then it started to die after inserting about 700 rows, and now it dies after about 2,500 rows, which is better, but we didn't change anything. On the dedicated server the script still dies after inserting around 40 rows. Before it dies, the script turns into a zombie process, and we don't know why. Its memory usage appears to be 0.3% and its CPU usage 0.7% to 1%. I have changed PHP's memory limit to 128 MB and even to -1 (so PHP has no limit at all), but the problem remains. We have a limit of 50 simultaneous MySQL connections, so I don't think that is the problem. I'm using mysqli to connect from PHP to MySQL. Hostgator reports that they haven't made any change or update to the servers. What could the problem be? What should I do? What should I search for? Is there something in the logic I'm missing? What steps should I follow when managing and debugging processes on Linux? Thank you very much; I think this is not a programming problem, but you have more experience than me, so you can tell me. Thanks! Bye! :)

    Read the article

  • How to split HTML code with javascript or JQuery

    - by Dean
    Hi I'm making a website using JSP and servlets and I have to now break up a list of radio buttons to insert a textarea and a button. I have got the button and textarea to hide and show when you click on the radio button it shows the text area and button. But this only appears at the top and when there are hundreds on the page this will become awkward so i need a way for it to appear underneath. Here is what my HTML looks like when complied: <form action="addSpotlight" method="POST"> <table> <tr><td><input type="radio" value="29" name="publicationIDs" ></td><td>A System For Dynamic Server Allocation in Application Server Clusters, IEEE International Symposium on Parallel and Distributed Processsing with Applications, 2008</td> </tr> <tr><td><input type="radio" value="30" name="publicationIDs" ></td><td>Analysing BitTorrent's Seeding Strategies, 7th IEEE/IFIP International Conference on Embedded and Ubiquitous Computing (EUC-09), 2009</td> </tr> <tr><td><input type="radio" value="31" name="publicationIDs" ></td><td>The Effect of Server Reallocation Time in Dynamic Resource Allocation, UK Performance Engineering Workshop 2009, 2009</td> </tr> <tr><td><input type="radio" value="32" name="publicationIDs" ></td><td>idk, hello, 1992</td> </tr> <tr><td><input type="radio" value="33" name="publicationIDs" ></td><td>sad, safg, 1992</td> </tr> <div class="abstractWriteup"><textarea name="abstract"></textarea> <input type="submit" value="Add Spotlight"></div> </table> </form> Now here is what my JSP looks like: <form action="addSpotlight" method="POST"> <table> <%int i = 0; while(i<ids.size()){%> <tr><td><input type="radio" value="<%=ids.get(i)%>" name="publicationIDs" ></td><td><%=info.get(i)%></td> </tr> <%i++; }%> <div class="abstractWriteup"><textarea name="abstract"></textarea> <input type="submit" value="Add Spotlight"></div> </table> </form> Thanks in Advance Dean

    Read the article

  • Does this mySQL Stored Procedure Work?

    - by Laxmidi
    Hi, I got the following stored procedure from http://dev.mysql.com/doc/refman/5.1/en/functions-that-test-spatial-relationships-between-geometries.html Does this work? CREATE FUNCTION myWithin(p POINT, poly POLYGON) RETURNS INT(1) DETERMINISTIC BEGIN DECLARE n INT DEFAULT 0; DECLARE pX DECIMAL(9,6); DECLARE pY DECIMAL(9,6); DECLARE ls LINESTRING; DECLARE poly1 POINT; DECLARE poly1X DECIMAL(9,6); DECLARE poly1Y DECIMAL(9,6); DECLARE poly2 POINT; DECLARE poly2X DECIMAL(9,6); DECLARE poly2Y DECIMAL(9,6); DECLARE i INT DEFAULT 0; DECLARE result INT(1) DEFAULT 0; SET pX = X(p); SET pY = Y(p); SET ls = ExteriorRing(poly); SET poly2 = EndPoint(ls); SET poly2X = X(poly2); SET poly2Y = Y(poly2); SET n = NumPoints(ls); WHILE i<n DO SET poly1 = PointN(ls, (i+1)); SET poly1X = X(poly1); SET poly1Y = Y(poly1); IF ( ( ( ( poly1X <= pX ) && ( pX < poly2X ) ) || ( ( poly2X <= pX ) && ( pX < poly1X ) ) ) && ( pY > ( poly2Y - poly1Y ) * ( pX - poly1X ) / ( poly2X - poly1X ) + poly1Y ) ) THEN SET result = !result; END IF; SET poly2X = poly1X; SET poly2Y = poly1Y; SET i = i + 1; END WHILE; RETURN result; End; Usage: SET @point = PointFromText('POINT(5 5)') ; SET @polygon = PolyFromText('POLYGON((0 0,10 0,10 10,0 10))') ; SELECT myWithin(@point, @polygon) AS result ; I'm using phpMyAdmin and it blows up when using stored procedures. If this one works, then I'll try to figure out how to call it in php instead. Thanks, Laxmidi

    Read the article

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like: ms = New IO.MemoryStream bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter bin.Serialize(ms, largeGraphOfObjects) dataToSaveToDatabase = ms.ToArray() // put dataToSaveToDatabase in a Sql server BLOB But the memory stream allocates a large buffer from the large memory heap that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), thus avoiding keeping all the data in my process's memory. Likewise for reading the data back... Some more background: this is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we de-serialize them. The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new arrays.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large memory pool fragmentation when we de-serialize the object; I expect there are also other problems with large memory pool fragmentation given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.) Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the speed of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time. Other related questions I have asked: How to stream data from/to SQL Server BLOB fields? Is there a SqlFileStream-like class that works with SQL Server 2005?

    Read the article

  • WPF - ListBox ignores Style When ItemsSource is bound

    - by Andy T
    Hi, I have created styled a ListBox in WPF so that it is rendered as a checkbox list. When I populate the ListBox's items manually, the styling works perfectly. However, when I instead bind the ItemsSource of the ListBox to a static resource (an ItemsControl containing the required items), the styling is completely dropped. Here's the style: <Style x:Key="CheckBoxListStyle" TargetType="ListBox"> <Style.Resources> <Style TargetType="ListBoxItem"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="ListBoxItem"> <Grid Margin="2"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" /> <ColumnDefinition /> </Grid.ColumnDefinitions> <CheckBox IsChecked="{Binding IsSelected, RelativeSource={RelativeSource TemplatedParent}, Mode=TwoWay}"/> <ContentPresenter Grid.Column="1" Margin="2,0,0,0" /> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> </Style.Resources> <Setter Property="ItemsPanel"> <Setter.Value> <ItemsPanelTemplate> <WrapPanel Orientation="Vertical" /> </ItemsPanelTemplate> </Setter.Value> </Setter> <Setter Property="BorderThickness" Value="0" /> <Setter Property="Background" Value="Transparent" /> </Style> Here's the code for the ListBox that shows the style correctly: <ListBox x:Name="ColumnsList" Grid.Column="0" Grid.Row="0" Style="{StaticResource CheckBoxListStyle}" BorderThickness="1"> <ListBox.Items> <ListBoxItem>Test</ListBoxItem> <ListBoxItem>Test2</ListBoxItem> <ListBoxItem>Test3</ListBoxItem> </ListBox.Items> </ListBox> Here's the code for the ListBox that ignores the style: <ListBox x:Name="ColumnsList2" Grid.Column="0" Grid.Row="0" Style="{StaticResource CheckBoxListStyle}" BorderThickness="1" ItemsSource="{Binding Source={StaticResource Test1}, Path=Items}"> </ListBox> Hoping someone can help - I'm pretty new to all this and have tried everything I can think of, but everything I've read leads me to believe that setting ItemsSource should have the same outcome as setting the items manually, so I can't see any reason why this would not work. Thanks, AT

    Read the article

  • WordPress Write Cache Problem with Multiple Sessions

    - by Volomike
    I'm working on a content dripper custom plugin in WordPress that my client asked me to build. He says he wants it to catch a page view event, and if it's the right time of day (24 hours since last post), to pull from a resource file and output another post. He needed it to also raise a flag and prevent other sessions from firing that same snippet of code. So, raise some kind of flag saying, "I'm posting that post, go away other process," and then it makes that post and releases the flag again. However, the strangest thing is occurring when placed under load with multiple sessions hitting the site with page views. It's firing instead of one post -- it's randomly doing like 1, 2, or 3 extra posts, with each one thinking that it was the right time to post because it was 24 hours past the time of the last post. Because it's somewhat random, I'm guessing that the problem is some kind of write caching where the other sessions don't see the raised flag just yet until a couple microseconds pass. The plugin was raising the "flag" by simply writing to the wp_options table with the update_option() API in WordPress. The other user sessions were supposed to read that value with get_option() and see the flag, and then not run that piece of code that creates the post because a given session was already doing it. Then, when done, I lower the flag and the other sessions continue as normal. But what it's doing is letting those other sessions in. To make this work, I was using add_action('loop_start','checkToAddContent'). The odd thing about that function though is that it's called more than once on a page, and in fact some plugins may call it. I don't know if there's a better event to hook. Even still, even if I find an event to hook that only runs once on a page view, I still have multiple sessions to contend with (different users who may view the page at the same time) and I want only one given session to trigger the content post when the post is due on the schedule. I'm wondering if there are any WordPress plugin devs out there who could suggest another event hook to latch on to, and to figure out another way to raise a flag that all sessions would see. I mean, I could use the shared memory API in PHP, but many hosting plans have that disabled. Can't use a cookie or session var because that's only one single session. About the only thing that might work across hosting plans would be to drop a file as a flag, instead. If the file is present, then one session has the flag. If the file is not present, then other sessions can attempt to get the flag. Sure, I could use the file route, but it's kind of immature in my opinion and I was wondering if there's something in WordPress I could do.

    Read the article

  • Error processing Spree sample images - file not recognized by identify command in paperclip geometry.rb:29

    - by purpletonic
    I'm getting an error when I run the Spree sample data. It occurs when Spree tries to load in the product data, specifically the product images. Here's the error I'm getting: * Execute db:load_file loading ruby <GEM DIR>/sample/lib/tasks/../../db/sample/spree/products.rb -- Processing image: ror_tote.jpeg rake aborted! /var/folders/91/63kgbtds2czgp0skw3f8190r0000gn/T/ror_tote.jpeg20121007-21549-2rktq1 is not recognized by the 'identify' command. <GEM DIR>/paperclip-2.7.1/lib/paperclip/geometry.rb:31:in `from_file' <GEM DIR>/spree/core/app/models/spree/image.rb:35:in `find_dimensions' I've made sure ImageMagick is installed correctly, as previously I was having problems with it. Here's the output I'm getting when running the identify command directly. $ identify Version: ImageMagick 6.7.7-6 2012-10-06 Q16 http://www.imagemagick.org Copyright: Copyright (C) 1999-2012 ImageMagick Studio LLC Features: OpenCL ... other usage info omitted ... I also used pry with the pry-debugger and put a breakpoint in geometry.rb inside of Paperclip. Here's what that section of geometry.rb looks like: # Uses ImageMagick to determing the dimensions of a file, passed in as either a # File or path. # NOTE: (race cond) Do not reassign the 'file' variable inside this method as it is likely to be # a Tempfile object, which would be eligible for file deletion when no longer referenced. def self.from_file file file_path = file.respond_to?(:path) ? file.path : file raise(Errors::NotIdentifiedByImageMagickError.new("Cannot find the geometry of a file with a blank name")) if file_path.blank? geometry = begin silence_stream(STDERR) do binding.pry Paperclip.run("identify", "-format %wx%h :file", :file => "#{file_path}[0]") end rescue Cocaine::ExitStatusError "" rescue Cocaine::CommandNotFoundError => e raise Errors::CommandNotFoundError.new("Could not run the `identify` command. Please install ImageMagick.") end parse(geometry) || raise(Errors::NotIdentifiedByImageMagickError.new("#{file_path} is not recognized by the 'identify' command.")) end At the point of my binding.pry statement, the file_path variable is set to the following: file_path => "/var/folders/91/63kgbtds2czgp0skw3f8190r0000gn/T/ror_tote.jpeg20121007-22732-1ctl1g1" I've also double checked that this exists, by opening my finder in this directory, and opened it with preview app; and also that the program can run identify by running %x{identify} in pry, and I receive the same version Version: ImageMagick 6.7.7-6 2012-10-06 Q16 as before. Removing the additional digits (is this a timestamp?) after the file extension and running the Paperclip.run command manually in Pry gives me a different error: Cocaine::ExitStatusError: Command 'identify -format %wx%h :file' returned 1. Expected 0 I've also tried manually updating the Paperclip gem in Spree to 3.0.2 and still get the same error. So, I'm not really sure what else to try. Is there still something incorrect with my ImageMagick setup?

    Read the article

  • TicTacToe AI Making Incorrect Decisions

    - by Chris Douglass
    A little background: as a way to learn multi-node trees in C++, I decided to generate all possible TicTacToe boards and store them in a tree such that the branch beginning at a node contains all boards that can follow from that node, and the children of a node are boards that follow in one move. After that, I thought it would be fun to write an AI to play TicTacToe using that tree as a decision tree. TTT is a solvable problem where a perfect player will never lose, so it seemed like an easy AI to code for my first attempt at one. When I first implemented the AI, I went back and added two fields to each node upon generation: the # of times X will win & the # of times O will win in all children below that node. I figured the best solution was to simply have my AI on each move choose and go down the subtree where it wins the most times. Then I discovered that while it plays perfectly most of the time, there were ways I could beat it. It wasn't a problem with my code, simply a problem with the way I had the AI choose its path. Then I decided to have it choose the subtree with either the maximum wins for the computer or the maximum losses for the human, whichever was more. This made it perform BETTER, but still not perfect. I could still beat it. So I have two ideas and I'm hoping for input on which is better: 1) Instead of maximizing the wins or losses, I could assign values of 1 for a win, 0 for a draw, and -1 for a loss. Then choosing the subtree with the highest value will be the best move because that next node can't be a move that results in a loss. It's an easy change in the board generation, but it retains the same search space and memory usage. Or... 2) During board generation, if there is a board such that either X or O will win in their next move, only the child that prevents that win will be generated. No other child nodes will be considered, and then generation will proceed as normal after that. It shrinks the size of the tree, but then I have to implement an algorithm to determine if there is a one-move win, and I think that can only be done in linear time (making board generation a lot slower, I think?). Which is better, or is there an even better solution?
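
    For what it's worth, idea 1 only becomes airtight if the +1/0/-1 values are propagated up the tree with alternating max/min (the standard minimax rule: the AI takes the child with the maximum value on its own turn and assumes the opponent takes the minimum on theirs) rather than summed over all descendants. A rough C++ sketch of that follows; the Node struct and its field names are invented for illustration and would need to be adapted to the real tree, and it assumes leaves of the tree are finished games scored from the AI's point of view.

        #include <vector>
        #include <algorithm>

        struct Node {
            std::vector<Node*> children;   // boards reachable in one move
            int terminalScore;             // +1 AI win, 0 draw, -1 AI loss; leaves only
        };

        // Value of `node` assuming both sides play perfectly from here on.
        // `aiToMove` is true when it is the AI's turn at this node.
        int minimax(const Node* node, bool aiToMove) {
            if (node->children.empty())
                return node->terminalScore;            // finished game

            int best = aiToMove ? -2 : 2;              // outside the [-1, 1] range
            for (const Node* child : node->children) {
                int v = minimax(child, !aiToMove);
                best = aiToMove ? std::max(best, v) : std::min(best, v);
            }
            return best;
        }

        // Move selection: pick the child with the highest minimax value.
        const Node* chooseMove(const Node* current) {
            const Node* bestChild = nullptr;
            int bestValue = -2;
            for (const Node* child : current->children) {
                int v = minimax(child, /*aiToMove=*/false);   // opponent moves next
                if (v > bestValue) { bestValue = v; bestChild = child; }
            }
            return bestChild;
        }

    With max/min propagation, a move is never chosen if the opponent has any forced win beneath it, which is exactly the property the win/loss counting lacks. Idea 2 becomes an optimization on top of this rather than a correctness fix.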

    Read the article

  • cannot retrieve effect.fx file

    - by numerical25
    I am having issues loading my effect.fx from directx. When I step into my application, my ID3D10Effect *m_pDefaultEffect; pointer remains empty. the address remains at 0x000000 below is my code #pragma once #include "stdafx.h" #include "resource.h" #include "d3d10.h" #include "d3dx10.h" #include "dinput.h" #define MAX_LOADSTRING 100 class RenderEngine { protected: RECT m_screenRect; //direct3d Members ID3D10Device *m_pDevice; // The IDirect3DDevice10 // interface ID3D10Texture2D *m_pBackBuffer; // Pointer to the back buffer ID3D10RenderTargetView *m_pRenderTargetView; // Pointer to render target view IDXGISwapChain *m_pSwapChain; // Pointer to the swap chain RECT m_rcScreenRect; // The dimensions of the screen ID3D10Texture2D *m_pDepthStencilBuffer; ID3D10DepthStencilState *m_pDepthStencilState; ID3D10DepthStencilView *m_pDepthStencilView; //transformation matrixs D3DXMATRIX g_mtxWorld; D3DXMATRIX g_mtxView; D3DXMATRIX g_mtxProj; //Effect members ID3D10Effect *m_pDefaultEffect; ID3D10EffectTechnique *m_pDefaultTechnique; ID3DX10Font *m_pFont; // The font used for rendering text // Sprites used to hold font characters ID3DX10Sprite *m_pFontSprite; ATOM RegisterEngineClass(); void DoFrame(float); bool LoadEffects(); public: static HINSTANCE m_hInst; HWND m_hWnd; int m_nCmdShow; TCHAR m_szTitle[MAX_LOADSTRING]; // The title bar text TCHAR m_szWindowClass[MAX_LOADSTRING]; // the main window class name void DrawTextString(int x, int y, D3DXCOLOR color, const TCHAR *strOutput); //static functions static LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam); static INT_PTR CALLBACK About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam); bool InitWindow(); bool InitDirectX(); bool InitInstance(); int Run(); void ShutDown(); RenderEngine() { m_screenRect.right = 800; m_screenRect.bottom = 600; } }; below is the implementation bool RenderEngine::LoadEffects() { HRESULT hr; ID3D10Blob *pErrors = 0; // Create the default rendering effect hr = D3DX10CreateEffectFromFile(L"effect.fx", NULL, NULL, "fx_4_0", D3D10_SHADER_DEBUG, 0, m_pDevice, NULL, NULL, &m_pDefaultEffect, &pErrors, NULL); if(pErrors)// at this point, m_pDefaultEffect is still empty but pErrors returns data which means there is {//errors return false; //ends here } //m_pDefaultTechnique = m_pDefaultEffect->GetTechniqueByName("DefaultTechnique"); return true; } My directx Device does work. My effect.fx file is in the same folder as my solution files (.cpp and header files)
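
    When D3DX10CreateEffectFromFile fails, the blob it returns in pErrors is a plain char buffer holding the compiler/loader message, e.g. an HLSL syntax error, or a note that effect.fx could not be opened because the runtime working directory (usually the project directory when running under the Visual Studio debugger) is not where the file lives. Below is a hedged sketch of the error path that dumps that message, built on the RenderEngine class shown above; only the error handling differs from the original, and, as far as I can tell, the blob can also hold warnings when the call succeeds, so testing FAILED(hr) rather than pErrors alone avoids treating a clean compile as a failure.

        // Sketch of LoadEffects() with the error blob dumped to the debugger
        // output. Assumes the RenderEngine declaration and includes shown above.
        bool RenderEngine::LoadEffects()
        {
            ID3D10Blob *pErrors = NULL;

            HRESULT hr = D3DX10CreateEffectFromFile(L"effect.fx", NULL, NULL,
                                                    "fx_4_0", D3D10_SHADER_DEBUG, 0,
                                                    m_pDevice, NULL, NULL,
                                                    &m_pDefaultEffect, &pErrors, NULL);

            if (FAILED(hr))
            {
                if (pErrors)
                {
                    // The message explains *why* the effect would not load.
                    OutputDebugStringA(static_cast<const char*>(pErrors->GetBufferPointer()));
                    pErrors->Release();
                }
                return false;
            }

            if (pErrors)
                pErrors->Release();   // may only hold warnings on success

            m_pDefaultTechnique = m_pDefaultEffect->GetTechniqueByName("DefaultTechnique");
            return true;
        }

    Reading that message in the Output window should make it clear whether the fix is a path issue (copy effect.fx next to the executable or to the debug working directory) or a shader compilation error inside the .fx itself.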

    Read the article
