Search Results

Search found 11093 results on 444 pages for 'issues'.


  • Is there a reason why a base class decorated with XmlInclude would still throw a type unknown exception when serialized?

    - by Tedford
    I will simplify the code to save space but what is presented does illustrate the core problem. I have a class which has a property that is a base type. There exist 3 derived classes which could be assigned to that property. If I assign any of the derived classes to the container then the XmlSerializer throws the dreaded "The type xxx was not expected. Use the XmlInclude or SoapInclude attribute to specify types that are not known statically." exception when attempting to serialize the container. However, my base class is already decorated with that attribute so I figure there must be an additional "hidden" requirement. The really odd part is that the default WCF serializer has no issues with this class hierarchy. The Container class [DataContract] [XmlRoot(ElementName = "TRANSACTION", Namespace = Constants.Namespace)] public class PaymentSummaryRequest : CommandRequest { /// <summary> /// Gets or sets the summary. /// </summary> /// <value>The summary.</value> /// <remarks></remarks> [DataMember] public PaymentSummary Summary { get; set; } /// <summary> /// Initializes a new instance of the <see cref="PaymentSummaryRequest"/> class. /// </summary> public PaymentSummaryRequest() { Mechanism = CommandMechanism.PaymentSummary; } } The base class [DataContract] [XmlInclude(typeof(xxxPaymentSummary))] [XmlInclude(typeof(yyyPaymentSummary))] [XmlInclude(typeof(zzzPaymentSummary))] [KnownType(typeof(xxxPaymentSummary))] [KnownType(typeof(xxxPaymentSummary))] [KnownType(typeof(zzzPaymentSummary))] public abstract class PaymentSummary { } One of the derived classes [DataContract] public class xxxPaymentSummary : PaymentSummary { } The serialization code var serializer = new XmlSerializer(typeof(PaymentSummaryRequest)); serializer.Serialize(Console.Out,new PaymentSummaryRequest{Summary = new xxxPaymentSummary{}}); The Exception System.InvalidOperationException: There was an error generating the XML document. --- System.InvalidOperationException: The type xxxPaymentSummary was not expected. Use the XmlInclude or SoapInclude attribute to specify types that are not known statically. at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriterPaymentSummaryRequest.Write13_PaymentSummary(String n, String ns, PaymentSummary o, Boolean isNullable, Boolean needType) at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriterPaymentSummaryRequest.Write14_PaymentSummaryRequest(String n, String ns, PaymentSummaryRequest o, Boolean isNullable, Boolean needType) at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationWriterPaymentSummaryRequest.Write15_TRANSACTION(Object o) --- End of inner exception stack trace --- at System.Xml.Serialization.XmlSerializer.Serialize(XmlWriter xmlWriter, Object o, XmlSerializerNamespaces namespaces, String encodingStyle, String id) at System.Xml.Serialization.XmlSerializer.Serialize(TextWriter textWriter, Object o, XmlSerializerNamespaces namespaces) at UserQuery.RunUserAuthoredQuery() in c:\Users\Tedford\AppData\Local\Temp\uqacncyo.0.cs:line 47
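
    One thing that can help narrow this down is a standalone repro: the trimmed sketch below (type names invented, not taken from the code above) serializes a derived instance through a base-typed property both by relying on [XmlInclude] and by passing the derived types to the XmlSerializer(Type, Type[]) constructor, which avoids the attribute lookup entirely. Comparing either variant against the real hierarchy may show what the real classes do differently.

      using System;
      using System.Xml.Serialization;

      // Invented minimal hierarchy for illustration; not the poster's real types.
      [XmlInclude(typeof(CardPaymentSummary))]
      public abstract class PaymentSummaryBase
      {
          public decimal Amount { get; set; }
      }

      public class CardPaymentSummary : PaymentSummaryBase
      {
          public string CardType { get; set; }
      }

      public class PaymentRequest
      {
          public PaymentSummaryBase Summary { get; set; }
      }

      public static class Demo
      {
          public static void Main()
          {
              var request = new PaymentRequest { Summary = new CardPaymentSummary { Amount = 10m, CardType = "Visa" } };

              // Variant 1: rely on [XmlInclude] on the base class.
              new XmlSerializer(typeof(PaymentRequest)).Serialize(Console.Out, request);

              // Variant 2: hand the derived types to the serializer directly; no attribute lookup involved.
              new XmlSerializer(typeof(PaymentRequest), new[] { typeof(CardPaymentSummary) }).Serialize(Console.Out, request);
          }
      }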

    Read the article

  • How to deal with the Position property of a C# stream

    - by CapsicumDreams
    The (entire) documentation for the position property on a stream says: When overridden in a derived class, gets or sets the position within the current stream. The Position property does not keep track of the number of bytes from the stream that have been consumed, skipped, or both. That's it. OK, so we're fairly clear on what it doesn't tell us, but I'd really like to know what it in fact does stand for. What is 'the position' for? Why would we want to alter or read it? If we change it - what happens? In a practical example, I have a stream that periodically gets written to, and I have a thread that attempts to read from it (ideally ASAP). From reading many SO issues, I reset the position field to zero to start my reading. Once this is done: Does this affect where the writer to this stream is going to attempt to put the data? Do I need to keep track of the last write position myself? (i.e. if I set the position to zero to read, does the writer begin to overwrite everything from the first byte?) If so, do I need a semaphore/lock around this 'position' field (subclassing, perhaps?) due to my two threads accessing it? If I don't handle this property, does the writer just overflow the buffer? Perhaps I don't understand the Stream itself - I'm regarding it as a FIFO pipe: shove data in at one end, and suck it out at the other. If it's not like this, then do I have to keep copying the data past my last read (i.e. from position 0x84 on) back to the start of my buffer? I've seriously tried to research all of this for quite some time - but I'm new to .NET. Perhaps the Streams have a long, proud (undocumented) history that everyone else implicitly understands. But for a newcomer, it's like reading the manual to your car, and finding out: The accelerator pedal affects the volume of fuel and air sent to the fuel injectors. It does not affect the volume of the entertainment system, or the air pressure in any of the tires, if fitted. Technically true, but seriously, what we want to know is that if we mash it to the floor, we go faster.
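
    In practice, a seekable Stream has a single cursor that reads and writes share, so rewinding Position to zero does mean a later Write on the same stream object starts overwriting from the front; it is not a FIFO pipe, and a producer/consumer pair is usually better served by a queue or by two separate streams. A small MemoryStream sketch (values are only illustrative) showing the shared cursor:

      using System;
      using System.IO;
      using System.Text;

      class PositionDemo
      {
          static void Main()
          {
              var ms = new MemoryStream();
              byte[] first = Encoding.ASCII.GetBytes("hello");
              ms.Write(first, 0, first.Length);
              Console.WriteLine(ms.Position);            // 5 - the cursor sits just past the last write

              ms.Position = 0;                           // rewind to read what was written
              var buffer = new byte[5];
              int read = ms.Read(buffer, 0, buffer.Length);
              Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, read));   // "hello"

              ms.Position = 0;                           // rewinding and writing again overwrites, it does not append
              byte[] second = Encoding.ASCII.GetBytes("HEL");
              ms.Write(second, 0, second.Length);
              Console.WriteLine(Encoding.ASCII.GetString(ms.ToArray()));      // "HELlo"
          }
      }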

    Read the article

  • Multiple layouts in Rails [Newbie Q]

    - by BriteLite
    Hi. As a newb, I decided to build a "home inventory" application. I am now stuck on how to programmatically select a layout based on what type of item it is when viewing it in a browser. According to my planning, so far I should have created a few models to represent types of items I can find in my home: Furniture, Electronics and Books. class Book < ActiveRecord::Base end class Furniture < ActiveRecord::Base end class Electronic < ActiveRecord::Base end Now the Books model has things like isbn, pages, address, and category. Furniture model has things like color, price, address, and category. Electronics has things like name, voltage, address, and category. Here is where I got confused. I know the property address is going to be the same for all of them. I also know that, I will need to create multiple "layouts" for 3 different types of items to show the different properties of said items with appropriate graphics and stylesheets. But how will I go about deciding which category the item is so I can determine which layout to render. According to me, this is how I will do it: class DisplayController < ApplicationController def display @item = Params[:item] if @item.category = "electronics" render :layout => 'electronics' end end In my routes.rb map.display ':item', :controller => 'display', :action => 'display' I only seem to have one concern with this, I probably will add a lot of categories later on and think there should be a more DRY-esque way of dealing, rather than hardcoding them. I understand that I need to add into my layout html tags to display relevant information for that particular category. ----Questions---- Is this the right way to approach this type of problem. Will this approach be compatible when I decide to add a gem like *thinking_sphinx* to run search. What issues do you see with my approach and how can I make it better. I was reading something about "Polymorphic Assoc", does that apply in this case, since category exist for all items? Also, I was trying to get a routes to render a URL like "http://localhost/living-room-tv"

    Read the article

  • SQL Server architecture guidance

    - by Liam
    Hi, We are designing a new version of our existing product on a new schema. It's an internal web application with possibly 100 concurrent users (max). This will run on a SQL Server 2008 database. One of the discussion items recently is whether we should have a single database or split the database across 2 separate databases for performance reasons. The database could grow anywhere from 50-100GB over 5 years. We are developers and not DBAs, so it would be nice to get some general guidance. [I know the answer is not simple as it depends on the schema, archiving policy, amount of data etc.] Option 1 Single Main Database [This is my preferred option]. The plan would be to have all the tables in a single database and possibly to use file groups and partitioning to separate the data if required across multiple disks. [Use schemas if appropriate]. This should deal with the performance concerns. One of the comments regarding this was that a single server instance would still be processing this data, so there would still be a processing bottleneck. For reporting we could have a separate reporting DB, but this is still being discussed. Option 2 Split the database into 2 separate databases DB1 - Customers, Accounts, Customer resources etc DB2 - This would contain the bulk of the data [i.e. vehicle tracking data, financial transaction tables etc]. These tables would typically contain a lot of data. [It could reside on a separate server if required] This plan would involve keeping the main data in a smaller database [DB1] and retaining the [mainly] read-only transaction-type data in a separate DB [DB2]. The UI would mainly read from DB1 and thus be more responsive. [I'm aware that this option makes it harder for referential integrity to be enforced.] Points for consideration As we are at the design stage we can at least make proper use of indexes to deal with performance issues, so that's why option 1 is attractive to me and it's more of a standard approach. For both options we are considering implementing an archiving database. Apologies for the long question. In summary the question is 1 DB or 2? Thanks in advance, Liam

    Read the article

  • Thread sleep and thread join.

    - by Dhruv Gairola
    hi guys, if i put a thread to sleep in a loop, netbeans gives me a caution saying Invoking Thread.sleep in loop can cause performance problems. However, if i were to replace the sleep with join, no such caution is given. Both versions compile and work fine tho. My code is below (check the last few lines for "Thread.sleep() vs t.join()"). public class Test{ //Display a message, preceded by the name of the current thread static void threadMessage(String message) { String threadName = Thread.currentThread().getName(); System.out.format("%s: %s%n", threadName, message); } private static class MessageLoop implements Runnable { public void run() { String importantInfo[] = { "Mares eat oats", "Does eat oats", "Little lambs eat ivy", "A kid will eat ivy too" }; try { for (int i = 0; i < importantInfo.length; i++) { //Pause for 4 seconds Thread.sleep(4000); //Print a message threadMessage(importantInfo[i]); } } catch (InterruptedException e) { threadMessage("I wasn't done!"); } } } public static void main(String args[]) throws InterruptedException { //Delay, in milliseconds before we interrupt MessageLoop //thread (default one hour). long patience = 1000 * 60 * 60; //If command line argument present, gives patience in seconds. if (args.length > 0) { try { patience = Long.parseLong(args[0]) * 1000; } catch (NumberFormatException e) { System.err.println("Argument must be an integer."); System.exit(1); } } threadMessage("Starting MessageLoop thread"); long startTime = System.currentTimeMillis(); Thread t = new Thread(new MessageLoop()); t.start(); threadMessage("Waiting for MessageLoop thread to finish"); //loop until MessageLoop thread exits while (t.isAlive()) { threadMessage("Still waiting..."); //Wait maximum of 1 second for MessageLoop thread to //finish. /*******LOOK HERE**********************/ Thread.sleep(1000);//issues caution unlike t.join(1000) /**************************************/ if (((System.currentTimeMillis() - startTime) > patience) && t.isAlive()) { threadMessage("Tired of waiting!"); t.interrupt(); //Shouldn't be long now -- wait indefinitely t.join(); } } threadMessage("Finally!"); } } As i understand it, join waits for the other thread to complete, but in this case, arent both sleep and join doing the same thing? Then why does netbeans throw the caution?

    Read the article

  • C# Multithread-Safe Class Design

    - by Robert
    I'm trying to designing a class and I'm having issues with accessing some of the nested fields and I have some concerns with how multithread safe the whole design is. I would like to know if anyone has a better idea of how this should be designed or if any changes that should be made? using System; using System.Collections; namespace SystemClass { public class Program { static void Main(string[] args) { System system = new System(); //Seems like an awkward way to access all the members dynamic deviceInstance = (((DeviceType)((DeviceGroup)system.deviceGroups[0]).deviceTypes[0]).deviceInstances[0]); Boolean checkLocked = deviceInstance.locked; //Seems like this method for accessing fields might have problems with multithreading foreach (DeviceGroup dg in system.deviceGroups) { foreach (DeviceType dt in dg.deviceTypes) { foreach (dynamic di in dt.deviceInstances) { checkLocked = di.locked; } } } } } public class System { public ArrayList deviceGroups = new ArrayList(); public System() { //API called to get names of all the DeviceGroups deviceGroups.Add(new DeviceGroup("Motherboard")); } } public class DeviceGroup { public ArrayList deviceTypes = new ArrayList(); public DeviceGroup() {} public DeviceGroup(string deviceGroupName) { //API called to get names of all the Devicetypes deviceTypes.Add(new DeviceType("Keyboard")); deviceTypes.Add(new DeviceType("Mouse")); } } public class DeviceType { public ArrayList deviceInstances = new ArrayList(); public bool deviceConnected; public DeviceType() {} public DeviceType(string DeviceType) { //API called to get hardwareIDs of all the device instances deviceInstances.Add(new Mouse("0001")); deviceInstances.Add(new Keyboard("0003")); deviceInstances.Add(new Keyboard("0004")); //Start thread CheckConnection that updates deviceConnected periodically } public void CheckConnection() { //API call to check connection and returns true this.deviceConnected = true; } } public class Keyboard { public string hardwareAddress; public bool keypress; public bool deviceConnected; public Keyboard() {} public Keyboard(string hardwareAddress) { this.hardwareAddress = hardwareAddress; //Start thread to update deviceConnected periodically } public void CheckKeyPress() { //if API returns true this.keypress = true; } } public class Mouse { public string hardwareAddress; public bool click; public Mouse() {} public Mouse(string hardwareAddress) { this.hardwareAddress = hardwareAddress; } public void CheckClick() { //if API returns true this.click = true; } } }
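
    One direction worth considering (a sketch only, with invented member names and a deliberately trimmed shape rather than a drop-in replacement): give the device instances a common base class so the nested loops need neither casts nor dynamic, use typed List<T> collections instead of ArrayList, and mark the flags that background threads update as volatile (or guard them with a lock) so readers always see the latest value.

      using System.Collections.Generic;

      public abstract class DeviceInstance
      {
          public string HardwareAddress { get; protected set; }

          // volatile is enough for a simple flag written by one polling thread and read by others;
          // anything more complex than a flag should be guarded by a lock instead.
          protected volatile bool connected;
          public bool DeviceConnected { get { return connected; } }
      }

      public class Keyboard : DeviceInstance
      {
          public Keyboard(string hardwareAddress) { HardwareAddress = hardwareAddress; }
          public void CheckConnection() { connected = true; }   // result of the real API call goes here
      }

      public class Mouse : DeviceInstance
      {
          public Mouse(string hardwareAddress) { HardwareAddress = hardwareAddress; }
      }

      public class DeviceType
      {
          // A typed list removes the casts and the 'dynamic' access in the nested foreach loops.
          private readonly List<DeviceInstance> instances = new List<DeviceInstance>();
          public IList<DeviceInstance> DeviceInstances { get { return instances; } }
      }

    Iterating these lists from one thread while another thread adds instances would still need a lock or a snapshot copy; the sketch only addresses the flag reads highlighted in the question.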

    Read the article

  • PHP, MySQL: Display only required parts of my website in sister website

    - by Devner
    Hi all, Now I have my website built on PHP & Mysql. Consider this like a forum. Now when a user posts a reply in my website 1 (ex. www.website1.com), I want to be able to show the starting thread and it's related replies in a sister website of mine. I want to do this in a way that it does not show the rest of the page & other page contents (like logo etc.). I don't think iframe would be a solution because an iframe would embed the whole page and the users visiting my sister website (totally different domain i.e. www.website2.com) would be able to see all the page contents, like logo etc. I want to avoid that. I want to make them see only limited information from website 1 and only the info. that I intend. I hope that makes sense. In a way, you could say that I am trying to replicate my 1 website, and show only a limited part of it. Users browsing 2nd website can post a reply in the 2nd website and it should automatically be posted & visible to the visitors of the website 1. Users of website 1 should not know that a user of website 2 has posted it. They would feel that some user from website 1 has posted it. Do I have to use 2 separate mysql DB or just 1? I think it would be problematic if I am trying to use different DB. I also feel I might have to face DB connectivity issues as I can connect to only 1 DB at a time. It's basically like users of website1.com should feel that they are replying to users of website1.com & users of website2.com should feel that they are replying to users of website2.com. (I need it this way to bridge the gap between them). At the same time I want to make the front end of the websites different so that they don't feel that they are replying to some other users outside the domain. These websites would be under my control and I will have access to the source code at any time. If I need to change the source code, these changes are welcome. Is this really possible? Thank you in advance.

    Read the article

  • BeanCreationException in Spring Framework .WAR deploy to Tomcat 6 on Ubuntu 9.10

    - by JediPotPie
    I am in the process of switching from a Windows box to Ubunutu and I want to run my own local instance of Tomcat 6. I have installed Tomcat 6 without any basic issues. When I try to deploy a .war file that I had running on the Tomcat 6 instance on my Windows box I am getting the following error.... Apr 26, 2010 3:30:27 PM org.apache.catalina.core.ApplicationContext log INFO: Initializing Spring root WebApplicationContext Apr 26, 2010 3:30:27 PM org.apache.catalina.core.StandardContext listenerStart SEVERE: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener org.springframework.beans.factory.CannotLoadBeanClassException: Cannot find class [com.ameren.eam.ldap.LdapDAONovellImpl] for bean with name 'testNovellDao' defined in ServletContext resource [/WEB-INF/applicationContext.xml]; nested exception is java.lang.ClassNotFoundException: com.ameren.eam.ldap.LdapDAONovellImpl at org.springframework.beans.factory.support.AbstractBeanFactory.resolveBeanClass(AbstractBeanFactory.java:1173) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.predictBeanType(AbstractAutowireCapableBeanFactory.java:479) at org.springframework.beans.factory.support.AbstractBeanFactory.isFactoryBean(AbstractBeanFactory.java:787) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:393) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:736) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:369) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:261) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3934) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4429) at org.apache.catalina.manager.ManagerServlet.start(ManagerServlet.java:1249) at org.apache.catalina.manager.HTMLManagerServlet.start(HTMLManagerServlet.java:612) at org.apache.catalina.manager.HTMLManagerServlet.doGet(HTMLManagerServlet.java:136) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.catalina.security.SecurityUtil$1.run(SecurityUtil.java:269) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAsPrivileged(Subject.java:537) at org.apache.catalina.security.SecurityUtil.execute(SecurityUtil.java:301) at org.apache.catalina.security.SecurityUtil.doAsPrivilege(SecurityUtil.java:162) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:283) at org.apache.catalina.core.ApplicationFilterChain.access$000(ApplicationFilterChain.java:56) at org.apache.catalina.core.ApplicationFilterChain$1.run(ApplicationFilterChain.java:189) at java.security.AccessController.doPrivileged(Native Method) at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:185) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:525) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454) at java.lang.Thread.run(Thread.java:636) Caused by: java.lang.ClassNotFoundException: com.ameren.eam.ldap.LdapDAONovellImpl at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1399) at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1245) at org.springframework.util.ClassUtils.forName(ClassUtils.java:230) at org.springframework.beans.factory.support.AbstractBeanDefinition.resolveBeanClass(AbstractBeanDefinition.java:381) at org.springframework.beans.factory.support.AbstractBeanFactory.resolveBeanClass(AbstractBeanFactory.java:1170) ... 40 more The class that is not being found is located at /WEB-INF/classes/com/ameren/eam/ldap/LdapDAONovellImpl.class relative to /WEB-INF/applicationContext.xml. I cannot figure out why it cannot find the class? Any ideas would be great.

    Read the article

  • NHibernate Many-to-Many Mapping not working

    - by ClutchDude
    I have a Nhibernate mapping file for a simple user/role mapping. Here are the mapping files: Users.hbm.xml <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Sample.Persistence" namespace="Sample.Persistence.Model"> <class name="User" table="Users"> <id name="UserKey"> <generator class="identity"/> </id> <property name="UserName" column="UserName" type="String" /> <property name="Password" column="Password" type="Byte[]" /> <property name="FirstName" column="FirstName" type="String" /> <property name="LastName" column="LastName" type="String" /> <property name="Email" column="Email" type="String" /> <property name="Active" column="Active" type="Boolean" /> <property name="Locked" column="Locked" type="Boolean" /> <property name="LoginFailures" column="LoginFailures" type="int" /> <property name="LockoutDate" column="LockoutDate" type="DateTime" generated="insert" /> <property name="Expired" column="Expired" type="Boolean" generated="insert"/> <set name="Roles" table="UsersRolesBridge" lazy="false"> <key column="UserKey" /> <many-to-many class="Role" not-found="exception" column="RoleKey" /> </set> </class> </hibernate-mapping> Role.hbm.xml <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="Sample.Persistence" namespace="Sample.Persistence.Model"> <class name="Role" table="Roles"> <id name="RoleKey"> <generator class="identity"/> </id> <property name="Name" column="Name" type="String" /> <set name="Users" inverse="true" atable="UsersRolesBridge" lazy="false" > <key column="RoleKey" /> <many-to-many class="User" column="UserKey" /> </set> </class> </hibernate-mapping> I am able to retrieve roles for each user via NHibernate but when I go to save a new object, the roles are not saved in the Bridge table. The user is created and insert with no issues. I've checked that the Role collection, a field on the user, is being populated with the proper rolekey before the Session.Save() is called. There is no exception thrown as well.
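
    One thing worth ruling out on the saving side (this is a sketch, not the poster's code): with an identity generator the Users row is inserted as soon as Save() runs, but the UsersRolesBridge rows are only written when the session flushes, so a save path without an explicit transaction or flush shows exactly this symptom - user saved, bridge table empty, no exception.

      using NHibernate;

      public class UserRepository
      {
          private readonly ISessionFactory sessionFactory;

          public UserRepository(ISessionFactory sessionFactory)
          {
              this.sessionFactory = sessionFactory;
          }

          public void Save(User user)
          {
              using (ISession session = sessionFactory.OpenSession())
              using (ITransaction tx = session.BeginTransaction())
              {
                  session.Save(user);
                  tx.Commit();   // the commit flushes the session, which writes the many-to-many rows
              }
          }
      }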

    Read the article

  • How do I use Ruby metaprogramming to refactor this common code?

    - by James Wenton
    I inherited a project with a lot of badly-written Rake tasks that I need to clean up a bit. Because the Rakefiles are enormous and often prone to bizarre nonsensical dependencies, I'm simplifying and isolating things a bit by refactoring everything to classes. Specifically, that pattern is the following: namespace :foobar do desc "Frozz the foobar." task :frozzify do unless Rake.application.lookup('_frozzify') require 'tasks/foobar' Foobar.new.frozzify end Rake.application['_frozzify'].invoke end # Above pattern repeats many times. end # Several namespaces, each with tasks that follow this pattern. In tasks/foobar.rb, I have something that looks like this: class Foobar def frozzify() # The real work happens here. end # ... Other tasks also in the :foobar namespace. end For me, this is great, because it allows me to separate the task dependencies from each other and to move them to another location entirely, and I've been able to drastically simplify things and isolate the dependencies. The Rakefile doesn't hit a require until you actually try to run a task. Previously this was causing serious issues because you couldn't even list the tasks without it blowing up. My problem is that I'm repeating this idiom very frequently. Notice the following patterns: For every namespace :xyz_abc, there is a corresponding class in tasks/... in the file tasks/[namespace].rb, with a class name that looks like XyzAbc. For every task in a particular namespace, there is an identically named method in the associated namespace class. For example, if namespace :foo_bar has a task :apples, you would expect to see def apples() ... inside the FooBar class, which itself is in tasks/foo_bar.rb. Every task :t defines a "meta-task" _t (that is, the task name prefixed with an underscore) which is used to do the actual work. I still want to be able to specify a desc-description for the tasks I define, and that will be different for each task. And, of course, I have a small number of tasks that don't follow the above pattern at all, so I'll be specifying those manually in my Rakefile. I'm sure that this can be refactored in some way so that I don't have to keep repeating the same idiom over and over, but I lack the experience to see how it could be done. Can someone give me an assist?

    Read the article

  • Are there any platforms where using structure copy on an fd_set (for select() or pselect()) causes problems?

    - by Jonathan Leffler
    The select() and pselect() system calls modify their arguments (the 'struct fd_set *' arguments), so the input value tells the system which file descriptors to check and the return values tell the programmer which file descriptors are currently usable. If you are going to call them repeatedly for the same set of file descriptors, you need to ensure that you have a fresh copy of the descriptors for each call. The obvious way to do that is to use a structure copy: struct fd_set ref_set_rd; struct fd_set ref_set_wr; struct fd_set ref_set_er; ... ...code to set the reference fd_set_xx values... ... while (!done) { struct fd_set act_set_rd = ref_set_rd; struct fd_set act_set_wr = ref_set_wr; struct fd_set act_set_er = ref_set_er; int bits_set = select(max_fd, &act_set_rd, &act_set_wr, &act_set_er, &timeout); if (bits_set > 0) { ...process the output values of act_set_xx... } } My question: Are there any platforms where it is not safe to do a structure copy of the struct fd_set values as shown? I'm concerned lest there be hidden memory allocation or anything unexpected like that. (There are macros/functions FD_SET(), FD_CLR(), FD_ZERO() and FD_ISSET() to mask the internals from the application.) I can see that MacOS X (Darwin) is safe; other BSD-based systems are likely to be safe, therefore. You can help by documenting other systems that you know are safe in your answers. (I do have minor concerns about how well the struct fd_set would work with more than 8192 open file descriptors - the default maximum number of open files is only 256, but the maximum number is 'unlimited'. Also, since the structures are 1 KB, the copying code is not dreadfully efficient, but then running through a list of file descriptors to recreate the input mask on each cycle is not necessarily efficient either. Maybe you can't do select() when you have that many file descriptors open, though that is when you are most likely to need the functionality.) There's a related SO question - asking about 'poll() vs select()' which addresses a different set of issues from this question.

    Read the article

  • RIM BlackBerry: Record 3GP video

    - by pankaj_shukla
    Hi All, I am writing an application that can record a 3GP video. I have tried both MMAPI and the Invoke API, but have the following issues. Using MMAPI: 1. When I record to a stream, it records video in the RIMM streaming format; when I try to play this video, the player gives the error "Unsupported media format.". 2. When I record to a file, it creates a file of size 0. Using the Invoke API: 1. In MMS mode it does not allow recording a video longer than 30 seconds. 2. In normal mode the size of the file is very large. 3. Once I invoke the camera application I do not have any control over the application. Here is my source code: _player = javax.microedition.media.Manager.createPlayer("capture://video?encoding=video/3gpp&mode=mms"); // I have tried every encoding returned from the System.getProperty("video.encodings") method _player.realize(); _videoControl = (VideoControl) _player.getControl("VideoControl"); _recordControl = (RecordControl) _player.getControl("RecordControl"); _volumeControl = (VolumeControl) _player.getControl("VolumeControl"); String videoPath = System.getProperty("fileconn.dir.videos"); if (videoPath == null) { videoPath = "file:///store/home/user/videos/"; } _recordControl.setRecordLocation(videoPath + "RecordedVideo.3gp"); _player.addPlayerListener(this); Field videoField = (Field) _videoControl.initDisplayMode( VideoControl.USE_GUI_PRIMITIVE, "net.rim.device.api.ui.Field"); _videoControl.setVisible(true); add(videoField); _player.start(); On start menu item selection: try { _recordControl.startRecord(); } catch (Exception e) { _player.close(); showAlert(e.getClass() + " " + e.getMessage()); } On stop menu item selection: try { _recordControl.commit(); } catch (Exception e) { _player.close(); showAlert(e.getClass() + " " + e.getMessage()); } Please let me know if I am doing something wrong. Thanks, Pankaj

    Read the article

  • NSFetchedResultsChangeUpdate crashes when called on a searched tableview

    - by Zachary Fisher
    So I nearly have this thing figured out, but I am stumbling over the NSFetchedResultsChangeUpdate when I update my managedObjectContext from a detail view that was entered after searching the table. I have a tableview generated from a core data set. I can enter a detail view from this table and make changes without any issue. I can also search the table and make changes MOST of the time without any issues. However, on certain objects, I get an "Exception was caught during Core Data change processing". I tracked this down to the NSFetchedResultsChangeUpdate. I'm using the following code: case NSFetchedResultsChangeUpdate: if (searchTermForSegue) { NSLog(@"index info:%@.....",theIndexPath); NSLog(@"crashing at the next line"); [self fetchedResultsController:self.searchFetchedResultsController configureCell:[tableView cellForRowAtIndexPath:theIndexPath] atIndexPath:theIndexPath]; break; } else { [self fetchedResultsController:controller configureCell:[tableView cellForRowAtIndexPath:theIndexPath] atIndexPath:theIndexPath]; } break; When the table is not being searched, it runs the else method and that works 100% of the time. When the table is being searched, it runs the if (searchTermForSegue) and that works most of the time, but not always. I logged theIndexPath and discovered the following: When it works, theIndexPath is correctly reporting the objects indexPat, when it fails, the wrong theIndexPath has been called. For example, if I do a search that narrows the tableView to 3 sections, 2 items in first, 1 in second, 1 in third, I get the following nslog: On first object: index info:<NSIndexPath 0xb0634d0> 2 indexes [0, 0]..... on second object: index info:<NSIndexPath 0xb063e70> 2 indexes [0, 1]..... on third object: index info:<NSIndexPath 0xb042880> 2 indexes [1, 0]..... but on the last object: index info:<NSIndexPath 0x9665790> 2 indexes [2, 17]..... it should be calling [2, 0] Note that I am simply updating these objects, not deleting them or adding new ones. Any thoughts would be appreciated!

    Read the article

  • TestNG - Factories and Dataproviders

    - by Tim K
    Background Story I'm working at a software firm developing a test automation framework to replace our old spaghetti tangled system. Since our system requires a login for almost everything we do, I decided it would be best to use @BeforeMethod, @DataProvider, and @Factory to setup my tests. However, I've run into some issues. Sample Test Case Lets say the software system is a baseball team roster. We want to test to make sure a user can search for a team member by name. (Note: I'm aware that BeforeMethods don't run in any given order -- assume that's been taken care of for now.) @BeforeMethod public void setupSelenium() { // login with username & password // acknowledge announcements // navigate to search page } @Test(dataProvider="players") public void testSearch(String playerName, String searchTerm) { // search for "searchTerm" // browse through results // pass if we find playerName // fail (Didn't find the player) } This test case assumes the following: The user has already logged on (in a BeforeMethod, most likely) The user has already navigated to the search page (trivial, before method) The parameters to the test are associated with the aforementioned login The Problems So lets try and figure out how to handle the parameters for the test case. Idea #1 This method allows us to associate dataproviders with usernames, and lets us use multiple users for any specific test case! @Test(dataProvider="players") public void testSearch(String user, String pass, String name, String search) { // login with user/pass // acknowledge announcements // navigate to search page // ... } ...but there's lots of repetition, as we have to make EVERY function accept two extra parameters. Not to mention, we're also testing the acknowledge announcements feature, which we don't actually want to test. Idea #2 So lets use the factory to initialize things properly! class BaseTestCase { public BaseTestCase(String user, String password, Object[][] data); } class SomeTest { @Factory public void ... } With this, we end up having to write one factory per test case... Although, it does let us have multiple users per test-case. Conclusion I'm about fresh out of ideas. There was another idea I had where I was loading data from an XML file, and then calling the methods from a program... but its getting silly. Any ideas?

    Read the article

  • Is it possible to use the second part of this code for repository patterns and generics?

    - by newToCSharp
    Is there any issues in using version 2,to get the same results as version 1. Or is this just bad coding. Any Ideas public class Customer { public int CustomerID { get; set; } public string EmailAddress { get; set; } int Age { get; set; } } public interface ICustomer { void AddNewCustomer(Customer Customer); void AddNewCustomer(string EmailAddress, int Age); void RemoveCustomer(Customer Customer); } public class BALCustomer { private readonly ICustomer dalCustomer; public BALCustomer(ICustomer dalCustomer) { this.dalCustomer = dalCustomer; } public void Add_A_New_Customer(Customer Customer) { dalCustomer.AddNewCustomer(Customer); } public void Remove_A_Existing_Customer(Customer Customer) { dalCustomer.RemoveCustomer(Customer); } } public class CustomerDataAccess : ICustomer { public void AddNewCustomer(Customer Customer) { // MAKE DB CONNECTION AND EXECUTE throw new NotImplementedException(); } public void AddNewCustomer(string EmailAddress, int Age) { // MAKE DB CONNECTION AND EXECUTE throw new NotImplementedException(); } public void RemoveCustomer(Customer Customer) { // MAKE DB CONNECTION AND EXECUTE throw new NotImplementedException(); } } // VERSION 2 public class Customer_New : DataRespository<CustomerDataAccess> { public int CustomerID { get; set; } public string EmailAddress { get; set; } public int Age { get; set; } } public class DataRespository<T> where T:class,new() { private T item = new T(); public T Execute { get { return item; } set { item = value; } } public void Update() { //TO BE CODED } public void Save() { //TO BE CODED } public void Remove() { //TO BE CODED } } class Program { static void Main(string[] args) { Customer_New cus = new Customer_New() { Age = 10, EmailAddress = "[email protected]" }; cus.Save(); cus.Execute.RemoveCustomer(new Customer()); // Repository Version Customer customer = new Customer() { EmailAddress = "[email protected]", CustomerID = 10 }; BALCustomer bal = new BALCustomer(new CustomerDataAccess()); bal.Add_A_New_Customer(customer); } } }
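
    For comparison, the more conventional shape of this idea keeps the entity free of persistence concerns and puts the generic constraint on a repository interface instead of having the entity inherit from the data-access helper; a sketch (type names invented) of what version 2 appears to be reaching for:

      using System.Collections.Generic;

      // The entity stays a plain object; it does not inherit from the repository.
      public class Customer
      {
          public int CustomerID { get; set; }
          public string EmailAddress { get; set; }
          public int Age { get; set; }
      }

      // One generic interface describes persistence for any entity type.
      public interface IRepository<T> where T : class
      {
          void Add(T item);
          void Remove(T item);
          IEnumerable<T> GetAll();
      }

      // A concrete implementation per data source (stubbed here).
      public class CustomerRepository : IRepository<Customer>
      {
          public void Add(Customer item) { /* DB insert goes here */ }
          public void Remove(Customer item) { /* DB delete goes here */ }
          public IEnumerable<Customer> GetAll() { return new List<Customer>(); }
      }

      public static class Usage
      {
          public static void Main()
          {
              IRepository<Customer> repository = new CustomerRepository();
              repository.Add(new Customer { Age = 10, EmailAddress = "user@example.com" });
          }
      }

    The practical difference from version 2 is that Customer stays usable and testable without a database, and the repository can be swapped behind the interface in the same way BALCustomer already swaps ICustomer in version 1.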

    Read the article

  • How to Transfer Large File from MS Word Add-In (VBA) to Web Server?

    - by Ian Robinson
    Overview I have a Microsoft Word Add-In, written in VBA (Visual Basic for Applications), that compresses a document and all of it's related contents (embedded media) into a zip archive. After creating the zip archive it then turns the file into a byte array and posts it to an ASMX web service. This mostly works. Issues The main issue I have is transferring large files to the web site. I can successfully upload a file that is around 40MB, but not one that is 140MB (timeout/general failure). A secondary issue is that building the byte array in the VBScript Word Add-In can fail by running out of memory on the client machine if the zip archive is too large. Potential Solutions I am considering the following options and am looking for feedback on either option or any other suggestions. Option One Opening a file stream on the client (MS Word VBA) and reading one "chunk" at a time and transmitting to ASMX web service which assembles the "chunks" into a file on the server. This has the benefit of not adding any additional dependencies or components to the application, I would only be modifying existing functionality. (Fewer dependencies is better as this solution should work in a variety of server environments and be relatively easy to set up.) Question: Are there examples of doing this or any recommended techniques (either on the client in VBA or in the web service in C#/VB.NET)? Option Two I understand WCF may provide a solution to the issue of transferring large files by "chunking" or streaming data. However, I am not very familiar with WCF, and am not sure what exactly it is capable of or if I can communicate with a WCF service from VBA. This has the downside of adding another dependency (.NET 3.0). But if using WCF is definitely a better solution I may not mind taking that dependency. Questions: Does WCF reliably support large file transfers of this nature? If so, what does this involve? Any resources or examples? Are you able to call a WCF service from VBA? Any examples?
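
    For option one, the server side usually reduces to an append-style web method that the VBA client calls once per chunk; the rough ASMX-flavoured sketch below uses invented names, does no authentication or path validation, and is only meant to show the shape of the receiving end.

      using System.IO;
      using System.Web.Services;

      public class UploadService : WebService
      {
          // Each call appends one chunk; the client decides the chunk size (e.g. 1-4 MB)
          // and sends isFirstChunk = true on the first call so a stale file is not extended.
          [WebMethod]
          public void AppendChunk(string fileName, byte[] chunk, bool isFirstChunk)
          {
              string path = Path.Combine(@"C:\Uploads", Path.GetFileName(fileName));
              FileMode mode = isFirstChunk ? FileMode.Create : FileMode.Append;

              using (var stream = new FileStream(path, mode, FileAccess.Write))
              {
                  stream.Write(chunk, 0, chunk.Length);
              }
          }
      }

    On the VBA side the matching loop reads a fixed-size block from the zip with a binary Get and posts it, so the whole archive never has to sit in a single byte array, which also addresses the client-side out-of-memory problem.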

    Read the article

  • Multi domain rails app. How to intelligently use MVC?

    - by denial
    Background: We have app a, b, and plan to add more apps into this same application. The apps are similar enough they could share many views, assets, and actions. Currently a,b live in a single rails app(2.3.10). c will be similar enough that it could also be in this rails app. The problem: As we continue to add more apps to this one app, there's going to be too much case logic that the app will soon become a nightmare to maintain. There will also be potential namespace issues. However, the apps are very similar in function and layout, it also makes sense to keep them in one app so that it's one app to maintain(since roughly 50% of site look/functionality will be shared). What we are trying to do is keep this as clean as possible so it's easy for multiple teams to work on and easy to maintain. Some things we've thought about/are trying: Engines. Make each app an engine. This would let us base routes on the domain. It also allows us to pull out controllers, models and views for the specific app. This solution does not seem ideal as we won't be reusing the apps any time soon. And explicitly stating the host in the routes doesn't seem right. Skinning/themes. The auth logic would be different between the apps. Each user model would be different. So it's not just a skinning problem. In app/view add folder sitea for sitea views, siteb for siteb views and so on. Do the same for controllers and models. This is still pretty messy and since it didn't follow naming conventions, it did not work with rails so nicely and made much of the code messier. Making another rails app. We just didn't want to maintain the same controller or view in 2 apps if they are identical. What we want to do is make the app intelligently use a controller based on the host. So there would be a sessions controller for each app, and perhaps some parent session controller for shared logic(not needed now). In each of these session controllers, it handles authentication for that specific app. So if the domain is a.mysite.com, it would use session controller for app a and know to use app a's views,models,controllers. And if the domain is b.mysite, it would use the session controller for b. And there would be a user model for a and user model for b, which also would be determined by the domain. Does anyone have any suggestions or experience with this situation? And ideally using rails 2.3.x as updating to rails 3 isn't an option right now.

    Read the article

  • How do I find the Next Closest Date to today from a list of dates in a Plist on iOS?

    - by user1173823
    Situation: In short, I have a football schedule. I would like to use a custom cell which provides more info for only the next game date in the schedule. Issue: How do I find only the next closest game in the schedule (for iOS)? I've watched the WWDC 2013 video for "Solutions to Common Date and Time Issues" however this primarily applies to the Mac. I've searched numerous posts here and some are close but not what I need to find ONLY the next date from my list of dates in the schedule. From other posts I see where I can compare two specific dates, but this is not what I want to do. I want to find the next closest date that is equal to or after today from a list of dates. This is where I am now. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { //Populate the table from the plist NSDictionary *season = _schedContentArray[indexPath.section]; NSArray *schedule = season[@"Schedule"]; NSDictionary *game = schedule[indexPath.row]; //find the closest game date after today's date ?? NSString *gameDateStr = game[@"GameDate"]; NSCalendar *calendar = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar]; NSDateFormatter *dateFormatter = [[NSDateFormatter alloc] init]; dateFormatter.calendar=calendar; [dateFormatter setDateFormat:@"MM/dd/yy"]; NSDate *today = [NSDate date]; NSDate *gameDate = [dateFormatter dateFromString:gameDateStr]; //NSString *nextGame = NSLog(@"game date is %@",gameDate); The NSLog returns the game dates (except for the open date): 2013-11-11 16:10:05.979 Clemson Football[24060:70b] game date is 2013-08-31 04:00:00 +0000 2013-11-11 16:10:05.982 Clemson Football[24060:70b] game date is 2013-09-07 04:00:00 +0000 2013-11-11 16:10:05.985 Clemson Football[24060:70b] game date is (null) 2013-11-11 16:10:05.987 Clemson Football[24060:70b] game date is 2013-09-19 04:00:00 +0000 2013-11-11 16:10:05.988 Clemson Football[24060:70b] game date is 2013-09-28 04:00:00 +0000 2013-11-11 16:10:05.990 Clemson Football[24060:70b] game date is 2013-10-05 04:00:00 +0000 2013-11-11 16:10:05.992 Clemson Football[24060:70b] game date is 2013-10-12 04:00:00 +0000 2013-11-11 16:10:05.993 Clemson Football[24060:70b] game date is 2013-10-19 04:00:00 +0000 2013-11-11 16:10:05.995 Clemson Football[24060:70b] game date is 2013-10-26 04:00:00 +0000 2013-11-11 16:10:05.996 Clemson Football[24060:70b] game date is 2013-11-02 04:00:00 +0000 2013-11-11 16:10:05.998 Clemson Football[24060:70b] game date is 2013-11-09 05:00:00 +0000 2013-11-11 16:10:06.000 Clemson Football[24060:70b] game date is 2013-11-14 05:00:00 +0000 2013-11-11 16:10:06.001 Clemson Football[24060:70b] game date is 2013-11-23 05:00:00 +0000 2013-11-11 16:10:06.003 Clemson Football[24060:70b] game date is 2013-11-30 05:00:00 +0000 2013-11-11 16:10:06.005 Clemson Football[24060:70b] game date is 2013-12-07 05:00:00 +0000 Thanks in advance for any assistance you can provide. This seems like it should be simple but has been fairly frustrating. Let me know if you need additional info.

    Read the article

  • [C++] Multiple inheritance from template class

    - by Tom P.
    Hello, I'm having issues with multiple inheritance from different instantiations of the same template class. Specifically, I'm trying to do this: template <class T> class Base { public: Base() : obj(NULL) { } virtual ~Base() { if( obj != NULL ) delete obj; } template <class T> T* createBase() { obj = new T(); return obj; } protected: T* obj; }; class Something { // ... }; class SomethingElse { // ... }; class Derived : public Base<Something>, public Base<SomethingElse> { }; int main() { Derived* d = new Derived(); Something* smth1 = d->createBase<Something>(); SomethingElse* smth2 = d->createBase<SomethingElse>(); delete d; return 0; } When I try to compile the above code, I get the following errors: 1>[...](41) : error C2440: '=' : cannot convert from 'SomethingElse *' to 'Something *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1> [...](71) : see reference to function template instantiation 'T *Base<Something>::createBase<SomethingElse>(void)' being compiled 1> with 1> [ 1> T=SomethingElse 1> ] 1>[...](43) : error C2440: 'return' : cannot convert from 'Something *' to 'SomethingElse *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast The issue seems to be ambiguity due to member obj being inherited from both Base< Something and Base< SomethingElse , and I can work around it by disambiguating my calls to createBase: Something* smth1 = d->Base<Something>::createBase<Something>(); SomethingElse* smth2 = d->Base<SomethingElse>::createBase<SomethingElse>(); However, this solution is dreadfully impractical, syntactically speaking, and I'd prefer something more elegant. Moreover, I'm puzzled by the first error message. It seems to imply that there is an instantiation createBase< SomethingElse in Base< Something , but how is that even possible? Any information or advice regarding this issue would be much appreciated.

    Read the article

  • JPA database structure for internationalisation

    - by IrishDubGuy
    I am trying to get a JPA implementation of a simple approach to internationalisation. I want to have a table of translated strings that I can reference in multiple fields in multiple tables. So all text occurrences in all tables will be replaced by a reference to the translated strings table. In combination with a language id, this would give a unique row in the translated strings table for that particular field. For example, consider a schema that has entities Course and Module as follows :- Course int course_id, int name, int description Module int module_id, int name The course.name, course.description and module.name are all referencing the id field of the translated strings table :- TranslatedString int id, String lang, String content That all seems simple enough. I get one table for all strings that could be internationalised and that table is used across all the other tables. How might I do this in JPA, using EclipseLink 2.4? I've looked at an embedded ElementCollection, a la this... JPA 2.0: Mapping a Map - it isn't exactly what I'm after because it looks like it is relating the translated strings table to the pk of the owning table. This means I can only have one translatable string field per entity (unless I add new join columns into the translatable strings table, which defeats the point; it's the opposite of what I am trying to do). I'm also not clear on how this would work across entities; presumably the id of each entity would have to use a database-wide sequence to ensure uniqueness of the translatable strings table. BTW, I tried the example as laid out in that link and it didn't work for me - as soon as the entity had a localizedString map added, persisting it caused the client side to bomb, but with no obvious error on the server side and nothing persisted in the DB :S I've been around the houses on this for about 9 hours so far. I've looked at this Internationalization with Hibernate which appears to be trying to do the same thing as the link above (without the table definitions it's hard to see what he achieved). Any help would be gratefully received at this point... Edit 1 - re AMS's answer below, I'm not sure that really addresses the issue. In his example it leaves the storing of the description text to some other process. The idea of this type of approach is that the entity object takes the text and locale and this (somehow!) ends up in the translatable strings table. In the first link I gave, the guy is attempting to do this by using an embedded map, which I feel is the right approach. His way though has two issues - one, it doesn't seem to work! And two, if it did work, it is storing the FK in the embedded table instead of the other way round (I think; I can't get it to run so I can't see exactly how it persists). I suspect the correct approach ends up with a map reference in place of each text that needs translating (the map being locale-content), but I can't see how to do this in a way that allows for multiple maps in one entity (without having corresponding multiple columns in the translatable strings table)...

    Read the article

  • Issue with blocking the UI during an onchange request - prevents other events from firing

    - by jfrobishow
    I am having issues with the jQuery blockUI plugin and firing two events that are (I think, unless I am losing it) unrelated. Basically I have textboxes with onchange events bound to them. The event is responsible for blocking the UI, doing the ajax call and on success unblocking the UI. The ajax is saving the text in memory. The other control is a button with an onclick event which also blocks the UI, fires an ajax request saving what's in memory to the database and on success unblocks the UI. Both of these work fine separately. The issue arises when I trigger the onchange by clicking on the button. Then only the onchange is fired and the onclick is ignored. I can change the text in the textbox, click on the link, and if jQuery.blockUI() is present the onchange alone is fired and the save is never called. If I remove the blockUI both functions are called. Here's a fully working example where you can see the issue. Please note the setTimeout calls are there from when I was trying to simulate the ajax delay, but the issue happens without them. <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script> <script src="http://github.com/malsup/blockui/raw/master/jquery.blockUI.js?v2.31"></script> <script> function doSomething(){ $.blockUI(); alert("doing something"); //setTimeout(function(){ $.unblockUI(); //},500); } function save(){ $.blockUI(); //setTimeout(function(){ alert("saving"); $.unblockUI(); //}, 1000); } </script> </head> <body> <input type="text" onchange="doSomething();"> <a href="#" onclick="save()">save</a> </body> </html>

    Read the article

  • Very simple test view in MonoTouch draws a line using Core Graphics but view content is not shown

    - by Krumelur
    Hi, I give up now on this very simple test I've been trying to run. I want to add a subview to my window which does nothing but draw a line from one corner of the iPhone's screen to the other and then, using touchesMoved() it is supposed to draw a line from the last to the current point. The issues: 1. Already the initial line is not visible. 2. When using Interface Builder, the initial line is visible, but drawRect() is never called, even if I call SetNeedsDisplay(). It can't be that hard...can somebody fix the code below to make it work? In main.cs in FinishedLaunching(): oView = new TestView(); oView.AutoresizingMask = UIViewAutoresizing.FlexibleWidth | UIViewAutoresizing.FlexibleHeight; oView.Frame = new System.Drawing.RectangleF(0, 0, 320, 480); window.AddSubview(oView); window.MakeKeyAndVisible (); The TestView.cs: using System; using MonoTouch.UIKit; using MonoTouch.CoreGraphics; using System.Drawing; using MonoTouch.CoreAnimation; using MonoTouch.Foundation; namespace Test { public class TestView : UIView { public TestView () : base() { } public override void DrawRect (RectangleF area, UIViewPrintFormatter formatter) { CGContext oContext = UIGraphics.GetCurrentContext(); oContext.SetStrokeColor(UIColor.Red.CGColor.Components); oContext.SetLineWidth(3.0f); this.oLastPoint.Y = UIScreen.MainScreen.ApplicationFrame.Size.Height - this.oLastPoint.Y; this.oCurrentPoint.Y = UIScreen.MainScreen.ApplicationFrame.Size.Height - this.oCurrentPoint.Y; oContext.StrokeLineSegments(new PointF[] {this.oLastPoint, this.oCurrentPoint }); oContext.Flush(); oContext.RestoreState(); Console.Out.WriteLine("Current X: {0}, Y: {1}", oCurrentPoint.X.ToString(), oCurrentPoint.Y.ToString()); Console.Out.WriteLine("Last X: {0}, Y: {1}", oLastPoint.X.ToString(), oLastPoint.Y.ToString()); } private PointF oCurrentPoint = new PointF(0, 0); private PointF oLastPoint = new PointF(320, 480); public override void TouchesMoved (MonoTouch.Foundation.NSSet touches, UIEvent evt) { base.TouchesMoved (touches, evt); UITouch oTouch = (UITouch)touches.AnyObject; this.oCurrentPoint = oTouch.LocationInView(this); this.oLastPoint = oTouch.PreviousLocationInView(this); this.SetNeedsDisplay(); } } }
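
    One detail worth checking in the code above: in MonoTouch the override that corresponds to drawRect: is Draw(RectangleF), while DrawRect(RectangleF, UIViewPrintFormatter) maps to the print-formatter variant and is not called for normal on-screen drawing, which would explain both symptoms. A minimal sketch of the view (the endpoint values are just illustrative):

      using System.Drawing;
      using MonoTouch.CoreGraphics;
      using MonoTouch.Foundation;
      using MonoTouch.UIKit;

      public class LineView : UIView
      {
          PointF lastPoint = new PointF(0f, 0f);
          PointF currentPoint = new PointF(320f, 480f);

          // Draw(RectangleF) is the override UIKit invokes when the view needs repainting.
          public override void Draw(RectangleF rect)
          {
              base.Draw(rect);

              CGContext ctx = UIGraphics.GetCurrentContext();
              ctx.SetRGBStrokeColor(1f, 0f, 0f, 1f);
              ctx.SetLineWidth(3f);
              ctx.MoveTo(lastPoint.X, lastPoint.Y);
              ctx.AddLineToPoint(currentPoint.X, currentPoint.Y);
              ctx.StrokePath();
          }

          public override void TouchesMoved(NSSet touches, UIEvent evt)
          {
              base.TouchesMoved(touches, evt);
              var touch = (UITouch)touches.AnyObject;
              currentPoint = touch.LocationInView(this);
              lastPoint = touch.PreviousLocationInView(this);
              SetNeedsDisplay();   // schedules another Draw(RectangleF) call
          }
      }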

    Read the article

  • Convert string from getline into a number

    - by haskellguy
    I am trying to create a 2D array with vectors. I have a file that has a set of numbers on each line. So what I did was implement a split function so that every time there is a new number (separated by \t) it splits it off and adds it to the vector: vector<double> &split(const string &s, char delim, vector<double> &elems) { stringstream ss(s); string item; while (getline(ss, item, delim)) { cout << item << endl; double number = atof(item.c_str()); cout << number; elems.push_back(number); } return elems; } vector<double> split(const string &s, char delim) { vector<double> elems; split(s, delim, elems); return elems; } After that I simply iterate through it. int main() { ifstream file("./data/file.txt"); string row; vector< vector<double> > matrix; int line_count = -1; while (getline(file, row)) { line_count++; if (line_count <= 4) continue; vector<double> cols = split(row, '\t'); matrix.push_back(cols); } ... } Now my issue is in this bit here: while (getline(ss, item, delim)) { cout << item << endl; double number = atof(item.c_str()); cout << number; This is where item.c_str() gets converted to a 0. Shouldn't that still be a string having the same value as item? It works on a separate example if I go straight from string to c_string, but when I use this getline I end up in this error situation. Any hints?

    Read the article

  • Web Services, Memory Leaks and CRM

    - by Neil
    Hi, I have a website that allows users to upload a csv file. This calls a service that reads the information from the csv, puts it into DynamicEntity objects and calls the CRM service to create/update entities in CRM. When this service creates/updates an entity this kicks off other plugins to apply certain business rules. These rules can also create or update entities in CRM. The issue here is that the handle count of the w3wp.exe process that the website is calling increases every time an entity is created or updated, and it never comes back down. I tried putting garbage collection code in the business rules and this reduces the handle count of the CRM w3wp process (run by the Network Service), but not the other w3wp process. Should I have Dispose methods on the Web Service that calls the CRM service? I hope that makes sense. I'm not overly familiar with memory management issues so any help is appreciated. Can anybody give me some tips on how to stop this from occurring? Thanks, Neil -- EDIT: Okay, well, the handle count goes up when I call the Service.Create(DynamicEntity) method. I don't think placing any code here would be beneficial. When I exit the method/class/service that contains this call the handle count stays as it is. What I need to know is whether this is something I should be managing or is it something CRM takes care of (or doesn't take care of, but I can't do anything about it). -- Another edit: Right, this is how it works. 1) We have CRM and its related services 2) We have another service independent of CRM that uses the CRM services (number 1 above) to create entities based on csv info passed into it 3) We have a website that allows a user to upload a csv, and calls service no 2 above to create/update entities in CRM 4) We have plugins fired by CRM which use service 1 above to create/update entities. So the user uploads a csv to the website (3); this fires service (2). When service 2 creates an entity using service 1, service 4 fires. Service 4 also uses service 1 to create entities, and when these services are called (using the Service.Create() method) the handle count of the process increases. When the method/class/services finish the handle count remains the same, and so when the whole process occurs again the handle count will increase again.
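
    If the Service here is the generated CrmService SOAP proxy, one low-risk experiment is to scope it with using so the proxy and its network resources are released per batch instead of waiting for finalization; SoapHttpClientProtocol-derived proxies are IDisposable. The sketch below assumes the standard CRM 4 web-service proxy types, and the class names, method name and URL are illustrative only.

      using System.Collections.Generic;

      // Sketch only: CrmService and DynamicEntity are the CRM SDK proxy types,
      // referenced however the project already generates them.
      public class CsvImporter
      {
          public void ImportRows(IEnumerable<DynamicEntity> rows)
          {
              using (CrmService service = new CrmService())
              {
                  service.Url = "http://crm/mscrmservices/2007/crmservice.asmx";   // illustrative URL
                  service.UseDefaultCredentials = true;

                  foreach (DynamicEntity row in rows)
                  {
                      service.Create(row);   // one proxy reused for the whole batch, then disposed
                  }
              }
          }
      }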

    Read the article

  • WPF PathGeometry/RotateTransform optimization

    - by devinb
    I am having performance issues when rendering/rotating WPF triangles. If I have a WPF triangle being displayed that is to be rotated by some angle around a centre point, I can do it one of two ways: Programmatically determine the points and their offset in the backend, and use XAML to simply place them on the canvas where they belong; it would look like this: <Path Stroke="Black"> <Path.Data> <PathGeometry> <PathFigure StartPoint ="{Binding CalculatedPointA, Mode=OneWay}"> <LineSegment Point="{Binding CalculatedPointB, Mode=OneWay}" /> <LineSegment Point="{Binding CalculatedPointC, Mode=OneWay}" /> <LineSegment Point="{Binding CalculatedPointA, Mode=OneWay}" /> </PathFigure> </PathGeometry> </Path.Data> </Path> Generate the 'same' triangle every time, and then use a RenderTransform (Rotate) to put it where it belongs. In this case, the rotation calculations are hidden from me, because I don't have any access to how they are being done. <Path Stroke="Black"> <Path.Data> <PathGeometry> <PathFigure StartPoint ="{Binding TriPointA, Mode=OneWay}"> <LineSegment Point="{Binding TriPointB, Mode=OneWay}" /> <LineSegment Point="{Binding TriPointC, Mode=OneWay}" /> <LineSegment Point="{Binding TriPointA, Mode=OneWay}" /> </PathFigure> </PathGeometry> </Path.Data> <Path.RenderTransform> <RotateTransform CenterX="{Binding Centre.X, Mode=OneWay}" CenterY="{Binding Centre.Y, Mode=OneWay}" Angle="{Binding Orientation, Mode=OneWay}" /> </Path.RenderTransform> </Path> My question is: which one is faster? I know I should test it myself, but how do I measure the render time of objects at such granularity? I would need to be able to time how long the actual rendering takes for the form, but since I'm not the one that's kicking off the redraw, I don't know how to capture the start time.
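
    On the measuring question, one approach that avoids guessing when a redraw starts is to time the gaps between CompositionTarget.Rendering callbacks, which fire once per frame WPF actually composes; running the same animation with each variant and comparing the average gap gives a like-for-like number. A rough sketch (window and field names invented, assuming the usual MainWindow.xaml partial):

      using System;
      using System.Diagnostics;
      using System.Windows;
      using System.Windows.Media;

      public partial class MainWindow : Window
      {
          private readonly Stopwatch frameClock = Stopwatch.StartNew();
          private long lastTicks;

          public MainWindow()
          {
              InitializeComponent();
              // Fires once per composed frame; the gap between calls approximates
              // how long the previous frame's layout and render work took.
              CompositionTarget.Rendering += OnRendering;
          }

          private void OnRendering(object sender, EventArgs e)
          {
              long now = frameClock.ElapsedTicks;
              double frameMs = (now - lastTicks) * 1000.0 / Stopwatch.Frequency;
              lastTicks = now;
              Debug.WriteLine("frame: " + frameMs.ToString("F2") + " ms");
          }
      }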

    Read the article
