Search Results

Search found 12107 results on 485 pages for 'pinned objects'.

  • Split NSData objects into other NSData objects with a given size

    - by Cedric Vandendriessche
    I have an NSData object that is approximately 1000 kB. Now I want to transfer it via Bluetooth, and that would work better if I had, let's say, 10 objects of 100 kB each. It comes to mind that I should use the -subdataWithRange: method of NSData. I haven't really worked with NSRange; I know how it works, but reading from a given location with a length of 'to end of file'... I've no idea how to do that. Some code on how to split this into multiple 100 kB NSData objects would really help me out here. (It probably involves the -length method to see how many chunks should be made?) Thank you in advance.

    Read the article

  • Creating C++ objects

    - by Phenom
    I noticed that there are two ways to create C++ objects: BTree *btree = new BTree; and BTree btree; From what I can tell, the only difference is in how class members are accessed (the . vs. the -> operator), and that when the first way is used, private integers get initialized to 0. Which way is better, what's the difference, and how do you know when to use one or the other?

    Read the article

  • How to count JavaScript array objects?

    - by Nikita Sumeiko
    When I have a JavaScript array like this: var member = { "mother": { "name" : "Mary", "age" : "48" }, "father": { "name" : "Bill", "age" : "50" }, "brother": { "name" : "Alex", "age" : "28" } } how do I count the objects in this array?! I mean, how do I get a count of 3, since there are only 3 objects inside: mother, father, brother?!

    Read the article

  • How are value objects saved and loaded?

    - by yeraycaballero
    Since there aren't repositories for value objects, how can I load all value objects? Suppose we are modeling a blog application and we have these classes: Post (entity), Comment (value object), Tag (value object), PostsRepository (repository). I know that when I save a new post, its tags are saved with it in the same table. But how can I load all tags of all posts? Should PostsRepository have a method to load all tags? That is what I usually do, but I want to hear other opinions.
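
    A minimal sketch of one way this is often answered, assuming a LINQ-queryable source of posts; the Post, Tag and PostsRepository members below are placeholders, not code from the question:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public class Tag                    // value object (placeholder definition)
      {
          public string Name { get; set; }
      }

      public class Post                   // entity (placeholder definition)
      {
          public Guid Id { get; set; }
          public IList<Tag> Tags { get; set; }
      }

      public class PostsRepository
      {
          private readonly IQueryable<Post> posts;

          public PostsRepository(IQueryable<Post> posts)
          {
              this.posts = posts;
          }

          // Value objects have no repository of their own, so the aggregate
          // root's repository exposes them by flattening the posts' tags.
          // Distinct() only removes duplicates if Tag implements value equality.
          public IList<Tag> LoadAllTags()
          {
              return posts.SelectMany(p => p.Tags).Distinct().ToList();
          }
      }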

    Read the article

  • Suggestions on how to map from Domain (ORM) objects to Data Transfer Objects (DTO)

    - by FryHard
    The current system that I am working on makes use of Castle ActiveRecord to provide ORM (object-relational mapping) between the domain objects and the database. This is all well and good and at most times actually works well! The problem comes about with Castle ActiveRecord's support for asynchronous execution, well, more specifically the SessionScope that manages the session that objects belong to. Long story short, bad stuff happens! We are therefore looking for a way to easily convert (think automagically) from the domain objects (which know that a DB exists and care) to the DTO objects (which know nothing about the DB and care not for sessions, mapping attributes or all things ORM). Does anyone have suggestions on doing this? To start with I am looking for a basic one-to-one mapping of objects: the domain object Person would be mapped to, say, PersonDTO. I do not want to do this manually since it is a waste. Obviously reflection comes to mind, but I am hoping, with some of the better IT knowledge floating around this site, that something "cooler" will be suggested. Oh, I am working in C#, and the ORM objects, as said before, are mapped with Castle ActiveRecord. Example code: At @ajmastrean's request I have linked to an example that I have (badly) mocked together. The example has a capture form, capture form controller, domain objects, ActiveRecord repository and an async helper. It is slightly big (3MB) because I included the ActiveRecord DLLs needed to get it running. You will need to create a database called ActiveRecordAsync on your local machine or just change the .config file. Basic details of the example: The capture form has a reference to the controller private CompanyCaptureController MyController { get; set; } On initialise of the form it calls MyController.Load() private void InitForm () { MyController = new CompanyCaptureController(this); MyController.Load(); } This will return back to a method called LoadCompleted() public void LoadCompleted (Company loadCompany) { _context.Post(delegate { CurrentItem = loadCompany; bindingSource.DataSource = CurrentItem; bindingSource.ResetCurrentItem(); //TODO: This line will throw the exception since the session scope used to fetch loadCompany is now gone. grdEmployees.DataSource = loadCompany.Employees; }, null); } This is where the "bad stuff" occurs, since we are using the child list of Company that is set to lazy load. The controller has a Load method that was called from the form; it then calls the async helper to asynchronously call the LoadCompany method and then return to the capture form's LoadCompleted method. public void Load () { new AsyncListLoad<Company>().BeginLoad(LoadCompany, Form.LoadCompleted); } The LoadCompany() method simply makes use of the repository to find a known company. public Company LoadCompany() { return ActiveRecordRepository<Company>.Find(Setup.company.Identifier); } The rest of the example is rather generic: it has two domain classes which inherit from a base class, a setup file to insert some data, and the repository to provide the ActiveRecordMediator abilities.
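
    As a rough illustration of the "basic one-to-one mapping" case only (a hedged sketch, not the poster's solution), a reflection-based copier that maps same-named, compatible properties from a domain object onto a DTO could look like this; Person and PersonDTO are the names used in the question:

      using System.Reflection;

      public static class SimpleMapper
      {
          // Copies every readable source property onto a writable destination
          // property that has the same name and a compatible type.
          public static TDto Map<TDomain, TDto>(TDomain source) where TDto : new()
          {
              var dto = new TDto();
              foreach (PropertyInfo destProp in typeof(TDto).GetProperties())
              {
                  PropertyInfo sourceProp = typeof(TDomain).GetProperty(destProp.Name);
                  if (sourceProp != null && sourceProp.CanRead && destProp.CanWrite
                      && destProp.PropertyType.IsAssignableFrom(sourceProp.PropertyType))
                  {
                      destProp.SetValue(dto, sourceProp.GetValue(source, null), null);
                  }
              }
              return dto;
          }
      }

      // Usage: PersonDTO dto = SimpleMapper.Map<Person, PersonDTO>(domainPerson);

    Mapping libraries such as AutoMapper cover the same ground with caching and nested-object support, which is usually where this kind of code ends up in practice.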

    Read the article

  • Out of memory when creating a lot of objects C#

    - by Bas
    I'm processing 1 million records in my application, which I retrieve from a MySQL database. To do so I'm using LINQ to get the records and use .Skip() and .Take() to process 250 records at a time. For each retrieved record I need to create 0 to 4 Items, which I then add to the database, so the total number of Items that have to be created averages around 2 million. while (objects.Count != 0) { using (dataContext = new LinqToSqlContext(new DataContext())) { foreach (Object objectRecord in objects) { // Create a list of 0 - 4 random Items and add each Item to the Object for (int i = 0; i < Random.Next(0, 4); i++) { Item item = new Item(); item.Id = Guid.NewGuid(); item.Object = objectRecord.Id; item.Created = DateTime.Now; item.Changed = DateTime.Now; dataContext.InsertOnSubmit(item); } } dataContext.SubmitChanges(); } amountToSkip += 250; objects = objectCollection.Skip(amountToSkip).Take(250).ToList(); } Now the problem arises when creating the Items: when running the application (even without using the dataContext), memory usage increases steadily. It's as if the Items are never getting disposed. Does anyone see what I'm doing wrong? Thanks in advance!
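
    For comparison only, a hedged sketch of the same batching loop with the read side and the write side each scoped to its own short-lived context, so no tracked entities stay referenced across the full run; MyDataContext, ObjectRecord and the table properties are stand-ins for the question's actual types:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public static class BatchImporter
      {
          public static void Run()
          {
              int amountToSkip = 0;
              var random = new Random();
              List<ObjectRecord> batch;

              do
              {
                  // Read-only context just for fetching the next page; tracking is
                  // not needed for entities that are never updated.
                  using (var readContext = new MyDataContext())
                  {
                      readContext.ObjectTrackingEnabled = false;
                      batch = readContext.ObjectRecords
                                         .OrderBy(o => o.Id)      // Skip() requires an ordering
                                         .Skip(amountToSkip)
                                         .Take(250)
                                         .ToList();
                  }

                  // Separate, short-lived context for this batch's inserts only.
                  using (var writeContext = new MyDataContext())
                  {
                      foreach (ObjectRecord record in batch)
                      {
                          int itemCount = random.Next(0, 5);       // 0..4 items per record
                          for (int i = 0; i < itemCount; i++)
                          {
                              writeContext.Items.InsertOnSubmit(new Item
                              {
                                  Id = Guid.NewGuid(),
                                  Object = record.Id,
                                  Created = DateTime.Now,
                                  Changed = DateTime.Now
                              });
                          }
                      }
                      writeContext.SubmitChanges();
                  }

                  amountToSkip += 250;
              } while (batch.Count > 0);
          }
      }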

    Read the article

  • Properly clean up excel interop objects revisited: Wrapper objects

    - by chiccodoro
    Hi all, Excel 2007 Hangs When Closing via .NET, How to properly clean up Excel interop objects in C#, and How to properly clean up interop objects in C# all struggle with the problem that C# does not release the Excel COM objects properly after using them. There are mainly two directions for working around this issue: (1) kill the Excel process when Excel is not used anymore; (2) take care to assign each COM object used explicitly to a variable and to Marshal.ReleaseComObject all of these. Some have stated that 2 is too tedious and there is always some uncertainty whether you forgot to stick to this rule at some place in the code. Still, 1 seems dirty and dangerous to me; also, I could imagine that in an environment with restricted access, killing processes is not allowed. So I've been thinking about solving 2 by creating another proxy object model which mimics the Excel object model (for me, it would suffice to implement the objects I actually need). The principle would look as follows: each Excel interop class has a proxy which wraps an object of that class; the proxy releases the COM object in its destructor; the proxy mimics the interface of the interop class (maybe by inheriting it); any method that usually returns another COM object returns a proxy instead; the other methods simply delegate their implementation to the inner COM object. This is a rough sketch of the code: public class Application : Microsoft.Office.Interop.Excel.Application { private Microsoft.Office.Interop.Excel.Application innerApplication = new Microsoft.Office.Interop.Excel.Application(); ~Application() { Marshal.ReleaseComObject(innerApplication); } public Workbooks Workbooks { get { return new Workbooks(innerApplication.Workbooks); } } } public class Workbooks { private Microsoft.Office.Interop.Excel.Workbooks innerWorkbooks; Workbooks(Microsoft.Office.Interop.Excel.Workbooks innerWorkbooks) { this.innerWorkbooks = innerWorkbooks; } ~Workbooks() { Marshal.ReleaseComObject(innerWorkbooks); } } My questions to you are in particular: Who finds this a bad idea, and why? Who finds this a great idea? If so, why hasn't anybody implemented/published such a model yet? Just due to the effort, or am I missing a killer problem with that idea? Is it impossible/bad/dangerous to do the ReleaseComObject in the destructor? (I've only seen proposals to put it in a Dispose() rather than in a destructor - why?) If the approach makes sense, any suggestions to improve it?
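
    For contrast, a minimal sketch of the deterministic variant usually recommended over finalizers: wrap each interop object in an IDisposable proxy and call Marshal.ReleaseComObject in Dispose(), so the release happens at a known point on the thread that created the object (a finalizer runs later, on the GC's finalizer thread, which is the usual objection to it). Only the members actually needed would be mirrored:

      using System;
      using System.Runtime.InteropServices;
      using Excel = Microsoft.Office.Interop.Excel;

      public sealed class ExcelApplication : IDisposable
      {
          private Excel.Application inner = new Excel.Application();

          // Members that would return raw COM objects hand back wrappers instead.
          public ExcelWorkbooks Workbooks
          {
              get { return new ExcelWorkbooks(inner.Workbooks); }
          }

          public void Dispose()
          {
              if (inner != null)
              {
                  inner.Quit();
                  Marshal.ReleaseComObject(inner);
                  inner = null;
              }
          }
      }

      public sealed class ExcelWorkbooks : IDisposable
      {
          private Excel.Workbooks inner;

          internal ExcelWorkbooks(Excel.Workbooks inner) { this.inner = inner; }

          public void Dispose()
          {
              if (inner != null)
              {
                  Marshal.ReleaseComObject(inner);
                  inner = null;
              }
          }
      }

      // Usage: using (var app = new ExcelApplication()) { ... } releases the COM
      // references even when an exception is thrown.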

    Read the article

  • Accessing Members of Containing Objects from Contained Objects.

    - by Bunkai.Satori
    If I have several levels of object containment (one object defines and instantiates another object, which defines and instantiates another object, and so on), is it possible to get access to the variables and functions of the upper, containing objects? Example: class CObjectOne { public: CObjectOne::CObjectOne() { Create(); }; void Create(); std::vector<CObjectTwo> vObjectsTwo; int nVariableOne; } bool CObjectOne::Create() { CObjectTwo ObjectTwo(this); vObjectsTwo.push_back(ObjectTwo); } class CObjectTwo { public: CObjectTwo::CObjectTwo(CObjectOne* pObject) { pObjectOne = pObject; Create(); }; void Create(); CObjectOne* GetObjectOne(){return pObjectOne;}; std::vector<CObjectThree> vObjectsThree; CObjectOne* pObjectOne; int nVariableTwo; } bool CObjectTwo::Create() { CObjectThree ObjectThree(this); vObjectsThree.push_back(ObjectThree); } class CObjectThree { public: CObjectThree::CObjectThree(CObjectTwo* pObject) { pObjectTwo = pObject; Create(); }; void Create(); CObjectTwo* GetObjectTwo(){return pObjectTwo;}; std::vector<CObjectFour> vObjectsFour; CObjectTwo* pObjectTwo; int nVariableThree; } bool CObjectThree::Create() { CObjectFour ObjectFour(this); vObjectsFour.push_back(ObjectFour); } main() { CObjectOne myObject1; } Say that from within CObjectThree I need to access nVariableOne in CObjectOne. I would like to do it as follows: int nValue = vObjectsThree[index].GetObjectTwo()->GetObjectOne()->nVariableOne; However, after compiling and running my application, I get a memory access violation error. What is wrong with the code above (it is an example, and might contain spelling mistakes)? Do I have to create the objects dynamically instead of statically? Is there any other way to reach variables stored in containing objects from within contained objects?

    Read the article

  • Debugging XSLT with extension objects in Visual Studio 2010

    - by Alex Ciminian
    I'm currently working on a project that involves a lot of XSLT transformations and I really need a debugger (I have XSLTs that are 1000+ lines long and I didn't write them :-). The project is written in C# and makes use of extension objects: xslArg.AddExtensionObject("urn:<obj>", new <Obj>()); From my knowledge, in this situation Visual Studio is the only tool that can help me debug the transformations step by step. The static debugger is no use because of the extension objects (it throws an error when it reaches elements that reference their namespace). Fortunately, I've found this thread, which gave me a starting point (at least I know it can be done). After searching MSDN, I found the criteria that make stepping into the transform possible. They are listed here. In short: the XML and the XSLT must be loaded via a class that implements the IXmlLineInfo interface (XmlReader & co.); the XML resolver used with XslCompiledTransform is file-based (XmlUrlResolver should work); and the stylesheet should be on the local machine or on the intranet (?). From what I can tell, I fit all these criteria, but it still doesn't work. The relevant code samples are posted below: // [...] xslTransform = new XslCompiledTransform(true); xslTransform.Load(XmlReader.Create(new StringReader(contents)), null, new BaseUriXmlResolver(xslLocalPath)); // [...] // I already had the xml loaded in an xmlDocument // so I have to convert to an XmlReader XmlTextReader r = new XmlTextReader(new StringReader(xmlDoc.OuterXml)); XsltArgumentList xslArg = new XsltArgumentList(); xslArg.AddExtensionObject("urn:[...]", new [...]()); xslTransform.Transform(r, xslArg, context.Response.Output); I really don't get what I'm doing wrong. I've checked the interfaces on both XmlReader objects and they implement the required one. Also, BaseUriXmlResolver inherits from XmlUrlResolver and the stylesheet is stored locally. When stepping into the Transform function I can at first see the stylesheet code, but after stepping through the parameters (on the template match) I end up in the state shown in the original post's screenshots (not reproduced here). If anyone has any idea why it doesn't work, or has an alternative way of getting it to work, I'd be much obliged :). Thanks, Alex
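
    One low-tech variation worth trying (an assumption, not a verified fix for this particular setup): load both documents straight from files, so each has a file-based base URI and a line-info-capable reader behind it without any intermediate strings. The file paths and the MyExtension class are placeholders:

      using System.IO;
      using System.Xml;
      using System.Xml.Xsl;

      public static class DebuggableTransform
      {
          public static void Run()
          {
              // true enables debugging support in the generated transform code.
              var transform = new XslCompiledTransform(true);
              transform.Load(@"C:\work\transform.xslt",
                             XsltSettings.TrustedXslt,
                             new XmlUrlResolver());

              var args = new XsltArgumentList();
              args.AddExtensionObject("urn:my-extension", new MyExtension());

              using (XmlReader input = XmlReader.Create(@"C:\work\input.xml"))
              using (TextWriter output = new StreamWriter(@"C:\work\output.html"))
              {
                  transform.Transform(input, args, output);
              }
          }
      }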

    Read the article

  • Windows 7: enabling navigation of subfolders in pinned Start Menu folders

    - by AspNyc
    I'm just about to move from Windows XP to Windows 7, and I'm struggling with some of the interface changes. In XP, I was able to throw a folder into C:\Documents and Settings\username\Start Menu and have it appear on the Start Menu, complete with the ability to navigate through subfolders. I've figured out how to pin a folder onto the Start Menu in Windows 7, which required a registry hack. However, I am unable to view the subfolders of the pinned folder without opening a new Windows Explorer window. Is there any way to replicate the old XP behavior I'm used to? I'd like to be only a single click away from this handful of application links and folders, since I use them all the time throughout the day.

    Read the article

  • After recovery to restore point, Windows 7 missing pinned items and favorites

    - by Michael Levy
    I believe a recent Windows update was interrupted. The next day, I could not log on and was presented with the error "User Profile Service service failed the logon. User profile cannot be loaded". I followed some advice from http://answers.microsoft.com/en-us/windows/forum/windows_vista-security/help-user-profile-service-service-failed-the-logon/4ed66b21-c23e-42f1-98b2-706dcf931fae and logged in with a different admin account and used System Restore to roll back to a recent restore point. Most everything is working fine, but I have noticed two odd things: Any items that were pinned to my Start menu or taskbar were not accessible; I had to un-pin and re-pin them. In Windows Explorer, my Favorites are gone and I can't seem to add any favorites: if I browse to a folder, right-click on the Favorites icon and select "Add current location to Favorites", nothing is saved. I'd appreciate any explanation of why these things did not get restored properly, and any help fixing the Favorites functionality.

    Read the article

  • Converting formCollection array to objects in the controller

    - by bergin
    In my view I have several [n].propertyName array fields, and I want to turn the FormCollection fields into objects (myobject[n].propertyName) when the form is posted to the controller. So, for example, the context: View: foreach (var item in Model.SSSubjobs.AsEnumerable()) <%: Html.Hidden("["+c+"].sssj_id", item.sssj_id ) %> <%: Html.Hidden("["+c+"].order_id", item.order_id ) %> <%: Html.TextBox("["+c+"].farm", item.farm) %> <%: Html.TextBox("["+c+"].field", item.field) %> c++; Controller: I want to take the above [0].sssj_id and turn it into sssj[0].sssj_id, or a list of sssj objects. My first idea was to look in the FormCollection for things starting with "[", but I have a feeling this isn't right... This is as far as I got: public IList<SoilSamplingSubJob> extractSSSJ(FormCollection c) { IList<SoilSamplingSubJob> sssj_list=null; SoilSamplingSubJob sssj; var n=0; foreach (var key in c.AllKeys) // iterate through the formcollection { var value = c[key]; if(key.StartsWith("[")) // ie turn [0].gps_pk_chx into sssj.gps_pk_chx ??? } return sssj_list; }
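
    If it helps, a hedged alternative to parsing the FormCollection by hand: with inputs named "[0].sssj_id", "[0].farm", "[1].farm" and so on, the ASP.NET MVC default model binder can usually build the typed list itself when the action takes a collection parameter. The controller and action names below are placeholders:

      using System.Collections.Generic;
      using System.Web.Mvc;

      public class SoilSamplingController : Controller
      {
          // The default binder groups fields by the leading [n] index and binds
          // each group onto one SoilSamplingSubJob by matching property names.
          [HttpPost]
          public ActionResult Save(IList<SoilSamplingSubJob> subjobs)
          {
              foreach (SoilSamplingSubJob sssj in subjobs)
              {
                  // sssj.sssj_id, sssj.order_id, sssj.farm and sssj.field arrive populated
              }
              return RedirectToAction("Index");
          }
      }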

    Read the article

  • Most efficient way to combine two objects in C#

    - by Dested
    I have two objects that can each be represented as an int, float, bool, or string. I need to perform an addition on these two objects with the result being the same thing C# would produce: for instance, 1 + "Foo" would equal the string "1Foo", 2 + 2.5 would equal the float 4.5, and 3 + 3 would equal the int 6. Currently I am using the code below, but it seems like incredible overkill. Can anyone simplify it or point me to some way to do this efficiently? private object Combine(object o, object o1) { float left = 0; float right = 0; bool isInt = false; string l = null; string r = null; if (o is int) { left = (int)o; isInt = true; } else if (o is float) { left = (float)o; } else if (o is bool) { l = o.ToString(); } else { l = (string)o; } if (o1 is int) { right = (int)o1; } else if (o is float) { right = (float)o1; isInt = false; } else if (o1 is bool) { r = o1.ToString(); isInt = false; } else { r = (string)o1; isInt = false; } object rr; if (l == null) { if (r == null) { rr = left + right; } else { rr = left + r; } } else { if (r == null) { rr = l + right; } else { rr = l + r; } } if (isInt) { return Convert.ToInt32(rr); } return rr; }
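
    A hedged rewrite of the same logic, keeping the behaviour described above (a string or bool on either side concatenates, int + int stays an int, any other numeric mix becomes a float):

      private static object Combine(object a, object b)
      {
          // Any string or bool on either side means string concatenation.
          if (a is string || a is bool || b is string || b is bool)
          {
              return a.ToString() + b.ToString();
          }

          // Both ints: keep the result an int.
          if (a is int && b is int)
          {
              return (int)a + (int)b;
          }

          // Remaining combinations are int/float mixes: add as floats.
          float left = (a is int) ? (int)a : (float)a;
          float right = (b is int) ? (int)b : (float)b;
          return left + right;
      }

    In C# 4 the same dispatch can also be pushed to the runtime with (dynamic)a + (dynamic)b, at the cost of a runtime exception for combinations the language does not define, such as bool + bool.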

    Read the article

  • Optional Member Objects

    - by David Relihan
    Okay, so you have a load of methods sprinkled around your system's main class. So you do the right thing and refactor: you create a new class and move the method(s) into it. The new class has a single responsibility and all is right with the world again: class Feature { public: Feature(){}; void doSomething(); void doSomething1(); void doSomething2(); }; So now your original class has a member object of that type: Feature _feature; which it will call from the main class. Now if you do this many times, you will have many member objects in your main class. These features may or may not be required based on configuration, so in a way it's costly having all these objects that may or may not be needed. Can anyone suggest a way of improving this? At the moment I plan to test in the newly created class whether the feature is enabled, so that when a call is made to a method it will return immediately if the feature is not enabled. I could instead hold a pointer to the object and only call new if the feature is enabled, but this means I would have to test before I call a method on it, which would be potentially dangerous and not very readable. Would having an auto_ptr to the object improve things: auto_ptr<Feature> feature; or am I still paying the cost of object construction even though the object may or may not be required? BTW - I don't think this is premature optimisation - I just want to consider the possibilities.

    Read the article

  • Moving objects colliding when using unaligned collision avoidance (steering)

    - by James Bedford
    I'm having trouble with unaligned collision avoidance for what I think is a rare case. I have set two objects to move towards each other but with a slight offset, so one of the objects is moving slightly upwards, and one of the objects is moving slightly downwards. In my unaligned collision avoidance steering algorithm I'm finding the points on the object's forward line and the other object's forward line where these two lines are the closest. If these closest points are within a collision avoidance distance, and if the distance between them is smaller than the two radii of the two object's bounding spheres, then the objects should steer away in the appropriate direction. The problem is that for my case, the closest points on the lines are calculated to be really far away from the actual collision point. This is because the two forward lines for each object are moving away from each other as the objects pass. The problem is that because of this, no steering takes place, and the two objects partially collide. Does anyone have any suggestions as to how I can correctly calculate the point of collision? Perhaps by somehow taking into account the size of the two objects?

    Read the article

  • Core Data Model Design Question - Changing "Live" Objects also Changes Saved Objects

    - by mwt
    I'm working on my first Core Data project (on iPhone) and am really liking it. Core Data is cool stuff. I am, however, running into a design difficulty that I'm not sure how to solve, although I imagine it's a fairly common situation. It concerns the data model. For the sake of clarity, I'll use an imaginary football game app as an example to illustrate my question. Say that there are NSMO's called Downs and Plays. Plays function like templates to be used by Downs. The user creates Plays (for example, Bootleg, Button Hook, Slant Route, Sweep, etc.) and fills in the various properties. Plays have a to-many relationship with Downs. For each Down, the user decides which Play to use. When the Down is executed, it uses the Play as its template. After each down is run, it is stored in history. The program remembers all the Downs ever played. So far, so good. This is all working fine. The question I have concerns what happens when the user wants to change the details of a Play. Let's say it originally involved a pass to the left, but the user now wants it to be a pass to the right. Making that change, however, not only affects all the future executions of that Play, but also changes the details of the Plays stored in history. The record of Downs gets "polluted," in effect, because the Play template has been changed. I have been rolling around several possible fixes to this situation, but I imagine the geniuses of SO know much more about how to handle this than I do. Still, the potential fixes I've come up with are: 1) "Versioning" of Plays. Each change to a Play template actually creates a new, separate Play object with the same name (as far as the user can tell). Underneath the hood, however, it is actually a different Play. This would work, AFAICT, but seems like it could potentially lead to a wild proliferation of Play objects, esp. if the user keeps switching back and forth between several versions of the same Play (creating object after object each time the user switches). Yes, the app could check for pre-existing, identical Plays, but... it just seems like a mess. 2) Have Downs, upon saving, record the details of the Play they used, but not as a Play object. This just seems ridiculous, given that the Play object is there to hold those just those details. 3) Recognize that Play objects are actually fulfilling 2 functions: one to be a template for a Down, and the other to record what template was used. These 2 functions have a different relationship with a Down. The first (template) has a to-many relationship. But the second (record) has a one-to-one relationship. This would mean creating a second object, something like "Play-Template" which would retain the to-many relationship with Downs. Play objects would get reconfigured to have a one-to-one relationship with Downs. A Down would use a Play-Template object for execution, but use the new kind of Play object to store what template was used. It is this change from a to-many relationship to a one-to-one relationship that represents the crux of the problem. Even writing this question out has helped me get clearer. I think something like solution 3 is the answer. However if anyone has a better idea or even just a confirmation that I'm on the right track, that would be helpful. (Remember, I'm not really making a football game, it's just faster/easier to use a metaphor everyone understands.) Thanks.

    Read the article

  • C++ Pointers, objects, etc

    - by Zeee
    It may be a bit confusing, but... Let's say I have a vector in a class to store objects, something like vector<Operator>, and I have methods on my class that will later return Operators from this vector. Now if any of my methods receives an Operator, will I have any trouble inserting it directly into the vector? Or should I use the copy constructor to create a new Operator and put that new one in the vector?

    Read the article

  • Why do transfer objects need to implement Serializable?

    - by smaye81
    I realized today that I have blindly just followed this requirement for years without ever really asking why. Today, I ran across a NotSerializableException with a model object I created from scratch and I realized enough is enough. I was told this was because of session replication between load-balanced servers, but I know I've seen other objects at session scope that do not implement Serializable. Is this the real reason?

    Read the article

  • Auto Generate Objects in DBIx::Class ORM in Perl

    - by Sam
    Hello, I started learning DBIx::Class and I have reached the point where you have to create the classes that represent tables. Should these classes be created manually (hard-coding all the fields and relationships...) or is there a way to generate them automatically from the database schema? I read something about loaders, but I did not understand where they are really used. Please advise. Thanks

    Read the article

  • Managing many objects at once.

    - by Jeff
    Hi, I want to find a way to efficiently keep track of a lot of objects at once. One practical example I can think of would be a particle system: how are hundreds of particles kept track of? I think I'm on the right track; I found the term 'instancing' and I also learned about flyweights. Hopefully somebody can shed some light on this and share some techniques with me. Thanks.
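
    For the particle example specifically, a common technique (sketched here as an assumption about what is being asked) is a pre-allocated pool of small structs updated in place, so there is no per-particle allocation or bookkeeping beyond the array itself:

      public struct Particle
      {
          public float X, Y;      // position
          public float Vx, Vy;    // velocity
          public float Life;      // seconds remaining; <= 0 means the slot is free
      }

      public sealed class ParticleSystem
      {
          private readonly Particle[] pool;

          public ParticleSystem(int capacity)
          {
              pool = new Particle[capacity];   // every slot starts dead (Life == 0)
          }

          public void Emit(float x, float y, float vx, float vy, float life)
          {
              for (int i = 0; i < pool.Length; i++)
              {
                  if (pool[i].Life <= 0f)      // reuse the first dead slot
                  {
                      pool[i] = new Particle { X = x, Y = y, Vx = vx, Vy = vy, Life = life };
                      return;
                  }
              }
              // Pool exhausted: drop the particle (growing the array is a policy choice).
          }

          public void Update(float dt)
          {
              for (int i = 0; i < pool.Length; i++)
              {
                  if (pool[i].Life <= 0f) continue;
                  pool[i].X += pool[i].Vx * dt;
                  pool[i].Y += pool[i].Vy * dt;
                  pool[i].Life -= dt;
              }
          }
      }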

    Read the article

  • Hadoop: Processing large serialized objects

    - by restrictedinfinity
    I am working on developing an application to process (and merge) several large Java serialized objects (on the order of GBs in size) using the Hadoop framework. Hadoop distributes the blocks of a file across different hosts, but since deserialization requires all the blocks to be present on a single host, this is going to hurt performance drastically. How can I deal with this situation, where the different blocks cannot be processed individually, unlike text files?

    Read the article

  • Designing Business Objects to indicate constraints such as Max Length

    - by JR
    Is there a standard convention when designing business objects for providing consumers with a way to discover constraints such as a property's maximum length? It could be used in the UI layer to, for example, set a TextBox's MaxLength property according to the maximum length limit back in the business object. Is there a standard design approach for this?
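
    One conventional answer, sketched with placeholder names: declare the constraint once on the business object as an attribute (here the stock [StringLength] from System.ComponentModel.DataAnnotations, though a custom attribute works the same way) and let the UI layer read it back through reflection.

      using System;
      using System.ComponentModel.DataAnnotations;
      using System.Reflection;

      public class Customer                 // placeholder business object
      {
          [StringLength(50)]
          public string Name { get; set; }
      }

      public static class ConstraintReader
      {
          // Returns the declared maximum length for a property, or null if none.
          public static int? GetMaxLength(Type type, string propertyName)
          {
              PropertyInfo prop = type.GetProperty(propertyName);
              if (prop == null) return null;

              var attr = (StringLengthAttribute)Attribute.GetCustomAttribute(
                  prop, typeof(StringLengthAttribute));
              return attr != null ? attr.MaximumLength : (int?)null;
          }
      }

      // UI layer (e.g. WinForms):
      //   txtName.MaxLength = ConstraintReader.GetMaxLength(typeof(Customer), "Name") ?? 0;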

    Read the article
