Search Results

Search found 16134 results on 646 pages for 'reference guide'.

Page 199 of 646

  • What is the proper approach for constructing a PhysicalAddress object from a byte array?

    - by Paul Farry
    I'm trying to understand the correct approach for a constructor that accepts a byte array with regard to how it stores its data (specifically with PhysicalAddress). I have an array of 6 bytes (theAddress) that is constructed once, and a source array of 18 bytes (theAddresses) that is loaded from a TCP connection. I copy 6 bytes from theAddresses+offset into theAddress and construct the PhysicalAddress from it. The problem is that PhysicalAddress just stores the reference to the array that was passed in, so if you subsequently check the addresses they only ever point to the last address that was copied in. Looking inside PhysicalAddress with Reflector makes it easy to see what's going on:

        public PhysicalAddress(byte[] address) { this.changed = true; this.address = address; }

    Now I know this can be solved by creating the theAddress array on each pass, but I wanted to find out what the best practice really is. Should the constructor of an object that accepts a byte array create its own private variable for holding the data and copy it from the original? Should it just hold the reference to what was passed in? Or should I just create theAddress on each pass in the loop?
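    As an aside for readers comparing the options the question lists: below is a minimal sketch of the defensive-copy approach, using an illustrative wrapper type (the class and member names are made up for the example and are not the real PhysicalAddress):

        using System;

        // Illustrative value holder that copies the incoming bytes instead of
        // keeping the caller's array reference.
        public sealed class HardwareAddress
        {
            private readonly byte[] address;

            public HardwareAddress(byte[] address)
            {
                if (address == null)
                    throw new ArgumentNullException("address");

                // Defensive copy: later writes to the caller's buffer no longer
                // change this instance.
                this.address = (byte[])address.Clone();
            }

            public byte[] GetAddressBytes()
            {
                // Hand out a copy too, so callers cannot mutate internal state.
                return (byte[])this.address.Clone();
            }
        }

    With a copying constructor like this, the same 6-byte buffer can be reused for every address read from the stream without the stored addresses all ending up identical.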

    Read the article

  • What does the windbg command "kd" do?

    - by Oskar
    I ran kd by mistake and got some output that interested me: a reference to a line of code in my module that I can't see on the call stack of any thread. The lines weren't the beginning of the method, so I don't think the reference is to a function pointer, but possibly the result of an exception being stored in memory? Of course, that happens to be what I'm looking for... Update: The stack trace of the exception is:

        0:000> kb
        *** Stack trace for last set context - .thread/.cxr resets it
        ChildEBP RetAddr  Args to Child
        0174f168 734ea84f 2cb9e950 00000000 2cb9e950 kernel32!LoadTimeZoneInformation+0x2b
        0174f1c4 734ead92 00000022 00000001 000685d0 msvbvm60!RUN_INSTMGR::ExecuteInitTerm+0x178
        0174f1f8 734ea9ee 00000000 0000002f 2dbc2abc msvbvm60!RUN_INSTMGR::CreateObjInstanceWithParts+0x1e4
        0174f278 7350414e 2cb9e96c 00000000 0174f2f0 msvbvm60!RUN_INSTMGR::CreateObjInstance+0x14d
        0174f2e4 734fa071 00000000 2cb9e96c 0174f2fc msvbvm60!RcmConstructObjectInstance+0x75
        0174f31c 00976ef1 2cb9e950 00591bc0 0174fddc msvbvm60!__vbaNew+0x21

    and from there into our code (creating a new Form-derived class). The dds output:

        0:000> dds esp-0x40 esp+0x100
        0174f05c  00000000
        0174f060  00000000
        0174f064  00000000
        0174f068  00000000
        0174f06c  00000000
        0174f070  00000000
        0174f074  00000000
        0174f078  00000000
        0174f07c  00000000
        0174f080  00000000
        0174f084  00000000
        0174f088  00000000
        0174f08c  00000000
        0174f090  00000000
        0174f094  00000000
        0174f098  00000000
        0174f09c  007f4f9b ourDll!formDerivedClass::Form_Initialize+0x10b [C:\Buildbox\formDerivedClass.frm @ 1452]

    etc., which seems to indicate that Initialize is being called even though it isn't on the stack trace of either this exception or any of the threads. As suggested, it might all be a mismatch between PDBs and DLLs, but it seems a coincidence that we end up in the right classes and methods.

    Read the article

  • Bytecode and Objects

    - by HH
    Hey everyone, I am working on a bytecode instrumentation project. Currently, when handling objects, the verifier throws an error most of the time, so I would like to get the rules for objects clear (I read the JVMS but couldn't find the answer I was looking for). I am instrumenting the NEW instruction. Original bytecode:

        NEW <MyClass>
        DUP
        INVOKESPECIAL <MyClass.<init>>

    After instrumentation:

        NEW <MyClass>
        DUP
        INVOKESTATIC <Profiler.handleNEW>
        DUP
        INVOKESPECIAL <MyClass.<init>>

    Note that I added a call to Profiler.handleNEW(), which takes as its argument an object reference (the newly created object). The code above throws a VerificationError, while if I don't add the INVOKESTATIC (leaving only the DUP), it doesn't. So what is the rule that I'm violating? I can duplicate an uninitialized reference but I can't pass it as a parameter? I would appreciate any help. Thank you

    Read the article

  • Developing Job References

    - by Joe Smith
    How do you develop references for jobs? I have 6 years of programming experience spanning two jobs, but sadly I don't have a lot of people I can draw on as references. It's been several years since I left my last job, which was at a small company, and I've lost touch with the few people I knew there. I now work at another small company. I think I've gone as far as I can in my current position, and would like to look for greener pastures, but I can't exactly use my current boss as a reference, even though I have a very good rapport with him. I'm sure he'd make a great reference down the road, but I'm afraid I'd insult him or jeopardize my current job by mentioning that I'm thinking of leaving and would like him to help me. I've applied to some jobs, and I have gotten several replies like, "Oh, you're exactly what we're looking for. Send us a couple of references and we'll schedule an interview. Oh, no references? You must be a psychopath, never mind." I've tried doing some small freelance work on the side, just so I can have a contact who can vouch for my work, but the competition for even small projects is pretty fierce and I can rarely devote adequate time to freelancing while holding a full-time job. In addition, I often encounter a catch-22 where a lot of freelancing jobs also require references. So how do programmers maintain existing references and develop new ones, especially while holding a full-time job?

    Read the article

  • WPF ObservableCollection

    - by Asha
    I have an ObservableCollection that is the DataContext for my TreeView. When I try to remove an item from the ObservableCollection I get the error "Object reference not set to an instance of an object." Can you please tell me why this error is happening and what the solution is? Thanks. EDIT 1: The code is something like:

        class MyClass : INotifyPropertyChanged
        {
            // my class code here
        }

        public partial class UC_myUserControl : UserControl
        {
            private ObservableCollection<MyClass> myCollection = new ObservableCollection<MyClass>();

            private void UserControl_Loaded(object sender, RoutedEventArgs e)
            {
                myCollection.Add(new MyClass());
                myTreeView.DataContext = myCollection;
            }

            private void deleteItem()
            {
                myCollection.RemoveAt(0);
                // after removing I get the error, which I guess is related
                // to the interface update, but I don't know how to solve it
            }
        }

    Exception detail:

        System.NullReferenceException was unhandled
        Message="Object reference not set to an instance of an object."
        Source="PresentationFramework"

    EDIT 3: I have a style for my tree items to keep them expanded:

        <Style TargetType="TreeViewItem">
            <Setter Property="IsExpanded" Value="True" />
        </Style>

    If I comment this part out I don't get any error! So now I want to change my question to: why is having this style causing the error?

    Read the article

  • Why do WCF clients depend on the app.config file?

    - by routeNpingme
    Like a lot of things, I'm sure there's a good reason for this, so please help me understand... Why, by default, do WCF services store settings in app.config? This has been so frustrating trying to work with multiple Silverlight class libraries. These class libraries are supposed to be completely independent from each other, and this dependency on the app.config seems to cause the following headaches:

    - Single Responsibility Principle: I should be able to add a reference to a class library and go. If that class library uses a service reference, this idea is shot before I even start coding against it.
    - Muddy Configuration: To get other libraries to work, I have to copy and paste the service configurations into the "main" application configs. If an endpoint changes in any way, I can't just worry about a new version of that class DLL - I have to worry about anything that uses it, too.
    - Complex Alternatives: Programmatically creating the endpoint isn't pretty. Period.

    There has to be a better way. Why doesn't WCF at least separate the service configurations into a ServiceName.config or something that gets copied to an output directory? What am I missing? How do you deal with this?
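    For readers weighing the programmatic option mentioned above: building a WCF client endpoint in code is more verbose than app.config, but it does keep a class library self-contained. A minimal sketch follows; the contract name and URL are placeholders, not taken from the question:

        using System.ServiceModel;

        // Hypothetical service contract, standing in for whatever the library consumes.
        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            string GetStatus(int orderId);
        }

        public static class OrderClientFactory
        {
            // Builds a channel without reading anything from app.config.
            public static IOrderService Create(string url)
            {
                var binding = new BasicHttpBinding();
                var address = new EndpointAddress(url);
                var factory = new ChannelFactory<IOrderService>(binding, address);
                return factory.CreateChannel();
            }
        }

    The endpoint URL can then come from wherever the host application keeps its own settings, instead of being pasted into every consumer's config file.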

    Read the article

  • Entity Framework 4 Self Many-To-Many with Properties

    - by csharpnoob
    UPDATE: Solved it myself. Tricky, but it works. If you know a better solution, feel free to correct me.

    DESIGNER: (designer screenshot omitted from this excerpt)

    CODE:

        Product product1 = new Product { key = "Product 1" };
        sd.AddToProducts(product1);
        Product product2 = new Product { key = "Product 2" };
        sd.AddToProducts(product2);
        Product product3 = new Product { key = "Product 3" };
        ProductRelated pr = new ProductRelated();
        pr.Products.Add(product1);
        pr.Products.Add(product2);
        product3.ProductRelateds.Add(pr);
        sd.AddToProducts(product3);

    CODE (VIEW):

        foreach (var x in (from b in sd.Products select b)) {
            %><%= x.key %><br /><%
            foreach (var y in x.ProductRelateds) {
                foreach (var k in y.Products) {
                    %>- <%= k.key %><br /><%
                }
            }
        }

    OUTPUT:

        Product1
        Product2
        Product3
        - Product2
        - Product1

    QUESTION: Hi, I want to have a self-reference for related products on a Product entity, something like here: http://my.safaribooksonline.com/9781430227038/modeling_a_many-to-many_comma_self-refer. But I also want additional properties on the many-to-many reference, like deleted, created, etc. I tried to do it with another entity, "Related", but somehow it won't work. Has anyone had this problem before? Is there any other example? Thanks

    Read the article

  • Instanced drawing with OpenGL ES 2.0

    - by Mårten Wikström
    In short: Is it possible to use the gl_InstanceID built-in variable in OpenGL ES 2.0? And, if so, how? Some more info: I want to draw multiple instances of an object using glDrawArraysInstanced and gl_InstanceID, and I want my application to run on multiple platforms, including iOS. The specification clearly says that these features require ES 3.0. According to the iOS Device Compatibility Reference ES 3.0 is only available on a few devices (those based on the A7 GPU; so iPhone 5s, but not on iPhone 5 or earlier). So my first assumption was that I needed to avoid using instanced drawing on older iOS devices. However, further down in the compatibility reference document it says that the EXT_draw_instanced extension is supported for all SGX Series 5 processors (that includes iPhone 5 and 4s). This makes me think that I could indeed use instanced drawing on older iOS devices too, by looking up and using the appropriate extension function (EXT or ARB) for glDrawArraysInstanced. I'm currently just running some test code using SDL and GLEW on Windows so I haven't tested anything on iOS yet. However, in my current setup I'm having trouble using the gl_InstanceID built-in variable in a vertex shader. I'm getting the following error message: 'gl_InstanceID' : variable is not available in current GLSL version Enabling the "draw_instanced" extension in GLSL has no effect: #extension GL_ARB_draw_instanced : enable #extension GL_EXT_draw_instanced : enable The error goes away when I specifically declare that I need ES 3.0 (GLSL 300 ES): #version 300 es Although that seem to work fine on my Windows desktop machine in an ES 2.0 context I doubt that this would work on an iPhone 5. So, shall I abandon the idea of being able to use instanced drawing on older iOS devices?

    Read the article

  • How is a referencing environment generally implemented for closures?

    - by Alexandr Kurilin
    Let's say I have a statically/lexically scoped language with deep binding and I create a closure. The closure will consist of the statements I want executed plus the so called referencing environment, or, to quote this post, the collection of variables which can be used. What does this referencing environment actually look like implementation-wise? I was recently reading about ObjectiveC's implementation of blocks, and the author suggests that behind the scenes you get a copy of all of the variables on the stack and also of all the references to heap objects. The explanation claims that you get a "snapshot" of the referencing environment at the point in time of the closure's creation. Is that more or less what happens, or did I misread that? Is anything done to "freeze" a separate copy of the heap objects, or is it safe to assume that if they get modified between closure creation and the closure executing, the closure will no longer be operating on the original version of the object? If indeed there's copying being made, are there memory usage considerations in situations where one might want to create plenty of closures and store them somewhere? I think that misunderstanding of some of these concepts might lead to tricky issues like the ones Eric Lippert mentions in this blog post. It's interesting because you'd think that it wouldn't make sense to keep a reference to a value type that might be gone by the time the closure is called, but I'm guessing that in C# the compiler will figure out that the variable is needed later and put it into the heap instead. It seems that in most memory-managed languages everything is a reference and thus ObjectiveC is a somewhat unique situation with having to deal with copying what's on the stack.
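    To make the C# point at the end concrete, here is a small illustrative sketch (not from the post) of a captured local being hoisted onto the heap so it outlives the method that declared it:

        using System;

        class ClosureDemo
        {
            // Returns a closure that keeps using a local variable after MakeCounter returns.
            static Func<int> MakeCounter()
            {
                int count = 0;              // conceptually a stack local...
                return () => ++count;       // ...but the compiler hoists it into a heap-allocated
                                            // closure object so the delegate can still reach it.
            }

            static void Main()
            {
                var next = MakeCounter();
                Console.WriteLine(next());  // 1
                Console.WriteLine(next());  // 2 - the captured variable persists between calls
            }
        }

    The delegate captures the variable itself, not a snapshot of its value, which is exactly the behaviour behind the loop-variable pitfalls discussed in the linked post.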

    Read the article

  • Going "behind Hibernate's back" to update foreign key values without an associated entity

    - by Alex Cruise
    Updated: I wound up "solving" the problem by doing the opposite! I now have the entity reference field set as read-only (insertable=false, updatable=false) and the foreign key field read-write. This means I need to take special care when saving new entities, but on querying, the entity properties get resolved for me. I have a bidirectional one-to-many association in my domain model, where I'm using JPA annotations and Hibernate as the persistence provider. It's pretty much your bog-standard parent/child configuration, with one difference being that I want to expose the parent's foreign key as a separate property of the child alongside the reference to a parent instance, like so:

        @Entity
        public class Child {
            @Id @GeneratedValue
            Long id;

            @Column(name="parent_id", insertable=false, updatable=false)
            private Long parentId;

            @ManyToOne(cascade=CascadeType.ALL)
            @JoinColumn(name="parent_id")
            private Parent parent;

            private long timestamp;
        }

        @Entity
        public class Parent {
            @Id @GeneratedValue
            Long id;

            @OrderBy("timestamp")
            @OneToMany(mappedBy="parent", cascade=CascadeType.ALL, fetch=FetchType.LAZY)
            private List<Child> children;
        }

    This works just fine most of the time, but there are many (legacy) cases when I'd like to put an invalid value in the parent_id column without having to create a bogus Parent first. Unfortunately, Hibernate won't save values assigned to the parentId field due to insertable=false, updatable=false, which it requires when the same column is mapped to multiple properties. Is there any nice way to "go behind Hibernate's back" and sneak values into that field without having to drop down to JDBC or implement an interceptor? Thanks!

    Read the article

  • Does unboxing just return a pointer to the value within the boxed object on the heap?

    - by Charles
    In this MSDN Magazine article, the author states (emphasis mine): "Note that boxing always creates a new object and copies the unboxed value's bits to the object. On the other hand, unboxing simply returns a pointer to the data within a boxed object: no memory copy occurs. However, it is commonly the case that your code will cause the data pointed to by the unboxed reference to be copied anyway." I'm confused by the sentence I've bolded and the sentence that follows it. From everything else I've read, including this MSDN page, I've never before heard that unboxing just returns a pointer to the value on the heap. I was under the impression that unboxing would result in you having a variable containing a copy of the value on the stack, just as you began with. After all, if my variable contains "a pointer to the value on the heap", then I haven't got a value type, I've got a pointer. Can someone explain what this means? Was the author on crack? (There is at least one other glaring error in the article.) And if this is true, what are the cases where "your code will cause the data pointed to by the unboxed reference to be copied anyway"? I just noticed that the article is nearly 10 years old, so maybe this is something that changed very early on in the life of .NET.
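    For readers following along, a small sketch of the two steps the quoted passage separates: the unbox operation itself versus the copy your code usually makes right after it. The comments describe the conventional C# reading of boxing semantics, not the linked article:

        using System;

        class UnboxDemo
        {
            static void Main()
            {
                int value = 42;
                object boxed = value;      // boxing: allocates a heap object and copies the bits in

                // In C# source an unbox is almost always part of an assignment,
                // so the value is copied out of the box into a local either way.
                int copy = (int)boxed;     // unbox + copy into 'copy'
                copy++;                    // does not touch the boxed object

                Console.WriteLine(boxed);  // 42 - the box is unchanged
                Console.WriteLine(copy);   // 43
            }
        }

    The "pointer to the data within a boxed object" only exists transiently at the IL level; by the time your variable holds the value, a copy has been made.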

    Read the article

  • Understanding Hibernate saveOrUpdate and the Persistence Life Cycle

    - by Stephano
    The books that I've read regarding Hibernate are, at best, reference tomes. They very seldom have good code examples, so I tend to use online resources for those needs. However, I've always had a problem understanding the basic idea of Hibernate persistence. I've read the books and understand the concepts, but in practice I often see results that I don't understand. Perhaps you all can help, as you have in the past. Let's look at a simple example of a dog and a cat that are friends. This isn't a rare occurrence. It also has the benefit of being much more interesting than my business case. We want a function called "saveFriends" that takes a dog name and a cat name. We'll save the Dog and then the Cat. For this example to work, the cat is going to have a reference back to the dog. I understand this isn't an ideal example, but it's cute and works for our purposes.

    FriendService.java:

        public int saveFriends(String dogName, String catName) {
            Dog fido = new Dog();
            Cat felix = new Cat();

            fido.name = dogName;
            fido = animalDao.saveDog(fido);

            felix.name = catName;
    [ex.A]  felix.friend = fido;
    [ex.B]  felix.friend = animalDao.getDogByName(dogName);
            animalDao.saveCat(felix);
        }

    AnimalDao.java (extends HibernateDaoSupport):

        public Dog saveDog(Dog dog) {
            getHibernateTemplate().saveOrUpdate(dog);
            return dog;
        }

        public Cat saveCat(Cat cat) {
            getHibernateTemplate().saveOrUpdate(cat);
            return cat;
        }

        public Dog getDogByName(String name) {
            return (Dog) getHibernateTemplate().find("from Dog where name=?", name).get(0);
        }

    Now, assume for a minute that I would like to use either example A or example B to save my friend. Is one better than the other to use? I'll understand if neither of those examples work, but please explain why.

    Read the article

  • Extracting function declarations from a PHP file

    - by byronh
    I'm looking to create an on-site API reference for my framework and application. Basically, say I have a class file model.class.php:

        class Model extends Object {
            ... code here ...

            // Separates results into pages.
            // Returns Paginator object.
            final public function paginate($perpage = 10) {
                ... more code here ...
            }
        }

    I want to be able to generate a reference that my developers can refer to quickly in order to know which functions are available to be called. They only need to see the comments directly above each function and the declaration line, something like this (similar to a C++ header file):

        // Separates results into pages.
        // Returns Paginator object.
        final public function paginate($perpage = 10);

    I've done some research and this answer looked pretty good (using the Reflection classes); however, I'm not sure how I can keep the comments in. Any ideas? EDIT: Sorry, but I want to keep the current comment formatting. The people working on the code and I hate the verbosity associated with phpDocumentor. Not to mention a comment rewrite of the entire project would take years, so I want to preserve just the // comments in plain text.

    Read the article

  • How to add a service to the type descriptor context of a property grid in .Net?

    - by Jules
    I have an app that allows the user to choose an image, at design time, either as a straight image or from an image list. All cool so far, except that this is not happening from the Visual Studio property browser; it's happening from a property grid that is part of a type editor. My problem is that both the image picker (actually a resource picker) and the ImageList type converter rely on some design-time services to get the job done. In the case of the ImageList it's the IReferenceService, and in the case of the resource picker it's a service called _DTE. In the first instance of an edit from the Visual Studio property browser I could get a reference to these services, but (1) how can I add them to the type descriptor context of my property grid? It would be better, for future-proofing, if I could just copy a reference to all of the services in the type descriptor context. (2) Where does the property browser get these services from in the first place? ETA: I still don't know how to do it, but I now know it is possible: (1) subclass Control and add a property whose type is an array of buttons; (2) add it to a form; (3) select the new control on the design surface and edit the new property in the property browser; (4) the collection editor dialog pops up; (5) add a button; (6) edit the image and image list - the type editor and type converter, respectively, behave as they should.

    Read the article

  • Developing ASP.Net User Control to be imported to SharePoint MOSS 2007

    - by Don Kirkham
    Apologies if this has been answered, but I could not find a similar question: I am developing a webpart for MOSS 2007. I am using WSPBuilder to built a visual webpart (ascx) and everything works fine, but the development/debug cycle is just painfully slow, so I'd like to know if it is possible (without being too painful) to develop the user control faster using an .Net Web Application project with all of the nice F5 debugging, then import the final product into my SharePoint visual webpart. The user control interacts with a LOB system (SQL) and does not reference the SharePoint API at all. (The reason I am building this as a webpart is because I don't need another web app to run this one page, so putting it into a webpart on a new webpart page on my existing site is the best solution IMO.) I would obviously need to import (reference?) my data access classes into my "temp" web app, but think that would not be too much trouble. I realize this will be extra effort to get this set up, but am thinking the payoff will be reduced development time of the actual user control using a little web application vs having to use the compile/build WSP/deploy WSP/reset ISS/test/make a change/repeat cycle that MOSS requires. (I guess SP2010/VS2010 has spoiled me with the native SharePoint tools available.)

    Read the article

  • Enum exceeding the 65535 byte limit of the static initializer... what's best to do?

    - by Daniel Bleisteiner
    I've started a rather large enum of so-called Descriptors that I want to use as a reference list in my model. But now I've come across a compiler/VM limit for the first time, so I'm looking for the best way to handle this. Here is my error:

        The code for the static initializer is exceeding the 65535 bytes limit

    It is clear where this comes from - my enum simply has far too many elements. But I need those elements - there is no way to reduce that set. Initially I planned to use a single enum because I want to make sure that all elements within the enum are unique. It is used in a Hibernate persistence context where the reference to the enum is stored as a String value in the database, so it must be unique! The content of my enum can be divided into several groups of elements that belong together, but splitting the enum would remove the uniqueness guarantee I get at compile time. Or can this be achieved with multiple enums in some way? My only current idea is to define an interface called Descriptor and code several enums implementing it. This way I hope to be able to use the Hibernate enum mapping as if it were a single enum, but I'm not even sure this will work, and I lose the uniqueness guarantee. Any ideas how to handle this case?

    Read the article

  • Setting object literal property value via asynchronous callback

    - by typeof
    I'm creating a self-contained JavaScript utility object that detects advanced browser features. Ideally, my object would look something like this:

        Support = {
            borderRadius : false, // values returned by functions
            gradient : false,     // i am defining
            dataURI : true
        };

    My current problem deals with some code I'm adapting from Weston Ruter's site which detects dataURI support. It attempts to use JavaScript to create an image with a dataURI source, and uses onload/onerror callbacks to check the width/height. Since onload is asynchronous, I lose my scope and returning true/false does not assign true/false to my object. Here is my attempt:

        Support = {
            ...
            dataURI : function(prop) {
                prop = prop; // keeps in closure for callback
                var data = new Image();
                data.onload = data.onerror = function(){
                    if(this.width != 1 || this.height != 1) { that = false; }
                    that = true;
                }
                data.src = "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///ywAAAAAAQABAAACAUwAOw==";
                return -1;
            }(this)
        };

    I'm executing the anonymous function immediately, passing this (which I hoped was a reference to Support.dataURI), but unfortunately it references the window object -- so the value is always -1. I can get it to work by using an externally defined function to assign the value after the Support object is created... but I don't think it's very clean that way. Is there a way for it to be self-contained? Can the object literal's function reference the property it's assigned to?

    Read the article

  • C++ Header file questions

    - by Karl
    So I'm trying to learn C++ and I've gotten as far as using header files. They really make no sense to me. I've tried many combinations of this but nothing so far has worked.

    Main.cpp:

        #include "test.h"

        int main()
        {
            testClass Player1;
            return 0;
        }

    test.h:

        #ifndef TEST_H_INCLUDED
        #define TEST_H_INCLUDED

        class testClass
        {
        private:
            int health;
        public:
            testClass();
            ~testClass();
            int getHealth();
            void setHealth(int inH);
        };

        #endif // TEST_H_INCLUDED

    test.cpp:

        #include "test.h"

        testClass::testClass() { health = 100; }
        testClass::~testClass() {}
        int testClass::getHealth() { return(health); }
        void testClass::setHealth(int inH) { health = inH; }

    What I'm trying to do is pretty simple, but the way header files work just makes no sense to me at all. Code::Blocks returns the following on build:

        obj\Debug\main.o(.text+0x131)||In function `main':|
        voip\test\main.cpp|6|undefined reference to `testClass::testClass()'|
        obj\Debug\main.o(.text+0x13c):voip\test\main.cpp|7|undefined reference to `testClass::~testClass()'|
        ||=== Build finished: 2 errors, 0 warnings ===|

    I'd appreciate any help. Or if you have a decent tutorial for it, that would be fine too (most of the tutorials I've googled haven't helped).

    Read the article

  • Fetching just the Key/id from a ReferenceProperty in App Engine

    - by ozone
    Hi SO, I could use a little help in AppEngine land... Using the [Python] API I create relationships like this example from the docs:

        class Author(db.Model):
            name = db.StringProperty()

        class Story(db.Model):
            author = db.ReferenceProperty(Author)

        story = db.get(story_key)
        author_name = story.author.name

    As I understand it, that example will make two datastore queries: one to fetch the Story and then one to dereference the Author in order to access the name. But I want to be able to fetch the id, so do something like:

        story = db.get(story_key)
        author_id = story.author.key().id()

    I want to just get the id from the reference. I do not want to have to dereference (and therefore query the datastore for) the ReferenceProperty value. The documentation says that the value of a ReferenceProperty is a Key, which leads me to think that I could just call .id() on the reference's value. But it also says: "The ReferenceProperty model provides features for Key property values such as automatic dereferencing." I can't find anything that explains when this dereferencing takes place. Is it safe to call .id() on the ReferenceProperty's value? Can it be assumed that calling .id() will not cause a datastore lookup?

    Read the article

  • Azure Table Storage rejects an entity with a Property whose value is an Interface

    - by Andrew B Schultz
    I have a type called "Comment" that I'm saving to Azure Table Storage. Since a comment can be about any number of other types, I created an interface which all of these types implement, and then put a property of type ICommentable on the comment. So Comment has a property called About of type ICommentable. When I try to save a Comment to Azure Table Storage, if the Comment.About property has a value I get the worthless "invalid input" error. However, if there is no value for Comment.About, I have no problem. Why would this be? Comment.About is not the only property that is a reference type. For example, Comment.From is a reference type, but Comment.About is the only property whose type is an interface. Fails:

        var comment = new Comment();
        comment.CommentText = "It fails!";
        comment.PartitionKey = "TEST";
        comment.RowKey = "TEST123";
        comment.About = sow1;
        comment.From = person1;

    Works:

        var comment = new Comment();
        comment.CommentText = "It works!";
        comment.PartitionKey = "TEST";
        comment.RowKey = "TEST123";
        //comment.About = sow1;
        comment.From = person1;

    Thanks!
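    A side note on the modelling question this raises: the table service only persists simple scalar properties, so one common workaround is to store the key of the referenced entity rather than the reference itself. A rough sketch of that idea against the StorageClient library of that era, with hypothetical property names not taken from the question:

        using Microsoft.WindowsAzure.StorageClient;

        // Hypothetical entity: instead of an interface-typed About property,
        // persist enough scalar data to find the target entity again later.
        public class CommentEntity : TableServiceEntity
        {
            public string CommentText { get; set; }

            // Identify the thing the comment is about by its keys and type name.
            public string AboutPartitionKey { get; set; }
            public string AboutRowKey { get; set; }
            public string AboutType { get; set; }
        }

    Rehydrating the About object then becomes an explicit lookup by those keys, instead of something the storage client is expected to serialize on its own.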

    Read the article

  • Adding an Array inside an array in a PHP function

    - by bateman_ap
    I have created a function in PHP that calls a web service and parses the result, assigning values to variables and returning them all as an array. This all works perfectly; however, I have come across a need to have an "array within my array". I am assigning values as below:

        $productName = $product->Name;
        $productID = $product->ID;
        $productArray = array(
            'productName' => "$productName",
            'productID' => "$productID"
        );
        return $productArray;

    However, I now have a piece of data that comes back with multiple results, so I need an additional array to store these. I am getting the values from the returned XML using a foreach loop, but I want to be able to add them to the array with a name so I can reference them in the returned data. This is where I have a problem:

        $bestForLists = $product->BestFors;
        foreach( $bestForLists as $bestForList ) {
            $productBestFors = $bestForList->BestFor;
            foreach( $productBestFors as $productBestFor ) {
                $productBestForName = $productBestFor->Name;
                $productBestForID = $productBestFor->ID;
            }
        }

    I tried creating an array for these using the code below:

        $bestForArray[] = (array(
            "productBestForID" => "$productBestForID",
            "productBestForName" => "$productBestForName"
        ));

    And then at the end merging these together:

        $productArray = array_merge($productArray, $bestForArray);

    If I print out the returned value I get:

        Array ( [productName] => Test Product [productID] => 14128
                [0] => Array ( [productBestForID] => 56647 [productBestForName] => Lighting )
                [1] => Array ( [productBestForID] => 56648 [productBestForName] => Sound ) )

    I would like to give the internal array a name so I can reference it in my code, or is there a better way of doing this? At the moment I am using the following in my PHP page to get values:

        $productName = $functionReturnedValues['productName'];

    I would like to use the following to access the array I could then loop through:

        $bestForArray = $functionReturnedValues['bestForArray'];

    Hope someone can help.

    Read the article

  • Can't get KnownType to work with WCF

    - by Kelly Cline
    I have an interface and a class defined in separate assemblies, like this:

        namespace DataInterfaces
        {
            public interface IPerson
            {
                string Name { get; set; }
            }
        }

        namespace DataObjects
        {
            [DataContract]
            [KnownType( typeof( IPerson ) )]
            public class Person : IPerson
            {
                [DataMember]
                public string Name { get; set; }
            }
        }

    This is my service interface:

        public interface ICalculator
        {
            [OperationContract]
            IPerson GetPerson();
        }

    When I update my service reference for my client, I get this in the Reference.cs:

        public object GetPerson()
        {
            return base.Channel.GetPerson();
        }

    I was hoping that KnownType would give me IPerson instead of "object" here. I have also tried [KnownType( typeof( Person ) )] with the same result. I have control of both client and server, so I have my DataObjects (where Person is defined) and DataInterfaces (where IPerson is defined) assemblies in both places. Is there something obvious I am missing? I thought KnownType was the answer to being able to use interfaces with WCF.

    ----- FURTHER INFORMATION -----

    I removed the KnownType from the Person class and added [ServiceKnownType( typeof( Person ) )] to my service interface, as suggested by Richard. The client-side proxy still looks the same:

        public object GetPerson()
        {
            return base.Channel.GetPerson();
        }

    but now it doesn't blow up. The client just has an "object", though, so it has to cast it to IPerson before it is useful:

        var person = client.GetPerson();
        Console.WriteLine( ( (IPerson) person ).Name );
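    One route worth noting for the situation described above, offered as a sketch rather than a definitive fix: since both sides already share the DataInterfaces and DataObjects assemblies, the generated proxy can be skipped entirely and the shared contract used through a ChannelFactory, which keeps IPerson as the return type on the client. This assumes the ICalculator contract (decorated as shown in the question, plus [ServiceContract]) lives in a shared assembly; the binding and address below are placeholders:

        using System;
        using System.ServiceModel;

        class CalculatorClient
        {
            static void Main()
            {
                // Uses the shared ICalculator/IPerson types directly - no Reference.cs involved.
                var factory = new ChannelFactory<ICalculator>(
                    new BasicHttpBinding(),
                    new EndpointAddress("http://localhost:8000/calculator"));

                ICalculator proxy = factory.CreateChannel();
                IPerson person = proxy.GetPerson();   // no cast from object needed
                Console.WriteLine(person.Name);
            }
        }

    The trade-off is that the client now takes a compile-time dependency on the contract assembly instead of on generated proxy code.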

    Read the article

  • Python: Does one of these examples waste more memory?

    - by orokusaki
    In a Django view function which uses manual transaction committing, I have:

        context = RequestContext(request, data)
        transaction.commit()
        return render_to_response('basic.html', data, context)  # Returns a Django ``HttpResponse`` object, which is similar to a dictionary.

    I think it is a better idea to do this:

        context = RequestContext(request, data)
        response = render_to_response('basic.html', data, context)
        transaction.commit()
        return response

    If the page isn't rendered correctly in the second version, the transaction is rolled back. This seems like the logical way of doing it, albeit there won't likely be many exceptions at that point in the function once the application is in production. But... I fear that this might cost more, and the pattern will be repeated through a number of functions since the application is heavy with custom transaction handling, so now is the time to figure it out. If the HttpResponse instance is in memory already (at the point of render_to_response()), then what does another reference cost? When the function ends, doesn't the reference (the response variable) go away, so that when Django is done converting the HttpResponse into a string for output Python can immediately garbage collect it? Is there any reason I would want to use the first version (other than "it's one less line of code")?

    Read the article

  • Explanation of `self` usage during dealloc?

    - by Greg
    I'm trying to lock down my understanding of proper memory management within Objective-C. I've gotten into the habit of explicitly writing self.myProperty rather than just myProperty because I was encountering occasional scenarios where a property would not be set to the reference that I intended. Now, I'm reading Apple documentation on releasing IBOutlets, and it says that all outlets should be set to nil during dealloc. So, I put this in place as follows and experienced crashes as a result:

        - (void)dealloc {
            [self.dataModel close];
            [self.dataModel release], self.dataModel = nil;
            [super dealloc];
        }

    So, I tried taking out the "self" references, like so:

        - (void)dealloc {
            [dataModel close];
            [dataModel release], dataModel = nil;
            [super dealloc];
        }

    This second version seems to work as expected. However, it has me a bit confused. Why would self cause a crash in that case, when I thought self was a fairly benign reference used more as a formality than anything else? Also, if self is not appropriate in this case, then I have to ask: when should you include self references, and when should you not?

    Read the article

  • IBM WESB/WAS JCA security configuration

    - by user594883
    I'm working with IBM tools. I have a WebSphere ESB (WESB) and a CICS Transaction Gateway (CTG). The basic set-up is as follows: a SOAP service needs data from CICS. The SOAP service connects to the service bus (WESB) to handle data and protocol transformation, then WESB calls the CTG, which in turn calls CICS, and the reply is handled the same way in reverse (synchronously). WESB calls the CTG using a resource adapter and JCA connector (or CICS adapter, as it is called in WESB). Now, I have all the pieces in place and working. My question is about security, and even though I'm working with WESB, the answer is probably the same as in WebSphere Application Server (WAS). The resource adapter is secured using JAAS J2C authentication data. I have configured the security using a J2C authentication data entry, so basically I have a reference in the application I'm running, and at runtime the application looks up the security attributes from the server. So basically I'm always accessing the CICS adapter with the same security reference. My problem is that I need to access the resource in a more dynamic way in the future: the security cannot be welded into the application anymore but must instead be given as a parameter. Could some WESB or WAS guru help me out with how exactly this could be done in WESB/WAS?

    Read the article
