Search Results

Search found 49170 results on 1967 pages for 'running objects'.


  • Simple way of converting server side objects into client side using JSON serialization for asp.net websites

    - by anil.kasalanati
    Introduction: With the growth of Web 2.0 and the need for a faster user experience, the spotlight has shifted onto JavaScript-based applications built using the REST pattern or the ASP.NET AJAX PageRequestManager. And when we are working with JavaScript, wouldn't it be much better if we could create objects in an OOAD way and easily push them to the client side? Following are the reasons why you would push server-side objects onto the client side:
    - Easy availability of the complex object.
    - Use the C# compiler and rich IntelliSense to create and maintain the objects but use them in JavaScript. You could run code analysis, etc.
    - Reduce the number of calls we make to the server side by loading data on page load.

    I would like to elaborate on the third point because it proved highly beneficial to me when I was fixing the performance issues of a major website. There can be a scenario where you make multiple AJAX-based WebRequestManager calls to get the same response in a single page. This happens with a widget-based framework when all the widgets are independent but need some common information from the framework to load their data. So instead of making n separate calls, we could load the data needed during page load. The picture above shows the scenario in which all the widgets need the common information and then call the GetData web service on the server side. Of course the result can be cached on the client side, but a better solution would be to avoid the call completely. In order to do that we need to JSON-serialize the content and send it in the DOM.

    Example: I have developed a simple application to demonstrate the idea, and I will explain it in detail here. The class called SimpleClass is sent as serialized JSON to the client side. It inherits from a base class which has the implementation of the GetJSONString method. You can create a single base class, and all the objects which need to be pushed to the client side can inherit from it. The important thing to note is that the class should be annotated with the DataContract attribute and the members should have the DataMember attribute. This is needed by the .NET DataContractSerializer, which follows the opt-in model: if you want to send a member to the client side, you need to annotate it with DataMember. So if I didn't want to send the Result, I would simply remove its DataMember attribute. This is standard WCF/.NET 3.5 stuff, but it provides the flexibility of having a full-fledged object on the server side while sending a smaller object to the client side; sometimes you may hide some values due to security constraints. Another thing you will notice is that I have marked the class as Serializable so that it can be stored in Session and used in web-farm deployment scenarios.
    Following is the implementation of the base class. It uses the default DataContractJsonSerializer; for more information or customization, refer to the following blogs:
    http://softcero.blogspot.com/2010/03/optimizing-net-json-serializing-and-ii.html
    http://weblogs.asp.net/gunnarpeipman/archive/2010/12/28/asp-net-serializing-and-deserializing-json-objects.aspx

    The next part is pretty simple: I just need to inject this object into the ASPX page. In the ASPX markup I have the following lines:

        <script type="text/javascript">
            var data = (<%= SimpleClassJSON %>);
            alert(data.ResultText);
        </script>

    This writes the content as JSON into the variable data, and it can be attached to any element in the DOM. You can verify the element by checking data in the Firebug console.

    Design considerations:
    - If you have a lot of JavaScript, then you should think about using Script#, which lets you write your JavaScript in C#. Refer to Nikhil's blog: http://projects.nikhilk.net/ScriptSharp
    - Take security into consideration while exposing server-side objects to the client side. I have seen applications expose passwords and secret keys; that is not a good practice.

    The application can be tested using the following URL: http://techconsulting.vpscustomer.com/Samples/JsonTest.aspx
    The source code is available at http://techconsulting.vpscustomer.com/Source/HistoryTest.zip
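
    The post's code for the base class did not survive extraction, so here is a minimal sketch of what such a GetJSONString implementation typically looks like, assuming the method name from the description (the base-class name JsonSerializableBase is hypothetical):

        using System;
        using System.IO;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Json;
        using System.Text;

        [Serializable]
        [DataContract]
        public abstract class JsonSerializableBase
        {
            // Serializes the runtime type of the derived object to a JSON string.
            public string GetJSONString()
            {
                var serializer = new DataContractJsonSerializer(GetType());
                using (var stream = new MemoryStream())
                {
                    serializer.WriteObject(stream, this);
                    return Encoding.UTF8.GetString(stream.ToArray());
                }
            }
        }

        [Serializable]
        [DataContract]
        public class SimpleClass : JsonSerializableBase
        {
            [DataMember]
            public string ResultText { get; set; }
        }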

    Read the article

  • Cleaning up a dynamic array of Objects in C++

    - by Dr. Monkey
    I'm a bit confused about handling an array of objects in C++, as I can't seem to find information about how they are passed around (by reference or by value) and how they are stored in an array. I would expect an array of objects to be an array of pointers to that object type, but I haven't found this written anywhere. Would they be pointers, or would the objects themselves be laid out in memory in an array? In the example below, a custom class myClass holds a string (would this make it of variable size, or does the string object hold a pointer to a string and therefore take up a consistent amount of space?). I try to create a dynamic array of myClass objects within a myContainer. In the myContainer::addObject() method I attempt to make a bigger array, copy all the objects into it along with a new object, then delete the old one. I'm not at all confident that I'm cleaning up my memory properly with my destructors - what improvements could I make in this area?

        class myClass {
        private:
            std::string myName;
        public:
            unsigned short myAmount;

            myClass() : myAmount(0) {}   // default ctor needed for new myClass[n]
            myClass(const std::string &name, unsigned short amount) {
                myName = name;
                myAmount = amount;
            }
            // Do I need a destructor here? I don't think so, because I don't do
            // any dynamic memory allocation within this class.
        };

        class myContainer {
            int numObjects;
            myClass *myObjects;
        public:
            myContainer() {
                numObjects = 0;
                myObjects = NULL;
            }

            ~myContainer() {
                // Is this sufficient?
                // Or do I need to iterate through myObjects and delete each
                // individually?
                delete [] myObjects;
            }

            void addObject(const std::string &name, unsigned short amount) {
                myClass *newObject = new myClass(name, amount);
                myClass *tempObjects = new myClass[numObjects + 1];
                for (int i = 0; i < numObjects; i++)
                    tempObjects[i] = myObjects[i];
                tempObjects[numObjects] = *newObject;
                numObjects++;
                delete newObject;
                // Will this delete all my objects? I think it won't.
                // I'm just trying to delete the old array, and have the new array
                // hold all the objects plus the new object.
                delete [] myObjects;
                myObjects = tempObjects;
            }
        };
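
    For comparison, a minimal sketch (not from the original question) of the idiomatic alternative: std::vector stores the objects themselves contiguously by value, grows as needed, and its destructor runs each element's destructor, which removes the manual copy/delete bookkeeping entirely.

        #include <string>
        #include <vector>

        class myClass {
            std::string myName;
        public:
            unsigned short myAmount;
            myClass(const std::string &name, unsigned short amount)
                : myName(name), myAmount(amount) {}
        };

        class myContainer {
            std::vector<myClass> myObjects;   // elements laid out by value
        public:
            void addObject(const std::string &name, unsigned short amount) {
                myObjects.push_back(myClass(name, amount));  // reallocates as needed
            }
            // No destructor needed: ~vector destroys every element.
        };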

    Read the article

  • Python - Launch a Long Running Process from a Web App

    - by Greg
    I have a Python web application that needs to launch a long-running process. The catch is that I don't want it to wait around for the process to finish - just launch and move on. I'm running on Windows XP, and the web app is running under IIS (if that matters). So far I've tried Popen, but that didn't seem to work.
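
    A minimal sketch of one common approach on Windows (the script path is hypothetical): passing the DETACHED_PROCESS creation flag to subprocess.Popen starts the child detached from the parent, and never calling wait() or communicate() means the request returns immediately.

        import subprocess

        DETACHED_PROCESS = 0x00000008  # Windows process-creation flag

        # Fire and forget: no wait(), no communicate().
        subprocess.Popen(
            ["python", r"C:\jobs\long_task.py"],
            creationflags=DETACHED_PROCESS,
        )

    Whether this works under IIS also depends on the worker-process identity having permission to spawn processes.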

    Read the article

  • Unable to connect to CopSSH when running Windows service, works when running sshd directly

    - by Joe Enos
    I've been using CopSSH (which uses OpenSSH and Cygwin, so I don't know which of the three is the problem) as my SSH server at home on Windows 7 Ultimate 32-bit. I have used it for about a year with no real problems, other than that it sometimes takes 2 or 3 connection attempts to get through, but it has always worked within a few attempts. A few days ago, it just stopped working. The Windows service is still running, and I've rebooted, restarted the service, etc. with no change. On the client (using PuTTY on Windows), I get the message "Software caused connection abort". On the server, my event viewer registers the following:

        fatal: Write failed: Socket operation on non-socket

    I finally got it working, but only by executing sshd.exe directly from the command line on the server - no special flags or options, just straight execution - and then when I connect remotely, it goes through. I do have firewall and anti-virus software, which appear to be configured properly, and the fact that things work when running sshd.exe directly also indicates that the firewall is fine. I thought the service and the executable did exactly the same thing, but apparently there's some difference. Does anyone have any ideas on where I should look for the problem? If I can't find anything, I suppose I can write a Windows service or scheduled task that fires off sshd.exe directly and ensures that it stays running, but that's kind of a last resort, since it's just wrapping around something that should already work. I appreciate your help.

    Read the article

  • Architecture choice about representation of collections in Business Objects

    - by Rajarshi
    I have made certain choices in my architecture which I request the community to review and comment on. I am breaking the post into smaller sections to make it easier to understand the context and then suggest/comment. I am sorry that the post is long, but it is required to explain the context.

    What am I building: A typical business application with application users, security roles, business operation/action rights based on roles, several business modules like Stock Receive, Stock Transfer, Sale Order, Sale Invoice, Sale Return, Stock Audit, etc., and several reports. The application is a WinForms application, since it has a lot of rich and responsive UI requirements and has to operate in disconnected mode (with a local SQL Server) most of the time.

    What have I done: I have built a framework - nothing to boast about, just a set of libraries that serves the repetitive requirements of my application, e.g. authentication, role-based authorization, data access, validation, exception handling, logging, change-status tracking, presentation-model compliance, and reasonable loose coupling between components. No, I have not written everything from scratch; you could say I have consolidated many things together - some concepts from CSLA, Martin Fowler's Presentation Model, blocks from Enterprise Library, Unity, etc. - to build a set of libraries that helps my developers be productive quickly without having to search Google for many of the technical requirements. I have tried to keep the framework generic so that it can be used in typical business applications, and I have also tried to follow some best practices that allow the same business objects to be used in an ASP.NET MVC environment. My present architecture serves my objectives well, and we have built several modules (on WinForms) without much trouble. The architecture also lent itself well to building a usable prototype on ASP.NET MVC with the same set of business objects, without changing a single line of code.

    My dilemma: I have used custom business objects, since that gives me a clearer OOP representation of the problem scope in my solution scope and helps me visualize my entire solution as a collection of objects with data and behavior, rather than having a set of relational data (DataSet) and implementing behaviors (business logic, validation) etc. separately. With the rich data-binding support in .NET 2.0, binding custom business objects to the UI was a breeze. Now, while building my business objects, I am still in a dilemma about the representation of collections in business objects. Currently I am using DataSets to represent collections, while I have seen many suggestions to implement custom collections. For example, in my vision a typical Sale Invoice object will contain 'Sale Invoice Items' as a collection. Theoretically, I can accept that each 'Sale Invoice Item' should have its own behavior along with its data (ItemCode, Name, Qty, Price, etc.), but typically the management of Sale Invoice Items in a Sale Invoice is handled by the Sale Invoice object itself, e.g. adding/removing items from the collection. Additionally, we can also put business logic/rules for the Sale Invoice Items, like "Qty should not be greater than the ordered qty" or "Price should be at most 10% above the price in the Sale Order", in the Sale Invoice object itself.
    With that kind of vision, I felt that most business-object child collections can be managed by the parent itself - including adding/removing items and implementing business logic for them - so the collection items hold nothing but data. Additionally, collections are typically represented in the UI in grids, where the ability to support data binding becomes very important for any collection. Implementing a custom collection, in that case, would also mean implementing robust data-binding support for the collection, which is of course time consuming. So, considering that child-collection behaviors are implemented in the parent, and considering the need for data binding of child collections, I chose DataSet to represent any child collection in my business objects. In the above example of a Sale Invoice, I will have 'Invoice Number', 'Date', 'Customer', etc. as attributes of the 'Sale Invoice', but 'InvoiceItems' as a DataSet. Of course, when I say DataSet, it is not a vanilla DataSet but an extended DataSet that supports business-rule validation and the same role-based security model of my framework, to automatically allow/deny any business operation on rows/columns of the DataSet. This approach has allowed easier collection management and data binding in my business objects, and my developers are able to deliver modules rapidly.

    Questions: Do you feel that the approach is reasonable? Do you see any shortcomings in this approach? I am recently thinking of using typed DataSets as child collections, for easier representation in code, which will allow me to write 'currentInvoice.InvoiceItems' (for the DataTable) and 'invoiceItem.ProductCode' or 'invoiceItem.Qty', instead of 'drow["ProductCode"].ToString()' or '(int)drow["Qty"]' etc. Does this choice have any demerits? Thank you if you have read this far, and a salute if you still have the energy to answer.
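
    A minimal sketch of the difference the typed DataSet buys (the DataSet name SaleInvoiceDataSet and the column names are assumptions based on the question):

        // Untyped DataSet access: string-keyed columns and casts.
        DataRow drow = currentInvoice.InvoiceItems.Rows[0];
        string productCode = drow["ProductCode"].ToString();
        int qty = (int)drow["Qty"];

        // Typed DataSet access: the generated row class exposes columns
        // as properties, so typos become compile-time errors.
        SaleInvoiceDataSet.InvoiceItemsRow item = currentInvoice.InvoiceItems[0];
        string productCode2 = item.ProductCode;
        int qty2 = item.Qty;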

    Read the article

  • Sharing laptop's internal optical drive running windows XP Media Center Edition with Netbook running

    - by Col
    Just got a new HP netbook with no optical drive; a guide said I should be able to share the optical drive of another Windows computer. The netbook is running Windows 7, and the laptop, also HP, with the internal optical drive is running Windows XP Media Center Edition. I have a wireless network that both the laptop and the netbook access without a problem. The instructions did not seem to work in my case. When I right-clicked on Properties of the optical drive and went to the Sharing tab, there was no selection for Advanced Sharing as the instructions said. XP made me go to the Network wizard and set up a network (which I already had). After doing that, I could not access the drive from Windows 7. Has anyone been able to do this?

    Read the article

  • Do running times match with O(nlogn)?

    - by user472221
    Hi, I have written a class (greedy strategy) that at first used a sort method, which is O(n log n):

        Collections.sort(array, new SortingObjectsWithProbabilityField());

    and then used the insert method of a binary search tree, which takes O(h), where h is the tree height. For different n, the running times (in nanoseconds) were:

        n      running time
        17     515428
        33     783340
        65     540572
        129    1285080
        257    2052216
        513    4299709

    which I think is not correct, because for increasing n the running time should consistently increase. This is the method that measures the running time:

        int exponent = -1;
        // Note: adding Math.pow(2, exponent) to an int truncates toward zero,
        // so at exponent = -1 this adds 0; presumably the real harness
        // advanced n differently to produce the n values listed above.
        for (int n = 2; n < 1000; n += Math.pow(2, exponent)) {
            long timeForGreedy = 0;
            List<Element> randList = new ArrayList<Element>();
            for (int j = 1; j <= 3; j++) {
                Random rand = new Random();
                for (int i = 0; i < n; i++) {
                    Element e = new Element(rand.nextInt(100) + 1, rand.nextInt(100) + 1, 0);
                    for (int k = 0; k < i; k++) {
                        if (e.getDigit() == randList.get(k).getDigit()) {
                            e.setDigit(e.getDigit() + 1);
                        }
                    }
                    randList.add(e);
                }
                double sum = 0.0;
                for (int i = 0; i < randList.size(); i++) {
                    sum += randList.get(i).getProbability();
                }
                for (Element i : randList) {
                    i.setProbability(i.getProbability() / sum);
                }
                // Time only the greedy construction.
                long t2 = System.nanoTime();
                GreedyVersion greedy = new GreedyVersion((ArrayList<Element>) randList);
                long t3 = System.nanoTime();
                timeForGreedy = timeForGreedy + t3 - t2;
            }
            System.out.println(n + "," + timeForGreedy / 3);
            exponent++;
        }

    Thanks.

    Read the article

  • Is it possible to compile a query in linq-to-objects

    - by Luke101
    I have a LINQ to Objects query in a recursive loop, and I'm afraid that when the objects approach more than 1000 and I have more than 100 users on the site, my website will break. So is it possible to compile a LINQ to Objects query? The query does nothing more than find the direct children of a node.
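
    For reference, CompiledQuery applies to LINQ to SQL / Entity Framework, not to LINQ to Objects. A minimal sketch of the closest LINQ-to-Objects equivalent (Node, ParentId, and allNodes are hypothetical names): build a lookup once, so each recursive step is a constant-time probe instead of a full scan.

        // Build once before the recursion, not once per node.
        ILookup<int, Node> byParent = allNodes.ToLookup(n => n.ParentId);

        IEnumerable<Node> DirectChildren(int parentId)
        {
            return byParent[parentId];   // O(1) probe; empty sequence if none
        }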

    Read the article

  • String of KML needs to be converted to java objects

    - by spartikus
    I have a string of KML coming in on a request object. I have used xjc to create the KML Java classes. I am looking for an easy way to create the nested KML Java objects from this string. I could parse the string and create each object in the tree by hand, but wouldn't it be cool if there was a library or something that would create the Java objects for me? Something like:

        KmlType type = parseKML(mykmlStringFromTheRequest);

    Then type would be a tree of KML objects. Thanks for the help, all.
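
    Since the classes were generated by xjc, JAXB itself can do this. A minimal sketch, assuming the generated package is com.example.kml (the package name is hypothetical; KmlType comes from the question):

        import java.io.StringReader;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.JAXBElement;
        import javax.xml.bind.JAXBException;
        import javax.xml.bind.Unmarshaller;

        @SuppressWarnings("unchecked")
        public static KmlType parseKML(String kml) throws JAXBException {
            // The context name must be the package xjc generated into.
            JAXBContext ctx = JAXBContext.newInstance("com.example.kml");
            Unmarshaller um = ctx.createUnmarshaller();
            // xjc usually wraps the root element in a JAXBElement.
            JAXBElement<KmlType> root =
                    (JAXBElement<KmlType>) um.unmarshal(new StringReader(kml));
            return root.getValue();
        }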

    Read the article

  • Value objects in DDD - Why immutable?

    - by Hobbes
    I don't get why value objects in DDD should be immutable, nor do I see how this is easily done. (I'm focusing on C# and Entity Framework, if that matters.) For example, let's consider the classic Address value object. If you needed to change "123 Main St" to "123 Main Street", why should I need to construct a whole new object instead of saying myCustomer.Address.AddressLine1 = "123 Main Street"? (Even if Entity Framework supported structs, this would still be a problem, wouldn't it?) I understand (I think) the idea that value objects don't have an identity and are part of a domain object, but can someone explain why immutability is a Good Thing? EDIT: My final question here really should be "Can someone explain why immutability is a Good Thing as applied to Value Objects?" Sorry for the confusion! EDIT: To clarify, I am not asking about CLR value types (vs reference types). I'm asking about the higher-level DDD concept of Value Objects. For example, here is a hack-ish way to implement immutable value types for Entity Framework: http://rogeralsing.com/2009/05/21/entity-framework-4-immutable-value-objects. Basically, he just makes all setters private. Why go through the trouble of doing this?
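
    A minimal sketch (not from the question) of what an immutable Address looks like, and of the replace-don't-mutate update that follows from it; the point is that a value object shared by two entities can never be edited out from under one of them:

        public class Address
        {
            public string AddressLine1 { get; private set; }
            public string City { get; private set; }

            public Address(string addressLine1, string city)
            {
                AddressLine1 = addressLine1;
                City = city;
            }
        }

        // "Changing" the address means swapping in a whole new value:
        myCustomer.Address = new Address("123 Main Street", myCustomer.Address.City);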

    Read the article

  • Creating New Objects in JavaScript

    - by Ken Ray
    I'm a relative newbie to object-oriented programming in JavaScript, and I'm unsure of the "best" way to define and use objects. I've seen the "canonical" way to define objects and instantiate a new instance, as shown below:

        function myObjectType(property1, property2) {
            this.property1 = property1;
            this.property2 = property2;
        }

        // now create a new instance
        var myNewVariable = new myObjectType('value for property1', 'value for property2');

    But I've seen other ways to create new instances of objects, in this manner:

        var anotherVariable = new someObjectType({
            property1: "Some value for this named property",
            property2: "This is the value for property 2"
        });

    I like how that second way appears - the code is self-documenting. But my questions are: Which way is "better"? Can I use that second way to instantiate a variable of an object type that has been defined using the "classical" way, with that implicit constructor? If I want to create an array of these objects, are there any other considerations? Thanks in advance.
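
    For what it's worth, the second style is not a separate language feature - it is still a one-argument constructor call - so the constructor has to be written to expect an options object. A minimal sketch:

        // A constructor written for the options-object style.
        function someObjectType(options) {
            this.property1 = options.property1;
            this.property2 = options.property2;
        }

        var anotherVariable = new someObjectType({
            property1: "Some value for this named property",
            property2: "This is the value for property 2"
        });

        // A constructor declared as function f(a, b) cannot be called this
        // way without changing it: the whole literal would land in `a`.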

    Read the article

  • Rails: Serializing objects in a database?

    - by keruilin
    I'm looking for some general guidance on serializing objects in a database:
    - What are serialized objects?
    - What are some best-practice scenarios for serializing objects in a DB?
    - What attributes do you use when creating the column in the DB so you can store a serialized object?
    - How do you save a serialized object?
    - And how do you access the serialized object and its attributes? (Using hashes?)
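
    A minimal sketch of the classic Rails answer, assuming a User model with a TEXT column named preferences (both names are hypothetical): ActiveRecord's serialize stores the object as YAML and rehydrates it on load.

        class User < ActiveRecord::Base
          # Stored as YAML in a TEXT column; reloaded as a Hash.
          serialize :preferences, Hash
        end

        user = User.create(:preferences => { :theme => "dark", :per_page => 25 })
        user.preferences[:theme]   # => "dark"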

    Read the article

  • Long running operations (threads) in a web (asp.net) environment

    - by rrejc
    I have an ASP.NET (MVC) web site. As part of its functionality I will have to support some long-running operations, for example:

    Initiated by a user: A user can upload an (XML) file to the server. On the server I need to extract the file, do some manipulation (insert into the DB), etc. This can take from one minute to ten minutes (or even more - it depends on the file size). Of course I don't want to block the request while the import is running; I want to redirect the user to a progress page where he will have a chance to watch the status, see errors, or even cancel the import. This operation will not be used frequently, but it may happen that two users try to import data at the same time, and it would be nice to run the imports in parallel. At first I was thinking of creating a new thread in IIS (in the controller action) and running the import there, but I am not sure whether it is a good idea to create worker threads on a web server. Should I use a Windows service or some other approach?

    Initiated by the system:
    - I will have to periodically update a Lucene index with new data.
    - I will have to send mass emails (in the future).

    Should I implement these as jobs in the site and run them via Quartz.NET, or should I also create a Windows service or something? What are the best practices when it comes to running site "jobs"? Thanks!

    Read the article

  • c++ thread running time

    - by chnet
    I want to know whether I can calculate the running time of each thread. I have implemented a multithreaded program in C++ using pthreads. As we know, the threads compete for the CPU. Can I use the clock() function to calculate the actual number of CPU clocks each thread consumes? My program looks like:

        class Thread {
            void Start();       // starts multiple threads
            void Run();
            void Computing();   // each thread does its work here
        };

    Start() starts multiple threads; then each thread runs the Computing function to do something. My question is how I can calculate the running time of each thread for the Computing function.
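
    One caveat worth noting: clock() measures CPU time for the whole process, not per thread. A minimal sketch of a per-thread measurement on POSIX systems (a sketch only - call it at the start and end of Computing() and take the difference):

        #include <pthread.h>
        #include <time.h>

        // Returns the calling thread's own CPU time, in seconds.
        double thread_cpu_seconds()
        {
            clockid_t cid;
            struct timespec ts;
            pthread_getcpuclockid(pthread_self(), &cid);  // this thread's CPU clock
            clock_gettime(cid, &ts);
            return ts.tv_sec + ts.tv_nsec / 1e9;
        }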

    Read the article

  • Cutting objects and applying texture to cut. Unity3d/C#

    - by Timothy Williams
    Basically what I'm trying to do is figure out how to do real-time cutting of objects and apply a texture to the cut surface. I found some good scripts, but most of them have been abandoned and aren't fully working yet.
    Applying textures: http://forum.unity3d.com/threads/75949-Mesh-Real-Cutting?highlight=mesh+real+cutting
    Cutting: http://forum.unity3d.com/threads/78594-Object-Cutter
    Another (free) cutter (I'm not entirely sure how this one will handle cutting complex meshes): http://forum.unity3d.com/threads/69992-fake-slicer?p=449114&viewfull=1#post449114
    My plan as of right now is to combine links 1 & 2 or 1 & 3 programming-wise. What I'm asking for here is any advice on how to proceed (links to asset store packages, or other code that shows how to accomplish something complex like this).

    Read the article

  • Can I get the IE debugger to break into long-running javascript

    - by Brian Deacon
    I have a page that has a byzantine amount of JavaScript running. In IE only, and only version 8, I get a long-script warning that I can reliably reproduce. I suspect it is event handlers triggering themselves in an infinite loop. The Developer Tools are limping horribly under the weight of the script running, but I do seem to be able to get the log to tell me what line of script was executing when I aborted; it is inevitably some of the deep plumbing of the ExtJS code we use, though, and I can't tell where it is in my own stack of code. A way of seeing the call stack would work, but preferably I'd like to be able to just break into the debugger when I get the long-script warning so I can step through the stack. There is a similar question posted already, but the answers given were either for the wrong tool or the not-terribly-helpful advice to eliminate half my code at a time on a binary hunt for the infinite loop. If my code were simple enough for that, it probably wouldn't have gotten the infinite loop in the first place. If I could reproduce the problem in Firebug, I'd probably be a lot happier too.

    Read the article

  • Long-running ASP.NET tasks

    - by John Leidegren
    I know there are a bunch of APIs out there that do this, but I also know that the hosting environment (being ASP.NET) puts restrictions on what you can reliably do in a separate thread. I could be completely wrong, so please correct me if I am, but this is what I think I know:

    - A request typically times out after 120 seconds (this is configurable), and eventually the ASP.NET runtime will kill a request that's taking too long to complete.
    - The hosting environment, typically IIS, employs process recycling and can at any point decide to recycle your app. When this happens, all threads are aborted and the app restarts. I'm not sure how aggressive it is; it would be kind of stupid to assume that it would abort a normal ongoing HTTP request, but I would expect it to abort a thread, because it doesn't know anything about the unit of work of a thread.

    If you had to create a programming model that could easily and reliably run a long task - one that might have to run for days - how would you accomplish this from within an ASP.NET application? The following are my thoughts on the issue:

    I've been thinking along the lines of hosting a WCF service in a Win32 service and talking to it through WCF. This is, however, not very practical, because the only reason I would choose to do so is to send tasks (units of work) from several different web apps. I'd then periodically ask the service for status updates and act accordingly. My biggest concern with this is that it would NOT be a particularly great experience if I had to deploy every task to the service for it to be able to execute some instructions. There's also the issue of input: how would I feed this service with data if I had a large data set and needed to chew through it?

    What I typically do right now is this:

        SELECT TOP 10 *
        FROM   WorkItem WITH (ROWLOCK, UPDLOCK, READPAST)
        WHERE  WorkCompleted IS NULL

    It allows me to use a SQL Server database as a work queue and periodically poll the database with this query for work. If the work item completed successfully, I mark it as done and proceed until there's nothing more to do. What I don't like is that I could theoretically be interrupted at any point, and if I'm in between succeeding and marking the item as done, I could end up processing the same work item twice. I might be a bit paranoid and this might be all fine, but as I understand it there's no guarantee that it won't happen...

    I know there have been similar questions on SO before, but none really ends with a definitive answer. This is a really common thing, yet the ASP.NET hosting environment is ill equipped to handle long-running work. Please share your thoughts.
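
    On the process-twice worry, a minimal sketch of one way to shrink that window (assuming a nullable WorkStarted column is added to WorkItem): claim the rows in the same statement that reads them, so the claim and the read can never be split by a crash.

        UPDATE TOP (10) WorkItem WITH (ROWLOCK, READPAST)
        SET    WorkStarted = GETUTCDATE()
        OUTPUT inserted.*
        WHERE  WorkStarted IS NULL
          AND  WorkCompleted IS NULL;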

    Read the article

  • c++ quick sort running time

    - by chnet
    I have a question about the quicksort algorithm. I implemented quicksort and played with it. The elements in the initial unsorted array are random numbers chosen from a certain range. I found that the range of the random numbers affects the running time. For example, it takes 40 seconds for 1,000,000 random numbers chosen from the range (1 - 2,000), while it takes 9 seconds if the 1,000,000 numbers are chosen from the range (1 - 10,000). But I do not know how to explain it. In class, we talked about how the pivot value can affect the depth of the recursion tree. In my implementation, the last value of the array is chosen as the pivot value; I do not use a randomized scheme to select the pivot.

        #include <cstdlib>
        #include <ctime>
        #include <iostream>
        #include <vector>
        using namespace std;

        int partition(vector<int> &vec, int p, int r) {
            int x = vec[r];
            int i = p - 1;
            int j = p;
            while (1) {
                if (vec[j] <= x) {
                    i = i + 1;
                    int temp = vec[j];
                    vec[j] = vec[i];
                    vec[i] = temp;
                }
                j = j + 1;
                if (j == r) break;
            }
            int temp = vec[i + 1];
            vec[i + 1] = vec[r];
            vec[r] = temp;
            return i + 1;
        }

        void quicksort(vector<int> &vec, int p, int r) {
            if (p < r) {
                int q = partition(vec, p, r);
                quicksort(vec, p, q - 1);
                quicksort(vec, q + 1, r);
            }
        }

        void random_generator(int num, int *array) {
            srand((unsigned)time(0));
            for (int index = 0; index < num; index++) {
                int random_integer = (rand() % 10000) + 1;
                *(array + index) = random_integer;
            }
        }

        int main() {
            const int array_size = 1000000;
            int *input_array = new int[array_size];   // heap, not stack: 4 MB won't fit on the stack
            random_generator(array_size, input_array);
            vector<int> vec(input_array, input_array + array_size);
            delete [] input_array;

            clock_t t1 = clock();
            quicksort(vec, 0, array_size - 1);  // call quicksort
            clock_t t2 = clock();

            float diff = (float)t2 - (float)t1;
            cout << diff << endl;
            cout << diff / CLOCKS_PER_SEC << endl;
        }

    Read the article

  • Retaining Managed objects - more general retaining objects

    - by Luuk D. Jansen
    A quick question regarding managed objects. I created an array of managed objects (in Object 1: a TableViewController), and I pass one of those objects to another class/object (Object 2: a TableCell). The original array should still be retained in the original caller class. When Object 2 is released, does that mean that that particular item in the array is released as well, because the reference to it in Object 2 was released? I am trying to better understand how to work with managed objects, as I get 'object was released' errors.

    [EDIT] After some experimenting I came across the following scenario. In a class other than the main AppDelegate, I get a reference to the AppDelegate to obtain the managed object context:

        appDelegate = (iDomsAppDelegate *)[[UIApplication sharedApplication] delegate];
        [self setContext:[appDelegate managedObjectContext]];

    When the class is finished and I release it, the class's 'appDelegate' variable is also released. But then the managed object context is closed, and obviously any future attempt to use it will cause a crash. So should I leave the appDelegate unreleased? This comes down to the same question as above, about when and how to release in situations where an object is used from another class. I think another way of putting it is: how do you know when you own an object and when you don't?

    Read the article

  • Should these concerns be separated into separate objects?

    - by Lewis Bassett
    I have objects which implement the interface BroadcastInterface, which represents a message that is to be broadcast to all users of a particular group. It has a setter and getter method for the Subject and Body properties, and an addRecipientRole() method, which takes a given role and finds the contact token (e.g., an email address) for each user in the role and stores it. It then has a getContactTokens() method. BroadcastInterface objects are passed to an object that implements BroadcasterInterface. These objects are responsible for broadcasting a passed BroadcastInterface object. For example, an EmailBroadcaster implementation of the BroadcasterInterface will take EmailBroadcast objects and use the mailer services to email them out. Now, depending on what BroadcasterInterface implementation is used to broadcast, a different implementation of BroadcastInterface is used by client code. The Single Responsibility Principle seems to suggest that I should have a separate BroadcastFactory object, for creating BroadcastInterface objects, depending on what BroadcasterInterface implementation is used, as creating the BroadcastInterface object is a different responsibility to broadcasting them. But the class used for creating BroadcastInterface objects depends on what implementation of BroadcasterInterface is used to broadcast them. I think, because the knowledge of what method is used to send the broadcasts should only be configured once, the BroadcasterInterface object should be responsible for providing new BroadcastInterface objects. Does the responsibility of “creating and broadcasting objects that implement the BroadcastInterface interface” violate the Single Responsibility Principle? (Because the contact token for sending the broadcast out to the users will differ depending on the way it is broadcasted, I need different broadcast classes—though client code will not be able to tell the difference.)

    Read the article

  • Split NSData objects into other NSData objects with a given size

    - by Cedric Vandendriessche
    I have an NSData object of approximately 1000 kB. Now I want to transfer this via Bluetooth, and that would work better if I had, let's say, 10 objects of 100 kB. It comes to mind that I should use the -subdataWithRange: method of NSData. I haven't really worked with NSRange - well, I know how it works, but I have no idea how to read from a given location with a length of 'to end of data'. Some code on how to split this into multiple 100 kB NSData objects would really help me out here. (It probably involves the length method to work out how many objects should be made?) Thank you in advance.
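
    A minimal sketch of that loop: step through the data in 100 kB strides, clamping the final range to whatever is left so it never runs past the end.

        NSData *data = ...;                       // the ~1000 kB payload
        NSUInteger chunkSize = 100 * 1024;
        NSMutableArray *chunks = [NSMutableArray array];

        NSUInteger offset = 0;
        while (offset < [data length]) {
            NSUInteger thisChunk = MIN(chunkSize, [data length] - offset);
            [chunks addObject:[data subdataWithRange:NSMakeRange(offset, thisChunk)]];
            offset += thisChunk;
        }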

    Read the article

  • Creating C++ objects

    - by Phenom
    I noticed that there are two ways to create C++ objects:

        BTree *btree = new BTree;

    and

        BTree btree;

    From what I can tell, the only differences are in how class members are accessed (the . vs. the -> operator), and that when the first way is used, private integers get initialized to 0. Which way is better, and what's the difference? How do you know when to use one or the other?
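
    A minimal sketch (not from the question) contrasting the two forms: the second creates the object with automatic storage, destroyed at the end of its scope; the first creates it on the heap, where it lives until explicitly deleted.

        #include <iostream>

        struct BTree {
            int height;
            BTree() : height(0) {}
        };

        int main() {
            BTree stackTree;               // automatic storage, access with '.'
            BTree *heapTree = new BTree;   // dynamic storage, access with '->'

            stackTree.height = 1;
            heapTree->height = 2;

            std::cout << stackTree.height << " " << heapTree->height << std::endl;

            delete heapTree;               // heap objects must be deleted (or they leak)
            return 0;
        }                                  // stackTree is destroyed automatically here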

    Read the article
