Search Results

Search found 13526 results on 542 pages for 'distributed objects'.

  • Release management with a distributed version control system

    - by See Sharp Cheddar
    We're considering a switch from SVN to a distributed VCS at my workplace. I'm familiar with all the reasons for wanting to use a DVCS for day-to-day development: local version control, easier branching and merging, etc., but I haven't seen much that's compelling in terms of managing software releases. Here's our release process:
    1. Discover what changes are available for merging.
    2. Run a query to find the defects/tickets associated with these changes.
    3. Filter out changes associated with "open" tickets. In our environment, tickets must be in a closed state before they can be merged into a release branch.
    4. Filter out changes we don't want in the release branch. We are very conservative when it comes to merging changes: if a change isn't absolutely necessary, it doesn't get merged.
    5. Merge the available changes, preferably in chronological order. We group changes together if they're associated with the same ticket.
    6. Block unwanted changes from the release branch (svnmerge block) so we don't have to deal with them again.
    Sometimes we can be juggling 3-5 different milestones at a time. Some milestones have very different constraints, and the block list can get quite long. I've been messing around with git, mercurial and plastic, and as far as I can tell none of them address this model very well. It seems like they would work very well when you have only one product you're releasing, but I can't imagine using them to juggle multiple, very different products from the same codebase. For example, cherry-picking seems to be an afterthought in mercurial (you have to use the 'transplant' command), and after you cherry-pick a change into a branch it still shows up as an available integration. Cherry-picking breaks the mercurial way of working. DVCS seems better suited to feature branches: there's no need for cherry-picking if you merge directly from a feature branch to trunk and the release branch. But who wants to do all that merging all the time? How do you query for what's available to merge? And how do you make sure all the changes in a feature branch belong together? It sounds like total chaos. I'm torn, because the coder in me wants a DVCS for day-to-day work. I really want it. But I fear the day when I have to put on the release manager hat and sort out what needs to be merged and what doesn't. I want to write code; I don't want to be a merge monkey.

    Read the article

  • Simple way of converting server side objects into client side using JSON serialization for asp.net websites

    - by anil.kasalanati
    Introduction: With the growth of Web 2.0 and the need for a faster user experience, the spotlight has shifted onto JavaScript-based applications built using the REST pattern or the ASP.NET AJAX PageRequestManager. And when we are working with JavaScript, wouldn't it be much better if we could create objects in an OOAD way and easily push them to the client side? The following are the reasons why you would push server-side objects onto the client side:
    - Easy availability of the complex object.
    - Use the C# compiler and rich IntelliSense to create and maintain the objects, but use them in the JavaScript. You could run code analysis etc.
    - Reduce the number of calls we make to the server side by loading data on page load.
    I would like to expand on the 3rd point, because it proved to be highly beneficial to me when I was fixing the performance issues of a major website. There can be a scenario where you make multiple AJAX-based WebRequestManager calls in order to get the same response on a single page. This happens in the case of a widget-based framework, when all the widgets are independent but need some common information available in the framework in order to load their data. So instead of making n separate calls, we can load the data needed during page load. The above picture shows the scenario where all the widgets need the common information and then call the GetData webservice on the server side. Of course the result can be cached on the client side, but a better solution would be to avoid the call completely. In order to do that, we need to JSON-serialize the content and send it in the DOM.
    Example: I have developed a simple application to demonstrate the idea, and I will explain it in detail here. The class called SimpleClass is sent as serialized JSON to the client side, and it inherits from a base class which has the implementation for the GetJSONString method. You can create a single base class, and all the objects which need to be pushed to the client side can inherit from that class. The important thing to note is that the class should be annotated with the DataContract attribute and the members should have the DataMember attribute. This is needed by the .NET DataContractSerializer, which follows the opt-in model: if you want to send an attribute to the client side, you need to annotate it with the DataMember attribute. So if I didn't want to send the Result, I would simply remove the DataMember attribute. This is default WCF/.NET 3.5 stuff, but it provides the flexibility of having a full-fledged object on the server side while sending a smaller object to the client side. Sometimes you may hide some values due to security constraints. Another thing you will notice is that I have marked the class as Serializable so that it can be stored in the Session and used in web-farm deployment scenarios.
    Following is the implementation of the base class, which uses the default DataContractJsonSerializer. For more information or customization refer to the following blogs: http://softcero.blogspot.com/2010/03/optimizing-net-json-serializing-and-ii.html and http://weblogs.asp.net/gunnarpeipman/archive/2010/12/28/asp-net-serializing-and-deserializing-json-objects.aspx
    The next part is pretty simple: I just need to inject this object into the aspx page. In the aspx markup I have the following line:
        <script type="text/javascript">
            var data = (<%= SimpleClassJSON %>);
            alert(data.ResultText);
        </script>
    This will output the content as JSON into the variable data, and this can be any element in the DOM. You can verify the element by checking data in the Firebug console.
    Design considerations: If you have a lot of JavaScript then you need to think about using Script#, which lets you write JavaScript in C#. Refer to Nikhil's blog: http://projects.nikhilk.net/ScriptSharp Ensure that you take security into consideration when exposing server-side objects to the client side; I have seen applications expose passwords and secret keys, and that is not a good practice.
    The application can be tested using the following URL: http://techconsulting.vpscustomer.com/Samples/JsonTest.aspx The source code is available at http://techconsulting.vpscustomer.com/Source/HistoryTest.zip

    Read the article

  • Advice on designing and building distributed application to track vehicles

    - by dario-g
    I'm working on an application for tracking vehicles. There will be about 10k or more vehicles. Each will be sending ~250 bytes every minute. The data contains the GPS location and everything from the CAN bus (every piece of data we can read from the vehicle computer and dashboard). Data is sent over GSM/GPRS (using the UDP protocol). The estimated number of rows of this data per day is ~2000k. I see 3 main blocks:
    1. Multithreaded Socket Server (MSS) - I have it. MSS stores received data in a queue (using NServiceBus).
    2. Rule Processor Server (RPS) - this is the core of the system. This block is responsible for parsing received data, storing it in the database, processing rules, and sending messages to the Notifier Server (which will send e-mails/SMS texts). Example rule: among the received bytes there is information about the current speed; when the speed is above 120, show an alert in the web application for specified users, send an e-mail, and send an SMS text. (There can be more than one instance of RPS.)
    3. Web application - allows reporting and defining rules by users, monitoring alerts, etc.
    I'm looking for advice on how to design the communication between the RPS and the web application. Some questions: Should the web application and RPS have separate databases, or will one central database be enough? I have one domain model in the web application; if there is one central database, can I use the same model (objects) on the RPS? And how do I send changed rules to the RPS? I try to decouple these blocks as much as possible. I'm planning to create a different instance of the application for each client (each client will have a separate database). One client will have 10k vehicles, others only 100 vehicles.
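
    As a rough illustration of the rule-processing idea described in this question, here is a minimal Java sketch of an RPS-style rule check. Everything in it - class names, the speed field, the threshold, the notifier - is invented for illustration, not taken from the actual system:

        import java.util.List;

        // A decoded CAN/GPS sample for one vehicle (fields are placeholders).
        class VehicleSample {
            final String vehicleId;
            final double speedKmh;
            VehicleSample(String vehicleId, double speedKmh) {
                this.vehicleId = vehicleId;
                this.speedKmh = speedKmh;
            }
        }

        // A rule inspects a sample and decides whether to raise an alert.
        interface Rule {
            void apply(VehicleSample sample, Notifier notifier);
        }

        // Stand-in for the Notifier Server integration (web alert / e-mail / SMS).
        interface Notifier {
            void alert(String vehicleId, String message);
        }

        class SpeedLimitRule implements Rule {
            private final double limitKmh;
            SpeedLimitRule(double limitKmh) { this.limitKmh = limitKmh; }

            @Override
            public void apply(VehicleSample sample, Notifier notifier) {
                if (sample.speedKmh > limitKmh) {
                    notifier.alert(sample.vehicleId,
                            "speed " + sample.speedKmh + " km/h exceeds " + limitKmh);
                }
            }
        }

        // The RPS core loop would run every parsed sample through the active rules.
        class RuleProcessor {
            private final List<Rule> rules;
            private final Notifier notifier;
            RuleProcessor(List<Rule> rules, Notifier notifier) {
                this.rules = rules;
                this.notifier = notifier;
            }
            void process(VehicleSample sample) {
                for (Rule rule : rules) {
                    rule.apply(sample, notifier);
                }
            }
        }

    Keeping rules behind an interface like this is one way to let the web application store rule definitions in a shared database while each RPS instance only reloads the rule list when it changes.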

    Read the article

  • Cleaning up a dynamic array of Objects in C++

    - by Dr. Monkey
    I'm a bit confused about handling an array of objects in C++, as I can't seem to find information about how they are passed around (by reference or by value) and how they are stored in an array. I would expect an array of objects to be an array of pointers to that object type, but I haven't found this written anywhere. Would they be pointers, or would the objects themselves be laid out in memory in an array? In the example below, a custom class myClass holds a string. Would this make it of variable size, or does the string object hold a pointer to a string and therefore take up a consistent amount of space? I try to create a dynamic array of myClass objects within a myContainer. In the myContainer.addObject() method I attempt to make a bigger array, copy all the objects into it along with a new object, then delete the old one. I'm not at all confident that I'm cleaning up my memory properly with my destructors - what improvements could I make in this area?
        #include <string>
        using std::string;

        class myClass {
        private:
            string myName;
        public:
            unsigned short myAmount;

            // Default constructor needed because new myClass[n] default-constructs its elements.
            myClass() : myAmount(0) {}
            myClass(string name, unsigned short amount) {
                myName = name;
                myAmount = amount;
            }
            // Do I need a destructor here? I don't think so, because I don't do any
            // dynamic memory allocation within this class.
        };

        class myContainer {
            int numObjects;
            myClass *myObjects;
        public:
            myContainer() {
                numObjects = 0;
                myObjects = NULL;   // so the destructor is safe on an empty container
            }
            ~myContainer() {
                // Is this sufficient?
                // Or do I need to iterate through myObjects and delete each
                // individually?
                delete [] myObjects;
            }
            void addObject(string name, unsigned short amount) {
                myClass *newObject = new myClass(name, amount);
                myClass *tempObjects = new myClass[numObjects + 1];
                for (int i = 0; i < numObjects; i++)
                    tempObjects[i] = myObjects[i];
                tempObjects[numObjects] = *newObject;
                numObjects++;
                delete newObject;
                // Will this delete all my objects? I think it won't.
                // I'm just trying to delete the old array, and have the new array hold
                // all the objects plus the new object.
                delete [] myObjects;
                myObjects = tempObjects;
            }
        };

    Read the article

  • How to rewrite a TCP MMOG server designed to run in a single machine, in a distributed way?

    - by Dokkat
    I have an MMOG server running on C++, using Winsock. My server won't support more than 200 players. I had the idea of redesigning it to use multiple servers instead of one, so that, for example, each server could take care of a number of players and, if things got too laggy, transfer the responsibility for a player to another server. I'm not sure how to program consistent game logic like that, though. Are there techniques for this?

    Read the article

  • Architecture choice about representation of collections in Business Objects

    - by Rajarshi
    I have made certain choices in my architecture which I request the community to review and comment on. I am breaking the post into smaller sections to make it easier to understand the context and then suggest/comment. I am sorry that the post is long, but that is required to explain the context.
    What am I building: A typical business application with application users, security roles, business operation/action rights based on roles, several business modules like Stock Receive, Stock Transfer, Sale Order, Sale Invoice, Sale Return, Stock Audit etc., and several reports. The application is a WinForms application, since it has a lot of rich and responsive UI requirements and has to operate in disconnected mode (with a local SQL Server) most of the time.
    What have I done: I have built a framework - nothing to boast about, just a set of libraries that serves the repetitive requirements of my application, e.g. authentication, role-based authorization, data access, validation, exception handling, logging, change status tracking, presentation model compliance and reasonable loose coupling between components. No, I have not written everything from scratch; you can say I have consolidated many things together, like some concepts from CSLA, Martin Fowler's Presentation Model, blocks from Enterprise Library, Unity etc., to build a set of libraries that helps my developers be productive quickly without having to look up Google for many of the technical requirements. I have tried to keep the framework generic so that it can be used in typical business applications, and also tried to follow some best practices that allow the same Business Objects to be used in an ASP.NET MVC environment as well. My present architecture serves my objectives well, and I have built several modules (on WinForms) without much trouble. The architecture also lent itself to building a usable prototype on ASP.NET MVC with the same set of business objects, without changing a single line of code.
    My dilemma: I have used custom Business Objects, since that gives me a clearer OOP representation of the problem scope in my solution scope and helps me visualize my entire solution as a collection of objects with data and behavior, rather than having a set of relational data (DataSet) and implementing behaviours (business logic, validation) etc. separately. With the rich databinding support in .NET 2.0, binding custom Business Objects to the UI was a breeze. Now, while building my business objects, I am still in a dilemma about the representation of collections in business objects. Currently I am using DataSets to represent collections, while I have seen many suggestions to implement custom collections. For example, in my vision a typical Sale Invoice object will contain 'Sale Invoice Items' as a collection. Now theoretically, I can accept that each 'Sale Invoice Item' should have its own behavior along with its data (ItemCode, Name, Qty, Price etc.), but typically the management of Sale Invoice Items in a Sale Invoice is handled by the Sale Invoice object itself, e.g. adding/removing items from the collection. Additionally, we can also put business logic/rules for the Sale Invoice Items, like "Qty should not be greater than the ordered qty" or "Price should be at most 10% above the price in the Sale Order", in the Sale Invoice object itself.
    With that kind of a vision, I felt that most business object child collections can be managed by the parent itself, including adding/removing items from the collection as well as implementing business logic for the collection items; hence the collection items hold nothing but data. Additionally, typical collections are represented in the UI in grids, where the ability to support databinding becomes very important for any collection. Implementing a custom collection, in that case, would also mean I have to implement robust databinding support for the collection as well, which is of course time consuming. So, considering that child collection behaviors are implemented in the parent and the need for databinding of child collections, I chose DataSet to represent any child collection in my business objects. In the above example of the Sale Invoice, I will have 'Invoice Number', 'Date', 'Customer' etc. as attributes of the 'Sale Invoice', but 'InvoiceItems' as a DataSet. Of course, when I say DataSet it is not a vanilla DataSet, but an extended DataSet that supports business rule validation and the same role-based security model of my framework to allow/deny any business operation on rows/columns of the DataSet, automatically. This approach has allowed easier collection management and databinding in my business objects, and my developers are able to deliver modules rapidly.
    Questions: Do you feel that the approach is reasonable? Do you see any shortcomings in this approach? I am recently thinking of using 'Typed DataSets' as child collections, for easier representation in code, so that I can write 'currentInvoice.InvoiceItems' (for the DataTable) and 'invoiceItem.ProductCode' or 'invoiceItem.Qty', instead of 'drow["ProductCode"].ToString()' or '(int)drow["Qty"]' etc. Does this choice have any demerits? Thank you if you have read this far, and a salute if you still have the energy to answer.

    Read the article

  • How DBAs Can Tune Distributed IBM DB2 Applications

    Many critical business applications now execute in an environment separate from that of the enterprise database server. The database administrator often finds monitoring and performance tuning of these "distributed" applications to be especially difficult. This article looks at common performance issues of distributed applications and presents advice to assist the IBM DB2 database administrator in mitigating performance problems.

    Read the article

  • MSDTC attempts to enlist client machine in a distributed transaction

    - by Ken
    Hi there. We're seeing the following intermittent warning logged by MSDTC: "A caller has attempted to propagate a transaction to a remote system, but MSDTC network DTC access is currently disabled on machine 'X'. Please review the MS DTC configuration settings." However, MSDTC is disabled on machine X by design - it's a client machine, and has no business being enlisted in the transaction! Our environment:
    - Several Windows service endpoints hosting WCF services over TCP
    - A single SQL Server 2005 instance beneath LINQ to SQL
    - A remote client that receives event callbacks over WCF/TCP
    The issue is tricky to reproduce - usually following a restart of services. We suspect a callback to the client machine is occurring within the context of a transaction. Just wondering if anyone has seen similar issues? Ken

    Read the article

  • Ehcache - Distributed RMI not working

    - by Ted
    Hi, I have this strange problem with Ehcache 2.0 that I hope someone can help me with. I have set up a cluster of two hosts, A and B. I can see that heartbeats are received at both ends, so I'm pretty sure the networking and multicast stuff is working. The problem is that if I put an element into the cache at host A, I can see in the logs of host B that it receives a remote put. But when I then request the same element from host B, it runs off to the database and performs a query nonetheless. What may be the cause of this? Thankful for any pointers!

    Read the article

  • running a python script where dependencies are not avail: distributed computing

    - by sadhu_
    Hi, I have access to a grid (running Condor) that would (potentially) allow me to very substantially reduce how long my NLTK-based NLP tasks take. Unfortunately, I don't have root access on the cluster, so I cannot install new packages, only run whatever is already available on the Linux boxes. Python is of course available, but NLTK isn't. I was wondering, however, if there might be a way around this somehow: is there a way I can still distribute the task in a self-contained 'package' of some sort? Thanks for your help.

    Read the article

  • Is it possible to compile a query in linq-to-objects

    - by Luke101
    I have a LINQ to Objects query in a recursive loop, and I'm afraid that when the objects approach more than 1000 and I have more than 100 users on the site, my website will break. So is it possible to compile a LINQ to Objects query? The LINQ query does nothing more than find the direct children of a node.

    Read the article

  • Sphinx search distributed index tuning

    - by Andriy Bohdan
    I'm deciding how to split 3 large Sphinx indexes between 3 servers. Each of the 3 indexes is searched separately. What's more effective: to host each index on a separate machine, for example
    machine1 - index1
    machine2 - index2
    machine3 - index3
    or to split each index into 3 parts and host each part of the same index on a separate machine? For example
    machine1 - index1_chunk1, index2_chunk1, index3_chunk1
    machine2 - index1_chunk2, index2_chunk2, index3_chunk2
    machine3 - index1_chunk3, index2_chunk3, index3_chunk3

    Read the article

  • Distributed Lucene.NET

    - by user72185
    Hi, I have a Terabyte of data, maybe more, which I'd like to index and search with Lucene. I'd like to be able to split the index out to different machines, similar to what Solr does (if I understand Solr correctly). Are there any existing tools to do this on the Windows platform? Thanks!

    Read the article

  • String of KML needs to be converted to java objects

    - by spartikus
    I have a string of KML coming in on a request object. I have used xjc to create the KML Java objects. I am looking for an easy way to create the nested KML Java objects from this string. I could parse the string and create each object in the tree by hand, but wouldn't it be cool if there was a library or something that would create the Java objects for me? Something like KmlType type = parseKML(mykmlStringFromTheRequest); Then type would be a tree of KML objects. Thanks for the help, all.
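
    Since the classes were generated with xjc, JAXB itself can unmarshal the string directly. A minimal sketch, assuming KmlType is the xjc-generated root type (the class and method names here are illustrative, not from the original post):

        import java.io.StringReader;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.JAXBElement;
        import javax.xml.bind.Unmarshaller;
        import javax.xml.transform.stream.StreamSource;

        public class KmlParser {
            public static KmlType parseKML(String kml) throws Exception {
                // Build the context from the generated type (or the generated package name).
                JAXBContext ctx = JAXBContext.newInstance(KmlType.class);
                Unmarshaller unmarshaller = ctx.createUnmarshaller();
                // Passing the declared type avoids relying on an @XmlRootElement annotation,
                // which xjc does not always generate for the root type.
                JAXBElement<KmlType> root = unmarshaller.unmarshal(
                        new StreamSource(new StringReader(kml)), KmlType.class);
                return root.getValue();
            }
        }

    Usage then matches the wished-for call: KmlType type = KmlParser.parseKML(mykmlStringFromTheRequest);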

    Read the article

  • Value objects in DDD - Why immutable?

    - by Hobbes
    I don't get why value objects in DDD should be immutable, nor do I see how this is easily done. (I'm focusing on C# and Entity Framework, if that matters.) For example, let's consider the classic Address value object. If I needed to change "123 Main St" to "123 Main Street", why should I have to construct a whole new object instead of saying myCustomer.Address.AddressLine1 = "123 Main Street"? (Even if Entity Framework supported structs, this would still be a problem, wouldn't it?) I understand (I think) the idea that value objects don't have an identity and are part of a domain object, but can someone explain why immutability is a Good Thing? EDIT: My final question here really should be "Can someone explain why immutability is a Good Thing as applied to Value Objects?" Sorry for the confusion! EDIT: To clarify, I am not asking about CLR value types (vs reference types). I'm asking about the higher-level DDD concept of Value Objects. For example, here is a hack-ish way to implement immutable value types for Entity Framework: http://rogeralsing.com/2009/05/21/entity-framework-4-immutable-value-objects. Basically, he just makes all setters private. Why go through the trouble of doing this?
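
    For reference, this is what the immutability being asked about usually looks like when spelled out as a value object class. The sketch below is Java rather than C#, purely to illustrate the shape (all names are invented): "changing" the street produces a new Address instead of mutating the existing one, and equality is based on the fields rather than on identity.

        // Illustrative immutable value object (not tied to any particular framework).
        public final class Address {
            private final String addressLine1;
            private final String city;

            public Address(String addressLine1, String city) {
                this.addressLine1 = addressLine1;
                this.city = city;
            }

            public String getAddressLine1() { return addressLine1; }
            public String getCity() { return city; }

            // "Modification" returns a new value instead of changing this one.
            public Address withAddressLine1(String newLine1) {
                return new Address(newLine1, city);
            }

            // Value objects compare by their contents, not by reference identity.
            @Override
            public boolean equals(Object other) {
                if (!(other instanceof Address)) return false;
                Address a = (Address) other;
                return addressLine1.equals(a.addressLine1) && city.equals(a.city);
            }

            @Override
            public int hashCode() {
                return 31 * addressLine1.hashCode() + city.hashCode();
            }
        }

    With this shape, the fix in the question becomes customer.setAddress(customer.getAddress().withAddressLine1("123 Main Street")) rather than an in-place assignment.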

    Read the article

  • Distributed Message Ordering

    - by sbanwart
    I have an architectural question on handling message ordering. For the purposes of this question the transport is irrelevant, so I'm not going to specify one. Say we have three systems: a website, a CRM and an ERP. For this example, the ERP will be the "master" system in terms of data ownership. The website and the CRM can both send a new-customer message to the ERP system. The ERP system then adds a customer and publishes the customer with the newly assigned account number, so that the website and CRM can add the account number to their local customer records. This is a pretty straightforward process. Next we move on to placing orders. The account number is required in order for the CRM or website to place an order with the ERP system. However, the CRM will permit the user to place an order even if the customer lacks an account number. (For this example assume we can't modify the CRM behavior.) This creates the possibility that a user could create a new customer and place an order before the account number gets updated in the CRM. What is the best way to handle this scenario? Would it be best to send the order message sans account number and let it go to an error queue? Would it be better to have the CRM endpoint hold the message and wait until the account number is updated in the CRM? Maybe something completely different that I haven't thought of? Thanks in advance for any help.
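
    As a rough sketch of the "hold the message until the account number arrives" option, here is what a handler on the CRM endpoint could look like in Java. It deliberately says nothing about the transport; every type and name below is invented for illustration, and a real implementation would also cap the number of retries:

        import java.util.Optional;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        // Stand-ins for the real integrations.
        interface CustomerDirectory { Optional<String> accountNumberFor(String customerId); }
        interface ErpGateway { void placeOrder(String accountNumber, PlaceOrder order); }

        class PlaceOrder {
            final String customerId;
            PlaceOrder(String customerId) { this.customerId = customerId; }
        }

        class OrderMessageHandler {
            private final CustomerDirectory customers;
            private final ErpGateway erp;
            private final ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();

            OrderMessageHandler(CustomerDirectory customers, ErpGateway erp) {
                this.customers = customers;
                this.erp = erp;
            }

            void handle(PlaceOrder order) {
                Optional<String> account = customers.accountNumberFor(order.customerId);
                if (account.isPresent()) {
                    erp.placeOrder(account.get(), order);
                } else {
                    // No account number yet: park the order and try again once the
                    // "customer published" message has (hopefully) been processed.
                    scheduler.schedule(() -> handle(order), 30, TimeUnit.SECONDS);
                }
            }
        }

    The alternative from the question (sending the order without an account number) would simply skip the else branch and let the receiving side push the message to an error queue.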

    Read the article

  • Best Work Queue service for distributed clusters

    - by onewheelgood
    Hi there. I require a simple work-queue type of system for asynchronous task management. I have looked at both beanstalkd and Gearman. However, both of these seem to assume that the client and the queue server are on the same network, and therefore that there will always be a reliable network between them. I need one that can support the client and server being in different places in the world, and that can manage temporary loss of network connection between clusters. Ideally, this would work in such a way that I post a job to a local proxy, which attempts to send it to the main queue server. If there is no network connection, it would try again later, but it would not lose the job or delay the client. Any recommendations?

    Read the article

  • Creating New Objects in JavaScript

    - by Ken Ray
    I'm a relative newbie to object-oriented programming in JavaScript, and I'm unsure of the "best" way to define and use objects in JavaScript. I've seen the "canonical" way to define objects and instantiate a new instance, as shown below.
        function myObjectType(property1, property2) {
            this.property1 = property1;
            this.property2 = property2;
        }
        // now create a new instance
        var myNewvariable = new myObjectType('value for property1', 'value for property2');
    But I've seen other ways to create new instances of objects in this manner:
        var anotherVariable = new someObjectType({
            property1: "Some value for this named property",
            property2: "This is the value for property 2"
        });
    I like how that second way appears - the code is self-documenting. But my questions are: Which way is "better"? Can I use that second way to instantiate a variable of an object type that has been defined using the "classical" way of defining the object type with that implicit constructor? If I want to create an array of these objects, are there any other considerations? Thanks in advance.

    Read the article

  • JPA in distributed Java EE configuration

    - by sof
    Hello, I'm developing a Java EE application to run on Glassfish:
    - Database (JavaDB, MS SQL, MySQL or Oracle)
    - EJB layer with JPA (TopLink Essentials, from Glassfish) for database access
    - JSF/ICEfaces-based web UI accessing the EJB layer
    The application will have a lot of concurrent web clients, so I want to run it on different physical servers and use a load balancer. My problem is how to keep the applications synchronized. I intend to set up multiple servers, each running Glassfish with my EAR app installed. Whenever data is added to or removed from the database on one of the servers (via JPA, no direct SQL queries), this change should be reflected in the JPA layer on the other servers. I've been looking around for solutions to this, but couldn't find anything I really like (the full TopLink from Oracle claims to have a solution, but I don't know). Doing a refresh before every access to a JPA entity could work, but is far from efficient. Are there any patterns, libraries, ... that could help here? Thanks a lot!
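
    For reference, the "refresh before every access" workaround mentioned in the question looks roughly like this. It is only a minimal sketch: the Customer entity and the lookup class are placeholders, and the point is just to show em.refresh() forcing a re-read so that changes committed by another Glassfish instance become visible:

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;

        public class CustomerLookup {
            private final EntityManagerFactory emf;

            public CustomerLookup(EntityManagerFactory emf) {
                this.emf = emf;
            }

            // Re-reads the row from the database instead of trusting any cached state.
            public Customer findFresh(Long id) {
                EntityManager em = emf.createEntityManager();
                try {
                    Customer c = em.find(Customer.class, id); // may be served from a shared cache
                    if (c != null) {
                        em.refresh(c);                        // hits the database again
                    }
                    return c;
                } finally {
                    em.close();
                }
            }
        }

    As the question notes, paying an extra query on every access is exactly the inefficiency a coordinated or distributed second-level cache is meant to avoid.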

    Read the article

  • hibernate distributed 2nd level cache options

    - by ishmeister
    Not really a question, but I'm looking for comments/suggestions from anyone who has experience using one or more of the following:
    - EhCache with RMI
    - EhCache with JGroups
    - EhCache with Terracotta
    - GigaSpaces Data Grid
    A bit of background: our application is read-only for the most part, but there is some user data that is read-write and some that is only written (and can also be reasonably inaccurate). In addition, it would be nice to have tools that enable us to flush and fill the cache at intervals or by admin intervention. Regarding the first option: are there any concerns about the overhead of RMI and the performance of Java serialization?

    Read the article

  • Understanding omission failure in distributed systems

    - by karthik A
    The following text says something I can't quite agree with: client C sends a request R to server S. The time taken by a communication link to transport R over the link is D. P is the maximum time needed by S to receive, process and reply to R. If omission failure is assumed, then if no reply to R is received within 2(D+P), C will never receive a reply to R. Why is the time here 2(D+P)? As I understand it, shouldn't it be 2D+P?

    Read the article

  • High performance distributed asynchronous RPC in java

    - by unludo
    I would like to do RPC to a list of clients, with the following requirements:
    - the server does not know the clients (implies a kind of broker?) and the clients do not know the server
    - there may be several clients; they share the load of handling the RPCs
    - the RPC is asynchronous
    - very fast (round-trip < 1ms)
    - optional: offers a fail-over mechanism
    It can be done with underlying tools which are not really intended for that (Hazelcast is an example). What would you use for such requirements? Thanks!
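
    To make the Hazelcast remark concrete, here is a minimal sketch of the broker-style decoupling using a distributed queue (Hazelcast 3.x-style imports; the queue name and request type are invented, and the latency and fail-over requirements are not addressed here):

        import com.hazelcast.core.Hazelcast;
        import com.hazelcast.core.HazelcastInstance;
        import com.hazelcast.core.IQueue;

        import java.io.Serializable;

        public class QueueRpcSketch {

            // The payload must be serializable so it can travel across the cluster.
            public static class Request implements Serializable {
                public final String payload;
                public Request(String payload) { this.payload = payload; }
            }

            public static void main(String[] args) {
                HazelcastInstance hz = Hazelcast.newHazelcastInstance();
                IQueue<Request> requests = hz.getQueue("rpc-requests");

                // Worker side: several of these can run on different machines and
                // share the load; they never learn who the caller is.
                new Thread(() -> {
                    while (true) {
                        try {
                            Request r = requests.take();   // blocks until a request arrives
                            System.out.println("handling " + r.payload);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                }).start();

                // Caller side: fire-and-forget; asynchronous from the caller's point of view.
                requests.offer(new Request("do-something"));
            }
        }

    Whether this can meet a sub-millisecond round trip depends entirely on the cluster and network, which is exactly why the question describes Hazelcast as a tool "not really intended for that".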

    Read the article

  • Rails: Serializing objects in a database?

    - by keruilin
    I'm looking for some general guidance on serializing objects in a database. What are serialized objects? What are some best-practice scenarios for serializing objects in a DB? What attributes do you use when creating the column in the DB so you can store a serialized object? How do you save a serialized object? And how do you access the serialized object and its attributes? (Using hashes?)

    Read the article
