Search Results

Search found 13526 results on 542 pages for 'distributed objects'.

Page 6 of 542

  • Generic DRM (Distributed resource management) wrapper

    - by Pavel Bernshtam
    I need to write software that launches DRM jobs in a customer environment and monitors the status of those jobs. It has to work with various customer environments and DRMs - like LSF, Sun Grid and others. Can you recommend a third-party library that hides the DRM differences from me and has an API like "launch job", "get list of jobs", "get job status", etc.? Both Java and native libraries are fine for me.

  • Versioning millions of files with distributed SCM

    - by C. Lawrence Wenham
    I'm looking into the feasibility of using off-the-shelf distributed SCMs such as Git or Mercurial to manage millions of XML files. Each file would be a commercial transaction, such as a purchase order, that would be updated perhaps 10 times during the lifecycle of the transaction until it is "done" and changes no more. And by "manage", I mean that the SCM would be used not just to version the files, but also to replicate them to other machines for redundancy and transfer of IP. Let's suppose, for the sake of example, that a goal is to provide good performance if it were handling the volume of orders that Amazon.com claimed to have at its peak in December 2010: about 150,000 orders per minute. We're expecting the system to be distributed over many servers in order to get reasonable performance. We're also planning to use solid-state drives exclusively. There is a reason why we don't want to use an RDBMS for primary storage, but it's a bit beyond the scope of this question. Does anyone have first-hand experience with the performance of distributed SCMs under such a load, and what strategies were used? Open-source preferred, since the final product is to be FOSS, too.

  • Standard Network Tiers in a Distributed N-Tier System

    Distributed N-Tier client/server architecture allows segments of an application to be broken up and distributed across multiple locations on a network. Listed below are the standard tiers in a Distributed N-Tier System.

    End-User Client Tier
    The End-User Client is responsible for sending requests to web servers and other application servers, receiving their responses, and translating those responses so that the end user can interpret the data effectively. The primary roles for this tier are to communicate with servers and to translate server responses back for the end user.
    Business-specific functions: Validate Data; Display Data; Send Data to Web Server

    Web Server Tier
    The web server tier processes new requests for information coming in on the HTTP and HTTPS ports. It primarily handles the generation of user interfaces and calls the application server when it needs to access data and business logic.
    Business-specific functions: Send Data to Application Server; Format Data for Display; Validate Data

    Application Server Tier
    The application server stores and executes predefined business logic that is applied to various pieces of data as the business determines. The processed data is then returned to the web server. Additionally, this server directly calls the database to obtain and store any data used by the system.
    Business-specific functions: Validate Data; Process Data; Send Data to Database Server

    Database Server Tier
    The database server is responsible for storing and returning all data needed by the calling applications. The primary role for this server is storage. Data is stored as needed and can be recalled at any point later in time.
    Business-specific functions: Insert Data; Delete Data; Return Data to Application Server

  • Cutting objects and applying texture to cut. Unity3d/C#

    - by Timothy Williams
    Basically what I'm trying to do is figure out how to calculate real-time cutting of objects, and apply a texture to the cut. I found some good scripts, but most of them have been abandoned and aren't really fully working yet. Applying textures: http://forum.unity3d.com/threads/75949-Mesh-Real-Cutting?highlight=mesh+real+cutting Cutting: http://forum.unity3d.com/threads/78594-Object-Cutter Another (free) cutter (also, I'm not entirely sure how this one will handle cutting complex meshes): http://forum.unity3d.com/threads/69992-fake-slicer?p=449114&viewfull=1#post449114 My plan as of right now is to combine links 1 & 2 or 1 & 3 programming-wise. What I'm asking for here is any advice on how to advance (links to asset store packages, or other code that shows how to accomplish something complex like this).
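
    Whichever of those cutters ends up being combined, the first step is always the same: deciding, per triangle, which side of the cut plane each triangle lies on. Below is a minimal Unity C# sketch of just that classification step, using only standard UnityEngine API; it assumes the cutting plane has already been transformed into the mesh's local space. Triangles that land in the straddling list are the ones that need new vertices along the plane plus the cap geometry that receives the cut texture.

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;

    public static class MeshSplitter
    {
        // Classifies each triangle of a mesh against a cutting plane given in the
        // mesh's local space. The indices stored in the lists are the offsets of
        // each triangle's first entry in mesh.triangles.
        public static void Classify(Mesh mesh, Plane cuttingPlane,
                                    List<int> above, List<int> below, List<int> straddling)
        {
            Vector3[] vertices = mesh.vertices;
            int[] triangles = mesh.triangles;

            for (int i = 0; i < triangles.Length; i += 3)
            {
                bool a = cuttingPlane.GetSide(vertices[triangles[i]]);
                bool b = cuttingPlane.GetSide(vertices[triangles[i + 1]]);
                bool c = cuttingPlane.GetSide(vertices[triangles[i + 2]]);

                if (a && b && c)
                    above.Add(i);
                else if (!a && !b && !c)
                    below.Add(i);
                else
                    straddling.Add(i); // needs to be split along the plane and capped
            }
        }
    }
    ```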

  • Are there any viable DNS or LDAP alternatives for distributed key/value storage and retrieval?

    - by makerofthings7
    I'm working on a software app that needs distributed, decentralized name resolution, and isn't bound to TCP/IP. Or, more precisely, I need to store a "key" and look up its value, and the key may be a string, a number, or any other realistic data type. Examples: With a phone number, look up a name (or with an area code, redirect to the server that handles that exchange). With an IP address, get a DNS name, or a Whois contact (string value). With a string, get an IP (like a DNS TXT or SRV record). I'm thinking out of the box here and looking for any software that allows for this (more info below). Are there any secure, scalable DNS alternatives that have gained notoriety? I could ask on StackOverflow, but I think the infrastructure groups would have better insight on this. Edit - More info: I'm looking at "Namecoin", the DNS version of Bitcoin, and since that project is faltering, I'm looking at alternative ways to store name-value pairs, with an optional qualifier. I think a name-value pair of global interest is useful, but on a limited scale. Namecoin tried to be too much, and ended up becoming nothing. I'm trying to solve that problem by researching alternatives and applying distributed technologies where applicable. Bitcoin/Namecoin offers a Distributed Hash Table, which has some positive aspects, but is not useful for DNS, except for root servers.

  • Retaining Managed objects - more general retaining objects

    - by Luuk D. Jansen
    A quick question regarding managed objects. I created an array with managed objects (in Object 1: TableViewController), and pass one of those objects to another class/object (Object 2: TableCell). The original array should still be retained in the original caller class. When Object 2 is released, does that mean that that particular item in the array is released as well, as the reference to it in Object 2 was released? I am trying to better understand how to work with managed objects as I get 'Object was released' errors. [EDIT] After some experimenting I came across the following scenario: I have the main AppDelegate. In a different class I get a reference to the AppDelegate to obtain the ManagedObjectContext. appDelegate = (iDomsAppDelegate *)[[UIApplication sharedApplication] delegate]; [self setContext:[appDelegate managedObjectContext]]; When the class is finished, and I release it, the variable in the class 'appDelegate' is also released. But then the ManagedObjectContext is closed, and obviously any future attempt to use it will cause a crash. So should I leave the appDelegate unreleased? This comes down to the same question as the above about when and how to release in those situations where an object is used from another class. I think a way of putting it is: how do you know when you own an object and when you don't?

  • Should these concerns be separated into separate objects?

    - by Lewis Bassett
    I have objects which implement the interface BroadcastInterface, which represents a message that is to be broadcast to all users of a particular group. It has a setter and getter method for the Subject and Body properties, and an addRecipientRole() method, which takes a given role and finds the contact token (e.g., an email address) for each user in the role and stores it. It then has a getContactTokens() method. BroadcastInterface objects are passed to an object that implements BroadcasterInterface. These objects are responsible for broadcasting a passed BroadcastInterface object. For example, an EmailBroadcaster implementation of the BroadcasterInterface will take EmailBroadcast objects and use the mailer services to email them out. Now, depending on what BroadcasterInterface implementation is used to broadcast, a different implementation of BroadcastInterface is used by client code. The Single Responsibility Principle seems to suggest that I should have a separate BroadcastFactory object, for creating BroadcastInterface objects, depending on what BroadcasterInterface implementation is used, as creating the BroadcastInterface object is a different responsibility to broadcasting them. But the class used for creating BroadcastInterface objects depends on what implementation of BroadcasterInterface is used to broadcast them. I think, because the knowledge of what method is used to send the broadcasts should only be configured once, the BroadcasterInterface object should be responsible for providing new BroadcastInterface objects. Does the responsibility of “creating and broadcasting objects that implement the BroadcastInterface interface” violate the Single Responsibility Principle? (Because the contact token for sending the broadcast out to the users will differ depending on the way it is broadcasted, I need different broadcast classes—though client code will not be able to tell the difference.)
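
    A minimal sketch of the arrangement the question leans toward, written in C# since the original language isn't stated (all type names here are illustrative): the broadcaster doubles as the factory for its matching broadcast type, so its single responsibility reads as "everything that depends on the chosen transport" rather than "exactly one public method".

    ```csharp
    using System.Collections.Generic;

    // IMailer stands in for whatever mail service the real system uses.
    public interface IMailer { void Send(string address, string subject, string body); }

    public interface IBroadcast
    {
        string Subject { get; set; }
        string Body { get; set; }
        void AddRecipientRole(string role);        // resolves the role to contact tokens
        IEnumerable<string> GetContactTokens();
    }

    public interface IBroadcaster
    {
        IBroadcast CreateBroadcast();              // creation: this transport knows which token type it needs
        void Broadcast(IBroadcast broadcast);      // sending: this transport knows how to use those tokens
    }

    public sealed class EmailBroadcast : IBroadcast
    {
        private readonly List<string> tokens = new List<string>();
        public string Subject { get; set; }
        public string Body { get; set; }
        public void AddRecipientRole(string role) { /* look up email addresses for every user in the role */ }
        public IEnumerable<string> GetContactTokens() { return tokens; }
    }

    public sealed class EmailBroadcaster : IBroadcaster
    {
        private readonly IMailer mailer;
        public EmailBroadcaster(IMailer mailer) { this.mailer = mailer; }

        public IBroadcast CreateBroadcast() { return new EmailBroadcast(); }

        public void Broadcast(IBroadcast broadcast)
        {
            foreach (var address in broadcast.GetContactTokens())
                mailer.Send(address, broadcast.Subject, broadcast.Body);
        }
    }
    ```

    Client code only ever sees IBroadcaster and IBroadcast, so which concrete pair is in play stays a single decision made once, in configuration.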

  • Split NSData objects into other NSData objects with a given size

    - by Cedric Vandendriessche
    I have an NSData object that is approximately 1000 kB. Now I want to transfer this via Bluetooth. This would work better if I had, let's say, 10 objects of 100 kB each. It comes to mind that I should use the -subdataWithRange: method of NSData. I haven't really worked with NSRange. Well, I know how it works, but I have no idea how to read from a given location with the length 'to end of file'. Some code on how to split this into multiple 100 kB NSData objects would really help me out here. (It probably involves the length method to see how many objects should be made..?) Thank you in advance.

  • Creating C++ objects

    - by Phenom
    I noticed that there are two ways to create C++ objects: BTree *btree = new BTree; and BTree btree; From what I can tell, the only difference is in how class members are accessed (. vs. -> operator), and that when the first way is used, private integers get initialized to 0. Which way is better, and what's the difference? How do you know when to use one or the other?

  • How to count JavaScript array objects?

    - by Nikita Sumeiko
    When I have a JavaScript array like this: var member = { "mother": { "name" : "Mary", "age" : "48" }, "father": { "name" : "Bill", "age" : "50" }, "brother": { "name" : "Alex", "age" : "28" } } How do I count the objects in this array?! I mean, how do I get a count of 3, because there are only 3 objects inside: mother, father, brother?!

  • How are value objects saved and loaded?

    - by yeraycaballero
    Since there aren't repositories for value objects, how can I load all value objects? Suppose we are modeling a blog application and we have these classes: Post (Entity), Comment (Value object), Tag (Value object), PostsRepository (Repository). I know that when I save a new post, its tags are saved with it in the same table. But how could I load all tags of all posts? Has PostsRepository got a method to load all tags? I usually do it that way, but I want to know others' opinions.
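
    A hedged C# sketch of the usual answer (the question names neither a language nor an ORM, so everything here is illustrative): value objects get no repository of their own, and "all tags" becomes a projection exposed by the posts repository over whatever queryable root the ORM provides.

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    // Post and Tag are assumed to follow the question's model: a Post aggregate
    // owning a collection of Tag value objects.
    public class PostsRepository
    {
        private readonly IQueryable<Post> posts;   // however your ORM exposes the Post aggregate root

        public PostsRepository(IQueryable<Post> posts)
        {
            this.posts = posts;
        }

        public IList<Tag> GetAllTags()
        {
            // Distinct() relies on Tag implementing value equality, as value objects should.
            return posts.SelectMany(p => p.Tags).Distinct().ToList();
        }
    }
    ```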

  • Suggestions on how to map from Domain (ORM) objects to Data Transfer Objects (DTO)

    - by FryHard
    The current system that I am working on makes use of Castle ActiveRecord to provide ORM (Object Relational Mapping) between the domain objects and the database. This is all well and good and at most times actually works well! The problem comes about with Castle ActiveRecord's support for asynchronous execution, well, more specifically the SessionScope that manages the session that objects belong to. Long story short, bad stuff happens! We are therefore looking for a way to easily convert (think automagically) from the domain objects (which know that a DB exists and care) to the DTO objects (which know nothing about the DB and care not for sessions, mapping attributes or all things ORM). Does anyone have suggestions on doing this? For a start I am looking for a basic one-to-one mapping of objects. Domain object Person will be mapped to, say, PersonDTO. I do not want to do this manually since it is a waste. Obviously reflection comes to mind, but I am hoping, with some of the better IT knowledge floating around this site, that something "cooler" will be suggested. Oh, I am working in C#; the ORM objects, as said before, are mapped with Castle ActiveRecord. Example code: By @ajmastrean's request I have linked to an example that I have (badly) mocked together. The example has a capture form, capture form controller, domain objects, ActiveRecord repository and an async helper. It is slightly big (3MB) because I included the ActiveRecord DLLs needed to get it running. You will need to create a database called ActiveRecordAsync on your local machine or just change the .config file. Basic details of the example: The Capture Form The capture form has a reference to the controller private CompanyCaptureController MyController { get; set; } On initialisation of the form it calls MyController.Load() private void InitForm () { MyController = new CompanyCaptureController(this); MyController.Load(); } This will return back to a method called LoadCompleted() public void LoadCompleted (Company loadCompany) { _context.Post(delegate { CurrentItem = loadCompany; bindingSource.DataSource = CurrentItem; bindingSource.ResetCurrentItem(); //TODO: This line will throw the exception since the session scope used to fetch loadCompany is now gone. grdEmployees.DataSource = loadCompany.Employees; }, null); } } This is where the "bad stuff" occurs, since we are using the child list of Company that is set as lazy load. The Controller The controller has a Load method that was called from the form; it then calls the async helper to asynchronously call the LoadCompany method and then return to the capture form's LoadCompleted method. public void Load () { new AsyncListLoad<Company>().BeginLoad(LoadCompany, Form.LoadCompleted); } The LoadCompany() method simply makes use of the repository to find a known company. public Company LoadCompany() { return ActiveRecordRepository<Company>.Find(Setup.company.Identifier); } The rest of the example is rather generic; it has two domain classes which inherit from a base class, a setup file to insert some data and the repository to provide the ActiveRecordMediator abilities.
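
    Since the question explicitly asks for a basic one-to-one mapping, here is a minimal reflection-based sketch of the idea (AutoMapper is the usual off-the-shelf choice for the same job). It assumes property names and types line up between the domain class and the DTO, and that anything lazy-loaded is touched while the ActiveRecord session is still open.

    ```csharp
    using System.Reflection;

    public static class DtoMapper
    {
        // Copies every readable public property of the source onto a writable
        // property with the same name and a compatible type on a new TDto.
        public static TDto Map<TSource, TDto>(TSource source) where TDto : new()
        {
            var dto = new TDto();
            foreach (PropertyInfo sourceProp in typeof(TSource).GetProperties(BindingFlags.Public | BindingFlags.Instance))
            {
                if (!sourceProp.CanRead) continue;

                PropertyInfo targetProp = typeof(TDto).GetProperty(sourceProp.Name);
                if (targetProp == null || !targetProp.CanWrite) continue;
                if (!targetProp.PropertyType.IsAssignableFrom(sourceProp.PropertyType)) continue;

                targetProp.SetValue(dto, sourceProp.GetValue(source, null), null);
            }
            return dto;
        }
    }

    // Usage, assuming Person and PersonDTO share property names:
    // PersonDTO dto = DtoMapper.Map<Person, PersonDTO>(person);
    ```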

  • Out of memory when creating a lot of objects C#

    - by Bas
    I'm processing 1 million records in my application, which I retrieve from a MySQL database. To do so I'm using Linq to get the records and use .Skip() and .Take() to process 250 records at a time. For each retrieved record I need to create 0 to 4 Items, which I then add to the database. So the average amount of total Items that has to be created is around 2 million. while (objects.Count != 0) { using (dataContext = new LinqToSqlContext(new DataContext())) { foreach (Object objectRecord in objects) { // Create a list of 0 - 4 Random Items and add each Item to the Object for (int i = 0; i < Random.Next(0, 4); i++) { Item item = new Item(); item.Id = Guid.NewGuid(); item.Object = objectRecord.Id; item.Created = DateTime.Now; item.Changed = DateTime.Now; dataContext.InsertOnSubmit(item); } } dataContext.SubmitChanges(); } amountToSkip += 250; objects = objectCollection.Skip(amountToSkip).Take(250).ToList(); } Now the problem arises when creating the Items. When running the application (and not even using dataContext) the memory increases consistently. It's like the items are never getting disposed. Does anyone notice what I'm doing wrong? Thanks in advance!
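
    A hedged sketch of one way to keep memory flat: make both the read side and the write side short-lived, and turn off change tracking on the read side so LINQ to SQL does not keep every fetched entity in its identity map. SourceObject stands in for the question's Object entity, the custom LinqToSqlContext wrapper is replaced by a plain DataContext here, and whether this removes the growth depends on what else still holds references to the fetched records.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Data.Linq;
    using System.Linq;

    public static class ItemGenerator
    {
        private static readonly Random Random = new Random();

        // SourceObject and Item are assumed to be mapped entities matching the question's model.
        public static void Generate(string connectionString)
        {
            const int batchSize = 250;
            int skipped = 0;
            List<SourceObject> batch;

            do
            {
                // Read side: no tracking, and the context is disposed right after the fetch.
                using (var readContext = new DataContext(connectionString) { ObjectTrackingEnabled = false })
                {
                    batch = readContext.GetTable<SourceObject>()
                                       .OrderBy(o => o.Id)      // stable ordering so paging is deterministic
                                       .Skip(skipped)
                                       .Take(batchSize)
                                       .ToList();
                }

                // Write side: a fresh context per batch, released together with its tracked inserts.
                using (var writeContext = new DataContext(connectionString))
                {
                    foreach (SourceObject record in batch)
                    {
                        int count = Random.Next(0, 5);           // upper bound is exclusive, so this yields 0-4
                        for (int i = 0; i < count; i++)
                        {
                            writeContext.GetTable<Item>().InsertOnSubmit(new Item
                            {
                                Id = Guid.NewGuid(),
                                Object = record.Id,
                                Created = DateTime.Now,
                                Changed = DateTime.Now
                            });
                        }
                    }
                    writeContext.SubmitChanges();
                }

                skipped += batchSize;
            } while (batch.Count > 0);
        }
    }
    ```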

  • Properly clean up excel interop objects revisited: Wrapper objects

    - by chiccodoro
    Hi all, there are several related questions already: "Excel 2007 Hangs When Closing via .NET", "How to properly clean up Excel interop objects in C#", "How to properly clean up interop objects in C#". All of these struggle with the problem that C# does not release the Excel COM objects properly after using them. There are mainly two directions of working around this issue: 1. Kill the Excel process when Excel is not used anymore. 2. Take care to assign each COM object used explicitly to a variable and to Marshal.ReleaseComObject all of these. Some have stated that 2 is too tedious and there is always some uncertainty whether you have forgotten to stick to this rule at some places in the code. Still, 1 seems dirty and dangerous to me; also I could imagine that in an environment with restricted access killing processes is not allowed. So I've been thinking about solving 2 by creating another proxy object model which mimics the Excel object model (for me, it would suffice to implement the objects I actually need). The principle would look as follows: Each Excel Interop class has its proxy which wraps an object of that class. The proxy releases the COM object in its destructor. The proxy mimics the interface of the Interop class (maybe by inheriting it). Any methods that usually return another COM object return a proxy instead. The other methods simply delegate the implementation to the inner COM object. This is a rough sketch of the code: public class Application : Microsoft.Office.Interop.Excel.Application { private Microsoft.Office.Interop.Excel.Application innerApplication = new Microsoft.Office.Interop.Excel.Application(); ~Application() { Marshal.ReleaseComObject(innerApplication); } public Workbooks Workbooks { get { return new Workbooks(innerApplication.Workbooks); } } } public class Workbooks { private Microsoft.Office.Interop.Excel.Workbooks innerWorkbooks; Workbooks(Microsoft.Office.Interop.Excel.Workbooks innerWorkbooks) { this.innerWorkbooks = innerWorkbooks; } ~Workbooks() { Marshal.ReleaseComObject(innerWorkbooks); } } My questions to you are in particular: Who finds this a bad idea, and why? Who finds this a great idea? If so, why hasn't anybody implemented/published such a model yet? Just due to the effort, or am I missing a killer problem with that idea? Is it impossible/bad/dangerous to do the ReleaseComObject in the destructor? (I've only seen proposals to put it in a Dispose() rather than in a destructor - why?) If the approach makes sense, any suggestions to improve it?
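
    On the last question: a finalizer is the risky place for ReleaseComObject, because it runs on the GC's finalizer thread at an unpredictable time and in no guaranteed order relative to the finalizers of child proxies, so a parent can release its RCW while a child still needs it. A deterministic variant of the same proxy idea, sketched with IDisposable (the type and member names are illustrative, not a published library):

    ```csharp
    using System;
    using System.Runtime.InteropServices;

    // Wraps a single COM RCW and releases it exactly once, deterministically.
    public sealed class ComProxy<T> : IDisposable where T : class
    {
        private T rcw;

        public ComProxy(T comObject) { rcw = comObject; }

        public T Value
        {
            get
            {
                if (rcw == null) throw new ObjectDisposedException(typeof(T).Name);
                return rcw;
            }
        }

        // Child accessors return proxies too, so every intermediate RCW has exactly one owner.
        public ComProxy<TChild> Wrap<TChild>(Func<T, TChild> selector) where TChild : class
        {
            return new ComProxy<TChild>(selector(Value));
        }

        public void Dispose()
        {
            if (rcw != null)
            {
                Marshal.ReleaseComObject(rcw);
                rcw = null;
            }
        }
    }

    // Usage sketch:
    // using (var app = new ComProxy<Microsoft.Office.Interop.Excel.Application>(
    //                      new Microsoft.Office.Interop.Excel.Application()))
    // using (var books = app.Wrap(a => a.Workbooks))
    // {
    //     // work with books.Value ...
    // } // disposed innermost first, so children are released before their parent
    ```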

  • SQLAuthority News – Mark the Date: October 16, 2013 – Introducing NuoDB Blackbirds: THE Distributed Database

    - by Pinal Dave
    I am very excited to announce first on this blog the release of NuoDB Blackbirds (NuoDB Release 2.0). NuoDB is my favorite application for working with data nowadays. They are increasingly gaining market share as well as bringing out new features with every new release. I was very excited when I learned that NuoDB is releasing their flagship release of 2.0 on October 16, 2013. Interestingly enough, I will be in the USA while this release happens and I will be watching it live during my day time. Even if I had to stay up the entire night just to watch this release, I would do it. Here are the details of the announcement: Introducing NuoDB Blackbirds: THE Distributed Database Date: October 16, 2013 Time: 1:00 PM EDT Location: Online Registration Link What is the best DBMS architecture to handle today’s and tomorrow’s evolving needs? The days of shared disk are over. The times are “a-changin” and IT infrastructure has to change with them. Join NuoDB live for the introduction of our latest major product release, NuoDB Blackbirds, and take a look at why the NuoDB distributed database architecture is the only answer for customers like Fathom Voice, a leading provider of Voice Over IP (VoIP). NuoDB CEO, Barry Morris, welcomes Cameron Weeks, CEO of Fathom Voice to discuss how his company is using DBMS to break away from the pack and become the hottest player in VoIP. The webcast will include demonstrations of a single, logical database running in multiple geographies and a live Q&A. If for any reason you cannot watch it live, do not worry at all; just register at this Registration Link, and after the event you will get the link to watch it on demand. You can watch the launch event at any time if you have registered for the launch. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: NuoDB

  • Test Data in a Distributed System

    - by Davin Tryon
    A question that has been vexing me lately has been about how to effectively test (end-to-end) features in a distributed system. Particularly, how to effectively manage (through time) test data for feature testing. The system in question is a typical SOA setup. The composition is done in JavaScript with calls to several REST APIs. Each service is built as an independent block. Each service has some kind of persistent storage (SQL Server in most cases). The main issue at the moment is how to approach test data when testing end-to-end features. Functional end-to-end testing occurs through the UI, and it is therefore necessary for test data to be set up before the test run (this could be manual or automated testing). As is typical in a distributed system, identifiers from one service are used as a link in another service. So, some level of synchronization needs to be present in the data to effectively test. What is the best way to manage and set up this data after a successful deployment to a test environment? For example, is it better to manage this test data inside each service? Or package it together with the testing suite? Does that testing suite exist as a separate project? I'm interested in design guidance about how to store and manage this test data as the application features evolve.

  • Distributed Rendering in the UDK and Unity

    - by N0xus
    At the moment I'm looking at getting a game engine to run in a CAVE environment. So far, during my research I've seen a lot of people being able to get both Unity and the Unreal engine up and running in a CAVE (someone did get CryEngine to work in one, but there is little research data about it). As of yet, I have not cemented my final choice of engine for use in the next stage of my project. I've experience in both, so the learning curve will be gentle on both. And both of the engines offer stereoscopic rendering, either already inbuilt with ReadD (Unreal) or by doing it yourself (Unity). Both can also make use of other input devices as well, such as the kinect or other devices. So again, both engines are still on the table. For the last bit of my preliminary research, I was advised to see if either, or both engines could do distributed rendering. I was advised this, as the final game we make could go into a variety of differently sized CAVEs. The one I have access to is roughly 2.4m x 3m cubed, and have been duly informed that this one is a "baby" compared to others. So, finally onto my question: Can either the Unreal Engine, or Unity Engine make it possible for developers to allow distributed rendering? Either through in built devices, or by creating my own plugin / script?

  • Accessing Members of Containing Objects from Contained Objects.

    - by Bunkai.Satori
    If I have several levels of object containment (one object defines and instantiates another object, which defines and instantiates another object...), is it possible to get access to upper, containing-object variables and functions, please? Example: class CObjectOne { public: CObjectOne::CObjectOne() { Create(); }; void Create(); std::vector<CObjectTwo> vObjectsTwo; int nVariableOne; } bool CObjectOne::Create() { CObjectTwo ObjectTwo(this); vObjectsTwo.push_back(ObjectTwo); } class CObjectTwo { public: CObjectTwo::CObjectTwo(CObjectOne* pObject) { pObjectOne = pObject; Create(); }; void Create(); CObjectOne* GetObjectOne(){return pObjectOne;}; std::vector<CObjectThree> vObjectsThree; CObjectOne* pObjectOne; int nVariableTwo; } bool CObjectTwo::Create() { CObjectThree ObjectThree(this); vObjectsThree.push_back(ObjectThree); } class CObjectThree { public: CObjectThree::CObjectThree(CObjectTwo* pObject) { pObjectTwo = pObject; Create(); }; void Create(); CObjectTwo* GetObjectTwo(){return pObjectTwo;}; std::vector<CObjectFour> vObjectsFour; CObjectTwo* pObjectTwo; int nVariableThree; } bool CObjectThree::Create() { CObjectFour ObjectFour(this); vObjectsFour.push_back(ObjectFour); } main() { CObjectOne myObject1; } Say that from within CObjectThree I need to access nVariableOne in CObjectOne. I would like to do it as follows: int nValue = vObjectThree[index].GetObjectTwo()->GetObjectOne()->nVariable1; However, after compiling and running my application, I get a memory access violation error. What is wrong with the code above (it is an example, and might contain spelling mistakes)? Do I have to create the objects dynamically instead of statically? Is there any other way to access variables stored in containing objects from within contained objects?

  • Debugging XSLT with extension objects in Visual Studio 2010

    - by Alex Ciminian
    I'm currently working on a project that involves a lot of XSLT transformations and I really need a debugger (I have XSLTs that are 1000+ lines long and I didn't write them :-). The project is written in C# and makes use of extension objects: xslArg.AddExtensionObject("urn:<obj>", new <Obj>()); From my knowledge, in this situation Visual Studio is the only tool that can help me debug the transformations step by step. The static debugger is no use because of the extension objects (it throws an error when it reaches elements that reference their namespace). Fortunately, I've found this thread which gave me a starting point (at least I know it can be done). After searching MSDN, I found the criteria that make stepping into the transform possible. They are listed here. In short: the XML and the XSLT must be loaded via a class that implements the IXmlLineInfo interface (XmlReader & co.); the XML resolver used in the XslCompiledTransform constructor is file-based (XmlUrlResolver should work); the stylesheet should be on the local machine or on the intranet (?). From what I can tell, I fit all these criteria, but it still doesn't work. The relevant code samples are posted below: // [...] xslTransform = new XslCompiledTransform(true); xslTransform.Load(XmlReader.Create(new StringReader(contents)), null, new BaseUriXmlResolver(xslLocalPath)); // [...] // I already had the xml loaded in an xmlDocument // so I have to convert to an XmlReader XmlTextReader r = new XmlTextReader(new StringReader(xmlDoc.OuterXml)); XsltArgumentList xslArg = new XsltArgumentList(); xslArg.AddExtensionObject("urn:[...]", new [...]()); xslTransform.Transform(r, xslArg, context.Response.Output); I really don't get what I'm doing wrong. I've checked the interfaces on both XmlReader objects and they implement the required one. Also, BaseUriXmlResolver inherits from XmlUrlResolver and the stylesheet is stored locally. The screenshots (not reproduced here) show what I get when stepping into the Transform function: first I can see the stylesheet code, but after stepping through the parameters (on template-match) it no longer works as expected. If anyone has any idea why it doesn't work or has an alternative way of getting it to work I'd be much obliged :). Thanks, Alex
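
    Given the criteria above, the detail most likely to break stepping in the posted code is the input XML: a reader built over a StringReader from xmlDoc.OuterXml has neither a base URI nor useful line information. A hedged sketch that loads both documents straight from local files (MyExtension and the urn are placeholders for the real extension object and namespace):

    ```csharp
    using System.IO;
    using System.Xml;
    using System.Xml.Xsl;

    public class MyExtension { /* stands in for the real extension object */ }

    public static class XsltDebugRunner
    {
        public static void Run(string xslPath, string xmlPath, TextWriter output)
        {
            // enableDebug: true is what lets Visual Studio step into the stylesheet.
            var xslt = new XslCompiledTransform(true);

            // Loading straight from files gives the debugger real URIs and line info.
            using (XmlReader stylesheet = XmlReader.Create(xslPath))
            {
                xslt.Load(stylesheet, XsltSettings.TrustedXslt, new XmlUrlResolver());
            }

            var args = new XsltArgumentList();
            args.AddExtensionObject("urn:my-extension", new MyExtension());

            using (XmlReader input = XmlReader.Create(xmlPath))
            {
                xslt.Transform(input, args, output);
            }
        }
    }
    ```

    If the input has to stay in memory, writing it to a temporary file just for the debugging session is an ugly but workable fallback.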

  • Communication Between Different Technologies in a Distributed Application

    - by sjtaheri
    I have to incorporate several legacy applications and services into a network-distributed application. The existing services and applications are written using different languages and technologies, including Java, C#.NET and C++, all running on MS Windows machines. Now I'm wondering about the communication mechanism between them. What is the simple and standard way? Thanks! PS: the communication includes simple message sending and remote method invocations.
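
    For the "simple and standard" end of the spectrum, plain HTTP with a text payload (JSON or XML) is the lowest common denominator that Java, C# and C++ all speak without sharing any runtime; message queues (MSMQ, RabbitMQ and the like) cover the asynchronous messaging side, and SOAP/WCF web services are the conventional choice for RPC-style calls. A hedged sketch of the C# side exposing such an endpoint with nothing but the base class library (the URL and payload are illustrative):

    ```csharp
    using System.IO;
    using System.Net;
    using System.Text;

    public static class LegacyServiceHost
    {
        // Listens on one URL and answers every request with a small JSON document.
        // A Java or C++ client only needs an HTTP library to call this.
        public static void Run()
        {
            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8080/status/");   // illustrative endpoint
            listener.Start();

            while (true)
            {
                HttpListenerContext context = listener.GetContext();  // blocks until a request arrives

                string body;
                using (var reader = new StreamReader(context.Request.InputStream, Encoding.UTF8))
                {
                    body = reader.ReadToEnd();                         // whatever the caller sent
                }

                byte[] reply = Encoding.UTF8.GetBytes("{\"received\":" + body.Length + "}");
                context.Response.ContentType = "application/json";
                context.Response.OutputStream.Write(reply, 0, reply.Length);
                context.Response.Close();
            }
        }
    }
    ```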

  • Converting formCollection array to objects in the controller

    - by bergin
    In my view I have several [n].propertyName array fields. I want to turn the FormCollection fields into objects (myobject[n].propertyName) when they reach the controller. So, for example, the context: View: foreach (var item in Model.SSSubjobs.AsEnumerable()) <%: Html.Hidden("["+c+"].sssj_id", item.sssj_id ) %> <%: Html.Hidden("["+c+"].order_id", item.order_id ) %> <%: Html.TextBox("["+c+"].farm", item.farm ) %> <%: Html.TextBox("["+c+"].field", item.field ) %> c++; Controller: I want to take the above [0].sssj_id and turn it into sssj[0].sssj_id, or a list of sssj objects. My first idea was to look in the form collection for things starting with "[" but I have a feeling this isn't right... this is as far as I got: public IList<SoilSamplingSubJob> extractSSSJ(FormCollection c) { IList<SoilSamplingSubJob> sssj_list=null; SoilSamplingSubJob sssj; var n=0; foreach (var key in c.AllKeys) // iterate through the formcollection { var value = c[key]; if(key.StartsWith("[")) // ie turn [0].gps_pk_chx into sssj.gps_pk_chx ??? } return sssj_list; }
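
    A hedged sketch of the manual route: group the keys shaped like "[0].farm" by index and map each group onto a SoilSamplingSubJob via reflection (property names are assumed to match the form field names exactly).

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;
    using System.Text.RegularExpressions;
    using System.Web.Mvc;

    public static class SubJobFormParser
    {
        public static IList<SoilSamplingSubJob> Extract(FormCollection form)
        {
            var jobs = new SortedDictionary<int, SoilSamplingSubJob>();
            var pattern = new Regex(@"^\[(\d+)\]\.(\w+)$");

            foreach (string key in form.AllKeys)
            {
                Match match = pattern.Match(key);
                if (!match.Success) continue;

                int index = int.Parse(match.Groups[1].Value);
                string propertyName = match.Groups[2].Value;

                SoilSamplingSubJob job;
                if (!jobs.TryGetValue(index, out job))
                {
                    job = new SoilSamplingSubJob();
                    jobs[index] = job;
                }

                PropertyInfo property = typeof(SoilSamplingSubJob).GetProperty(propertyName);
                if (property == null || !property.CanWrite) continue;

                try
                {
                    property.SetValue(job, Convert.ChangeType(form[key], property.PropertyType), null);
                }
                catch (FormatException) { /* leave the property at its default value */ }
            }

            return jobs.Values.ToList();
        }
    }
    ```

    If you control the field names, the simpler route is to give them a prefix (e.g. "subjobs[0].farm") and let the DefaultModelBinder bind an IList<SoilSamplingSubJob> subjobs action parameter directly, with no FormCollection parsing at all.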

  • c# Most efficient way to combine two objects

    - by Dested
    I have two objects that can be represented as an int, float, bool, or string. I need to perform an addition on these two objects with the results being the same thing c# would produce as a result. For instance 1+"Foo" would equal the string "1Foo", 2+2.5 would equal the float 5.5, and 3+3 would equal the int 6 . Currently I am using the code below but it seems like incredible overkill. Can anyone simplify or point me to some way to do this efficiently? private object Combine(object o, object o1) { float left = 0; float right = 0; bool isInt = false; string l = null; string r = null; if (o is int) { left = (int)o; isInt = true; } else if (o is float) { left = (float)o; } else if (o is bool) { l = o.ToString(); } else { l = (string)o; } if (o1 is int) { right = (int)o1; } else if (o is float) { right = (float)o1; isInt = false; } else if (o1 is bool) { r = o1.ToString(); isInt = false; } else { r = (string)o1; isInt = false; } object rr; if (l == null) { if (r == null) { rr = left + right; } else { rr = left + r; } } else { if (r == null) { rr = l + right; } else { rr = l + r; } } if (isInt) { return Convert.ToInt32(rr); } return rr; }
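
    A hedged simplification, assuming .NET 4 or later: the dynamic binder applies the same overload resolution the C# compiler would have used, so int + int stays int, int + float becomes float, and anything + string becomes string concatenation. The one case it cannot resolve is bool + bool (C# defines no + for booleans), which the original code handles by falling back to strings, so that fallback is kept explicitly.

    ```csharp
    using Microsoft.CSharp.RuntimeBinder;

    public static class Combiner
    {
        public static object Combine(object left, object right)
        {
            try
            {
                // Resolved at runtime with normal C# addition semantics.
                return (dynamic)left + (dynamic)right;
            }
            catch (RuntimeBinderException)
            {
                // e.g. bool + bool: no '+' operator exists, so concatenate like the original code did.
                return left.ToString() + right.ToString();
            }
        }
    }
    ```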
