Search Results

Search found 3061 results on 123 pages for 'interfaces'.

Page 67/123

  • UIView vs CCLayer and Making gestures work in Cocos2d

    - by Lewis
    Now, I've been searching for an answer to this question for at least 3 days. I've tried many cocos2d forums, including the official one, and have heard nothing back. I've found a project which uses custom gestures: https://github.com/melle/OneFingerRotationGestureDemo Explanation here: http://blog.mellenthin.de/archives/2012/02/13/an-one-finger-rotation-gesture-recognizer/ Now I want to implement that behaviour on a sprite in a cocos2d application. I've tried to do this, but it fails to work. It uses a view controller which inherits like this: @interface OneFingerRotationGestureViewController : UIViewController <OneFingerRotationGestureRecognizerDelegate> Now my question is: how would I implement the OneFingerRotationGesture behaviour on a CCSprite in cocos2d 2.0? Interfaces in cocos2d look like this: @interface HelloWorldLayer : CCLayer I have asked a similar question on Stack Overflow, and a user directed me to this link: https://github.com/krzysztofzablocki/CCNode-SFGestureRecognizers I believe it makes use of gestures (like the first GitHub project linked above), but not custom gestures. I lack the Obj-C skills to work out and implement the functionality into my game, so I would appreciate it if someone could explain the differences between CCLayer and UIViewController, and help me implement the OneFingerRotation gesture into a cocos2d 2.0 project. Regards, Lewis.

    Read the article

  • Architecture advice for converting biz app from old school to new school?

    - by Aaron Anodide
    I've got a WinForms business application that has evolved over the past few years. It's forms over data, with a number of custom UI experiences tailored to the business, so I don't think it's a candidate to port to something like SharePoint or rewrite in LightSwitch (at least not without significant investment). When I started it in 2009 I was new to this type of development (coming from more low-level programming, and my RDBMS knowledge was just slightly greater than what I got from school). Thus, when I was confronted with a business model that operates on a strict monthly accounting cycle, I made the unfortunate decision to create a separate database for each accounting period. Also, when I started I knew DataSets, then I learned LINQ to SQL, then I learned Entity Framework. The screens are a mix and match of those. Now, after a few years developing this thing by myself, I've finally got a small team. Ultimately, I want a web front end (for remote access to more straight-up screens with grids of data) and a thick client (for the highly customized interfaces). My question is: can you offer me some broad-strokes architecture advice that will help me formulate a battle plan to convert over to a single database and lay the foundations for my future goals at the same time? Here's a screenshot showing how an older screen uses DataSets and a newer screen uses EF (I'm thinking this might make it more real for someone reading the question; I'm willing to add any amount of detail if someone is willing to help).
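    (A sketch of one common seam for this kind of migration, with hypothetical names; BizContext stands in for an assumed EF DbContext. The idea is to hide the mixed DataSet/LINQ to SQL/EF access behind small repository interfaces, so screens stop caring which data technology, or which period database, is underneath.)

        // hypothetical seam: screens depend on this, not on DataSets/L2S/EF directly
        public interface IInvoiceRepository
        {
            Invoice GetById(int invoiceId);
            IReadOnlyList<Invoice> GetForPeriod(int year, int month);
            void Save(Invoice invoice);
        }

        // one implementation per data technology; swap them out as screens migrate
        public sealed class EfInvoiceRepository : IInvoiceRepository
        {
            private readonly BizContext _context; // assumed EF DbContext

            public EfInvoiceRepository(BizContext context) { _context = context; }

            public Invoice GetById(int invoiceId)
            {
                return _context.Invoices.Single(i => i.Id == invoiceId);
            }

            public IReadOnlyList<Invoice> GetForPeriod(int year, int month)
            {
                // the period becomes a column filter, not a separate database
                return _context.Invoices
                               .Where(i => i.Year == year && i.Month == month)
                               .ToList();
            }

            public void Save(Invoice invoice)
            {
                // attach/update logic elided in this sketch
                _context.SaveChanges();
            }
        }

    The comment in GetForPeriod is the key move: once callers ask a repository for a period instead of opening a period-specific database, collapsing to a single database becomes an implementation detail.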

    Read the article

  • C#: Handling Notifications: inheritance, events, or delegates?

    - by James Michael Hare
    Oftentimes as developers we have to design a class where we get notified when certain things happen. In older object-oriented code this would often be implemented by overriding methods -- with events, delegates, and interfaces, however, we have far more elegant options. So, when should you use each of these methods, and what are their strengths and weaknesses? Now, for the purposes of this article, when I say notification I'm just talking about ways for a class to let a user know that something has occurred. This can be through any programmatic means such as inheritance, events, delegates, etc. So let's build some context. I'm sitting here thinking about a provider-neutral messaging layer for the place I work, and I got to the point where I needed to design the message subscriber which will receive messages from the message bus. Basically, what we want is to be able to create a message listener and have it be called whenever a new message arrives. Now, back before the flood we would have done this via inheritance and an abstract class:

        // using inheritance - omitting argument null checks and halt logic
        public abstract class MessageListener
        {
            private ISubscriber _subscriber;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber)
            {
                _subscriber = subscriber;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // user will override this to process their messages
            protected abstract void OnMessageReceived(Message msg);

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        OnMessageReceived(msg);
                    }
                }
            }
            ...
        }

    It seems so odd to write this kind of code now. Does it feel odd to you? Maybe it's just because I've gotten so used to delegation that I really don't like the feel of this. To me it is akin to saying that if I want to drive my car I need to derive a new instance of it just to put myself in the driver's seat. And yet, unquestionably, five years ago I would have probably written the code as you see above. To me, inheritance is a flawed approach for notifications for several reasons:

    - Inheritance is one of the HIGHEST forms of coupling.
    - You can't seal the listener class, because it depends on sub-classing to work.
    - Because C# does not allow multiple inheritance, I've spent my one inheritance implementing this class.
    - Every time you need to listen to a bus, you have to derive a class, which leads to lots of trivial sub-classes.
    - The act of consuming a message should be a separate responsibility from the act of listening for a message (SRP).
    - Inheritance is such a strong statement (this IS-A that) that it should only be used in building type hierarchies and not for overriding use-specific behaviors and notifications.

    Chances are, if a class needs to be inherited to be used, it most likely is not designed as well as it could be in today's modern programming languages. So let's look at the other tools available to us for getting notified instead. Here are a few other choices to consider:

    - Have the listener expose a MessageReceived event.
    - Have the listener accept a new IMessageHandler interface instance.
    - Have the listener accept an Action<Message> delegate.
    Really, all of these are different forms of delegation. Now, .NET events are a bit heavier than the other types of delegates in terms of run-time execution, but they are a great way to allow others using your class to subscribe to your events:

        // using event - omitting argument null checks and halt logic
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber)
            {
                _subscriber = subscriber;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // user will subscribe to this to process their messages
            public event Action<Message> MessageReceived;

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null && MessageReceived != null)
                    {
                        MessageReceived(msg);
                    }
                }
            }
        }

    Note, now we can seal the class to avoid changes, and the user just needs to provide a message handling method:

        theListener.MessageReceived += CustomReceiveMethod;

    However, personally I don't think events hold up as well in this case, because events are largely optional. To me, what is the point of a listener if you create one with no event handlers? So in my mind, use events when handling the notification is optional. So how about delegation via interface? I personally like this method quite a bit. Basically it is similar to the inheritance method mentioned first, but better, because it makes it easy to split the part of the class that doesn't change (the base listener behavior) from the part that does change (the user-specified action after receiving a message). So assume we had an interface like:

        public interface IMessageHandler
        {
            void OnMessageReceived(Message receivedMessage);
        }

    Our listener would look like this:

        // using delegation via interface - omitting argument null checks and halt logic
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private IMessageHandler _handler;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber, IMessageHandler handler)
            {
                _subscriber = subscriber;
                _handler = handler;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        _handler.OnMessageReceived(msg);
                    }
                }
            }
        }

    And they would call it by creating a class that implements IMessageHandler and passing that instance into the constructor of the listener. I like that this alleviates the issues of inheritance and essentially forces you to provide a handler on construction (as opposed to events). Well, this is good, but personally I think we could go one step further. While I like this better than events or inheritance, it still forces you to implement a specific method name. What if that name collides?
    Furthermore, if you have lots of these, you end up either with large classes implementing multiple interfaces just to supply one method each, or with lots of small classes. Also, if you had one class that wanted to manage messages from two different subscribers differently, it couldn't, because the interface method can't be overloaded. This brings me to using delegates directly. In general, every time I think about creating an interface for something, and that interface contains only one method, I start thinking a delegate is a better approach. Now, that said, delegates don't accomplish everything an interface can. Obviously, having the interface allows you to refer to the classes that implement the interface, which can be very handy. In this case, though, really all you want is a method to handle the messages. So let's look at a method delegate:

        // using delegation via delegate - omitting argument null checks and halt logic
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private Action<Message> _handler;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber, Action<Message> handler)
            {
                _subscriber = subscriber;
                _handler = handler;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        _handler(msg);
                    }
                }
            }
        }

    Here the MessageListener now takes an Action<Message>. For those of you unfamiliar with the pre-defined delegate types in .NET, that is a method with the signature: void SomeMethodName(Message). The great thing about delegates is that they give you a lot of power. You could create an anonymous delegate, a lambda, or specify any other method, as long as it satisfies the Action<Message> signature. This way, you don't need to define an arbitrary helper class or name the method a specific thing. Incidentally, we could combine both the interface and delegate approaches to allow maximum flexibility. Doing this, the user could either pass in a delegate or specify a delegate interface:

        // using delegation - give users choice of interface or delegate
        public sealed class MessageListener
        {
            private ISubscriber _subscriber;
            private Action<Message> _handler;
            private bool _isHalted = false;
            private Thread _messageThread;

            // assign the subscriber and start the messaging loop
            public MessageListener(ISubscriber subscriber, Action<Message> handler)
            {
                _subscriber = subscriber;
                _handler = handler;
                _messageThread = new Thread(MessageLoop);
                _messageThread.Start();
            }

            // passes the interface method as a delegate using a method group
            public MessageListener(ISubscriber subscriber, IMessageHandler handler)
                : this(subscriber, handler.OnMessageReceived)
            {
            }

            // handle the looping in the thread
            private void MessageLoop()
            {
                while (!_isHalted)
                {
                    // as long as processing, wait 1 second for message
                    Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1));
                    if (msg != null)
                    {
                        _handler(msg);
                    }
                }
            }
        }

    This is the approach I tend to prefer, because it allows the user of the class to choose whichever method works best for them.
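    For completeness, here is a short usage sketch (not from the original article; it assumes the ISubscriber, Message, and MessageListener definitions above) showing both construction styles side by side:

        // a reusable handler for the interface-based overload
        public sealed class ConsoleHandler : IMessageHandler
        {
            public void OnMessageReceived(Message receivedMessage)
            {
                Console.WriteLine("Received: " + receivedMessage);
            }
        }

        public static class ListenerDemo
        {
            public static void Run(ISubscriber subscriber)
            {
                // lambda form: no helper class needed
                var viaLambda = new MessageListener(subscriber,
                    msg => Console.WriteLine("Received: " + msg));

                // interface form: a reusable, independently testable handler object
                var viaInterface = new MessageListener(subscriber, new ConsoleHandler());
            }
        }

    Either way, the listener's message loop stays identical; only the way the handler is supplied differs.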
    You may be curious about the actual performance of these different methods:

        Enter iterations:
        1000000

        Inheritance took 4 ms.
        Events took 7 ms.
        Interface delegation took 4 ms.
        Lambda delegate took 5 ms.

    Before you get too caught up in the numbers, however, keep in mind that this is performance over 1,000,000 iterations. Since they all come in under 10 ms, that boils down to fractions of a microsecond per iteration, so really any of them is a fine choice performance-wise. As such, I think the choice of what to do really boils down to what you're trying to accomplish. Here are my guidelines:

    - Inheritance should be used only when defining a collection of related types with implementation-specific behaviors; it should not be used as a hook for users to add their own functionality.
    - Events should be used when subscription is optional or multi-cast is desired.
    - Interface delegation should be used when you wish to refer to implementing classes by the interface type, or if the type requires several methods to be implemented.
    - Delegate method delegation should be used when you only need to provide one method and do not need to refer to implementers by the interface name.

    Read the article

  • How to talk a client out of a Flash website?

    - by bunglestink
    I have recently been doing a bunch of web side projects through word-of-mouth recommendations only. Although I am much more of a programmer than a designer, my design skills are not terrible, and I do not hate dealing with UI like many programmers do. As a result, I find myself lured into a bunch of side projects where, aside from a minimal back end for content administration, most of the programming is on front end interfaces (read javascript/css). By far the biggest frustration I have had is convincing clients that they do not want Flash. Aside from the fact that I really do not enjoy Flash "development", there are many practical reasons why Flash is not desirable (lack of compatibility across devices, decreased client accessibility, plug-in requirements, increased development time, etc.). Instead of just flat out telling the clients "I will not build you a Flash website", I would much rather use tactics to convince/explain to them that this is not what they actually want, i.e. that it will not meet their requirements any better than standard html/css/js and will distract users from their content. What kind of first hand experience do others have with this? How do you explain to someone that javascript/css/AJAX is usually a better option for most websites? Why do people want to use Flash so badly to begin with? This question pertains to clients who do not have any technical reasons for wanting Flash, but just want it because they think it makes pretty websites.

    Read the article

  • What is the rationale behind Apache Jena's *everything is an interface if possible* design philosophy?

    - by David Cowden
    If you are familiar with the Java RDF and OWL engine Jena, then you have run across their philosophy that everything should be specified as an interface when possible. This means that a Resource, Statement, RDFNode, Property, and even the RDF Model, etc., are, contrary to what you might first think, interfaces instead of concrete classes. This leads to the use of factories quite often. Since you can't instantiate a Property or Model, you must have something else do it for you: the Factory design pattern. My question, then, is: what is the reasoning behind using this pattern as opposed to a traditional class hierarchy system? It is often perfectly viable to use either one. For example, if I want a memory-backed Model instead of a database-backed Model, I could just instantiate those classes; I don't need to ask a factory to give me one. As an aside, I'm in the process of writing a library for manipulating Pearltrees data, which is exported from their website in the form of an RDF/XML document. As I write this library, I have many options for defining the relationships present in the Pearltrees data. What is nice about the Pearltrees data is that it has a very logical class system: a tree is made up of pearls, which can be either Page, Reference, Alias, or Root pearls. My question comes from trying to figure out whether I should adopt the Jena philosophy in my library which uses Jena, or whether I should disregard it, pick my own design philosophy, and stick with it.
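    (To make the contrast concrete, here is a minimal sketch of the two styles. It is written in C# for consistency with the other code on this page; Jena itself is Java, and these type names are simplified stand-ins, not Jena's actual API.)

        using System.Collections.Generic;

        // style 1: interface + factory, the Jena philosophy
        public interface IModel
        {
            void Add(string subject, string predicate, string obj);
        }

        public static class ModelFactory
        {
            // callers never name a concrete class, so the backing store can change freely
            public static IModel CreateMemModel() => new MemModel();
        }

        internal sealed class MemModel : IModel
        {
            private readonly List<(string S, string P, string O)> _triples = new();
            public void Add(string s, string p, string o) => _triples.Add((s, p, o));
        }

        // style 2: traditional concrete classes - simpler, but commits callers to a type
        // var model = new MemModel(); // swapping in a database-backed model now breaks callers

    The factory buys the library the freedom to return different implementations (memory-backed, database-backed, inference-wrapped) behind one type, at the cost of the indirection the question complains about.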

    Read the article

  • Ask the Readers: Do You Use the Command Line?

    - by Jason Fitzpatrick
    Despite over two decades of GUI interfaces, many power users still turn to the command prompt. This week we want to hear about when and how you use the command prompt on your computer. Long ago, in a time before you could manipulate your computer with a mouse and a series of buttons and windows, the command line ruled all. Even after years of GUI development and refinement, many people still turn to the command line to get things done. This week we want to hear all about your command line tips and tricks. Do you use the default command line for your OS? Have you enhanced it? Replaced it? What keeps you coming back to the command line when everyone happily works away in the OS's GUI? Sound off in the comments, and don't forget to check back in on Friday to see the What You Said roundup.

    Read the article

  • DAO/Webservice Consumption in Web Application

    - by Gavin
    I am currently working on converting a "legacy" web-based (ColdFusion) application from a single data source (MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff, plus additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs, containing business product/item and manufacturing/SCM planning data. The reason I have the opportunity and need to convert to multi-tier OOP is that a newer, more modern ERP system is being implemented business-wide as a complete replacement. This newer ERP system offers several interfaces for third party applications like mine, from direct SQL access to either a .NET web service or a SOAP-like web service. I have found several suitable frameworks I would be happy to use (ColdSpring, FW/1), but I am not sure what design patterns apply to my data access object/component, nor how to manage the connection/session tokens. With this background, my question has the following three parts:

    1. I have concerns about moving from the relative safety of an SSIS job, which protects me from downtime and from the speed of the ERP system, to connecting directly with one of the web services, which I note seem significantly slower than I expected (simple/small requests often take up to a whole second). Are there any design patterns I can investigate/use to cache/protect my data tier?
    2. It is my understanding that data access objects (the components that connect directly with the web services and convert their responses into the data types I can then work with in my domain objects) should be singletons (and will act as an Adapter/Facade). Am I correct?
    3. As part of the data access object I have to set up a connection by username/password (I could set up multiple users and/or connect multiple times with this), which responds with a session token that needs to be provided on every subsequent request. Do I do this once and share it across the whole application? Do I set up a new "connection" for every user of my application and keep the token in their session scope (which might quickly hit licensing limits)? Do I set the "connection" up per page request? Or is there a design pattern I am missing that can manage multiple "connections", where a request/access uses the first free "connection"? It is worth noting that if the ERP system dies I will need to reset/invalidate all the connections and start from scratch, and depending on which web service I use I might need to manually close the "connection/session".
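    (For the third part, one pattern worth naming is a shared, lazily refreshed session token guarded by a lock: essentially a tiny connection manager. The sketch below is in C# with hypothetical names, since the actual system is ColdFusion, but the idea transfers directly.)

        // hypothetical: one shared token, re-authenticated on demand
        public sealed class ErpSession
        {
            private readonly object _lock = new object();
            private string _token; // null until first use or after invalidation

            public string GetToken()
            {
                lock (_lock)
                {
                    if (_token == null)
                    {
                        _token = Authenticate(); // call the ERP login web service
                    }
                    return _token;
                }
            }

            // called when a request fails with an auth error or the ERP restarts
            public void Invalidate()
            {
                lock (_lock) { _token = null; }
            }

            private string Authenticate()
            {
                // web-service login with the configured username/password goes here
                return "token-from-erp";
            }
        }

    Every data access object shares one ErpSession; on an auth failure it calls Invalidate() and retries once, which also covers the "ERP died, start from scratch" case without per-user tokens eating into licensing limits.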

    Read the article

  • New Study Guide: "Oracle Solaris 11 System Administration"

    - by Harold Green
    A new helpful resource for Solaris 11 exam preparation has just been released. "Oracle Solaris 11 System Administration" by author and educator Bill Calkins covers effective installation and administration of an Oracle Solaris 11 system. In addition to being a valuable, comprehensive study guide, the book also serves as a complete reference guide for the everyday tasks of an Oracle Solaris System Administrator. This book can be a valuable addition to your preparation for the Oracle Solaris 11 Advanced System Administration (1Z1-822) certification exam. This exam, combined with the Oracle Certified Associate, Oracle Solaris 11 System Administrator (OCA) certification and a training requirement, will earn you the Oracle Certified Professional, Oracle Solaris 11 System Administrator (OCP) certification. This valuable credential is designed for Oracle Solaris System Administrators with a strong foundation in the Oracle Solaris 11 Operating System as well as a fundamental understanding of the UNIX operating system, commands, and utilities. This certification covers core topics such as configuring network interfaces and managing swap configurations, crash dumps, and core files. The 822 exam is currently in beta at the greatly discounted rate of $50 USD, but the beta period will soon be closing (likely the end of this month, June 2013), so take advantage of the opportunity to be one of the first to hold this new certification. Bill Calkins also recently posted some tips for taking Oracle Solaris 11 certification exams.

    Read the article

  • Decoupling software components via naming convention

    - by csteinmueller
    I'm currently evaluating alternatives to refactor a driver management system. In my multi-tier architecture I have:

    Base class:
    - DAL.Device // my entity

    Interfaces:
    - BL.IDriver // handles the data processing between application and device
    - BL.IDriverCreator // creates an IDriver from a Device
    - BL.IDriverFactory // handles the driver creation requests

    Every specialization of Device has a corresponding IDriver implementation and a corresponding IDriverCreator implementation. At the moment the mapping is fixed via a type check within the business layer / DriverFactory. That means every new driver needs a) a code change within the DriverFactory and b) a reference to the new IDriver implementation/assembly. From a customer's point of view, that means every new driver, used or not, needs a complex revalidation of their hardware environment, because it's a critical process. My first inspiration was to use a Caliburn.Micro-like name convention (see Caliburn.Micro: Xaml Made Easy):

    - BL.RestDriver
    - BL.RestDriverCreator
    - DAL.RestDevice

    After receiving the RestDevice within the IDriverFactory, I can load all driver DLLs via reflection and do name splitting/comparing (extracting the xx from xxDriverCreator and xxDevice). Another idea would be a custom attribute (which also leads to comparing strings). My question: is that a good approach across layer borders? If not, what would be a good approach?
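    A minimal sketch of the convention-based lookup (C#; the xxDevice-to-xxDriverCreator naming and the IDriverCreator interface are taken from the question, everything else is hypothetical):

        using System;
        using System.Linq;
        using System.Reflection;

        public static class DriverCreatorResolver
        {
            // finds e.g. "RestDriverCreator" for a device of type "RestDevice"
            public static IDriverCreator CreateFor(object device, Assembly[] driverAssemblies)
            {
                // extract the "Rest" prefix from "RestDevice"
                string prefix = device.GetType().Name.Replace("Device", "");
                string creatorName = prefix + "DriverCreator";

                Type creatorType = driverAssemblies
                    .SelectMany(a => a.GetTypes())
                    .FirstOrDefault(t => t.Name == creatorName && !t.IsAbstract);

                if (creatorType == null)
                    throw new InvalidOperationException(
                        "No creator found for " + device.GetType().Name);

                return (IDriverCreator)Activator.CreateInstance(creatorType);
            }
        }

    Whether the match lives in a name or in a custom attribute such as a hypothetical [DriverFor(typeof(RestDevice))], the factory no longer needs a code change per driver; it only needs the new assembly dropped next to the application, which is what removes the revalidation pressure.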

    Read the article

  • Staying OO and Testable while working with a database

    - by Adam Backstrom
    What are some OOP strategies for working with a database while keeping things testable? Say I have a User class and my production environment works against MySQL. I see a couple of possible approaches, shown here using PHP:

    1. Pass in a $data_source with interfaces for load() and save(), to abstract the backend source of data. When testing, pass in a different data store.

        $user = new User( $mysql_data_source );
        $user->load( 'bob' );
        $user->setNickname( 'Robby' );
        $user->save();

    2. Use a factory that accesses the database and passes the result row to User's constructor. When testing, manually generate the $row parameter, or mock the object in UserFactory::$data_source. (How might I save changes to the record?)

        class UserFactory
        {
            static $data_source;

            public static function fetch( $username )
            {
                $row = self::$data_source->get( [params] );
                $user = new User( $row );
                return $user;
            }
        }

    I have Design Patterns and Clean Code here next to me, but I'm struggling to find applicable concepts.

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done. 'Pure' SOA involves the modeling of an organisation's processes, the so called 'Top Down' approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom up approach. This usually involves services that simply start popping up in the organization, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, have clearly little relation to process driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA. These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on 'Succeeding with SOA'. Organisations who focus too much on bottom up development, or who waste too much time and money on top down approaches that don't produce results, are often recommended to attempt an 'agile' (Erl) or 'middle-out' (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn't the aim of the project. If a project is started with the simple aim of 'Succeeding with SOA', then the reasons for the project's existence probably need to be questioned. There are a number of things we can be sure of:

    - An organisation will have a number of disparate IT systems
    - Some of these systems will have redundant data and functionality
    - Integration will give considerable ROI
    - Integration will already be under way
    - Services will already exist in the organisation
    - These services will be inconsistent in their implementation and in their governance

    So there are three goals here:

    1. Alignment between the business and IT
    2. Integration of disparate systems
    3. Management of services

    Goals 2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring goal 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. Goals 2 and 3 are ongoing, and they will continue happening even if a large project to produce a SOA metamodel is started. The result will then be an unstructured cackle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with the far-reaching consequences that this will have on all our current LOB systems. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never; that's the point of integration). We will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large. Accepting that an object can have more than one model over time, with perhaps more than one model in use at any given time, will help us realise the limitations of the top down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.
    So, instead of trying to constantly force these goals into a straight line, why not let them happen in parallel, and manage the changes in each layer? If company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department. If company B's IT department recognizes the problem of wild services springing up everywhere, and decides to do something about it by designing a platform and processes for the introduction of services, is this not a valid approach? With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete, and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models can be guaranteed to change in the not so near future. To 'Succeed with SOA', company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes. Could the problem be that there are two different problem domains? And could the whole concept of SOA, as it is being described by clever salespeople today, create an example of the oft dreaded 'tight coupling' between these two domains? Could it be that we have taken two large problem areas, and bundled the solution together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists? Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren't focusing on their actual goals. If company A starts building services from incomplete models, without a gameplan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A. Now we have two companies, who a short while ago had one problem each, that now have two problems each. This has happened because of a focus on 'Succeeding with SOA', rather than solving the problem at hand. This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organization. But only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box. Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution's level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture). The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management and implementation of services.
    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other. This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top down or bottom up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling. How do we do this? The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft's Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst's view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as 'SOA services', and the implementations as simple integration 'adapter' services providing an interface to a technical implementation. The Managed Services Engine also provides policy based control over services, regardless of where they are deployed, simplifying handling of security, logging, exception handling, etc. This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can't easily do both at the same time. This approach allows quick delivery, with quality increasing over time, allowing modeling and service development to occur in parallel and independent of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and requires a certain maturity to realize and drive forward. It is also completely possible that a company can benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI. Of course there are disadvantages to this. The biggest one is the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redeveloping the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the service according to the model. However, if that approach worked, we wouldn't be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together, so that they are forever bound to the model that only applied at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made.
    The choice is between something that will initially be of low quality but will work, and something that may well be impossible to achieve in most situations. In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force on both something that neither wants, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?

    Read the article

  • Ubuntu Server 12.04 as a router. Problem with DNS

    - by Lorenzo
    I have a VirtualBox lab made up of 4 Windows 2008 R2 servers (DC/DNS, SQL, SHAREPOINT, EXCHANGE) that are configured with static IP addresses, with NICs attached to the internal network. Everything works. I have a requirement to execute some tests that also access external services available on the internet. To keep things clean and similar to the production environment, I have installed another VM with Ubuntu Server 12.04 64-bit, configured (I hope) to work as a router as described in this post. This VM has two network interfaces: the first is bridged with the host and is used as the WAN connection, and the other is attached to the internal network with its own static IP address on the internal subnet. But the Windows servers do not connect to the internet, while the Unix one does. I ran a route command; this is the result:

        Kernel IP routing table
        Destination     Gateway        Genmask         Flags  Metric  Ref  Use  Iface
        default         10.69.121.1    0.0.0.0         UG     100     0    0    eth0
        10.69.121.0     *              255.255.255.0   U      0       0    0    eth0
        192.168.83.0    *              255.255.255.0   U      0       0    0    eth1

    Can somebody help me with this configuration? :) Thanks! Addendum: I forgot to mention that one of the Windows servers hosts a DNS service, for which I should probably configure a forwarding server, but I do not know exactly which server to forward to... :(
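    (For reference, not from the original question: the usual checklist for this symptom, assuming eth0 is the bridged WAN side and eth1 the internal side as shown above, is to enable packet forwarding and NAT on the Ubuntu VM, then give the Windows DNS server a forwarder it can reach.)

        # enable packet forwarding (persist by setting net.ipv4.ip_forward=1 in /etc/sysctl.conf)
        sudo sysctl -w net.ipv4.ip_forward=1

        # masquerade traffic from the internal network out the WAN interface
        sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    With that in place, the Windows DNS server can forward to any resolver reachable from the WAN side, such as the gateway 10.69.121.1 or whatever resolver the Ubuntu VM itself uses in /etc/resolv.conf.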

    Read the article

  • How do I boot Ubuntu Cloud images in vmware?

    - by Graham
    I want to run an Ubuntu cloud image on VMWare. I've gotten pretty far, but want to know how to set the OVF properties in a way that VMWare understands in order to pass parameters to cloud-init. This is what I've done:

    1. Install VMWare Player 4.0.4, using the "VMware Workstation 8.0.2 / Player 4.0.2 fix for Linux 3.2+" patch to get around the compilation failure for the virtual ethernet module.
    2. Download precise-server-cloudimg-amd64.ovf, and also the QCOW2 .img file (220MB).
    3. Run qemu-img convert -O vmdk precise-server-cloudimg-amd64-disk1.img precise-server-cloudimg-amd64-disk1.vmdk to convert the image from QCOW2 to VMDK.
    4. Edit the OVF to change the extension and ovf:size on the <File> element, and set ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#sparse" on the <Disk> element.
    5. Edit the OVF to remove all the <Property> elements, because vmplayer was complaining about unrecognised elements.
    6. Run the OVF in vmplayer. Alternatively, run ovftool to convert to vmx and run the vmx.

    Unfortunately I can't log in as "ubuntu" at the prompt, because the OVF properties haven't been provided to cloud-init. How should I do this?
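    (A sketch, untested with VMware Player: as I understand cloud-init's OVF datasource, it reads a small set of property keys - user-data, password, instance-id, hostname, seedfrom - from the OVF environment, so instead of deleting the <Property> elements in step 5 one would keep a ProductSection along these lines, with user-data base64-encoded. Note that VMware Player's support for the OVF environment transport is limited, which may itself be the obstacle here.)

        <ProductSection>
          <Property ovf:key="instance-id" ovf:type="string" ovf:userConfigurable="true"
                    ovf:value="iid-ovf-demo-001"/>
          <Property ovf:key="hostname" ovf:type="string" ovf:userConfigurable="true"
                    ovf:value="cloudimg-test"/>
          <!-- base64 of a #cloud-config document that sets a password for "ubuntu" -->
          <Property ovf:key="user-data" ovf:type="string" ovf:userConfigurable="true"
                    ovf:value="I2Nsb3VkLWNvbmZpZw..."/>
        </ProductSection>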

    Read the article

  • Are there any "best practices" on cross-device development?

    - by vstrien
    Developing for smartphones in the way the industry is currently doing it is relatively new. Of course, there has been enterprise-level mobile development for several decades. The platforms have changed, however. Think of:

    - the move from stylus input to touch input (different screen resolutions, different control layouts, etc.)
    - new ways of handling multi-tasking on mobile platforms (e.g. WP7's "tombstoning")

    The way these platforms work isn't totally new (the iPhone has been around for quite a while now, for example), but at the moment, when developing a functionally equal application for both desktop and smartphone, it comes down to developing two applications from the ground up. Especially with the birth of Windows Phone, with the .NET platform on board and Silverlight as the UI language, it is becoming appealing to promote the re-use of (parts of) the UI. Still, it's fairly obvious that the needs of an application on a smartphone (or tablet) are very different compared to the needs of a desktop application. An (almost) one-to-one conversion will therefore be impossible. My question: are there "best practices", pitfalls, etc. documented about developing "cross-device" applications (for example, developing an app for both the desktop and the smartphone/tablet)? I've been looking at weblogs, scientific papers and more for a week or so, but what I've found so far is only about "migratory interfaces".

    Read the article

  • Implementing a multilanguage AI contest platform

    - by Alejandro Piad
    This is a followup to this question. To sum up: I'm implementing an AI contest site, where each user may submit several AI implementations for different games. Think Google AI Challenge, but instead of just having a big event once a year, I would like it to run more in a league fashion, with all virtual players playing each other at frequent, regular intervals. I want to support as many programming languages as possible. I've seen that contest sites (like Codeforces) ask you to submit source code and interact through stdin and stdout. The first question is: what is the best way of supporting multiple languages? As I see it, I can either ask people to upload a binary/script and interact through stdin/stdout, sockets, or the file system; or ask people to submit source code and wrap it with whatever is necessary for the interaction. I would like to skip the need to compile the code myself (on the server, I mean), but I am willing to do it if it's the "best" choice. I need to communicate virtual players with each other, or even better, with some intermediary arbiter. The second question is regarding security. If I'm going to be running user code on my server, I want to ensure strict security conditions, like no file system access, no networking, etc. Otherwise it would be a safe haven for hackers. I will be implementing the engine/arbiter in .NET. I would like to support at least C#, C++, Java and Python for the users' implementations. I'm willing to write interfaces for each of these languages to simplify the user interaction with the system. Thanks in advance.
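    (As a rough illustration of the stdin/stdout option, the .NET arbiter can host each player as a child process with redirected streams. This is a minimal sketch; the player executable path and the line-based protocol are placeholders, not part of the question.)

        using System.Diagnostics;

        // launch one AI player and exchange line-based moves over stdin/stdout
        var psi = new ProcessStartInfo("players/bot1.exe")
        {
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            UseShellExecute = false,
            CreateNoWindow = true
        };

        using (Process player = Process.Start(psi))
        {
            player.StandardInput.WriteLine("STATE 0 0 0");   // arbiter -> player: game state
            string move = player.StandardOutput.ReadLine();  // player -> arbiter: chosen move
            // ... validate the move, enforce a time limit with player.WaitForExit(timeoutMs),
            // and kill the process on a violation ...
        }

    Language independence falls out for free (any runtime that can read and write standard streams can play), and the sandboxing question then becomes one of OS-level restrictions on the child process rather than per-language plumbing.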

    Read the article

  • Why C# does not support multiple inheritance?

    - by Jalpesh P. Vadgama
    Yesterday one of my friends, Dharmendra, asked me why C# does not support multiple inheritance. This is a question people ask all the time, so I thought it would be good to write a blog post about it. So why doesn't it support multiple inheritance? I dug into the problem and found some good links from the C# team at Microsoft on why it's not supported. Following is a link for it. http://blogs.msdn.com/b/csharpfaq/archive/2004/03/07/85562.aspx Also, I gave my friend Dharmendra some examples of where multiple inheritance can be a problem. The problem is called the diamond problem. Let me explain a bit. If you have a class that inherits from more than one class, and if two of those classes have a function with the same signature, then for the child class object it is impossible to call the method of one specific parent class. Here is the link that explains more about the diamond problem. http://en.wikipedia.org/wiki/Diamond_problem Now some people could ask me: then why does C# support the same thing with interfaces? But with interfaces you can call each method explicitly: this is the method for the first interface, and this is the method for the second interface. That is not possible with multiple inheritance. Following is an example of how we can implement multiple interfaces in a class and call the explicit method for a particular interface. Multiple Inheritance in C# That's it. Hope you like it. Stay tuned for more updates. Till then, happy programming.
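    (As a minimal sketch of that idea - the interface and class names here are hypothetical, not from the linked sample - two interfaces declare the same method signature, and explicit implementation lets the class provide a separate body for each, which is exactly what a class could not do for two base classes under multiple inheritance.)

        using System;

        public interface IEnglishGreeter { void Greet(); }
        public interface ISpanishGreeter { void Greet(); }

        public class Greeter : IEnglishGreeter, ISpanishGreeter
        {
            // explicit implementations: one body per interface, no ambiguity
            void IEnglishGreeter.Greet() => Console.WriteLine("Hello!");
            void ISpanishGreeter.Greet() => Console.WriteLine("Hola!");
        }

        public static class Demo
        {
            public static void Main()
            {
                var g = new Greeter();
                ((IEnglishGreeter)g).Greet(); // Hello!
                ((ISpanishGreeter)g).Greet(); // Hola!
            }
        }

    The cast picks which interface's method runs, which is precisely the disambiguation that the diamond problem makes impossible for classes.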

    Read the article

  • CodePlex Daily Summary for Wednesday, May 30, 2012

    Popular Releases

    - OMS.Ice - T4 Text Template Generator: OMS.Ice - T4 Text Template Generator v1.4.0.14110: Issue 601 - Template file name cannot contain characters that are not allowed in C#/VB identifiers; Issue 625 - Last line will be ignored by the parser; Issue 626 - Usage of environment variables and macros
    - Silverlight Toolkit: Silverlight 5 Toolkit Source - May 2012: Source code for the December 2011 Silverlight 5 Toolkit release.
    - totalem: version 2012.05.30.1: Beta version; added function for mass renaming of files and folders
    - Audio Pitch & Shift: Audio Pitch and Shift 4.4.0: Tracklist added on main window; improved performance with tracklist; some other fixes
    - Json.NET: Json.NET 4.5 Release 6: New feature - Added IgnoreDataMemberAttribute support; New feature - Added GetResolvedPropertyName to DefaultContractResolver; New feature - Added CheckAdditionalContent to JsonSerializer; Change - Metro build now always uses late bound reflection; Change - JsonTextReader no longer returns no content after consecutive underlying content read failures; Fix - Fixed bad JSON in an array with error handling creating an infinite loop; Fix - Fixed deserializing objects with a non-default cons...
    - DBScripterCmd - A command line tool to script database objects to separate files: DBScripterCmd Source v1.0.2.zip: Add support for SQL Server 2005
    - Indent Guides for Visual Studio: Indent Guides v12.1: Changed in v12.1: fixed crash when unable to start asynchronous analysis; fixed upgrade from v11. Changed in v12: background document analysis; new options dialog with Quick Set selections for behavior; new "glow" style for guides; new menu icon in VS 11 preview; control now uses editor theming; highlighting can be customised on each line; fixed issues with collapsed code blocks; improved behaviour around left-aligned pragma/preprocessor commands (C#/C++); new setting...
    - DotNetNuke® Community Edition CMS: 06.02.00: Major highlights: fixed issue in the Site Settings when single quotes were being treated as escape characters; fixed issue loading the Mobile Premium Data after upgrading from CE to PE; fixed errors logged when updating folder provider settings; fixed the order of the mobile device capabilities in the Site Redirection Management UI. The User Profile page was completely rebuilt. We needed User Profiles to have multiple child pages. This would allow for the most flexibility by still f...
    - StarTrinity Face Recognition Library: Version 1.2: Much better accuracy
    - ????: ????2.0.1: (description mis-encoded in the source)
    - WiX Toolset: WiX v3.6 RC: WiX v3.6 RC (3.6.2928.0) provides feature-complete Burn with VS11 support. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/5/28/WiX-v3.6-Release-Candidate-available
    - Javascript .NET: Javascript .NET v0.7: SetParameter() reverts to its old behaviour of allowing JavaScript code to add new properties to wrapped C# objects. The behavior added briefly in 0.6 (throws an exception) can be had via the new SetParameterOptions.RejectUnknownProperties. TerminateExecution now uses its isolate to terminate the correct context automatically. Added support for converting all C# integral types, decimal and enums to JavaScript numbers. (Previously only the common types were handled properly.) Bug fixe...
    - callisto: callisto 2.0.29: Added DNS functionality to scripting. See the documentation section for details of how to incorporate this into your scripts.
    - Phalanger - The PHP Language Compiler for the .NET Framework: 3.0 (May 2012): Fixes: unserialize() of negative float numbers; pcre possessive quantifiers and character classes containing ()[]; array deserialization when the array contains a reference to ISerializable; parsing lambda functions; round() reimplemented as it is in PHP to avoid .NET rounding errors; filesize bypass for FileInfo.Length bug in Mono. New features: time zones reimplemented, uses Windows/Linux database
    - SharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.1): New features: admin enable/disable match; hide/show Euro 2012 SharePoint lists (3 lists). Installing SharePoint Euro 2012 Predictor: SharePoint Euro 2012 Predictor has been developed as a SharePoint Sandbox solution to support SharePoint Online (Office 365). Download the solution havivi.euro2012.wsp from the download page: Downloads. Upload this solution to your Site Collection via the solutions area. Click on Activate to make the web parts in the solution available for use in the Site C...
    - ????SDK for .Net 4.0+(OAuth2.0+??V2?API): ??V2?SDK???: SDK for .Net 4.X (remainder of description mis-encoded in the source)
    - .Net Code Samples: Code Samples: Code samples (SLNs).
    - LINQ_Koans: LinqKoans v.02: Cleaned up a bit
    - CommonLibrary.NET: CommonLibrary.NET 0.9.8 - Final Release: A collection of very reusable code and components in C# 4.0 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. Fluentscript: CommonLibrary.NET 0.9.8 contains a scripting language called FluentScript. Application: FluentScript; Version: 0.9.8; Build: 0.9.8.4; Changeset: 75050 (CommonLibrary.NET); Release date: May 24, 2012; Binaries: CommonLibrary.dll; Namespace: ComLib.Lang; Project site: http://fluentscript.codeplex.com...
    - JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0 RC1 Refresh 2: JayData is a unified data access library for JavaScript developers to query and update data from different sources like webSQL, indexedDB, OData, Facebook or YQL. See it in action in this 6 minute video: http://www.youtube.com/watch?v=LlJHgj1y0CU RC1 R2 release highlights: Knockout.js integration. Using the Knockout.js module, your UI can be automatically refreshed when the data model changes, so you can develop the front-end of your data manager app even faster. Querying 1:N relations in W...

    New Projects

    - 5Widgets: 5Widgets is a framework for building HTML5 canvas interfaces. Written in JavaScript, 5Widgets consists of a library of widgets and a controller that implements the MVC pattern. Though the HTML5 standard is gaining popularity, there is no framework like this at the moment. Yet, as a professional developer, I know many, including myself, would really find such a library useful. I have uploaded my initial code, which can definitely be improved since I have not had the time to work on it fu...
    - Azure Trace Listener: Simple Trace Listener outputting trace data directly to Windows Azure Queue or Table Storage. Unlike the Windows Azure Diagnostics Listener (WAD), logging happens immediately and does not rely on a (scheduled or manually triggered) Log Transfer mechanism. A simple Reader application shows how to read trace entries and can be used as is or as a base for more advanced scenarios.
    - CodeSample2012: Code Sample is a Windows tool for saving pieces of code
    - Encryption: The goal of the Encryption project is to provide solid, high quality functionality that aims at taking the complexity out of using the System.Security.Cryptography namespace. The first pass of this library provides a very strong password encryption system. It uses variable length random salt bytes with secure SHA512 cryptographic hashing functions to allow you to provide a high level of security to the users.
    - Entity Framework Code-First Automatic Database Migration: The Entity Framework Code-First Automatic Database Migration tool was designed to help developers easily update their database schema while preserving their data when they change their POCO objects. This is not meant to take the place of Code-First Migrations. This project is simply designed to ease the development burden of database changes. It grew out of the desire to not have to delete, recreate, and seed the database every time I made an object model change.
    - Function Point Plugin: Function Point Traceability Mapper VSIX for Visual Studio 2010/TFS 2010+
    - FunkOS: dcpu16 operating system
    - Git for WebMatrix: This is a WebMatrix Extension that allows users to access Git Source Control functions.
    - Groupon Houses: the groupon site for houses
    - Liquifier - Complete serialisation/deserialisation for complex object graphs: Liquifier is a serialisation/deserialisation library for preserving object graphs with references intact. Liquifier uses attributes and interfaces to allow the user to control how a type is serialised - the aim is to free the user from having to write code to serialise and deserialise objects, especially large or complex graphs in which this is a significant undertaking.
    - MTACompCommEx032012: lak lak lak
    - MVC Essentials: MVC Essentials is aimed to gather all my learning from the projects that I have worked on.
    - MyWireProject: This project manages wireless networks.
    - Peulot Heshbon - Hebrew: This program is for teaching young students math, up to 6th grade. The program gives questions to the user. The user needs to answer the question. After 10 questions the user gets his mark. The marks are saved and can be viewed from every run.
    - PlusOne: A .NET Extension and Utility Library: PlusOne is a library of extension and utility methods for C#.
    - Project Support: This project is a simple project management solution. It will allow you to manage your clients, track bug reports, request additional features for projects that you are currently working on, and much more. A client will be allowed to have multiple users, so that you can track who has made reports etc. and provide them feedback. The solution is set up so that if you require, you can modify the styling to fit your company's needs with ease; you can even have multiple styles that can be set ...
    - SharePoint 2010 Slide Menu Control: Navigation control for building a SharePoint slide menu
    - SIGO: The SiGO project (Sistema de Gerenciamento Odontologico, a dental management system) has a complex scope with several programs and routines, comprising the SAC and CRM modules, finance, inventory, and other items. Coordinator Heitor F Neto: the SiGO project developed here on CodePlex is open source and will be only a prototype; once the basic modules are developed, it will be migrated to a paid server where the source code and database are secured. Pessoa...
    - STEM123: Windows Phone 7 application to help people find and create STEM Topic details.
    - TIL: Text Injection and Templating Library: An advanced alternative to string.Format() that encapsulates templates in objects, uses named tokens, and lets you define extra serializing/processing steps before injecting input into the template (think, join an array, serialize an object, etc).
    - UAH Exchange: Ukrainian hryvnia currency exchange
    - uberBook: uberBook is a contact manager that lives in the tray. The program synchronizes all contacts with the internet and automatically searches for social network profiles of your contacts.
    - WPF Animated GIF: A simple library to display animated GIF images in WPF, usable in XAML or in code.

    Read the article

  • Free workshop: Discover the structured and unstructured data exploration solution

    - by David lefranc
    Explore and discover information… We offer a discovery workshop to let you explore any type of data with the Oracle Endeca Information Discovery solution. When: December 7, 2012, from 9:30 AM to 12:30 PM. Where: Oracle, 15 Boulevard Charles de Gaulle, 92715 Colombes. To register: [email protected] Designed for business users, this half-day workshop will let you discover Oracle Endeca Information Discovery in order to:

    - Understand and explore any information coming from different horizons (big data, social networks, forums, surveys, blogs...)
    - Discover how and why OEID complements classic BI solutions
    - Through simple, fast navigation, see how easy it is to find answers to unanticipated questions using OEID without prior training
    - Use search and guided navigation to see how structured and unstructured information can be brought together quickly to reveal hidden value
    - Explore all your data in any format and from any source, including social media, documents, files, ...
    - Discover and explore your data without a predefined repository, allowing users to be autonomous and analyze their own data quickly
    - Develop a strategy to increase the value of enterprise data while reducing total cost of ownership
    - Experience the incredible performance of Endeca on Oracle Exalytics, the in-memory machine

    Agenda: After an introduction to the Oracle Endeca Information Discovery solution, followed by a hands-on workshop, you will see how easy it is to:

    - Use guided navigation and the search engine to explore structured and unstructured data
    - Quickly integrate new data sources such as social media
    - Build new user interfaces while discovering information
    - Respond quickly to the changing needs of businesses and data environments

    Read the article

  • The Stub Proto: Not Just For Stub Objects Anymore

    - by user9154181
    One of the great pleasures of programming is to invent something for a narrow purpose, and then to realize that it is a general solution to a broader problem. In hindsight, these things seem perfectly natural and obvious. The stub proto area used to build the core Solaris consolidation has turned out to be one of those things.

    As discussed in an earlier article, the stub proto area was invented as part of the effort to use stub objects to build the core ON consolidation. Its purpose was merely as a place to hold stub objects. However, we keep finding other uses for it. It turns out that the stub proto is more properly thought of as an auxiliary place to put things that we would like to put into the proto to help us build the product, but which we do not wish to package or deliver to the end user. Stub objects are one example, but private lint libraries, header files, archives, and relocatable objects are all examples of things that might profitably go into the stub proto.

    Without a stub proto, these items were handled in a variety of ad hoc ways:

    - If one part of the workspace needed private header files, libraries, or other such items, it might modify its Makefile to reach up and over to the place in the workspace where those things live and use them from there. There are several problems with this:
        - Each component invents its own approach, meaning that programmers maintaining the system have to invest extra effort to understand what things mean. In the past, this has created makefile ghettos in which only the person who wrote the makefiles feels confident to modify them, while everyone else ignores them. This causes many difficulties and benefits no one.
        - These interdependencies are not obvious to the make utility, and can lead to races.
        - They are not obvious to the human reader, who may therefore not realize that they exist, and break them.
    - Our policy in ON is not to deliver files into the proto unless those files are intended to be packaged and delivered to the end user. However, sometimes non-shipping files were copied into the proto anyway, causing a different set of problems:
        - It requires a long list of exceptions to silence our normal unused-proto-item error checking.
        - In the past, we have accidentally shipped files that we did not intend to deliver to the end user.
        - Mixing cruft with valuable items makes it hard to discern which is which.

    The stub proto area offers a convenient and robust solution. Files needed to build the workspace that are not delivered to the end user can instead be installed into the stub proto. No special exceptions or custom make rules are needed, and the intent is always clear. We are already accessing some private lint libraries and compilation symlinks in this manner. Ultimately, I'd like to see all of the files in the proto that have a packaging exception delivered to the stub proto instead, and the elimination of all existing special-case makefile rules. This would include shared objects, header files, and lint libraries. I don't expect this to happen overnight; it will be a long-term, case-by-case project, but the overall trend is clear.

    The Stub Proto, -z assert-deflib, And The End Of Accidental System Object Linking

    We recently used the stub proto to solve an annoying build issue that goes back to the earliest days of Solaris: how to ensure that we're linking to the OS bits we're building instead of to those from the running system.
    The Solaris product is made up of objects and files from a number of different consolidations, each of which is built separately from the others from an independent code base called a gate. The core Solaris OS consolidation is ON, which stands for "Operating System and Networking". You will frequently also see ON called the OSnet. There are consolidations for X11 graphics, the desktop environment, open source utilities, compilers and development tools, and many others. The collection of consolidations that make up Solaris is known as the "Wad Of Stuff", usually referred to simply as the WOS.

    None of these consolidations is self-contained. Even the core ON consolidation has some dependencies on libraries that come from other consolidations. The build server used to build the OSnet must be running a relatively recent version of Solaris, which means that its objects will be very similar to the new ones being built. However, it is necessarily true that the build system objects will always be a little behind, and that incompatible differences may exist.

    The objects built by the OSnet link to other objects. Some of these dependencies come from the OSnet, while others come from other consolidations. The objects from other consolidations are provided by the standard library directories on the build system (/lib, /usr/lib). The objects from the OSnet itself are supposed to come from the proto areas in the workspace, and not from the build server.

    In order to achieve this, we make use of the -L command line option to the link-editor. The link-editor finds dependencies by looking in the directories specified by the caller using the -L command line option. If the desired dependency is not found in one of these locations, ld will then fall back to looking at the default locations (/lib, /usr/lib). In order to use OSnet objects from the workspace instead of the system, while still accessing non-OSnet objects from the system, our Makefiles set -L link-editor options that point at the workspace proto areas.

    In general, this works well and dependencies are found in the right places. However, there have always been failures:

    - Building objects in the wrong order might mean that an OSnet dependency hasn't been built before an object that needs it. If so, the dependency will not be seen in the proto, and the link-editor will silently fall back to the one on the build server.
    - Errors in the makefiles can wipe out the -L options that our top-level makefiles establish to cause ld to look at the workspace proto first. In this case, all objects will be found on the build server.

    These failures were rarely if ever caught. As I mentioned earlier, the objects on the build server are generally quite close to the objects built in the workspace. If they offer compatible linking interfaces, then the objects that link to them will behave properly, and no issue will ever be seen. However, if they do not offer compatible linking interfaces, the failure modes can be puzzling and hard to pin down. Either way, there won't be a compile-time warning or error.

    The advent of the stub proto eliminated the first type of failure. With stub objects, there is no dependency ordering, and the necessary stub object dependency will always be in place for any OSnet object that needs it. However, makefile errors do still occur, and so the second form of error was still possible.
    While working on the stub object project, we realized that the stub proto was also the key to solving the second form of failure, the one caused by makefile errors:

    - Due to the way we set the -L options to point at our workspace proto areas, any valid object from the OSnet should be found via a path specified by -L, and not from the default locations (/lib, /usr/lib).
    - Any OSnet object found via the default locations means that we've linked to the build server, which is an error we'd like to catch.
    - Non-OSnet objects don't exist in the proto areas, and so are found via the default paths. However, if we were to create a symlink in the stub proto pointing at each non-OSnet dependency that we require, then the non-OSnet objects would also be found via the paths specified by -L, and not from the link-editor defaults.
    - Given the above, we should not find any dependency objects from the link-editor defaults. Any dependency found via the link-editor defaults means that we have a Makefile error, and that we are linking to the build server inappropriately.

    All we need to make use of this fact is a linker option to produce a warning when it happens. Although warnings are nice, we in the OSnet have a zero tolerance policy for build noise. The -z fatal-warnings option that was recently introduced with -z guidance can be used to turn the warnings into fatal build errors, forcing the programmer to fix them. This was too easy to resist. I integrated

        7021198 ld option to warn when link accesses a library via default path
        PSARC/2011/068 ld -z assert-deflib option

    into snv_161 (February 2011), shortly after the stub proto was introduced into ON. This putback introduced the -z assert-deflib option to the link-editor:

        -z assert-deflib=[libname]

        Enables warning messages for libraries specified with the -l command
        line option that are found by examining the default search paths
        provided by the link-editor. If a libname value is provided, the
        default library warning feature is enabled, and the specified library
        is added to a list of libraries for which no warnings will be issued.
        Multiple -z assert-deflib options can be specified in order to specify
        multiple libraries for which warnings should not be issued. The
        libname value should be the name of the library file, as found by the
        link-editor, without any path components.

    For example, the following enables default library warnings, and excludes the standard C library:

        ld ... -z assert-deflib=libc.so ...

    -z assert-deflib is a specialized option, primarily of interest in build environments where multiple objects with the same name exist and tight control over the library used is required. It is not intended for general use.

    Note that the definition of -z assert-deflib allows for exceptions to be specified as arguments to the option. In general, the idea of using a symlink from the stub proto is superior because it does not clutter up the link command with a long list of objects. When building the OSnet, we usually use the plain form of -z assert-deflib, and make symlinks for the non-OSnet dependencies. The exceptions to this are dependencies supplied by the compiler itself, which are usually found at whatever arbitrary location the compiler happens to be installed at. To handle these special cases, the command line version works better.

    Following the integration of the link-editor change, I made use of -z assert-deflib in OSnet builds with

        7021896 Prevent OSnet from accidentally linking to build system

    which integrated into snv_162 (March 2011).
    Turning on -z assert-deflib exposed between 10 and 20 existing errors in our Makefiles, all of which were fixed in the same putback. The errors we found underscore how difficult such mistakes are to prevent without an automatic system in place to catch them.

    Conclusions

    The stub proto is proving to be a generally useful construct for ON builds that goes beyond serving as a place to hold stub objects. Although invented to hold stub objects, it has already allowed us to simplify a number of previously difficult situations in our makefiles and builds. I expect that we'll find uses for it beyond those described here as we go forward.

    Read the article

  • No way to use OpenDNS on a PPPoE connection?

    - by magisterludi
    I have an old Speedtouch USB modem (revision 0), and on my desktop with Xubuntu 12.04 I've configured a PPPoE connection. I can connect, and my ISP assigns an IP address and the DNS servers, but the primary DNS address is not reachable by ping; the secondary one is, but it resolves no addresses, so I can't surf the web. I therefore want to set the OpenDNS servers, but there seems to be no way to do it. If I change /etc/resolv.conf manually, it is rewritten by some script (the configuration script has the usepeerdns flag; if I remove it, no DNS server gets assigned at all, because resolv.conf is not read). This happens even if I make the file non-writable by changing its default permissions. I added to dhclient.conf the line

        prepend domain-name-servers 208.67.222.222,208.67.220.220;

    and now, if I connect to my router over wifi, I'm using the OpenDNS servers. But as far as I can see, ppp does not use this file, and the DNS server is always the one set by my ISP. How can I set the DNS manually for a PPP connection? Is there any way to change it after connecting? Why is NetworkManager unable to manage my DSL connection? It does not seem able to handle the USB DSL modem. If I use pppoeconf, NetworkManager doesn't start, and I have to manually delete the lines added to /etc/network/interfaces, because the system is not able to start with the full network configuration. Strangely, if I connect a modem-router to the same line, I can surf with the DNS server assigned by my ISP; I can't figure out why. Any suggestions? Thanks to all.

    Read the article

  • Single Responsibility Principle - How Can I Avoid Code Fragmentation?

    - by Dean Chalk
    I'm working on a team where the team leader is a vehement advocate of SOLID development principles. However, he lacks a lot of experience in getting complex software out of the door. We have a situation where he has applied SRP to what was already quite a complex code base, which has now become very highly fragmented and difficult to understand and debug. We now have a problem not only with code fragmentation, but also with encapsulation: methods within a class that might have been private or protected have been judged to represent a 'reason to change' and have been extracted to public or internal classes and interfaces, which is not in keeping with the encapsulation goals of the application. We have some class constructors that take over 20 interface parameters, so our IoC registration and resolution is becoming a monster in its own right. I want to know if there is any 'refactor away from SRP' approach we could use to help fix some of these issues. I have read that it doesn't violate SOLID if I create a number of empty coarser-grained classes that 'wrap' a number of closely related classes to provide a single point of access to the sum of their functionality (i.e., mimicking a less overly SRP'd class implementation); a sketch of this idea follows below. Apart from that, I cannot think of a solution that would allow us to pragmatically continue with our development efforts while keeping everyone happy. Any suggestions?
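
    The 'wrapper' idea in the question is essentially the Facade pattern. A minimal C# sketch of such a coarser-grained wrapper might look like the following; all class and interface names here are illustrative, not taken from any real codebase:

        using System;

        // Fine-grained, SRP-compliant pieces, each with a single reason to change.
        public class Order { /* domain properties elided */ }

        public interface IOrderValidator { bool Validate(Order order); }
        public interface IOrderPersister { void Save(Order order); }
        public interface IOrderNotifier { void NotifyShipped(Order order); }

        // A coarser-grained facade offering a single point of access to the sum
        // of their functionality, so callers (and the IoC container) take one
        // dependency instead of twenty.
        public class OrderService
        {
            private readonly IOrderValidator _validator;
            private readonly IOrderPersister _persister;
            private readonly IOrderNotifier _notifier;

            public OrderService(IOrderValidator validator,
                                IOrderPersister persister,
                                IOrderNotifier notifier)
            {
                _validator = validator;
                _persister = persister;
                _notifier = notifier;
            }

            // One public operation that coordinates the fine-grained parts.
            public void Ship(Order order)
            {
                if (!_validator.Validate(order))
                    throw new InvalidOperationException("Order failed validation.");
                _persister.Save(order);
                _notifier.NotifyShipped(order);
            }
        }

    The facade itself stays empty of business logic, so it does not re-introduce an SRP violation; it only groups related collaborators behind one entry point.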

    Read the article

  • Be liberal in what you accept... or not?

    - by Matthieu M.
    [Disclaimer: this question is subjective, but I would prefer answers backed by facts and/or reflection.] I think everyone knows about the Robustness Principle, usually summed up by Postel's Law: "Be conservative in what you send; be liberal in what you accept." I would agree that for the design of a widespread communication protocol this may make sense (with the goal of allowing easy extension); however, I have always thought that its application to HTML/CSS was a total failure, with each browser implementing its own silent tweak detection and behavior, making it nearly impossible to obtain consistent rendering across multiple browsers. I do notice, though, that the RFC for the TCP protocol deems "silent failure" acceptable unless otherwise specified, which is an interesting behavior, to say the least. There are other examples of the application of this principle throughout the software trade that regularly pop up because they have bitten developers; off the top of my head: JavaScript semicolon insertion, and C's (silent) built-in conversions (which would not be so bad if they did not truncate...). There are also tools to help implement "smart" behavior: phonetic name-matching algorithms (Double Metaphone) and string-distance algorithms (Levenshtein distance). However, I find that this approach, while it may be helpful when dealing with non-technical users or to help users recover from errors, has some drawbacks when applied to the design of a library or class interface: it is somewhat subjective whether the algorithm guesses "right", and thus it may go against the Principle of Least Astonishment; it makes the implementation more difficult, with more chances to introduce bugs (a violation of YAGNI?); and it makes the behavior more susceptible to change, as any modification of the "guess" routine may break old programs, nearly excluding refactoring possibilities... from the start! And this is what led me to the following question: when designing an interface (library, class, message), do you lean toward the robustness principle or not? I myself tend to be quite strict, using extensive input validation on my interfaces, and I was wondering if I was perhaps too strict.
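
    As a concrete illustration of the strict end of the spectrum, here is a minimal C# sketch of the kind of extensive input validation the question describes; the type and its rules are invented for the example. The constructor rejects anything ambiguous rather than guessing:

        using System;

        public class TemperatureReading
        {
            public string SensorId { get; private set; }
            public double Celsius { get; private set; }

            public TemperatureReading(string sensorId, double celsius)
            {
                // Be conservative: fail loudly instead of guessing what the caller meant.
                if (string.IsNullOrWhiteSpace(sensorId))
                    throw new ArgumentException("Sensor id is required.", "sensorId");
                if (double.IsNaN(celsius) || double.IsInfinity(celsius))
                    throw new ArgumentOutOfRangeException("celsius", "Reading must be a finite number.");
                if (celsius < -273.15)
                    throw new ArgumentOutOfRangeException("celsius", "Reading is below absolute zero.");

                SensorId = sensorId;
                Celsius = celsius;
            }
        }

    A liberal version might trim, coerce, or clamp these inputs instead; the trade-off is exactly the one the question raises: friendlier one-off usage versus predictable, astonishment-free behavior over time.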

    Read the article

  • Kubuntu 12.04 - DNS Issues

    - by AndrewJesaitis
    Starting yesterday (6/11/12), I've been having many network problems. When requesting a page in Chrome, the page hangs on "Sending request" and eventually times out. I'm within a VPN that has its own DNS server. I've tried to set my DNS manually, both through the Network-Manager applet and by editing /etc/network/interfaces. Having no luck, I unlinked the resolv.conf file and dumped the contents of my old resolv.conf into it. Again having no luck, I deactivated the dnsmasq server in /etc/NetworkManager/NetworkManager.conf by commenting out the dns=dnsmasq line:

        $ cat NetworkManager.conf
        [main]
        plugins=ifupdown,keyfile
        #dns=dnsmasq
        no-auto-default=D0:67:E5:EA:B6:6B,

        [ifupdown]
        managed=false

        $ nm-tool
        NetworkManager Tool
        State: connected (global)
        - Device: eth0 [Wired connection 1] -------------------------------------------
          Type: Wired
          Driver: tg3
          State: connected
          Default: yes
          HW Address: D0:67:E5:EA:B6:6B
          Capabilities:
            Carrier Detect: yes
            Speed: 1000 Mb/s
          Wired Properties
            Carrier: on
          IPv4 Settings:
            Address: 192.168.254.122
            Prefix: 24 (255.255.255.0)
            Gateway: 192.168.254.2
            DNS: 192.168.254.1

    What is strange is that the network will work fine for a few minutes and then start to time out; a few minutes later it will work again. I'm unable to reach internal or external sites while it is timing out. When I run dig against local sites, I receive no answer; I do receive an answer for google.com. At this point I would usually blame the DNS server, especially since things work when I change to Google's DNS server. But I need to use our internal DNS to reach our internal sites. Nobody else is having issues, and they are all using DHCP; this group includes one user who is running 11.04. At this point I'm at a loss for what to do, so any help would be appreciated.

    Read the article

  • How should a developer reject impossible requirements?

    - by sugar
    Here's the problem I'm facing: Quote From Project Manager: Hey Sugar, I'm assigning you the task of developing a framework that could be used for many different iOS applications. Here are the requirements: It should be able to detect the thickness of the thumb or fingers being used to manipulate the UI. With this information, all elements of the UI should be arranged & sized automatically. For a larger thumb, elements should be arranged nearer the center of the screen. For a smaller thumb, elements should be arranged nearer the corners of the screen. For a larger thumb, all fonts should be smaller. (We're assuming an adult in this case.) For a smaller thumb, all fonts should be larger. (We're assuming a younger person in this case.) Summary: This framework is required for creating user-friendly user interfaces programmatically. The framework should be developed in such a way that we can use for as many projects as needed, so it must also be very developer-friendly. I am the developer given this task, so my questions are as follows: How can I explain that these requirements are a little ridiculous? How can I explain that it would be better to concentrate on developing actual projects? How can I explain that even if this were possible, I wouldn't recommended developing such a thing? How do I say NO to this project politely, gently, and respectfully? How can I explain that even for a developer with 3 years of experience, this might not be possible?

    Read the article

  • Install VirtualBox Guest Additions "Feisty Fawn"

    - by codebone
    I am trying to install the VirtualBox guest additions into a Feisty Fawn (7.04) VM. The problem I am running into is that when I click "Install Guest Additions", or place the ISO in the drive manually, the disk never shows up in the machine. I even mounted it manually and went to the mounted directory, and it was empty. When using the file browser, double-clicking on the CD-ROM yields: "Unable to mount the selected volume. mount: special device /dev/hdc does not exist." Secondly, I tried installing via the repository, but according to the system I have no internet connection; yet I can browse the web using Firefox just fine in the guest. Here is my /etc/network/interfaces file:

        auto lo
        iface lo inet loopback
        auto eth0
        iface eth0 inet dhcp

    I also tried setting a static IP:

        auto lo
        iface lo inet loopback
        auto eth0
        iface eth0 inet static
        address 192.168.1.253
        netmask 255.255.255.0
        gateway 192.168.1.1

    The reason I am trying to get this particular installation running is that it is part of a book, "Hacking: The Art of Exploitation" (Jon Erickson), and is fully loaded with tools and such to go along exactly with the book. I appreciate any effort toward finding a solution for this! Thanks

    Read the article
