Search Results

Search found 13331 results on 534 pages for 'fluent interface'.


  • NHibernate's automatic (dirty checking) update behaviour - turning it off

    - by Andrew Bullock
    I've just discovered that if I get an object from an NHibernate session and change a property on the object, NHibernate will automatically update the object on commit without me calling Session.Update(myObj)! I can see how this could be helpful, but as default behaviour it seems crazy! How can I stop this happening? Is this default NHibernate behaviour, or something coming from Fluent NHibernate's AutoPersistenceModel? If there's no way to stop this, what do I do? Unless I'm missing the point, this behaviour seems to create a right mess, violating my UoW. I'm using NHibernate 2.0.1.4 and a Fluent NHibernate build from 18/3/2009. Edit: is this guy right with his answer? Edit: I've also read that overriding an Event Listener could be a solution to this. However, IDirtyCheckEventListener.OnDirtyCheck isn't called in this situation. Does anyone know which listener I need to override? Thanks, Andrew
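
    This automatic dirty checking is core NHibernate behaviour, not something added by Fluent NHibernate's AutoPersistenceModel. A minimal sketch of the two usual escape hatches, assuming an open ISession named session and a placeholder Person entity with a placeholder id (NHibernate 2.x API):

        // 1) Make flushing explicit for the whole session: with FlushMode.Never,
        //    neither transaction commit nor queries trigger an automatic flush,
        //    so only a Flush() you call yourself writes changes.
        session.FlushMode = FlushMode.Never;

        var person = session.Get<Person>(id);
        person.Name = "changed only in memory"; // dirty, but no UPDATE is issued

        // 2) Alternatively, detach the single object so the session stops
        //    tracking it; an evicted object is never dirty-checked at flush.
        session.Evict(person);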


  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata “To Roman Numerals”, and whether ITDD wasn't a violation of TDD's principle of leaving out “advanced topics like mocks”. I'd like to respond to his questions with this article; there's more to say than fits into a comment.

    Mocks and TDD

    I don't see how TDD avoids or is opposed to mocks. TDD and mocks are orthogonal: TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource.

    True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for “expensive” resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced”, or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for “To Roman Numerals”

    gustav asked for the kata “To Roman Numerals”. I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase; it should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals”, as written in this “requirement document”, is not enough to start software development. Coding should only begin if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer.

    Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations: TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities.

    In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but there are no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a Wikipedia article. They are supposed to be just “typical examples” without special meaning.

    Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that: the domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution-finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD: they're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody.

    Before you code, you'd better have a plan for what to code. This does not mean you have to do “Big Design Up-Front”. It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited, so the logical solution does not need to be limited or preliminary or tentative either. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic: each “digit” can be taken at face value; no multiplication with a base is required.

    But what about IV, which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult, because here some “digits” get subtracted. Here's the list of roman “digits” with their values:

    {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as “digits”, translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-step process:

    1. Find all the factors
    2. Translate the factors found
    3. Compile the roman representation

    Translation is just a look-up. Finding, though, needs some calculation:

    1. Find the highest remaining factor fitting in the value
    2. Remember it and subtract it from the value
    3. Repeat with the remaining value and the remaining factors

    Please note: this is just an algorithm, not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand, I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test:

    1. Test the translation
    2. Test the compilation
    3. Test finding the factors

    Testing the translation primarily means checking whether the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red because there's no implementation even of the API. That's in conformance with “TDD lore”, I'd say: Next I implement the API: The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution: My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now – one by one.

    I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition: As you can see, I dare to test a private method. Yes, that's a white-box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here's the implementation to satisfy the test: It's as simple as possible, just how TDD wants me to do it: KISS.

    Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and for doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science: Usually I would have implemented the final code right away. Splitting it into two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mind's eye – others need to work their way towards it. Having the two tests I find more important.

    Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess. And normally I would not even have bothered to write that one, because the implementation is so simple; I don't need to test .NET framework functionality. But again: if it serves the educational purpose…

    Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions: Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function: There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor, which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied: Great, I can move on.

    Now for more than a single factor: Interestingly, not just one test becomes green now, but all of them. Great! You might say, then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple – even simpler than the recursion of which I had thought briefly during the problem-solving phase. And by the way: the acceptance tests went green too. Mission accomplished – at least functionality-wise.

    Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in a top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required – but maybe in a different way than you would expect. That's why I rather call it “cleanup”.

    First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small change in Find_factors(): And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me; they only have temporary value; they are brittle. Only acceptance tests need to remain. However, I carry over the single-“digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class: And this is the final production code: Test coverage as reported by NCrunch is 100%:

    Reflection

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution: the leaves of the functional decomposition tree.

    Where things became fuzzy because the design did not cover any more details (as with Find_factors()), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages:

    1. Encapsulation of the implementation details was not compromised. Private methods could naturally stay private; I did not need to make them internal or public just to be able to test them.
    2. I was able to write focused tests for small aspects of the solution. There was no need to test everything through the solution root, the API.

    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution – manifest the solution in code. Writing tests first is a good practice, but it should not be taken dogmatically, and above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
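
    Since the post's code lives in screenshots, here is a compact C# sketch of the design it describes: the factor map, Find_factors(), Translate(), and Compile() behind the string ToRoman(int arabic) API. Treat it as a reconstruction, not the author's exact code:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class ArabicRomanConversions
        {
            // The roman "digits" and their values, in descending order.
            private static readonly int[] Values =
                { 1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1 };
            private static readonly string[] Digits =
                { "M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I" };

            public static string ToRoman(int arabic)
            {
                return Compile(Translate(Find_factors(arabic)));
            }

            // Find the highest remaining factor fitting in the value, subtract it,
            // repeat; the same factor may occur more than once (2000 = [M, M]).
            private static IEnumerable<int> Find_factors(int value)
            {
                foreach (int factor in Values)
                {
                    while (value >= factor)
                    {
                        yield return factor;
                        value -= factor;
                    }
                }
            }

            // Translation is just a look-up in the map.
            private static IEnumerable<string> Translate(IEnumerable<int> factors)
            {
                return factors.Select(f => Digits[Array.IndexOf(Values, f)]);
            }

            // Compilation concatenates the roman "digits".
            private static string Compile(IEnumerable<string> digits)
            {
                return string.Concat(digits);
            }
        }

    For example, ArabicRomanConversions.ToRoman(1954) finds the factors M, CM, L, IV and yields "MCMLIV".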


  • How do I add a custom view to an iPhone app's UI?

    - by Dr Dork
    I'm diving into iPad development and I'm still learning how everything works together. I understand how to add standard views (e.g. buttons, table views, date pickers, etc.) to my UI using both Xcode and Interface Builder, but now I'm trying to add a custom calendar control (TapkuLibrary) to the left window in my UISplitView application. My question is: if I have a custom view (in this case, the TKCalendarMonthView), how do I programmatically add it to one of the views in my UI (in this case, the RootViewController)? Below are some relevant code snippets from my project...

    RootViewController interface:

    @interface RootViewController : UITableViewController <NSFetchedResultsControllerDelegate> {
        DetailViewController *detailViewController;
        NSFetchedResultsController *fetchedResultsController;
        NSManagedObjectContext *managedObjectContext;
    }

    @property (nonatomic, retain) IBOutlet DetailViewController *detailViewController;
    @property (nonatomic, retain) NSFetchedResultsController *fetchedResultsController;
    @property (nonatomic, retain) NSManagedObjectContext *managedObjectContext;

    - (void)insertNewObject:(id)sender;

    TKCalendarMonthView interface:

    @class TKMonthGridView, TKCalendarDayView;
    @protocol TKCalendarMonthViewDelegate, TKCalendarMonthViewDataSource;

    @interface TKCalendarMonthView : UIView {
        id <TKCalendarMonthViewDelegate> delegate;
        id <TKCalendarMonthViewDataSource> dataSource;
        NSDate *currentMonth;
        NSDate *selectedMonth;
        NSMutableArray *deck;
        UIButton *left;
        NSString *monthYear;
        UIButton *right;
        UIImageView *shadow;
        UIScrollView *scrollView;
    }

    @property (readonly, nonatomic) NSString *monthYear;
    @property (readonly, nonatomic) NSDate *monthDate;
    @property (assign, nonatomic) id <TKCalendarMonthViewDataSource> dataSource;
    @property (assign, nonatomic) id <TKCalendarMonthViewDelegate> delegate;

    - (id) init;
    - (void) reload;
    - (void) selectDate:(NSDate *)date;

    Thanks in advance for all your help! I still have a ton to learn, so I apologize if the question is absurd in any way. I'm going to continue researching this question right now!


  • Observer pattern used with decorator pattern

    - by icelated
    I want to make a program that does an order entry system for beverages (I will probably do description and cost). I want to use the Decorator pattern and the Observer pattern. I made a UML drawing and saved it as a pic for easy viewing. This site won't let me upload it as a Word doc, so I have to upload a pic; I hope it's easily viewable. I need to know if I am doing the UML / design patterns correctly before moving on to the coding part. Beverage is my abstract component class. Espresso, HouseBlend, and DarkRoast are my concrete subject classes. I also have a condiment decorator class. Would Milk, Mocha, Soy, and Whip be my observers, because they would be interested in data changes to cost? That is, would Espresso, HouseBlend, etc. be my SUBJECT and the condiments be my OBSERVER? My theory is that cost is what changes, and that the condiments need to know about the changes. So: subject = Espresso, HouseBlend, DarkRoast, etc. (they hold cost()); observer = Milk, Mocha, Soy, Whip (they also hold cost()). The beverages would be the concrete components, and Milk, Mocha, Soy, and Whip would be the decorators. So, following good software engineering practices ("design to an interface, not an implementation" and "identify the things that change from those that don't"), would I need a CostBehavior interface? If you look at the UML you will see where I am going with this and whether I am implementing the Observer + Decorator patterns correctly. I think the decorator part is correct. Since the pic is not very viewable, I will detail the classes here: Beverage class (register observer, remove observer, notify observer, description); concrete beverage classes Espresso, HouseBlend, DarkRoast, Decaf (cost, getDescription, setCost, costChanged); observer interface (update) // cost?; CostBehavior interface (cost) // since this changes?; condiment decorator class (getDescription); and the concrete classes linked to the two interfaces and the decorator: Milk, Mocha, Soy, Whip (cost, getDescription, update); these are my decorator/wrapper classes. Thank you. Is there a way to make this picture bigger?
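
    To make the decorator half concrete, a minimal C# sketch (all names are hypothetical). Note how each condiment computes cost by delegating to the beverage it wraps; that delegation chain is arguably why the condiments do not need to observe cost changes at all:

        using System;

        public abstract class Beverage
        {
            public abstract string GetDescription();
            public abstract decimal Cost();
        }

        public class Espresso : Beverage
        {
            public override string GetDescription() { return "Espresso"; }
            public override decimal Cost() { return 1.99m; }
        }

        // The decorator is itself a Beverage and wraps another Beverage.
        public abstract class CondimentDecorator : Beverage
        {
            protected readonly Beverage Inner;
            protected CondimentDecorator(Beverage inner) { Inner = inner; }
        }

        public class Mocha : CondimentDecorator
        {
            public Mocha(Beverage inner) : base(inner) { }
            public override string GetDescription() { return Inner.GetDescription() + ", Mocha"; }
            public override decimal Cost() { return Inner.Cost() + 0.20m; } // adds its own cost
        }

        // Usage: new Mocha(new Espresso()).Cost() == 2.19m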


  • Using IoC and Dependency Injection, how do I wrap an existing implementation with a new layer of implementation?

    - by Dividnedium
    I'm trying to figure out how this would be done in practice, so as not to violate the Open/Closed Principle. Say I have a class called HttpFileDownloader that has one function that takes a URL and downloads a file, returning the HTML as a string. This class implements an IFileDownloader interface which just has the one function. So all over my code I have references to the IFileDownloader interface, and I have my IoC container returning an instance of HttpFileDownloader whenever an IFileDownloader is resolved. Then, after some use, it becomes clear that occasionally the server is too busy and an exception is thrown. I decide that to get around this, I'm going to auto-retry 3 times if I get an exception, and wait 5 seconds in between each retry. So I create HttpFileDownloaderRetrier, which has one function that uses HttpFileDownloader in a for loop with a maximum of 3 iterations and a 5 second wait after each failure. So that I can test the "retry" and "wait" abilities of the HttpFileDownloaderRetrier, I have the HttpFileDownloader dependency injected by having the HttpFileDownloaderRetrier constructor take an IFileDownloader. So now I want all resolving of IFileDownloader to return the HttpFileDownloaderRetrier. But if I do that, then HttpFileDownloaderRetrier's IFileDownloader dependency will get an instance of itself and not of HttpFileDownloader. I can see that I could create a new interface for HttpFileDownloader called IFileDownloaderNoRetry, and change HttpFileDownloader to implement that. But that means I'm changing HttpFileDownloader, which violates Open/Closed. Or I could implement a new interface for HttpFileDownloaderRetrier called IFileDownloaderRetrier, and then change all my other code to refer to that instead of IFileDownloader. But again, I'm now violating Open/Closed in all my other code. So what am I missing here? How do I wrap an existing implementation (downloading) with a new layer of implementation (retrying and waiting) without changing existing code? Here's some code if it helps:

    public interface IFileDownloader
    {
        string Download(string url);
    }

    public class HttpFileDownloader : IFileDownloader
    {
        public string Download(string url)
        {
            // Cut for brevity - downloads the file here and returns it as a string
            return html;
        }
    }

    public class HttpFileDownloaderRetrier : IFileDownloader
    {
        IFileDownloader fileDownloader;

        public HttpFileDownloaderRetrier(IFileDownloader fileDownloader)
        {
            this.fileDownloader = fileDownloader;
        }

        public string Download(string url)
        {
            Exception lastException = null;
            // Take 3 shots at pulling the URL, waiting 5 seconds after each failed attempt.
            for (int i = 0; i < 3; i++)
            {
                try
                {
                    return fileDownloader.Download(url);
                }
                catch (Exception ex)
                {
                    lastException = ex;
                    Utilities.WaitForXSeconds(5);
                }
            }
            throw lastException;
        }
    }
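
    A sketch of the answer usually given to this exact question: keep the single IFileDownloader interface and resolve the ambiguity in the composition root, where the concrete downloader is handed to the retrier explicitly (shown by hand here; most containers support the same via named registrations or decorator conventions):

        // Composition root sketch: the decorator wraps the concrete class directly,
        // so resolving IFileDownloader cannot recurse into the retrier itself.
        IFileDownloader downloader =
            new HttpFileDownloaderRetrier(new HttpFileDownloader());

        // Every consumer of IFileDownloader now gets retry-and-wait behaviour,
        // and neither existing class was modified, so Open/Closed holds.
        string html = downloader.Download("http://example.com");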


  • C# select clause returns a SystemException instead of the relevant object

    - by Kashif
    I am trying to use the select clause to pick out an object which matches a specified name field from a database query, as follows:

    objectQuery = from obj in objectList where obj.Equals(objectName) select obj;

    In the results view of my query, I get: base {System.SystemException} = {"Boolean Equals(System.Object)"}, where I should be expecting something like a Car, Make, or Model. Would someone please explain what I am doing wrong here? The method in question can be seen here:

    // this function searches the database's table for a single object that matches the 'Name' property with 'objectName'
    public static T Read<T>(string objectName) where T : IEquatable<T>
    {
        using (ISession session = NHibernateHelper.OpenSession())
        {
            // pull (query) all the objects from the table in the database
            IQueryable<T> objectList = session.Query<T>();

            // return the number of objects in the table
            // alternative: int count = makeList.Count<T>();
            int count = objectList.Count();

            IQueryable<T> objectQuery = null; // create a reference for our queryable list of objects
            T foundObject = default(T);       // create an object reference for our found object

            if (count > 0)
            {
                // give me all objects that have a name that matches 'objectName' and store them in 'objectQuery'
                objectQuery = from obj in objectList where obj.Equals(objectName) select obj;

                // make sure that 'objectQuery' has only one object in it
                try
                {
                    foundObject = (T)objectQuery.Single();
                }
                catch
                {
                    return default(T);
                }

                // output some information to the console (output screen)
                Console.WriteLine("Read Make: " + foundObject.ToString());
            }

            // pass the reference of the found object on to whoever asked for it
            return foundObject;
        }
    }

    Note that I am using the interface "IEquatable<T>" in my method descriptor. An example of the classes I am trying to pull from the database is:

    public class Make : IEquatable<Make>
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
        public virtual IList<Model> Models { get; set; }

        public Make()
        {
            // this public no-argument constructor is required for NHibernate
        }

        public Make(string makeName)
        {
            this.Name = makeName;
        }

        public override string ToString()
        {
            return Name;
        }

        // Implementation of IEquatable<T> interface
        public virtual bool Equals(Make make)
        {
            if (this.Id == make.Id) { return true; } else { return false; }
        }

        // Implementation of IEquatable<T> interface
        public virtual bool Equals(String name)
        {
            if (this.Name.Equals(name)) { return true; } else { return false; }
        }
    }

    And the interface is described simply as:

    public interface IEquatable<T>
    {
        bool Equals(T obj);
    }
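
    A likely diagnosis (a sketch, not the accepted answer): objectName is a string, so the compiler cannot bind obj.Equals(objectName) to the custom IEquatable<T>.Equals(T) and falls back to System.Object.Equals(object), which is exactly the "Boolean Equals(System.Object)" that then shows up when the query provider fails to translate it. Querying on a property the provider can translate avoids this entirely; INamed below is a hypothetical interface exposing the Name column:

        public interface INamed
        {
            string Name { get; }
        }

        public static T Read<T>(string objectName) where T : class, INamed
        {
            using (ISession session = NHibernateHelper.OpenSession())
            {
                // "x.Name == objectName" translates cleanly to SQL,
                // unlike a call to Equals(object).
                return session.Query<T>()
                              .SingleOrDefault(x => x.Name == objectName);
            }
        }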


  • PHP 'instanceof' failing with class constant

    - by Nathan Loding
    I'm working on a framework that I'm trying to type as strongly as I possibly can. (I'm working within PHP and taking some of the ideas that I like from C# and trying to utilize them within this framework.) I'm creating a Collection class that is a collection of domain entities/objects; it's kinda modeled after the List<T> object in .NET. I've run into an obstacle that is preventing me from typing this class. If I have a UserCollection, it should only allow User objects into it. If I have a PostCollection, it should only allow Post objects. All Collections in this framework need to have certain basic functions, such as add, remove, iterate. I created an interface, but found that I couldn't do the following:

    interface ICollection {
        public function add($obj);
    }

    class PostCollection implements ICollection {
        public function add(Post $obj) {}
    }

    This broke its compliance with the interface. But I can't have the interface strongly typed, because then all Collections are of the same type. So I attempted the following:

    interface ICollection {
        public function add($obj);
    }

    abstract class Collection implements ICollection {
        const type = 'null';
    }

    class PostCollection extends Collection {
        const type = 'Post';
        public function add($obj) {
            if (!($obj instanceof self::type)) {
                throw new UhOhException();
            }
        }
    }

    When I attempt to run this code, I get syntax error, unexpected T_STRING, expecting T_VARIABLE or '$' on the instanceof statement. A little research into the issue suggests the root cause is that $obj instanceof self is valid for testing against the class itself; it appears that PHP doesn't process the entire self::type constant expression. Adding parentheses around self::type threw an error regarding an unexpected '('. An obvious workaround is to not make the type variable a constant: the expression $obj instanceof $this->type works just fine (if $type is declared as a variable, of course). I'm hoping there's a way to avoid that, as I'd like to define the value as a constant to avoid any possible change in the variable later. Any thoughts on how I can achieve this, or have I taken PHP to its limit in this regard? Is there a way of "escaping" or encapsulating self::type so that PHP won't die when processing it?


  • Across-process marshalling problem with an array of points

    - by ElMagn
    Hi All, we have what we think is a marshalling problem with a renderer object when called across process boundaries. The renderer is an ATL COM server with a COM object that implements the IPoints interface defined below:

    typedef [uuid(B0E01719-005A-427c-B9DD-B42A18E969AE)]
    struct Point {
        double X;
        double Y;
    } Point;

    [
        object,
        uuid(3BFECFE3-B4FB-4f14-8257-6E065D02E3B3),
        helpstring("IPoints Interface"),
        dual,
    ]
    interface IPoints : IDispatch {
        HRESULT DrawPolyLine([in] long hDC,
                             [in] short count,
                             [in, size_is(count)] Point * points);
        // many more like DrawLine
    }

    The count parameter represents the number of points, and the points parameter represents an array of the actual points. We have two processes running: a graphical display process (GDP) and a tabular (grid) display process (TDP). A factory in the GDP, written in C#, creates the renderer and the clients of the renderer in the GDP. When the clients call into the renderer, everything displays correctly. (The renderer is created at startup, BTW.) There is another factory in the TDP, written in VB6, that calls into the factory in the GDP to create the clients. When these clients call into the renderer, only the first point in the array is marshalled correctly; all the other points are garbage. It seems that the rendering works only when the client creation is started from the same process as the renderer.

    Now, I am not sure what the solution to this problem is. It seems that if we can guarantee that the clients are always created from a thread in the same GDP process as the renderer, then the points are marshalled correctly. We tried using a background thread from the Thread Pool in C# and it indeed worked. The problem is that the Windows Forms created from the clients stopped working, because accessing a form's controls from a thread other than the thread that created them is not allowed. We might change the calls to access the forms, but we have quite a few of them, so we are trying to look into a different solution that might involve making changes to the renderer. The other problem is that the renderer is legacy code and we can't just change the interface. I am wondering what we can do to the renderer's interface that would help with marshalling across process calls. Any ideas would be greatly appreciated. Regards, ElMagn


  • Minutia on Objective-C Categories and Extensions.

    - by Matt Wilding
    I learned something new while trying to figure out why my readwrite property declared in a private Category wasn't generating a setter. It was because my Category was named:

    // .m
    @interface MyClass (private)
    @property (readwrite, copy) NSArray* myProperty;
    @end

    Changing it to:

    // .m
    @interface MyClass ()
    @property (readwrite, copy) NSArray* myProperty;
    @end

    and my setter is synthesized. I now know that a Class Extension is not just another name for an anonymous Category. Leaving a Category unnamed causes it to morph into a different beast: one that now gives compile-time method implementation enforcement and allows you to add ivars. I now understand the general philosophies underlying each of these: Categories are generally used to add methods to any class at runtime, and Class Extensions are generally used to enforce private API implementation and to add ivars. I accept this. But there are trifles that confuse me.

    First, at a high level: why differentiate like this? These concepts seem like similar ideas that can't decide whether they are the same or different. If they are the same, I would expect exactly the same things to be possible with an unnamed Category as with a named Category (which they are not). If they are different (which they are), I would expect a greater syntactical disparity between the two. It seems odd to say, "Oh, by the way, to implement a Class Extension, just write a Category, but leave out the name. It magically changes."

    Second, on the topic of compile-time enforcement: if you can't add properties in a named Category, why does doing so convince the compiler that you did just that? To clarify, I'll illustrate with my example. I can declare a readonly property in the header file:

    // .h
    @interface MyClass : NSObject
    @property (readonly, copy) NSString* myString;
    @end

    Now, I want to head over to the implementation file and give myself private readwrite access to the property. If I do it correctly:

    // .m
    @interface MyClass ()
    @property (readwrite, copy) NSString* myString;
    @end

    I get a warning when I don't synthesize, and when I do, I can set the property and everything is peachy. But, frustratingly, if I happen to be slightly misguided about the difference between a Category and a Class Extension and I try:

    // .m
    @interface MyClass (private)
    @property (readwrite, copy) NSString* myString;
    @end

    the compiler is completely pacified into thinking that the property is readwrite. I get no warning, and not even the nice compile error "Object cannot be set - either readonly property or no setter found" upon setting myString that I would have gotten had I not declared the readwrite property in the Category. I just get the "Does not respond to selector" exception at runtime. If adding ivars and properties is not supported by (named) Categories, is it too much to ask that the compiler play by the same rules? Am I missing some grand design philosophy?


  • How to properly mix generics and inheritance to get the desired result?

    - by yamsha
    My question is not easy to explain using words; fortunately, it's not too difficult to demonstrate. So bear with me:

    public interface Command<R> {
        public R execute(); // parameter R is the type of object that will be returned as the result of the execution of this command
    }

    public abstract class BasicCommand<R> {
    }

    public interface CommandProcessor<C extends Command<?>> {
        public <R> R process(C<R> command); // this is my question... it's illegal to do, but you understand the idea behind it, right?
    }

    // constrain BasicCommandProcessor to commands that subclass BasicCommand
    public class BasicCommandProcessor implements CommandProcessor<C extends BasicCommand<?>> {
        // here, only subclasses of BasicCommand should be allowed as arguments, but these
        // BasicCommand objects should be parameterized by R, like so: BasicCommand<R>
        // so the method signature should really be
        //     public <R> R process(BasicCommand<R> command)
        // which would break the inheritance if the interface's method signature was instead:
        //     public <R> R process(Command<R> command);
        // I really hope this fully illustrates my conundrum
        public <R> R process(C<R> command) {
            return command.execute();
        }
    }

    public class CommandContext {
        public static void main(String... args) {
            BasicCommandProcessor bcp = new BasicCommandProcessor();
            String textResult = bcp.process(new BasicCommand<String>() {
                public String execute() {
                    return "result";
                }
            });
            Long numericResult = bcp.process(new BasicCommand<Long>() {
                public Long execute() {
                    return 123L;
                }
            });
        }
    }

    Basically, I want the generic "process" method to dictate the type of the generic parameter of the Command object. The goal is to be able to restrict different implementations of CommandProcessor to certain classes that implement the Command interface, and at the same time to be able to call the process method of any class that implements the CommandProcessor interface and have it return an object of the type specified by the parameterized Command object. I'm not sure if my explanation is clear enough, so please let me know if further explanation is needed. I guess the question is: "Would this be possible to do at all?" If the answer is "No", what would be the best work-around? (I thought of a couple on my own, but I'd like some fresh ideas.)


  • OpenVPN on Tomato and Vista - can't see my network

    - by Ian
    I followed the instructions here (http://todayguesswhat.blogspot.ca/2011/03/quick-simple-vpn-setup-guide-using.html) to set up a TCP connection to OpenVPN on my Tomato router. Used TCP because the place I usually surf at seems to have the other ports blocked. My Vista laptop is able to connect to the router but I don't appear to be getting an IP address. I'm able to access my router's admin page, but I can't see the network at home. When I browse to Whatsmyip I see my home IP. Here are the results of route print -4 when I'm just connect to the library and when I've fired up the VP connection as well: Library only: =========================================================================== Interface List 22 ...00 ff c4 a0 e7 5c ...... TAP-Win32 Adapter V9 15 ...00 23 4e 20 b3 64 ...... Atheros AR9281 Wireless Network Adapter 10 ...00 23 8b 39 ec 71 ...... Marvell Yukon 88E8040T PCI-E Fast Ethernet Controller 1 ........................... Software Loopback Interface 1 11 ...00 00 00 00 00 00 00 e0 isatap.{834A8A0A-5E2C-47D0-9673-7965DE8B5470} 14 ...02 00 54 55 4e 01 ...... Teredo Tunneling Pseudo-Interface 17 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #3 20 ...00 00 00 00 00 00 00 e0 6TO4 Adapter 18 ...00 00 00 00 00 00 00 e0 6TO4 Adapter 19 ...00 00 00 00 00 00 00 e0 6TO4 Adapter 23 ...00 00 00 00 00 00 00 e0 isatap.{C4A0E75C-765E-4F7D-A55C-77945779816A} 34 ...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #5 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 10.1.29.1 10.1.29.117 25 10.1.29.0 255.255.255.0 On-link 10.1.29.117 281 10.1.29.117 255.255.255.255 On-link 10.1.29.117 281 10.1.29.255 255.255.255.255 On-link 10.1.29.117 281 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 10.1.29.117 281 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 10.1.29.117 281 =========================================================================== Library and TCP OpenVPN: IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 10.1.29.1 10.1.29.117 25 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.116 30 0.0.0.0 128.0.0.0 192.168.1.1 192.168.1.116 30 10.1.29.0 255.255.255.0 On-link 10.1.29.117 281 10.1.29.117 255.255.255.255 On-link 10.1.29.117 281 10.1.29.255 255.255.255.255 On-link 10.1.29.117 281 24.212.205.68 255.255.255.255 10.1.29.1 10.1.29.117 25 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 128.0.0.0 128.0.0.0 192.168.1.1 192.168.1.116 30 192.168.1.0 255.255.255.0 On-link 192.168.1.116 286 192.168.1.116 255.255.255.255 On-link 192.168.1.116 286 192.168.1.255 255.255.255.255 On-link 192.168.1.116 286 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.116 286 224.0.0.0 240.0.0.0 On-link 10.1.29.117 281 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.116 286 255.255.255.255 255.255.255.255 On-link 10.1.29.117 281 =========================================================================== 
    Thanks for any advice. I looked at one of the answers, but I'm not sure it applies to me: it said that the 10.x.x.x address was the VPN connection, yet I appear to have a 10.x.x.x address even when I connect just to the library.


  • Configuring a PIX 506e for Asterisk

    - by orthogonal3
    Hi all! I'm having problems configuring an old Cisco PIX running 6.3 and wondered if anyone can lend a hand. Simply put, I have a PIX 506e that I want to put in my VoIP data path. I can't update it, and getting a compatible version of Java for that version of PIX is tough, so I can't log onto the web interface. The PIX straddles two networks: 192.168.5.0 on the inside, ...50.0 on the outside; both netmasks are 255.255.255.0. I have a local Asterisk server cluster with a single service IP (<local asterisk>). SIP is on UDP 5060 and RTP (for the VoIP data) is on UDP 18000-18999; I know that's a big range, but hey, may as well. I need the 192.168.5.0 net to have web and FTP access for updates and the like. DHCP, DNS and NTP are already provided on that network, so I don't need external DNS access. So I think I want the following rules:

    - SIP or RTP from <my itsp> arriving at <outside voip ip>, NATed to <local asterisk>
    - SIP or RTP able to do the reverse route (should be covered by high sec -> low sec??)
    - HTTP and FTP access outbound, for software updates for the servers etc.

    I have the following config at the minute, and I think I'm almost there (I hope)...

    interface ethernet0 auto
    interface ethernet1 auto
    nameif ethernet0 outside security0
    nameif ethernet1 inside security100
    enable password wouldyouliketobeapeppertoo encrypted
    passwd wouldyouliketobeapeppertoo encrypted
    hostname afirewall
    domain-name adomain
    fixup protocol dns maximum-length 512
    fixup protocol ftp 21
    fixup protocol h323 h225 1720
    fixup protocol h323 ras 1718-1719
    fixup protocol http 80
    fixup protocol rsh 514
    fixup protocol rtsp 554
    fixup protocol sip 5060
    fixup protocol sip udp 5060
    fixup protocol skinny 2000
    fixup protocol smtp 25
    fixup protocol sqlnet 1521
    fixup protocol tftp 69
    access-list acl_ping permit icmp any any
    access-list voip permit ip host <my itsp> host <local asterisk>
    mtu outside 1500
    mtu inside 1500
    ip address outside <outside pix ip> 255.255.255.0
    ip address inside <inside pix ip> 255.255.255.0
    arp timeout 14400
    global (outside) 1 <outside generic ip>
    nat (inside) 1 192.168.5.0 255.255.255.0 0 0
    static (inside,outside) <outside voip ip> <local asterisk> netmask 255.255.255.255 0 0
    static (outside,inside) <local asterisk> <outside voip ip> netmask 255.255.255.255 0 0
    access-group acl_ping in interface outside
    access-group acl_ping in interface inside
    route outside 0.0.0.0 0.0.0.0 <my next hop router> 1
    route outside <my itsp> 255.255.255.255 <my next hop router> 1

    I think I just need a hand with the access-lists and NAT/static rules. Would anyone be able to help? I've RTFM'd the Cisco docs a few times and they're heavy. Wishing I'd completed my CCNA now! Thanks all for any help, Phil
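
    Not a verified fix, but a sketch of the kind of lines that appear to be missing, in PIX 6.3 syntax, reusing the question's placeholders (the ACL name acl_out is an assumption). Two observations drive it: traffic from the ITSP must be explicitly permitted inbound on the outside interface, and the acl_ping access-group currently applied on the inside interface limits inside hosts to ICMP only, which would break the web/FTP requirement:

        access-list acl_out permit udp host <my itsp> host <outside voip ip> eq 5060
        access-list acl_out permit udp host <my itsp> host <outside voip ip> range 18000 18999
        access-list acl_out permit icmp any any
        access-group acl_out in interface outside
        no access-group acl_ping in interface inside

    With the inside access-group removed, the default higher-to-lower security behaviour plus the existing nat/global pair should cover outbound HTTP and FTP, and the existing static for <outside voip ip> handles the return path for the VoIP flows.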


  • SOA Suite 11g Asynchronous Testing with soapUI

    - by Greg Mally
    Overview

    The Enterprise Manager test harness that comes bundled with SOA Suite 11g is a great tool for doing smoke tests and some minor load testing. When a more robust testing tool is needed, soapUI is often leveraged, for reasons ranging from ease of use to cost-effectiveness. However, when you want to start doing more complex testing than synchronous web services with static content, the free version of soapUI becomes a bit more challenging. In this blog I will show you how to test asynchronous web services with soapUI free edition. The following assumes that you have a working knowledge of soapUI and will not go into concepts like setting up a project, etc. For the basics, please review the documentation for soapUI: http://www.soapui.org/Getting-Started/

    Asynchronous Web Service Testing in soapUI

    When invoking an asynchronous web service, the caller must provide a callback for the response. Since our testing originates from soapUI, it is only natural that soapUI provides the callback mechanism. This mechanism in soapUI is called a MockService. In a nutshell, a soapUI MockService is a simulation of a web service (that is, a process listening on a port). We will go through the steps of setting up the MockService for a simple asynchronous BPEL process.

    After creating your soapUI project based on an asynchronous BPEL process, you will see something like the following: Notice that soapUI created an interface for both the request and the response (i.e., the callback). The interface that was created for the callback will be used to create the MockService. Right-click on the callback interface and select the Generate MockService menu item: You will be presented with the Generate MockService dialog, where we will tweak the Path and possibly the port (depending upon what ports are available on the machine where soapUI will be running). We will adjust the Path to include the operation name (append /processResponse in this example), and the port of 8088 is fine: Once the MockService is created, you should have something like the following in soapUI: This window acts as a console/view into the callback process. When the play button is pressed (the green triangle in the upper left-hand corner), soapUI will start a process running on the configured Port that will accept web service invocations on the configured Path:

    At this point we are “almost” ready to try out the asynchronous test. But first we must provide the web service addressing (WS-A) configuration on the request message. We will edit the message for the request interface that was generated when the project was created (SimpleAsyncBPELProcessBinding > process > Request 1 in this example). At the bottom of the request message editor you will find the WS-A configuration by left-clicking on the WS-A label: Here we will set up WS-A by changing the default values to:

    Must understand: TRUE
    Add default wsa:Action: checked
    Reply to: ${host where soapUI is running}:${MockService Port}${MockService Path}
    … in this example: http://192.168.1.181:8088/mockSimpleAsyncBPELProcessCallbackBinding/processResponse

    We are now ready to run the asynchronous test from soapUI free edition. Make sure that the MockService you created is running, and then push the play button for the request (the green triangle in the upper left-hand corner of the request editor). If everything is configured correctly, you should see the response show up in the MockService window: To view the response message/payload, just double-click on a response message in the Message Log window of the MockService: At this point you can expand the project to include a Test Suite for some load tests, etc.

    This same topic has been covered in varying detail on other sites/blogs, but I wanted to simplify and detail how this is done in the context of SOA Suite 11g. It also serves as a nice introduction to another blog of mine: SOA Suite 11g Dynamic Payload Testing with soapUI Free Edition.
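
    For reference, the WS-A settings above correspond to SOAP headers on the wire roughly like the following (a sketch only; the exact namespaces, Action value, and generated MessageID depend on the soapUI version and the project):

        <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
          <wsa:MessageID>uuid:...</wsa:MessageID>
          <wsa:Action>process</wsa:Action>
          <wsa:ReplyTo>
            <wsa:Address>http://192.168.1.181:8088/mockSimpleAsyncBPELProcessCallbackBinding/processResponse</wsa:Address>
          </wsa:ReplyTo>
        </soapenv:Header>

    The BPEL engine posts the asynchronous response to the wsa:ReplyTo address, which is why the MockService must already be running when the request is sent.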


  • View Mobile Websites in Windows with Safari 4 Developer Tools

    - by Matthew Guay
    Want to try out mobile websites designed for the iPhone and other mobile devices on your PC? Safari 4 for Windows lets you do this easily with its developer tools. By default, Safari will show standard desktop websites, but by making a simple change you can switch it to work like Safari Mobile on the iPhone or iPod Touch.

    Getting Started

    First make sure you have Safari 4 for Windows installed. You can download Safari directly (link below) and install it as usual. Or, if you already have another Apple program installed, such as QuickTime or iTunes, you can install it from Apple Software Update. Simply enter apple software update in the Start menu search box, and then select Safari 4 from the list of new software available. Click Install to automatically download and install Safari. Accept the license agreement, and then Safari will automatically install. Once this is finished, Safari will be ready to use.

    View Mobile Sites in Safari

    First, we need to enable the developer tools. Click the gear icon on the toolbar, and select Preferences. Click the Advanced tab, and then check the box that says “Show Develop menu in menu bar”. Once you’ve closed your settings box, click the page icon, select Develop, then User Agent, and then choose one of the Mobile Safari settings. In our test we chose Mobile Safari 3.1.2 – iPhone. To make your browser emulate a mobile device better, you can hide the bookmarks and tab bar for a more streamlined interface: click the gear icon and select “Hide Bookmarks Bar”, then repeat and click “Hide Tab Bar”. You can also shrink your window to be closer to the size of a mobile device screen. Once you’ve done these things, Safari should look similar to this screenshot. Here we have loaded Google.com, and you can see it in its iPhone-style interface.

    Simply enter any website into the address bar, and it will load in its mobile interface if it has one. Here are Google’s other mobile offerings, right inside Windows. Gmail loads messages with the default iPhone interface. One especially interesting mobile site is Apple’s online iPhone User Guide. When loaded in Safari with the iPhone setting, it appears with a very nice mobile UI that works just like an iPhone app. In fact, you can even click and drag to scroll, just like you would with your finger on an iPhone.

    Conclusion

    Even if you do not have a smartphone, you can still preview what websites will look like on one with this trick. Not all sites will work, of course, but it’s fun to play around with different sites that have mobile versions.

    Links: Safari 4 Download | Apple iPhone online user guide


  • Kanban Tools Review

    - by GeekAgilistMercenary
    The first two sessions on Sunday were on collaboration (and why it is so hard) and, as a perfect follow-on, Kanban. In that second session, two online SaaS-style tools were mentioned: AgileZen and LeanKit. I decided right then and there that I would throw together some first impressions, so I set up an account in each and created some sample projects.

    AgileZen

    Account Creation: Setting up the initial account required an e-mail verification, which is understandable. Within a few seconds it was mailed out and I was logged in.

    Setting Up the Kanban Board: The initial setup of the board was pretty easy. I maybe clicked around an extra few times, but overall everything I needed to use the tool was immediately available. The representation of everything was very similar to what one expects of a real Kanban Board, too. This is a HUGE plus, especially if a team is smart and places this tool in a centrally viewable area to allow for visibility. Each of the board items is just like a Post-it note: blue, grey, green, pink, or one of a few other colors. Dragging them onto each swim lane on the board was flawless, making changes throughout the work super easy and intuitive. The other thing I really liked about AgileZen is that the Kanban Board had the swim lanes set up immediately. One can change them, but when you know you immediately need a Ready lane, a Working lane, and a Complete lane, it is nice to just have them right in front of you in the interface. In addition, the Backlog is simply a little tab on the left-hand side: out of the way, with the focus on the primary items. Once I got the items onto the board, I was easily able to get back to the actual work at hand versus playing around with the tool. The fact that it was so easy to use, with a fast and easy UX and overall a great layout, put me back to work on things I needed to do versus sitting and playing with the tool. That, in the end, is the key to using these tools.

    LeanKit Kanban

    Account Creation: Setting up the account got me straight into the online tool, which I thought was pretty cool.

    Setting Up the Kanban Board: Setting up the Kanban Board within LeanKit was a bit of trouble; there were multiple UX issues in regard to process and intuitiveness. LeanKit basically forces one to design the whole board first, making no assumptions about how the board should look. The swim lanes, in my humble opinion, should be set up immediately, without any manipulation, with the most common lanes: ready, working, and complete. The other UX hiccup came as soon as I managed to get the swim lanes into place: I wanted to remove the redundant Backlog lane. The Backlog lane, or Backlog bucket, should live off to the side, but I had accidentally added it as a lane. Then, on top of that, I screwed up and added an item inside the lane, which then prevented me from deleting the lane. I had to go back out of the lane manipulation, remove the item, and then remove the excess lane.

    Summary

    LeanKit wasn't a bad interface; it just wasn't as good as AgileZen. The AgileZen interface was simply better UX design overall, and AgileZen also presents a much better graphical design altogether: it is much closer to what a Kanban Board would look like if it were a physical Kanban Board. Since one of the HUGE reasons for Kanban is to increase visibility, the fact that the design is similar to a real Kanban Board is actually a pretty big deal. This is an image (click for larger) that shows the two Kanban Boards side by side; the one on the left is AgileZen and the one on the right is LeanKit.


  • Using the onboard VGA output together with a PCIe video card (both nVidia)

    - by sebikul
    I have two video cards: an on-board nVidia 6150SE nForce 430 and a PCIe nVidia GeForce GT 220 with 1GB of DDR2 RAM. I have already configured the PCIe card to use the dual monitor feature, using the VGA and HDMI ports, but now I want to add a third monitor on the on-board VGA port. I have managed to enable the on-board graphics processor, which is taking 400MB of RAM, but I can't manage to use it: nvidia-settings does not detect it, as if it were not usable (but it is there). My questions are the following:

    How can I get the on-board VGA display to work together with the PCIe graphics card? If that is possible, how can I recover the 400MB the on-board card is taking (even while unused), or how can I make it use the PCIe card's available memory?

    System details: Linux 2.6.35-28-generic i686, Ubuntu 10.10 (all updates installed), NVIDIA driver version 260.19.06 (official). If more info is needed please let me know.

    Here is the lspci output when the on-board card is disabled:

    00:00.0 RAM memory: nVidia Corporation MCP61 Memory Controller (rev a1)
    00:01.0 ISA bridge: nVidia Corporation MCP61 LPC Bridge (rev a2)
    00:01.1 SMBus: nVidia Corporation MCP61 SMBus (rev a2)
    00:01.2 RAM memory: nVidia Corporation MCP61 Memory Controller (rev a2)
    00:01.3 Co-processor: nVidia Corporation MCP61 SMU (rev a2)
    00:02.0 USB Controller: nVidia Corporation MCP61 USB Controller (rev a3)
    00:02.1 USB Controller: nVidia Corporation MCP61 USB Controller (rev a3)
    00:04.0 PCI bridge: nVidia Corporation MCP61 PCI bridge (rev a1)
    00:05.0 Audio device: nVidia Corporation MCP61 High Definition Audio (rev a2)
    00:06.0 IDE interface: nVidia Corporation MCP61 IDE (rev a2)
    00:07.0 Bridge: nVidia Corporation MCP61 Ethernet (rev a2)
    00:08.0 IDE interface: nVidia Corporation MCP61 SATA Controller (rev a2)
    00:09.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2)
    00:0b.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2)
    00:0c.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2)
    00:0d.0 VGA compatible controller: nVidia Corporation C61 [GeForce 6150SE nForce 430] (rev a2)
    00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
    00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
    00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
    00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
    01:09.0 Ethernet controller: Intel Corporation 82557/8/9/0/1 Ethernet Pro 100 (rev 08)
    02:00.0 VGA compatible controller: nVidia Corporation GT216 [GeForce GT 220] (rev a2)
    02:00.1 Audio device: nVidia Corporation High Definition Audio Controller (rev a1)

    And this is when both are enabled:

    00:00.0 RAM memory: nVidia Corporation MCP61 Memory Controller (rev a1)
    00:01.0 ISA bridge: nVidia Corporation MCP61 LPC Bridge (rev a2)
    00:01.1 SMBus: nVidia Corporation MCP61 SMBus (rev a2)
    00:01.2 RAM memory: nVidia Corporation MCP61 Memory Controller (rev a2)
    00:01.3 Co-processor: nVidia Corporation MCP61 SMU (rev a2)
    00:02.0 USB Controller: nVidia Corporation MCP61 USB Controller (rev a3)
    00:02.1 USB Controller: nVidia Corporation MCP61 USB Controller (rev a3)
    00:04.0 PCI bridge: nVidia Corporation MCP61 PCI bridge (rev a1)
    00:05.0 Audio device: nVidia Corporation MCP61 High Definition Audio (rev a2)
    00:06.0 IDE interface: nVidia Corporation MCP61 IDE (rev a2)
    00:07.0 Bridge: nVidia Corporation MCP61 Ethernet (rev a2)
    00:08.0 IDE interface: nVidia Corporation MCP61 SATA Controller (rev a2)
    00:09.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2)
    00:0b.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2)
    00:0c.0 PCI bridge: nVidia Corporation MCP61 PCI Express bridge (rev a2)
    00:0d.0 VGA compatible controller: nVidia Corporation C61 [GeForce 6150SE nForce 430] (rev a2)
    00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
    00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
    00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
    00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
    01:09.0 Ethernet controller: Intel Corporation 82557/8/9/0/1 Ethernet Pro 100 (rev 08)
    02:00.0 VGA compatible controller: nVidia Corporation GT216 [GeForce GT 220] (rev a2)
    02:00.1 Audio device: nVidia Corporation High Definition Audio Controller (rev a1)

    Output of lshw -class display:

    *-display
         description: VGA compatible controller
         product: GT216 [GeForce GT 220]
         vendor: nVidia Corporation
         physical id: 0
         bus info: pci@0000:02:00.0
         version: a2
         width: 64 bits
         clock: 33MHz
         capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
         configuration: driver=nvidia latency=0
         resources: irq:18 memory:df000000-dfffffff memory:c0000000-cfffffff memory:da000000-dbffffff ioport:ef80(size=128) memory:def80000-deffffff
    *-display
         description: VGA compatible controller
         product: C61 [GeForce 6150SE nForce 430]
         vendor: nVidia Corporation
         physical id: d
         bus info: pci@0000:00:0d.0
         version: a2
         width: 64 bits
         clock: 66MHz
         capabilities: pm msi vga_controller bus_master cap_list rom
         configuration: driver=nvidia latency=0
         resources: irq:22 memory:dd000000-ddffffff memory:b0000000-bfffffff memory:dc000000-dcffffff memory:deb40000-deb5ffff

    If what I'm looking for is not possible, please tell me, so I can disable the on-board card and recover those 400MB of wasted RAM. Thanks for your help!

    Read the article

  • Networking - Wireless and Wired not working

    - by JJ White
    So everything in Ubuntu has been working great until networking stopped working. I've spent the better part of two days scouring for a fix with no luck. Here is my info; your help would be much appreciated.

    grep -i eth /var/log/syslog | tail

    Sep 25 16:31:59 jj-laptop NetworkManager[970]: <info> (eth0): cleaning up...
    Sep 25 16:31:59 jj-laptop NetworkManager[970]: <info> (eth0): taking down device.
    Sep 25 16:31:59 jj-laptop kernel: [23403.998837] sky2 0000:02:00.0: eth0: disabling interface
    Sep 25 16:35:54 jj-laptop NetworkManager[970]: <info> (eth0): now managed
    Sep 25 16:35:54 jj-laptop NetworkManager[970]: <info> (eth0): device state change: unmanaged -> unavailable (reason 'managed') [10 20 2]
    Sep 25 16:35:54 jj-laptop NetworkManager[970]: <info> (eth0): bringing up device.
    Sep 25 16:35:54 jj-laptop kernel: [23413.629424] sky2 0000:02:00.0: eth0: enabling interface
    Sep 25 16:35:54 jj-laptop kernel: [23413.635703] ADDRCONF(NETDEV_UP): eth0: link is not ready
    Sep 25 16:35:54 jj-laptop NetworkManager[970]: <info> (eth0): preparing device.
    Sep 25 16:35:54 jj-laptop NetworkManager[970]: <info> (eth0): deactivating device (reason 'managed') [2]

    and then ifconfig -a

    eth0    Link encap:Ethernet  HWaddr 00:17:42:14:e9:e1
            UP BROADCAST MULTICAST  MTU:1500  Metric:1
            RX packets:4605 errors:0 dropped:0 overruns:0 frame:0
            TX packets:287 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:315161 (315.1 KB)  TX bytes:63680 (63.6 KB)
            Interrupt:16

    lo      Link encap:Local Loopback
            inet addr:127.0.0.1  Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING  MTU:16436  Metric:1
            RX packets:13018 errors:0 dropped:0 overruns:0 frame:0
            TX packets:13018 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:880484 (880.4 KB)  TX bytes:880484 (880.4 KB)

    wlan0   Link encap:Ethernet  HWaddr 00:13:02:d0:ee:13
            BROADCAST MULTICAST  MTU:1500  Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    also iwconfig

    lo      no wireless extensions.

    wlan0   IEEE 802.11abg  ESSID:off/any
            Mode:Managed  Access Point: Not-Associated  Tx-Power=off
            Retry long limit:7  RTS thr:off  Fragment thr:off
            Power Management:on

    eth0    no wireless extensions.

    and of course sudo lshw -C network

    *-network
         description: Ethernet interface
         product: 88E8055 PCI-E Gigabit Ethernet Controller
         vendor: Marvell Technology Group Ltd.
         physical id: 0
         bus info: pci@0000:02:00.0
         logical name: eth0
         version: 12
         serial: 00:17:42:14:e9:e1
         size: 1Gbit/s
         capacity: 1Gbit/s
         width: 64 bits
         clock: 33MHz
         capabilities: pm vpd msi pciexpress bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
         configuration: autonegotiation=on broadcast=yes driver=sky2 driverversion=1.30 duplex=full firmware=N/A latency=0 link=no multicast=yes port=twisted pair speed=1Gbit/s
         resources: irq:44 memory:f0000000-f0003fff ioport:2000(size=256) memory:c0a00000-c0a1ffff
    *-network DISABLED
         description: Wireless interface
         product: PRO/Wireless 3945ABG [Golan] Network Connection
         vendor: Intel Corporation
         physical id: 0
         bus info: pci@0000:05:00.0
         logical name: wlan0
         version: 02
         serial: 00:13:02:d0:ee:13
         width: 32 bits
         clock: 33MHz
         capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
         configuration: broadcast=yes driver=iwl3945 driverversion=3.2.0-31-generic firmware=15.32.2.9 latency=0 link=no multicast=yes wireless=IEEE 802.11abg
         resources: irq:45 memory:c0100000-c0100fff

    last but not least lsmod

    Module                  Size  Used by
    nls_iso8859_1          12617  0
    nls_cp437              12751  0
    vfat                   17308  0
    fat                    55605  1 vfat
    usb_storage            39646  0
    uas                    17828  0
    dm_crypt               22528  0
    rfcomm                 38139  0
    parport_pc             32114  0
    bnep                   17830  2
    ppdev                  12849  0
    bluetooth             158438  10 rfcomm,bnep
    binfmt_misc            17292  1
    snd_hda_codec_idt      60251  1
    pcmcia                 39791  0
    joydev                 17393  0
    snd_hda_intel          32765  3
    snd_hda_codec         109562  2 snd_hda_codec_idt,snd_hda_intel
    snd_hwdep              13276  1 snd_hda_codec
    snd_pcm                80845  2 snd_hda_intel,snd_hda_codec
    snd_seq_midi           13132  0
    yenta_socket           27428  0
    snd_rawmidi            25424  1 snd_seq_midi
    snd_seq_midi_event     14475  1 snd_seq_midi
    snd_seq                51567  2 snd_seq_midi,snd_seq_midi_event
    pcmcia_rsrc            18367  1 yenta_socket
    wacom_w8001            12906  0
    irda                  185517  0
    arc4                   12473  2
    iwl3945                73111  0
    iwl_legacy             71134  1 iwl3945
    mac80211              436455  2 iwl3945,iwl_legacy
    cfg80211              178679  3 iwl3945,iwl_legacy,mac80211
    snd_timer              28931  2 snd_pcm,snd_seq
    snd_seq_device         14172  3 snd_seq_midi,snd_rawmidi,snd_seq
    dm_multipath           22710  0
    pcmcia_core            21511  3 pcmcia,yenta_socket,pcmcia_rsrc
    psmouse                96619  0
    apanel                 12718  0
    serport                12808  1
    serio_raw              13027  0
    crc_ccitt              12595  1 irda
    mac_hid                13077  0
    snd                    62064  15 snd_hda_codec_idt,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
    input_polldev          13648  1 apanel
    soundcore              14635  1 snd
    snd_page_alloc         14115  2 snd_hda_intel,snd_pcm
    lp                     17455  0
    fujitsu_laptop         18504  0
    parport                40930  3 parport_pc,ppdev,lp
    dm_raid45              76451  0
    xor                    25987  1 dm_raid45
    dm_mirror              21822  0
    dm_region_hash         16065  1 dm_mirror
    dm_log                 18193  3 dm_raid45,dm_mirror,dm_region_hash
    sdhci_pci              18324  0
    sdhci                  28241  1 sdhci_pci
    sky2                   49545  0
    i915                  414939  3
    drm_kms_helper         45466  1 i915
    drm                   197692  4 i915,drm_kms_helper
    i2c_algo_bit           13199  1 i915
    video                  19068  1 i915

    Read the article

  • Oracle and Partners release CAMP specification for PaaS Management

    - by macoracle
    Cloud Application Management for Platforms

    The public release of the Cloud Application Management for Platforms (CAMP) specification, an initial draft of what is expected to become an industry-standard self-service interface specification for Platform as a Service (PaaS) management, represents a significant milestone in cloud standards development. Created by several players in the emerging cloud industry, including Oracle, the specification is being submitted to the OASIS standards organization (draft charter), where it will be finalized in an open development process.

    CAMP is targeted at application developers and deployers for self-service management of their applications on a Platform-as-a-Service cloud. It is closely aligned with the application development process, where applications are typically developed in an Application Development Environment (ADE) and then deployed into a private or public platform cloud. CAMP standardizes the model behind an application's dependencies on platform components and provides a standardized format for moving applications between the ADE and the cloud, and, if and when desirable, between clouds. Once an application is deployed, CAMP provides users with a standardized self-service interface to the PaaS offering, allowing the cloud consumer to manage the lifecycle of the application on that platform and the use of the underlying platform services.

    The CAMP interface includes a RESTful binding of the CAMP model onto the standard HTTP protocol, using JSON as the encoding for the model resources. The model for CAMP includes resources that represent the Application, its Components, and any Platform Components that they depend on. It's important that PaaS cloud consumers understand that for a PaaS cloud, these are the abstractions the user would prefer to work with, not virtual machines and resources such as compute power, storage, and networking. PaaS cloud consumers would also rather not become system administrators for the infrastructure that is hosting their applications and component services. CAMP works on this more abstract level, yet still accommodates platforms that are built on an underlying infrastructure cloud. With CAMP, it is up to the cloud provider whether or not this underlying infrastructure is exposed to the consumer.

    One major challenge addressed by the CAMP specification is ensuring that application deployment on a new platform is as seamless and error-free as possible. This becomes even more difficult when the application may have been developed for a different platform and is now moving to a new one. In CAMP this is accomplished by matching the requirements of the application and its components to the specific capabilities of the underlying platform. This needs to be done regardless of whether there are existing pools of virtualized platform resources (such as a database pool) which are provisioned (on the basis of a schema, for example), or whether the platform component is really just a set of virtual machines drawn from an infrastructure pool.

    The interoperability between platform clouds that CAMP offers means that a CAMP client such as an ADE can target multiple clouds with a single common interface. Applications can even be spread across multiple platform clouds and then managed without needing to create a specialized adapter for the components running in each cloud.

    The development of CAMP has been an effort by a small set of companies, but there are significant advantages to this approach. For example, the way each of these companies creates their platforms is different enough to ensure that CAMP can cover a wide range of actual deployments. CAMP is now entering the next phase of development under the guidance of an open standards organization, OASIS, which will likely broaden its capabilities. Our hope, however, is to keep it concise and minimal, to ease implementation and adoption.

    Over time there will be many different types of platform components that applications can use and which need management. CAMP at this point only includes one example of this (in an appendix): Database as a Service. I am looking forward to the start of the CAMP Technical Committee in OASIS and will do my best to ensure a successful development process. Hope to see you there.
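    To make the RESTful binding concrete, here is a minimal sketch of what a self-service retrieval of an application resource might look like from a CAMP client. The host name, resource path, and response handling below are purely illustrative assumptions for this post, not taken from the specification text:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class CampClientSketch
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                // Hypothetical CAMP endpoint; the actual resource paths
                // are defined by the specification, not by this sketch
                HttpResponseMessage response =
                    await client.GetAsync("https://paas.example.com/camp/applications/42");
                response.EnsureSuccessStatusCode();

                // The body is a JSON representation of the application resource,
                // i.e. its components and the platform components they depend on
                string json = await response.Content.ReadAsStringAsync();
                Console.WriteLine(json);
            }
        }
    }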

    Read the article

  • Implementing a modern web application with Web API on top of old services

    - by Gaui
    My company has many WCF services which may or may not be replaced in the near future. The old web application is written in WebForms and communicates directly with these services via SOAP, and they return DataTables. Now I am designing a new web application in a modern style: an AngularJS client which communicates with an ASP.NET Web API via JSON. The Web API then communicates with the WCF services via SOAP. In the future I want to let the Web API handle all requests and go straight to the database, but because the business logic implemented in the WCF services is complicated, it's going to take some time to rewrite and replace it.

    Now to the problem: I'm trying to make it easy to replace the WCF services in the near future with some other data storage, e.g. another endpoint, a database, or whatever. I also want to make it easy to unit test the business logic. That's why I have structured the Web API with a repository layer and a service layer. The repository layer communicates directly with the data storage (WCF service, database, or whatever) and the service layer then uses the repository (via Dependency Injection) to get the data. It doesn't care where the data comes from. Later on I can be in control and structure the data returned from the data storage (DataTable to POCO) and be able to test the logic in the service layer with some mock repository (using Dependency Injection).

    Below is some code to explain where I'm going with this. But my question is: does this all make sense? Am I making this overly complicated, and could it be simplified in any way? Does the extra indirection make it too complicated to maintain? My main goal is to make it as easy as possible to switch to another data storage later on, e.g. an ORM, and to be able to test the logic in the service layer. And because the majority of the business logic is implemented in these WCF services (and they return DataTables), I want to be in control of the data and the structure returned to the client. Any advice is greatly appreciated.

    Update 20/08/14: I created a repository factory, so services all share repositories. Now it's easy to mock a repository, add it to the factory and create a provider using that factory (a sketch of that follows the code). Any advice is much appreciated. I want to know if I'm making things more complicated than they should be. So it looks like this:

    1. Repository Factory

    public class RepositoryFactory
    {
        private Dictionary<Type, IServiceRepository> repositories;

        public RepositoryFactory()
        {
            this.repositories = new Dictionary<Type, IServiceRepository>();
        }

        public void AddRepository<T>(IServiceRepository repo) where T : class
        {
            if (this.repositories.ContainsKey(typeof(T)))
            {
                this.repositories.Remove(typeof(T));
            }
            this.repositories.Add(typeof(T), repo);
        }

        public dynamic GetRepository<T>()
        {
            if (this.repositories.ContainsKey(typeof(T)))
            {
                return this.repositories[typeof(T)];
            }
            throw new RepositoryNotFoundException("No repository found for " + typeof(T).Name);
        }
    }

    I'm not very fond of dynamic but I don't know how to retrieve that repository otherwise.

    2. Repository and service

    // Service repository interface
    // All repository interfaces extend this
    public interface IServiceRepository { }

    // Invoice repository interface
    // Makes it easy to mock the repository later on
    public interface IInvoiceServiceRepository : IServiceRepository
    {
        List<Invoice> GetInvoices();
    }

    // Invoice repository
    // Connects to some data storage to retrieve invoices
    public class InvoiceServiceRepository : IInvoiceServiceRepository
    {
        public List<Invoice> GetInvoices()
        {
            // Get the invoices from somewhere
            // This could be a WCF service, a database, or whatever
            using (InvoiceServiceClient proxy = new InvoiceServiceClient())
            {
                return proxy.GetInvoices();
            }
        }
    }

    // Invoice service
    // Handles talking to a real or a mock repository
    public class InvoiceService
    {
        // Repository factory
        private RepositoryFactory repoFactory;

        // Default constructor
        // By default this connects to the real repository
        public InvoiceService(RepositoryFactory repo)
        {
            repoFactory = repo;
        }

        // Service function that gets all invoices from some repository (mock or real)
        public List<Invoice> GetInvoices()
        {
            // Query the repository
            return repoFactory.GetRepository<IInvoiceServiceRepository>().GetInvoices();
        }
    }
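    And here is the testing sketch mentioned above: a mock repository registered in place of the real one. MockInvoiceRepository is a hypothetical name, and the example assumes Invoice has a parameterless constructor:

    // Hypothetical mock used in a unit test; never touches the WCF service
    public class MockInvoiceRepository : IInvoiceServiceRepository
    {
        public List<Invoice> GetInvoices()
        {
            // Return canned data instead of calling the real data storage
            return new List<Invoice> { new Invoice(), new Invoice() };
        }
    }

    // Arrange: wire the mock into the factory, then hand the factory to the service
    var factory = new RepositoryFactory();
    factory.AddRepository<IInvoiceServiceRepository>(new MockInvoiceRepository());
    var service = new InvoiceService(factory);

    // Act + Assert: the service logic runs against the canned data
    List<Invoice> invoices = service.GetInvoices();
    // e.g. Assert.AreEqual(2, invoices.Count);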

    Read the article

  • ALC889 - GA-P55-UD4 -no sound - Ubuntu 11.10

    - by george
    I have a computer with a Gigabyte P55A-UD4 motherboard with on-board audio, a Realtek ALC889. I am using Ubuntu 11.10 and have no sound. Please help :). I have tried to install the high definition audio codecs from Realtek but it doesn't work. In the BIOS the Azalia codec is turned on. PS: sorry for my English.

    00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 12)
    00:01.0 PCI bridge: Intel Corporation Core Processor PCI Express x16 Root Port (rev 12)
    00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB Universal Host Controller (rev 06)
    00:1a.1 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB Universal Host Controller (rev 06)
    00:1a.2 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB Universal Host Controller (rev 06)
    00:1a.7 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
    00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 06)
    00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 06)
    00:1c.4 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 5 (rev 06)
    00:1c.5 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 6 (rev 06)
    00:1c.6 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 7 (rev 06)
    00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB Universal Host Controller (rev 06)
    00:1d.1 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB Universal Host Controller (rev 06)
    00:1d.2 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB Universal Host Controller (rev 06)
    00:1d.3 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB Universal Host Controller (rev 06)
    00:1d.7 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 06)
    00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a6)
    00:1f.0 ISA bridge: Intel Corporation 5 Series Chipset LPC Interface Controller (rev 06)
    00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 6 port SATA AHCI Controller (rev 06)
    00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 06)
    01:00.0 VGA compatible controller: nVidia Corporation GT216 [GeForce GT 220] (rev a2)
    01:00.1 Audio device: nVidia Corporation High Definition Audio Controller (rev a1)
    03:00.0 SATA controller: JMicron Technology Corp. JMB362/JMB363 Serial ATA Controller (rev 03)
    03:00.1 IDE interface: JMicron Technology Corp. JMB362/JMB363 Serial ATA Controller (rev 03)
    04:00.0 IDE interface: Marvell Technology Group Ltd. Device 91a3 (rev 11)
    05:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)
    06:03.0 IDE interface: Integrated Technology Express, Inc. IT8213 IDE Controller
    06:07.0 FireWire (IEEE 1394): Texas Instruments TSB43AB23 IEEE-1394a-2000 Controller (PHY/Link)
    3f:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 02)
    3f:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 02)
    3f:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 02)
    3f:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 02)
    3f:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 02)
    3f:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 02)

    aplay -l

    card 0: Intel [HDA Intel], device 0: ALC889 Analog [ALC889 Analog]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 0: Intel [HDA Intel], device 1: ALC889 Digital [ALC889 Digital]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 1: NVidia [HDA NVidia], device 3: HDMI 0 [HDMI 0]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 1: NVidia [HDA NVidia], device 7: HDMI 0 [HDMI 0]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 1: NVidia [HDA NVidia], device 8: HDMI 0 [HDMI 0]
      Subdevices: 1/1
      Subdevice #0: subdevice #0
    card 1: NVidia [HDA NVidia], device 9: HDMI 0 [HDMI 0]
      Subdevices: 1/1
      Subdevice #0: subdevice #0

    Read the article

  • ResourceSerializable: an alternate to ORM and ActiveRecord

    - by Levi Morrison
    A few opinionated reasons I don't like the traditional ORM and ActiveRecord patterns:

    They work only with a database. Sometimes I'm dealing with objects from an API and other objects from a database. All the implementations I have seen don't allow for that. Feel free to clue me in if I'm wrong on this.

    They are brittle. Changes in the database will likely break your implementation. Some implementations can help reduce this, but a few of the ones I've seen don't.

    Their very design is influenced by the database. If I want to switch to using an API, I'll (likely) have to redesign the object to get it to work.

    They seem to violate the single-responsibility principle. They know what they are and how they act, but they also know how they are created, destroyed and saved? Seems a bit much.

    What about an approach that is somewhat more familiar in PHP: implementing an interface? In PHP 5.4, we'll have the JsonSerializable interface that defines the data to be json_encoded, so users will become accustomed to this type of thing. What if there was a ResourceSerializable interface? This is still an ORM by name, but certainly not by tradition.

    interface ResourceSerializable
    {
        /**
         * Returns the id that identifies the resource.
         */
        function resourceId();

        /**
         * Returns the 'type' of the resource.
         */
        function resourceType();

        /**
         * Returns the data to be serialized.
         */
        function resourceSerialize();
    }

    Things might be poorly named; I'll take suggestions.

    Notes: resourceId will work for APIs and databases. As long as your primary key in the database is the same as the resource ID in the API, there is no conflict. All of the APIs I've worked with have a unique ID for the resource, so I don't see any issues there. resourceType is the group or type associated with the resource. You can use this to map the resource to an API call or a database table. If the resourceType was person, it could map to /api/1/person/{resourceId} and the table persons (or people, if it's smart enough). resourceSerialize() returns the data to be stored. Keys would identify API parameters and database table columns.

    This also seems easier to test than ActiveRecord/ORM implementations. I haven't done much automated testing on traditional ActiveRecord/ORM implementations, so this is merely a guess. But it seems that being able to create objects independently of the library helps me. I don't have to use load() to get an existing resource; I can simply create one and set all the right properties. This is not so easy in the ActiveRecord/ORM implementations I've dealt with.

    Downsides: You need another object to serialize it. This also means you have more code in general, as you have to use more objects. You have to map resource types to API calls and database tables. This is even more work, but some ORMs and ActiveRecord implementations require you to map objects to table names anyway.

    Are there other downsides that you see? Does this seem feasible to you? How would you improve it?

    Note: I almost asked this on StackOverflow because it might be too vague for their standards, but I'm still not really familiar with programmers.stackexchange.com, so please help me improve my question if it doesn't shape up to standards here.

    Read the article

  • Abstracting entity caching in XNA

    - by Grofit
    I am in a situation where I am writing a framework in XNA and there will be quite a lot of static(ish) content which won't render that often. Now I am trying to take the same sort of approach I would use in non-game development, where I don't even think about caching until I have finished my application and realise there is a performance problem, and then implement a layer of caching over whatever needs it, but wrap it up so nothing is aware it's happening. However, in XNA the way we would usually cache is by drawing our objects to a texture and invalidating it after a change occurs. So assume an interface like so:

    public interface IGameComponent
    {
        void Update(TimeSpan elapsedTime);
        void Render(GraphicsDevice graphicsDevice);
    }

    public class ContainerComponent : IGameComponent
    {
        public IList<IGameComponent> ChildComponents { get; private set; }

        // Assume constructor

        public void Update(TimeSpan elapsedTime)
        {
            // Update anything that needs it
        }

        public void Render(GraphicsDevice graphicsDevice)
        {
            foreach (var component in ChildComponents)
            {
                // draw every component
            }
        }
    }

    Then I was under the assumption that we just draw everything directly to the screen, and when performance becomes an issue we just add a new implementation of the above, like so:

    public class CacheableContainerComponent : IGameComponent
    {
        private Texture2D cachedOutput;
        private bool hasChanged;

        public IList<IGameComponent> ChildComponents { get; private set; }

        // Assume constructor

        public void Update(TimeSpan elapsedTime)
        {
            // Update anything that needs it
            // set hasChanged to true if required
        }

        public void Render(GraphicsDevice graphicsDevice)
        {
            if (hasChanged)
            {
                CacheComponents(graphicsDevice);
            }
            // Draw cached output
        }

        private void CacheComponents(GraphicsDevice graphicsDevice)
        {
            // Clean up existing cache if needed
            var renderTarget = new RenderTarget2D(...);
            graphicsDevice.SetRenderTarget(renderTarget);
            foreach (var component in ChildComponents)
            {
                // draw every component
            }
            graphicsDevice.SetRenderTarget(null);
            cachedOutput = renderTarget;
            hasChanged = false; // reset the dirty flag
        }
    }

    Now in this example you could inherit, but then your Update may become a bit tricky without changing your base class to alert you when something has changed; it is up to each scenario whether it's inheritance/implementation or composition. Also, the above implementation re-caches within the rendering cycle, which may cause performance stutters, but it's just an example of the scenario... Ignoring those facts, as you can see, in this example you could use a cacheable component or a non-cacheable one, and the rest of the framework need not know.

    The problem here is that if, let's say, this component is drawn midway through the game rendering, other items will already be within the default drawing buffer, so me doing this would discard them, unless I set the back buffer to be persisted, which I hear is a big no-no on the Xbox. So is there a way to have my cake and eat it here?

    One simple solution is to make an ICacheable interface which exposes a cache method (sketched below), but then to make any use of this interface you would need the rest of the framework to be cache-aware, to check whether a component can cache, and to do so. Which then means you are polluting and changing your main implementations to account for and deal with this cache... I am also employing Dependency Injection for a lot of high-level components, so these new cacheable objects would be spat out from that, meaning nowhere in the actual game would they know they are caching... if that makes sense.
    Just in case anyone asks how I expected to keep it cache-aware when I would need to new up a cacheable entity.
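    For concreteness, here is a minimal sketch of that ICacheable idea, assuming the container is willing to type-check its children; the names below are hypothetical, not part of the framework:

    // Hypothetical interface; only render-pass owners need to know about it
    public interface ICacheable
    {
        bool IsDirty { get; }
        void Cache(GraphicsDevice graphicsDevice);
    }

    public class CacheAwareContainerComponent : IGameComponent
    {
        public IList<IGameComponent> ChildComponents { get; private set; }

        public void Update(TimeSpan elapsedTime)
        {
            // Update children as before
        }

        public void Render(GraphicsDevice graphicsDevice)
        {
            // Re-cache any dirty cacheable children up front, so no render
            // target switch happens midway through the main drawing pass
            foreach (var component in ChildComponents)
            {
                var cacheable = component as ICacheable;
                if (cacheable != null && cacheable.IsDirty)
                {
                    cacheable.Cache(graphicsDevice);
                }
            }

            // Now draw everything to the default back buffer
            foreach (var component in ChildComponents)
            {
                component.Render(graphicsDevice);
            }
        }
    }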

    Read the article

  • Bug in Delphi XE RegularExpressions Unit

    - by Jan Goyvaerts
    Using the new RegularExpressions unit in Delphi XE, you can iterate over all the matches that a regex finds in a string like this:

    procedure TForm1.Button1Click(Sender: TObject);
    var
      RegEx: TRegEx;
      Match: TMatch;
    begin
      RegEx := TRegex.Create('\w+');
      Match := RegEx.Match('One two three four');
      while Match.Success do
      begin
        Memo1.Lines.Add(Match.Value);
        Match := Match.NextMatch;
      end
    end;

    Or you could save yourself two lines of code by using the static TRegEx.Match call:

    procedure TForm1.Button2Click(Sender: TObject);
    var
      Match: TMatch;
    begin
      Match := TRegEx.Match('One two three four', '\w+');
      while Match.Success do
      begin
        Memo1.Lines.Add(Match.Value);
        Match := Match.NextMatch;
      end
    end;

    Unfortunately, due to a bug in the RegularExpressions unit, the static call doesn't work. Depending on your exact code, you may get fewer matches than you should, blank matches, or your application may crash with an access violation.

    The RegularExpressions unit defines TRegEx and TMatch as records, so you don't have to explicitly create and destroy them. Internally, TRegEx uses TPerlRegEx to do the heavy lifting. TPerlRegEx is a class that needs to be created and destroyed like any other class. If you look at the TRegEx source code, you'll notice that it uses an interface to destroy the TPerlRegEx instance when TRegEx goes out of scope. Interfaces are reference counted in Delphi, making them usable for automatic memory management.

    The bug is that TMatch and TGroupCollection also need the TPerlRegEx instance to do their work. TRegEx passes its TPerlRegEx instance to TMatch and TGroupCollection, but it does not pass the instance of the interface that is responsible for destroying TPerlRegEx.

    This is not a problem in our first code sample. TRegEx stays in scope until we're done with TMatch; the interface is destroyed when Button1Click exits. In the second code sample, the static TRegEx.Match call creates a local variable of type TRegEx. This local variable goes out of scope when TRegEx.Match returns. Thus the reference count on the interface reaches zero, and TPerlRegEx is destroyed when TRegEx.Match returns. When we then call NextMatch, the TMatch record tries to use a TPerlRegEx instance that has already been destroyed.

    To fix this bug, delete or rename the two RegularExpressions.dcu files and copy RegularExpressions.pas into your source code folder. Make these changes to both the TMatch and TGroupCollection records in this unit: declare FNotifier: IInterface; in the private section, add the parameter ANotifier: IInterface; to the Create constructor, and assign FNotifier := ANotifier; in the constructor's implementation. You also need to add the ANotifier: IInterface; parameter to the TMatchCollection.Create constructor.

    Now try to compile some code that uses the RegularExpressions unit. The compiler will flag all calls to TMatch.Create, TGroupCollection.Create and TMatchCollection.Create. Fix them by adding the ANotifier or FNotifier parameter, depending on whether ARegEx or FRegEx is being passed. With these fixes, the TPerlRegEx instance won't be destroyed until the last TRegEx, TMatch, or TGroupCollection that uses it goes out of scope or is used with a different regular expression.

    Read the article

  • EPM 11.1.2.2 Architecture: Financial Performance Management Applications

    - by Marc Schumacher
    Financial Management can be accessed either by a browser-based client or by SmartView. Starting from release 11.1.2.2, the Financial Management Windows client no longer accesses the Financial Management Consolidation server. All tasks that require an online connection (e.g. load and extract tasks) can only be done using the web interface. Any client connection initiated by a browser or SmartView is sent to the Oracle HTTP Server (OHS) first. Based on the path given in the URL (e.g. hfmadf, hfmofficeprovider), OHS decides to forward the request either to the new Financial Management web application based on the Oracle Application Development Framework (ADF) or to the .NET-based application serving SmartView retrievals, which runs on Internet Information Server (IIS). Any requests sent to the ADF web interface that need to be processed by the Financial Management application server are sent to IIS using the HTTP protocol and forwarded further using DCOM to the Financial Management application server. SmartView requests, which are processed by IIS in the first instance, are forwarded to the Financial Management application server using DCOM as well. The Financial Management application server uses OLE DB database connections via native database clients to talk to the Financial Management database schema. Communication between the Financial Management DME Listener, which handles requests from EPMA, and the Financial Management application server is based on DCOM.

    Unlike most other components, Essbase Analytics Link (EAL) does not have an end-user interface. The only user interface is a plug-in for the Essbase Administration Services console, which is used for administration purposes only. End users interact with a Transparent or Replicated Partition that is created in Essbase and populated with data by EAL. The Analytics Link Server deployed on WebLogic communicates through the HTTP protocol with the Analytics Link Financial Management Connector that is deployed in IIS on the Financial Management web server. The Analytics Link Server interacts with the Data Synchronization Server using the EAL API. The Data Synchronization Server acts as a target of a Transparent or Replicated Partition in Essbase and uses a native database client to connect to the Financial Management database. The Analytics Link Server uses JDBC to connect to relational repository databases and the Essbase JAPI to connect to Essbase.

    As with most Oracle EPM System products, browser-based clients and SmartView can be used to access Planning. The Java-based Planning web application is deployed on WebLogic, which is configured behind an Oracle HTTP Server (OHS). Communication between Planning and the Planning RMI Registry Service is done using Java Remote Method Invocation (RMI). Planning uses JDBC to access relational repository databases and talks to Essbase using the CAPI. Be aware that besides the Planning System database, a dedicated database schema is needed for each application that is set up within Planning.

    Like Planning, Profitability and Cost Management (HPCM) has a pretty simple architecture. Besides the browser-based clients and SmartView, a web service consumer can be used as a client too. All clients access the Java-based web application deployed on WebLogic through Oracle HTTP Server (OHS). Communication between Profitability and Cost Management and the EPMA Web Server is done using the HTTP protocol. JDBC is used to access the relational repository databases as well as data sources. The Essbase JAPI is utilized to talk to Essbase.

    For Strategic Finance, two clients exist: SmartView and a Windows client. While SmartView communicates through the web layer to the Strategic Finance Server, the Strategic Finance Windows client makes a direct connection to the Strategic Finance Server using RPC calls. Connections from Strategic Finance Web as well as from Strategic Finance Web Services to the Strategic Finance Server are made using RPC calls too. The Strategic Finance Server uses its own file-based data store. JDBC is used to connect to the EPM System Registry from the web and application layers.

    Disclosure Management has three kinds of clients. While the browser-based client and SmartView interact with the Disclosure Management web application directly through Oracle HTTP Server (OHS), Taxonomy Designer does not connect to the Disclosure Management server. Communication with relational repository databases is done via JDBC; to connect to Essbase, the Essbase JAPI is utilized.

    Read the article

  • Cannot Mount USB 3.0 Hard Disk ?!!

    - by Tenken
    Hi, I have a USB 3.0 external hard disk which I am unable to mount. The entry appears in the lsusb output, but I do not exactly understand how to mount it. "ASMedia Technology Inc." is the USB 3.0 device. I would appreciate some help in mounting and accessing the hard disk. This is the relevant output of my "lsusb -v":

    Bus 009 Device 002: ID 174c:5106 ASMedia Technology Inc.
    Device Descriptor:
      bLength                18
      bDescriptorType         1
      bcdUSB               2.10
      bDeviceClass            0 (Defined at Interface level)
      bDeviceSubClass         0
      bDeviceProtocol         0
      bMaxPacketSize0        64
      idVendor           0x174c ASMedia Technology Inc.
      idProduct          0x5106
      bcdDevice            0.01
      iManufacturer           2 ASMedia
      iProduct                3 AS2105
      iSerial                 1 00000000000000000000
      bNumConfigurations      1
      Configuration Descriptor:
        bLength                 9
        bDescriptorType         2
        wTotalLength           32
        bNumInterfaces          1
        bConfigurationValue     1
        iConfiguration          0
        bmAttributes         0xc0
          Self Powered
        MaxPower                0mA
        Interface Descriptor:
          bLength                 9
          bDescriptorType         4
          bInterfaceNumber        0
          bAlternateSetting       0
          bNumEndpoints           2
          bInterfaceClass         8 Mass Storage
          bInterfaceSubClass      6 SCSI
          bInterfaceProtocol     80 Bulk (Zip)
          iInterface              0
          Endpoint Descriptor:
            bLength                 7
            bDescriptorType         5
            bEndpointAddress     0x81 EP 1 IN
            bmAttributes            2
              Transfer Type  Bulk
              Synch Type     None
              Usage Type     Data
            wMaxPacketSize     0x0200 1x 512 bytes
            bInterval               0
          Endpoint Descriptor:
            bLength                 7
            bDescriptorType         5
            bEndpointAddress     0x02 EP 2 OUT
            bmAttributes            2
              Transfer Type  Bulk
              Synch Type     None
              Usage Type     Data
            wMaxPacketSize     0x0200 1x 512 bytes
            bInterval               0
    Device Qualifier (for other device speed):
      bLength                10
      bDescriptorType         6
      bcdUSB               2.10
      bDeviceClass            0 (Defined at Interface level)
      bDeviceSubClass         0
      bDeviceProtocol         0
      bMaxPacketSize0        64
      bNumConfigurations      1
    Device Status:     0x0001
      Self Powered

    Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Device Descriptor:
      bLength                18
      bDescriptorType         1
      bcdUSB               3.00
      bDeviceClass            9 Hub
      bDeviceSubClass         0 Unused
      bDeviceProtocol         3
      bMaxPacketSize0         9
      idVendor           0x1d6b Linux Foundation
      idProduct          0x0003 3.0 root hub
      bcdDevice            2.06
      iManufacturer           3 Linux 2.6.35-28-generic xhci_hcd
      iProduct                2 xHCI Host Controller
      iSerial                 1 0000:04:00.0
      bNumConfigurations      1
      Configuration Descriptor:
        bLength                 9
        bDescriptorType         2
        wTotalLength           25
        bNumInterfaces          1
        bConfigurationValue     1
        iConfiguration          0
        bmAttributes         0xe0
          Self Powered
          Remote Wakeup
        MaxPower                0mA
        Interface Descriptor:
          bLength                 9
          bDescriptorType         4
          bInterfaceNumber        0
          bAlternateSetting       0
          bNumEndpoints           1
          bInterfaceClass         9 Hub
          bInterfaceSubClass      0 Unused
          bInterfaceProtocol      0 Full speed (or root) hub
          iInterface              0
          Endpoint Descriptor:
            bLength                 7
            bDescriptorType         5
            bEndpointAddress     0x81 EP 1 IN
            bmAttributes            3
              Transfer Type  Interrupt
              Synch Type     None
              Usage Type     Data
            wMaxPacketSize     0x0004 1x 4 bytes
            bInterval              12
    Hub Descriptor:
      bLength                 9
      bDescriptorType        41
      nNbrPorts               4
      wHubCharacteristic 0x0009
        Per-port power switching
        Per-port overcurrent protection
        TT think time 8 FS bits
      bPwrOn2PwrGood         10 * 2 milli seconds
      bHubContrCurrent        0 milli Ampere
      DeviceRemovable      0x00
      PortPwrCtrlMask      0xff
    Hub Port Status:
      Port 1: 0000.0100 power
      Port 2: 0000.0100 power
      Port 3: 0000.0503 highspeed power enable connect
      Port 4: 0000.0503 highspeed power enable connect
    Device Status:     0x0003
      Self Powered
      Remote Wakeup Enabled

    This is the error given when I try to mount the hard drive:

    shinso@shinso-IdeaPad:~$ sudo mount /dev/sdb /mnt
    [sudo] password for shinso:
    mount: /dev/sdb: unknown device

    This is the output of "dmesg | tail":

    [30062.774178] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
    [30535.800977] usb 9-4: USB disconnect, address 3
    [30659.237342] Valid eCryptfs headers not found in file header region or xattr region
    [30659.237351] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
    [31259.268310] Valid eCryptfs headers not found in file header region or xattr region
    [31259.268313] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
    [31860.059058] Valid eCryptfs headers not found in file header region or xattr region
    [31860.059062] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
    [32465.220590] Valid eCryptfs headers not found in file header region or xattr region
    [32465.220593] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO

    I am using Ubuntu 10.10 (64-bit). Any help is appreciated.

    Read the article

< Previous Page | 124 125 126 127 128 129 130 131 132 133 134 135  | Next Page >