Search Results

Search found 2995 results on 120 pages for 'logical operators'.


  • What is a good platform for building a game framework targeting both web and native languages?

    - by fuzzyTew
    I would like to develop (or find, if one is already in development) a framework with support for accelerated graphics and sound, built on a system flexible enough to compile to the following:

    - native ppc/x86/x86_64/arm binaries, or a language which compiles to them
    - javascript
    - actionscript bytecode, or a language which compiles to it (actionscript 3, haxe)
    - optionally java

    I imagine, for example, creating an API where I can open windows and make OpenGL-like calls, and the framework maps this in a relatively efficient manner to either WebGL with a canvas object, 3D graphics in Flash, OpenGL ES 2 with EGL, or desktop OpenGL in an X11, Windows, or Cocoa window. I have so far looked into these avenues:

    Building the game library in haXe
    Pros:
    - Targets exist for php, javascript, actionscript bytecode, c++
    - High-level, object-oriented language
    Cons:
    - No support for finally{} blocks or destructors, making resource cleanup difficult
    - The C++ target does not allow room for producing highly optimized libraries -- the foreign function interface requires all primitive types to be boxed in a wrapper object, as if writing bindings for a scripting language; this feels unideal for real-time graphics and audio, especially when exporting low-level functions
    - Doesn't seem quite mature yet

    Using the C preprocessor to create a translator, writing programs entirely with macros
    Pros:
    - CPP is widespread and simple to use
    Cons:
    - This is an arduous task and probably the wrong tool for the job
    - CPP implementations differ widely in support for features (e.g. xcode cpp has no variadic macros despite claiming C99 compliance)
    - There is little-to-no room for optimization in this route

    Using LLVM's support for multiple backends to target c/c++ to web languages
    Pros:
    - Can code in c/c++
    - LLVM is a very mature, highly optimizing compiler, performing e.g. global inlining
    - Targets exist for actionscript (alchemy) and javascript (emscripten)
    Cons:
    - The actionscript target is closed source, unmaintained, and buggy
    - The javascript targets do not use features of HTML5 for appropriate optimization (e.g. linear memory with typed arrays) and are immature
    - An LLVM target must convert from low-level bytecode, so high-level constructs are lost, and bloated, unreadable code is created from translating individual instructions, which may be more difficult for an unprepared JIT to optimize; "jump" instructions cause problems for languages with no "goto" statement

    Using libclang to write a translator from C/C++ to web languages
    Pros:
    - A beautiful parsing library providing easy access to the code structure
    - Can code in C/C++
    - Has sponsored developer effort from Apple
    Cons:
    - Incomplete; the current feature set targets IDEs. Basic operators are unexposed and must be manually parsed from the returned AST element to be identified
    - Translating code prior to compilation may forgo optimizations assumed in c/c++, such as inlining

    Creating new code generators for clang to translate into web languages
    Pros:
    - Can code in C/C++, as with libclang
    Cons:
    - There is no API; the code structure is unstable
    - A much larger job than using libclang; the innards of clang are complex

    Building the game library in Common Lisp
    Pros:
    - Flexible, ancient, well-developed language
    - Extensive introspection should ease writing translators
    - Translators exist for at least javascript
    Cons:
    - Unfamiliar language
    - No standardized library functions, widely varying implementations

    Which of these avenues should I pursue? Do you know of any others, or of any systems that might be useful? Does a general project like this exist somewhere already?
Thank you for any input.
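    A minimal C++ sketch of the kind of rendering abstraction the question describes - all names here are hypothetical, purely to illustrate one API mapped onto WebGL/Flash/OpenGL ES/desktop OpenGL backends:

    // Hypothetical backend-neutral interface; each target (WebGL, Flash,
    // OpenGL ES 2 + EGL, desktop OpenGL) would supply its own implementation.
    class Renderer {
    public:
        virtual ~Renderer() {}
        virtual void openWindow(int width, int height, const char* title) = 0;
        virtual unsigned createVertexBuffer(const void* data, unsigned bytes) = 0;
        virtual void draw(unsigned vertexBuffer, int vertexCount) = 0;
        virtual void present() = 0;
    };

    // Selected per target at build time (native, emscripten/JS, haXe->SWF, ...).
    Renderer* createRenderer();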


  • C++0x Overload on reference, versus sole pass-by-value + std::move?

    - by dean
    It seems the main advice concerning C++0x's rvalue references is to add move constructors and move operators to your classes, until compilers default-implement them. But waiting is a losing strategy if you use VC10, because automatic generation probably won't be there until VC10 SP1, or in the worst case, VC11. Likely, the wait for this will be measured in years.

    Here lies my problem. Writing all this duplicate code is not fun. And it's unpleasant to look at. But this is a burden well received for those classes deemed slow. Not so for the hundreds, if not thousands, of smaller classes. ::sighs:: C++0x was supposed to let me write less code, not more!

    And then I had a thought, shared by many, I would guess. Why not just pass everything by value? Won't std::move + copy elision make this nearly optimal?

    Example 1 - Typical pre-0x constructor

        OurClass::OurClass(const SomeClass& obj) : obj(obj) {}

        SomeClass o;
        OurClass(o);            // single copy
        OurClass(std::move(o)); // single copy
        OurClass(SomeClass());  // single copy

    Cons: A wasted copy for rvalues.

    Example 2 - Recommended C++0x?

        OurClass::OurClass(const SomeClass& obj) : obj(obj) {}
        OurClass::OurClass(SomeClass&& obj) : obj(std::move(obj)) {}

        SomeClass o;
        OurClass(o);            // single copy
        OurClass(std::move(o)); // zero copies, one move
        OurClass(SomeClass());  // zero copies, one move

    Pros: Presumably the fastest. Cons: Lots of code!

    Example 3 - Pass-by-value + std::move

        OurClass::OurClass(SomeClass obj) : obj(std::move(obj)) {}

        SomeClass o;
        OurClass(o);            // single copy, one move
        OurClass(std::move(o)); // zero copies, two moves
        OurClass(SomeClass());  // zero copies, one move

    Pros: No additional code. Cons: A wasted move in cases 1 & 2. Performance will suffer greatly if SomeClass has no move constructor.

    What do you think? Is this correct? Is the incurred move a generally acceptable loss when compared to the benefit of code reduction?
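    A quick way to check the copy/move counts claimed above empirically is an instrumented class; a minimal sketch (C++0x/C++11 compiler assumed, shown here with the Example 3 constructor):

        #include <cstdio>
        #include <utility>

        struct SomeClass {
            SomeClass() {}
            SomeClass(const SomeClass&) { std::puts("copy"); }
            SomeClass(SomeClass&&)      { std::puts("move"); }
        };

        struct OurClass {
            SomeClass obj;
            // Example 3: pass by value, then move into the member.
            OurClass(SomeClass o) : obj(std::move(o)) {}
        };

        int main() {
            SomeClass o;
            OurClass a(o);             // prints: copy, move
            OurClass b(std::move(o));  // prints: move, move
            OurClass c((SomeClass())); // typically prints just: move (temporary elided into the parameter)
        }

    Swapping in the Example 1 or Example 2 constructors and re-running shows the other rows of the comparison.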


  • Exporting classes containing std:: objects (vector, map, etc) from a dll

    - by RnR
    I'm trying to export classes from a DLL that contain objects such as std::vectors and std::strings - the whole class is declared as dll export through:

        class DLL_EXPORT FontManager {

    The problem is that for members of the complex types I get this warning:

        warning C4251: 'FontManager::m__fonts' : class 'std::map<_Kty,_Ty>' needs to have dll-interface to be used by clients of class 'FontManager'
        with
        [
            _Kty=std::string,
            _Ty=tFontInfoRef
        ]

    I'm able to remove some of the warnings by putting the following forward class declarations before them, even though I'm not changing the type of the member variables themselves:

        template class DLL_EXPORT std::allocator<tCharGlyphProviderRef>;
        template class DLL_EXPORT std::vector<tCharGlyphProviderRef,std::allocator<tCharGlyphProviderRef> >;
        std::vector<tCharGlyphProviderRef> m_glyphProviders;

    Looks like the forward declaration "injects" the DLL_EXPORT for when the member is compiled, but is it safe? Does it really change anything when the client compiles this header and uses the std container on his side? Will it make all future uses of such a container DLL_EXPORT (and possibly not inline)? And does it really solve the problem that the warning tries to warn about?

    Is this warning anything I should be worried about, or would it be best to disable it in the scope of these constructs? The clients and the dll will always be built using the same set of libraries and compilers, and those are header-only classes... I'm using Visual Studio 2003 with the standard STD library.

    ---- Update ----

    I'd like to focus the question a bit more, as I see the answers are general, and here we're talking about std containers and types (such as std::string) - maybe the question really is: Can we disable the warning for standard containers and types available to both the client and the dll through the same library headers, and treat them just as we'd treat an int or any other built-in type? (It does seem to work correctly on my side.) If so, what should be the conditions under which we can do this? Or should using such containers be prohibited, or at least ultra care be taken to make sure no assignment operators, copy constructors etc. get inlined into the dll client?

    In general I'd like to know if you feel designing a dll interface having such objects (and for example using them to return stuff to the client as return value types) is a good idea or not, and why - I'd like to have a "high level" interface to this functionality... Maybe the best solution is what Neil Butterworth suggested - creating a static library?
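    One common way to make C4251 moot rather than merely suppressed is to keep the std:: members out of the exported class altogether via the pimpl idiom; a minimal sketch, assuming the extra indirection is affordable here:

        // FontManager.h - the exported interface mentions no std:: types,
        // so there is nothing for C4251 to warn about.
        class DLL_EXPORT FontManager
        {
        public:
            FontManager();
            ~FontManager();
            void addFont(const char* name);
        private:
            struct Impl;   // defined only in FontManager.cpp; holds the std::map etc.
            Impl* m_impl;
        };

    The alternative - same compiler and CRT on both sides plus #pragma warning(disable: 4251) around the members - is also widely used and matches the "treat them like built-in types" reading in the update above, but it does rely on every client rebuilding with the same toolchain.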


  • using CSS to center FLOATED input elements wrapped in a DIV

    - by Tim
    There's no shortage of questions and answers about centering, but I've not been able to get it to work given my specific circumstances, which involve floating. I want to center a container DIV that contains three floated input elements (split-button, text, checkbox), so that when my page is resized wider, they go from this:

        ||.....[ ][v] [ ] [ ] label .....||

    to this:

        ||......................[ ][v] [ ] [ ] label.......................||

    They float fine, but when the page is made wider, they stay to the left:

        ||.....[ ][v] [ ] [ ] label .......................................||

    If I remove the float so that the input elements are stacked rather than side-by-side:

        [ ][v]
        [ ]
        [ ] label

    then they DO center correctly when the page is resized. So it is the float being applied to the elements of the DIV#hbox inside the container that is messing up the centering. Is what I want to do impossible because of the way float is designed to work?

    Here is my DOCTYPE, and the markup does validate at w3c:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">

    Here is my markup:

        <div id="term1-container">
          <div class="hbox">
            <div>
              <button id="operator1" class="operator-split-button">equals</button>
              <button id="operator1drop">show all operators</button>
            </div>
            <div><input type="text" id="term1"></input></div>
            <div><input type="checkbox" id="meta2"></input><label for="meta2" class="tinylabel">meta</label></div>
          </div>
        </div>

    And here's the (not-working) CSS:

        #term1-container {text-align: center}
        .hbox {margin: 0 auto;}
        .hbox div {float: left;}

    I have also tried applying display: inline-block to the floated button, text-input, and checkbox; and even though I think it applies only to text, I've also tried applying white-space: nowrap to the #term1-container DIV, based on posts I've seen here on SO. And just to be a little more complete, here's the jQuery that creates the split-button:

        $(".operator-split-button").button().click(function() {
            alert("foo");
        }).next().button({
            text: false,
            icons: { primary: "ui-icon-triangle-1-s" }
        }).click(function() { positionOperatorsMenu(); });
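    For what it's worth, the usual way out of this is to stop floating the children and let inline-block do the side-by-side layout, since inline content is what text-align: center actually centers; a sketch, untested against the exact markup above:

        #term1-container { text-align: center; }
        .hbox { display: inline-block; }          /* shrink-wraps and centers as inline content */
        .hbox div { float: none; display: inline-block; vertical-align: middle; }

    (display: inline-block on a div is fine in modern browsers; IE6/7 of that era needed the zoom/*display:inline workaround.)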


  • Are there known problems with >= and <= and the eval function in JS?

    - by Augier
    I am currently writing a JS rules engine which at one point needs to evaluate boolean expressions using the eval() function. Firstly I construct an equation as such:

        var equation = "relation.relatedTrigger.previousValue" + " " + relation.operator + " " + "relation.value";

    relation.relatedTrigger.previousValue is the value I want to compare. relation.operator is the operator (either "==", "!=", "<=", "<", ">", ">="). relation.value is the value I want to compare with. I then simply pass this string to the eval function and it returns true or false as such:

        return eval(equation);

    This works absolutely fine (with words and numbers) for all of the operators except for >= and <=. E.g. when evaluating the equation:

        relation.relatedTrigger.previousValue <= 100

    It returns true when previousValue = 0, 1, 10, 100 & all negative numbers, but false for everything in between. I would greatly appreciate the help of anyone to either answer my question or to help me find an alternative solution. Regards, Augier.

    P.S. I don't need a speech on the insecurities of the eval() function. Any value given to relation.relatedTrigger.previousValue is predefined.

    edit: Here is the full function:

        function evaluateRelation(relation) {
            console.log("Evaluating relation");
            var currentValue;
            // if multiple values
            if (relation.value.indexOf(";") != -1) {
                var values = relation.value.split(";");
                for (x in values) {
                    var equation = "relation.relatedTrigger.previousValue" + " " + relation.operator + " " + "values[x]";
                    currentValue = eval(equation);
                    if (currentValue) return true;
                }
                return false;
            }
            // if single value
            else {
                // Evaluate the relation and get boolean
                var equation = "relation.relatedTrigger.previousValue" + " " + relation.operator + " " + "relation.value";
                console.log("relation.relatedTrigger.previousValue " + relation.relatedTrigger.previousValue);
                console.log(equation);
                return eval(equation);
            }
        }

    Answer: Provided by KennyTM below. A string comparison doesn't work. Converting to a number was needed.
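    For reference, the failure mode described (true for 0, 1, 10, 100 but false in between) is exactly what lexicographic string comparison produces ("2" sorts after "1", so "2" <= "100" is false). Converting both operands to numbers - and dispatching on the operator instead of using eval - avoids it; a sketch:

        // Compare numerically when both sides parse as numbers; fall back to
        // the original string comparison otherwise.
        function compare(previousValue, operator, value) {
            var a = Number(previousValue), b = Number(value);
            if (isNaN(a) || isNaN(b)) { a = previousValue; b = value; }
            switch (operator) {
                case "==": return a == b;
                case "!=": return a != b;
                case "<":  return a < b;
                case "<=": return a <= b;
                case ">":  return a > b;
                case ">=": return a >= b;
            }
            return false;
        }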


  • Ruby/Rails - Add records to an object with each loop iteration / Object vs Arrays

    - by ChrisWesAllen
    I'm trying to figure out how to add records to an existing object for each iteration of a loop. I'm having a hard time discovering the difference between an object and an array. I have this:

        @events = Event.find(1)
        @loops = Choices.find(:all, :limit => 5) # so loop for 5 instances of choice model
        for loop in @loops
          @events = Event.find(:all, :conditions => ["event.id = ?", loop.event_id])
        end

    I'm trying to add new events to the existing @events object based on the id of whatever the loop variable is. But the ( = ) operator just creates a new instance of the @events object. I tried ( += ) and ( << ) as operators but got the error "You have a nil object when you didn't expect it! You might have expected an instance of Array. The error occurred while evaluating nil".

    I tried creating an array:

        events = []
        events << Event.find(1)
        @loops = Choices.find(:all, :limit => 5) # so loop for 5 instances of choice model
        for loop in @loops
          events << Event.find(:all, :conditions => ["event.id = ?", loop.event_id])
        end

    But I don't know how to call that array's attributes within the view. With objects I was able to create a loop within the view and call all the attributes of that object, as shown below:

        <table>
          <% for event in @events %>
          <tr>
            <td><%= link_to event.title, event %></td>
            <td><%= event.start_date %></td>
            <td><%= event.price %></td>
          </tr>
          <% end %>
        </table>

    How could I do this with an array? So the questions are: 1) What's the difference between arrays and objects? 2) Is there a way to add into the existing object for each iteration? 3) If I use an array, is there a way to call the attributes for each array record within the view?
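    A sketch of the array-based version, keeping the question's Rails 2-era finder syntax: find(:all, ...) itself returns an array, so concat (or +=) keeps the collection flat, whereas << pushes each result array in as one nested element:

        @events = []
        @loops = Choices.find(:all, :limit => 5)
        for loop in @loops
          # concat appends the found records themselves, not the array containing them
          @events.concat(Event.find(:all, :conditions => ["id = ?", loop.event_id]))
        end

    Since @events is then simply an array of Event records, the view loop shown above (event.title, event.start_date, ...) works on it unchanged.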


  • Implementing default constructors

    - by James
    Implement the default constructor and the constructors with one and two int parameters. The one-parameter constructor should initialize the first member of the pair; the second member of the pair is to be 0. Overload binary operator + to add the pairs as follows: (a, b) + (c, d) = (a + c, b + d). Overload - analogously. Overload * on pairs and int as follows: (a, b) * c = (a * c, b * c). Write a program to test all the member functions and overloaded operators in your class definition. You will also need to write accessor (get) functions for each member.

    The definition of the class Pair:

        class Pair
        {
        public:
            Pair();
            Pair(int first, int second);
            Pair(int first);
            // other members and friends
            friend istream& operator>> (istream&, Pair&);
            friend ostream& operator<< (ostream&, const Pair&);
        private:
            int f;
            int s;
        };

    Self-Test Exercise #17:

        istream& operator>> (istream& ins, Pair& second)
        {
            char ch;
            ins >> ch; // discard initial '('
            ins >> second.f;
            ins >> ch; // discard comma ','
            ins >> second.s;
            ins >> ch; // discard final ')'
            return ins;
        }

        ostream& operator<< (ostream& outs, const Pair& second)
        {
            outs << '(';
            outs << second.f;
            outs << ", "; // I followed the Author's suggestion here.
            outs << second.s;
            outs << ")";
            return outs;
        }
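    A sketch of the remaining pieces the exercise asks for, beyond the stream operators (the accessor names are my own, assumed declared under "// other members and friends"; - follows the same pattern as +):

        Pair::Pair() : f(0), s(0) {}
        Pair::Pair(int first) : f(first), s(0) {}
        Pair::Pair(int first, int second) : f(first), s(second) {}

        int Pair::get_first() const  { return f; }
        int Pair::get_second() const { return s; }

        // (a, b) + (c, d) = (a + c, b + d)
        Pair operator+(const Pair& a, const Pair& b)
        {
            return Pair(a.get_first() + b.get_first(),
                        a.get_second() + b.get_second());
        }

        // (a, b) * c = (a * c, b * c)
        Pair operator*(const Pair& a, int c)
        {
            return Pair(a.get_first() * c, a.get_second() * c);
        }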


  • Is there a programming language with semantics close to English?

    - by ivo s
    Most languages allow you to 'tweak', to a certain extent, parts of the syntax (C++, C#) and/or the semantics that you will be using in your code (Katahdin, Lua). But I have not heard of a language that lets you just completely define how your code will look. So isn't there some language which already exists that has such capabilities to override all syntax and define semantics?

    An example of what I want to do, starting from the C# code below:

        foreach (Fruit fruit in Fruits)
        {
            if (fruit is Apple)
            {
                fruit.Price = fruit.Price / 2;
            }
        }

    I want to be able to write the above code in my perfect language like this:

        Check if any fruits are Macintosh apples and discount the price by 50%.

    The advantages that come to my mind, looking from a coder's perspective at this "imaginary" language, are:

    - It's very clear what is going on (self-descriptive) - it's plain English after all, even a kid would understand my program
    - It hides all the complexities which I have to write in C#. Why should I care to learn how if statements, arithmetic operators etc. work, since they are already implemented?

    The disadvantages that I see for a coder who will maintain this program are:

    - Maybe you would express this program differently from me, so you may not get all the information that I've expressed in my sentence
    - Programs can be quite verbose and hard to debug

    But if it were possible to even approximate this type of syntax, maybe more people would start programming, right? That would be amazing, I think. I could go to work and just write an essay to draw a square on a winform like this:

        Create a form called MyGreetingForm.
        Draw a square in the middle of MyGreetingForm with a side of 100 points.
        In the middle of the square write "Hello! Click here to continue" in Arial font.

    In the above code the parser must basically guess that I want to use the unnamed square from the previous sentence. It'd be hard to write such a smart parser, I guess, yet it's so simple what I want to do.

        If the user clicks on the square in the middle of MyGreetingForm, show MyMainForm.

    In the above code the compiler must basically: 1) generate an event handler, 2) check if there is any square in the middle of the form, and if there is, 3) hide the form and show another form. It looks very hard to do, but it doesn't look impossible IMO, at least to approximate this (I can personally generate a parser to perform the 3 steps above, no problem, and it's basically the same thing it has to do anyway when, even in C#, you add a.MyEvent += handler; so I don't see a problem here). So I'm thinking maybe somebody already did something like this? Or is there some practical burden of complexity in creating such an 'essay style' programming language which I can't see? I mean, what's the worst that can happen if the parser is not that good? Your program will crash, so you have to re-word it :)


  • C++ segmentation error when first parameter is null in comparison operator overload

    - by user1774515
    I am writing a class called Word that handles a C string and overloads the <, >, <=, >= operators.

    word.h:

        friend bool operator<(const Word &a, const Word &b);

    word.cc:

        bool operator<(const Word &a, const Word &b)
        {
            if (a == NULL && b == NULL) return false;
            if (a == NULL) return true;
            if (b == NULL) return false;
            return a.wd < b.wd; // wd is a valid c string
        }

    main:

        char* temp = NULL; // EDIT: I was mistaken, temp is a char pointer
        Word a("blah");    // a.wd = [b,l,a,h]
        cout << (temp < a);

    I get a segmentation error before the first line of the operator< method, after the last line in the main. I can correct the problem by writing cout << (a > temp); where operator> is similarly defined, and I get no errors. But my assignment requires (temp < a) to work, so this is where I ask for help.

    EDIT: I made a mistake the first time and said temp was of type Word, but it is actually of type char*. So I assume that the compiler converts temp to a Word using one of my constructors. I don't know which one it would use, or why this would work, since the first parameter is not a Word. Here is the constructor I think is being used to make the Word from temp:

        Word::Word(char* c, char* stoppers = NULL)
        {
            char* temporary = "\0";
            if (c == NULL)
                c = temporary;
            check(stoppers != NULL, "(Word(char*,char*))NULL pointer"); // exits the program if the expression is false
            if (strlen(c) == 0)
                size = DEFAULT_SIZE; // 10
            else
                size = strlen(c) + 1 + DEFAULT_SIZE;
            wd = new char[size];
            check(wd != NULL, "Word(char*,char*))heap overflow");
            delimiters = new char[strlen(stoppers) + 1]; // EDIT: changed to []
            check(delimiters != NULL, "Word(char*,char*))heap overflow");
            strcpy(wd, c);
            strcpy(delimiters, stoppers);
            count = strlen(wd);
        }

    wd is of type char*. Thanks for looking at this big question and trying to help. Let me know if you need more code to look at.
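    One way to make (temp < a) well-defined is to overload the operator for a char* on either side, so no implicit char* -> Word conversion (and no Word temporary built from NULL) is involved. A sketch - it assumes Word grants these non-member overloads friendship, or exposes an accessor for wd:

        #include <cstring>

        bool operator<(const char* a, const Word& b)
        {
            if (a == NULL) return true;   // mirror the Word/Word rule: NULL sorts first
            return std::strcmp(a, b.wd) < 0;
        }

        bool operator<(const Word& a, const char* b)
        {
            if (b == NULL) return false;
            return std::strcmp(a.wd, b) < 0;
        }

    (Note that strcmp also compares the string contents, whereas a.wd < b.wd in the posted operator compares the pointers themselves.)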


  • 'Scanner' does not name a type error in g++

    - by Max
    Hi. I'm trying to compile code in g++ and I get the following errors:

        In file included from scanner.hpp:8,
                         from scanner.cpp:5:
        parser.hpp:14: error: 'Scanner' does not name a type
        parser.hpp:15: error: 'Token' does not name a type

    Here's my g++ command:

        g++ parser.cpp scanner.cpp -Wall

    Here's parser.hpp:

        #ifndef PARSER_HPP
        #define PARSER_HPP

        #include <string>
        #include <map>
        #include "scanner.hpp"

        using std::string;

        class Parser
        {
            // Member Variables
        private:
            Scanner lex; // Lexical analyzer
            Token look;  // tracks the current lookahead token

            // Member Functions
            <some function declarations>
        };
        #endif

    and here's scanner.hpp:

        #ifndef SCANNER_HPP
        #define SCANNER_HPP

        #include <iostream>
        #include <cctype>
        #include <string>
        #include <map>
        #include "parser.hpp"

        using std::string;
        using std::map;

        enum {
            // reserved words
            BOOL, ELSE, IF, TRUE, WHILE, DO, FALSE, INT, VOID,
            // punctuation and operators
            LPAREN, RPAREN, LBRACK, RBRACK, LBRACE, RBRACE, SEMI, COMMA,
            PLUS, MINUS, TIMES, DIV, MOD, AND, OR, NOT, IS, ADDR,
            EQ, NE, LT, GT, LE, GE,
            // symbolic constants
            NUM, ID, ENDFILE, ERROR
        };

        class Token
        {
        public:
            int tag;
            int value;
            string lexeme;
            Token() {tag = 0;}
            Token(int t) {tag = t;}
        };

        class Num : public Token
        {
        public:
            Num(int v) {tag = NUM; value = v;}
        };

        class Word : public Token
        {
        public:
            Word() {tag = 0; lexeme = "default";}
            Word(int t, string l) {tag = t; lexeme = l;}
        };

        class Scanner
        {
        private:
            int line;  // which line the compiler is currently on
            int depth; // how deep in the parse tree the compiler is
            map<string,Word> words; // list of reserved words and used identifiers

            // Member Functions
        public:
            Scanner();
            Token scan();
            string printTag(int);
            friend class Parser;
        };
        #endif

    Anyone see the problem? I feel like I'm missing something incredibly obvious.
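    The error trace is consistent with a circular include: scanner.hpp includes parser.hpp, which includes scanner.hpp back; the include guard stops that second inclusion, so when scanner.cpp is compiled, parser.hpp is processed before class Scanner and class Token have been seen. A sketch of the usual fix - scanner.hpp does not actually need parser.hpp, because a friend declaration introduces the class name on its own:

        // scanner.hpp: remove '#include "parser.hpp"'. The friend declaration
        // below does not require Parser's definition to be visible.
        class Scanner
        {
            // ... members as before ...
            friend class Parser;
        };

        // parser.hpp keeps '#include "scanner.hpp"', which it genuinely needs:
        // Parser holds Scanner and Token members by value.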


  • Friendly way to parse XDocument

    - by Oli
    I have a class that various different XML schemes are created from. I create the various dynamic XDocuments via one (very long) statement, using conditional operators for optional elements and attributes. I now need to convert the XDocuments back to the class, but as they are coming from different schemes many elements and sub-elements may be optional. The only way I know of doing this is to use a lot of if statements. This approach doesn't seem very LINQ and uses a great deal more code than when I create the XDocument, so I wondered if there is a better way to do this? An example would be to get

        <?xml version="1.0"?>
        <root xmlns="somenamespace">
          <object attribute1="This is Optional" attribute2="This is required">
            <element1>Required</element1>
            <element1>Optional</element1>
            <List1>
              Optional List Of Elements
            </List1>
            <List2>
              Required List Of Elements
            </List2>
          </object>
        </root>

    into

        public class Object()
        {
            public string Attribute1;
            public string Attribute2;
            public string Element1;
            public string Element2;
            public List<ListItem1> List1;
            public List<ListItem2> List2;
        }

    in a more LINQ-friendly way than this:

        public bool ParseXDocument(string xml)
        {
            XNamespace xn = "somenamespace";
            XDocument document = XDocument.Parse(xml);
            XElement elementRoot = description.Element(xn + "root");
            if (elementRoot != null)
            {
                // Get Object Element
                XElement elementObject = elementRoot.Element(xn + "object");
                if (elementObject != null)
                {
                    if (elementObject.Attribute(xn + "attribute1") != null)
                    {
                        Attribute1 = elementObject.Attribute(xn + "attribute1");
                    }
                    if (elementObject.Attribute(xn + "attribute2") != null)
                    {
                        Attribute2 = elementObject.Attribute(xn + "attribute2");
                    }
                    else
                    {
                        // This is a required attribute so return false
                        return false;
                    }
                    // The ifs and if/elses get deeper and deeper for the next elements and lists etc....
                }
                else
                {
                    // Object is a required element so return false
                    return false;
                }
            }
            else
            {
                // Root is a required element so return false
                return false;
            }
            return true;
        }

    Update: Just to clarify, the ParseXDocument method is inside the "Object" class. Every time an XML document is received the Object class instance has some or all of its values updated.
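    For what it's worth, LINQ to XML's cast operators already encode the "null if missing" behavior, which flattens most of the if-nesting; a sketch (requires using System.Linq and System.Xml.Linq, and the member names follow the question's class):

        public bool ParseXDocument(string xml)
        {
            XNamespace xn = "somenamespace";
            XElement obj = XDocument.Parse(xml)
                                    .Elements(xn + "root")
                                    .Elements(xn + "object")
                                    .FirstOrDefault();
            if (obj == null) return false;  // root and object are required

            // Casting an XAttribute/XElement to string yields null when it is
            // missing, so optional items need no explicit null checks.
            // (Attributes are not qualified by the default xmlns, hence no xn.)
            Attribute1 = (string)obj.Attribute("attribute1");   // optional
            Attribute2 = (string)obj.Attribute("attribute2");
            if (Attribute2 == null) return false;               // required
            Element1 = (string)obj.Element(xn + "element1");

            return true;
        }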


  • JavaOne Latin America 2012 is a wrap!

    - by arungupta
    Third JavaOne in Latin America (2010, 2011) is now a wrap!

    Like last year, the event started with a Geek Bike Ride. I could not attend the bike ride because of pre-planned activities, but heard lots of good comments about it afterwards. This is a great way to engage with JavaOne attendees in an informal setting. I highly recommend you join next time!

    The JavaOne Blog provides great coverage of the opening keynotes. I talked about the great set of functionality that is coming in the Java EE 7 platform, and also shared the details on how Java EE 7 JSRs are willing to take help from the Adopt-a-JSR program. glassfish.org/adoptajsr bridges the gap between JUGs willing to participate and looking for areas on where to help. The different specification leads have identified areas where they are looking for feedback. So if your JUG is interested in picking a JSR, I recommend taking a look at glassfish.org/adoptajsr and jumping on the bandwagon.

    The main attraction for Tuesday evening was the GlassFish Party. The party was packed with Latin American JUG leaders, execs from Oracle, and local community members. Free-flowing food and beer/caipirinhas acted as great lubricant for great conversations. Some attendees were considering the migration from Spring to Java EE 6 and replacing their primary app server with GlassFish. Locaweb, a local hosting provider, sponsored a round of beer at the party as well. They are planning to come out with Java EE hosting next year, and GlassFish would be a logical choice for them ;) I heard lots of positive feedback about the party afterwards. Many thanks to Bruno Borges for organizing a great party! Check out some more fun pictures of the party!

    Next day, I gave a presentation on "The Java EE 7 Platform: Productivity and HTML 5"; the slides are now available. With so much new content coming in the platform:

    - Java Caching API (JSR 107)
    - Concurrency Utilities for Java EE (JSR 236)
    - Batch Applications for the Java Platform (JSR 352)
    - Java API for JSON (JSR 353)
    - Java API for WebSocket (JSR 356)

    and JAX-RS 2.0 (JSR 339) and JMS 2.0 (JSR 343) getting major updates, there is definitely a lot of excitement that was evident amongst the attendees. The talk was delivered in the biggest hall and had about 200 attendees. I also spent a lot of time talking to folks at the OTN Lounge. The JUG leaders appreciation dinner in the evening had its usual share of fun.

    Day 3 started with a session on "Building HTML5 WebSocket Apps in Java"; the slides are now available. The room was packed with about 150 attendees and there was good interaction in the room as well. A collaborative whiteboard built using WebSocket was very well received. The following tweets made it more worthwhile:

        "A WebSocket speek, by @ArunGupta, was worth every hour lost in transit. #JavaOneBrasil2012, #JavaOneBr"
        "@arungupta awesome presentation about WebSockets :)"

    The session was immediately followed by the hands-on lab "Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket". The lab covers JAX-RS 2.0, Jersey-specific features such as Server-Sent Events, and a WebSocket endpoint using JSR 356. The complete self-paced lab guide can be downloaded from here. The lab was planned for 2 hours, but several folks finished the entire exercise in about 75 minutes. The wonderfully written lab material and the added incentive of a Java EE 6 Pocket Guide did the trick ;-)

    I also spoke at "The Java Community Process: How You Can Make a Positive Difference". It was really great to see several JUG leaders talking about the Adopt-a-JSR program and other activities that attendees can do to participate in the JCP. I shared details about Adopt a Java EE 7 JSR as well.

    The community keynote in the evening was looking fun, but I had to leave in between to get through the peak Sao Paulo traffic :) Enjoy the complete set of pictures in the album.


  • The Incremental Architect's Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx

    The functionality of programs is entered via Entry Points. So what we're talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases:

    1. Analyzing the requirements to find the Entry Points and their signatures
    2. Designing the functionality to be executed when those Entry Points get triggered
    3. Implementing the functionality according to the design, aka coding

    I presume you're familiar with phase 1 in some way. And I guess you're proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. "Designing functionality? What's that supposed to mean?" you might already have thought.

    Here's my definition: To design functionality (or functional design for short) means thinking about... well, functions. You find a solution for what's supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it's "design", not "coding".

    And here is what functional design is not: it's not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It's equally basic. It's what code needs to do in order to deliver some functionality or quality. Logic is what's doing that needs to be done by software. Transformations are either done through expressions or API-calls. And then there is alternative control flow depending on the result of some expression. Basically it's just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do).

    But calling your own function is not logic. It's not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn't it.

    What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent). That's no small feat, however. Evolvable code can hardly be overestimated. That's why to me functional design is so important. It's at the core of software development.

    To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need to back out through refactoring. Functional design on the other hand is bloodless without actual code. It's just a theory with no experiments to prove it.

    But how to do functional design?

    An example of functional design

    Let's assume a program to de-duplicate strings.
    The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line

        C:\>deduplicate "a, b, a, c, d, b, e, c, a"
        a, b, c, d, e

    ...or by clicking on a GUI button. This leads to the Entry Point function getting called. It's the program's main function in case of the batch version, or a button click event handler in the GUI version. That's the physical Entry Point, so to speak. It's inevitable.

    What then happens is a three step process:

    1. Transform the input data from the user into a request.
    2. Call the request handler.
    3. Transform the output of the request handler into a tangible result for the user.

    Or to phrase it a bit more generally:

    1. Accept input.
    2. Transform input into output.
    3. Present output.

    This does not mean any of these steps requires a lot of effort. Maybe it's just one line of code to accomplish it. Nevertheless it's a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP).

    Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it's still on a meta-level. The application to the domain at hand is easy, though:

    1. Accept string list from command line
    2. De-duplicate
    3. Present de-duplicated strings on standard output

    And this concrete list of processing steps can easily be transformed into code:

        static void Main(string[] args)
        {
            var input = Accept_string_list(args);
            var output = Deduplicate(input);
            Present_deduplicated_string_list(output);
        }

    Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you're not so sure how to go about the de-duplication, for example. Then just implement what's easy right now, e.g.

        private static string Accept_string_list(string[] args)
        {
            return args[0];
        }

        private static void Present_deduplicated_string_list(string[] output)
        {
            var line = string.Join(", ", output);
            Console.WriteLine(line);
        }

    Accept_string_list() contains logic in the form of an API-call. Present_deduplicated_string_list() contains logic in the form of an expression and an API-call.

    And then repeat the functional design for the remaining processing step. What's left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design, you're left with just functions. So which functions could make up the de-duplication? Here's a suggestion:

    De-duplicate
    1. Parse the input string into a true list of strings.
    2. Register each string in a dictionary/map/set. That way duplicates get cast away.
    3. Transform the data structure into a list of unique strings.

    Processing step 2 obviously was the core of the solution. That's where real creativity was needed. That's the core of the domain.
    But now, after this refinement, the implementation of each step is easy again:

        private static string[] Parse_string_list(string input)
        {
            return input.Split(',')
                        .Select(s => s.Trim())
                        .ToArray();
        }

        private static Dictionary<string,object> Compile_unique_strings(string[] strings)
        {
            return strings.Aggregate(
                new Dictionary<string, object>(),
                (agg, s) => { agg[s] = null; return agg; });
        }

        private static string[] Serialize_unique_strings(Dictionary<string,object> dict)
        {
            return dict.Keys.ToArray();
        }

    With these three additional functions Main() now looks like this:

        static void Main(string[] args)
        {
            var input = Accept_string_list(args);
            var strings = Parse_string_list(input);
            var dict = Compile_unique_strings(strings);
            var output = Serialize_unique_strings(dict);
            Present_deduplicated_string_list(output);
        }

    I think that's very understandable code: just read it from top to bottom and you know how the solution to the problem works. It's a mirror image of the initial design:

    1. Accept string list from command line
    2. Parse the input string into a true list of strings.
    3. Register each string in a dictionary/map/set. That way duplicates get cast away.
    4. Transform the data structure into a list of unique strings.
    5. Present de-duplicated strings on standard output

    You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too.

    So much for an initial concrete example. Now it's time for some theory. Because there is method to this madness ;-) The above has only scratched the surface.

    Introducing Flow Design

    Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic; a process, on the other hand, consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1]

    In its simplest form a process can be written as a bullet point list of steps, e.g.

    - Get data from user
    - Output result to user
    - Transform data
    - Parse data
    - Map result for output

    Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.?

    1. Get data from user
    2. Parse data
    3. Transform data
    4. Map result for output
    5. Output result to user

    That's great for a start into functional design. It's better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion?

    My recommendation: For any given function you're supposed to implement, first do a functional design. Then, once you're confident you know the processing steps - which are pretty small - refine and code them using TDD. You'll see that's much, much easier - and leads to cleaner code right away.
    For more information on this approach, which I call "Informed TDD", read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I'd say. It's more according to the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with.

    So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always that simple to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their "dimensionality": text just has one dimension, it's sequential. Alternatives and parallelism are hard to encode in text. In addition, functional design using numbered lists lacks data. It's not visible what the input, output, and state of the processing steps are.

    That's why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient.

    Visualizing processes

    The building block of the functional design notation is a functional unit. I mostly draw it like this:

    Something is done, it's clear what goes in, it's clear what comes out, and it's clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That's why I call this approach to functional design Flow Design. It's about data flow instead of control flow.

    Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That's what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: "Programming is about detail". Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling it. To me that's also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API-call).

    TDD unfortunately mixes both responsibilities. It's just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that's one reason why TDD has failed to deliver on its promise for many developers.

    Using functional units as building blocks of functional design, processes can be depicted very easily. Here's the initial process for the example problem:

    For each processing step draw a functional unit and label it. Choose a verb or an "action phrase" as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step.
    Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean "no data is flowing", but nevertheless a signal is sent.

    A name like "list" or "strings" in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like "String" or "Customer" on the other hand signifies a data type. If you like, you also can combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent.

    Flows wired-up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It's like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What's the trigger? That's why in the above process even the first processing step has an input.

    If you like, view such 1D-flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).

    The Principle of Mutual Oblivion

    A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don't know where their input is coming from (or even when it's gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don't know where their output is going. They just produce it in their own time, independent of other functional units. That means at least conceptually all functional units work in parallel.

    Functional units don't know their "deployment context". They know nothing about the overall flow they are placed in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don't depend on state or resources.

    I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as of an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units.

    By building software in such a manner, functional design interestingly follows nature. Nature's building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell "controlling" a muscle cell for example:[2]

    The nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is "attached to". Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell "attached to" it. Saying "the nerve cell is controlling the muscle cell" thus only makes sense when viewing both from the outside. "Control" is a concept of the whole, not of its parts. Control is created by wiring-up parts in a certain way. Both cells are mutually oblivious.
    Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it's coming from, neither cell cares about.

    Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism.

    How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become "intelligent designers" of "software cells" which we put together to form a "software organism" which responds in satisfying ways to triggers from its environment.

    My bet is: If nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler "machines"? So my rule is: Wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That's what Flow Design is about. In that way it's even universal, I'd say. Its notation can also be applied to biology:

    Never mind labeling the functional units with nouns. That's ok in Flow Design. You'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 "software cells" (functional units). Both, though, follow the same basic principle.

    Translating functional units into code

    Moving from functional design to code is no rocket science. In fact it's straightforward. There are two simple rules:

    1. Translate an input port to a function.
    2. Translate an output port either to a return statement in that function or to a function pointer visible to that function.

    The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let's be clear about one thing: There is no dependency injection in nature. For all of an organism's complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks.

    Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called "stop words".

    In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: Any number of (word) data items can flow from the functional unit for each input data item. It might be none or one or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value.
    So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

        void filter_stop_words(
            string word,
            Action<string> onNoStopWord)
        {
            if (...check if not a stop word...)
                onNoStopWord(word);
        }

    If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you're right. Conceptually, though, it's not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only.

    Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it's called an event. In C# that's even an explicit feature.

        class Filter {
            public void filter_stop_words(string word)
            {
                if (...check if not a stop word...)
                    onNoStopWord(word);
            }

            public event Action<string> onNoStopWord;
        }

    When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.

    Another example of 1D functional design

    Let's see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much cited solution, including extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:

        string WordWrap(string text, int maxLineLength) {...}

    That's not an Entry Point since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length it can be split into multiple parts, each fitting in a line.

    Flow Design

    Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?

    - Words need to be assembled into lines
    - Words need to be extracted from the input text
    - The resulting lines need to be assembled into the output text
    - Words too long to fit in a line need to be split

    Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle "average words" (words not longer than a line). Here's the Flow Design for this increment:

    The first three bullet points turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It's not outside the brackets but inside.
    That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail.

    Does any processing step require further refinement? I don't think so. They all look pretty "atomic" to me. And if not... I can always backtrack and refine a process step using functional design later, once I've gained more insight into a sub-problem.

    Implementation

    The implementation is straightforward, as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls:

    Easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve, as you'll see.

    Flow Design - Increment 2

    So far only texts consisting of "average words" are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope.

    Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend it - instead of changing well tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead about where extensions might need to be made in the future. I just "phase in" the remaining processing step:

    Since neither Extract words nor Reformat know of their environment, neither needs to be touched due to the "detour". The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

    Implementation - Increment 2

    A trivial implementation checking the assumption that this works does not do anything to split long words. The input is just passed on:

    Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin's solution:[4]

    How does this solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach.

    Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so. Because his solution does not cover all the "word wrap situations" the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then... Is a difference in LOC that important as long as it's in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design.

    But don't take my word for it. Try Flow Design on larger problems and compare for yourself.
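    The screenshots of the WordWrap flow code do not survive in this excerpt; what follows is a hedged C# reconstruction of the final flow from the design steps described above - the function names and bodies are my own approximation, not the author's original code (requires using System and System.Collections.Generic):

        static string WordWrap(string text, int maxLineLength)
        {
            var words = Extract_words(text);
            words = Split_long_words(words, maxLineLength); // increment 2, "phased in"
            var lines = Assemble_lines(words, maxLineLength);
            return Assemble_text(lines);
        }

        static string[] Extract_words(string text)
        {
            return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        }

        static string[] Split_long_words(string[] words, int maxLineLength)
        {
            var result = new List<string>();
            foreach (var w in words)
                for (int i = 0; i < w.Length; i += maxLineLength)
                    result.Add(w.Substring(i, Math.Min(maxLineLength, w.Length - i)));
            return result.ToArray();
        }

        static string[] Assemble_lines(string[] words, int maxLineLength)
        {
            var lines = new List<string>();
            var line = "";
            foreach (var w in words)
            {
                if (line.Length == 0) line = w;
                else if (line.Length + 1 + w.Length <= maxLineLength) line += " " + w;
                else { lines.Add(line); line = w; }
            }
            if (line.Length > 0) lines.Add(line);
            return lines.ToArray();
        }

        static string Assemble_text(string[] lines)
        {
            return string.Join("\n", lines);
        }

    Each function here is mutually oblivious in the article's sense: none knows where its input comes from or where its output goes; the wiring in WordWrap() is the only place the flow is visible.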
Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin´s solution.[4] How does his solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin´s. At least it seems so, because his solution does not cover all the “word wrap situations” the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then… Is a difference in LOC that important as long as it´s in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it´s also clarity in design. But don´t take my word for it. Try Flow Design on larger problems and compare for yourself. What´s the easier, more straightforward way to clean code? And keep in mind: You ain´t seen all yet ;-) There´s more to Flow Design than described in this chapter.

In closing

I hope I was able to give you an impression of functional design that makes you hungry for more. To me it´s an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that´s necessary read my blog article here. For now let me point out to you - if you haven´t already noticed - that Flow Design is a general purpose declarative language. It´s “programming by intention” (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We´re talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

PS: If you like what you read, consider getting my ebook “The Incremental Architect´s Napkin”. It´s where I compile all the articles in this series for easier reading.

[1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also to me it seems the real world is full of state and side effects. So why give them such a bad image? That´s why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don´t put too much of it into a single processing step.

[2] Image taken from www.physioweb.org

[3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with the signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you´re using a Functional Programming language it´s of course a no brainer.

[4] Taken from his blog post “The Craftsman 62, The Dark Path”.

    Read the article

  • Installer Reboots at "Detecting hardware" (disks and other hardware) on all recent Server Installs

    - by Ryan Rosario
    I have a very frustrating problem with my PC. I cannot install any recent version of Ubuntu Server (or even Desktop) since 9.04, even using the text-based installer. I boot from a USB stick created by Unetbootin (I also tried other methods, such as Startup Disk Creator, with no difference). On the Server installer, it gets to "Detecting hardware" (the second one, about disks and all other hardware, not network hardware) and then either hangs at 0% (I waited 24 hours) or reboots after a minute or two. My system (late 2007):

- ASUS P5NSLI motherboard
- Intel Core 2 Duo E6600 2.4GHz
- 2 x 1GB Corsair 667MHz RAM
- nVidia GeForce 6600

I have unplugged everything (including the only hard disk, CD-ROMs and floppy). I have only one stick of RAM (tried each one to no avail) and am booting the installer from a USB stick (booting from CD-ROM yields the same problem). I also tried several of the boot options (nomodeset, nousb, acpi=off, noapic, i915.modeset=1/0, xforcevesa, in all combinations) to no avail. The only active parts of my system are the video card, mouse, keyboard and USB stick. I have also updated the BIOS to the most recent version. (FWIW, on the Desktop installer, I get a black screen after hitting the Install option.) Even after removing "quiet" I am unable to see what kernel panic is occurring (or not occurring) to cause the install to crash. I am only able to save the debug logs via a simple webserver in the installer. After the last line (I repeatedly refreshed), the server stops responding and the installer hangs or reboots:

Jan 2 01:04:03 main-menu[302]: INFO: Menu item 'disk-detect' selected
Jan 2 01:04:04 kernel: [ 309.154372] sata_nv 0000:00:0e.0: version 3.5
Jan 2 01:04:04 kernel: [ 309.154409] sata_nv 0000:00:0e.0: Using SWNCQ mode
Jan 2 01:04:04 kernel: [ 309.154531] sata_nv 0000:00:0e.0: setting latency timer to 64
Jan 2 01:04:04 kernel: [ 309.164442] scsi0 : sata_nv
Jan 2 01:04:04 kernel: [ 309.167610] scsi1 : sata_nv
Jan 2 01:04:04 kernel: [ 309.167762] ata1: SATA max UDMA/133 cmd 0x9f0 ctl 0xbf0 bmdma 0xd400 irq 10
Jan 2 01:04:04 kernel: [ 309.167774] ata2: SATA max UDMA/133 cmd 0x970 ctl 0xb70 bmdma 0xd408 irq 10
Jan 2 01:04:04 kernel: [ 309.167948] sata_nv 0000:00:0f.0: Using SWNCQ mode
Jan 2 01:04:04 kernel: [ 309.168071] sata_nv 0000:00:0f.0: setting latency timer to 64
Jan 2 01:04:04 kernel: [ 309.171931] scsi2 : sata_nv
Jan 2 01:04:04 kernel: [ 309.173793] scsi3 : sata_nv
Jan 2 01:04:04 kernel: [ 309.173943] ata3: SATA max UDMA/133 cmd 0x9e0 ctl 0xbe0 bmdma 0xe800 irq 11
Jan 2 01:04:04 kernel: [ 309.173954] ata4: SATA max UDMA/133 cmd 0x960 ctl 0xb60 bmdma 0xe808 irq 11
Jan 2 01:04:04 kernel: [ 309.174061] pata_amd 0000:00:0d.0: version 0.4.1
Jan 2 01:04:04 kernel: [ 309.174160] pata_amd 0000:00:0d.0: setting latency timer to 64
Jan 2 01:04:04 kernel: [ 309.177045] scsi4 : pata_amd
Jan 2 01:04:04 kernel: [ 309.178628] scsi5 : pata_amd
Jan 2 01:04:04 kernel: [ 309.178801] ata5: PATA max UDMA/133 cmd 0x1f0 ctl 0x3f6 bmdma 0xf000 irq 14
Jan 2 01:04:04 kernel: [ 309.178811] ata6: PATA max UDMA/133 cmd 0x170 ctl 0x376 bmdma 0xf008 irq 15
Jan 2 01:04:04 net/hw-detect.hotplug: Detected hotpluggable network interface eth0
Jan 2 01:04:04 net/hw-detect.hotplug: Detected hotpluggable network interface lo
Jan 2 01:04:04 kernel: [ 309.485062] ata3: SATA link down (SStatus 0 SControl 300)
Jan 2 01:04:04 kernel: [ 309.633094] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jan 2 01:04:04 kernel: [ 309.641647] ata1.00: ATA-8: ST31000528AS, CC38, max UDMA/133
Jan 2 01:04:04 kernel: [ 309.641658] ata1.00: 1953525168 sectors, multi 1: LBA48 NCQ (depth 31/32)
Jan 2 01:04:04 kernel: [ 309.657614] ata1.00: configured for UDMA/133
Jan 2 01:04:04 kernel: [ 309.657969] scsi 0:0:0:0: Direct-Access ATA ST31000528AS CC38 PQ: 0 ANSI: 5
Jan 2 01:04:04 kernel: [ 309.658482] sd 0:0:0:0: Attached scsi generic sg0 type 0
Jan 2 01:04:04 kernel: [ 309.658588] sd 0:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
Jan 2 01:04:04 kernel: [ 309.658812] sd 0:0:0:0: [sda] Write Protect is off
Jan 2 01:04:04 kernel: [ 309.658823] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Jan 2 01:04:04 kernel: [ 309.658918] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 2 01:04:04 kernel: [ 309.675630] sda: sda1 sda2
Jan 2 01:04:04 kernel: [ 309.676440] sd 0:0:0:0: [sda] Attached SCSI disk
Jan 2 01:04:05 kernel: [ 309.969102] ata2: SATA link down (SStatus 0 SControl 300)
Jan 2 01:04:05 kernel: [ 310.281137] ata4: SATA link down (SStatus 0 SControl 300)

Anybody have any additional ideas I could try? I am getting ready to just toss the motherboard.

    Read the article

  • Principles of Big Data by Jules J. Berman, O’Reilly Media Book Review

    - by Compudicted
    Originally posted on: http://geekswithblogs.net/Compudicted/archive/2013/11/04/principles-of-big-data-by-jules-j-berman-orsquoreilly-media.aspx

A fantastic book! It must become part, if it is not already, of the fundamentals of Big Data as a field of science. I highly recommend it to those who are into Big Data practice. Yet, I confess this book is one of my best reads this year, and for a number of reasons: the book is full of wisdom, intimate insight, historical facts and real-life examples of how Big Data projects get conceived, operate and, sadly, sometimes die. But not only that; most importantly, the book is filled with valuable advice and an accurate, even overwhelming amount of references (in the positive sense). And the author does not even stop there: there are numerous technical excerpts, links and examples that allow you to quickly accomplish many daunting tasks, or make you aware of what you need to perform as a data practitioner (excuse my use of the word practitioner, I just did not find a better substitute to reference all who face Big Data). Be aware that Jules Berman’s background is in medicine, so naturally this book discusses that subject a lot, as it is very dear to the author’s heart, I believe. This does not make the book any less significant; quite the opposite: I trust that if there is an area in science or practice where the biggest benefits can be reaped from Big Data projects, it is indeed medical science - let’s make cancer history!

On a personal note, as a database and BI professional, it helped me better understand the motives behind Big Data initiatives, the underwater rivers and high-altitude winds that divert or propel them forward. Additionally, I was impressed by the depth and number of mining algorithms covered. This made me very curious and tempted me to find out more about these indispensable attributes of Big Data, so I will surely be stretching my wallet to acquire several books that go into more depth on the most popular of them. My favorite parts of the book - well, all of them, actually, but especially chapter 9, "Analysis"; it is just very close to my heart, and the real reason is that it let me see what I do with data from a different angle. And then the next, "Special Considerations" - they are just two logical parts. The writing style of this book is accessible to all levels, and I had no technical problem reading it in ebook format on my 8” tablet or a large-screen monitor. If asked to say at least something negative, I would note that the first part of the book initially reads like academic material, though the tone relaxes as the book progresses. I am impressed with Jules’ ability to use several programming languages and OSS tools - bravo! And I agree, it is not too, too hard to grasp at least the principles of a modern programming language, which seems to have become a de facto standard piece of knowledge for any modern human being.

So grab a copy of this book, read it end to end, and shield yourself from making mistakes at any stage of your Big Data initiative; this book also helps build better future Big Data projects.

Disclaimer: I received a free electronic copy of this book as part of the O'Reilly Blogger Program.

    Read the article

  • POP Forums v9 Beta 1 for ASP.NET MVC 3 posted to CodePlex!

    - by Jeff
    As promised, I posted a beta build of my forum app for ASP.NET MVC 3. Get the new goodies here: http://popforums.codeplex.com/releases/view/58228

This is the first beta for the ASP.NET MVC 3 version of POP Forums. It is nearly feature complete, and ready for testing and feedback. For previous release notes, look here, here and here. Check out the live preview: http://preview.popforums.com/Forums. Setup instructions are on the home page of this project.

The new hotness in the beta, or what has been done since the last preview:

- All views converted to use Razor
- E-mail subscription/notification of new posts
- New post indicators/mark read buttons
- Permalinks to posts
- Jump to newest post (from new post indicators)
- Recent topics
- Favorite topics
- Moderator functions for topics (pin/close/delete, plus move and rename)
- Search, ported from v8. Not a ton of optimization here, or new unit testing, but the old version worked pretty well
- User posts (topics the user posted in)
- Forgot password
- Vanity items (signatures and avatars)
- Hide vanity items per user preference
- Some minor data caching where appropriate
- A little bit of UI refinement
- Lots-o-bug fixes
- Lots-o-unit tests

What's next? The plan between now and the next beta is as follows:

- Continue working through features/tasks, and fix bugs as they're reported
- Integrate the forum into a real, production site
- Refine the UI
- Refactor as much as possible... the code organization is not entirely logical in some places

After the second beta, a release candidate will follow, with a real "final" release after that. Subsequent releases should come relatively frequently and without a lot of risk. The trick in building this thing has been that it mostly tossed the previous WebForms version, which was all full of crusties. The timetable for this is a little harder to pin down, as day jobs and families will have their effect.

Other notes

Refactoring will be a priority. As the features of MVC have evolved, so have my desires to use it in a fashion that makes things clear and easy to follow. I don't even know if anyone will ever start mucking around in the code, but on the off chance they do, I'd like what they find to not suck. Other nice-to-haves are builds to target Windows Azure and SQL CE. A nice setup UI would be super too. I think the ASP.NET MVC world has gone long enough without a decent forum.

The biggest challenge that I've found is making the forum something that can be dropped into any app. While it does rope its views into an area, areas are mostly just routing details.
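For context, this is roughly what area wiring looks like in ASP.NET MVC 3 - a minimal sketch of my own, not code from POP Forums; the class name and route are illustrative:

    using System.Web.Mvc;

    public class PopForumsAreaRegistration : AreaRegistration
    {
        public override string AreaName
        {
            get { return "PopForums"; }
        }

        public override void RegisterArea(AreaRegistrationContext context)
        {
            // Routes all /Forums/* requests into the area's controllers.
            context.MapRoute(
                "PopForums_default",
                "Forums/{controller}/{action}/{id}",
                new { action = "Index", id = UrlParameter.Optional });
        }
    }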
I haven't thought of a clever way yet to limit dependency injection, for example, to just the forum bits. I mean, everyone should be using Ninject, but how realistic is that? ;) How much time and effort should you spend on POP Forums in its current state? Change is inevitable, but at this point I'm reasonably committed to not changing the database schema. I really think it will stay as-is. All bets are off for the various interfaces throughout the app, but the data should generally resist change. It's not even that different from v8, which was one of the original goals, because I didn't want to rewrite SQL or introduce a new ORM or whatever. My point is that if you wanted to build a site around this today, even though it's not entirely functional, I think it's low risk in terms of data loss. I can't vouch for whether or not you know what you're doing.

I've been having some chats with people lately about quoting posts, and honestly there has to be something better and more straightforward. That continues to be a holy grail of mine, and some day I hope to find it.

Enjoy... it's starting to feel more real every day!

    Read the article

  • SQLAuthority News – Public Training Classes In Hyderabad 12-14 May – Microsoft SQL Server 2005/2008

    - by pinaldave
    After successfully delivering many corporate trainings as well as private trainings, Solid Quality Mentors, India is launching public training in Hyderabad for SQL Server 2008 and SharePoint 2010. This is going to be one of the most unique events in India where Solid Quality Mentors are offering public classes. I will be leading the training on Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning.

This intensive, 3-day course intends to give attendees an in-depth look at Query Optimization and Performance Tuning in SQL Server 2005 and 2008. Designed to prepare SQL Server developers and administrators for a transition into SQL Server 2005 or 2008, the course covers the best practices for a variety of essential tasks in order to maximize performance. Each day of the course ends with a discussion of your real-world problems to find appropriate solutions. Note: Scroll down for course fees, discount, dates and location. Do not forget to take advantage of the discount code 'SQLAuthority'.

The training premises are very well-equipped, with 1:1 computers. Every participant will be provided with printed course materials. I will pick up your entire lunch tab, and we will have lots of SQL talk together. The best participant will receive a special gift at the end of the course. Even though the quality of the material to be delivered with the course will be of an extremely high standard, the course fees are set at a very moderate rate. The fee for the course is INR 14,000/person for the whole 3-day convention. At the rate of 1 USD = 44 INR, this fee converts to less than USD 300. At this rate, it is entirely possible to fly from anywhere in the world to India, take the training, and still save handsome pocket money. It would be even better if you register using the discount code "SQLAuthority", for you will instantly get an INR 3,000 discount, reducing the total cost of the training to INR 11,000/person for the whole 3-day course. This is a one-time offer and will not be available in the future. Please note that there will be a 10.3% service tax on course fees.

To register, either send an email to [email protected] or call +91 95940 43399. Feel free to drop me an email at [email protected] for any additional information and clarification.

Training Date and Time: May 12-14, 2010, 10 AM - 6 PM.
Training Venue: Abridge Solutions, #90/B/C/3/1, Ganesh GHR & MSY Plaza, Vittalrao Nagar, Near Image Hospital, Madhapur, Hyderabad – 500 081.

The details of the course are as listed below.

Day 1: Strengthen the basics, along with SQL Server 2005/2008 new features
- Module 01: Subqueries, Ranking Functions, Joins and Set Operations
- Module 02: Table Expressions
- Module 03: TOP and APPLY
- Module 04: SQL Server 2008 Enhancements

Day 2: Query Optimization & Performance Tuning 1
- Module 05: Logical Query Processing
- Module 06: Query Tuning
- Module 07: Introduction to the Query Processor
- Module 08: Review of common query coding which causes poor performance

Day 3: Query Optimization & Performance Tuning 2
- Module 09: SQL Server Indexing and index maintenance
- Module 10: Plan Guides, query hints, UDFs, and Computed Columns
- Module 11: Understanding SQL Server Execution Plans
- Module 12: Real World Index and Optimization Tips

Download the complete PDF brochure. We are also going to have SharePoint 2010 training by Joy Rathnayake on 10-11 May. All the details for the discount apply to it as well.

Reference: Pinal Dave (http://blog.SQLAuthority.com)
Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL SERVER – Top 10 “Ease of Use” Features of expressor Studio

    - by pinaldave
    expressor Studio is a new data integration platform that is being marketed as the easiest-to-use tool of its kind. But "easy to use" can be a relative term - an expert can find a very complex system easy, but a beginner might be stumped. A recent article online discussed exactly what makes expressor Studio so easy to use, and here is my view on the subject.

Simple Installation
There is one pop-up for one .exe file, and nothing more. You can't get much simpler than this. It is also in the familiar Windows design, so there should be no surprises.

No 3rd-party software dependency
Have you ever tried to download software, only to be slowed down by the need to download a compatible system to run the program, and another to read the user manual, and so on? expressor Studio was designed specifically to avoid this problem.

Microsoft Office-like Ribbon Bar and Menus
As mentioned before, everything is in the familiar Windows design, from the pop-up windows to the toolbars and menus. There should be no learning curve for using this program, or even for simply navigating around a new system.

General Development Design Interface
This software has been designed to be simple and straightforward. Projects can be arranged in a simple "tree" design that is totally collapsible and can easily be added to or "trimmed" with the click of a button. It was meant to be logical and easy to follow.

Integrated Contextual Help
This is a fancy way of saying that you can practically yell "help!" if you do get stuck on something. Solving a problem is as simple as highlighting and hitting F1 for contextual help.

Visual Indicators and Messages
Wouldn't it be nice to know exactly where something has gone wrong before trying to complete a project? expressor Studio has a built-in system to catch mistakes, highlight them in a bright color, flash a warning message, and even disable functions before you can continue - and possibly lose hours of work.

Property Inputs and Selectors
Every operator will have a list of requirements that need to be filled in. But don't worry; you won't have to make stuff up to fill in the boxes. Each one will have a drop-down menu with options to choose from - but not so many as to be confusing.

Connection Wizards
Configuring connections can be the hardest part of a project. But not with the expressor Studio connection wizard. A familiar, Windows-style menu will walk you through connections so quickly you'll forget what trouble it used to be.

Templates
With large, complex projects, a majority of your time is often spent simply setting up the files and inputting data. But expressor Studio allows you to create one file and then save it as a template, saving you hours of boring data input.

Extension Manager
Let's say that you need a little more functionality or some new features in your program. A lot of software requires you to download complex plug-ins that need to be decompressed and installed. However, expressor Studio has extended its system with an Extension Manager, which allows for quick and easy installation of the functionality you need, without the need to download and decompress.

Reference: Pinal Dave (http://blog.sqlauthority.com)
Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • OSI Model

    - by kaleidoscope
    The Open System Interconnection Reference Model (OSI Reference Model or OSI Model) is an abstract description for layered communications and computer network protocol design. In its most basic form, it divides network architecture into seven layers which, from top to bottom, are the Application, Presentation, Session, Transport, Network, Data Link, and Physical Layers. It is therefore often referred to as the OSI Seven Layer Model. A layer is a collection of conceptually similar functions that provide services to the layer above it and receive services from the layer below it.

Description of OSI layers:

Layer 1: Physical Layer
- Defines the electrical and physical specifications for devices. In particular, it defines the relationship between a device and a physical medium.
- Establishment and termination of a connection to a communications medium.
- Participation in the process whereby the communication resources are effectively shared among multiple users.
- Modulation or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel.

Layer 2: Data Link Layer
- Provides the functional and procedural means to transfer data between network entities.
- Detects and possibly corrects errors that may occur in the Physical Layer. The error check is performed using the Frame Check Sequence (FCS).
- The destination address is examined to see whether the host needs to process the rest of the frame itself or pass it on to another host.
- The layer is divided into two sublayers: the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer.
- The MAC sublayer controls how a computer on the network gains access to the data and permission to transmit it.
- The LLC sublayer controls frame synchronization, flow control and error checking.

Layer 3: Network Layer
- Provides the functional and procedural means of transferring variable-length data sequences from a source to a destination via one or more networks.
- Performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors.
- Routers operate at this layer - sending data throughout the extended network and making the Internet possible.

Layer 4: Transport Layer
- Provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers.
- Controls the reliability of a given link through flow control, segmentation/de-segmentation, and error control.
- The Transport Layer can keep track of the segments and retransmit those that fail.

Layer 5: Session Layer
- Controls the dialogues (connections) between computers.
- Establishes, manages and terminates the connections between the local and remote application.
- Provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures.
- Implemented explicitly in application environments that use remote procedure calls.

Layer 6: Presentation Layer
- Establishes a context between Application Layer entities, in which the higher-layer entities can use different syntax and semantics, as long as the presentation service understands both and the mapping between them. The presentation service data units are then encapsulated into Session Protocol data units and moved down the stack.
- Provides independence from differences in data representation (e.g., encryption) by translating from application to network format, and vice versa. The presentation layer works to transform data into the form that the application layer can accept. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.

Layer 7: Application Layer
- This layer interacts with software applications that implement a communicating component.
- Identifies communication partners, determines resource availability, and synchronizes communication.
  - When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit.
  - When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist.
  - In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer.
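To make the layering idea concrete, here is a small illustrative C# sketch (my own, not from the original post) of the encapsulation that happens as data travels down the stack - each layer wraps the payload from the layer above with its own header:

    using System;
    using System.Collections.Generic;

    class OsiEncapsulationDemo
    {
        static void Main()
        {
            // Hypothetical text headers; real protocols use binary headers.
            var layers = new List<string>
            {
                "Application", "Presentation", "Session",
                "Transport", "Network", "Data Link", "Physical"
            };

            string payload = "user data";
            foreach (var layer in layers)
            {
                payload = "[" + layer + " hdr]" + payload; // encapsulate on the way down
                Console.WriteLine("{0,-12} -> {1}", layer, payload);
            }
        }
    }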
Technorati Tags: Kunal, OSI, Networking

    Read the article

  • When to implement: Together with or after the source product?

    - by Jeremy Oosthuizen
    Somebody recently relayed a prospect's question to me: How hard would it be to implement OUBI after the source product (CC&B, WAM or NMS) has already been implemented? Fact is that MOST non-OUBI Data Warehouse / Business Intelligence implementations take place after the source application(s) are in place and hopefully stable. If an organization decides that they need better reporting and management information, then the logical path (see The Data Warehouse Institute's Data Warehouse Maturity Model) is to a Data Warehouse - no matter when their last applications were implemented. If there is a pre-built Data Warehouse for their specific application, or even for the desired business process in their industry, they're in luck. Else they have to design and build from scratch, using a toolset. The implementation of a toolset is unlike the implementation of OUBI which, like OBI Apps, contains pre-built ETL routines and user content. Much has been written before about the advantages of that.

So, because OUBI is designed specifically for Oracle Utilities transactional products, we often implement them in parallel - with OUBI lagging a little behind by necessity, like Reporting. Customers know from the start they're going to need the solution, and therefore purchase the products at the same time. My biggest argument FOR a parallel installation/implementation of OUBI with the source product is two-fold:

- There could be things (which is the technical term for data elements) that customers figure out they need when implementing OUBI, which are often easier to add during the source product's implementation project than later;
- OUBI's ETL often points out errors (severe or not) with converted data, which are easier to fix during the source product's implementation project - or it may even be impossible to fix them afterwards. The conversion routines sometimes miss these errors, because the source system can live with the not-quite-perfect converted data. If the data can't be properly extracted, i.e. the proper Dimensions linked to the Facts, then it can't get into OUBI. That means it can't be analyzed effectively along with the rest of the organization's data.

Then there is also the throw-away-work argument, which may be significant. The operational / transactional system cannot go live without reports on Day 1. A lot of those reports would be taken care of by the implementation of OUBI. If OUBI is implemented after go-live, those reports STILL have to be built during the source product's implementation project, but they become throw-away after the OUBI implementation.

I have sometimes been told that it is better to implement OUBI after the source product, because it cuts down on scope and risk for the source product's implementation project. All I can say to that is bah humbug. No, seriously, given the arguments above, planning has to include the OUBI implementation and it has to be managed properly - just like any other implementation. If so, it should not add any risk and it should be included in the scope from the start. The answer to the prospect's question is therefore that it is not that much more difficult; after all, most DW/BI implementations are done like that. They just have to consider the points above.

    Read the article

  • The New Social Developer Community: a Q&A

    - by Mike Stiles
    In our last blog, we introduced the opportunities that lie ahead for social developers as social applications reach across every aspect and function of the enterprise. Leading the upcoming JavaOne Social Developer Program October 2 at the San Francisco Hilton is Roland Smart, VP of Social Marketing at Oracle. I got to ask Roland a few of the questions an existing or budding social developer might want to know as social extends beyond interacting with friends and marketing and into the enterprise.

Why is it smart for developers to specialize as social developers? What opportunities lie in the immediate future that make this a critical, in-demand position?

Social has changed the way we interact with brands and with each other across the web. As we acclimate to a new social paradigm we also look to extend its benefits into new areas of our lives. The workplace is a logical next step, and we're starting to see social interactions more and more in this context. But unlocking the value of social interactions requires technical expertise and knowledge of developing social apps that tap into the social graph. Developers focused on integrating social experiences into enterprise applications must be familiar with popular social APIs and must understand how to build enterprise social graphs of their own. These developers are part of an emerging community of social developers and are key to socially enabling the enterprise. Facebook rebranded their Preferred Developer Consultant Group (PDC) and the Preferred Marketing Developers (PMD) to underscore the fact that developers are required inside marketing organizations to unlock the full potential of their platform. While this trend is starting on the marketing side with marketing developers, it is just an extension of the social developer concept that will ultimately drive social across the enterprise.

What are some of the various ways social will be making its way into every area of enterprise organizations? How will it be utilized, and what kinds of applications are going to be needed to facilitate and maximize these changes?

Check out Oracle's vision for the social-enabled enterprise. It's a high-level overview of how social will impact across the enterprise. For example:

- HR can leverage social in recruiting and retention
- Sales can leverage social as a prospecting tool
- Marketing can use social to gain market insight
- Customer support can use social to leverage community support to improve customer satisfaction while reducing service cost
- Operations can leverage social to improve systems

That's only the beginning. Once sleeves get rolled up and social developers and innovators get to work, still more social functions will no doubt emerge.

What makes Java one of, if not the most viable platform on which to build these new enterprise social applications?

Java is certainly one of the best platforms on which to build social experiences because there's such a large existing community of Java developers. This means you can affordably recruit talent, and it's possible to effectively solicit advice from the community through various means, including our new Social Developer Community. Beyond that, there are already some great proof points that Java is the best platform for creating social experiences at scale. Consider LinkedIn and Twitter.

Tell us more about the benefits of collaboration and more about what the Oracle Social Developer Community is. What opportunities does that offer up, and what are some of the ways developers can actively participate in and benefit from that community?

Much has been written about the overall benefits of collaborating with other developers. Those include an opportunity to introduce yourself to the community of social developers, foster a reputation, establish an expertise, contribute to the advancement of the space, get feedback, experiment with the latest concepts, and gain inspiration. In short, collaboration is a tool that must be applied properly within a framework to get the most value out of it. The OSDC is a place where social developers can congregate to discuss the opportunities/challenges of building social integrations into their applications. What "needs" will this community have? We don't know yet. But we wanted to create a forum where we can engage and understand what social developers are thinking about, excited about, struggling with, etc. The OSDC can then step in if we can help remove barriers and add value in a serious and committed way so Oracle can help drive practice development.

    Read the article

  • How to prepare for an interview presentation!

    - by Tim Koekkoek
    During an interview process you might be asked to prepare a presentation as one of the steps in the recruitment process. Below we want to give you some tips to help you prepare for what might be considered a daunting aspect of a recruitment process.

Main purpose of the presentation
Always keep in mind the main purpose of what the presentation is meant to convey. Generally speaking, an interview presentation is for the company to check if you have the ability to represent and sell the organization (and yourself) to the internal and external stakeholders in the position you are applying for. A presentation is often also part of the recruitment process to check whether you can structure and explain your experience and thoughts in a convincing manner. If you are unsure about the purpose of the presentation, feel free to ask your recruiter for more information.

Preparation
As with every task you do, preparation is key, and so it is with an interview presentation. It is important to know who your audience is. You have to adapt your presentation to your audience, ensuring that you are presenting the facts which they would want to hear. Furthermore, make sure you practice your presentation beforehand; this will make you more confident in your presenting skills. Also, estimate the length of your presentation, as presentations or pitches during the recruitment process are often capped to a certain time limit.

Structure
Every presentation should have a beginning, a middle and an end. Make sure you give an overview of your presentation and tell the audience what they can expect. Your presentation should have a logical order and a clear message. Always build up to your key message with strong arguments and evidence. When speaking about the topic, make sure you convey your points with conviction. Also be sure you believe the message you bring forward; if you don't believe it yourself, then the audience definitely won't!

Delivery
When you think back on successful presentations you have seen, the presenter was most likely always standing up. So if asked to do a presentation, follow this example and make sure you stand up as well. Standing up when you are doing your presentation shows confidence and control. Another important aspect of the delivery of the presentation is to relax and speak slowly, with enough volume for the audience to hear you. Speaking slowly allows the audience the time to absorb the information you are providing to them.

Visual
Using PowerPoint, or an equivalent, makes it very easy to have a visually attractive presentation. Make sure, however, that you take into account that the visual aids you use are there to support you, not for you to read word for word what is on the slides!

Questions
When starting your presentation, it is a good idea to tell your audience that you will deal with any questions they might have at the end of the presentation. This way it doesn't interrupt your train of thought and the flow of your presentation. Answering questions at the end may give you additional opportunities to further expand on your facts in the topic. Some people might play the "devil's advocate" role and confront you with opposing opinions. If this is the case, take your time to reiterate your points and remain professional in your response. A good way to deal with this is to start interacting with other members of the audience and ask for their opinions, so it becomes a group discussion. This also shows strong leadership skills, as you are open to discussion and to asking for other opinions.

If you are interested in more tips and tricks for your interview process, have a look at Competency Based Interview Tips and How to prepare for a Telephone Interview. For more information about Oracle and our vacancies, please visit our Facebook community and our careers website.

    Read the article

  • WEB203 – Jump into Silverlight!… and Become Effective Immediately with Tim Huckaby, Fou

    - by Robert Burger
    Getting ready for the good stuff. Definitely wish there were more Silverlight and WCF RIA sessions, but this is a start. Was lucky to get a coveted power-enabled seat. Luckily, due to my trustily slow Verizon data card, I can get these notes out amidst a total Internet outage here. This is the second breakout session of the day, and is by far standing-room only. I stepped out before the session started to get a cool Diet COKE and wouldn't have gotten back in if I didn't already have a seat.

Tim says this is an intro session and that he's been begging for intro sessions at TechEd for years, and that by looking at this audience, he thinks the demand is there. Admittedly, I didn't know this was an intro session, or I might have gone elsewhere. But it was the very first Silverlight session, so I had to be here. Tim says he will be providing a very good comprehensive reference application at the end of the presentation. He has just demoed it, and it is a full CRUD-based Sales Manager application based on… AdventureWorks!

Session Agenda
- What it is / How to get started
- Declarative Programming
- Layout and Controls, Events and Commands
- Working with Data
- Adding Style to Your Application

Silverlight… "WPF Light"
Why is the download 4.2MB? Because the direct competitor is a 4.2MB download. There is no technical reason it is not the entire framework. It is purely to "be competitive".

Getting Started
Get all of the following downloads from www.silverlight.net/getstarted:
- Install VS2010 or Visual Web Developer Express 2010
- Install Silverlight 4 Tools for VS2010
- Install Expression Blend 4
- Install the Silverlight 4 Toolkit

Reference Application Features
- Uses the MVVM pattern - a way to move data access code that would normally be inline within the UI into nice data access libraries
- Images loaded dynamically from the database, converting GIF to PNG because Silverlight does not support GIF
- LINQ to SQL is the data access model
- WCF is the data provider and is using binary message encoding

Declarative Programming
- XAML replaces code for UI representation
- Attributes control Layout and Style
- Event handlers wired up in XAML
- Declarative Data Binding

Layout Overview
- Content rendering flows inside of the parent
- Fixed positioning (Canvas) is seldom used
- Panels are used to house content
- Margins and Padding over fixed size

Panels
- StackPanel - arranges child elements into a single line oriented horizontally or vertically
- Grid - a flexible grid area that consists of rows and columns
- Canvas - an area where positions are specifically fixed
- WrapPanel (in Toolkit) - positions child elements in sequential position, left to right and top to bottom
- DockPanel (in Toolkit) - positions child controls within a dockable area

Positioning
- Horizontal and Vertical Alignment
- Margin - separates an element from neighboring elements
- Padding - enlarges the effective size of an element by a thickness

Controls Overview
- Not all controls are created equal
- Silverlight is a subset of WPF, so many WPF controls do not exist in the core Silverlight release
- The Silverlight Toolkit continues to add controls, but they are released in different quality bands
- Plenty of good 3rd-party controls fill the gaps
- Windows Phone 7 is to have 95% of controls available in the Silverlight Core and Toolkit

Events and Commands
- Standard .NET Events
- Routed Events
- Commands - based on the ICommand interface - a logical action that can be invoked in several ways
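As a concrete illustration of that last point, here is a minimal sketch of a command built on the ICommand interface - my own example, not from the session. Silverlight 4 only supplies the interface; the RelayCommand name is a common convention, not a framework class:

    using System;
    using System.Windows.Input;

    public class RelayCommand : ICommand
    {
        private readonly Action<object> _execute;
        private readonly Func<object, bool> _canExecute;

        public RelayCommand(Action<object> execute, Func<object, bool> canExecute)
        {
            if (execute == null) throw new ArgumentNullException("execute");
            _execute = execute;
            _canExecute = canExecute;
        }

        public event EventHandler CanExecuteChanged;

        public bool CanExecute(object parameter)
        {
            return _canExecute == null || _canExecute(parameter);
        }

        public void Execute(object parameter)
        {
            _execute(parameter);
        }

        // Call this when conditions affecting CanExecute change.
        public void RaiseCanExecuteChanged()
        {
            var handler = CanExecuteChanged;
            if (handler != null) handler(this, EventArgs.Empty);
        }
    }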
Adding Style to Your Application
- Resource Dictionaries - contain a hash table of key/value pairs. Silverlight can only use Static Resources, whereas WPF can also use Dynamic Resources
- Visual State Manager
- Silverlight 4 supports implicit styles
- ResourceDictionary.MergedDictionaries combines many different file-based resources

Downloads

    Read the article

  • Converting an Oracle VM VirtualBox VM into an Oracle VM Server image

    - by wim.coekaerts
    As we are working on tighter seamless moving of VMs between the 2 products, here are a few simple steps to convert an existing Oracle VM VirtualBox image over. Steps involved to make it easy/straightforward:

(1) When creating a VM in VirtualBox, using Oracle Linux as an example, make sure that /etc/fstab only uses labels. Do not use hardcoded device names. Instead of an entry

    /dev/sda1 /u01 ext3 defaults 1 1

use

    LABEL=foo /u01 ext3 defaults 1 1

(for more info on labels: man e2label) or use a logical volume

    /dev/VolGroup00/LVfoo /u01 ext3 defaults 1 1

Doing so will make it easier to have an OS boot up on a different hypervisor with potentially different device names. For instance, the VirtualBox VM might expose a scsi driver while in Oracle VM Server you might end up with an ide disk; this then changes /dev/sda to /dev/hda.

(2) If you have a VM created that you want to convert, then shut down the VM in VirtualBox and convert the image files: go to the directory that contains your hard disk image files (.VirtualBox/HardDisks/* as an example) and for each of the virtual disks run the following command:

    VBoxManage clonehd virtualdiskfilename.vdi system.img --format raw

where virtualdiskfilename.vdi is the original VBox VM file (this can also be a vmdk file) and system.img is the name of the virtual disk for Oracle VM. This can be any filename as well; I typically use system.img to specify the boot disk (as is common for Oracle VM template creation).

(3) Create a vm.cfg. To run a VM converted from VirtualBox, you have to create a vm.cfg for Oracle VM Server that creates an HVM guest. The easiest is to use a simple hvm vm.cfg and change it for your VM. I have an example here:

    acpi = 1
    apic = 1
    builder = 'hvm'
    device_model = '/usr/lib/xen/bin/qemu-dm'
    disk = ['file:system.img,hda,w', 'file:oracle.img,hdb,w', ',hdc:cdrom,r',]
    kernel = '/usr/lib/xen/boot/hvmloader'
    memory = '1024'
    name = 'vmname'
    on_crash = 'restart'
    on_reboot = 'restart'
    pae = 1
    serial = 'pty'
    timer_mode = '0'
    usbdevice = 'tablet'
    vcpus = 1
    vif = ['bridge=xenbr0,type=ioemu']
    vif_other_config = []
    vnc = 1
    vncconsole = 1
    vnclisten = '0.0.0.0'
    vncpasswd = ''
    vncunused = 1

If you take the above vm.cfg, all you need to do is:
- modify disk = (add your virtual disks in there)
- modify memory = (amount of memory your VM needs)
- modify name = (enter a name for your VM here)
- modify vif = (you might want to replace bridge=xenbr0 with the bridge you want to use)

If you want more than 1 vcpu or other changes, of course you have to make those as well.

(4) Copy this set of files onto your Oracle VM server, or onto a webserver in a subdirectory, and import the template through Oracle VM Manager. You can also just start the VM using xm create vm.cfg if you like.

And that's it. As I said, we are working on automation around all this, but it is relatively trivial to convert VMs over as long as you take the basic issues into account - primarily the setup of the filesystems and the use of labels in /etc/fstab. There are other potential things to look at, such as network config. If you want to make that part clean, then prior to shutting down the VM change /etc/modprobe.conf and/or add the MAC address of the VM into the vm.cfg on the vif line. The good thing, at least with Linux, is that even though the virtual hardware changes, Linux will deal with it just fine (e1000 vs 8139 realtek, ide vs scsi, etc.). Hope this helps.

    Read the article

  • Who should ‘own’ the Enterprise Architecture?

    - by Michael Glas
    I recently had a discussion around who should own an organization's Enterprise Architecture. It was spawned by an article titled "Busting CIO Myths" in CIO magazine [1], where the author interviewed Jeanne Ross, director of MIT's Center for Information Systems Research and co-author of books on enterprise architecture, governance and IT value.

In the article Jeanne states that companies need to acknowledge that "architecture says everything about how the company is going to function, operate, and grow; the only person who can own that is the CEO". "If the CEO doesn't accept that role, there really can be no architecture."

The first question that came up when talking about ownership was whether you are talking about a person, role, or organization (there are pros and cons to each, but in general, I like to assign accountability to as few people as possible). After much thought and discussion, I came to the conclusion that we were answering the wrong question. Instead of talking about ownership we were talking about responsibility and accountability, and the answer varies depending on the particular role of the organization's Enterprise Architecture and the activities of the enterprise architect(s).

Instead of looking at just who owns the architecture, think about what the person/role/organization should do. This is one possible scenario (thanks to Bob Covington):

- The CEO should own the Enterprise Strategy which guides the business architecture.
- The business units should own the business processes and information which guide the business, application and information architectures.
- The CIO should own the technology, IT governance and the management of the application and information architectures/implementations.
- The EA Governance Team owns the EA process. If EA is done well, the governance team consists of both IT and the business.

While there are many more roles and responsibilities than listed here, this starts to provide a clearer understanding of 'ownership'. Now back to Jeanne's statement that the CEO should own the architecture. If you agree with the statement about what the architecture is (and I do agree), then ultimately the CEO does need to own it. However, what we ended up with was not really ownership, but more statements around roles and responsibilities tied to aspects of the enterprise architecture. You can debate the semantics of ownership vs. responsibility and accountability, but in the end the important thing is to come to a clearer understanding that is easily communicated (and hopefully measured) around the question "Who owns the Enterprise Architecture?".

The next logical step... create a RACI matrix that details the findings... but that is a step that each organization needs to do on their own, as it will vary based on current EA maturity, company culture, and a variety of other factors.

Who 'owns' the Enterprise Architecture in your organization?
[1] CIO Magazine Article (Busting CIO Myths): http://www.cio.com/article/704943/Busting_CIO_Myths

    Read the article

< Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >