Search Results

Search found 1849 results on 74 pages for 'martin north'.


  • Help with a loop to return UIImage from possible matches

    - by Canada Dev
    I am parsing a list of locations and would like to return a UIImage with a flag based on these locations. I have a string with the location, which can be one of many different places. I would like to search this string for possible matches against an NSArray of country names, and when there is a match, look up the appropriate filename in an NSDictionary. Here's an example of the NSDictionary and NSArray (note the commas between the array literals; without them adjacent @"" literals compile into one concatenated string):

        NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys:
                              @"franceFlag", @"france",
                              @"greeceFlag", @"greece",
                              @"spainFlag",  @"spain",
                              @"norwayFlag", @"norway", nil];

        NSArray *array = [NSArray arrayWithObjects:
                          @"france", @"greece", @"spain",
                          @"portugal", @"ireland", @"norway", nil];

    Obviously I'll have a lot more countries and flags in both. Here's what I have got to so far:

        -(UIImage *)flagFromOrigin:(NSString *)locationString {
            for (NSString *arrayString in countryArray) {
                // NSCaseInsensitiveSearch lets "Spain" in the location match "spain" in the array
                NSRange range = [locationString rangeOfString:arrayString
                                                      options:NSCaseInsensitiveSearch];
                if (range.location != NSNotFound) {
                    return [UIImage imageWithContentsOfFile:
                            [[NSBundle mainBundle] pathForResource:[dictionary objectForKey:arrayString]
                                                            ofType:@"png"]];
                }
            }
            return nil;
        }

    The issue is that locationString could hold several descriptions of the same country, such as "Barcelona, Spain", "Madrid, Spain" or "North Spain", but I just want to match "Spain" in each case (also notice the capitalisation of each country). Basically, I want to search the locationString I pass into the method for a possible match with one of the countries listed in the NSArray. If and when one is found, it should continue into the NSDictionary and grab the appropriate flag based on the matched string from the array. I believe the best way is to take the string from the array, as this is a stripped-down version of the location. Any help to point me in the right direction for the last bit is greatly appreciated.

  • How To Read A Remote Text File

    - by XcodeDev
    Hi, I would like to read a remote text file called posts.txt on my website. An example of the contents of the posts.txt file would be this:

        <div style="width : 300px; position : relative"><font face="helvetica, geneva, sans serif" size="6"><b>2</b></font><font face="helvetica, geneva, sans serif" size="4"><i> scored by iSDK</i></font><br><img src="Bar.png" /></div>
        <div style="width : 300px; position : relative"><font face="helvetica, geneva, sans serif" size="6"><b>2</b></font><font face="helvetica, geneva, sans serif" size="4"><i> scored by martin</i></font><br><img src="Bar.png" /></div>

    What I want to know is: how can I get the score and the "scored by" text from the .txt file? The score in this case is the <b>2</b>, and the "scored by" text in this case would be "scored by iSDK". Any code telling me how to do this is twice as helpful! Thanks in advance, XcodeDev
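    The question asks for iPhone code, but the extraction itself is platform-independent: download the file over HTTP, then pull the values out with a regular expression. A minimal sketch of that idea in C# (the URL is a placeholder, and the pattern assumes the markup looks exactly like the sample above):

        using System;
        using System.Net;
        using System.Text.RegularExpressions;

        class PostsReader
        {
            static void Main()
            {
                // Hypothetical URL; substitute the real location of posts.txt.
                string text = new WebClient().DownloadString("http://example.com/posts.txt");

                // The score is the digits inside <b>..</b>; the player follows "scored by".
                var pattern = new Regex(@"<b>(\d+)</b>.*?<i>\s*scored by (\w+)</i>");
                foreach (Match m in pattern.Matches(text))
                    Console.WriteLine("score={0}, scored by {1}",
                                      m.Groups[1].Value, m.Groups[2].Value);
            }
        }

    On iOS the same two steps apply: fetch the file into a string, then scan that string for the two patterns.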

  • How to make BoxLayout vertical but children flow top-aligned?

    - by user291701
    I'm trying to get a vertical, top-aligned layout to work. This is what I have:

        JPanel pane = new JPanel();
        pane.setLayout(new BoxLayout(pane, BoxLayout.Y_AXIS));

        MyImagePanel panelImage = new MyImagePanel();
        panelImage.setSize(400, 400);

        pane.add(new JComboBox());
        pane.add(panelImage);
        pane.add(new JButton("1"));
        pane.add(new JButton("2"));
        pane.add(new JButton("3"));

        JFrame frame = new JFrame("Title");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setSize(800, 600);
        frame.add(pane);
        frame.pack();
        frame.setVisible(true);

    All the controls appear, but it looks like padding is being applied at run time between their tops and bottoms, so they're somewhat vertically centered. This is what I'm going for:

        -----------------------------------------------------
        | ComboBox     |                                     |
        | ------------ |                                     |
        | |          | |                                     |
        | |  Image   | |                                     |
        | |          | |                                     |
        | ------------ |                                     |
        | Button 1     | Any additional space fills the right|
        | ------------ |                                     |
        | Button 2     |                                     |
        | ------------ |                                     |
        | Button 3     |                                     |
        | ------------ |                                     |
        |              |                                     |
        |    Any additional space fills the bottom           |
        |              |                                     |
        -----------------------------------------------------

    How do I get BoxLayout to do that? Thanks

    ------------------------- Update -------------------------

    Was able to use this:

        Box vertical = Box.createVerticalBox();
        frame.add(vertical, BorderLayout.NORTH);
        vertical.add(new JComboBox());
        vertical.add(new JButton("1"));
        ...

    to get what I wanted.

  • Xml Conversion "Type mismatch" Error

    - by prema
    I am running this query in SQL Server 2005:

        SELECT 'Region' AS ELEMENT,
               (SELECT GeographyName, GeoID FROM @tmpTable FOR XML RAW, TYPE)
        FOR XML RAW('Root')

    This gives the output in XML as:

        <Root ELEMENT="Region">
          <row GeographyName="East" GeoID="2" />
          <row GeographyName="West" GeoID="3" />
          <row GeographyName="North" GeoID="4" />
          <row GeographyName="South" GeoID="5" />
        </Root>

    In the aspx page, I want to read this XML:

        function Populatedata(obj, val) {
            var xmlDom = new JXmlDom(obj, false);   // --> at this point I am getting the error
            var nodeHeader = xmlDom.selectNodes("//row");
            // my code goes here
        }

        function JXmlDom(xml, isFile) {
            this.load = load;
            this.loadXML = loadXML;
            this.selectNodes = selectNodes;
            this.text = text;
            this.selectSingleNode = selectSingleNode;
            this.documentElement = documentElement;
            this.transformNode = transformNode;
            if (isFile) {
                this.dom = this.load(xml);
            } else {
                this.dom = this.loadXML(xml);
            }
            function loadXML(xml) {
                if (window.ActiveXObject) {
                    var dom = new ActiveXObject("Microsoft.XMLDOM");
                    dom.async = false;
                    dom.loadXML(xml);
                }
                if (document.implementation && document.implementation.createDocument) {
                    var domParser = new DOMParser();
                    var dom = domParser.parseFromString(xml, "text/xml");
                }
                return dom;
            }
        }

    But when I call this I get an error saying "Type mismatch". Can anyone help me?

  • How to use accessors within the same class in Objective C?

    - by Azeworai
    Hi, I have a few properties defined in my header file like so:

        @property (assign) bool connectivity_N;
        @property (assign) bool isConnected_N;

    In my implementation file I have the synthesized properties:

        @implementation Map
        @synthesize connectivity_N;
        @synthesize isConnected_N;

    and an init to set the initial values:

        -(id) init {
            if( (self=[super init]) ) {
                // initialise default properties
                self.connectivity_N = NO;
                self.isConnected_N = NO;
            }
            return self;
        }

    I'm running into an error that states "Error: accessing unknown 'connectivity_N' class method" in this public method within the class:

        +(bool) isConnectable:(directions) theDirection {
            bool isTheDirectionConnectable = NO;
            switch (theDirection) {
                case north:
                    isTheDirectionConnectable = self.connectivity_N;
                    break;

    I'm not sure why this is, as I'm just trying to grab the value of the property. According to the Apple developer documentation, "The default names for the getter and setter methods associated with a property are propertyName and setPropertyName: respectively—for example, given a property "foo", the accessors would be foo and setFoo:". That has given me a clue that I've done something wrong here. I'm fairly new to Objective-C, so I would appreciate anyone who spends the time to explain this to me. Thanks!

  • Rate my C# code (~300 SLOC) using GDI+/Backgroundworker

    - by sebastianlarsson
    Hi, I want to get some feedback on my code! Below is some background info. I am taking a pre-certification course in C# (Sweden, 15 ECTS). The focus of the course is theoretical, with only limited practical work. I don't find the assignments very hard at all, to tell you the truth, but since I only have very limited work experience as a developer (I have worked 15h/week at Ericsson since November) I think I would benefit from having the certificate (70-536 and more, probably). I am currently reading Martin Fowler's "Refactoring: Improving the Design of Existing Code" and I tried to apply his techniques to my latest lab in the course. I have been on the lookout for a website built around the idea of providing feedback on code, but so far I have yet to discover any. Please take a look at my code and tell me what you think. It is only roughly 300 lines of code divided into a couple of classes. GDI+, BackgroundWorker and user controls are what the lab is about. I reckon you may have to spend as little as a couple of minutes looking at the solution. Link to solution: http://www.filefactory.com/file/b18h7d5/n/Lab4_Lab5_SebastianLarsson.zip Regards and thank you, Sebastian

  • Saving and restoring OpenGL model-view

    - by Tom
    I am a new-comer to OpenGL, and much of it remains mysterious to my feeble brain. I have been studying the NeHe demos as well as the Red Book. I am writing an Android application that displays the Earth in the center of the screen. The user can rotate the Earth about any axis (much like a very simple "Google Earth"). This code is working (I based it on the NeHe examples). Now I want to add a feature; the user should be able to save the current model orientation, then later return to that same orientation. For example, the user may save the Earth orientation such that the viewer is looking down at her hometown, and north-east is "up". How do I do this with OpenGL-ES? To capture and save the current orientation, my code could get the current model-view transformation matrix - I think I understand how to do that. But later on how do I apply that saved matrix to restore the view?

  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility which allows querying for data with hierarchical filtering. I have a few ideas on how I'm going to go about it, but was wondering if there are any recommendations or suggestions that might be more efficient. As an example, imagine a user searching for a job. The job areas would be as follows:

        1: Scotland
        2: --- West Central
        3: ------ Glasgow
        4: ------ Etc
        5: --- North East
        6: ------ Ayrshire
        7: ------ Etc

    A user can search somewhere specific (i.e. Glasgow) or in a larger area (i.e. Scotland). The two approaches I am considering are:

    1. Keep a note of children in the database for each record (i.e. category 1 would have 2, 3, 4 in its children field) and query against that record with SELECT * FROM Jobs WHERE Category IN Areas.childrenField.
    2. Use a recursive function to find all results which have a relation to the selected area.

    The problems I see with these are:

    1. Holding this data in the DB means having to keep track of all changes to the structure.
    2. Recursion is slow and inefficient.

    Any ideas, suggestions or recommendations on the best approach? I'm using C# ASP.NET with an MSSQL 2005 DB.
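    One option worth mentioning, since the poster is on SQL Server 2005: recursive common table expressions let the database walk the hierarchy in a single query, so there is no children field to maintain and no client-side recursion. A minimal sketch, assuming a hypothetical Areas(ID, ParentID) adjacency list and a Jobs table keyed by Category (names are illustrative, not from the post):

        using System;
        using System.Data.SqlClient;

        class AreaJobSearch
        {
            // Assumed schema: Areas(ID, ParentID) and Jobs(Category references Areas.ID).
            const string Sql = @"
                WITH AreaTree AS (
                    SELECT ID FROM Areas WHERE ID = @areaId      -- the area the user picked
                    UNION ALL
                    SELECT a.ID FROM Areas a
                    JOIN AreaTree t ON a.ParentID = t.ID         -- walk down to all descendants
                )
                SELECT j.* FROM Jobs j
                JOIN AreaTree t ON j.Category = t.ID;";

            static void Main()
            {
                using (var con = new SqlConnection("Data Source=.;Initial Catalog=JobsDb;Integrated Security=SSPI"))
                using (var cmd = new SqlCommand(Sql, con))
                {
                    cmd.Parameters.AddWithValue("@areaId", 1);   // 1 = Scotland: matches the whole subtree
                    con.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine(reader["Category"]);
                }
            }
        }

    Searching for Glasgow passes its ID instead; the CTE then returns just that node, so the same query serves both the specific and the wide search.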

  • Processor, OS: 32-bit, 64-bit

    - by Sandbox
    I am new to programming and come from a non-CS background (no formal degree). I mostly program WinForms using C#. I am confused about 32-bit and 64-bit... I mean, I have heard about 32-bit OSes and 32-bit processors, and that based on this a program can have a maximum amount of memory. How does it affect the speed of a program? There are a lot more questions which keep coming to mind. I tried to go through some Computer Organization and Architecture books, but either I am too dumb to understand what is written in there or the writers assume that the reader has some CS background. Can someone explain these things to me in plain simple English, or point me to something which does that?

    EDIT: I have read things like "In 32-bit mode, they can access up to 4GB memory; in 64-bit mode, they can access much much more"... I want to know the WHY behind all such things.

    BOUNTY: The answers below are really good... especially the one by Martin. But I am looking for a thorough explanation, in plain simple English.
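    A program can observe the difference directly. A small C# console sketch (my illustration, not from the thread) showing where the 4 GB figure comes from:

        using System;

        class Bitness
        {
            static void Main()
            {
                // IntPtr.Size is the width of a native pointer: 4 bytes in a
                // 32-bit process, 8 bytes in a 64-bit process.
                int pointerBits = IntPtr.Size * 8;
                Console.WriteLine("This process is {0}-bit.", pointerBits);

                // A pointer with n bits can name 2^n distinct byte addresses.
                // That is why a 32-bit process tops out at 2^32 bytes = 4 GB,
                // while a 64-bit process can address 2^64 bytes in theory.
                Console.WriteLine("Theoretical address space: 2^{0} bytes.", pointerBits);
            }
        }

    The same C# executable prints different answers depending on whether the OS and the process run it in 32-bit or 64-bit mode, which is the heart of the question: the bitness is a property of the pointer width, and everything else (memory limits, register sizes) follows from it.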

  • Access to SQL Server 2005 from a non-domain machine using Windows authentication

    - by user304582
    Hi, I have a Windows domain within which a machine is running SQL Server 2005, configured to support only Windows authentication. I would like to run a C# client application on a machine on the same network, but which is NOT on the domain, and access a database on the SQL Server 2005 instance. I thought that it would be a simple matter of doing something like this:

        string connectionString = "Data Source=server;Initial Catalog=database;User Id=domain\\user;Password=password";
        SqlConnection connection = new SqlConnection(connectionString);
        connection.Open();

    However, this fails. The client-side error is:

        System.Data.SqlClient.SqlException: Login failed for user 'domain\user'

    and the server-side error is:

        Error 18456, Severity 14, State 5

    I have tried various things, including setting integrated security to true and false, and using \\ instead of \ in the User Id, but without success. In general, I know that it is possible to connect to the SQL Server 2005 instance from a non-domain machine (for example, I am working with a Linux-based application which happily does this), but I don't seem to be able to work out how to do it from a Windows machine. Help would be appreciated! Thanks, Martin
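    Worth noting for readers: the User Id/Password pair in a SqlConnection string is only honoured for SQL authentication, and the server above allows only Windows authentication. One well-known workaround from a non-domain Windows machine is to log on with LOGON32_LOGON_NEW_CREDENTIALS (the programmatic equivalent of runas /netonly) and impersonate before opening the connection. A hedged sketch; the domain, user and server names are placeholders:

        using System;
        using System.Data.SqlClient;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        class NonDomainSqlClient
        {
            [DllImport("advapi32.dll", SetLastError = true)]
            static extern bool LogonUser(string user, string domain, string password,
                                         int logonType, int logonProvider, out IntPtr token);

            const int LOGON32_LOGON_NEW_CREDENTIALS = 9; // use credentials for network access only
            const int LOGON32_PROVIDER_WINNT50 = 3;

            static void Main()
            {
                IntPtr token;
                if (!LogonUser("user", "domain", "password",
                               LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
                    throw new InvalidOperationException(
                        "LogonUser failed: " + Marshal.GetLastWin32Error());

                // While impersonating, outbound connections present domain\user to the
                // server, so the connection string can use ordinary integrated security.
                using (WindowsIdentity.Impersonate(token))
                using (var connection = new SqlConnection(
                    "Data Source=server;Initial Catalog=database;Integrated Security=SSPI"))
                {
                    connection.Open();
                    Console.WriteLine("Connected.");
                }
            }
        }

    The Linux client mentioned in the question works because it negotiates the credentials explicitly, which is effectively what this sketch makes Windows do.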

  • Emacs - nxhtml-mode - memory full

    - by mbutz
    Working with nxhtml-mode in Emacs, I have been getting problems for a few weeks. While I work, Emacs pauses unexpectedly until it shows the message "!MEM FULL!" in the mode line; apparently nxhtml-mode fills up the memory until Emacs stops working. I am working with HTML, PHP and CSS files. I have no idea how I could debug this problem in a meaningful way. I also seem to be the only one with this problem, because googling did not deliver any answers to this question. I am using Emacs 23.2 on a Linux Mint 11 system. I cannot find out the version of nxhtml; it says revision 829, downloaded from http://bazaar.launchpad.net/~nxhtml/nxhtml/main/revision/829. I set up a test scenario with a minimal dot-emacs just to test nxhtml-mode. It seemed to be all right, but it does not reflect my productive setup. It would probably take a week or so to gradually include everything I am used to using within Emacs (e.g. org-mode) while testing whether nxhtml-mode dislikes anything called in my dot-emacs file. Is there another way? Can I find out what causes the memory overload? Does anyone have similar problems using nxhtml-mode? Greetings, Martin

  • SQL inner join from field defined table?

    - by Wolftousen
    I currently have a total of 6 tables that are part of this question. The primary table, TableA, contains columns that all the entries in the other 5 tables have in common. The other 5 tables have columns which define the entry in TableA in more detail. For example:

        TableA
        ID | Name | Volume | Weight | Description
        0  | T1   | 0.4    | 0.1    | Random text
        1  | R1   | 5.3    | 25     | Random text

        TableB
        ID | Color | Shape
        0  | Blue  | Sphere

        TableC
        ID | Direction | Velocity
        1  | North     | 3.4

    (Column names are just examples; don't take them for what they mean.) The ID field in TableA is unique across all the other tables (i.e. TableB will have 0, but TableC and the others will not). What I would like to do is select all the fields from TableA and the corresponding (according to the ID field) detail table (TableB-F). What I have currently done, but not tested, is to add a field to TableA naming the detail table:

        TableA
        ID | Name | Volume | Weight | Description | Table
        0  | T1   | 0.4    | 0.1    | Random text | TableB
        1  | R1   | 5.3    | 25     | Random text | TableC

    I have a few questions about this:

    1. Is it proper to do such a thing to TableA, given that foreign keys won't work in this situation since each row needs to link to a different table?
    2. If it is proper, would the SQL query look like this (ID would be input by the user)?

        SELECT * FROM TableA AS a INNER JOIN a.Table AS t ON a.ID = ID;

    3. Is there a better way to do this? Thanks for the help.
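    For readers: SQL cannot join to a table whose name comes from a column value, so the query in question 2 is not valid T-SQL; the usual alternatives are dynamic SQL or LEFT JOINing every detail table and letting the non-matching ones come back NULL. A sketch of the LEFT JOIN approach against the example schema above (TableD-F would join the same way; my illustration, not a tested answer):

        using System;
        using System.Data.SqlClient;

        class DetailLookup
        {
            const string Sql = @"
                SELECT a.ID, a.Name, a.Volume, a.Weight, a.Description,
                       b.Color, b.Shape,          -- NULL unless the detail row lives in TableB
                       c.Direction, c.Velocity    -- NULL unless the detail row lives in TableC
                FROM TableA a
                LEFT JOIN TableB b ON b.ID = a.ID
                LEFT JOIN TableC c ON c.ID = a.ID
                WHERE a.ID = @id;";

            static void Main()
            {
                using (var con = new SqlConnection("Data Source=.;Initial Catalog=ExampleDb;Integrated Security=SSPI"))
                using (var cmd = new SqlCommand(Sql, con))
                {
                    cmd.Parameters.AddWithValue("@id", 0);
                    con.Open();
                    using (var r = cmd.ExecuteReader())
                        while (r.Read())
                            // Exactly one detail column group is non-NULL per row,
                            // because each ID appears in exactly one detail table.
                            Console.WriteLine("{0}: {1}", r["Name"],
                                r["Color"] is DBNull ? r["Direction"] : r["Color"]);
                }
            }
        }

    With this shape, the extra Table column in TableA becomes unnecessary, which also sidesteps question 1.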

  • Our Oracle Recruitment Team is Growing - Multiple Job Opportunities in Bangalore, India

    - by david.talamelli
    DON"T GET STUCK IN THE MATRIXSEE YOUR FUTUREVISIT THE ORACLE The position(s): CORPORATE RECRUITING RESEARCH ANALYST(S) ABOUT ORACLE Oracle's business is information--how to manage it, use it, share it, protect it. For three decades, Oracle, the world's largest enterprise software company, has provided the software and services that allow organizations to get the most up-to-date and accurate information from their business systems. Only Oracle powers the information-driven enterprise by offering a complete, integrated solution for every segment of the process industry. When you run Oracle applications on Oracle technology, you speed implementation, optimize performance, and maximize ROI. Great hiring doesn't happen by accident; it's the culmination of a series of thoughtfully planned and well executed events. At the core of any hiring process is a sourcing strategy. This is where you come in... Do you want to be a part of a world-class recruiting organization that's on the cutting edge of technology? Would you like to experience a rewarding work environment that allows you to further develop your skills, while giving you the opportunity to develop new skills? If you answered yes, you've taken your first step towards a future with Oracle. We are building a Research Team to support our North America Recruitment Team, and we need creative, smart, and ambitious individuals to help us drive our research department forward. Oracle has a track record for employing and developing the very best in the industry. We invest generously in employee development, training and resources. Be a part of the most progressive internal recruiting team in the industry. For more information about Oracle, please visit our Web site at http://www.oracle.com Escape the hum drum job world matrix, visit the Oracle and be a part of a winning team, apply today. POSITION: Corporate Recruiting Research Analyst LOCATION: Bangalore, India RESPONSIBILITIES: •Develop candidate pipeline using Web 2.0 sourcing strategies and advanced Boolean Search techniques to support U.S. Recruiting Team for various job functions and levels. •Engage with assigned recruiters to understand the supported business as well as the recruiting requirements; partner with recruiters to meet expectations and deliver a qualified pipeline of candidates. •Source candidates to include both active and passive job seekers to provide a strong pipeline of qualified candidates for each recruiter; exercise creativity to find candidates using Oracle's advanced sourcing tools/techniques. •Fully evaluate candidate's background against the requirements provided by recruiter, and process leads using ATS (Applicant Tracking System). •Manage your efforts efficiently; maintain the highest levels of client satisfaction as well as strong operations and reporting of research activities. PREFERRED QUALIFICATIONS: •Fluent in English, with excellent written and oral communication skills. •Undergraduate degree required, MBA or Masters preferred. •Proficiency with Boolean Search techniques desired. •Ability to learn new software applications quickly. •Must be able to accommodate some U.S. evening hours. •Strong organization and attention to detail skills. •Prior HR or corporate in-house recruiting experiences a plus. •The fire in the belly to learn new ideas and succeed. •Ability to work in team and individual environments. This is an excellent opportunity to join Oracle in our Bangalore Offices. 
Interested applicants can send their resume to [email protected] or contact David on +61 3 8616 3364

  • How to prepare for a telephone interview: ‘Develop an Interview Cheat Sheet’

    - by Maria Sandu
    At Oracle we often do telephone interviews with candidates at different stages of the process, because we hire native speakers into other countries. On this blog we already have an article with tips and tricks for phone interviews that can help you during telephone interviews. To help you prepare even better for a telephone interview, we would like to introduce you to the basics of developing a cheat sheet. The benefit of a telephone interview is that you will be sitting at home, at your table or desk, during the interview, and not in front of someone. So use this to your advantage. The Monster website has some useful and interesting tips and tricks for developing a cheat sheet. Carole Martin, who wrote this article, says that a cheat sheet will help you feel more prepared and confident when speaking to managers over the phone. It is important to keep in mind that you shouldn't memorise what's on the sheet or check it off during the interview. Only use your cheat sheet to remind you of key facts. Here are some suggestions to include on it:

    • Divide a piece of paper in two by drawing a line. On one side of the paper, write a list of requirements as mentioned in the job description. On the other side, list your qualities that fulfil the requirements of the employer. This will help you answer questions about why you are the best candidate for the job and how you fit the role.
    • Do research on the company, the industry sector and the competitors, so you will get a feeling for the company's business and can ask more in-depth questions.
    • Be prepared for the most used introduction question: "Tell me a bit about yourself". Prepare a 60-second personal statement or pitch in which you summarise who you are and what you can offer, so you will be able to sell yourself from the very beginning.
    • Write down a minimum of five good examples to answer behavioural interview questions ("Tell me about a time when..." or "Give me an example of a time..."). These questions are used by interviewers to see how you deal with situations similar to those you might encounter in the job. Interviewers use this type of question because past behaviour is scientifically proven to be the best predictor of future behaviour.
    • List five questions to ask the interviewer about the job, the company and the industry to help you get a good understanding of whether the role and company really fit your needs and wants. To get some inspiration, check this article on inc.com.
    • Find out how much you are worth on the job market and determine your needs based on your living expenses, especially when moving abroad.
    • Ask for permission from the people you plan to use as references. Also make sure you have your CV at hand and an overview of your grades.

    Feel free to comment on this article and let us know what your experience is with developing a cheat sheet for a telephone interview. Good luck with the preparation of your sheet.

  • The Incremental Architect&rsquo;s Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx

    The functionality of programs is entered via Entry Points. So what we're talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases:

    1. Analyzing the requirements to find the Entry Points and their signatures
    2. Designing the functionality to be executed when those Entry Points get triggered
    3. Implementing the functionality according to the design, aka coding

    I presume you're familiar with phase 1 in some way. And I guess you're proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. "Designing functionality? What's that supposed to mean?" you might already have thought. Here's my definition: to design functionality (or functional design for short) means thinking about... well, functions. You find a solution for what's supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution, that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it's "design", not "coding".

    And here is what functional design is not: it's not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It's equally basic. It's what code needs to do in order to deliver some functionality or quality. Logic is what's doing that which needs to be done by software. Transformations are either done through expressions or API-calls. And then there is alternative control flow depending on the result of some expression. Basically it's just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do). But calling your own function is not logic. It's not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn't it. What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent). That's no small feat, however. Evolvable code can hardly be overestimated. That's why to me functional design is so important. It's at the core of software development.

    To sum this up: functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need to back out through refactoring. Functional design on the other hand is bloodless without actual code. It's just a theory with no experiments to prove it. But how to do functional design?

    An example of functional design

    Let's assume a program to de-duplicate strings.
    The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a, and the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line

        C:\>deduplicate "a, b, a, c, d, b, e, c, a"
        a, b, c, d, e

    ...or by clicking on a GUI button. This leads to the Entry Point function getting called. It's the program's main function in case of the batch version, or a button click event handler in the GUI version. That's the physical Entry Point, so to speak. It's inevitable. What then happens is a three step process:

    1. Transform the input data from the user into a request.
    2. Call the request handler.
    3. Transform the output of the request handler into a tangible result for the user.

    Or to phrase it a bit more generally:

    1. Accept input.
    2. Transform input into output.
    3. Present output.

    This does not mean any of these steps requires a lot of effort. Maybe it's just one line of code to accomplish it. Nevertheless it's a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP). Interestingly, the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it's still on a meta-level. The application to the domain at hand is easy, though:

    1. Accept string list from command line
    2. De-duplicate
    3. Present de-duplicated strings on standard output

    And this concrete list of processing steps can easily be transformed into code:

        static void Main(string[] args) {
            var input = Accept_string_list(args);
            var output = Deduplicate(input);
            Present_deduplicated_string_list(output);
        }

    Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you're not so sure how to go about the de-duplication, for example. Then just implement what's easy right now, e.g.

        private static string Accept_string_list(string[] args) {
            return args[0];
        }

        private static void Present_deduplicated_string_list(string[] output) {
            var line = string.Join(", ", output);
            Console.WriteLine(line);
        }

    Accept_string_list() contains logic in the form of an API-call. Present_deduplicated_string_list() contains logic in the form of an expression and an API-call. And then repeat the functional design for the remaining processing step. What's left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design, you're left with just functions. So which functions could make up the de-duplication? Here's a suggestion:

    De-duplicate
    1. Parse the input string into a true list of strings.
    2. Register each string in a dictionary/map/set. That way duplicates get cast away.
    3. Transform the data structure into a list of unique strings.

    Processing step 2 obviously was the core of the solution. That's where real creativity was needed. That's the core of the domain.
    But now, after this refinement, the implementation of each step is easy again:

        private static string[] Parse_string_list(string input) {
            return input.Split(',')
                        .Select(s => s.Trim())
                        .ToArray();
        }

        private static Dictionary<string,object> Compile_unique_strings(string[] strings) {
            return strings.Aggregate(
                new Dictionary<string, object>(),
                (agg, s) => { agg[s] = null; return agg; });
        }

        private static string[] Serialize_unique_strings(Dictionary<string,object> dict) {
            return dict.Keys.ToArray();
        }

    With these three additional functions Main() now looks like this:

        static void Main(string[] args) {
            var input = Accept_string_list(args);
            var strings = Parse_string_list(input);
            var dict = Compile_unique_strings(strings);
            var output = Serialize_unique_strings(dict);
            Present_deduplicated_string_list(output);
        }

    I think that's very understandable code: just read it from top to bottom and you know how the solution to the problem works. It's a mirror image of the initial design:

    1. Accept string list from command line
    2. Parse the input string into a true list of strings.
    3. Register each string in a dictionary/map/set. That way duplicates get cast away.
    4. Transform the data structure into a list of unique strings.
    5. Present de-duplicated strings on standard output

    You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too. So much for an initial concrete example. Now it's time for some theory. Because there is method to this madness ;-) The above has only scratched the surface.

    Introducing Flow Design

    Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic; a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1] In its simplest form a process can be written as a bullet point list of steps, e.g.

    • Get data from user
    • Output result to user
    • Transform data
    • Parse data
    • Map result for output

    Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.?

    1. Get data from user
    2. Parse data
    3. Transform data
    4. Map result for output
    5. Output result to user

    That's great for a start into functional design. It's better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion? My recommendation: for any given function you're supposed to implement, first do a functional design. Then, once you're confident you know the processing steps - which are pretty small - refine and code them using TDD. You'll see that's much, much easier - and leads to cleaner code right away.
    For more information on this approach - I call it "Informed TDD" - read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I'd say. It's more according to the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with.

    So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example, such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always that simple to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their "dimensionality": text just has one dimension, it's sequential. Alternatives and parallelism are hard to encode in text. In addition, functional design using numbered lists lacks data. It's not visible what the input, output, and state of the processing steps are. That's why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient.

    Visualizing processes

    The building block of the functional design notation is a functional unit. I mostly draw it like this: something is done, it's clear what goes in, it's clear what comes out, and it's clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That's why I call this approach to functional design Flow Design. It's about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That's what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: "Programming is about detail". Detail is a matter of code. Once you start coding the processing steps you designed, you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling it. To me that's also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API-call). TDD unfortunately mixes both responsibilities. It's just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that's one reason why TDD has failed to deliver on its promise for many developers.

    Using functional units as building blocks, functional design processes can be depicted very easily. Here's the initial process for the example problem: for each processing step draw a functional unit and label it. Choose a verb or an "action phrase" as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step.
    Finally, think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean "no data is flowing", but nevertheless a signal is sent. A name like "list" or "strings" in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like "String" or "Customer" on the other hand signifies a data type. If you like, you can also combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent.

    Flows wired up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It's like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What's the trigger? That's why in the above process even the first processing step has an input. If you like, view such 1D-flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).

    The Principle of Mutual Oblivion

    A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don't know where their input is coming from (or even when it's gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don't know where their output is going. They just produce it in their own time, independent of other functional units. That means at least conceptually all functional units work in parallel. Functional units don't know their "deployment context". They know nothing about the overall flow they are placed in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don't depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units.

    By building software in such a manner, functional design interestingly follows nature. Nature's building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell "controlling" a muscle cell for example:[2] the nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is "attached to". Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell "attached to" it. Saying "the nerve cell is controlling the muscle cell" thus only makes sense when viewing both from the outside. "Control" is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way. Both cells are mutually oblivious.
    Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it's coming from, neither cell cares about. Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism. How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become "intelligent designers" of "software cells" which we put together to form a "software organism" which responds in satisfying ways to triggers from its environment. My bet is: if nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler "machines"? So my rule is: wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That's what Flow Design is about. In that way it's even universal, I'd say. Its notation can also be applied to biology. Never mind labeling the functional units with nouns. That's ok in Flow Design. You'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 "software cells" (functional units). Both, though, follow the same basic principle.

    Translating functional units into code

    Moving from functional design to code is no rocket science. In fact it's straightforward. There are two simple rules:

    1. Translate an input port to a function.
    2. Translate an output port either to a return statement in that function or to a function pointer visible to that function.

    The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let's be clear about one thing: there is no dependency injection in nature. For all of an organism's complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks.

    Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called "stop words". In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: any number of (word) data items can flow from the functional unit for each input data item. It might be none, or one, or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value.
    So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

        void filter_stop_words(string word, Action<string> onNoStopWord) {
            if (...check if not a stop word...)
                onNoStopWord(word);
        }

    If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you're right. Conceptually, though, it's not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only. Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call, or make it global by putting it on the encompassing class. Then it's called an event. In C# that's even an explicit feature.

        class Filter {
            public void filter_stop_words(string word) {
                if (...check if not a stop word...)
                    onNoStopWord(word);
            }
            public event Action<string> onNoStopWord;
        }

    When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.

    Another example of 1D functional design

    Let's see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:

        string WordWrap(string text, int maxLineLength) {...}

    That's not an Entry Point since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length it can be split in multiple parts, each fitting in a line.

    Flow Design

    Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?

    • Words need to be assembled into lines
    • Words need to be extracted from the input text
    • The resulting lines need to be assembled into the output text
    • Words too long to fit in a line need to be split

    Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle "average words" (words not longer than a line). Here's the Flow Design for this increment: the first three bullet points turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see, because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It's not outside the brackets but inside.
    That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail. Does any processing step require further refinement? I don't think so. They all look pretty "atomic" to me. And if not... I can always backtrack and refine a process step using functional design later, once I've gained more insight into a sub-problem.

    Implementation

    The implementation is straightforward, as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls: easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve, as you'll see.

    Flow Design - Increment 2

    So far only texts consisting of "average words" are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope. Fortunately, Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend it - instead of changing well tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead about where extensions might need to be made in the future. I just "phase in" the remaining processing step: since neither Extract words nor Reformat know of their environment, neither needs to be touched due to the "detour". The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

    Implementation - Increment 2

    A trivial implementation checking the assumption that this works does not do anything to split long words. The input is just passed on. Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin's solution:[4] how does this solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so. Because his solution does not cover all the "word wrap situations" the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then... is a difference in LOC that important as long as it's in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design. But don't take my word for it. Try Flow Design on larger problems and compare for yourself.
    What's the easier, more straightforward way to clean code? And keep in mind: you ain't seen all yet ;-) There's more to Flow Design than described in this chapter.

    In closing

    I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that's necessary read my blog article here. For now let me point out to you - if you haven't already noticed - that Flow Design is a general purpose declarative language. It's "programming by intention" (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We're talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

    PS: If you like what you read, consider getting my ebook "The Incremental Architect's Napkin". It's where I compile all the articles in this series for easier reading.

    [1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don't put too much of it into a single processing step.
    [2] Image taken from www.physioweb.org
    [3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language it's of course a no brainer.
    [4] Taken from his blog post "The Craftsman 62, The Dark Path".
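    As a small addendum (an editor's sketch, not part of the original article): wiring mutually oblivious functional units together with continuations might look like the following, reusing the filter_stop_words idea from above with two hypothetical neighbour units.

        using System;

        class FlowWiring
        {
            // Each unit knows only its own input/output contract, never its neighbours.
            static void split_into_words(string line, Action<string> onWord)
            {
                foreach (var word in line.Split(' '))
                    onWord(word);
            }

            static void filter_stop_words(string word, Action<string> onNoStopWord)
            {
                if (word.Length > 2)        // stand-in for a real stop-word check
                    onNoStopWord(word);
            }

            static void print_word(string word)
            {
                Console.WriteLine("kept: " + word);
            }

            static void Main()
            {
                // The flow is wired up only here, at the "deployment" site:
                // split -> filter -> print. No unit references another directly.
                split_into_words("to be or not to be a programmer",
                    word => filter_stop_words(word, print_word));
            }
        }

    The three functions follow the PoMO: each could be tested, replaced, or rewired without touching the others, because only Main() knows the flow.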

  • XNA Notes 008

    - by George Clingerman
    This week has been a rough one. I've been sick and then in some kind of slump for my afternoon coding sessions. It could be from the cold, it could be that I'm still tired from writing that Windows Phone 7 game development book (which is out now!), or it could just be that I'm tired of winter and want some sunshine. All I know is that even while I'm sick, the XNA world keeps going along at its whirlwind pace. Below are the things I caught in between my coughing fits.

    Time Critical XNA News:
    • The 2011 MVP summit is almost here, so pass along your feelings and thoughts so the MVPs can take them and share them with the team in person: http://forums.create.msdn.com/forums/p/76317/464136.aspx#464136
    • Dream Build Play - there's no new announcement yet, but you can't get much closer to the end of February than this! http://www.dreambuildplay.com/Main/Home.aspx

    XNA Team:
    • Dean Johnson from the XNA team shares an excellent way of handling Guide.IsTrialMode on WP7: http://blogs.msdn.com/b/dejohn/archive/2011/02/21/calling-guide-istrialmode-on-windows-phone-7.aspx
    • Nick Gravelyn tries a new tactic in deciding if there's enough interest to develop a sequel or not. Don't YOU want Pixel Man 2 to come out? http://nickgravelyn.com/pixelman2/

    XNA MVPs:
    • Andy "The ZMan" Dunn finally shares what he's been secretly working on these past 4 months: http://twitter.com/#!/The_Zman/status/40590269392887808 http://www.youtube.com/watch?v=Rg8Z0ZdYbvg&feature=youtu.be
    • Joel Martinez lets developers around NYC know they should be signing up for Game Hack Day: http://twitter.com/joelmartinez/statuses/41118590862102528 http://gamehackday.org/71fdk

    XNA Developers:
    • Michael McLaughlin shares an XNA RenderTarget2D sample: http://geekswithblogs.net/mikebmcl/archive/2011/02/18/xna-rendertarget2d-sample.aspx
    • Martin Caine starts a new series on Deferred Rendering in XNA 4.0: http://twitter.com/#!/MartinCaine/status/39735221339291648 http://martincaine.com/xna/deferred_rendering_in_xna_4_introduction
    • ElemenyCy posts about his fun time with the IntermediateSerializer: http://www.ubergamermonkey.com/xna/holy-bloated-xml-batman/
    • Ben Kane releases a narrated dev diary video for Project Splice. Let him know if you'd like to see more! (I know I do!) http://twitter.com/#!/benkane/status/39846959498002432 http://www.youtube.com/watch?v=1EmziXZUo08&feature=youtu.be
    • Jason Swearingen (of Novaleaf) posts part 1 of his Spatial Partitioning solutions: http://altdevblogaday.org/2011/02/21/spatial-partitioning-part-1-survey-of-spatial-partitioning-solutions/
    • Brian Lawson of Dark Flow Studios shares what he's been up to lately, with lots of pretty screenshots and hints of announcements from Microsoft: http://www.darkflowstudios.com/entry/short-and-sweet-part-1
    • Luke Avery starts a new blog where he plans on making XNA tutorials for beginners (and he's got a few started already!): http://programmingwithovery.wordpress.com/

    Xbox LIVE Indie Games (XBLIG):
    • GameMarx Episode 10: http://www.gamemarx.com/video/the-show/24/ep-10-february-18-2010.aspx
    • Minecraft clone FortressCraft coming to XBLIG: http://www.eurogamer.net/articles/2011-02-23-minecraft-clone-fortresscraft-hits-xblig
    • ezMuze+ starts an IndieGoGo fundraiser campaign to help fund their second game and get it onto even more devices! http://www.indiegogo.com/ezmuze
    • Gamergeddon XBLIG round-up: http://www.gamergeddon.com/2011/02/20/xbox-indie-game-round-up-february-20th/?utm_campaign=twitter&utm_medium=twitter&utm_source=twitter
    • JForce Games loses their Ego: http://jforcegames.com/blog/index.php?itemid=121&catid=4

    XNA Game Development:
    • @BallerIndustry reminds all XNA developers that the Maths are important ;) http://twitter.com/#!/BallerIndustry/status/39317618280243200 http://www.youtube.com/watch?v=MjV3XDFsjP4&feature=player_embedded#at=106
    • @suhinini stumbles on an older but extremely useful post on XNA Content Pipeline debugging: http://twitter.com/#!/suhinini/status/39270189476352000 http://badcorporatelogo.wordpress.com/2010/10/31/xna-content-pipeline-debugging-4-0/
    • XNA Game Development Workshops at Singapore Universities: http://innovativesingapore.com/2011/02/xna-game-development-workshops-at-singapore-universities/
    • Indiefreaks announces that IGF v0.3 is out with Xbox 360 support, SunBurn 2.0.12, and it's now Open Source! http://twitter.com/#!/indiefreaks/status/39391953971982336
    • @liortal announces a new series on properly designing a game: http://twitter.com/#!/liortal53/status/39466905081217024 http://liortalblog.wordpress.com/2011/02/20/hello-cosmos/
    • Indies and XNA at CodeStock 2011: http://www.gamemarx.com/news/2011/02/20/indies-and-xna-at-codestock-2011.aspx
    • Train Frontier Express posts about XNA Content Hotloading: http://trainfrontierexpress.blogspot.com/2011/02/xna-content-hotloading-overview.html
    • Slyprid announces a new character editor in Transmute: http://twitter.com/#!/slyprid/status/40146992818696192 http://www.youtube.com/watch?v=OKhFAc78LDs&feature=youtu.be
    • The XNA 2D from the ground up tutorial series: http://xna-uk.net/blogs/darkgenesis/archive/2011/02/23/recap-the-xna-2d-from-the-ground-up-tutorial-series.aspx
    • Sgt.Conker posts a "Clingerman" (hey, that's me!) to stay relevant: http://www.sgtconker.com/2011/02/posting-a-clingerman-to-stay-relevant/

    Read the article

  • SQL SERVER – Merge Operations – Insert, Update, Delete in Single Execution

    - by pinaldave
This blog post is written in response to the T-SQL Tuesday hosted by Jorge Segarra (aka SQLChicken). I have been very actively using these Merge operations in my development. However, I have found from my consultancy work and friends that these amazing operations are not utilized most of the time. Here is my attempt to bring the necessity of using the Merge operation to the surface one more time.

MERGE is a new feature that provides an efficient way to perform multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; however, at present, by using the MERGE statement, we can include the logic of such data changes in one statement that even checks when the data is matched and then just updates it, and similarly, when the data is unmatched, it is inserted. One of the most important advantages of the MERGE statement is that the entire data set is read and processed only once. In earlier versions, three different statements had to be written to process three different activities (INSERT, UPDATE or DELETE); however, by using the MERGE statement, all of the update activities can be done in one pass of the database table.

I have written about these Merge operations earlier in my blog post here: SQL SERVER – 2008 – Introduction to Merge Statement – One Statement for INSERT, UPDATE, DELETE. One of the readers asked how we know that this operator does everything in a single pass and does not invoke the Merge operator multiple times. Let us run the same example which I have used earlier; I am listing it here again for convenience.

-- Let's create StudentDetails and StudentTotalMarks and insert some records.
USE tempdb
GO

CREATE TABLE StudentDetails
(
    StudentID INTEGER PRIMARY KEY,
    StudentName VARCHAR(15)
)
GO

INSERT INTO StudentDetails VALUES(1,'SMITH')
INSERT INTO StudentDetails VALUES(2,'ALLEN')
INSERT INTO StudentDetails VALUES(3,'JONES')
INSERT INTO StudentDetails VALUES(4,'MARTIN')
INSERT INTO StudentDetails VALUES(5,'JAMES')
GO

CREATE TABLE StudentTotalMarks
(
    StudentID INTEGER REFERENCES StudentDetails,
    StudentMarks INTEGER
)
GO

INSERT INTO StudentTotalMarks VALUES(1,230)
INSERT INTO StudentTotalMarks VALUES(2,255)
INSERT INTO StudentTotalMarks VALUES(3,200)
GO

-- Select from tables
SELECT * FROM StudentDetails
GO
SELECT * FROM StudentTotalMarks
GO

-- Merge statement
MERGE StudentTotalMarks AS stm
USING (SELECT StudentID, StudentName FROM StudentDetails) AS sd
    ON stm.StudentID = sd.StudentID
WHEN MATCHED AND stm.StudentMarks > 250 THEN DELETE
WHEN MATCHED THEN UPDATE SET stm.StudentMarks = stm.StudentMarks + 25
WHEN NOT MATCHED THEN
    INSERT (StudentID, StudentMarks) VALUES (sd.StudentID, 25);
GO

-- Select from tables
SELECT * FROM StudentDetails
GO
SELECT * FROM StudentTotalMarks
GO

-- Clean up (drop the referencing table first)
DROP TABLE StudentTotalMarks
GO
DROP TABLE StudentDetails
GO

The Merge join performs very well, and the following result is obtained. Let us check the execution plan for the Merge operator (you can click on the image to enlarge it). Evaluating the execution plan for the Table Merge operator only, we can clearly see that the Number of Executions property shows a value of 1, which makes it quite clear that in a single pass the Merge operation completes the Insert, Update, and Delete operations. I strongly suggest you use this operation in your development where possible. I have seen it implemented in many data warehousing applications.
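For readers following along, here is the single-pass outcome on the sample data above - each source row hits exactly one of the three WHEN branches:

- StudentID 1 (marks 230): matched, not above 250, so updated to 255
- StudentID 2 (marks 255): matched and above 250, so deleted
- StudentID 3 (marks 200): matched, not above 250, so updated to 225
- StudentIDs 4 and 5: no match in StudentTotalMarks, so inserted with marks 25

The final StudentTotalMarks rows are therefore (1, 255), (3, 225), (4, 25), (5, 25) - an INSERT, an UPDATE, and a DELETE all performed by one statement in one pass.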
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Merge

    Read the article

  • Don’t miss this very popular presentation on Punchout in iProcurement on June 26th 2012

    - by user793553
Don’t miss this very popular presentation on Punchout in iProcurement on June 26th. See Doc ID 1448447.1 for the Webcast details.

ADVISOR WEBCAST: Punchout in iProcurement
PRODUCT FAMILY: EBZs - Procurement

June 26, 2012 at 14:00 UK / 15:00 Cairo / 6:00 am Pacific / 7:00 am Mountain / 9:00 am Eastern

This one-hour session is recommended for technical and functional users who are maintaining and/or implementing Punchout from iProcurement. The session will provide an overview of the different Punchout models, setup, and the Punchout-to-PO XML/cXML cycle. It will also provide tips on troubleshooting the common issues that arise when a new supplier is added to Punchout or an existing one stops working.

TOPICS WILL INCLUDE:
- Overview of the Punchout models
- The Punchout-to-PO process cycle
- Demo - Punchout
- Certificates and setup
- Common issues and how to address them in an efficient way (documentation and notes)

A short, live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. The current schedule can be found in Note 740966.1; post-presentation recordings can be found in Note 740964.1.

WebEx Conference Details
Topic: Advisor Webcast - Punchout in iProcurement
Date and Time:
Tuesday, June 26, 2012 3:00 pm, Egypt Time (Cairo, GMT+02:00)
Tuesday, June 26, 2012 2:00 pm, GMT Summer Time (London, GMT+01:00)
Tuesday, June 26, 2012 9:00 am, Eastern Daylight Time (New York, GMT-04:00)
Tuesday, June 26, 2012 7:00 am, Mountain Daylight Time (Denver, GMT-06:00)
Event number: 597 373 155

To register for this meeting:
1. Event address for attendees: https://oracleaw.webex.com/oracleaw/onstage/g.php?d=597373155&t=a
2. Register for the meeting. Once the host approves your request, you will receive a confirmation email with instructions for joining the meeting.

InterCall Audio Instructions
A list of toll-free numbers can be found below.
VOICESTREAMING IS AVAILABLE
Teleconference ID: 70528713
UK Standard International: +44 1452 562 665
US Free Call: 1866 230 1938
US Local Call: 1845 608 8023

Global Toll-Free Numbers
MOS doc #: https://metalink3.oracle.com/od/faces/secure/km/DocumentDisplay.jspx?id=1148600.1

Argentina - Free Call: 0800 444 1009
Australia - Free Call: 1800 763 650
Austria - Free Call: 0800 111 956
Austria - Local Call: 0192 865 72
Belgium - Free Call: 0800 724 46
Belgium - Local Call: 0817 000 60
Brazil - Free Call: 0800 761 0835
Bulgaria - Free Call: 0080 011 511 76
Canada - Free Call: 1866 984 6577
Colombia - Free Call: 0180 091 562 17
Croatia - Free Call: 0800 222 305
Cyprus - Free Call: 8009 6341
Czech Republic - Free Call: 8007 007 95
Denmark - Free Call: 8088 8467
Denmark - Local Call: 3272 7506
Finland - Free Call: 0800 112 398
Finland - Local Call: 0923 114 014
France - Free Call: 0805 110 463
France - Local Call: 0359 580 290
Germany - Free Call: 0800 101 4918
Germany - Local Call: 0692 222 161 19
Greece - Free Call: 0080 012 8135
Hong Kong - Free Call: 8009 661 55
Hungary - Free Call: 0680 018 839
Hungary - Local Call: 0180 889 97
India - Free Call: 0008 001 006 600
Ireland - Free Call: 1800 300 170
Ireland - Local Call: 0143 198 35
Israel - Free Call: 1809 431 440
Italy - Free Call: 8007 840 87
Italy - Local Call: 0236 009 700
Japan - Free Call: 0066 338 124 31
Latvia - Free Call: 8000 3680
Luxembourg - Free Call: 8002 7941
Malaysia - Free Call: 1800 814 528
Mexico - Free Call: 0018 666 864 905
Monaco - Free Call: 8009 3655
Netherlands - Free Call: 0800 949 4596
Netherlands - Local Call: 0207 168 000
New Zealand - Free Call: 0800 451 190
North China - Free Call: 1080 074 413 29
Norway - Free Call: 8001 8057
Norway - Local Call: 2151 0847
Poland - Free Call: 0080 012 135 73
Portugal - Free Call: 8007 894 20
Romania - Free Call: 0800 895 558
Russia - Free Call: 8108 002 385 2044
Slovenia - Free Call: 0800 804 55
South Africa - Free Call: 0800 982 794
South China - Free Call: 1080 044 111 82
South Korea - Free Call: 0079 814 800 7887
Spain - Free Call: 9009 389 85
Spain - Local Call: 9111 421 10
Sweden - Free Call: 0200 214 344
Sweden - Local Call: 0850 596 375
Switzerland - Free Call: 0800 835 040
Switzerland - Local Call: 0445 804 280
Thailand - Free Call: 0018 004 421 98
UK - Free Call: 0800 073 1830
UK - Local Call: 0844 871 9364
UK - National Call: 0871 700 0309
UK - Standard International: +44 (0) 1452 562 665
USA - Free Call: 1866 230 1938

    Read the article

  • Expression Studio 4 - without SketchFlow…

    - by mbcrump
is kinda like an explosion with no “Ka-Boom”… I was excited to hear the news yesterday at Microsoft TechEd that Expression Studio 4 had officially launched. MSDN subscribers could log in and download the full release. So, I logged into my MSDN account and started downloading Expression Studio 4 Premium, thinking that I was only minutes away from trying out SketchFlow 4. To my dismay, I launched Blend 4 and noticed it did not say SketchFlow on the splash screen. So, I went to New Project, and the template was not available. After some digging around on the net, I learned my Premium MSDN subscription did not include SketchFlow and I would need to purchase the Ultimate Edition. Below is an excerpt directly from Microsoft:

Q: What products are included in Microsoft Expression Studio 4 Ultimate?
A: Expression Studio 4 Ultimate is comprised of 4 products: Expression Web 4, Microsoft Expression Blend® 4 + SketchFlow, Expression Encoder 4 Pro, and Expression Design 4. Expression Blend 4 includes SketchFlow in the Expression Studio 4 Ultimate product only.

Q: What products are included in Microsoft Expression Studio 4 Premium?
A: Expression Studio 4 Premium is comprised of 4 products: Expression Web 4, Microsoft Expression Blend 4, Expression Encoder 4, and Expression Design 4. Expression Studio 4 Premium is not available for retail purchase.

Q: What products are included in Microsoft Expression Studio 4 Web Professional?
A: Expression Studio 4 Web Professional is comprised of 3 products: Expression Web 4, Expression Encoder 4, and Expression Design 4.

As you can see, we got screwed on this deal, and plenty of people are complaining:

Kiran says (6.07.2010 at 5:07 PM): No SketchFlow for Expression Studio 4 Premium? What a bumper for Microsoft Partners!!

Martin says (6.07.2010 at 6:18 PM): Why does Expression Professional Subscription not include upgrades and new releases of Expression Studio. Good question hey. I bought my subscription 5 days ago thinking I would get what I purchased but no Expression upgrades or new releases for me, what a waste of money. I think I am not the only long term user of this software that feels disgruntled. Sorry john just had to tell someone.

shaggygi says (6.07.2010 at 7:31 PM): SketchFlow NOT included in Studio 4? WTF! I repeat.... WT...Freaking.... F! This is totally unacceptable. My development team purchased VS 2010 Premium w/ MSDN with the impression given by Adam Kinney, Scott Guthrie, etc. that this would be included in the Premium package or some sort of free upgrade. I understand this is a Marketing thing, but come on! I believe, at the very least, this should have been explained in detail before this release. John Papa... as a rep to give feedback to the team... Please please and please.... tell the powers-that-be to fix this problem. Sorry for the rant. Besides this issue, I believe it is a very good product :) Thanks

Vaclav Elias says (6.08.2010 at 4:30 AM): Well, I am also not happy that SketchFlow is only for the chosen ones :-) It is very nice product. Actually, kind of foundation for web development so they could really support any MSDN subscribers.. :-(

I am hoping that Microsoft will make this right for all of us with MSDN Premium subscriptions. In the meantime, you can check out the 5-day training series available here.

    Read the article

  • Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud

    - by Asian Angel
Backing up important files is something that all of us should do on a regular basis, but may not have given as much thought to as we should. This week we would like to know whether you use local storage, cloud storage, or a combination of both to back your files up.

Photo by camknows.

For some people, local storage media may be the most convenient and/or affordable way to back up their files. Having those files stored on media under your control can also provide a sense of security and peace of mind. But storing your files locally may also have drawbacks if something happens to your storage media. So how do you know whether the benefits outweigh the disadvantages or not? Here are some possible pros and cons that may affect your decision to use local storage to back up your files:

Local Storage

Pros:
- You are in control of your data
- Your files are portable and can go with you when needed if using external or flash drives
- Files are accessible without an internet connection
- You can easily add more storage capacity as needed (additional drives, etc.)

Cons:
- You need to arrange room for your storage media (if you have multiple external drives, etc.)
- Possible hardware failure
- No access to your files if you forget to bring your storage media with you or it is too bulky to bring along
- Theft and/or loss of home with all contents due to circumstances like fire

If you are someone who is always on the go and needs to travel as lightly as possible, cloud storage may be the perfect way for you to back up and access your files. Perhaps your laptop has a hard-drive failure or gets stolen… unhappy events to be sure, but you will still have a copy of your files available. Perhaps a company wants to make sure their records, files, and other information are backed up off site in case of a major hardware or system failure… expensive and/or frustrating to fix if it happens, but once again there is a nice backup ready to go once things are fixed. As with local storage, here are some possible pros and cons that may influence your choice of cloud storage to back up your files:

Cloud Storage

Pros:
- No need to carry around flash or bulky external drives
- All of your files are accessible wherever there is an internet connection
- No need to deal with local storage media (or its upkeep)
- Your files are still safe if your home is broken into or other unfortunate circumstances occur

Cons:
- Your files and data are not 100% under your control
- Possible hardware failure or loss of files on the part of your cloud storage provider (this could include a disgruntled employee wreaking havoc)
- No access to your files if you do not have an internet connection
- The cloud storage provider may eventually shut down due to financial hardship or other unforeseen circumstances
- The possibility of your files and data being stolen by hackers due to a security breach on the part of your cloud storage provider

You may also prefer to try to cover all of the possibilities by using both local and cloud storage to back up your files. If something happens to one, you always have the other to fall back on. Need access to those files at or away from home? As long as you have access to either your storage media or an internet connection, you are good to go. Maybe you are getting ready to choose a backup solution but are not sure which one would work better for you. Here is your chance to ask your fellow HTG readers which one they would recommend. Got a great backup solution already in place? Then be sure to share it with your fellow readers!

    Read the article

  • Session Update from IASA 2010

    - by [email protected]
Below: Tom Kristensen, senior vice president at Marsh US Consumer, and Roger Soppe, CLU, LUTCF, senior director of insurance strategy, Oracle Insurance. Tom and Roger participated in a panel discussion on policy administration systems this week at IASA 2010.

This week was the 82nd Annual IASA Educational Conference & Business Show, held in Grapevine, Texas. While attending the conference, I had the pleasure of serving as a panelist in one of the many outstanding sessions conducted this year. The session - entitled "Achieving Business Agility and Promoting Growth with a Modern Policy Administration System" - included industry experts Steve Forte from OneShield, Mike Sciole of IFG Companies, and Tom Kristensen, senior vice president at Marsh US Consumer. The session was conducted as a panel discussion and focused on how insurers can leverage best practices to mitigate risk while enabling rapid product innovation through a modern policy administration system. The panelists offered insight into business and technical challenges for both Life & Annuity and Property & Casualty carriers.

The session had three primary learning objectives:
- Identifying how replacing a legacy system with a more modern policy administration solution can deliver agility and growth
- Identifying which processes and systems should be re-engineered or replaced in order to improve speed-to-market and product support
- Uncovering how to leverage best practices to mitigate risk during a migration to a new platform

Tom Kristensen, an industry veteran with over 20 years of experience, was able to offer a unique perspective as a business process outsourcer (BPO). Marsh US Consumer is currently implementing both the Oracle Insurance Policy Administration solution and the Oracle Revenue Management and Billing platform while at the same time bringing on a new BPO customer. Tom offered insight on the need to replace their aging systems and on Marsh's ability to drive new products and processes with a modern solution. As a best practice, their current project has empowered their business users to play a major role in both the requirements-gathering and configuration phases. Tom stated that working with a modern solution has also enabled his organization to use a more agile implementation methodology and get hands-on experience with the software earlier in the project. He also indicated that Marsh was encouraged by how quickly it will be able to implement new products, which is another major advantage of a modern rules-based system.

One of the more interesting issues was raised by an audience member who asked, "With all the vendor solutions available in North America and across Europe, what is going to make some of them more successful than others and help ensure their long-term success?" Panelist Mike Sciole of IFG Companies suggested that carriers do their due diligence and follow a structured evaluation process, focusing on vendors who demonstrate they have the "cash to invest in long term R&D" and evaluating audited annual statements for verification. Other panelists suggested that the vendor space will continue to evolve, and that those with a strong strategy focused on the insurance industry and a solid roadmap will likely separate themselves from the rest. The session concluded with the panelists offering advice about not being afraid to evaluate new, modern systems.

While migrating to a new platform can be challenging and is typically undertaken only every 15+ years by carriers, the ability to rapidly deploy and manage new products, create consistent processes to better service customers, and manage the business more effectively, transparently, and securely is well worth the effort.

Roger A. Soppe, CLU, LUTCF, is the Senior Director of Insurance Strategy, Oracle Insurance.

    Read the article

  • Top Tier, A-Game Talent - How to Land 'em

    - by GeekAgilistMercenary
Recently the question came up from a close friend of mine: "Will my PhD help me attain a higher income in the north west?" I had to tell him that it might get him a little more, but it won't get him into the top income brackets for the occupation. A few days later, someone else asked the same thing. Then again, I see a job posting that requires a Bachelor's Degree and some other nonsense. The job posting even states they want "A-Game" talent.

I am almost shocked that part of this industry doesn't realize how unimportant a degree is to getting real top tier, A-game talent. (And yes, I get a little riled up about this matter.)

You can't make good software developers. No college out there is going to train someone to be in the top 10%, and absolutely not to be in the top 5% of skill levels. Colleges can NOT do this. It is up to the individual, and the individual alone. If top tier talent seems to come from a college, one should check that premise and look at the motivations the individuals have to go to that school. There is most likely a reason that top tier talent appears to be made there. The college, however, can only guide or assist; I repeat that "top tier talent is a very individualistic endeavor".

Some might say a group is needed, support is needed, this and that are needed. True, an individual needs a support system, and a college can provide that, but it generally ends there. The support group helps, provides a sounding board, and validates good ideas for the A-game, top tier geek. But again, the endeavor is the individual's desire.

top tier talent is a very individualistic endeavor - Me

Hiring Top Tier, A-Game Talent
There are a few things to keep in mind when trying to hire this level of game player:
- The first thing is to not require a degree of any sort. Sure, it looks good, but it won't dictate anything other than that the individual was able to go through the regimented steps of college.
- List the skills and ideas that you would like to find in an individual. Think of two people meeting for the first time: what do you want to know about the other individual? Team fit is absolutely fundamental for top tier talent. That support group that I mentioned above matters - top tier talent works best with a solid group of players.
- Keep your technology up to date and moving forward, and don't bore your top talent if you manage to get it. If the company slows down, they will leave. The more valuable they find out they are, the lower the tolerance they'll have for this. For managers, directors, and leaders in an organization, this is THE challenge.
- Provide opportunities not just for advancement, but also ways for them to advance their knowledge, such as training, a book budget, or other means. Even if some software they want to use isn't used on the project, get it for them (within reason of course - a couple hundred or even a few thousand dollars for a good software license to MSDN, Telerik, or another suite of software is ideal).
- Don't push them to overwork, and don't let them overwork themselves into burnout. As a leader in an organization, this is easy to do if you actually manage to hire top talent, because top talent just provides results and more results. But they are human; they will break. Don't be the cause of that, or you'll lose your talent.

For now, that is it from me on this topic - back to the revenue, code, projects, and pushing forward.

For the original entry, check out my personal blog with other juicy tech tidbits, rants, raves, and the like.

Agilist Mercenary

    Read the article

  • Reconciling the Boy Scout Rule and Opportunistic Refactoring with code reviews

    - by t0x1n
I am a great believer in the Boy Scout Rule: "Always check a module in cleaner than when you checked it out." No matter who the original author was, what if we always made some effort, no matter how small, to improve the module? What would be the result? I think if we all followed that simple rule, we'd see the end of the relentless deterioration of our software systems. Instead, our systems would gradually get better and better as they evolved. We'd also see teams caring for the system as a whole, rather than just individuals caring for their own small little part.

I am also a great believer in the related idea of Opportunistic Refactoring: "Although there are places for some scheduled refactoring efforts, I prefer to encourage refactoring as an opportunistic activity, done whenever and wherever code needs to be cleaned up - by whoever. What this means is that at any time someone sees some code that isn't as clear as it should be, they should take the opportunity to fix it right there and then - or at least within a few minutes."

Particularly note the following excerpt from the refactoring article: "I'm wary of any development practices that cause friction for opportunistic refactoring... My sense is that most teams don't do enough refactoring, so it's important to pay attention to anything that is discouraging people from doing it. To help flush this out, be aware of any time you feel discouraged from doing a small refactoring, one that you're sure will only take a minute or two. Any such barrier is a smell that should prompt a conversation. So make a note of the discouragement and bring it up with the team. At the very least it should be discussed during your next retrospective."

Where I work, there is one development practice that causes heavy friction - Code Review (CR). Whenever I change anything that's not in the scope of my "assignment", I'm rebuked by my reviewers for making the change harder to review. This is especially true when refactoring is involved, since it makes "line by line" diff comparison difficult. This approach is the standard here, which means opportunistic refactoring is seldom done, and only "planned" refactoring (which is usually too little, too late) takes place, if at all.

I claim that the benefits are worth it: three reviewers will work a little harder (to actually understand the code before and after, rather than look at the narrow scope of which lines changed - the review itself would be better due to that alone) so that the next 100 developers reading and maintaining the code will benefit. When I present this argument to my reviewers, they say they have no problem with my refactoring, as long as it's not in the same CR. However, I claim this is a myth:

(1) Most of the time, you only realize what and how you want to refactor when you're in the midst of your assignment. As Martin Fowler puts it: "As you add the functionality, you realize that some code you're adding contains some duplication with some existing code, so you need to refactor the existing code to clean things up... You may get something working, but realize that it would be better if the interaction with existing classes was changed. Take that opportunity to do that before you consider yourself done."

(2) Nobody is going to look favorably on you releasing "refactoring" CRs you were not supposed to do. A CR has a certain overhead, and your manager doesn't want you to "waste your time" on refactoring. When it's bundled with the change you're supposed to make, this issue is minimized.

The issue is exacerbated by ReSharper: each new file I add to the change (and I can't know in advance exactly which files will end up changed) is usually littered with errors and suggestions - most of which are spot on and totally deserve fixing. The end result is that I see horrible code, and I just leave it there. Ironically, I feel that fixing such code not only won't improve my standing, but will actually lower it, painting me as the "unfocused" guy who wastes time fixing things nobody cares about instead of doing his job. I feel bad about it because I truly despise bad code and can't stand watching it, let alone call it from my methods!

Any thoughts on how I can remedy this situation?
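To make the scale of the disputed changes concrete, here is a hypothetical Java example of the one-to-two-minute opportunistic refactoring the question is about - a rename plus an extracted helper, with behavior unchanged. The class, names, and numbers are invented purely for illustration:

```java
public class PriceCalculator {

    // Before (as originally checked out - works, but unclear):
    //   double calc(double a, int d) { return a - a * (d / 100.0); }

    // After a one-minute opportunistic refactoring: intention-revealing
    // names and an extracted helper. Behavior is unchanged.
    public double applyDiscount(double amount, int discountPercent) {
        return amount - discountAmount(amount, discountPercent);
    }

    private double discountAmount(double amount, int discountPercent) {
        return amount * (discountPercent / 100.0);
    }

    public static void main(String[] args) {
        // Quick check that the refactored code behaves as before.
        System.out.println(new PriceCalculator().applyDiscount(200.0, 15)); // 170.0
    }
}
```

A diff of such a change is trivially safe to review in isolation, which is exactly why blocking it on "harder line-by-line comparison" feels disproportionate.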

    Read the article

  • STOP PRESS: FY15 Q1 Oracle ZS3 Contest for Partners

    - by Cinzia Mascanzoni
04 JUNE 2014 - Oracle EMEA Partners Stop Press

STOP PRESS: FY15 Q1 Oracle ZS3 Contest for Partners
Share an unforgettable experience at the Teatro Alla Scala in Milan

Dear valued Partner,

We are pleased to launch a contest exclusive to our partners dedicated to promoting and selling Oracle Systems! You are essential to the success of Oracle, and we want to recognize your contribution and effort in driving Oracle Storage to the market. To show our appreciation, we are delighted to announce a contest giving the winners the opportunity to attend a roundtable chaired by Senior Oracle Executives and spend an unforgettable evening at the magnificent Teatro Alla Scala in Milan, followed by a stay at the Grand Hotel et de Milan, courtesy of Oracle. Recognition will be given to 12 partner companies (10 VARs & 2 VADs) for their ZFS storage booking achievement in the broad market between June 1st and July 18th 2014.

Criteria of Eligibility
- A minimum deal value of $30k is required for qualification
- Partners who are wholly or partially owned by a public sector organization are not eligible for participation

Winners
The winning VARs will be:
- The highest ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region (1)
- The highest Oracle on Oracle (2) ZS3 or ZBA bookings achievers by COB on July 18th, 2014 in each Oracle EMEA region

The winning VADs (3) will be:
- The highest ZS3 or ZBA bookings achiever by COB on July 18th 2014 in EMEA
- The highest Oracle on Oracle (2) ZS3 or ZBA bookings achiever by COB on July 18th 2014 in EMEA

(1) Two VAR winners for each EMEA region - Eastern Europe & CIS, Middle East & Africa, South Europe, North Europe, UK/Ireland & Israel - as per the criteria outlined above.
(2) Oracle on Oracle, in this instance, means ZS3 or ZBA storage attached to DB or DB options, Engineered Systems, or SPARC servers sold to the same customer by the same partner within the contest timelines.
(3) Two VAD winners, one for each of the criteria outlined above, will be selected from across EMEA.

Oracle shall be the final arbiter in selecting the winners. All winners will be notified via their Oracle account manager. Full details about the contest, the expenses covered by Oracle, and the timetable of events can be found on the Oracle EMEA Hardware (Servers & Storage) Partner Community workspace (FY15 Q1 ZFS Partner Contest). Access to the community workspace requires membership. If you are not a member, please register here.

The Prize
Winners will be invited to participate in a roundtable chaired by Oracle on Monday September 8th 2014 in Milan and to be guests of Oracle on the evening of September 8th, 2014 at the Teatro Alla Scala. The evening will comprise a private tour of the Scala museum, a cocktail reception in the elegant museum rooms, and a performance by the renowned soprano Maria Agresta. Our guests will then retire for the evening to the Grand Hotel et de Milan, courtesy of Oracle.

Good luck!!

For more information, please contact Sasan Moaveni.

Regards,
Olivier Tordo
Senior Director - Systems Business Development
Oracle EMEA Alliances & Channels

Resources: EMEA Hardware Partner Community | EMEA Oracle Partner Days | Find Partner Events | EMEA Partner News Blog | EMEA Partner Enablement Blog | Oracle PartnerNetwork

    Read the article

  • LibGDX Box2D Body and Sprite AND DebugRenderer out of sync

    - by Free Lancer
I am having a couple of issues with Box2D bodies. I have a GameObject holding a Sprite and a Body. I use a ShapeRenderer to draw an outline of the Body's and Sprite's bounding boxes. I also added a Box2DDebugRenderer to make sure everything's lining up properly. My problem is that the Sprite and Body at first overlap perfectly, but as I turn, the Body moves a bit off the Sprite, then comes back when the Car is facing either north or south. Here's an image of what I mean (not sure what that line is; it's the first time it has shown up):

BLUE is the Body, RED is the Sprite, PURPLE is the Box2DDebugRenderer. Also, you probably noticed a purple square in the top right corner. Well, that's the Car drawn by the Box2DDebugRenderer. I thought it might be the camera, but I've been playing with the cameras for hours and nothing seems to work; everything gives me weird results. Here's my code:

Screen:

public void show() {
    // --------------------- SETUP ALL THE CAMERA STUFF ------------------------------ //
    battleStage = new Stage( 720, 480, false );
    // Setup the camera. In Box2D we operate on a meter scale, pixels won't do it.
    // So we use an Orthographic camera with a viewport of 24 meters in width and 16 meters in height.
    battleStage.setCamera( new OrthographicCamera( CAM_METER_WIDTH, CAM_METER_HEIGHT ) );
    battleStage.getCamera().position.set( CAM_METER_WIDTH / 2, CAM_METER_HEIGHT / 2, 0 );

    // The Box2DDebugRenderer will handle rendering all physics objects for debugging
    debugger = new Box2DDebugRenderer( true, true, true, true );
    //debugCam = new OrthographicCamera( CAM_METER_WIDTH, CAM_METER_HEIGHT );
}

public void render(float delta) {
    // Update the physics world; use 1/45 for something around 45 frames/second on mobile devices
    physicsWorld.step( 1/45.0f, 8, 3 );

    // Set the camera matrices and clear the screen
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    battleStage.getCamera().update();

    // Draw game objects here
    battleStage.act(delta);
    battleStage.draw();

    // Again update the camera matrices and call the debug renderer
    debugCam.update();
    debugger.render( physicsWorld, debugCam.combined );

    // Vehicle handles its own interaction with the HUD;
    // update all Actors' movements in the game Stage
    hudStage.act( delta );
    // Draw each Actor onto the scene at their new positions
    hudStage.draw();
}

Car (extends Actor):

public Car( Texture texture, float posX, float posY, World world ) {
    super( "Car" );
    mSprite = new Sprite( texture );
    mSprite.setSize( WIDTH * Consts.PIXEL_METER_RATIO, HEIGHT * Consts.PIXEL_METER_RATIO );
    mSprite.setOrigin( mSprite.getWidth()/2, mSprite.getHeight()/2 ); // set the origin to be at the center of the body
    mSprite.setPosition( posX * Consts.PIXEL_METER_RATIO, posY * Consts.PIXEL_METER_RATIO ); // place the car in the center of the game map

    FixtureDef carFixtureDef = new FixtureDef();
    mBody = Physics.createBoxBody( BodyType.DynamicBody, carFixtureDef, mSprite );
}

public void draw() {
    mSprite.setPosition( mBody.getPosition().x * Consts.PIXEL_METER_RATIO,
                         mBody.getPosition().y * Consts.PIXEL_METER_RATIO );
    mSprite.setRotation( MathUtils.radiansToDegrees * mBody.getAngle() );

    // draw the sprite
    mSprite.draw( batch );
}

Physics (creates the Body):

public static Body createBoxBody( final BodyType pBodyType, final FixtureDef pFixtureDef, Sprite pSprite ) {
    float pRotation = 0;
    float pWidth = pSprite.getWidth();
    float pHeight = pSprite.getHeight();

    final BodyDef boxBodyDef = new BodyDef();
    boxBodyDef.type = pBodyType;
    boxBodyDef.position.x = pSprite.getX() / Consts.PIXEL_METER_RATIO;
    boxBodyDef.position.y = pSprite.getY() / Consts.PIXEL_METER_RATIO;

    // Temporary box shape of the body
    final PolygonShape boxPoly = new PolygonShape();
    final float halfWidth = pWidth * 0.5f / Consts.PIXEL_METER_RATIO;
    final float halfHeight = pHeight * 0.5f / Consts.PIXEL_METER_RATIO;
    boxPoly.setAsBox( halfWidth, halfHeight ); // set the anchor point to be the center of the sprite
    pFixtureDef.shape = boxPoly;

    final Body boxBody = BattleScreen.getPhysicsWorld().createBody( boxBodyDef );
    boxBody.createFixture( pFixtureDef );
    return boxBody;
}

Sorry for all the code and the long description, but it's hard to pin down what exactly might be causing the problem.
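One plausible culprit, offered here only as a hedged sketch and not a confirmed answer: libGDX's Sprite.setPosition() places the sprite's bottom-left corner, while a Box2D body's position is the center of a shape created with setAsBox(). Both draw() and createBoxBody() above mix those two anchor points, which produces exactly this kind of half-width/half-height offset that looks worse at some rotations. Assuming the same Consts.PIXEL_METER_RATIO convention as the code above, the center-aligned version might look like this:

```java
// Sketch only: align the sprite's CENTER with the body's center,
// since Sprite.setPosition() places the bottom-left corner.
public void draw() {
    mSprite.setPosition(
            mBody.getPosition().x * Consts.PIXEL_METER_RATIO - mSprite.getWidth() / 2f,
            mBody.getPosition().y * Consts.PIXEL_METER_RATIO - mSprite.getHeight() / 2f );
    mSprite.setRotation( MathUtils.radiansToDegrees * mBody.getAngle() );
    mSprite.draw( batch );
}

// Mirror the correction at creation time, so the body's center starts at
// the sprite's center rather than at its bottom-left corner:
boxBodyDef.position.x = ( pSprite.getX() + pSprite.getWidth() / 2f ) / Consts.PIXEL_METER_RATIO;
boxBodyDef.position.y = ( pSprite.getY() + pSprite.getHeight() / 2f ) / Consts.PIXEL_METER_RATIO;
```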

    Read the article
