Search Results

Search found 4634 results on 186 pages for 'oscar red carpet'.


  • SPECIALIZATION: DIFFERENTIATE YOURSELF AND BE RECOGNIZED

    - by michaela.seika(at)oracle.com
    The market is asking us for more and more solutions and commitments. To build these solutions, we rely on you, our Oracle Partners. In terms of commitments, Oracle must communicate to the market about the specialization of its partners and their competencies, matched to the projects that customers ask us to address. More than 50 specializations are available today for Gold, Platinum and Diamond partners:
    • on Technology products such as the Database, Database options, SOA, Business Intelligence, and more
    • on Application products such as ERP, CRM, and more
    • on Hardware products and Operating Systems
    To help you specialize, and therefore certify, our two value-added distributors, Altimate and Arrow ECS, will assist you in this process.
    ALTIMATE invites you to take part in the Lunch & Spécialisation tour. Take advantage of these programs, put in place for you, to get specialized and enjoy all the benefits that specialization gives you access to.
    ARROW ECS invites you to take part in L'Ecole de la spécialisation Oracle by Arrow (the Oracle specialization school by Arrow), and in the Oracle Solutions Tour: discover Oracle solutions during this tour of France, featuring roadmaps, product and solution workshops, and certifications.
    BENEFITS:
    • Oracle's commitment alongside its partners to address major deals
    • visibility with customers: being identified as an Expert on an offering, recognized and validated by Oracle
    • support (free support access) and Oracle University (vouchers to certify your Implementation Consultants at no cost)
    • Marketing budgets (lead generation, creation of Marketing campaigns, sponsorship of customer events)
    Differentiate yourself by specializing in your domain of expertise and accelerate your success! Oracle and its Value-Added Distributors.
    Eric Fontaine, Director Alliances & Channel Technology, Southern Europe, presents specialization and its benefits in a video.
    CONTACTS: ORACLE Jean-Jacques Panissié, Oracle Partner Development A&C Technology, +33 157 60 28 52 | ALTIMATE Sophie Daval, +33 1 34 58 47 68 | ARROW [email protected], +33 1 49 97 59 63

    Read the article

  • Official and unofficial apps in the iOS, WP7, and Android marketplaces

    - by Bil Simser
    The last few months have seen people complaining about the lack of "official" apps in the Windows Phone marketplace. In fact, a couple of months ago I wrote about this very thing here, asking whether we really needed these official apps or could get by with third-party solutions. Recently a list of "Top 100 Mobile Apps" crossed my desk and it was curious: 40 iPhone apps, 40 Android apps, 10 WP7 apps, and 10 BlackBerry apps. Really? 10 for WP7? So I wondered if the media was just playing this up and maybe continuing to do what I think most vendors are doing, which is treating Windows Phone as the red-headed step-child you keep in the basement while all along there's nothing wrong with them. I put together the list and went digging to see how many of the top 40 iOS and Android apps were also on the Windows Phone platform (sorry BlackBerry, you should just shut your doors right now). Here are the results. Note, these are all *free* apps. There might be other pay apps that have official representation across all mobile devices; I just chose to hunt these ones down because I'm cheap.
    In the top 40, I easily plucked out 20 that had official apps on all three platforms: Amazon Mobile, ESPN Score Centre, Evernote, Facebook, Foursquare, Google Search, IMDB, Kindle, Shazam, Skype (yes, I know, in beta on WP7), SlackerRadio, The Weather Channel, TripIt, Twitter, Yelp, Flixster, Netflix, TuneIn Radio, Dictionary.com, Angry Birds, and Groupon. Hey, that's pretty good IMHO. 20 or so apps, all free, and all fully functional and supported (and in some cases, even better looking on the Windows Phone platform than on the other platforms). A dozen or so more apps had official apps on some platforms but not all, so yes, there are gaps here. Here's a rundown of the hangers-on:
    Adobe Photoshop Express: This looks great on the iOS platform and there's even an official version on droid. Hope Adobe brings this to WP7. There are other photo editing programs though if you go looking (maybe we can get Paint.NET to be ported to the phone?).
    BBC News: A few apps offer news feeds, but nothing official on the Windows Phone. The feeds are good, but without video this app needs some WP7 love.
    Dropbox: Again Windows Phone loses out here with no official app. There are a few third-party ones that will help you along and offer most of the functionality that you need, but no integration that an official app might bring.
    Epicurious: Droid seems to be the one trailing here, as there are apps for it but nothing official (from what I can tell). Both iOS and WP7 have them.
    Flipboard: It's sad with Flipboard as it's such a great newsreader. The only official app is for iOS, but frankly the iPhone version looks horrible, so without a tablet the experience here isn't that hot. Maybe with WP8. Currently there's nothing even remotely similar to this on the other platforms.
    Google+: Is anyone still using this? No official app for WP7, but some clones. Apparently there's no API, so people are just screen scraping. Ugh.
    Mint.com: This app has all kinds of buzz and a lot of votes on the application requests site. Official apps for iOS and droid. No WP7 love (yet).
    TED: Quite a few TED apps on WP7, but nothing official. I think the third-party ones suffice, and some are pretty nice looking, taking advantage of the Metro interface and making for a good show.
    WebMD: There's a third-party app on WP7 here but nothing official. It seems to contain all the same information and functionality the official apps do, so I'm not sure an official one is needed, but it's here for inclusion.
    The other apps in the top 40 were very specific to the platform (for example, all three of them have a "Find my Phone" app). There are others that are missing out on the WP7 platform, like ooVoo, Words With Friends, and some of the Google apps (Google Voice, for example). Since you can integrate your GMail account right into the Windows Phone (via linked inboxes), I'm not sure if there's a need for an official GMail app here.
    Looking at the numbers, Windows Phone still gets the worst of the deal here, with half a dozen highly popular "official" apps that exist on the other mobile platforms and, in some cases, nothing even remotely similar to the official app to compare. This doesn't include things like Instagram, Pinterest, and others (don't get me started on those). Still, with 20+ highly popular free apps all represented on all three mobile platforms, I don't think it's a bad place to be in. The Windows Phone platform could get a little more love from the vendors missing here, or at least open up your APIs so the third-party crowd can step in and take up the slack.
    P.S. These are just my observations and I might have got a few items wrong. Feel free to chime in with missing or incorrect information. I am after all human. Well, most of me is.

    Read the article

  • How to Customize Fonts and Colors for Gnome Panels in Ubuntu Linux

    - by The Geek
    Earlier this week we showed you how to make the Gnome Panels totally transparent, but you really need some customized fonts and colors to make the effect work better. Here's how to do it. This article is the first part of a multi-part series on how to customize the Ubuntu desktop, written by How-To Geek reader and ubergeek, Omar Hafiz.
    Changing the Gnome Colors the Easy Way
    You'll first need to install Gnome Color Chooser, which is available in the default repositories (the package name is gnome-color-chooser). Then go to System > Preferences > Gnome Color Chooser to launch the program. When you see all these tabs you immediately know that Gnome Color Chooser does not only change the font color of the panel, but also the color of the fonts all over Ubuntu, desktop icons, and many other things as well.
    Now switch to the Panel tab; here you can control everything about your panels. You can change the font, font color, background and background color of the panels and start menus. Tick the "Normal" option and choose the color you want for the panel font. If you want, you can change the hover color of the buttons on the panel too.
    A little below the color options are the font options; these include the font, font size, and the X and Y positioning of the font. The first two options are pretty straightforward: they change the typeface and the size. The X-Padding and Y-Padding may confuse you, but they are interesting; they can give your panels a nice look by increasing the space between the items on your panel.
    The bottom half of the window controls the look of your start menus, which are the Applications, Places, and System menus. You can customize them just the way you did with the panel.
    Alright, that was the easy way to change the font of your panels.
    Changing the Gnome Theme Colors the Command-Line Way
    The other, harder (not so hard really) way is changing the configuration files that tell your panel how it should look. In your Home folder, press Ctrl+H to show the hidden files, then find the file ".gtkrc-2.0", open it, and insert this line in it. If there are any other lines in the file, leave them intact.
        include "/home/<username>/.gnome2/panel-fontrc"
    Don't forget to replace <username> with your user account name. When done, save and close the file. Now navigate to the folder ".gnome2" in your Home folder and create a new file named "panel-fontrc". Open the file you just created with a text editor and paste the following in it:
        style "my_color" { fg[NORMAL] = "#FF0000" }
        widget "*PanelWidget*" style "my_color"
        widget "*PanelApplet*" style "my_color"
    This text will make the font red. If you want other colors, you'll need to replace the hex value/HTML notation (in this case #FF0000) with the value of the color you want. To get the hex value you can use GIMP or Gcolor2, which is available in the default repositories, or you can right-click on your panel > Properties > Background tab, click to choose the color you want, and copy the hex value. Don't change anything else in the text. When done, save and close. A consolidated sketch of the two files follows below.
    Now press Alt+F2 and enter "killall gnome-panel" to force the panel to restart, or log out and log in again.
    Most of you will prefer the first way of changing the font and color for its ease of use and because it gives you many more options, but some may not have the ability or desire to download and install a new program on their machine, or have their own reasons for not using it; that's why we provided both ways.
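    To put the command-line method together, here's a minimal sketch of both files, assuming a hypothetical username of "omar" and white (#FFFFFF) panel text; only the path and the hex value change from the example above:
        # ~/.gtkrc-2.0 (any other lines in the file stay as they are)
        include "/home/omar/.gnome2/panel-fontrc"

        # ~/.gnome2/panel-fontrc
        style "my_color" {
            fg[NORMAL] = "#FFFFFF"
        }
        widget "*PanelWidget*" style "my_color"
        widget "*PanelApplet*" style "my_color"
    Then restart the panel with "killall gnome-panel" for the change to take effect.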

    Read the article

  • Must-see sessions at TCUK11

    - by Roger Hart
    Technical Communication UK is probably the best professional conference I've been to. Last year, I spoke there on content strategy, and this year I'll be co-hosting a workshop on embedded user assistance. Obviously, I'd love people to come along to that; but there are some other sessions I'd like to flag up for anybody thinking of attending.
    Tuesday 20th Sept - workshops
    This will be my first year at the pre-conference workshop day, and I'm massively glad that our workshop hasn't been scheduled alongside the one I'm really interested in. My picks:
    It looks like you're embedding user assistance. Would you like help? My colleague Dom and I are presenting this one. It's our paean to Clippy, to the brilliant idea he represented, and the crashing failure he was. Less precociously, we'll be teaching embedded user assistance, Red Gate style.
    Statistics without maths: acquiring, visualising and interpreting your data. This doesn't need to do anything apart from what it says on the tin in order to be gold dust. But given the speakers, I suspect it will. A data-informed approach is a great asset to technical communications, so I'd recommend this session to anybody even faintly interested. The speakers here have a great track record of giving practical, accessible introductions to big topics. Go along.
    Wednesday 21st Sept - day one
    There's no real need to recommend the keynote for a conference, but I will just point out that this year it's Google's Patrick Hofmann. That's cool. You know what else is cool:
    Focus on the user, the rest follows. An intro to modelling customer experience. This is a really exciting area for tech comms, and potentially touches on one of my personal hobby-horses: the convergence of technical communication and marketing. It's all part of delivering customer experience, and knowing what your users need lets you help them, sell to them, and delight them.
    Content strategy year 1: a tale from the trenches. It's often been observed that content strategy is great at banging its own drum, but not so hot on compelling case studies. Here you go, folks. This is the presentation I'm most excited about so far.
    On a mission to communicate! Skype help their users communicate, but how do they communicate with them? I guess we'll find out.
    Then there's the stuff that I'm not too excited by, but you might just be. The standards geeks and agile freaks can get together in a presentation on the forthcoming ISO standards for agile authoring. Plus, there's a session on VBA for tech comms.
    I do have one gripe about day 1. The other big UK tech comms conference, UA Europe, have - I think - netted the more interesting presentation from Ellis Pratt. While I have no doubt that his TCUK case study on producing risk assessments will be useful, I'd far rather go to his talk on game theory for tech comms. Hopefully UA Europe will record it.
    Thursday 22nd Sept - day two
    Day two has a couple of slots yet to be confirmed. The rumour is that one of them will be the brilliant "Questions and rants" session from last year. I hope so. It's not ranting, but I'll be going to:
    RTFMobile: beyond stating the obvious. Ultan O'Broin is an engaging speaker with a lot to say, and mobile is one of the most interesting and challenging new areas for tech comms. Even if this weren't a research-based presentation from a company with buckets of technology experience, I'd be going. It is, and you should too.
    Pattern recognition for technical communicators. One of the best things about TCUK is the tendency to include sessions that tackle the theoretical and bring them towards the practical. Kai and Chris delivered cracking and well-received talks last year, and I'm looking forward to seeing what they've got for us on some of the conceptual underpinning of technical communication.
    Developing an interactive non-text learning programme. Annoyingly, this clashes with Pattern Recognition, so I hope at least one of the streams is recorded again this year. The idea of communicating complex information without words is fascinating, and this sounds like a great example of this year's third stream: "anything but text".
    For the localization and DITA crowds, there's rich pickings on day two, though I'm not sure how many of those sessions I'm interested in. In the 13:00 - 13:40 slot, there's an interesting clash between Linda Urban on re-use and training content, and a piece on minimalism I'm sorely tempted by.
    That's my pick of #TCUK11. I'll be doing a round-up blog after the event, and probably talking a bit more about it beforehand. I'm also reliably assured that there are still plenty of tickets.

    Read the article

  • Customizing UPK outputs (Part 2 - Player)

    - by [email protected]
    There are a few things that can be done to give the Player output a personalized look to match your corporate branding. In my previous post, I talked about changing the logo. In addition to the logo, you can change the graphic in the heading, button colors, border colors, and many other items.
    Prior to making any customizations, I strongly recommend making a copy of the existing Player style. This will give you a backup in case things go wrong. I'd also recommend that you create your own brand. This way, when you install the newest updates from us, your brand will remain intact. Creating your own brand is pretty easy. Make sure you have modify permissions on the publishing styles directory, if you are using a multi-user installation. Under the Publishing/Styles folder, create a new folder with your company name. Copy all the publishing styles from the UPK folder to your newly created folder. Now, when you go through the Publishing wizard, you will have two categories to choose from: the UPK category or your custom category.
    Now, on to updating the Player output. First, the graphic that appears on the right-hand side of the Player. If you're using a multi-user installation, check out the Player style from your custom brand. Open the Player style, then open the img folder. The file named "banner_image.png" represents the graphic that appears on the right-hand side of the Player. It is currently sized at 425 x 54; try to keep your graphic about the same size. Rename your graphic file to "banner_image.png" and drag it into the img folder. Save the package, and check it in if you are in a multi-user installation. You've just updated the banner heading!
    Next, let's work on updating some of the other colors in the Player. All the customizable areas are located in the skin.css file, which is in the root of the Player style. Many of our customers update the colors to match their own theme. You don't have to be a programmer to make these changes, honest. :)
    To change the colors in the Player:
    1. Make a copy of the original skin.css file. (This is to make sure you have a working version to revert to, in case something goes wrong.)
    2. Open the skin.css file from the Player package. You can edit it using Notepad.
    3. Make the desired changes.
    4. Save the file.
    5. Save the package.
    6. Publish to view your new changes.
    When you open the skin.css, you will see groupings like this:
        .headerDivbar {
            height: 21px;
            background-color: #CDE2FD;
        }
    Change the value of background-color to the color of your choice. Note that you cannot use "red" as a color; rather, you should enter the hexadecimal color code. If you don't know the color code, search the web for "hexadecimal colors" and you'll find many sites that provide the information. Here are a few of the variables that you can update:
    Heading: .headerDivbar - changes the color of the banner that appears under the graphic.
    Button colors: .navCellOn - changes the color of the mode buttons when your mouse is hovering over them; .navCellOff - changes the color of the mode buttons when the mouse is not over them.
    Lines: .thorizontal - the color of the horizontal lines surrounding the outline; .tvertical - the color of the vertical lines on the left and right margins of the outline; .tsep - the color of the line that separates the outline from the content area.
    Search frame: .tocSearchColor - the color of the search area; .tocFrameText - the background color of the TOC tree.
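    For instance, a minimal sketch of a skin.css edit touching several of the variables above might look like the following. The hex values here are invented placeholders for your corporate palette, not UPK's shipped defaults, and I'm assuming background-color is the property each of these selectors uses:
        /* banner under the heading graphic */
        .headerDivbar { height: 21px; background-color: #1F4E79; }
        /* mode buttons: hover vs. normal */
        .navCellOn  { background-color: #D6E4F0; }
        .navCellOff { background-color: #FFFFFF; }
        /* line separating the outline from the content area */
        .tsep { background-color: #B0B0B0; }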
    Hint: If you want to try out the changes prior to updating the style, you can update the skin.css in some content you've already published for the Player (it's located in the css folder of the Player package). This way, you can immediately see the changes without going through publishing. Once you're happy with the changes, update the skin.css in the Player style.
    Want to customize more? Refer to the "Customizing the Player" section of the Content Development manual for more details on all the options in the skin.css that can be changed, and pictures of what each variable controls.
    I'd love to see how you've customized the Player for your corporate needs. Also, if there are other areas of the Player you'd like to modify but have not been able to, let us know. Feel free to share your thoughts in the comments.
    --Maria Cozzolino, Manager of Requirements & UI Design for UPK

    Read the article

  • VB Myth - Case Insensitivity is Awesome!

    - by Damon
    I was reading Andy Brown's article "10 Reasons Why Visual Basic is Better than C#" and the first claim is that VB is superior because of case insensitivity. I think the reasons he outlines are basically as follows:
    1. Your fingers get tired finding the shift key (e.g. typing PascalCase and camelCase members).
    2. You are much more likely to make mistakes while typing names.
    3. When you accidentally leave caps lock on, it really matters.
    These three arguments culminate in the conclusion: "It doesn't matter if you disagree with everything else in this article: case-sensitivity alone is sufficient reason to ditch C#!"
    Righto. I've been using Visual Basic since version 5.0, and I wrote a book about ASP.NET in Visual Basic, so I want everyone to know I'm definitely not a VB.NET hater. I had to convert to C# because it was the language of preference at the places I've worked, so I'm used to both languages. I love me some case sensitivity. So first, let's debunk the claims.
    First, your fingers do not get tired of finding the shift key unless you are writing code in Notepad and compiling everything on the command line. Visual Studio pretty much takes away the need to use the shift key at all. For the most part, any programmer worth a damn doesn't have to type more than about 3-5 characters of any variable or method name before IntelliSense kicks in to help. VB or C#, if you are not using the tab key for autocomplete then you are typing too much anyway, regardless of whether the shift key is involved or not. Also, you've got to be a pretty hard-core candy ass if you're complaining at the end of the day that your little fingers are hurting from hitting the shift key.
    Second, I cannot logically refute the fact that if there are more stringent rules about case sensitivity, it will lead to more mistakes. As such, know that you will be more prone to mistakes in C#. However, let's talk about the magnitude of the problem. If you are using IntelliSense then you have auto-correction built in, so you probably won't have much of a problem in the first place. If you manage to bypass IntelliSense and type something wrong, you normally are immediately presented with a red squiggly to let you know something is amiss. Normally, a person would look at the problem, figure out what the heck went wrong, and then avoid that problem again in the future. Granted, I have met people who seem to lack this capability, but their problem is deeper than a decision between VB.NET and C#. So let's make sure that we're all on the same page about this problem: if you have two teams of developers, one that uses VB.NET and one that uses C#, do not expect to see the VB.NET team drinking beer at the end of the project in festive revelry while the C# team is crying over what the hell to do because their code is riddled with case-sensitivity problems that nobody can resolve.
    Lastly, if you leave your caps lock key on, turn it off. Really, what kind of ass-hat would write an entire VB.NET application ENTIRELY IN CAPS? I happen to be a fan of case sensitivity because it encourages precision and uniformity. The last thing I need is a code base that looks like it was ransacked by LeEt HacKors wHo Can uSe wHateVer cASe tHey wanT. I mean really, if you saw someone write this:
        PuBLIc Sub MyMethod
            ...
        End Sub
    And upon asking them why BL was upper case, they responded "Oh, I accidentally hit the shift key there. Fortunately for me VB.NET is a case insensitive language, so I saved a couple of keystrokes by leaving it in there."
    Or if you saw:
        PUBLIC SUB ANOTHERMETHOD
            ...
        END SUB
    And the response to why it was uppercased was "Yeah, I accidentally had caps lock on. Fortunately for me VB.NET doesn't care. Really dodged a bullet there; glad I wasn't using C#." Would you not think that a bit ridiculous? If you want to convince C# developers that C# sucks, go for it. But the case insensitivity argument is crap.
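    For what it's worth, here is a tiny sketch of what that case-sensitivity "problem" actually looks like in C# (the identifier names are invented for the example); the compiler catches the slip immediately rather than silently accepting it:
        int myCount = 0;
        MyCount += 1;  // error CS0103: The name 'MyCount' does not exist in the current context
    And the red squiggly shows up in the editor before you even build, which is why the mistake rarely survives long enough to matter.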

    Read the article

  • PASS: Board Q&A at the Summit

    - by Bill Graziano
    The last two years we've put the Board in front of the members and taken questions. We're going to do that again this year. It will be in Room 307/308 from 12:15 to 1:30 on Friday.
    Yes, this time overlaps with the Birds of a Feather Lunch and the start of afternoon sessions - but only partially. You can attend the Q&A and still get to parts of both of those. There just isn't a great time to do this. Every time overlaps with something. We can't do it after the last session on Friday. We can't fit it between the last session and the evening events on Wednesday or Thursday. We had some discussion around breakfast time, but I didn't think that was realistic. This is the least bad time we could come up with.
    Last year we had 60-70 people attend. These were the specific items raised that I could work on:
    The first question was whether to increase transparency around individual votes of Board members. We approved this at the Board meeting the following day. The only caveat was that if the Board is given confidential information as a basis for their vote, then we may not be able to disclose individual votes. Putting a Director in a position where they can't publicly defend the reason for their vote is a difficult situation. Thanks Kendal!
    Can we have a Board member discretionary fund? As background, I took a couple of people to lunch so we could have a quiet place to talk. I bought lunch but wasn't able to expense it back to PASS. We just don't have a budget item for things like this. I think we should. I would guess the entire Board would like it also. It was in an earlier version of the budget but came out as part of a cost-cutting move to balance the budget. I'd like to see it added back in, but we'll have to see.
    I know there were comments about the elections. At this point we had created the Election Review Committee. I've already written at length about this process.
    Where does IT work go? PASS started to publish our internal management reports in December 2010. You can find them on our Governance page. These aren't filtered at all and include a variety of information about IT projects. The most recent update had roughly a page of updates related to IT. Lots of the work was related to Summit and the Orator tool that we use to manage speaker submissions.
    There were numerous requests that Tina Turner not be repeated. Done. I don't think we'll do anything quite like that again.
    We had a request for a payment plan for Summit. We looked into this briefly but didn't take any action. We didn't think the effort was worth the small number of people that would use it. If you disagree, submit this on our Summit Feedback site and get some votes.
    There were lots of suggestions around the first-timers events - especially from first-timers. You can find all our current activities related to first-timers at the First Timers page on the Summit web site, plus links to 34 (!) blog posts with suggestions for first-timers. And a big THANK YOU to Confio and Red Gate for sponsoring this.
    I hope you get the chance to attend. These events are very helpful to me as a Board member. I like being able to look around the room as comments are being made and see the audience reaction. It helps me gauge the interest in an idea.
    I'd also like to direct you to the Summit Feedback site. You can submit and vote on ideas to make the Summit a better experience. As of right now we have the suggestions from last year still up. We may reset these prior to the Summit though.

    Read the article

  • Win32 and Win64 programming in C sources?

    - by Nick Rosencrantz
    I'm learning OpenGL with C, and that makes me include the windows.h file in my project. I'd like to look at some more specific Windows functions, and I wonder if you can cite some good sources for learning the basics of Win32 and Win64 programming in C (or C++). I use MS Visual C++, and I prefer to stick with C even though much of the Windows API seems to be C++. I'd like my program to be portable, and by using a platform-independent graphics library like OpenGL I could make my program portable with some slight changes for window management. Could you direct me with some pointers to books or www links where I can find more info? I've already studied the OpenGL red book and The C Programming Language; what I'm looking for is the platform-dependent stuff and how to handle it, since I run both Linux and Windows. I find the Visual Studio development environment pretty good, but the debugger gdb is not available on Windows, so it's a trade-off which environment I'll choose in the end - Linux with gcc or Windows with MSVC.
    Here is the program that draws a graphics primitive with some use of windows.h. This program is also runnable on Linux without changing the code that actually draws the graphics primitive:
        #include <windows.h>
        #include <gl/gl.h>

        LRESULT CALLBACK WindowProc(HWND, UINT, WPARAM, LPARAM);
        void EnableOpenGL(HWND hwnd, HDC*, HGLRC*);
        void DisableOpenGL(HWND, HDC, HGLRC);

        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                           LPSTR lpCmdLine, int nCmdShow)
        {
            WNDCLASSEX wcex;
            HWND hwnd;
            HDC hDC;
            HGLRC hRC;
            MSG msg;
            BOOL bQuit = FALSE;
            float theta = 0.0f;

            /* register window class */
            wcex.cbSize = sizeof(WNDCLASSEX);
            wcex.style = CS_OWNDC;
            wcex.lpfnWndProc = WindowProc;
            wcex.cbClsExtra = 0;
            wcex.cbWndExtra = 0;
            wcex.hInstance = hInstance;
            wcex.hIcon = LoadIcon(NULL, IDI_APPLICATION);
            wcex.hCursor = LoadCursor(NULL, IDC_ARROW);
            wcex.hbrBackground = (HBRUSH)GetStockObject(BLACK_BRUSH);
            wcex.lpszMenuName = NULL;
            wcex.lpszClassName = "GLSample";
            wcex.hIconSm = LoadIcon(NULL, IDI_APPLICATION);
            if (!RegisterClassEx(&wcex))
                return 0;

            /* create main window */
            hwnd = CreateWindowEx(0, "GLSample", "OpenGL Sample", WS_OVERLAPPEDWINDOW,
                                  CW_USEDEFAULT, CW_USEDEFAULT, 256, 256,
                                  NULL, NULL, hInstance, NULL);
            ShowWindow(hwnd, nCmdShow);

            /* enable OpenGL for the window */
            EnableOpenGL(hwnd, &hDC, &hRC);

            /* program main loop */
            while (!bQuit)
            {
                /* check for messages */
                if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
                {
                    /* handle or dispatch messages */
                    if (msg.message == WM_QUIT)
                    {
                        bQuit = TRUE;
                    }
                    else
                    {
                        TranslateMessage(&msg);
                        DispatchMessage(&msg);
                    }
                }
                else
                {
                    /* OpenGL animation code goes here */
                    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
                    glClear(GL_COLOR_BUFFER_BIT);
                    glPushMatrix();
                    glRotatef(theta, 0.0f, 0.0f, 1.0f);
                    glBegin(GL_TRIANGLES);
                    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(0.0f, 1.0f);
                    glColor3f(0.0f, 1.0f, 0.0f); glVertex2f(0.87f, -0.5f);
                    glColor3f(0.0f, 0.0f, 1.0f); glVertex2f(-0.87f, -0.5f);
                    glEnd();
                    glPopMatrix();
                    SwapBuffers(hDC);
                    theta += 1.0f;
                    Sleep(1);
                }
            }

            /* shutdown OpenGL */
            DisableOpenGL(hwnd, hDC, hRC);

            /* destroy the window explicitly */
            DestroyWindow(hwnd);
            return msg.wParam;
        }

        LRESULT CALLBACK WindowProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
        {
            switch (uMsg)
            {
                case WM_CLOSE:
                    PostQuitMessage(0);
                    break;
                case WM_DESTROY:
                    return 0;
                case WM_KEYDOWN:
                {
                    switch (wParam)
                    {
                        case VK_ESCAPE:
                            PostQuitMessage(0);
                            break;
                    }
                }
                break;
                default:
                    return DefWindowProc(hwnd, uMsg, wParam, lParam);
            }
            return 0;
        }

        void EnableOpenGL(HWND hwnd, HDC* hDC, HGLRC* hRC)
        {
            PIXELFORMATDESCRIPTOR pfd;
            int iFormat;

            /* get the device context (DC) */
            *hDC = GetDC(hwnd);

            /* set the pixel format for the DC */
            ZeroMemory(&pfd, sizeof(pfd));
            pfd.nSize = sizeof(pfd);
            pfd.nVersion = 1;
            pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
            pfd.iPixelType = PFD_TYPE_RGBA;
            pfd.cColorBits = 24;
            pfd.cDepthBits = 16;
            pfd.iLayerType = PFD_MAIN_PLANE;
            iFormat = ChoosePixelFormat(*hDC, &pfd);
            SetPixelFormat(*hDC, iFormat, &pfd);

            /* create and enable the render context (RC) */
            *hRC = wglCreateContext(*hDC);
            wglMakeCurrent(*hDC, *hRC);
        }

        void DisableOpenGL(HWND hwnd, HDC hDC, HGLRC hRC)
        {
            wglMakeCurrent(NULL, NULL);
            wglDeleteContext(hRC);
            ReleaseDC(hwnd, hDC);
        }

    Read the article

  • Source Control and SQL Development – Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been specifically focusing on the latest version of SQL Source Control by Red Gate Software. But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio. "So, how does that work?" you may wonder. Well, let me share some of the details of how we do it where I work.
    The key to this approach is that everything is done via Transact-SQL script files: either natively written T-SQL, or generated. My preference is to write all my code by hand, which forces you to become better at your SQL syntax. But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards, and store those in your source control system. You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer and choosing Tasks, Generate Scripts (see figure 1). You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production and you need to make just a simple change, such as adding a new column or index. In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button. Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change. I believe that it is important to actually execute the script rather than just click the Save button, because this is your first test that your change script is working and you didn't somehow lose a portion of the change.
    As you can imagine, all this generating of scripts can get tedious and tempting to skip entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code, and then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program.
    So, now that you have all of these script files, what do you do with them? Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system. ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution. Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash nor corrupt anything; but these scripts are really intended to be run only once in a database.
    Once you have your initial set of scripts loaded into source control, making changes, such as altering a stored procedure, becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control. Of course, this is where the lack of integration for source control systems within SSMS becomes an irritation, because this means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in. And when you have 800+ procedures like we do, it can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, make the change, save it, and then go back to source control to check in. Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide. It is worth the effort, and this is how I have been doing development for the last several years.
    Remember that everything that SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing. And now I have shown you how you can do it all without spending any extra money. You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn't it), and of course Management Studio is free with your SQL Server database engine software. So, whether you spend the money on tools to make it easier or not, you now have no excuse for not using source control with your SQL development.
    * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS...DROP... at the top, followed by the CREATE PROCEDURE... section, and that followed by a section that assigns permissions. This allows me to run the same script regardless of whether the procedure previously existed in the database. If the script were only an ALTER PROCEDURE, it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it in if it did not exist. There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD. If you do, then you have broken the integrity of your deployment process, because what you deployed to PROD was not exactly the same as what was tested in TEST, so you have effectively released untested code into PROD.
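    To make that script layout concrete, here is a minimal sketch of the drop/create/permissions pattern described in the footnote (the procedure, table, and role names are invented for the example, not taken from any real codebase):
        IF EXISTS (SELECT * FROM sys.objects
                   WHERE object_id = OBJECT_ID(N'dbo.GetCustomer') AND type = N'P')
            DROP PROCEDURE dbo.GetCustomer;
        GO

        CREATE PROCEDURE dbo.GetCustomer
            @CustomerID int
        AS
        BEGIN
            SELECT CustomerID, CustomerName
            FROM dbo.Customer
            WHERE CustomerID = @CustomerID;
        END
        GO

        -- permissions section, so the script is complete on every run
        GRANT EXECUTE ON dbo.GetCustomer TO MyAppRole;
        GO
    Because the script drops any existing copy before the CREATE, the very same file can be run in DEV, TEST, and PROD without modification, which preserves the deployment integrity described above.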

    Read the article

  • WPF Databinding- Part 2 of 3

    - by Shervin Shakibi
    This is a follow-up to my previous post, WPF Databinding - Not Your Father's Databinding, Part 1 of 3. You can download the source code here: http://ssccinc.com/wpfdatabinding.zip
    Example 04
    In this example we demonstrate the use of default properties and also binding to an instance of an object which is part of a collection bound to its container. This is actually not as complicated as it sounds. First of all, let's take a look at our Employee class. Notice we have overridden the ToString method, which will return the employee's first name, last name, and employee number (in parentheses):
        public override string ToString()
        {
            return String.Format("{0} {1} ({2})", FirstName, LastName, EmployeeNumber);
        }
    In our XAML we have set the ItemsSource of the list box to just "Binding", and the Grid that contains it has its DataContext set to a collection of our Employee objects:
        DataContext="{StaticResource myEmployeeList}"
        ...
        <ListBox Name="employeeListBox" ItemsSource="{Binding }" Grid.Row="0" />
    The ToString method for each instance will get executed, and the list box shows the formatted string for each employee. If we did not have a ToString override, the list box would show each item's type name instead.
    Now let's take a look at the grid that displays the details when someone clicks on an item. The Grid has the following DataContext:
        DataContext="{Binding ElementName=employeeListBox,
                      Path=SelectedItem}"
    Which means it's bound to a specific instance of the Employee object. And within the grid we have textboxes that are bound to different properties of our class:
        <TextBox Grid.Row="0" Grid.Column="1" Text="{Binding Path=FirstName}" />
        <TextBox Grid.Row="1" Grid.Column="1" Text="{Binding Path=LastName}" />
        <TextBox Grid.Row="2" Grid.Column="1" Text="{Binding Path=Title}" />
        <TextBox Grid.Row="3" Grid.Column="1" Text="{Binding Path=Department}" />
    Example 05
    This project demonstrates use of the ObservableCollection and the INotifyPropertyChanged interface. Let's take a look at Employee.cs first: notice it implements the INotifyPropertyChanged interface. Now scroll down and notice that each setter makes a call to the OnPropertyChanged method, which raises the event notifying that the value of that specific property has been changed. Next, EmployeeList.cs: notice it is an ObservableCollection. Go ahead and set the startup project to Example 05 and then run. Click on "Add a new employee" and the new employee should appear in the list box.
    Example 06
    This is a great example of IValueConverter. It's actually a two-for-one deal. Like most of my presentation demos I found this by "Binging" (formerly known as g---ing); unfortunately now I can't find the original author to give him or her the credit deserved. Before we look at the code, let's run the app and look at the finished product. Put 0 in Celsius and you should see the Fahrenheit textbox displaying 32 degrees (I know this is calculated correctly from my elementary school science class); also note the color changed to blue. Now put 100 in Celsius, which should give us 212 Fahrenheit, but now the color is red, indicating it is hot. Finally, put in 75 Fahrenheit and you should see 23.88 for Celsius, and the color now should be black.
    Basically, IValueConverter allows different types to be bound; I'm sure you have had problems in the past trying to bind to Date values. First look at FahrenheitToCelciusConverter.cs, and notice it implements IValueConverter. IValueConverter has two methods: Convert and ConvertBack.
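    Since the converter is the heart of this demo, here is a minimal sketch of the shape such a class takes. This is my reconstruction, not the exact code from the download; I'm assuming Convert maps Fahrenheit text to Celsius, as the binding on txtCelsius described below suggests:
        using System;
        using System.Globalization;
        using System.Windows.Data;

        public class FahrenheitToCelciusConverter : IValueConverter
        {
            // The binding on txtCelsius reads txtFahrenheit's Text through Convert,
            // so this direction is Fahrenheit -> Celsius
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                double fahrenheit;
                if (double.TryParse(value as string, out fahrenheit))
                    return ((fahrenheit - 32) * 5.0 / 9.0).ToString("0.##");
                return value;
            }

            // ...and back the other way when the user edits the Celsius box
            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                double celsius;
                if (double.TryParse(value as string, out celsius))
                    return (celsius * 9.0 / 5.0 + 32).ToString("0.##");
                return value;
            }
        }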
    In each method we have the code for converting Fahrenheit to Celsius and vice versa. In our XAML, after we set a reference in our Window.Resources section, for txtCelsius we set the Path to the Text of txtFahrenheit and the Converter to an instance of our FahrenheitToCelciusConverter. There's no need to repeat this for txtFahrenheit, since we have both a Convert and a ConvertBack:
        Text="{Binding UpdateSourceTrigger=PropertyChanged,
               Path=Text, ElementName=txtFahrenheit,
               Converter={StaticResource myTemperatureConverter}}"
    As mentioned earlier, this is a twofer demo. In the second demo, we are basically converting a double datatype to a brush. Let's take a look at TemperatureToColorConverter: notice that in our Convert method, if the value is less than our cold temperature threshold we return a blue brush, and if it is higher than our hot temperature threshold we return a red brush. Since we don't have to convert a brush back to a double value in our example, ConvertBack is not implemented.
    Take time to go through these three examples, and I hope you have a better understanding of databinding, ObservableCollection, and IValueConverter. In the next blog posting we will talk about ValidationRule, DataTemplates, and DataTemplate triggers.

    Read the article

  • UIImagePickerController, UIImage, Memory and More!

    - by Itay
    I've noticed that there are many questions about how to handle UIImage objects, especially in conjunction with UIImagePickerController and then displaying them in a view (usually a UIImageView). Here is a collection of common questions and their answers. Feel free to edit and add your own. I obviously learnt all this information from somewhere too. Various forum posts, StackOverflow answers and my own experimenting brought me to all these solutions. Credit goes to those who posted some sample code that I've since used and modified. I don't remember who you all are - but hats off to you!
    How Do I Select An Image From the User's Images or From the Camera?
    You use UIImagePickerController. The documentation for the class gives a decent overview of how one would use it, and can be found here. Basically, you create an instance of the class, which is a modal view controller, display it, and set yourself (or some class) to be the delegate. Then you'll get notified when a user selects some form of media (movie or image in 3.0 on the 3GS), and you can do whatever you want.
    My Delegate Was Called - How Do I Get The Media?
    The delegate method signature is the following:
        - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info;
    You should put a breakpoint in the debugger to see what's in the dictionary, but you use that to extract the media. For example:
        UIImage* image = [info objectForKey:UIImagePickerControllerOriginalImage];
    There are other keys that work as well, all in the documentation.
    OK, I Got The Image, But It Doesn't Have Any Geolocation Data. What Gives?
    Unfortunately, Apple decided that we're not worthy of this information. When they load the data into the UIImage, they strip it of all the EXIF/Geolocation data.
    Can I Get To The Original File Representing This Image on the Disk?
    Nope. For security purposes, you only get the UIImage.
    How Can I Look At The Underlying Pixels of the UIImage?
    Since the UIImage is immutable, you can't look at the direct pixels. However, you can make a copy. The code to do this looks something like this:
        UIImage* image = ...; // An image
        NSData* pixelData = (NSData*)CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
        unsigned char* pixelBytes = (unsigned char *)[pixelData bytes];

        // Take away the red pixel, assuming 32-bit RGBA
        for (int i = 0; i < [pixelData length]; i += 4) {
            pixelBytes[i] = 0;                  // red
            pixelBytes[i+1] = pixelBytes[i+1];  // green
            pixelBytes[i+2] = pixelBytes[i+2];  // blue
            pixelBytes[i+3] = pixelBytes[i+3];  // alpha
        }
    However, note that CGDataProviderCopyData provides you with an "immutable" reference to the data - meaning you can't change it (and you may get a BAD_ACCESS error if you do). Look at the next question if you want to see how you can modify the pixels.
    How Do I Modify The Pixels of the UIImage?
    The UIImage is immutable, meaning you can't change it. Apple posted a great article on how to get a copy of the pixels and modify them, and rather than copy and paste it here, you should just go read the article. Once you have the bitmap context as they mention in the article, you can do something similar to this to get a new UIImage with the modified pixels:
        CGImageRef ref = CGBitmapContextCreateImage(bitmap);
        UIImage* newImage = [UIImage imageWithCGImage:ref];
    Do remember to release your references though, otherwise you're going to be leaking quite a bit of memory.
    After I Select 3 Images From The Camera, I Run Out Of Memory. Help!
    You have to remember that even though on disk these images take up only a few hundred kilobytes at most, that's because they're compressed as a PNG or JPG. When they are loaded into the UIImage, they become uncompressed. A quick back-of-the-envelope calculation would be:
        width x height x 4 = bytes in memory
    That's assuming 32-bit pixels. If you have 16-bit pixels (some JPGs are stored as RGBA-5551), then you'd replace the 4 with a 2. Now, images taken with the camera are 1600 x 1200 pixels, so let's do the math:
        1600 x 1200 x 4 = 7,680,000 bytes = ~8 MB
    8 MB is a lot, especially when you have a limit of around 24 MB for your application. That's why you run out of memory.
    OK, I Understand Why I Have No Memory. What Do I Do?
    There is never any reason to display images at their full resolution. The iPhone has a screen of 480 x 320 pixels, so you're just wasting space. If you find yourself in this situation, ask yourself the following question: do I need the full-resolution image? If the answer is yes, then you should save it to disk for later use. If the answer is no, then read the next part.
    Once you've decided what to do with the full-resolution image, you need to create a smaller image to use for displaying. Many times you might even want several sizes for your image: a thumbnail, a full-size one for displaying, and the original full-resolution image.
    OK, I'm Hooked. How Do I Resize the Image?
    Unfortunately, there is no single defined way to resize an image. Also, it's important to note that when you resize it, you'll get a new image - you're not modifying the old one. There are a couple of methods to do the resizing. I'll present them both here, and explain the pros and cons of each.
    Method 1: Using UIKit
        + (UIImage*)imageWithImage:(UIImage*)image scaledToSize:(CGSize)newSize;
        {
            // Create a graphics image context
            UIGraphicsBeginImageContext(newSize);

            // Tell the old image to draw in this new context, with the desired
            // new size
            [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];

            // Get the new image from the context
            UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();

            // End the context
            UIGraphicsEndImageContext();

            // Return the new image.
            return newImage;
        }
    This method is very simple, and works great. It will also deal with the UIImageOrientation for you, meaning that you don't have to care whether the camera was sideways when the picture was taken. However, this method is not thread-safe, and since thumbnailing is a relatively expensive operation (approximately ~2.5s on a 3G for a 1600 x 1200 pixel image), this is very much an operation you may want to do in the background, on a separate thread.
    Method 2: Using CoreGraphics
        + (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSize:(CGSize)targetSize;
        {
            CGFloat targetWidth = targetSize.width;
            CGFloat targetHeight = targetSize.height;

            CGImageRef imageRef = [sourceImage CGImage];
            CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
            CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
            if (bitmapInfo == kCGImageAlphaNone) {
                bitmapInfo = kCGImageAlphaNoneSkipLast;
            }

            // Swap width and height for left/right-rotated source images
            CGContextRef bitmap;
            if (sourceImage.imageOrientation == UIImageOrientationUp ||
                sourceImage.imageOrientation == UIImageOrientationDown) {
                bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight,
                                               CGImageGetBitsPerComponent(imageRef),
                                               CGImageGetBytesPerRow(imageRef),
                                               colorSpaceInfo, bitmapInfo);
            } else {
                bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth,
                                               CGImageGetBitsPerComponent(imageRef),
                                               CGImageGetBytesPerRow(imageRef),
                                               colorSpaceInfo, bitmapInfo);
            }

            // Rotate the context so the drawn image ends up upright;
            // radians() is assumed to be a degrees-to-radians helper macro,
            // e.g. #define radians(x) ((x) * M_PI / 180.0)
            if (sourceImage.imageOrientation == UIImageOrientationLeft) {
                CGContextRotateCTM(bitmap, radians(90));
                CGContextTranslateCTM(bitmap, 0, -targetHeight);
            } else if (sourceImage.imageOrientation == UIImageOrientationRight) {
                CGContextRotateCTM(bitmap, radians(-90));
                CGContextTranslateCTM(bitmap, -targetWidth, 0);
            } else if (sourceImage.imageOrientation == UIImageOrientationUp) {
                // NOTHING
            } else if (sourceImage.imageOrientation == UIImageOrientationDown) {
                CGContextTranslateCTM(bitmap, targetWidth, targetHeight);
                CGContextRotateCTM(bitmap, radians(-180.));
            }

            CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef);
            CGImageRef ref = CGBitmapContextCreateImage(bitmap);
            UIImage* newImage = [UIImage imageWithCGImage:ref];

            CGContextRelease(bitmap);
            CGImageRelease(ref);

            return newImage;
        }
    The benefit of this method is that it is thread-safe, plus it takes care of all the small things (using the correct color space and bitmap info, dealing with image orientation) that the UIKit version does.
    How Do I Resize and Maintain Aspect Ratio (like the AspectFill option)?
    It is very similar to the method above, and it looks like this:
        + (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSizeWithSameAspectRatio:(CGSize)targetSize;
        {
            CGSize imageSize = sourceImage.size;
            CGFloat width = imageSize.width;
            CGFloat height = imageSize.height;
            CGFloat targetWidth = targetSize.width;
            CGFloat targetHeight = targetSize.height;
            CGFloat scaleFactor = 0.0;
            CGFloat scaledWidth = targetWidth;
            CGFloat scaledHeight = targetHeight;
            CGPoint thumbnailPoint = CGPointMake(0.0, 0.0);

            if (CGSizeEqualToSize(imageSize, targetSize) == NO) {
                CGFloat widthFactor = targetWidth / width;
                CGFloat heightFactor = targetHeight / height;
                if (widthFactor > heightFactor) {
                    scaleFactor = widthFactor; // scale to fit height
                } else {
                    scaleFactor = heightFactor; // scale to fit width
                }
                scaledWidth = width * scaleFactor;
                scaledHeight = height * scaleFactor;

                // center the image
                if (widthFactor > heightFactor) {
                    thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
                } else if (widthFactor < heightFactor) {
                    thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
                }
            }

            CGImageRef imageRef = [sourceImage CGImage];
            CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
            CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
            if (bitmapInfo == kCGImageAlphaNone) {
                bitmapInfo = kCGImageAlphaNoneSkipLast;
            }

            CGContextRef bitmap;
            if (sourceImage.imageOrientation == UIImageOrientationUp ||
                sourceImage.imageOrientation == UIImageOrientationDown) {
                bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight,
                                               CGImageGetBitsPerComponent(imageRef),
                                               CGImageGetBytesPerRow(imageRef),
                                               colorSpaceInfo, bitmapInfo);
            } else {
                bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth,
                                               CGImageGetBitsPerComponent(imageRef),
                                               CGImageGetBytesPerRow(imageRef),
                                               colorSpaceInfo, bitmapInfo);
            }

            // In the right or left cases, we need to switch scaledWidth and scaledHeight,
            // and also the thumbnail point
            if (sourceImage.imageOrientation == UIImageOrientationLeft) {
                thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
                CGFloat oldScaledWidth = scaledWidth;
                scaledWidth = scaledHeight;
                scaledHeight = oldScaledWidth;
                CGContextRotateCTM(bitmap, radians(90));
                CGContextTranslateCTM(bitmap, 0, -targetHeight);
            } else if (sourceImage.imageOrientation == UIImageOrientationRight) {
                thumbnailPoint = CGPointMake(thumbnailPoint.y, thumbnailPoint.x);
                CGFloat oldScaledWidth = scaledWidth;
                scaledWidth = scaledHeight;
                scaledHeight = oldScaledWidth;
                CGContextRotateCTM(bitmap, radians(-90));
                CGContextTranslateCTM(bitmap, -targetWidth, 0);
            } else if (sourceImage.imageOrientation == UIImageOrientationUp) {
                // NOTHING
            } else if (sourceImage.imageOrientation == UIImageOrientationDown) {
                CGContextTranslateCTM(bitmap, targetWidth, targetHeight);
                CGContextRotateCTM(bitmap, radians(-180.));
            }

            CGContextDrawImage(bitmap, CGRectMake(thumbnailPoint.x, thumbnailPoint.y,
                                                  scaledWidth, scaledHeight), imageRef);
            CGImageRef ref = CGBitmapContextCreateImage(bitmap);
            UIImage* newImage = [UIImage imageWithCGImage:ref];

            CGContextRelease(bitmap);
            CGImageRelease(ref);

            return newImage;
        }
    The method we employ here is to create a bitmap with the desired size, but draw an image that is actually larger, thus maintaining the aspect ratio.
    So We've Got Our Scaled Images - How Do I Save Them To Disk?
    This is pretty simple. Remember that we want to save a compressed version to disk, and not the uncompressed pixels.
    Apple provides two functions that help us with this (documentation is here):
        NSData* UIImagePNGRepresentation(UIImage *image);
        NSData* UIImageJPEGRepresentation(UIImage *image, CGFloat compressionQuality);
    And if you want to use them, you'd do something like:
        UIImage* myThumbnail = ...; // Get some image
        NSData* imageData = UIImagePNGRepresentation(myThumbnail);
    Now we're ready to save it to disk, which is the final step (say into the documents directory):
        // Give a name to the file
        NSString* imageName = @"MyImage.png";

        // Now, we have to find the documents directory so we can save it
        // Note that you might want to save it elsewhere, like the cache directory,
        // or something similar.
        NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString* documentsDirectory = [paths objectAtIndex:0];

        // Now we get the full path to the file
        NSString* fullPathToFile = [documentsDirectory stringByAppendingPathComponent:imageName];

        // and then we write it out
        [imageData writeToFile:fullPathToFile atomically:NO];
    You would repeat this for every version of the image you have.
    How Do I Load These Images Back Into Memory?
    Just look at the various UIImage initialization methods, such as +imageWithContentsOfFile:, in the Apple documentation.
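    For completeness, here is a minimal sketch of loading a saved image back, building the path the same way we did when saving:
        // Rebuild the path to the file we saved earlier
        NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        NSString* fullPathToFile = [[paths objectAtIndex:0] stringByAppendingPathComponent:@"MyImage.png"];

        // Returns nil if the file doesn't exist or can't be decoded
        UIImage* loadedImage = [UIImage imageWithContentsOfFile:fullPathToFile];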

    Read the article

  • 2D metaball liquid effect - how to feed output of one rendering pass as input to another shader

    - by Guye Incognito
    I'm attempting to make a shader for a Unity3D web project. I want to implement something like the great answer by DMGregory to this question, in order to achieve a final look something like this. It's metaballs with specular and shading. The steps to make this shader are: 1. Convert the feathered blobs into a heightmap. 2. Generate a normal map from the heightmap. 3. Feed the normal map and height map into a standard Unity shader, for instance Transparent Parallax Specular. I pretty much have all the pieces I need assembled, but I am new to shaders and need help putting them together. I can generate a heightmap from the blobs using some fragment shader code I wrote (I'm just using the red channel here because I don't know if you can access the brightness): half4 frag (v2f i) : COLOR{ half4 texcol,finalColor; texcol = tex2D (_MainTex, i.uv); finalColor=_MyColor; if(texcol.r<_botmcut) { finalColor.r= 0; } else if((texcol.r>_topcut)) { finalColor.r= 0; } else { float r = _topcut-_botmcut; float xpos = _topcut - texcol.r; finalColor.r= (_botmcut + sqrt((xpos*xpos)-(r*r)))/_constant; } return finalColor; } This turns these blobs into this heightmap. Also, I've found some CG code that generates a normal map from a height map. The bit of code that builds the normal map from finite differences is here: void surf (Input IN, inout SurfaceOutput o) { o.Albedo = fixed3(0.5); float3 normal = UnpackNormal(tex2D(_BumpMap, IN.uv_MainTex)); float me = tex2D(_HeightMap,IN.uv_MainTex).x; float n = tex2D(_HeightMap,float2(IN.uv_MainTex.x,IN.uv_MainTex.y+1.0/_HeightmapDimY)).x; float s = tex2D(_HeightMap,float2(IN.uv_MainTex.x,IN.uv_MainTex.y-1.0/_HeightmapDimY)).x; float e = tex2D(_HeightMap,float2(IN.uv_MainTex.x-1.0/_HeightmapDimX,IN.uv_MainTex.y)).x; float w = tex2D(_HeightMap,float2(IN.uv_MainTex.x+1.0/_HeightmapDimX,IN.uv_MainTex.y)).x; float3 norm = normal; float3 temp = norm; //a temporary vector that is not parallel to norm if(norm.x==1) temp.y+=0.5; else temp.x+=0.5; //form a basis with norm being one of the axes: float3 perp1 = normalize(cross(norm,temp)); float3 perp2 = normalize(cross(norm,perp1)); //use the basis to move the normal in its own space by the offset float3 normalOffset = -_HeightmapStrength * ( ( (n-me) - (s-me) ) * perp1 + ( ( e - me ) - ( w - me ) ) * perp2 ); norm += normalOffset; norm = normalize(norm); o.Normal = norm; } Also, here is the built-in Transparent Parallax Specular shader for Unity. 
Shader "Transparent/Parallax Specular" { Properties { _Color ("Main Color", Color) = (1,1,1,1) _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 0) _Shininess ("Shininess", Range (0.01, 1)) = 0.078125 _Parallax ("Height", Range (0.005, 0.08)) = 0.02 _MainTex ("Base (RGB) TransGloss (A)", 2D) = "white" {} _BumpMap ("Normalmap", 2D) = "bump" {} _ParallaxMap ("Heightmap (A)", 2D) = "black" {} } SubShader { Tags {"Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent"} LOD 600 CGPROGRAM #pragma surface surf BlinnPhong alpha #pragma exclude_renderers flash sampler2D _MainTex; sampler2D _BumpMap; sampler2D _ParallaxMap; fixed4 _Color; half _Shininess; float _Parallax; struct Input { float2 uv_MainTex; float2 uv_BumpMap; float3 viewDir; }; void surf (Input IN, inout SurfaceOutput o) { half h = tex2D (_ParallaxMap, IN.uv_BumpMap).w; float2 offset = ParallaxOffset (h, _Parallax, IN.viewDir); IN.uv_MainTex += offset; IN.uv_BumpMap += offset; fixed4 tex = tex2D(_MainTex, IN.uv_MainTex); o.Albedo = tex.rgb * _Color.rgb; o.Gloss = tex.a; o.Alpha = tex.a * _Color.a; o.Specular = _Shininess; o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap)); } ENDCG } FallBack "Transparent/Bumped Specular" }
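An editorial aside: the finite-difference step in the surf function above is independent of CG, so a minimal sketch of the same idea in Java may help readers new to shaders (the height array access assumes an interior pixel, and strength stands in for _HeightmapStrength; both are illustrative assumptions):

    // Sketch: derive a normal from a heightmap by central differences,
    // one common convention: the normal tilts against the height gradient.
    static float[] normalFromHeight(float[][] height, int x, int y, float strength) {
        float n = height[x][y + 1];   // neighbor above
        float s = height[x][y - 1];   // neighbor below
        float e = height[x + 1][y];   // neighbor right
        float w = height[x - 1][y];   // neighbor left
        float nx = (w - e) * strength;
        float ny = (s - n) * strength;
        float nz = 1.0f;              // z stays positive so the surface faces the viewer
        float len = (float) Math.sqrt(nx * nx + ny * ny + nz * nz);
        return new float[] { nx / len, ny / len, nz / len };
    }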

    Read the article

  • Oracle OpenWorld Preview: Let's Get Social and Interactive

    - by Christie Flanagan
    On this blog, we often write about getting social and interactive. Usually, we're talking about how to create a social business or how to make the customer experience more social and interactive. Today's topic is about getting social and interactive as well, but this time we're talking about getting social and interactive the old-fashioned way: face-to-face at Oracle OpenWorld with fellow Oracle WebCenter customers, partners and experts and the broader Oracle community. Here are some great ways to get social at OpenWorld outside of the exhibition halls and meeting rooms: Oracle OpenWorld Welcome Reception - Sponsored by Fujitsu. Sunday, September 30, 7:00 p.m.–8:30 p.m., Yerba Buena Gardens & Howard Street Tent. You'll definitely want to attend the Opening Ceremonies for Oracle OpenWorld 2012 on Sunday, September 30. Centered in Yerba Buena Gardens (YBG) and shimmying out to other venues, the Opening Ceremonies are not to be missed. Join other attendees for great food and drink, energizing music, networking opportunities, and more. While you're at YBG (home of ORACLE TEAM USA's America's Cup Pavilion), be sure to meet the sailors who will be defending the 34th America's Cup in 2013. Get a good look at the 161-year-old Trophy itself—the oldest trophy still being contested in international sport. And at the AC72 boat display, view a model of the largest wingsail ever built. Oracle WebCenter Customer Appreciation Reception - Tuesday, October 2, 6:30 p.m.–9:30 p.m., The Palace Hotel, Ralston Ballroom. Those Oracle WebCenter customers who've RSVP'd to attend the Oracle WebCenter Customer Appreciation Reception shouldn't miss this private cocktail reception at one of San Francisco's finest hotels. Sponsored by Oracle WebCenter partners Fishbowl Solutions, Fujitsu, Keste, Mythics, Redstone Content Solutions, TEAM Informatics, and TekStream, this evening will provide plenty of time to interact with other WebCenter customers, partners and employees over hors d'oeuvres and cocktails. Oracle Appreciation Event – Sponsored by CSC, Fujitsu and Intel. Wednesday, October 3, 7:30 p.m.–1:00 a.m., Treasure Island, San Francisco. On Wednesday night October 3, Treasure Island will be engineered to rock as the Oracle Appreciation Event gets revved up and attendees get rolling. As always at the Oracle Appreciation Event, there will be unlimited refreshments, fun and games, the most awesome views of San Francisco from just about anywhere, and top-notch entertainment. Past performers read like a veritable who's who of the rock and roll elite. Join us—it's our way of saying thanks to you for supporting Oracle and our flagship conference. 
Complimentary shuttle service to and from Treasure Island will be provided, so all you have to worry about is having a rocking night of your own. Oracle OpenWorld Music Festival - September 30–October 4; check the schedule for venues and times. Oracle presents the first annual Oracle OpenWorld Music Festival, featuring some of today's breakthrough musicians from around the country and the world, including Macy Gray, Joss Stone, Jimmy Cliff and The Hives. It's five nights of back-to-back performances in the heart of San Francisco. Registered Oracle conference attendees get free admission, so remember your badge when you head to a show. With limited space at some venues, these concerts are first-come, first-served. So mark your calendars and get ready for the music to begin. See you there! I hope this gives you an idea of the many opportunities to socialize and interact with the Oracle community at OpenWorld, and if you're a music lover like me, you're in for a special treat as we debut our first annual Oracle OpenWorld Music Festival. Check out the links below for more information on these events and the many featured performers: Reflections from the Young Prisms A Brief Soul Session with Joss Stone Mixing It Up with Blues Mix Red Meat's Music is Rare and Well Done The English Beat's Dave Wakeling Gets Philosophical Top Ten Reasons to Attend the Oracle Appreciation Event There's Magic in the Air, There'll Be Music Everywhere Looking forward to seeing you at OpenWorld!

    Read the article

  • How to edit item in a listbox shown from reading a .csv file?

    - by Shuvo
    I am working on a project where my application can open a .csv file and read data from it. The .csv file contains the latitude and longitude of places. The application reads data from the file, shows it on a static map, and displays icons in the right places. The application can open multiple files at a time, each in a new tab. But I am having trouble in a couple of cases: When I try to add a new point to the opened .csv file, I am unable to append it to the existing data; instead of adding the new point, it replaces the others and writes only the new point. Also, I cannot use the SelectedIndexChanged event to edit an item in the listbox and then save the file. Any direction would be great. using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.IO; namespace CourseworkExample { public partial class Form1 : Form { public GPSDataPoint gpsdp; List<GPSDataPoint> data; List<PictureBox> pictures; List<TabPage> tabs; public static int pn = 0; private TabPage currentComponent; private Bitmap bmp1; string[] symbols = { "hospital", "university" }; Image[] symbolImages; ListBox lb = new ListBox(); string name = ""; string path = ""; public Form1() { InitializeComponent(); data = new List<GPSDataPoint>(); pictures = new List<PictureBox>(); tabs = new List<TabPage>(); symbolImages = new Image[symbols.Length]; for (int i = 0; i < 2; i++) { string location = "data/" + symbols[i] + ".png"; symbolImages[i] = Image.FromFile(location); } } private void openToolStripMenuItem_Click(object sender, EventArgs e) { FileDialog ofd = new OpenFileDialog(); string filter = "CSV File (*.csv)|*.csv"; ofd.Filter = filter; DialogResult dr = ofd.ShowDialog(); if (dr.Equals(DialogResult.OK)) { int i = ofd.FileName.LastIndexOf("\\"); name = ofd.FileName; path = ofd.FileName; if (i > 0) { name = ofd.FileName.Substring(i + 1); path = ofd.FileName.Substring(0, i + 1); } TextReader input = new StreamReader(ofd.FileName); string mapName = input.ReadLine(); GPSDataPoint gpsD = new GPSDataPoint(); gpsD.setBounds(input.ReadLine()); string s; while ((s = input.ReadLine()) != null) { gpsD.addWaypoint(s); } input.Close(); TabPage tabPage = new TabPage(); tabPage.Location = new System.Drawing.Point(4, 22); tabPage.Name = "tabPage" + pn; lb.Width = 300; int selectedindex = lb.SelectedIndex; lb.Items.Add(mapName); lb.Items.Add("Bounds"); lb.Items.Add(gpsD.Bounds[0] + " " + gpsD.Bounds[1] + " " + gpsD.Bounds[2] + " " + gpsD.Bounds[3]); lb.Items.Add("Waypoint"); foreach (WayPoint wp in gpsD.DataList) { lb.Items.Add(wp.Name + " " + wp.Latitude + " " + wp.Longitude + " " + wp.Ele + " " + wp.Sym); } tabPage.Controls.Add(lb); pn++; tabPage.Padding = new System.Windows.Forms.Padding(3); tabPage.Size = new System.Drawing.Size(192, 74); tabPage.TabIndex = 0; tabPage.Text = name; tabPage.UseVisualStyleBackColor = true; tabs.Add(tabPage); tabControl1.Controls.Add(tabPage); tabPage = new TabPage(); tabPage.Location = new System.Drawing.Point(4, 22); tabPage.Name = "tabPage" + pn; pn++; tabPage.Padding = new System.Windows.Forms.Padding(3); tabPage.Size = new System.Drawing.Size(192, 74); tabPage.TabIndex = 0; tabPage.Text = mapName; string location = path + mapName; tabPage.UseVisualStyleBackColor = true; tabs.Add(tabPage); PictureBox pb = new PictureBox(); pb.Name = "pictureBox" + pn; pb.Image = Image.FromFile(location); tabControl2.Controls.Add(tabPage); pb.Width = pb.Image.Width; 
pb.Height = pb.Image.Height; tabPage.Controls.Add(pb); currentComponent = tabPage; tabPage.Width = pb.Width; tabPage.Height = pb.Height; pn++; tabControl2.Width = pb.Width; tabControl2.Height = pb.Height; bmp1 = (Bitmap)pb.Image; int lx, ly; float realWidth = gpsD.Bounds[1] - gpsD.Bounds[3]; float imageW = pb.Image.Width; float dx = imageW * (gpsD.Bounds[1] - gpsD.getWayPoint(0).Longitude) / realWidth; float realHeight = gpsD.Bounds[0] - gpsD.Bounds[2]; float imageH = pb.Image.Height; float dy = imageH * (gpsD.Bounds[0] - gpsD.getWayPoint(0).Latitude) / realHeight; lx = (int)dx; ly = (int)dy; using (Graphics g = Graphics.FromImage(bmp1)) { Rectangle rect = new Rectangle(lx, ly, 20, 20); if (gpsD.getWayPoint(0).Sym.Equals("")) { g.DrawRectangle(new Pen(Color.Red), rect); } else { if (gpsD.getWayPoint(0).Sym.Equals("hospital")) { g.DrawImage(symbolImages[0], rect); } else { if (gpsD.getWayPoint(0).Sym.Equals("university")) { g.DrawImage(symbolImages[1], rect); } } } } pb.Image = bmp1; pb.Invalidate(); } } private void openToolStripMenuItem_Click_1(object sender, EventArgs e) { FileDialog ofd = new OpenFileDialog(); string filter = "CSV File (*.csv)|*.csv"; ofd.Filter = filter; DialogResult dr = ofd.ShowDialog(); if (dr.Equals(DialogResult.OK)) { int i = ofd.FileName.LastIndexOf("\\"); name = ofd.FileName; path = ofd.FileName; if (i > 0) { name = ofd.FileName.Substring(i + 1); path = ofd.FileName.Substring(0, i + 1); } TextReader input = new StreamReader(ofd.FileName); string mapName = input.ReadLine(); GPSDataPoint gpsD = new GPSDataPoint(); gpsD.setBounds(input.ReadLine()); string s; while ((s = input.ReadLine()) != null) { gpsD.addWaypoint(s); } input.Close(); TabPage tabPage = new TabPage(); tabPage.Location = new System.Drawing.Point(4, 22); tabPage.Name = "tabPage" + pn; ListBox lb = new ListBox(); lb.Width = 300; lb.Items.Add(mapName); lb.Items.Add("Bounds"); lb.Items.Add(gpsD.Bounds[0] + " " + gpsD.Bounds[1] + " " + gpsD.Bounds[2] + " " + gpsD.Bounds[3]); lb.Items.Add("Waypoint"); foreach (WayPoint wp in gpsD.DataList) { lb.Items.Add(wp.Name + " " + wp.Latitude + " " + wp.Longitude + " " + wp.Ele + " " + wp.Sym); } tabPage.Controls.Add(lb); pn++; tabPage.Padding = new System.Windows.Forms.Padding(3); tabPage.Size = new System.Drawing.Size(192, 74); tabPage.TabIndex = 0; tabPage.Text = name; tabPage.UseVisualStyleBackColor = true; tabs.Add(tabPage); tabControl1.Controls.Add(tabPage); tabPage = new TabPage(); tabPage.Location = new System.Drawing.Point(4, 22); tabPage.Name = "tabPage" + pn; pn++; tabPage.Padding = new System.Windows.Forms.Padding(3); tabPage.Size = new System.Drawing.Size(192, 74); tabPage.TabIndex = 0; tabPage.Text = mapName; string location = path + mapName; tabPage.UseVisualStyleBackColor = true; tabs.Add(tabPage); PictureBox pb = new PictureBox(); pb.Name = "pictureBox" + pn; pb.Image = Image.FromFile(location); tabControl2.Controls.Add(tabPage); pb.Width = pb.Image.Width; pb.Height = pb.Image.Height; tabPage.Controls.Add(pb); currentComponent = tabPage; tabPage.Width = pb.Width; tabPage.Height = pb.Height; pn++; tabControl2.Width = pb.Width; tabControl2.Height = pb.Height; bmp1 = (Bitmap)pb.Image; int lx, ly; float realWidth = gpsD.Bounds[1] - gpsD.Bounds[3]; float imageW = pb.Image.Width; float dx = imageW * (gpsD.Bounds[1] - gpsD.getWayPoint(0).Longitude) / realWidth; float realHeight = gpsD.Bounds[0] - gpsD.Bounds[2]; float imageH = pb.Image.Height; float dy = imageH * (gpsD.Bounds[0] - gpsD.getWayPoint(0).Latitude) / realHeight; lx = (int)dx; ly = (int)dy; 
using (Graphics g = Graphics.FromImage(bmp1)) { Rectangle rect = new Rectangle(lx, ly, 20, 20); if (gpsD.getWayPoint(0).Sym.Equals("")) { g.DrawRectangle(new Pen(Color.Red), rect); } else { if (gpsD.getWayPoint(0).Sym.Equals("hospital")) { g.DrawImage(symbolImages[0], rect); } else { if (gpsD.getWayPoint(0).Sym.Equals("university")) { g.DrawImage(symbolImages[1], rect); } } } } pb.Image = bmp1; pb.Invalidate(); MessageBox.Show(data.ToString()); } } private void exitToolStripMenuItem_Click(object sender, EventArgs e) { this.Close(); } private void addBtn_Click(object sender, EventArgs e) { string wayName = nameTxtBox.Text; float wayLat = Convert.ToSingle(latTxtBox.Text); float wayLong = Convert.ToSingle(longTxtBox.Text); float wayEle = Convert.ToSingle(elevTxtBox.Text); WayPoint wp = new WayPoint(wayName, wayLat, wayLong, wayEle); GPSDataPoint gdp = new GPSDataPoint(); data = new List<GPSDataPoint>(); gdp.Add(wp); lb.Items.Add(wp.Name + " " + wp.Latitude + " " + wp.Longitude + " " + wp.Ele + " " + wp.Sym); lb.Refresh(); StreamWriter sr = new StreamWriter(name); sr.Write(lb); sr.Close(); DialogResult result = MessageBox.Show("Save in New File?","Save", MessageBoxButtons.YesNo); if (result == DialogResult.Yes) { SaveFileDialog saveDialog = new SaveFileDialog(); saveDialog.FileName = "default.csv"; DialogResult saveResult = saveDialog.ShowDialog(); if (saveResult == DialogResult.OK) { sr = new StreamWriter(saveDialog.FileName, true); sr.WriteLine(wayName + "," + wayLat + "," + wayLong + "," + wayEle); sr.Close(); } } else { // sr = new StreamWriter(name, true); // sr.WriteLine(wayName + "," + wayLat + "," + wayLong + "," + wayEle); sr.Close(); } MessageBox.Show(name + path); } } } GPSDataPoint.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.IO; namespace CourseworkExample { public class GPSDataPoint { private float[] bounds; private List<WayPoint> dataList; public GPSDataPoint() { dataList = new List<WayPoint>(); } internal void setBounds(string p) { string[] b = p.Split(','); bounds = new float[b.Length]; for (int i = 0; i < b.Length; i++) { bounds[i] = Convert.ToSingle(b[i]); } } public float[] Bounds { get { return bounds; } } internal void addWaypoint(string s) { WayPoint wp = new WayPoint(s); dataList.Add(wp); } public WayPoint getWayPoint(int i) { if (i < dataList.Count) { return dataList[i]; } else return null; } public List<WayPoint> DataList { get { return dataList; } } internal void Add(WayPoint wp) { dataList.Add(wp); } } } WayPoint.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace CourseworkExample { public class WayPoint { private string name; private float ele; private float latitude; private float longitude; private string sym; public WayPoint(string name, float latitude, float longitude, float elevation) { this.name = name; this.latitude = latitude; this.longitude = longitude; this.ele = elevation; } public WayPoint() { name = "no name"; ele = 3.5F; latitude = 3.5F; longitude = 0.0F; sym = "no symbol"; } public WayPoint(string s) { string[] bits = s.Split(','); name = bits[0]; longitude = Convert.ToSingle(bits[2]); latitude = Convert.ToSingle(bits[1]); if (bits.Length > 4) sym = bits[4]; else sym = ""; try { ele = Convert.ToSingle(bits[3]); } catch (Exception e) { ele = 0.0f; } } public float Longitude { get { return longitude; } set { longitude = value; } } public float Latitude { get { return latitude; } set { latitude = value; } } public float Ele { get { return ele; } set 
{ ele = value; } } public string Name { get { return name; } set { name = value; } } public string Sym { get { return sym; } set { sym = value; } } } } .csv file data birthplace.png 51.483788,-0.351906,51.460745,-0.302982 Born Here,51.473805,-0.32532,-,hospital Danced here,51,483805,-0.32532,-,hospital
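An editorial note on the overwrite symptom described at the top of this question: new StreamWriter(name) truncates the file, which is why only the new point survives; opening the file in append mode (which the save-dialog branch above already does with new StreamWriter(saveDialog.FileName, true)) preserves the existing rows. A minimal sketch of the same idea in Java, with illustrative names:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    public class CsvAppend {
        // Appends one waypoint row to an existing CSV; the 'true' argument
        // opens the file in append mode instead of truncating it.
        public static void appendWaypoint(String path, String name,
                                          float lat, float lon, float ele) throws IOException {
            try (PrintWriter out = new PrintWriter(new FileWriter(path, true))) {
                out.println(name + "," + lat + "," + lon + "," + ele);
            }
        }
    }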

    Read the article

  • Five development tools I can't live without

    - by bconlon
    When applying to join Geeks with Blogs I had to specify the development tools I use every day. That got me thinking: it's taken a long time to whittle my tools of choice down to the selection I use, so it might be worth sharing. Before I begin, I appreciate we all have our preferred development tools, but these are the ones that work for me. Microsoft Visual Studio Microsoft Visual Studio has been my development tool of choice for more years than I care to remember. I first used this when it was Visual C++ 1.5 (hats off to those who started on 1.0) and by 2.2 it had everything I needed from a C++ IDE. Versions 4 and 5 followed, and if I had to guess I would expect more Windows applications were written in VC++ 6 and VB6 than in any other language. Then came the not-so-great versions Visual Studio .Net 2002 (7.0) and 2003 (7.1). If I'm honest, I was still using v6. 2005 was better and 2008 was simply brilliant. Everything worked, the compiler was super fast, and I was happy again...then came 2010...oh dear. 2010 is a big step backwards for me. It's not encouraging for my upcoming WPF exploits that 2010 is fronted in WPF technology, with the forever-growing Find/Replace dialog, the issues with C++ IntelliSense, and the buggy debugger. That said, it is still my tool of choice, but I hope they sort out the issues in SP1. I've tried other IDEs like Visual Age and Eclipse, but for me Visual Studio is the best. A really great tool. Liquid XML Studio XML development is a tricky business. The W3C standards are often difficult to get to the bottom of, so it's great to have a graphical tool to help. I first used Liquid Technologies' tools 5 or 6 years back when I needed to process XML data in C++. Their excellent XML Data Binding tool has an easy-to-use Wizard UI (as compared to the Castor or JAXB command-line tools) and allows you to generate code from an XML Schema. So instead of having to deal with untyped nodes as with a DOM parser, you get an object model providing a custom API in C++, C#, VB, etc. More recently they developed a graphical XML IDE with an XML Editor, an XSLT and XQuery debugger, and other XML tools. So now I can develop an XML Schema graphically, click a button to generate a sample XML document, and click another button to run the Wizard to generate code, including a sample application that will then load my sample XML document into the generated object model. This is a very cool toolset. Note: XML Data Binding is nothing to do with WPF Data Binding, but I hope to cover both in more detail another time. .Net Reflector Note: I've just noticed that starting from the end of February 2011 this will no longer be a free tool! .Net Reflector turns .Net byte code back into C# source code. But how can it work this magic? Well, the clue is in the name: it uses reflection to inspect a compiled .Net assembly. The assembly is compiled to byte code, which doesn't get compiled to native machine code until it's needed, using a just-in-time (JIT) compiler. The byte code still has all of the information needed to see classes, variables, methods and properties, so Reflector gathers this information and puts it in a handy tree. I have used .Net Reflector for years in order to understand what the .Net Framework is doing, as it sometimes has undocumented, quirky features. This really has been invaluable in certain instances, and I cannot heap enough praise on the original developer, Lutz Roeder. 
Smart Assembly In order to stop nosy geeks looking at our code using a tool like .Net Reflector, we need to obfuscate (mess up) the byte code. Smart Assembly is a tool that does this. Again, I have used this for a long time. It is very quick and easy to use. Another excellent tool. Coincidentally, .Net Reflector and Smart Assembly are now both owned by Red Gate. Again, kudos goes to the original developer, Jean-Sebastien Lange. TortoiseSVN SVN (Apache Subversion) is a source control system developed as an open source project. TortoiseSVN is a graphical UI wrapper over SVN that hooks into Windows Explorer to enable files to be updated, committed, merged, etc. from the right-click menu. This is an essential tool for keeping my hard work safe! Many years ago I used Microsoft SourceSafe, and I disliked CVS-type systems. But TortoiseSVN is simply the best source control tool I have ever used. --- So there you have it: my top 5 development tools that I use (nearly) every day and that have helped to make my working life a little easier. I'm sure there are other great tools that I wish I used but have never heard of, but if you have not used any of the above, I would suggest you check them out, as they are all very, very cool products.

    Read the article

  • JustCode Provides Reflector Alternative

    - by Joe Mayo
    If you've been a loyal Reflector user, you've probably been exposed to the debacle surrounding Red Gate's decision to no longer offer a free version. Since then, the race has begun for a replacement from a provider that would stand by its promises to the community. Mono has an ongoing free alternative, which has been available for a long time. However, other vendors are stepping up to the plate with their own offerings. If Not Reflector, Then What? One of these vendors is Telerik. In their recent Q1 2011 release of JustCode, Telerik offers a decompilation utility rivaling what we've become accustomed to in Reflector. Not only does Telerik offer a usable replacement, but they've (in my opinion) produced a product that integrates more naturally with Visual Studio than any other product ever has. Telerik's decompilation process is so easy that the accompanying demo in this post is blindingly short (except for the presence of verbose narrative). If you want to follow along with this demo, you'll need to have Telerik JustCode installed. If you don't have JustCode yet, you can buy it or download a trial at the Telerik Web site. A Tall Tale; Prove It! With JustCode, you can view code in the .NET Framework or any other 3rd-party library (that isn't well obfuscated). This demo depends on LINQ to Twitter, which you can download from CodePlex.com and reference, or install the package online as described in my previous post on NuGet. Regardless of the method, you'll have a project with a reference to LINQ to Twitter. Use a Console project if you want to follow along with this demo. Note: If you've created a Console project, remember to ensure that the Target Framework is set to .NET Framework 4. The default is .NET Framework 4 Client Profile, which doesn't work with LINQ to Twitter. You can check by double-clicking the Properties folder on the project and inspecting the Target Framework setting. Next, you'll need to add some code to your program that you want to inspect. Here, I add code to instantiate a TwitterContext, which is like a LINQ to SQL DataContext, but works with Twitter: var l2tCtx = new TwitterContext(); If you're following along, add the code above to the Main method, which will look similar to this: using LinqToTwitter; namespace NuGetInstall { class Program { static void Main(string[] args) { var l2tCtx = new TwitterContext(); } } } The code above doesn't really do anything, but it does give me something to show and a way to demonstrate how JustCode decompilation works. Once the code is in place, click on TwitterContext and press the F12 (Go to Definition) key. As expected, Visual Studio opens a metadata file with prototypes for the TwitterContext class. Here's the result: Opening a metadata file is the normal way that Visual Studio works when navigating to the definition of a type where you don't have the code. The scenario with TwitterContext happens because you don't have the source code to the file. Visual Studio has always done this, and you can experiment by selecting any .NET type, i.e. a string type, and observing that Visual Studio opens a metadata file for the .NET String type. The point I'm making here is that JustCode works the way Visual Studio works, and you'll see how this can make your job easier. In the previous figure, you only saw prototypes associated with the code; notice, for example, that the default constructor is empty. Again, this is normal because Visual Studio doesn't have the ability to decompile code. 
However, that's the purpose of this post: showing you how JustCode fills that gap. To decompile code, right-click on TwitterContext in the metadata file and select JustCode Navigate -> Decompile from the context menu. The shortcut keys are Ctrl+1. After a brief pause, accompanied by a progress window, you'll see the metadata expand into full decompiled code. Notice below how the default constructor now has code, as opposed to the empty member prototype in the original metadata: And Why is This So Different? Again, the big deal is that Telerik JustCode decompilation works in harmony with the way that Visual Studio works. The navigate-to functionality already exists, and you can use that, along with a simple context menu option (or shortcut key), to transform prototypes into decompiled code. Telerik is filling the Reflector/Red Gate gap by providing a supported alternative for decompiling code. Many people, including myself, used Reflector to decompile code when we were stuck with buggy libraries or insufficient documentation. Now we have an alternative that's officially supported by a company with an excellent track record for customer (developer) service, Telerik. Not only that, JustCode has several other IDE productivity tools that make the deal even sweeter. Joe

    Read the article

  • Dealing with "I-am-cool-and-you-are-dumb" manager [closed]

    - by Software Guy
    I have been working with a software company for about 6 months now. I like the projects I work on there and I really like all the people there except for one guy. That guy is technically smart, and he is a co-founder of the company. He is an okay guy in person (the kind you wouldn't want to care about much), but things get tricky when he is your manager. In general I am all okay, but there are times when I feel I am not being treated fairly: He doesn't give much thought to his own mistakes, but when I do something similar, he is super critical. Recently he went as far as to say "I am not sure if I can trust you with this feature". The details of this specific case are these: I was working on a feature, and I was already a couple of hours over my normal working hours, so I decided to stop and continue the next day. We use git, and I like to commit changes locally and only push when I feel they are ready. This manager insists that I push all the changes to the central repo (in case my hard drive crashes). So I pushed the change, and the ticket was marked as "to be tested". The next day I come in, he sits next to me, starts complaining, and says what I quoted above. I really didn't know what to say; I tried to explain to him that the ticket was still being worked on, but he didn't seem to listen. He interrupts me when I am coding, which I do not mind, but when I do the same, his face turns like this :| and he reacts as if his work were super important and I were just wasting his time. He asks me to accumulate all my questions and then ask him all at once, which is not always possible, as you need a clarification before you can continue on a feature implementation. And when I am coding, he talks on the phone with his customers next to me (when he could go to the meeting room with his laptop) and doesn't care. He made me switch to a whole new IDE (from NetBeans to a commercial IDE costing a lot of money) for a really tiny feature (which I later found out was in NetBeans as well!). I didn't make a big deal out of it, as I am equally comfortable working with this new IDE, but I couldn't understand the reasoning behind his obsession. He said this feature makes sure that if any method is updated by a programmer, the IDE will turn the method name red in places where it is used. I told him that I do not have a problem, since I always search for method usage in the project and make sure it's updated. IDEs even have refactoring features for exactly that, but... I recently implemented a feature for a project, and I was happy about it; considering him a senior, I asked for his comments on the implementation quality. He thought long and hard, made a few funny faces, and when he couldn't find anything, he said "ummm, your program will crash if JS is disabled" - he was wrong, since I had made sure it would work fine with default values even if JS was disabled. I told him that, and then he said "oh okay". BUT, the funny thing is, a few days back he implemented something and I objected with "But that would not run if JS is disabled", and his response was "We don't have to care about people who disable JS" :-/ Once he asked me to investigate if there was a way to modify a CMS-generated menu programmatically by extending the CMS. I did my research and told him that the only way is to inject a menu item using JavaScript / jQuery, and his reaction was "ah, that's ugly and hacky, not acceptable". Two days later, I saw that feature implemented in exactly the way I had suggested. 
The point is, his reaction was not respectful at all. Even if what I proposed was hacky, he should be respectful; I know what's hacky, and if I am suggesting something hacky, there must be a reason for it. There are plenty of other reasons / examples where I feel I am not being treated fairly. I want your advice as to what it is that I am doing wrong and how to deal with such a situation. The other guys in the team are actually very good people, and I do not want to leave the job either (although I could, if I wanted to). All I want is respect and equal treatment. I have thought about talking to this guy in a face-to-face meeting, but it worries me that his attitude might get worse and make things more difficult for me (since he doesn't seem to be the kind of guy who thinks he can be wrong too). I am also considering talking to the other co-founder, but I am not sure how he will take it (as both founders have been friends forever). Thanks for reading the long message; I really appreciate your help.

    Read the article

  • Trying to detect collision between two polygons using Separating Axis Theorem

    - by Holly
    The only collision experience I've had was with simple rectangles. I wanted to find something that would allow me to define polygonal areas for collision, and I have been trying to make sense of SAT using these two links. Though I'm a bit iffy with the math, for the most part I feel like I understand the theory! Except my implementation somewhere down the line must be off, as: (excuse the hideous font) As mentioned above, I have defined a CollisionPolygon class where most of my theory is implemented, and then have a helper class called Vect, which was meant to be for vectors but has also been used to contain a vertex, given that both just have two float values. I've tried stepping through the function and inspecting the values to solve things, but given so many axes and vectors and new math to work out as I go, I'm struggling to find the erroneous calculation(s) and would really appreciate any help. Apologies if this is not suitable as a question! CollisionPolygon.java: package biz.hireholly.gameplay; import android.graphics.Canvas; import android.graphics.Color; import android.graphics.Paint; import biz.hireholly.gameplay.Types.Vect; public class CollisionPolygon { Paint paint; private Vect[] vertices; private Vect[] separationAxes; CollisionPolygon(Vect[] vertices){ this.vertices = vertices; //compute edges and separations axes separationAxes = new Vect[vertices.length]; for (int i = 0; i < vertices.length; i++) { // get the current vertex Vect p1 = vertices[i]; // get the next vertex Vect p2 = vertices[i + 1 == vertices.length ? 0 : i + 1]; // subtract the two to get the edge vector Vect edge = p1.subtract(p2); // get either perpendicular vector Vect normal = edge.perp(); // the perp method is just (x, y) => (-y, x) or (y, -x) separationAxes[i] = normal; } paint = new Paint(); paint.setColor(Color.RED); } public void draw(Canvas c, int xPos, int yPos){ for (int i = 0; i < vertices.length; i++) { Vect v1 = vertices[i]; Vect v2 = vertices[i + 1 == vertices.length ? 0 : i + 1]; c.drawLine( xPos + v1.x, yPos + v1.y, xPos + v2.x, yPos + v2.y, paint); } } /* consider changing to a static function */ public boolean intersects(CollisionPolygon p){ // loop over this polygons separation exes for (Vect axis : separationAxes) { // project both shapes onto the axis Vect p1 = this.minMaxProjection(axis); Vect p2 = p.minMaxProjection(axis); // do the projections overlap? if (!p1.overlap(p2)) { // then we can guarantee that the shapes do not overlap return false; } } // loop over the other polygons separation axes Vect[] sepAxesOther = p.getSeparationAxes(); for (Vect axis : sepAxesOther) { // project both shapes onto the axis Vect p1 = this.minMaxProjection(axis); Vect p2 = p.minMaxProjection(axis); // do the projections overlap? if (!p1.overlap(p2)) { // then we can guarantee that the shapes do not overlap return false; } } // if we get here then we know that every axis had overlap on it // so we can guarantee an intersection return true; } /* Note projections wont actually be acurate if the axes aren't normalised * but that's not necessary since we just need a boolean return from our * intersects not a Minimum Translation Vector. 
*/ private Vect minMaxProjection(Vect axis) { float min = axis.dot(vertices[0]); float max = min; for (int i = 1; i < vertices.length; i++) { float p = axis.dot(vertices[i]); if (p < min) { min = p; } else if (p > max) { max = p; } } Vect minMaxProj = new Vect(min, max); return minMaxProj; } public Vect[] getSeparationAxes() { return separationAxes; } public Vect[] getVertices() { return vertices; } } Vect.java: package biz.hireholly.gameplay.Types; /* NOTE: Can also be used to hold vertices! Projections, coordinates ect */ public class Vect{ public float x; public float y; public Vect(float x, float y){ this.x = x; this.y = y; } public Vect perp() { return new Vect(-y, x); } public Vect subtract(Vect other) { return new Vect(x - other.x, y - other.y); } public boolean overlap(Vect other) { if( other.x <= y || other.y >= x){ return true; } return false; } /* used specifically for my SAT implementation which i'm figuring out as i go, * references for later.. * http://www.gamedev.net/page/resources/_/technical/game-programming/2d-rotated-rectangle-collision-r2604 * http://www.codezealot.org/archives/55 */ public float scalarDotProjection(Vect other) { //multiplier = dot product / length^2 float multiplier = dot(other) / (x*x + y*y); //to get the x/y of the projection vector multiply by x/y of axis float projX = multiplier * x; float projY = multiplier * y; //we want to return the dot product of the projection, it's meaningless but useful in our SAT case return dot(new Vect(projX,projY)); } public float dot(Vect other){ return (other.x*x + other.y*y); } }
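An editorial pointer for anyone debugging similar code: two places worth double-checking above are the overlap test and the fact that minMaxProjection projects the raw vertices without adding the polygon's world position. For projections stored as (min, max) intervals, the conventional overlap test is the following sketch (not a drop-in fix for the poster's code):

    // Sketch: two 1D intervals overlap iff each one's max reaches past the other's min.
    static boolean intervalsOverlap(float min1, float max1, float min2, float max2) {
        return max1 >= min2 && max2 >= min1;
    }

Translated to the Vect-as-interval convention above (x holding min, y holding max), that reads y >= other.x && other.y >= x.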

    Read the article

  • Unexpected advantage of Engineered Systems

    - by user12244672
    It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex systems used by customers wanting extreme database performance, app performance, and cost-saving server consolidation. A SPARC SuperCluster consists of 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme-performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general-purpose storage, and ultra-fast Infiniband switches, each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes; a modified version of Solaris 11 in the ZFS Storage Appliance; a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells; another Linux derivative in the Infiniband switches; etc. It has an Infiniband data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful. In another, complex. The system is highly Engineered. But it's designed to run general-purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, operating system versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience, as what the customer runs leverages our technical know-how and best practices and is what we've tested intensely within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or systems integrator had assembled such a complex system themselves from the constituent components. For example, there are myriad network protocols which could be used with Infiniband, myriad ways the components could be interconnected, myriad tunable settings, etc. But what has really surprised me - and I've been working in this area for 15 years now - is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities for sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with 3rd-party storage, issues with 3rd-party multi-pathing products, etc., are simply taken out of the equation. All those error opportunities for making an issue unique to a particular set-up, the "why aren't we seeing this on any other system?" type questions, the doubts, just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here are a couple of examples from the last month, one found in-house by my team, one found by a customer: Example 1: We found a node-eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (which mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up). We quickly established that an enhancement in SRU 12 enabled an 11gR2 process to query Infiniband's Subnet Manager, replacing a fallback mechanism it had used previously. Under abnormally heavy load, the query could return results which were misinterpreted, resulting in node eviction. 
In several daily joint debugging sessions between the Solaris, Infiniband, and 11gR2 teams, the issue was fully root-caused, evaluated, and a fix agreed upon. That fix went back into all Solaris releases the following Monday. From initial issue discovery to the fix being put back into all Solaris releases took just 10 days. Example 2: A customer reported sporadic performance degradation. The reasons were unclear and the information sparse. The SPARC SuperCluster Engineered Systems support team, which comprises both SPARC/Solaris and Database/Exadata experts, worked to root-cause the issue. A number of contributing factors were discovered, including tunable parameters. An intense collaborative investigation between the engineering teams identified the root cause: a CPU-bound networking thread which was being starved of CPU cycles under extreme load. Workarounds were identified. Modifications have been put back into 11gR2 to alleviate the issue, and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side. The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root-cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented. The customer is now extremely happy with performance and robustness. Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers. This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers. If this had occurred in a "home-grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue. But because it was an Engineered System, known, understood, and qualified by both the Solaris and Database teams, we were able to collaborate closely to identify cause and effect and expedite a solution for the customer. That is a key advantage of Engineered Systems which should not be underestimated. Indeed, the initial issue mitigation on the Database side, followed by the final fix on the Solaris side, highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams. It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.

    Read the article

  • NetBeans, JSF, and MySQL Primary Keys using AUTO_INCREMENT

    - by MarkH
    I recently had the opportunity to spin up a small web application using JSF and MySQL. Having developed JSF apps with Oracle Database back-ends before and possessing some small familiarity with MySQL (sans JSF), I thought this would be a cakewalk. Things did go pretty smoothly...but there was one little "gotcha" that took more time than the few seconds it really warranted. The Problem Every DBMS has its own way of automatically generating primary keys, and each has its pros and cons. For the Oracle Database, you use a sequence and point your Java classes to it using annotations that look something like this: @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="POC_ID_SEQ") @SequenceGenerator(name="POC_ID_SEQ", sequenceName="POC_ID_SEQ", allocationSize=1) Between creating the actual sequence in the database and making sure you have your annotations right (watch those typos!), it seems a bit cumbersome. But it typically "just works", without fuss. Enter MySQL. Designating an integer-based field as PRIMARY KEY and using the keyword AUTO_INCREMENT makes the same task seem much simpler. And it is, mostly. But while NetBeans cranks out a superb "first cut" for a basic JSF CRUD app, there are a couple of small things you'll need to bring to the mix in order to be able to actually (C)reate records. The (RUD) performs fine out of the gate. The Solution Omitting all design considerations and activity (!), here is the basic sequence of events I followed to create, then resolve, the JSF/MySQL "Primary Key Perfect Storm": Fire up NetBeans. Create JSF project. Create Entity Classes from Database. Create JSF Pages from Entity Classes. Test run. Try to create record and hit error. It's a simple fix, but one that was fun to find in its completeness. :-) Even though you've told it what to do for a primary key, a MySQL table requires a gentle nudge to actually generate that new key value. Two things are needed to make the magic happen. First, you need to ensure the following annotation is in place in your Java entity classes: @GeneratedValue(strategy = GenerationType.IDENTITY) All well and good, but the real key is this: in your controller class(es), you'll have a create() function that looks something like this, minus the comment line and the setId() call in bold red type:     public String create() {         try {             // Assign 0 to ID for MySQL to properly auto_increment the primary key.             current.setId(0);             getFacade().create(current);             JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("CategoryCreated"));             return prepareCreate();         } catch (Exception e) {             JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));             return null;         }     } Setting the current object's primary key attribute to zero (0) prior to saving it tells MySQL to get the next available value and assign it to that record's key field. Short and simple…but not inherently obvious if you've never used that particular combination of NetBeans/JSF/MySQL before. Hope this helps! All the best, Mark
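For reference, a minimal entity id declaration using this strategy might look like the following sketch (the class name, field type, and accessors are illustrative, not taken from the article's project):

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;

    @Entity
    public class Category {
        // IDENTITY delegates key generation to the MySQL AUTO_INCREMENT column.
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Integer id;

        public Integer getId() { return id; }
        public void setId(Integer id) { this.id = id; }
    }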

    Read the article

  • Error in my Separating Axis Theorem collision code

    - by Holly
    The only collision experience I've had was with simple rectangles. I wanted to find something that would allow me to define polygonal areas for collision, and I have been trying to make sense of SAT using these two links. Though I'm a bit iffy with the math, for the most part I feel like I understand the theory! Except my implementation somewhere down the line must be off, as: (excuse the hideous font) As mentioned above, I have defined a CollisionPolygon class where most of my theory is implemented, and then have a helper class called Vect, which was meant to be for vectors but has also been used to contain a vertex, given that both just have two float values. I've tried stepping through the function and inspecting the values to solve things, but given so many axes and vectors and new math to work out as I go, I'm struggling to find the erroneous calculation(s) and would really appreciate any help. Apologies if this is not suitable as a question! CollisionPolygon.java: package biz.hireholly.gameplay; import android.graphics.Canvas; import android.graphics.Color; import android.graphics.Paint; import biz.hireholly.gameplay.Types.Vect; public class CollisionPolygon { Paint paint; private Vect[] vertices; private Vect[] separationAxes; int x; int y; CollisionPolygon(Vect[] vertices){ this.vertices = vertices; //compute edges and separations axes separationAxes = new Vect[vertices.length]; for (int i = 0; i < vertices.length; i++) { // get the current vertex Vect p1 = vertices[i]; // get the next vertex Vect p2 = vertices[i + 1 == vertices.length ? 0 : i + 1]; // subtract the two to get the edge vector Vect edge = p1.subtract(p2); // get either perpendicular vector Vect normal = edge.perp(); // the perp method is just (x, y) => (-y, x) or (y, -x) separationAxes[i] = normal; } paint = new Paint(); paint.setColor(Color.RED); } public void draw(Canvas c, int xPos, int yPos){ for (int i = 0; i < vertices.length; i++) { Vect v1 = vertices[i]; Vect v2 = vertices[i + 1 == vertices.length ? 0 : i + 1]; c.drawLine( xPos + v1.x, yPos + v1.y, xPos + v2.x, yPos + v2.y, paint); } } public void update(int xPos, int yPos){ x = xPos; y = yPos; } /* consider changing to a static function */ public boolean intersects(CollisionPolygon p){ // loop over this polygons separation exes for (Vect axis : separationAxes) { // project both shapes onto the axis Vect p1 = this.minMaxProjection(axis); Vect p2 = p.minMaxProjection(axis); // do the projections overlap? if (!p1.overlap(p2)) { // then we can guarantee that the shapes do not overlap return false; } } // loop over the other polygons separation axes Vect[] sepAxesOther = p.getSeparationAxes(); for (Vect axis : sepAxesOther) { // project both shapes onto the axis Vect p1 = this.minMaxProjection(axis); Vect p2 = p.minMaxProjection(axis); // do the projections overlap? if (!p1.overlap(p2)) { // then we can guarantee that the shapes do not overlap return false; } } // if we get here then we know that every axis had overlap on it // so we can guarantee an intersection return true; } /* Note projections wont actually be acurate if the axes aren't normalised * but that's not necessary since we just need a boolean return from our * intersects not a Minimum Translation Vector. 
*/ private Vect minMaxProjection(Vect axis) { float min = axis.dot(new Vect(vertices[0].x+x, vertices[0].y+y)); float max = min; for (int i = 1; i < vertices.length; i++) { float p = axis.dot(new Vect(vertices[i].x+x, vertices[i].y+y)); if (p < min) { min = p; } else if (p > max) { max = p; } } Vect minMaxProj = new Vect(min, max); return minMaxProj; } public Vect[] getSeparationAxes() { return separationAxes; } public Vect[] getVertices() { return vertices; } } Vect.java: package biz.hireholly.gameplay.Types; /* NOTE: Can also be used to hold vertices! Projections, coordinates ect */ public class Vect{ public float x; public float y; public Vect(float x, float y){ this.x = x; this.y = y; } public Vect perp() { return new Vect(-y, x); } public Vect subtract(Vect other) { return new Vect(x - other.x, y - other.y); } public boolean overlap(Vect other) { if(y > other.x && other.y > x){ return true; } return false; } /* used specifically for my SAT implementation which i'm figuring out as i go, * references for later.. * http://www.gamedev.net/page/resources/_/technical/game-programming/2d-rotated-rectangle-collision-r2604 * http://www.codezealot.org/archives/55 */ public float scalarDotProjection(Vect other) { //multiplier = dot product / length^2 float multiplier = dot(other) / (x*x + y*y); //to get the x/y of the projection vector multiply by x/y of axis float projX = multiplier * x; float projY = multiplier * y; //we want to return the dot product of the projection, it's meaningless but useful in our SAT case return dot(new Vect(projX,projY)); } public float dot(Vect other){ return (other.x*x + other.y*y); } }
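An editorial suggestion for narrowing down where an SAT implementation goes wrong: a known-answer test. The sketch below uses the classes above as written (it has to run inside the Android project, since CollisionPolygon constructs an android.graphics.Paint), with two 10x10 squares that should intersect at offset (5,5) and not at (50,50):

    // Sketch: known-answer test for the intersects method above.
    Vect[] square = { new Vect(0, 0), new Vect(10, 0), new Vect(10, 10), new Vect(0, 10) };
    CollisionPolygon a = new CollisionPolygon(square);
    CollisionPolygon b = new CollisionPolygon(square);
    a.update(0, 0);
    b.update(5, 5);                 // overlapping placement
    boolean hit = a.intersects(b);  // expected: true
    b.update(50, 50);               // well separated
    boolean miss = a.intersects(b); // expected: false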

    Read the article

  • Add Widget via Action in Toolbar

    - by Geertjan
    The question of the day comes from Vadim, who asks on the NetBeans Platform mailing list: "Looking for example showing how to add Widget to Scene, e.g. by toolbar button click." Well, the solution is very similar to this blog entry, where you see a solution provided by Jesse Glick for VisiTrend in Boston: https://blogs.oracle.com/geertjan/entry/zoom_capability Other relevant articles to read are as follows: http://netbeans.dzone.com/news/which-netbeans-platform-action http://netbeans.dzone.com/how-to-make-context-sensitive-actions Let's go through it step by step, with this result in the end, a solution involving 4 classes split (optionally, since a central feature of the NetBeans Platform is modularity) across multiple modules: The Customer object has a "name" String and the Droppable capability has a method "doDrop" which takes a Customer object: public interface Droppable {    void doDrop(Customer c);} In the TopComponent, we use "TopComponent.associateLookup" to publish an instance of "Droppable", which creates a new LabelWidget and adds it to the Scene in the TopComponent. Here's the TopComponent constructor: public CustomerCanvasTopComponent() {    initComponents();    setName(Bundle.CTL_CustomerCanvasTopComponent());    setToolTipText(Bundle.HINT_CustomerCanvasTopComponent());    final Scene scene = new Scene();    final LayerWidget layerWidget = new LayerWidget(scene);    Droppable d = new Droppable(){        @Override        public void doDrop(Customer c) {            LabelWidget customerWidget = new LabelWidget(scene, c.getTitle());            customerWidget.getActions().addAction(ActionFactory.createMoveAction());            layerWidget.addChild(customerWidget);            scene.validate();        }    };    scene.addChild(layerWidget);    jScrollPane1.setViewportView(scene.createView());    associateLookup(Lookups.singleton(d));} The Action is displayed in the toolbar and is enabled only if a Droppable is currently in the Lookup: @ActionID(        category = "Tools",        id = "org.customer.controler.AddCustomerAction")@ActionRegistration(        iconBase = "org/customer/controler/icon.png",        displayName = "#AddCustomerAction")@ActionReferences({    @ActionReference(path = "Toolbars/File", position = 300)})@NbBundle.Messages("AddCustomerAction=Add Customer")public final class AddCustomerAction implements ActionListener {    private final Droppable context;    public AddCustomerAction(Droppable droppable) {        this.context = droppable;    }    @Override    public void actionPerformed(ActionEvent ev) {        NotifyDescriptor.InputLine inputLine = new NotifyDescriptor.InputLine("Name:", "Data Entry");        Object result = DialogDisplayer.getDefault().notify(inputLine);        if (result == NotifyDescriptor.OK_OPTION) {            Customer customer = new Customer(inputLine.getInputText());            context.doDrop(customer);        }    }} Therefore, when the Properties window, for example, is selected, the Action will be disabled. (See the Zoomable example referred to in the link above for another example of this.) As you can see above, when the Action is invoked, a Droppable must be available (otherwise the Action would not have been enabled). The Droppable is obtained in the Action and a new Customer object is passed to its "doDrop" method. 
The above in pictures; take note of the enablement of the toolbar button with the red dot, on the extreme left of the toolbar in the screenshots below: The above shows that the JButton is only enabled if the relevant TopComponent is active and that, when the Action is invoked, the user can enter a name, after which a new LabelWidget is created in the Scene. The source code of the above is here: http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/misc/WidgetCreationFromAction Note: Showing this as an MVC example is slightly misleading because, depending on which model object ("Customer" or "Droppable") you're looking at, the V and the C are different. From the point of view of "Customer", the TopComponent is the View, while the Action is the Controller, since it determines when the M is displayed. However, from the point of view of "Droppable", the TopComponent is the Controller, since it determines when the Action, which in this case is the View, displays the presence of the M.

    Read the article

  • NDepend Evaluation: Part 3

    - by Anthony Trudeau
    NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high code quality. NDepend uses a number of metrics and aggregates the data in pleasing static and active visual reports. My evaluation of NDepend will be broken up into several different parts. In the first part of the evaluation I looked at installing the add-in, and in the last part I went over my first impressions, including an overview of the features. In this installment I provide a little more detail on a few of the features that I really like.

    Dependency Matrix

    The dependency matrix is one of the rich visual components provided with NDepend. At a glance it lets you know where you have coupling problems, including cycles. It does this with a number indicating the weight of the dependency and a color-coding that indicates the nature of the dependency. Green and blue cells are direct dependencies (with the difference being whether the relationship is from row-to-column or column-to-row). Black cells are the ones that you really want to know about. These indicate that you have a cycle. That is, type A refers to type B and type B also refers to type A.

    But that's not the end of the story. A handy pop-up appears when you hover over the cell in question. It explains the color, the dependency, and provides several interesting links that will teach you more than you want to know about the dependency.

    You can double-click the problem cells to explode the dependency. That will show the dependencies on a method-by-method basis, allowing you to more easily target and fix the problem. When you're done, you can click the back button on the toolbar.

    Dependency Graph

    The dependency graph is another component provided. It's complementary to the dependency matrix, but it isn't as easy to identify dependency issues using this window. On a positive note, it does provide more information than the matrix.

    My biggest issue with the dependency graph is determining what is shown. This was not readily obvious. I ended up using the navigation buttons to get an acceptable view. I would have liked to choose what I see.

    Once you see the types you want, you can get a decent idea of coupling strength based on the width of the dependency lines. Double-arrowed lines are problematic and are shown in red. The size of the boxes is related to the metric being displayed. This is controlled using the Box Size drop-down in the toolbar. Personally, I don't find the size of the box to be helpful, so I change it to Constant Font.

    One nice thing about the display is that you can see the entire path of dependencies when you hover over a type. This is done by color-coding the dependencies and dependants. It would be nice if selecting the box for the type would lock the highlighting in place.

    I did find a perhaps unintended work-around to the color-coding. You can lock the color-coding in by hovering over the type, right-clicking, and then clicking on the canvas area to clear the pop-up menu. You can then do whatever you like with it, including saving it to an image file with the color-coding.
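    As an illustration (not part of the original review), the mutual dependency that shows up as a black cell in the matrix, or a double-arrowed red line in the graph, reduces to two types referring to each other. NDepend targets .NET assemblies, but the shape of a cycle is language-agnostic; here it is sketched in Java with hypothetical class names:

        // Hypothetical example of a type-level dependency cycle:
        // OrderService depends on CustomerService, and CustomerService
        // depends right back on OrderService.
        class OrderService {
            CustomerService customers;   // A refers to B

            void placeOrder(String customerId) {
                customers.recordPurchase(customerId);
            }
        }

        class CustomerService {
            OrderService orders;         // B refers back to A: a cycle

            void recordPurchase(String customerId) {
                // bookkeeping elided
            }

            void reorderLastPurchase(String customerId) {
                orders.placeOrder(customerId);
            }
        }

    Breaking such a cycle usually means extracting an interface or moving the shared behavior into a third type that both can depend on.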
    CQL

    NDepend uses a code query language (CQL) to work with your code just as if it were a database. CQL can't match the robustness of T-SQL or even LINQ, but it represents an impressive attempt at providing an expressive way to enumerate and interrogate your code.

    There are two main windows you'll use when working with CQL. The CQL Query Explorer allows you to define which queries (rules) are run as part of a report; I immediately unselected rules that I don't want in my results. The CQL Query Edit window is where you can view or author your own rules. The explorer window is pretty self-explanatory, so I won't mention it further, other than to say that any queries you author will appear in the custom group.

    Authoring your own queries is really hard to screw up. The IntelliSense-like pop-ups tell you what you can do while making composition easy. I was able to create a query within two minutes of playing with the editor. My query warns if any types that are interfaces don't start with an "I":

        WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    The results from the CQL Query Edit window are immediate, which makes it useful for ad hoc querying. It's worth mentioning two things that could make the experience smoother. First, out of habit from using Visual Studio, I expect to be able to scroll and press Tab to select an item in the list (like IntelliSense); instead, you have to press Enter when you scroll to the item you want. Second, the commands are case-sensitive; I don't see a really good reason to enforce that.

    CQL has a lot of potential, not just in enforcing code quality, but also in enforcing architectural constraints that your enterprise has defined.

    Up Next

    My next update will be the final part of the evaluation. I will summarize my experience and provide my conclusions on the NDepend add-in.

    ** View Part 1 of the Evaluation **
    ** View Part 2 of the Evaluation **

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Read the article

  • What Counts For a DBA: Fitness

    - by Louis Davidson
    If you know me, you can probably guess that physical exercise is not really my thing. There was a time in my past when it was a larger part of my life, but even then never in the same sort of passionate way as for a number of our SQL friends. For me, mental exercise satisfies what I believe to be the same inner need that drives people to run farther than I like to drive on most Saturday mornings, and it is certainly just as addictive.

    Mental fitness shares many common traits with physical fitness, especially the need to attain it through repetitive training. I only wish that mental training burned off a bacon cheeseburger the way jogging around a dewy park on a Saturday morning does.

    In physical training, there are at least two goals: the first is to be physically able to do a task; the second is to train the brain to perform the task without thinking too hard about it. No matter how long it has been since you last rode a bike, you will almost certainly be able to hop on and start riding without thinking about the process of pedaling or balancing. If you've never ridden a bike, you could be a physics professor and Olympic athlete and still crash the first few times you try, even though you are as strong as an ox and your knowledge of the physics of bicycle riding makes the concept child's play.

    For programming tasks, the process is very similar. As a DBA, you will come to know intuitively how to back up, optimize, and secure database systems. As a data programmer, you will work to instinctively use the clauses of Transact-SQL DML so that, when you need to group data three ways (and not four), you will know to use the GROUP BY clause with GROUPING SETS without resorting to a search engine. Having the knowledge is only the start; making the skill natural requires repetition, and experience is the primary requirement, not simply learning about a topic. The hardest part of being really good at something is this difference between knowledge and skill.

    I have recently taken several informative training classes with Kimball University on data warehousing and ETL, so I now have a lot more knowledge about designing data warehouses than before. I have also done a good bit of data warehouse designing of late and have started to reach some level of proficiency with the theory. Yet, for all of this head knowledge, it is still a struggle to take what I have learned and apply it to the designs I am working on. Data warehousing is a task that is not yet deeply ingrained in my brain's muscle memory.

    On the other hand, relational database design is something I am comfortable doing no matter how much or how little I get to do it. I have done it professionally for well over a decade, I teach classes on it, and I also do a lot of mental training beyond the work day. Sometimes the training is just basic education, such as reading blogs and attending sessions at PASS events. My best training comes from spending time working on other people's design issues in forums (though not nearly as much as I would like to lately). Working through other people's problems is a great way to exercise your brain on problems with which you're not immediately familiar.

    The final bit of exercise I find useful for cultivating mental fitness as a data professional is also probably the nerdiest thing that I will ever suggest you do. Akin to running in place, the idea is to work through designs in your head.
    I have designed more than one database system that would revolutionize grocery store operations, sales at my local Target store, the ordering process at Amazon, and ways to improve Disney World operations to get me through a line faster (some of which they are starting to implement without any of my help). The designs are never truly fleshed out, but enough to work through structures and processes. On "paper", I have designed database systems to catalog things as trivial as my Lego creations, rental car companies, and my audio and video collections.

    Once I get a database designed mentally, sometimes I will create the database, add some data (often using Red Gate's Data Generator), and write a few queries to see if the concept was realistic, but I will rarely fully flesh out the database, since I have no desire to do any user interface programming anymore. The mental training allows me to keep in practice for when the time comes to do the work I love the most for real... even if I have been spending most of my work time lately building data warehouses.

    If you are really strong of mind and body, perhaps you can mix a mental run with a physical run; though don't run off a cliff while contemplating how you might design a database to catalog the trees on a mountain. That would be contradictory to the purpose of both types of exercise.

    Read the article

  • Constricted A* problem

    - by Ragekit
    I've got a little problem with an A* algorithm that I need to constrict a little bit. Basically: I use A* to find the shortest path between two randomly placed rooms in 3D space, and then build a corridor between them. The problem I found is that sometimes it makes chimney-like corridors that are not ideal, so I constrict the A* so that if the last movement was up or down, you must go sideways next. Everything is fine, but in some corner cases it fails to find a path (when there obviously is one), like here between the blue and red dot. (I'm in Unity, by the way, but I don't think that matters.) Here is the code of the actual A* (a bit long, and with some redundancy):

        while (current != goal)
        {
            // add stair up / stair down
            foreach (Node<GridUnit> test in current.Neighbors)
            {
                if (!test.Data.empty && test != goal)
                    continue;

                // bug at arrival
                if (test == goal && penul != null)
                {
                    Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center;
                    if (!Mathf.Approximately(currentDiff.y, 0))
                    {
                        // wanna drop on the last
                        if (!coplanar(test.Data.bounds.center, current.Data.bounds.center,
                                      current.Data.parentUnit.bounds.center, to.Data.bounds.center))
                        {
                            continue;
                        }
                        else
                        {
                            if (Mathf.Approximately(to.Data.bounds.center.x, current.Data.parentUnit.bounds.center.x)
                                && Mathf.Approximately(to.Data.bounds.center.z, current.Data.parentUnit.bounds.center.z))
                            {
                                continue;
                            }
                        }
                    }
                }

                if (current.Data.parentUnit != null)
                {
                    Vector3 previousDiff = current.Data.parentUnit.bounds.center - current.Data.bounds.center;
                    Vector3 currentDiff = current.Data.bounds.center - test.Data.bounds.center;
                    if (!Mathf.Approximately(previousDiff.y, 0))
                    {
                        if (!Mathf.Approximately(currentDiff.y, 0))
                        {
                            // you wanna drop now:
                            continue;
                        }
                        if (current.Data.parentUnit.parentUnit != null)
                        {
                            if (!coplanar(test.Data.bounds.center, current.Data.bounds.center,
                                          current.Data.parentUnit.bounds.center,
                                          current.Data.parentUnit.parentUnit.bounds.center))
                            {
                                continue;
                            }
                            else
                            {
                                if (Mathf.Approximately(test.Data.bounds.center.x, current.Data.parentUnit.parentUnit.bounds.center.x)
                                    && Mathf.Approximately(test.Data.bounds.center.z, current.Data.parentUnit.parentUnit.bounds.center.z))
                                {
                                    continue;
                                }
                            }
                        }
                    }
                }

                g = current.Data.g + HEURISTIC(current.Data, test.Data);
                h = HEURISTIC(test.Data, goal.Data);
                f = g + h;
                if (open.Contains(test) || closed.Contains(test))
                {
                    if (test.Data.f > f)
                    {
                        // found a shorter path passing through that point
                        test.Data.f = f;
                        test.Data.g = g;
                        test.Data.h = h;
                        test.Data.parentUnit = current.Data;
                    }
                }
                else
                {
                    // never encountered before
                    test.Data.f = f;
                    test.Data.h = h;
                    test.Data.g = g;
                    test.Data.parentUnit = current.Data;
                    open.Add(test);
                }
            }
            closed.Add(current);

            if (open.Count == 0)
            {
                Debug.Log("nothingfound");
                // nothing more to test, no path found; stay at "from"
                List<GridUnit> r = new List<GridUnit>();
                r.Add(from.Data);
                return r;
            }

            // sort open from smallest to biggest travel cost
            open.Sort(delegate(Node<GridUnit> x, Node<GridUnit> y) {
                return (int)(x.Data.f - y.Data.f);
            });

            // get the smallest travel cost node
            Node<GridUnit> smallest = open[0];
            current = smallest;
            open.RemoveAt(0);
        }

        // build the path going backward
        List<GridUnit> ret = new List<GridUnit>();
        if (penul != null)
        {
            ret.Insert(0, to.Data);
        }
        GridUnit cur = goal.Data;
        ret.Insert(0, cur);
        do
        {
            cur = cur.parentUnit;
            ret.Insert(0, cur);
        } while (cur != from.Data);
        return ret;

    As you can see, at the start of the foreach I constrict the A* as described. If you have any insight, it would be cool. Thanks
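    One common cause of exactly these corner-case failures, offered as a hedged sketch rather than a verified fix for the code above: when the legality of a move depends on the previous move, keying the open and closed sets by cell alone makes the search incomplete. A cell first reached by a vertical move gets closed, and a later, legal sideways arrival at the same cell is then discarded, hiding a valid corridor. The usual remedy is to fold the incoming move class into the search state itself. A minimal sketch in Java, with all names hypothetical:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Objects;

        // Hypothetical sketch: key A* states by (cell, cameFromVertical) rather
        // than by cell alone, so "no vertical move after a vertical move" can be
        // enforced without pruning cells that are still legally reachable sideways.
        final class SearchState {
            final int x, y, z;               // grid cell coordinates
            final boolean cameFromVertical;  // was the move into this cell up/down?

            SearchState(int x, int y, int z, boolean cameFromVertical) {
                this.x = x;
                this.y = y;
                this.z = z;
                this.cameFromVertical = cameFromVertical;
            }

            @Override
            public boolean equals(Object o) {
                if (!(o instanceof SearchState)) {
                    return false;
                }
                SearchState s = (SearchState) o;
                return x == s.x && y == s.y && z == s.z
                        && cameFromVertical == s.cameFromVertical;
            }

            @Override
            public int hashCode() {
                return Objects.hash(x, y, z, cameFromVertical);
            }

            // Successor generation enforces the constraint: vertical moves are
            // offered only when the previous move was sideways. The same cell can
            // therefore appear twice in the open/closed sets, once per arrival class.
            List<SearchState> successors() {
                List<SearchState> result = new ArrayList<>();
                int[][] sideways = {{1, 0, 0}, {-1, 0, 0}, {0, 0, 1}, {0, 0, -1}};
                for (int[] d : sideways) {
                    result.add(new SearchState(x + d[0], y + d[1], z + d[2], false));
                }
                if (!cameFromVertical) {
                    result.add(new SearchState(x, y + 1, z, true));   // stair up
                    result.add(new SearchState(x, y - 1, z, true));   // stair down
                }
                return result;
            }
        }

    The goal test then accepts any state whose cell matches the target, regardless of the flag, and the open and closed sets are keyed by SearchState rather than by cell, so a cell closed after a vertical arrival no longer blocks a later, legal sideways arrival.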

    Read the article

< Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >