Search Results

Search found 12141 results on 486 pages for 'basic skills'.


  • About Solaris 11 and UltraSPARC II/III/IV/IV+

    - by nospam(at)example.com (Joerg Moellenkamp)
    I know that I will get the usual amount of comments like "Oh, Jörg, you can't be negative about Oracle" for this article. However, as usual, I want to explain the logic behind my reasoning. Yes, I know that there is a lot of UltraSPARC III, IV and IV+ gear out there. But there are some very basic questions: Does the application you are currently running on this gear stop running just because you can't run Solaris 11 on it? What is the need to upgrade a system already in production to Solaris 11? I have the impression that some people think the systems become useless the moment Oracle releases Solaris 11.

    I know that Sun sold UltraSPARC IV+ systems until 2009. The Sun SF490, introduced in 2004 for example, was a Sun SF480 with UltraSPARC IV and later with UltraSPARC IV+. And yes, Sun made some speedbumps. At that time the systems of the UltraSPARC III to IV+ generations were supported on Solaris 8, Solaris 9 and Solaris 10. However, from my perspective we sold them to customers who weren't able to migrate to Solaris 10 because they used applications not supported on Solaris 9, or who just didn't want to migrate to Solaris 10. Believe it or not, I personally know two customers that migrated core systems to Solaris 10 in, well, 2008/9. This was especially true when the M3000 was announced in 2008, closing the darned single-socket gap. It may be different at your site; however, that's what I remember about that time from talking with customers.

    At first: just because there is no Solaris 11 for UltraSPARC III, IV and IV+, it doesn't mean that Solaris 10 will go away anytime soon. I just want to point you to "Expect Lifetime Support - Hardware and Operating Systems". It states about Premier Support: "Maintenance and software upgrades are included for Oracle operating systems and Oracle VM for a minimum of eight years from the general availability date." GA for Solaris 10 was in 2005. Plus 8 years: 2013, at minimum. Then you can still opt for 3 years of "Extended Support": 2016, at minimum. In 2016, systems purchased in 2009 are 7 years old, even systems purchased at the very end of the lifetime of that system generation. Those are the rules as written in the linked document. And I said minimum: the actual dates are even further in the future. Premier Support for Solaris 10 ends in 2015, Extended Support ends in 2018, and Sustaining Support is indefinite. You will find this in the document "Oracle Lifetime Support Policy: Oracle Hardware and Operating Systems". So I don't understand when some people write that Oracle is less protective of hardware investments than Sun. And for hardware it's the same as with Sun: service 5 years after EOL as part of Premier Support.

    I would like to offer a different perspective as well. I have to be a little cautious here, because this is going into roadmap territory, so I will only refer to the public sources: John Fowler said last year that we should expect at least 3x the single-thread performance of T3 in T4. We have 8 cores in T4, as stated by Rick Hetherington. Let's assume for a moment that a T4 core will have the performance of an UltraSPARC core (just to simplify the math, not to disclose anything about performance; all existing SPARC cores are considered equal). Given these pieces of information, you could consolidate 8 V215, 4 or 8 V245, 2 full-blown V445, 2 full-blown V490, or 2 full-blown M3000 onto a single T4 SPARC processor. The Fowler roadmap presentation talked about 4-socket systems with T4. So: 32 V215, 16 or 8 V245, 8 full-blown V445, 8 full-blown V490, or 8 full-blown M3000 in a single system image. I think you get the idea.

    That said, most of the systems we are talking about are already amortized, and perhaps it's just time to invest in new systems to gain other advantages: reduced space consumption, reduced power consumption, some of the neat features sun4v gives you, and yes, a reduced number of processor licenses for Oracle and less money for Oracle HW/SW support. As much as I dislike it myself that my own UltraSPARC III and UltraSPARC II based systems won't run Solaris 11 (and I have quite a few of them in my personal lab), I really think that the impact on production environments will be much smaller than most people now expect. By the way: the reason for this move is a quite significant new feature. I will tell you that it was this feature, when it's out. I assume that telling you even one word more could lead to me having much more time to blog.


  • Using Native Drag and Drop in HTML 5 pages

    - by nikolaosk
    This is going to be the eighth post in a series of posts regarding HTML 5. You can find the other posts here, here, here, here, here, here and here. In this post I will show you how to implement drag and drop functionality in an HTML 5 page using jQuery. This is a great feature, and we no longer need to resort to plugins like Silverlight or Flash to achieve it. This is also called the native approach to drag and drop. I will use some events, and I will write code to respond when these events are fired.

    As I said earlier, we need to write JavaScript to implement the drag and drop functionality. I will use the very popular jQuery library. Please download the library (minified version) from http://jquery.com/download

    I will create a simple HTML page. There will be two thumbnail pics on it, along with the drag and drop area; the user will drag the thumb pics into it and they will be shown at their actual size. The HTML markup for the page follows:

        <!doctype html>
        <html lang="en">
        <head>
          <title>Liverpool Legends Gallery</title>
          <meta charset="utf-8">
          <link rel="stylesheet" type="text/css" href="style.css">
          <script type="text/javascript" charset="utf-8" src="jquery-1.8.1.min.js"></script>
          <script language="JavaScript" src="drag.js"></script>
        </head>
        <body>
          <header>
            <h1>A page dedicated to Liverpool Legends</h1>
            <h2>Drag and Drop the thumb image in the designated area to see the full image</h2>
          </header>
          <div id="main">
            <img src="thumbs/steven-gerrard.jpg" big="large-images/steven-gerrard-large.jpg" alt="Steven Gerrard">
            <img src="thumbs/robbie-fowler.jpg" big="large-images/robbie-fowler-large.jpg" alt="Robbie Fowler">
            <div id="drag">
              <p>Drop your image here</p>
            </div>
          </div>
        </body>
        </html>

    There is nothing difficult or fancy in the HTML markup above. I have a link to the external jQuery library and to another JavaScript file (drag.js) in which I will implement the whole drag and drop functionality. The code for the css file (style.css) follows:

        #main {
          float: left;
          width: 340px;
          margin-right: 30px;
        }

        #drag {
          float: left;
          width: 400px;
          height: 300px;
          background-color: #c0c0c0;
        }

    These are simple CSS rules. This post cannot be a tutorial on CSS; for all these posts I assume that you have basic HTML, CSS and JavaScript skills.

    Now I am going to create a JavaScript file (drag.js) to implement the drag and drop functionality. I will provide the whole code for the drag.js file and then I will explain what I am doing in each step.

        $(function() {
          var players = $('#main img');
          players.attr('draggable', 'true');

          players.bind('dragstart', function(event) {
            var data = event.originalEvent.dataTransfer;
            var src = $(this).attr("big");
            data.setData("Text", src);
            return true;
          });

          var target = $('#drag');

          target.bind('drop', function(event) {
            var data = event.originalEvent.dataTransfer;
            var src = data.getData('Text');
            var img = $("<img></img>").attr("src", src);
            $(this).html(img);
            if (event.preventDefault) event.preventDefault();
            return false;
          });

          target.bind('dragover', function(event) {
            if (event.preventDefault) event.preventDefault();
            return false;
          });

          players.bind('dragend', function(event) {
            if (event.preventDefault) event.preventDefault();
            return false;
          });
        });

    In these lines:

        var players = $('#main img');
        players.attr('draggable', 'true');

    we grab all the images in the #main div, store them in a variable and then make them draggable. Then in the following lines I am using the dragstart event:

        players.bind('dragstart', function(event) {
          var data = event.originalEvent.dataTransfer;
          var src = $(this).attr("big");
          data.setData("Text", src);
          return true;
        });

    In this event I am associating the custom data attribute value (big) with the item I am dragging. Then I create a variable to get hold of the dropping area:

        var target = $('#drag');

    Then in the following lines I implement the drop event and what happens when the user drops the image in the designated area on the page:

        target.bind('drop', function(event) {
          var data = event.originalEvent.dataTransfer;
          var src = data.getData('Text');
          var img = $("<img></img>").attr("src", src);
          $(this).html(img);
          if (event.preventDefault) event.preventDefault();
          return false;
        });

    The dragend event is fired when the user has finished the drag operation:

        players.bind('dragend', function(event) {
          if (event.preventDefault) event.preventDefault();
          return false;
        });

    When event.preventDefault() is called, the default action of the event will not be triggered. Please have a look at the picture below to see how the page looks before the drag and drop takes place. Then I simply drag and drop a picture into the dropping area; have a look at the picture below. It works!!! Hope it helps!!


  • Yippy – the F# MVVM Pattern

    - by MarkPearl
    I did a recent post on implementing WPF with F#. Today I would like to expand on that posting to give a simple implementation of the MVVM pattern in F#. A good read about this topic can also be found on Dean Chalk's blog, although my example of the pattern is possibly simpler. With the MVVM pattern one typically has three segments: the view, the viewmodel and the model. With the beauty of WPF binding one is able to link the state-based viewmodel to the view. In my implementation I have kept the same principles. I have a view (MainView.xaml) and a ViewModel (MainViewModel.fs).

    What I would really like to illustrate in this posting is the binding between the View and the ViewModel, so I am going to jump to that… In Program.fs I have the following code…

        module Program

        open System
        open System.Windows
        open System.Windows.Controls
        open System.Windows.Markup
        open myViewModels

        // Create the View and bind it to the View Model
        let myView =
            Application.LoadComponent(
                new System.Uri("/FSharpWPF;component/MainView.xaml", System.UriKind.Relative)) :?> Window
        myView.DataContext <- new MainViewModel() :> obj

        // Application entry point
        [<STAThread>]
        [<EntryPoint>]
        let main(_) = (new Application()).Run(myView)

    You can see that I have simply created the view (myView), created an instance of my viewmodel (MainViewModel) and then bound it to the data context with the code…

        myView.DataContext <- new MainViewModel() :> obj

    If I have a look at my viewmodel (MainViewModel) it looks like this…

        module myViewModels

        open System
        open System.Windows
        open System.Windows.Input
        open System.ComponentModel
        open ViewModelBase

        type MainViewModel() =
            // private variables
            let mutable _title = "Bound Data to Textbox"

            // public properties
            member x.Title
                with get() = _title
                and set(v) = _title <- v

            // public commands
            member x.MyCommand =
                new FuncCommand(
                    (fun d -> true),
                    (fun e -> x.ShowMessage))

            // public methods
            member public x.ShowMessage =
                let msg = MessageBox.Show(x.Title)
                ()

    I have exposed a few things: a property called Title that is mutable, a command, and a method called ShowMessage that simply pops up a message box when called. If I then look at my view, which I have created in XAML (MainView.xaml), it looks as follows…

        <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="F# WPF MVVM" Height="350" Width="525">
          <Grid>
            <Grid.RowDefinitions>
              <RowDefinition Height="Auto"/>
              <RowDefinition Height="Auto"/>
              <RowDefinition Height="*"/>
            </Grid.RowDefinitions>
            <TextBox Text="{Binding Path=Title, Mode=TwoWay}" Grid.Row="0"/>
            <Button Command="{Binding MyCommand}" Grid.Row="1">
              <TextBlock Text="Click Me"/>
            </Button>
          </Grid>
        </Window>

    It is also very simple. It has a button whose Command is bound to MyCommand and a textbox whose Text is bound to the Title property. One other module that I have created is my ViewModelBase. Right now it is used to store my commanding function, but I would look to expand on it at a later stage to implement other commonly used functions…

        module ViewModelBase

        open System
        open System.Windows
        open System.Windows.Input
        open System.ComponentModel

        type FuncCommand (canExec:(obj -> bool), doExec:(obj -> unit)) =
            let cecEvent = new DelegateEvent<EventHandler>()
            interface ICommand with
                [<CLIEvent>]
                member x.CanExecuteChanged = cecEvent.Publish
                member x.CanExecute arg = canExec(arg)
                member x.Execute arg = doExec(arg)

    Put this all together and you have a basic project that implements the MVVM pattern in F#. For me this is quite exciting, as it turned out to be a lot simpler to do than I originally thought possible. Also, because I have my view in XAML, I can use the XAML designer to design forms in F#, which I believe is a much cleaner way to go than implementing it all in code. Finally, if I look at my viewmodel code, it is actually quite clean and compact…


  • CodePlex Daily Summary for Tuesday, July 30, 2013

    Popular Releases

    • nopCommerce. Open source shopping cart (ASP.NET MVC): nopCommerce 3.10: Highlight features & improvements: performance optimization; new, more user-friendly product/product-variant logic (now there are only products, simple and grouped); bundle products support added; a store owner can now associate a product image with product variant attribute values. To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).
    • Small Tools: Helpers 1.01: Fix params count issue. Fix STAThread issue. Add support for exe.config files.
    • ExtJS based ASP.NET Controls: FineUI v3.3.1: FineUI is an ExtJS-based ASP.NET control library: no JavaScript, no CSS, no UpdatePanel, no ViewState, no WebServices. Supports IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+. Licensed under Apache License v2.0 (ExtJS itself is GPL v3: http://www.sencha.com/license). Forum: http://fineui.com/bbs/; demo: http://fineui.com/demo/; docs: http://fineui.com/doc/; project: http://fineui.codeplex.com/
    • AutoNLayered - Domain Oriented N-Layered .NET 4.5: AutoNLayered v1.0.5: Fix DTOs (abstract collections replaced by concrete ones for correct WCF serialization). OrderBy in navigation properties. Unit tests with Fakes. Mapping of entities/DTOs moved to application services. Libraries updated. Warning when using Fakes: http://connect.microsoft.com/VisualStudio/feedback/details/782031/visual-studio-2012-add-fakes-assembly-does-not-add-all-needed-references
    • Path Copy Copy: 11.1: Minor release with two new features: the submenu's contextual menu item now has an icon next to it, and a reference to the JavaScript regular expression format was added in the Settings application. Since this release does not have any glaring bug fixes, it is more of an optional update for existing users. It depends on whether you want to be able to spot the Path Copy Copy submenu more easily. I recommend you install it to see if the icon makes sense. As always, please don't hesitate to leave feedback via Discus...
    • CMake Tools for Visual Studio: CMake Tools for Visual Studio 1.0 RC3: This is the third release candidate of CMake Tools for Visual Studio 1.0, which contains the following bug fixes: opening a CMake file from Windows Explorer while Visual Studio is already open will not start a new instance of Visual Studio; typing a symbol while the IntelliSense list box is visible and the text typed so far does not match any item in the list will dismiss the list box and insert the symbol typed.
    • R.NET: R.NET 1.5: The major changes in v1.5 are: the Initialize method must be called before using R (settings should be passed to the method); the EagerEvaluate method was renamed to Evaluate (use the Defer method when you want the old version of Evaluate).
    • Media Companion: Media Companion MC3.574b: Some good bug fixes have been going on with the new XBMC-Link function. Thanks to all who were able to do testing and gave feedback. New: added some ad-hoc extra General movie filters, one of which is Plot = Outline (see fixes above). To see the filters, add the following line to your config.xml: <ShowExtraMovieFilters>True</ShowExtraMovieFilters>. The others are: Imdb in folder name, Imdb not in folder name & Imdb in folder name & year mismatch. Movie: display <tag> list on browser tab...
    • OfflineBrowser: Preview Release with Search: I've added search to this release.
    • VG-Ripper & PG-Ripper: VG-Ripper 2.9.46: Changes: fixed login.
    • Math.NET Numerics: Math.NET Numerics v2.6.0: What's new in Math.NET Numerics 2.6: announcement, explanations and sample code. New: linear curve fitting, with linear least-squares fitting (regression) to lines, polynomials and linear combinations of arbitrary functions, plus multi-dimensional fitting; also works well in F# with the F# extensions. New: root finding, with Brent's method (~Candy Chiu, Alexander Täschner), the bisection method (~Scott Stephens, Alexander Täschner), and Broyden's method for multi-dimensional functions (~Alexander Täschner)...
    • AJAX Control Toolkit: July 2013 Release: AJAX Control Toolkit Release Notes, July 2013 release, version 7.0725. AJAX Control Toolkit .NET 4.5: toolkit for .NET 4.5 and sample site (recommended). AJAX Control Toolkit .NET 4: toolkit for .NET 4 and sample site (recommended). AJAX Control Toolkit .NET 3.5: toolkit for .NET 3.5 and sample site (recommended). Notes: instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can be found at...
    • MJP's DirectX 11 Samples: Specular Antialiasing Sample: Sample code to complement my presentation that's part of the Physically Based Shading in Theory and Practice course at SIGGRAPH 2013, entitled "Crafting a Next-Gen Material Pipeline for The Order: 1886". Demonstrates various methods of preventing aliasing from specular BRDFs when using high-frequency normal maps. The zip file contains source code as well as a pre-compiled x64 binary.
    • DataPie (Excel import/export for MSSQL 2008, ORACLE, ACCESS 2007): DataPieV3.6.1: CSV and SQL import/export support.
    • Qibla Compass for Windows Phone: Qibla Compass for Windows Phone: This release is in open beta. You can always download it and provide your feedback. Since it was just developed to give users an idea of the Qibla direction and its mapping, you might not see major releases in future.
    • Event Scavenger: Version 5: I've decided to do a full (recommended) release of version 5. I've been running it myself for months and have not had any issues with it yet. This release just contains the installs. The web site's documentation has not been updated yet and reflects the previous version's details. If you have an issue with this version then you can happily switch back to 4.x. Version 5 can run side-by-side with earlier versions (service) as it has a new service and database.
    • wpadk: WPadk_WP8: V1.1, a Windows Phone 8 release.
    • StockSharp: StockSharp 4.1.16: Announcement: http://stocksharp.com/forum/yaf_postsm28239_S--API-4-1.aspx#post28239
    • GeoTransformer: GeoTransformer 4.5: Extensions can now be installed and uninstalled from the application. The extensions update the same way as the application: silently and automatically. Added the ability to search for caches by pressing CTRL+F in the table views (thanks to JanisU for implementing this request). Added the ability to remove edited customizations for multiple caches at once (use SHIFT or CTRL to select multiple lines in the table). A new experimental version for Windows 8 RT (on ARM processors) is also made availa...
    • Kartris E-commerce: Kartris v2.5003: This fixes an issue where search engines appear to identify as IE and so trigger the noIE page if a non-responsive skin is not specified.

    New Projects

    • Bus Booking System: Bus booking system.
    • Cotizav 2.0: This project provides support for quotations.
    • DeferredShading: A deferred shading renderer.
    • IVR Junction: IVR Junction connects an Interactive Voice Response (IVR) system to cloud services such as YouTube, Facebook and other social media.
    • Mac Address Changer: A quick and easy tool to change your MAC address.
    • motokraft user control: A user control for motokraft.
    • Single Reference JavaScript Pattern: A very simple pattern in which you only need to reference one script in a page, saving development time as well as maintenance time.
    • Social_Life_Time: A social network where people can communicate with each other.
    • The Ironic Text Based MMORPG: Modern MMORPGs have become highly interactive, complex systems of skills, stats, and action combat. This game introduces a new level of text-based immersion!
    • Timeline Year Control: An ASP.NET year indicator timeline control.
    • winrtsock: A winsock façade for Windows Runtime, for porting BSD socket code to Windows Runtime.
    • Zker: No summary.


  • Session Report - Java on the Raspberry Pi

    - by Janice J. Heiss
    At mid-day Wednesday, the always colorful Oracle evangelist Simon Ritter demonstrated Java on the Raspberry Pi at his session, "Do You Like Coffee with Your Dessert?". The Raspberry Pi is a credit-card-sized single-board computer developed in the UK with the intention of stimulating the teaching of basic computer science in schools. "I don't think there is a single feature that makes the Raspberry Pi significant," observed Ritter, "but a combination of things really makes it stand out. First, it's $35 for what is effectively a completely usable computer. You do have to add a power supply, SD card for storage and maybe a screen, keyboard and mouse, but this is still way cheaper than a typical PC. The choice of an ARM (Advanced RISC Machine, originally Acorn RISC Machine) processor is noteworthy, because it avoids problems like cooling (no heat sink or fan) and can use a USB power brick. When you add in the enormous community support, it offers a great platform for teaching everyone about computing."

    Some 200 enthusiastic attendees were present at the session, which had the feel of Simon Ritter sharing a fun toy with friends. The main point of the session was to show what Oracle was doing to support Java on the Raspberry Pi in a way that is entertaining and fun. Ritter pointed out that, in addition to being great for teaching, the Raspberry Pi is an excellent introduction to the ARM architecture; it runs Java well and will run it better once it has official hard-float support. The possibilities are vast.

    Ritter explained that the Raspberry Pi project started in 2006 with the goal of devising a computer to inspire children; it drew inspiration from the BBC Micro literacy project of 1981, which produced a series of microcomputers created by the Acorn Computer company. It was officially launched on February 29, 2012, with a first production run of 10,000 boards. There were 100,000 pre-orders in one day; currently about 4,000 boards are produced a day.

    Ritter described the specification as follows:
    * CPU: ARM11 core running at 700MHz, Broadcom SoC package; can now be overclocked to 1GHz (without breaking the warranty!)
    * Memory: 256Mb
    * I/O: HDMI and composite video; 2 x USB ports (Model B only); Ethernet (Model B only); header pins for GPIO, UART, SPI and I2C

    He took attendees through a brief history of the ARM architecture:
    * Acorn BBC Micro (6502 based): not powerful enough for Acorn's plans for a business computer
    * Berkeley RISC Project: the UNIX kernel only used 30% of the instruction set of the Motorola 68000; more registers, fewer instructions (register windows); one chip architecture to come from this was… SPARC
    * Acorn RISC Machine (ARM): 32-bit data, 26-bit address space, 27 registers; the first machine was the Acorn Archimedes
    * Advanced RISC Machines: a spin-off from Acorn

    Next he presented its features:
    * 32-bit RISC architecture: ARM accounts for 75% of embedded 32-bit CPUs today; 6.1 billion chips were sold last year (zero manufactured by ARM)
    * Abstract architecture and microprocessor core designs: the Raspberry Pi is ARM11 using the ARMv6 instruction set
    * Low power consumption: good for mobile devices; the Raspberry Pi can be powered from a 700mA 5V-only PSU and does not require a heatsink or fan

    He described the current ARM technology:
    * ARMv6: ARM 11, ARM Cortex-M
    * ARMv7: ARM Cortex-A, ARM Cortex-M, ARM Cortex-R
    * ARMv8 (announced): will support 64-bit data and addressing

    He next gave the Java specifics for ARM, starting with floating-point operations:
    * Despite being an ARMv6 processor, the Pi does include an FPU; an FPU only became standard as of ARMv7
    * The FPU (Hard Float, or HF) is much faster than a software library
    * Linux distros and the Oracle JVM for ARM assume no HF on ARMv6, so special builds of both are needed; a Raspbian distro build is now available, and the Oracle JVM build is in the works, release date TBD

    Not so RISC: performance improvements include:
    * DSP enhancements
    * Jazelle
    * Thumb / Thumb2 / ThumbEE
    * Floating point (VFP)
    * NEON
    * Security enhancements (TrustZone)

    He spent a few minutes going over the challenges of using Java on the Raspberry Pi and covered:
    * Sound
    * Vision
    * Serial (TTL UART)
    * USB
    * GPIO

    On implementing sound with Java he pointed out:
    * Sound drivers are now included in new distros
    * Java Sound API: remember to add audio to the user's groups; some bits work, others not so much
    * Playing a WAV file (in the right format) works
    * Using MIDI hangs trying to open a synthesizer
    * FreeTTS text-to-speech should work once sound works properly

    He turned to JavaFX on the Raspberry Pi:
    * Currently internal builds only; will be released as a technology preview soon
    * The work involves an optimal implementation of the Prism graphics engine (X11?)
    * Once the JavaFX implementation is completed there will be little of concern to developers: it's just Java (WORA)

    He explained the basics of the serial port:
    * The UART provides TTL-level signals (3.3V)
    * RS-232 uses 12V signals
    * Use a MAX3232 chip to convert
    * Use this for access to the serial console

    He summarized his key points: the Raspberry Pi is a very cool (and cheap) computer that is great for teaching, a great introduction to ARM, works very well with Java and will work better in the future. The opportunities are limitless. For further info, check out the Raspberry Pi User Guide by Eben Upton and Gareth Halfacree. From there, Ritter tried out several fun demos, some of which worked better than others, but all of which were greeted with considerable enthusiasm, support and good humor (even when he ran into some glitches). All in all, this was a fun and lively session.
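
    For readers who want to try the GPIO challenge mentioned above from plain Java, here is a minimal sketch (mine, not from the session), assuming the Linux sysfs GPIO interface under /sys/class/gpio, a hypothetical pin number, and a user with permission to write there; an LED can be blinked with nothing more than standard file I/O:

        import java.io.FileWriter;
        import java.io.IOException;

        public class GpioBlink {
            private static final String GPIO = "/sys/class/gpio";

            // sysfs GPIO files take short text values, so plain writes suffice
            private static void write(String path, String value) throws IOException {
                try (FileWriter fw = new FileWriter(path)) {
                    fw.write(value);
                }
            }

            public static void main(String[] args) throws Exception {
                String pin = "17";                                 // hypothetical pin wired to an LED
                write(GPIO + "/export", pin);                      // ask the kernel to expose the pin
                write(GPIO + "/gpio" + pin + "/direction", "out");
                for (int i = 0; i < 10; i++) {
                    write(GPIO + "/gpio" + pin + "/value", (i % 2 == 0) ? "1" : "0");
                    Thread.sleep(500);                             // toggle every half second
                }
                write(GPIO + "/unexport", pin);                    // release the pin
            }
        }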


  • creating a pre-menu level select screen

    - by Ephiras
    Hi, I am working on creating a tower defence Java applet game and have come to a road block about implementing a title screen from which I can select the level and difficulty for the rest of the game. My title screen class is called Menu. From this Menu class I need to pass many different variables into my Main class. I have used different classes before and know how to run them and such, but if both classes extend Applet and each has its own graphics method, how can I run things from Main even though it was created in Menu?

    What I essentially want to do is run the Menu class with its action listeners and graphics until a difficulty button has been selected, then run the Main class (which 100% works without the Menu class) and pretty much terminate Menu so that I cannot go back to it and do not see its buttons or graphics menus. Can I run one applet and, when I choose a button, close that one and launch the other one? If you would like to download the full project you can find it here; I had to comment out all the code that wasn't working.

    My Menu class:

        import java.awt.*;
        import java.awt.event.*;
        import java.applet.*;

        public class Menu extends Applet implements ActionListener {
            Button bEasy, bMed, bHard;
            Main m;

            public void init() {
                bEasy = new Button("Easy");
                bEasy.setBounds(140, 200, 100, 50);
                bEasy.addActionListener(this); // register listeners so the buttons fire events
                add(bEasy);
                bMed = new Button("Medium");
                bMed.setBounds(280, 200, 100, 50);
                bMed.addActionListener(this);
                add(bMed);
                bHard = new Button("Hard");
                bHard.setBounds(420, 200, 100, 50);
                bHard.addActionListener(this);
                add(bHard);
                setLayout(null);
            }

            public void actionPerformed(ActionEvent e) {
                // Java cannot switch on object references, so compare the event source directly
                if (e.getSource() == bEasy) {
                    m = new Main(6000, 20, "levels/levelEasy.png"); // castle health, max towers, world
                } else if (e.getSource() == bMed) {
                    m = new Main(4000, 15, "levels/levelMed.png");
                } else if (e.getSource() == bHard) {
                    m = new Main(2000, 10, "levels/levelEasy.png");
                }
            }

            public void paint(Graphics g) {
                // m.draw(g);
            }
        }

    and here is my Main class initialising code:

        import java.awt.*;
        import java.awt.event.*;
        import java.applet.*;
        import java.io.IOException;

        public class Main extends Applet implements Runnable, MouseListener, MouseMotionListener, ActionListener {
            Button startButton, UpgRange, UpgDamage; // set up the buttons
            Color roadCol, startCol, finCol, selGrass, selRoad; // set up the colors
            Enemy e[][];
            Tower t[];
            Image towerpic, backpic, roadpic, levelPic;
            private Image i;
            private Graphics doubleG;

            // here is the world 0=grass 1=road 2=start 3=end
            int world[][], eStartX, eStartY;
            boolean drawMouse, gameEnd;
            static boolean start = false;
            static int gridLength = 15;
            static int round = 0;
            int Mx, My, timer = 1500;
            static int sqrSize = 31;
            int towers = 0, towerSelected = -10;
            static int castleHealth = 2000;
            String levelPath; // choose the level Easy, Med or Hard
            int maxEnemy[] = {5, 7, 12, 20, 30, 15, 50, 30, 40, 60}; // number of enemies per round
            int maxTowers = 15; // maximum number of towers allowed
            static int money = 2000, damPrice = 600, ranPrice = 350, towerPrice = 700;
            // money = the initial amount of money you start off with
            // damPrice is the price to increase the damage of a tower
            // ranPrice is the price to increase the range of a tower

            public Main(int cH, int mT, int mo, int dP, int rP, int tP, String path, int[] mE) { // constructor 1
                castleHealth = cH;
                maxTowers = mT;
                money = mo;
                damPrice = dP;
                ranPrice = rP;
                towerPrice = tP;
                levelPath = path; // assign the field rather than declaring a local that shadows it
                maxEnemy = mE;
                buildLevel();
            }

            public Main(int cH, int mT, String path) { // basic constructor
                castleHealth = cH;
                maxTowers = mT;
                levelPath = path;
                buildLevel();
            }

            public void init() {
                setSize(sqrSize * 15 + 200, sqrSize * 15); // set the size of the screen
                roadCol = new Color(255, 216, 0); // set the colors for the different objects
                startCol = new Color(0, 38, 255);
                finCol = new Color(255, 0, 0);
                // selColor is the color of something when your mouse hovers over it
                selRoad = new Color(242, 204, 155);
                selGrass = new Color(0, 190, 0);
                roadpic = getImage(getDocumentBase(), "images/road.jpg");
                towerpic = getImage(getDocumentBase(), "images/tower.png");
                backpic = getImage(getDocumentBase(), "images/grass.jpg");
                levelPic = getImage(getDocumentBase(), "images/level.jpg");
                e = new Enemy[maxEnemy.length][]; // activates all of the enemies
                for (int r = 0; r < e.length; r++)
                    e[r] = new Enemy[maxEnemy[r]];
                t = new Tower[maxTowers];
                for (int i = 0; i < t.length; i++)
                    t[i] = new Tower(); // activates all the towers
                for (int i = 0; i < e.length; i++) // sets all of the enemies' starting co-ordinates
                    for (int j = 0; j < e[i].length; j++)
                        e[i][j] = new Enemy(eStartX, eStartY, world);
                initButtons(); // initialise all the buttons
                addMouseMotionListener(this);
                addMouseListener(this);
            }
        }


  • Pace Layering Comes Alive

    - by Tanu Sood
    Rick Beers is Senior Director of Product Management for Oracle Fusion Middleware. Prior to joining Oracle, Rick held a variety of executive operational positions at Corning, Inc. and Bausch & Lomb. With a professional background that includes senior management positions in manufacturing, supply chain and information technology, Rick brings a unique set of experiences to cover the impact that technology can have on business models, processes and organizations. Rick hosts the IT Leaders Editorial on a monthly basis.

    By now, readers of this column are quite familiar with Oracle AppAdvantage, a unified framework of middleware technologies, infrastructure and applications utilizing a pace layered approach to enterprise systems platforms.

    1. Standardize and consolidate core enterprise applications by removing invasive customizations, costly workarounds and the complexity that multiple instances create.
    2. Move business-specific processes and applications to the Differentiate layer, thus creating greater business agility with process extensions and best-of-breed applications managed by cross-application process orchestration.
    3. The Innovate layer contains all the business capabilities required for engagement, collaboration and intuitive decision making. This is the layer where innovation will occur, as people engage one another in a secure yet open and informed way.
    4. Simplify IT by minimizing complexity, improving performance and lowering cost with secure, reliable and managed systems across the entire enterprise.

    But what hasn't been discussed is the pace layered architecture that Oracle AppAdvantage adopts. What is it, what are its origins and why is it relevant to enterprise scale applications and technologies? It's actually a fascinating tale that spans the past 20 years, and a basic understanding of it provides a wonderful context for what is evolving as the future of enterprise systems platforms.

    It all begins in 1994 with a book by noted architect Stewart Brand, of 'Whole Earth Catalog' fame. In his 1994 book How Buildings Learn, Brand popularized the term 'shearing layers', arguing that any building is actually a hierarchy of pieces, each of which inherently changes at different rates. In 1997 he produced a six-part BBC series adapted from the book, in which Part 6 focuses on shearing layers. In this segment Brand begins to introduce the concept of 'pace'. Brand further refined this idea in his subsequent book, The Clock of the Long Now, which began to link the concept of shearing layers to computing and introduced the term 'pace layering', where he proposes that: "An imperative emerges: an adaptive [system] has to allow slippage between the differently-paced systems … otherwise the slow systems block the flow of the quick ones and the quick ones tear up the slow ones with their constant change. Embedding the systems together may look efficient at first but over time it is the opposite and destructive as well."

    In 2000, IBM architects Ian Simmonds and David Ing published a paper entitled A Shearing Layers Approach to Information Systems Development, which applied the concept of shearing layers to systems design and development. It argued that at the time systems were still too rigid; that they constrained organizations by their inability to adapt to changes. The findings in the conclusions section are particularly striking: "Our starting motivation was that enterprises need to become more adaptive, and that an aspect of doing that is having adaptable computer systems. The challenge is then to optimize information systems development for change (high maintenance) rather than stability (low maintenance). Our response is to make it explicit within software engineering the notion of shearing layers, and explore it as the principle that systems should be built to be adaptable in response to the qualitatively different rates of change to which they will be subjected. This allows us to separate functions that should legitimately change relatively slowly and at significant cost from that which should be changeable often, quickly and cheaply."

    The problem at the time, of course, was that this vision of adaptable systems was simply not possible within the confines of first-generation ERP systems, which were conceived, designed and developed for standardization and compliance. It wasn't until the maturity of open, standards-based integration, and the middleware innovation that followed, that pace layering became an achievable goal. And Oracle is leading the way. Oracle's AppAdvantage framework makes pace layering come alive by taking a strategic vision 20 years in the making and transforming it into a reality. It allows enterprises to retain and even optimize their existing ERP systems, while wrapping around those ERP systems three layers of capabilities that inherently adapt as needed, at a pace that's optimal for the enterprise.


  • Agile Testing Days 2012 – My First Conference!

    - by Chris George
    I’d like to give you a bit of background first… so please bear with me! In 1996, whilst studying for the final year of my degree, I applied for a job as a C++ developer at a small software house in Hertfordshire. After bodging up the technical part of the interview I didn’t get the job, but was offered a position as a QA Engineer instead. The role sounded intriguing and the pay was pretty good, so in the absence of anything else I took it. Here began my career in the world of software testing!

    Back then, testing/QA was often an afterthought, something that was bolted on to the development process and very much a second-class citizen. Test automation was rare, and tools were basic or non-existent! The internet was just starting to take off, and whilst there might have been testing communities and resources, we were certainly not exposed to any of them. After 8 years I moved to another small company, and again didn’t find myself exposed to any of the changes that were happening in the industry. It wasn’t until I joined Red Gate in 2008 that my view of testing and software development as a whole started to expand. But it took a further 4 years for my view of testing to be totally blown open, and so the story really begins…

    In May 2012 I was fortunate to land the role of Head of Test Engineering. Soon after, I received an email with details of the “Agile Testing Days” conference in Potsdam, Germany. I looked over the suggested programme and some of the talks piqued my interest. For numerous reasons I’d shied away from attending conferences in the past, one of the main ones being that I didn’t see much benefit in attending loads of talks when I could just read about stuff like that on the internet. However, in my new role, I decided that it was time to bite the bullet and at least go to one conference. Perhaps I could get some new ideas to supplement and support some of the ideas I already had. So, on the 18th November 2012, myself and three other Red Gaters boarded a plane at Heathrow bound for Potsdam, Germany to attend Agile Testing Days 2012.

    Tutorial Day – “Software Testing Reloaded”

    We chose to do the tutorials on the 19th. I chose the one titled “Software Testing Reloaded – So you wanna actually DO something? We’ve got just the workshop for you. Now with even less powerpoint!”. With such a concise and serious title I just had to see what it was about! I nervously entered the room to be greeted by tables and chairs all over the place, not set out and frankly in one hell of a mess! There were a few people in there playing a game with dice. Okaaaay… this is going to be a long day! Actually the dice game was an exercise in deduction and simplification… I found it very interesting and it is certainly something I’ll be using at work as a training exercise! (I won’t explain the game here because I don’t want to let the cat out of the bag…)

    The tutorial consisted of several games exploring different aspects of testing. They were all practical yet required a fair amount of thinking. Matt Heusser and Pete Walen were running the tutorial, and presented it in a very relaxed and light-hearted manner. It was really my first experience of working in small teams with testers from very different backgrounds, and it was really enjoyable. Matt & Pete were very approachable and offered advice where required whilst still making you work for the answers! One of the tasks was to devise several strategies for testing some electronic dice. The premise was that a Vegas casino wanted to use the dice to appeal to the twenty-somethings interested in tech, but needed assurance that they were as reliable and random as traditional dice. This was a very interesting and challenging exercise that forced us to challenge various assumptions and determine/clarify requirements, but most of all it was frustrating, because the dice made a very, very irritating beeping noise. Multiply that by at least 12 dice and I was dreaming about them all that night!!

    Some of the main takeaways that were brilliantly demonstrated through the games were not to make assumptions, to challenge requirements, and to have fun testing! The tutorial lasted the whole day, but to be honest the day went very quickly! My introduction to the conference experience started very well indeed, and I would talk to both Matt and Pete several times during the 4 days. Days 1, 2 & 3 will be coming soon…


  • Drive Online Engagement with Intuitive Portals and Websites

    - by kellsey.ruppel
    As more and more business is being conducted via online channels, engaging users and making them more productive and efficient through these online channels is becoming critical. These users could be customers, partners or employees, and while the respective channels through which they interact might be different, these users do increasingly interact with your business through the Web, mobile devices, and now various social mediums. Businesses need a user engagement strategy and solution that allows them to deliver targeted and personalized content and applications to users through the various online mediums and touch points.

    The customer experience today is made up of an ongoing set of interactions with organizations across many channels, online and offline. The direct channel (including sales reps, email and mail) is an important point of contact, as is the contact center. Contact centers rely on the phone as a means of interacting with customers, and now more than ever, the Web as well. However, the online organization is often managed separately from the contact center organization within a business. In-store is an important channel for retailers, offering point-of-service for human interactions, and kiosks which enable self-service. Kiosks are a Web-enabled touch point, but in-store kiosks are often managed by the head of retail operations rather than the online organization. And of course the online channel, including customer interactions with an organization via digital means -- on the website, mobile websites, and social networking sites -- has risen to paramount importance in recent years in the customer experience.

    Historically, all of these channels have been managed separately. The result of all of this fragmentation is that the customer touch points with an organization are siloed. Their interactions online are not known and respected in their dealings in-store. Their calls to the contact center are not taken as input into what the website offers them when they arrive. Think of how many times you've fallen victim to this: your experience with the company call center is different than the experience in-store; your experience with the company website on your desktop computer is different than your experience on your iPad. I think you get the point. But the customer isn't the only one we need to look at here; employees and the IT organization have challenges as well when it comes to online engagement.

    There are many common tools and technologies that organizations have been using to try to engage users, whether customers, employees or partners: different blog and wiki technologies (some hosted, some open source, sometimes embedded in platforms), tagging, file sharing and content management, and composite applications for self-service applications and activity streams. Basically, there are many different tools and technologies that each address different aspects of user engagement. One of the challenges with this is that implementing, for example, just a file sharing and basic collaboration solution may meet the needs of the business user for one aspect of user engagement, but it may not be the best solution for engaging with customers and partners, or it may not fit with IT standards such as integrating with single sign-on tools or the corporate website. Often, the scenario is that businesses have to acquire multiple pieces and parts as well as build custom applications to meet their needs, leaving customers and partners with a more fragmented way of interacting with the company. Every organization has some sort of enterprise balancing act between the needs of the business user and the needs and restrictions enforced by enterprise IT groups.

    As we've been discussing, we all know that the expectations for online engagement have changed since the days of the static, one-size-fits-all website. With these changes have come some very difficult organizational challenges as well. Today, as a business user, you want to engage with your customers, and your customers expect you to know who they are. They expect you to recall the details they've provided to you on your website, to your CSRs and to your sales people. They expect you to remember their purchases, their preferences and their problems. And they expect you to know who they are, equally well, across channels, including your web presence. This creates a host of challenges for today's business users. Delivering targeted, relevant content online is now essential for converting prospects into customers and for engendering long-term loyalty. Business users need the ability to leverage customer data from different sources to fuel their segmentation and targeting strategies and to easily set up, manage and optimize online campaigns. Also critical, they need the ability to accomplish these things on the fly, at the speed of the marketplace, while making iterative improvements.

    These changing expectations put a host of demands on the IT organization as well. The web presence must be able to scale to support the delivery of personalized and targeted content to thousands of site visitors without sacrificing performance. And integration between systems becomes more important as well, as organizations strive to obtain one view of the customer culled from WCM data, CRM data and more.

    So then, how do you solve these challenges and meet the growing demands of your users? You need a solution that:
    • Unifies every customer interaction across all channels
    • Personalizes the products and content that interest the customer, and to the device
    • Delivers targeted promotions to the right customer
    • Engages and improves employee productivity
    • Provides self-service access to applications
    • Includes embedded, in-context social capabilities

    So how then do you achieve this level of online engagement, complete customer experience and engage your employees? The answer: Oracle WebCenter. If you want to learn how to get there, we encourage you to attend this Thursday's webcast, Drive Online Engagement with Intuitive Portals and Websites, where we'll talk about how you can transform your portal experience and optimize online engagement -- making your portals more interactive and more engaging across multiple channels. Register today!


  • concurrency::index<N> from amp.h

    - by Daniel Moth
    Overview

    C++ AMP introduces a new template class index<N>, where N can be any value greater than zero, that represents a unique point in N-dimensional space, e.g. if N=2 then an index<2> object represents a point in 2-dimensional space. This class is essentially a coordinate vector of N integers representing a position in space relative to the origin of that space. It is ordered from most-significant to least-significant (so, if the 2-dimensional space is rows and columns, the first component represents the rows). The underlying type is a signed 32-bit integer, and component values can be negative. The rank field returns N.

    Creating an index

    The default parameterless constructor returns an index with each dimension set to zero, e.g.

        index<3> idx; // represents point (0,0,0)

    An index can also be created from another index through the copy constructor or assignment, e.g.

        index<3> idx2(idx); // or index<3> idx2 = idx;

    To create an index representing something other than 0, you call its constructor as per the following 4-dimensional example:

        int temp[4] = {2, 4, -2, 0};
        index<4> idx(temp);

    Note that there are convenience constructors (that don't require an array argument) for creating index objects of rank 1, 2, and 3, since those are the most common dimensions used, e.g.

        index<1> idx(3);
        index<2> idx(3, 6);
        index<3> idx(3, 6, 12);

    Accessing the component values

    You can access each component using the familiar subscript operator, e.g.

        // one-dimensional example
        index<1> idx(4);
        int i = idx[0]; // i=4

        // two-dimensional example
        index<2> idx(4, 5);
        int i = idx[0]; // i=4
        int j = idx[1]; // j=5

        // three-dimensional example
        index<3> idx(4, 5, 6);
        int i = idx[0]; // i=4
        int j = idx[1]; // j=5
        int k = idx[2]; // k=6

    Basic operations

    Once you have your multi-dimensional point represented in the index, you can treat it as a single entity, including performing common operations between it and an integer (through operator overloading): -- (pre- and post-decrement), ++ (pre- and post-increment), %=, *=, /=, +=, -=, %, *, /, +, -. There are also operator overloads for operations between index objects, i.e. ==, !=, +=, -=, +, -. Here is an example (where no assertions are broken):

        index<2> idx_a;
        index<2> idx_b(0, 0);
        index<2> idx_c(6, 9);
        _ASSERT(idx_a.rank == 2);
        _ASSERT(idx_a == idx_b);
        _ASSERT(idx_a != idx_c);

        idx_a += 5;
        idx_a[1] += 3;
        idx_a++;
        _ASSERT(idx_a != idx_b);
        _ASSERT(idx_a == idx_c);

        idx_b = idx_b + 10;
        idx_b -= index<2>(4, 1);
        _ASSERT(idx_a == idx_b);

    Usage

    You'll most commonly use index<N> objects to index into data types that we'll cover in future posts (namely array and array_view). Also when we look at the new parallel_for_each function we'll see that an index<N> object is the single parameter to the lambda, representing the (multi-dimensional) thread index…

    In the next post we'll go beyond being able to represent an N-dimensional point in space, and we'll see how to define the N-dimensional space itself through the extent<N> class. Comments about this post by Daniel Moth welcome at the original blog.
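
    To make the Usage section concrete ahead of those future posts, here is a minimal forward-looking sketch (an illustration, not code from the post) of an index<1> arriving as the lambda parameter of parallel_for_each; the array_view mechanics are covered properly in the follow-up posts:

        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        int main() {
            std::vector<int> data(16, 1);
            array_view<int, 1> av(16, data);      // wrap the vector for accelerator access
            parallel_for_each(av.extent, [=](index<1> idx) restrict(amp) {
                av[idx] *= 2;                     // idx identifies this thread's element
            });
            av.synchronize();                     // copy results back into 'data'
        }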


  • Is Data Science “Science”?

    - by BuckWoody
    I hold the term “science” in very high esteem. I grew up on the Space Coast in Florida, and eventually worked at the Kennedy Space Center, surrounded by very intelligent people who worked in various scientific fields. Recently a new term has entered the computing dialog – “Data Scientist”. Since it’s not a standard term, it has a lot of definitions, and in fact has been disputed as a correct term. After all, the reasoning goes, if there’s no such thing as “Data Science” then how can there be a Data Scientist? This argument has been made before, albeit with a different term – “Computer Science”. In Peter Denning’s excellent article “Is Computer Science Science?” (April 2005/Vol. 48, No. 4, Communications of the ACM) there are many points that separate “science” from “engineering” and even “art”. I won’t repeat the content of that article here (I recommend you read it on your own) but will leverage the points he makes there.

    Definition of Science

    To ask the question “is data science ‘science’” we need to start with a definition of terms. Various references put the definition into the same basic areas:
    • Study of the physical world
    • Systematic and/or disciplined study of a subject area
    ...and then they include the things studied, the bodies of knowledge and so on. The word itself comes from Latin, and means merely “to know” or “to study to know”. Greek divides knowledge further into “truth” (episteme) and practical use or effects (tekhne). Normally computing falls into the second realm.

    Definition of Data Science

    And now a more controversial definition: Data Science. This term is so new and perhaps so niche that the major dictionaries haven’t yet picked it up (my OED reference is older – can’t afford to pop for the online registration at present). Researching the term’s general use, I created an amalgam of the definitions this way: “Studying and applying mathematical and other techniques to derive information from complex data sets.”

    Using this definition, data science certainly seems to be science: it’s learning about and studying some object or area using systematic methods. But implicit within the definition is the word “application”, which makes the process more akin to engineering or even technology than science. In fact, I find that using these techniques, and data itself, is part of science, not science itself. I leave out the concept of studying data patterns or algorithms as part of this discipline. That is actually a domain I see within research, mathematics or computer science. That of course is a type of science, but it does not seek practical applications.

    As part of the argument against calling it “Data Science”, some point to the scientific method of creating a hypothesis, testing with controls, testing results against the hypothesis, and documenting for repeatability. These are not steps that we often take in working with data. We normally start with a question, and fit patterns and algorithms to predict outcomes and find correlations. In this way Data Science is more akin to statistics (and in fact makes heavy use of it) in the process, rather than starting with an assumption and following on with it.

    So, is Data Science “Science”? I’m uncertain, and I’m uncertain it matters. Even if we are facing rampant “title inflation” these days (does anyone introduce themselves as a secretary or supervisor anymore?) I can tolerate the term, at least with the intent that we use data to study problems across a wide spectrum rather than restricting it to a single domain. And I also understand those who have worked hard to achieve the very honorable title of “scientist” and who have issues with those who borrow the term without asking. What do you think? Science, or not? Does it matter?


  • Jersey 2 in GlassFish 4 - First Java EE 7 Implementation Now Integrated (TOTD #182)

    - by arungupta
    The JAX-RS 2.0 specification released their Early Draft 3 recently. One of my earlier blogs explained as the features were first introduced in the very first draft of the JAX-RS 2.0 specification. Last week was another milestone when the first Java EE 7 specification implementation was added to GlassFish 4 builds. Jakub blogged about Jersey 2 integration in GlassFish 4 builds. Most of the basic functionality is working but EJB, CDI, and Validation are still a TBD. Here is a simple Tip Of The Day (TOTD) sample to get you started with using that functionality. Create a Java EE 6-style Maven project mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DgroupId=example -DartifactId=jersey2-helloworld -DarchetypeVersion=1.5 -DinteractiveMode=false Note, this is still a Java EE 6 archetype, at least for now. Open the project in NetBeans IDE as it makes it much easier to edit/add the files. Add the following <respositories> <repositories> <repository> <id>snapshot-repository.java.net</id> <name>Java.net Snapshot Repository for Maven</name> <url>https://maven.java.net/content/repositories/snapshots/</url> <layout>default</layout> </repository></repositories> Add the following <dependency>s <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.10</version> <scope>test</scope></dependency><dependency> <groupId>javax.ws.rs</groupId> <artifactId>javax.ws.rs-api</artifactId> <version>2.0-m09</version> <scope>test</scope></dependency><dependency> <groupId>org.glassfish.jersey.core</groupId> <artifactId>jersey-client</artifactId> <version>2.0-m05</version> <scope>test</scope></dependency> The complete list of Maven coordinates for Jersey2 are available here. An up-to-date status of Jersey 2 can always be obtained from here. Here is a simple resource class: @Path("movies")public class MoviesResource { @GET @Path("list") public List<Movie> getMovies() { List<Movie> movies = new ArrayList<Movie>(); movies.add(new Movie("Million Dollar Baby", "Hillary Swank")); movies.add(new Movie("Toy Story", "Buzz Light Year")); movies.add(new Movie("Hunger Games", "Jennifer Lawrence")); return movies; }} This resource publishes a list of movies and is accessible at "movies/list" path with HTTP GET. The project is using the standard JAX-RS APIs. Of course, you need the trivial "Movie" and the "Application" class as well. They are available in the downloadable project anyway. Build the project mvn package And deploy to GlassFish 4.0 promoted build 43 (download, unzip, and start as "bin/asadmin start-domain") as asadmin deploy --force=true target/jersey2-helloworld.war Add a simple test case by right-clicking on the MoviesResource class, select "Tools", "Create Tests", and take defaults. Replace the function "testGetMovies" to @Testpublic void testGetMovies() { System.out.println("getMovies"); Client client = ClientFactory.newClient(); List<Movie> movieList = client.target("http://localhost:8080/jersey2-helloworld/webresources/movies/list") .request() .get(new GenericType<List<Movie>>() {}); assertEquals(3, movieList.size());} This test uses the newly defined JAX-RS 2 client APIs to access the RESTful resource. 
    Run the test with "mvn test" and see output like:

        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running example.MoviesResourceTest
        getMovies
        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.561 sec

        Results :

        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

    GlassFish 4 contains Jersey 2 as the JAX-RS implementation. If you want to use Jersey 1.x functionality, then Martin's blog provides more details on that. All JAX-RS 1.x functionality will be supported using standard APIs anyway; this workaround is only required if Jersey 1.x-specific functionality needs to be accessed. The complete source code explained in this project can be downloaded from here. Here are some pointers to follow:

    - JAX-RS 2 Specification Early Draft 3
    - Latest status on the specification (jax-rs-spec.java.net)
    - Latest JAX-RS 2.0 Javadocs
    - Latest status on Jersey (Reference Implementation of JAX-RS 2 - jersey.java.net)
    - Latest Jersey API Javadocs
    - Latest GlassFish 4.0 Promoted Build
    - Follow @gf_jersey

    Provide feedback on Jersey 2 to [email protected] and on the JAX-RS specification to [email protected].

    Read the article

  • EM CLI, diving in and beyond!

    - by Maureen Byrne
    Doing more in less time… isn't that what we all strive to do? With this in mind, I put together two screen watches on the Oracle Enterprise Manager 12c command line interface, or EM CLI as it is also known. There is a wealth of information on any topic that you choose to read about, from manual pages to coding documents…might I even say blog posts? In our busy lives it is so nice to just sit back with a short video, watch, and learn enough to dive in. Doing more in less time is the essence of EM CLI. It enables you to script fundamental and complex administrative tasks in an elegant way, thanks to the Jython scripting language. Repetitive tasks can be scripted and reused again and again. Sure, a graphical user interface provides a more intuitive, step-by-step approach to tasks, it provides a way of quickly becoming familiar with a product and its many features, and it is definitely the way to go when viewing performance data and historical trending…but for repetitive and complex tasks, scripting is the way to go!

    Let us take the everyday task of creating an administrator. Using EM CLI in interactive mode, the command could look like this:

        emcli> create_user(name='jan.doe', type='EXTERNAL_USER')

    This command creates an administrator called jan.doe who is an externally authenticated user, possibly LDAP or SSO, defined by the EXTERNAL_USER tag. The create_user procedure takes many arguments; see the documentation for more information.

    Now, where EM CLI really shines and shows power is in creating multiple users. Regardless of the number, tens or thousands, the effort is the same. With the use of a standard programming construct, a loop, you can place your create_user() procedure within it. Using a loop allows you to iterate through a previously created list, creating new users until the list is complete. Using EM CLI in script mode, your Jython loop would look something like this:

        for user in list_of_users:
            create_user(name=user, expire='true', password='welcome123')

    This Jython code snippet iterates through a previously defined list of names, list_of_users, taking each name (user, in this case) and creating an administrator, setting the password to welcome123 but forcing the user to reset it when they first log in. This is only one of over four hundred procedures created to expose Oracle Enterprise Manager 12c functionality in a powerful and programmatic way.

    It is a few months since we released EM CLI with the scripting option, and we are seeing many users adapt to this fun and powerful way of using Oracle Enterprise Manager 12c. What are the first steps? Watch these screen watches, and dive in. The first screen watch steps you through where and how to download and install, and how to run your first few commands. The second steps you through a few scripts. Next time, I am going to show you the basic building blocks for writing a Jython script to perform Oracle Enterprise Manager 12c administrative tasks. Join this growing group of EM CLI users…. Dive in!

    Read the article

  • Building Enterprise Smartphone App – Part 4: Application Development Considerations

    - by Tim Murphy
    This is the final part in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback.

    Application Development Considerations

    Now we get to the actual building of your solutions. What are the skills and resources that will be needed in order to develop a smartphone application in the enterprise?

    Language Knowledge

    One of the first things to consider when deciding on a platform is which language you either have the largest in-house skill base for or can most easily acquire. If you already have developers who know Java or C#, you may want to use either Android or Windows Phone. You should also take into consideration the market availability of developers: if your key developer leaves, how easy is it to find a knowledgeable replacement?

    A second consideration when it comes to programming languages is the qualities exposed by the languages of a particular platform. How well do the development language and its associated frameworks support things like security and access to the features of the smartphone hardware? This will play into your overall cost of ownership if you have to create this infrastructure on your own.

    Manage Limited Resources

    Everything is limited on a smartphone: battery, memory, processing power, network bandwidth. When developing your applications you will have to keep your footprint as small as possible in every way. This means not running unnecessary processes in the background that will drain the battery, or pulling more data over the airwaves than you have to. You also want to keep your on-device data in as compact a format as possible.

    Mobile Design Patterns

    There are a number of design patterns that have either come to life because of smartphone development or have been adapted for this use. The main pattern in the Windows Phone environment is MVVM (Model-View-ViewModel). This is great for overall application structure and separation of concerns; the fun part is trying to keep that separation as pure as possible (a short sketch follows at the end of this section). Many of the other patterns may or may not have strict definitions, but some that you need to be concerned with are push notification, asynchronous communication, and offline data storage.

    Real estate is limited on smartphones and even tablets. You are also limited in the type of controls that can be represented in the UI. This means rethinking how you modularize your application. Typing is also much harder to do, so you want to reduce it as much as possible. This leads to UI patterns. While not what we would traditionally think of as design patterns, the guidance each platform has for UI design is critical to the success of your application. If users find the application difficult to navigate, they will not use it.

    Development Process

    Because of the differences in development tools required, test devices, and certification and deployment processes, your teams will need to learn new ways of working together. This will include the need to integrate service contracts of back-end systems with mobile applications. You will also want to make sure that you present consistency across different access points to corporate data. Your web site may have more functionality than your smartphone application, but it should have a consistent core set of functionality. This all requires greater communication between sub-teams of your developers.
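    To make the MVVM pattern mentioned above concrete, here is a minimal, illustrative C# sketch of the Model-View-ViewModel separation as it is commonly applied on Windows Phone. The class and property names are invented for illustration and are not from any particular framework.

        using System.ComponentModel;

        // Model: plain data, no UI concerns.
        public class Customer
        {
            public string Name { get; set; }
        }

        // ViewModel: exposes model state to the XAML view through bindable properties.
        public class CustomerViewModel : INotifyPropertyChanged
        {
            private readonly Customer _customer;

            public CustomerViewModel(Customer customer)
            {
                _customer = customer;
            }

            public string Name
            {
                get { return _customer.Name; }
                set
                {
                    if (_customer.Name == value) return;
                    _customer.Name = value;
                    OnPropertyChanged("Name"); // the bound view re-reads the property
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }

    The view binds to Name in XAML; neither the model nor the view model references any UI types, which is the separation of concerns being described. Keeping it "pure" means resisting the temptation to reach into controls from the view model.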
    Testing Process

    Testing of smartphone apps has a lot more to do with what happens when you lose connectivity, or when the user navigates away from your application. There are a lot more opportunities for the user or the device to perform disruptive acts. This should be your main testing concentration, aside from the main business requirements. You will need to do things like setting the phone to airplane mode and seeing what the application does, in order to weed out any gaps in your handling of communication interruptions.

    Need For Outside Experts

    Since this is a development area that is new to most companies, the need for experts is a lot greater. Whether these are consultants, vendor representatives, or just development community forums, you will need to establish expert contacts. Nothing is more dangerous for your project timelines than a lack of knowledge. Make sure you know who to call to avoid lengthy delays in your project because of knowledge gaps.

    Security

    Security has to be a major concern for enterprise applications. You aren't dealing with just someone's game standings; you are dealing with a company's intellectual property and competitive advantage. As such you need to start by limiting access to the application itself. Once the user is in the app, you need to ensure that the data is secure at all times. This includes both local storage and across the wire, which means that if a platform doesn't natively support encryption for these functions, you will need to find alternatives to secure your data. You also need to keep secret (encryption) keys obfuscated, or locked away outside of the application; people can disassemble the code otherwise and break your encryption.

    Offline Capabilities

    As we discussed earlier, one of your biggest concerns is not having connectivity. Because of this, a good portion of your code may be dedicated to handling loss of connection and reconnection situations. What do you do if you lose the network? Back up all your transactions and store any supporting data so that operations can continue offline. In order to support this you will need to determine the available flat-file or local database capabilities of the platform. Any failed transactions will need to support a retry mechanism, whether it is automatic or user initiated. This also includes your services, since they will need to be able to roll back partially completed transactions. Whatever you do, don't ignore this area when you are designing your system.

    Deployment

    Each platform has different deployment capabilities. Some are more suited to enterprise situations than others. Apple's approach is probably the most mature at the moment; prior to the current generation of smartphone platforms, it would have been Windows CE. Windows Phone 7 has the limitation that the app has to be distributed through the same network as public-facing applications. You mark them as private, which means that they are only accessible by a direct URL. Unfortunately this does not make them undiscoverable (although it is very difficult). This will change with Windows Phone 8, where companies will be able to certify their own applications and distribute them. Given this, Windows Phone applications need to be more diligent with application access in order to keep them restricted to the company's employees. My understanding of the Android deployment schemes is that they are much less standardized than either iOS or Windows Phone.
    Someone would have to confirm or deny that for me, though, since I have not yet put the time into researching this platform further. Given my limited exposure to the iOS and Android platforms I have not been able to confirm this, but there are varying degrees of user involvement to install and keep applications updated. At one extreme the user just goes to a website to do the install, and in other cases they may need to download files and perform steps to install them.

    Future

    Bluetooth: Today we use Bluetooth for keyboards, mice, and headsets. In the future it could be used by service techs to interrogate car computers, manufacturing systems, or possibly retail machines. This would open smartphones to greater use as almost a Star Trek tricorder: it would give you all your data, as well as serving as a universal remote for just about any device or machine.

    Better corporation-controlled deployment: At least in the Windows Phone world, the upcoming release of Windows Phone 8 will include a private certification and deployment option that is currently not available with Windows Phone 7 (Mango). We currently have to run the apps through the Marketplace certification process and use a targeted distribution method.

    Platform-independent approaches: HTML5 and JavaScript with Web Services have become a popular topic lately, not only for creating flexible web sites, but also for creating cross-platform mobile applications. I'm not yet convinced that this lowest-common-denominator approach is viable in most cases, but it does have its place and seems to be growing. Be sure to keep an eye on it.

    Summary

    From my perspective, enterprise smartphone applications can offer a great competitive advantage to many companies. They are not cheap to build and should be approached cautiously. Understand the factors I have outlined in this series, do your due diligence, and see if there is a portion of your business that can benefit from the mobile experience.

    del.icio.us Tags: Architecture, Smartphones, Windows Phone, iOS, Android

    Read the article

  • Oracle NoSQL Database Exceeds 1 Million Mixed YCSB Ops/Sec

    - by Charles Lamb
    We ran a set of YCSB performance tests on Oracle NoSQL Database using SSD cards and Intel Xeon E5-2690 CPUs, with the goal of achieving 1M mixed ops/sec on a 95% read / 5% update workload. We used the standard YCSB parameters: 13-byte keys and 1KB data size (1,102 bytes after serialization). The maximum database size was 2 billion records, or approximately 2 TB of data. We sized the shards to ensure that this was not an "in-memory" test (i.e., the data portion of the B-Trees did not fit into memory). All updates were durable and used the "simple majority" replica ack policy, effectively 'committing to the network'. All read operations used the Consistency.NONE_REQUIRED parameter, allowing reads to be performed on any replica. In the past we have achieved 100K ops/sec using SSD cards on a single-shard cluster (replication factor 3), so for this test we used 10 shards on 15 Storage Nodes, with each SN carrying 2 Rep Nodes and each RN assigned to its own SSD card. After correcting a scaling problem in YCSB, we blew past the 1M ops/sec mark with 8 shards and proceeded to hit 1.2M ops/sec with 10 shards.

    Hardware Configuration

    We used 15 servers, each configured with two 335 GB SSD cards. We did not have homogeneous CPUs across all 15 servers available to us, so 12 of the 15 were Xeon E5-2690 (2.9 GHz, 2 sockets, 32 threads, 193 GB RAM) and the other 3 were Xeon E5-2680 (2.7 GHz, 2 sockets, 32 threads, 193 GB RAM). There might have been some upside in having all 15 machines configured with the faster CPU, but since CPU was not the limiting factor we don't believe the improvement would be significant. The client machines were Xeon X5670 (2.93 GHz, 2 sockets, 24 threads, 96 GB RAM). Although the clients had 96 GB of RAM, neither the NoSQL Database nor the YCSB clients require anywhere near that amount of memory, and the test could just as easily have been run with much less. Networking was all 10GigE.

    YCSB Scaling Problem

    We made three modifications to the YCSB benchmark. The first was to allow the test to accommodate more than 2 billion records (effectively ints vs. longs). To keep the key size constant, we changed the code to use base 32 for the user ids. The second change involved the way we run the YCSB client, in order to make the test itself horizontally scalable. The basic problem has to do with the way the YCSB test creates its Zipfian distribution of keys, which is intended to model "real" loads by generating clusters of key collisions. Unfortunately, the percentage of collisions on the most contentious keys remains the same even as the number of keys in the database increases. As we scale up the load, the number of collisions on those keys increases as well, eventually exceeding the capacity of the single server used for a given key. This is not a workload that is realistic or amenable to horizontal scaling. YCSB does provide alternate key distribution algorithms, so this is not a shortcoming of YCSB in general. We decided that a better model would be for the key collisions to be limited to a given YCSB client process. That way, as additional YCSB client processes (i.e., additional load) are added, they each maintain the same number of collisions they encounter themselves, but do not increase the number of collisions on a single key in the entire store. We added client processes proportionally to the number of records in the database (and therefore the number of shards). (A small illustrative sketch of this per-client approach appears at the end of this post.)
    This change to the use of YCSB better models a use case where new groups of users are likely to access either just their own entries, or entries within their own subgroups, rather than all users showing the same interest in a single global collection of keys. If an application finds every user having the same likelihood of wanting to modify a single global key, that application has no real hope of getting horizontal scaling. Finally, we used read/modify/write (also known as "Compare And Set") style updates during the mixed phase. This uses versioned operations to make sure that no updates are lost. This mode of operation provides better application behavior than the way we have typically run YCSB in the past, and is only practical at scale because we eliminated the shared key collision hotspots. It is also a more realistic testing scenario. To reiterate, all updates used a simple majority replica ack policy, making them durable.

    Scalability Results

    In the table below, the "KVS Size" column is the number of records with the number of shards and the replication factor. Hence, the first row indicates 400m total records in the NoSQL Database (KV Store), 2 shards, and a replication factor of 3. The "Clients" column indicates the number of YCSB client processes. "Threads" is the number of threads per process with the total number of threads. Hence, 90 threads per YCSB process for a total of 360 threads. The client processes were distributed across 10 client machines.

        Shards | KVS Size (records) | Clients | Threads   | Mixed Overall Throughput (ops/sec) | Read Latency av/95%/99% (ms) | Write Latency av/95%/99% (ms)
        2      | 400m (2x3)         | 4       | 90 (360)  | 302,152                            | 0.76/1/3                     | 3.08/8/35
        4      | 800m (4x3)         | 8       | 90 (720)  | 558,569                            | 0.79/1/4                     | 3.82/16/45
        8      | 1600m (8x3)        | 16      | 90 (1440) | 1,028,868                          | 0.85/2/5                     | 4.29/21/51
        10     | 2000m (10x3)       | 20      | 90 (1800) | 1,244,550                          | 0.88/2/6                     | 4.47/23/53
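    As an illustration of the per-client collision idea described above (YCSB itself is Java; this is only a sketch in C# with invented names), each client process can sample Zipfian ranks over its own disjoint key range, so every client keeps its own hotspots and never shares them with another client:

        using System;

        // Illustrative only: each client owns a disjoint key range, so the "hot"
        // (most frequently sampled) ranks map to different store keys per client.
        public class PartitionedZipfianKeyChooser
        {
            private readonly double[] _cdf;   // cumulative distribution over ranks
            private readonly long _offset;    // start of this client's key range
            private readonly Random _rng = new Random();

            public PartitionedZipfianKeyChooser(int itemsPerClient, int clientId, double theta = 0.99)
            {
                _offset = (long)clientId * itemsPerClient;
                _cdf = new double[itemsPerClient];
                double norm = 0;
                for (int i = 1; i <= itemsPerClient; i++)
                    norm += 1.0 / Math.Pow(i, theta);
                double running = 0;
                for (int i = 1; i <= itemsPerClient; i++)
                {
                    running += (1.0 / Math.Pow(i, theta)) / norm;
                    _cdf[i - 1] = running;
                }
            }

            // Rank 0 is this client's hottest key; it never collides with another
            // client's hottest key because of the per-client offset.
            public long NextKey()
            {
                double u = _rng.NextDouble();
                int rank = Array.BinarySearch(_cdf, u);
                if (rank < 0) rank = ~rank;
                return _offset + rank;
            }
        }

    Adding a client process adds a new key range with its own local hotspots, which is what makes the load horizontally scalable.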

    Read the article

  • Top Questions and Answers for Plugging into Oracle Database as a Service

    - by David Swanger
    Yesterday we hosted a comprehensive online forum that shared a path to help your organization design, deploy, and deliver a Database as a Service cloud. If you missed the online forum, you can watch it on demand by registering here. We received numerous questions; below are highlights of the most informative:

    DBaaS requires a lengthy and careful design effort. What are the minimum requirements for setting up a scaled-down environment to test it out?
    You should have an OEM 12c environment for DBaaS administration, and then a target database deployment platform that has the key characteristics of what your production environment will look like. This could be a single server, or it could be a small pool of hosts if your production DBaaS will be larger and you want to test a more robust, real-world configuration with Zones and Pools or DR capabilities, for example.

    How does this benefit companies having their own data center?
    This allows companies to transform their internal IT to a service delivery model for the database. The benefits to the company are significant cost savings, improved business agility, and reduced risk. The benefits to the (internal) consumers of the services are much faster provisioning and response to changes in business requirements.

    From a deployment perspective, is DBaaS solely the DBA's job?
    The best deployment model enables the DBA (or end user) to control the entire process. All resources required to deploy the service are pre-provisioned, and there are no external dependencies (on network, storage, or sysadmin teams). The service is created either via a self-service portal or by the DBA.

    The purpose of self service seems to be that the end user does not rely on the DBA. I just need to give him a template; he decides how much AMM he needs. Why should I set it one by one? That doesn't seem to be the purpose of self service.
    Most customers we have worked with define a standardized service catalog, with a few (2 to 5) different classes of service. For each of these classes, there is a pre-defined deployment template, and the user has the ability to select from some pre-defined service sizes. The administrator only has to create this catalog once. Each user then simply selects from the options offered in the catalog.

    Looking at the DBaaS service definition, it seems to be no different from a service definition provided by a well-defined DBA team. Why do you attribute it to DBaaS?
    There are a couple of perspectives. First, some organizations might already be operating with a high level of standardization and a higher level of maturity from an ITIL or Service Management perspective. Their journey to DBaaS could be shorter, and their Service Definition will evolve less, but they still might need to add capabilities such as Self Service and Metering/Chargeback. Other organizations are still operating in highly siloed environments with little automation, and their formal Service Definition (if they have one) will be a lot less mature today. Therefore their future-state DBaaS will look a lot different from their current state, as will their Service Definition.

    How does Database as a Service impact or help with "Click to Compute" or deploying a database in cloud infrastructure?
    DBaaS enables Click to Compute. Oracle DBaaS can be implemented using three architecture models: Oracle Multitenant 12c, native consolidation using Oracle Database, and consolidation using virtualization in infrastructure cloud.
    As the Deploy session showed, you get higher consolidation density and efficiency using Multitenant, and higher isolation using infrastructure cloud. Depending upon your business needs, DBaaS can be implemented using any of these models.

    How exactly is DBaaS different from a traditional database? Do storage/OS/DB all together 'transparently' provide the service to applications? Will there be cross-database access by applications/users?
    Some key differences are: 1) The services run on a shared platform. 2) The services can be rapidly provisioned (< 15 minutes). 3) The services are dynamic and can be relocated, grown, and shrunk as needed to meet business needs, rapidly and without disruption. 4) The user is able to provision the services directly from a standardized service catalog.

    With 24x7x365 databases it's difficult to find off-peak hours to do basic admin tasks such as gathering stats, running backups, and batch jobs. How do pluggable databases handle this, and the different needs/patching downtime of the application databases they might be serving?
    You can gather stats in Oracle Multitenant the same way you had been in regular databases. Regarding patching/upgrading, Oracle Multitenant makes patch/upgrade very efficient in that you can pre-provision a new version/patched multitenant database in a different ORACLE_HOME, then unplug a PDB from its CDB and plug it into the newer/patched CDB in seconds.

    Thanks for all the great questions! If you'd like to learn more and missed the online forum, you can watch it on demand here.

    Read the article

  • Alternative way of developing for ASP.NET to WebForms - Any problems with this?

    - by John
    So I have been developing in ASP.NET WebForms for some time now but often get annoyed with all the overhead (like ViewState and all the JavaScript it generates), and the way WebForms takes over a lot of the HTML generation. Sometimes I just want full control over the markup and produce efficient HTML of my own, so I have been experimenting with what I like to call HtmlForms. Essentially this is using ASP.NET WebForms but without the form runat="server" tag. Without this tag, ASP.NET does not seem to add anything to the page at all. From some basic tests it seems that it runs well and you still have the ability to use code-behind pages and many ASP.NET controls, such as repeaters. Of course, without the form runat="server" many controls won't work. A post at Enterprise Software Development lists the controls that do require the tag. From that list you will see that all of the form elements like TextBoxes, DropDownLists, RadioButtons, etc. cannot be used. Instead you use normal HTML form controls. But how do you access these HTML controls from the code-behind? Retrieving values on post back is easy: you just use Request.QueryString or Request.Form. But passing data to the control could be a little messy. Do you use an ASP.NET Literal control in the value field, or do you use <%= value %> in the markup page? I found it best to add runat="server" to my HTML controls, and then you can access the control in your code-behind like this:

        ((HtmlInputText)txtName).Value = "blah";

    Here's an example that shows what you can do with a textbox and drop down list:

    Default.aspx

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="NoForm.Default" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form action="" method="post">
                <label for="txtName">Name:</label>
                <input id="txtName" name="txtName" runat="server" /><br />
                <label for="ddlState">State:</label>
                <select id="ddlState" name="ddlState" runat="server">
                    <option value=""></option>
                </select><br />
                <input type="submit" value="Submit" />
            </form>
        </body>
        </html>

    Default.aspx.cs

        using System;
        using System.Web.UI.HtmlControls;
        using System.Web.UI.WebControls;

        namespace NoForm
        {
            public partial class Default : System.Web.UI.Page
            {
                protected void Page_Load(object sender, EventArgs e)
                {
                    // Default values
                    string name = string.Empty;
                    string state = string.Empty;

                    if (Request.RequestType == "POST")
                    {
                        // If form submitted (post back)
                        name = Request.Form["txtName"];
                        state = Request.Form["ddlState"];
                        // Server side form validation would go here,
                        // and actions to process the form and redirect
                    }

                    ((HtmlInputText)txtName).Value = name;
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("ACT"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("NSW"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("NT"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("QLD"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("SA"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("TAS"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("VIC"));
                    ((HtmlSelect)ddlState).Items.Add(new ListItem("WA"));
                    if (((HtmlSelect)ddlState).Items.FindByValue(state) != null)
                        ((HtmlSelect)ddlState).Value = state;
                }
            }
        }

    As you can see, you have similar functionality to ASP.NET server controls but more control over the final markup, and
    less overhead like ViewState and all the JavaScript ASP.NET adds. Interestingly, you can also use HttpPostedFile to handle file uploads using your own input type="file" control (and the necessary form enctype="multipart/form-data"). So my question is: can you see any problems with this method, and any thoughts on its usefulness? I have further details and tests on my blog.
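    As a point of comparison for the Literal-versus-<%= %> question above, the inline-expression approach might look like the following sketch; the NameValue property is invented for illustration and is not from John's sample.

    Markup (no runat="server" needed on the input in this variant):

        <input id="txtName" name="txtName" value="<%= NameValue %>" />

    Code-behind:

        public partial class Default : System.Web.UI.Page
        {
            // Exposed to the markup via the <%= %> expression above.
            protected string NameValue { get; set; }

            protected void Page_Load(object sender, EventArgs e)
            {
                // HtmlEncode when echoing input back, to keep the markup valid
                // and avoid script injection.
                NameValue = Server.HtmlEncode(Request.Form["txtName"] ?? string.Empty);
            }
        }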

    Read the article

  • What Counts For a DBA: Fitness

    - by Louis Davidson
    If you know me, you can probably guess that physical exercise is not really my thing. There was a time in my past when it was a larger part of my life, but even then never in the same sort of passionate way as for a number of our SQL friends. For me, I find that mental exercise satisfies what I believe to be the same inner need that drives people to run farther than I like to drive on most Saturday mornings, and it is certainly just as addictive. Mental fitness shares many common traits with physical fitness, especially the need to attain it through repetitive training. I only wish that mental training burned off a bacon cheeseburger in the same manner as jogging around a dewy park on Saturday morning does. In physical training, there are at least two goals, the first of which is to be physically able to do a task. The second is to train the brain to perform the task without thinking too hard about it. No matter how long it has been since you last rode a bike, you will almost certainly be able to hop on and start riding without thinking about the process of pedaling or balancing. If you've never ridden a bike, you could be a physics professor/Olympic athlete and still crash the first few times you try, even though you are as strong as an ox and your knowledge of the physics of bicycle riding makes the concept child's play. For programming tasks, the process is very similar. As a DBA, you will come to know intuitively how to back up, optimize, and secure database systems. As a data programmer, you will work to instinctively use the clauses of Transact-SQL DML so that, when you need to group data three ways (and not four), you will know to use the GROUP BY clause with GROUPING SETS without resorting to a search engine. You have the skill; making it natural then requires repetition, and experience is the primary requirement, not simply learning about a topic. The hardest part of being really good at something is this difference between knowledge and skill. I have recently taken several informative training classes with Kimball University on data warehousing and ETL, so now I have a lot more knowledge about designing data warehouses than before. I have also done a good bit of data warehouse designing of late and have started to improve to some level of proficiency with the theory. Yet, for all of this head knowledge, it is still a struggle to take what I have learned and apply it to the designs I am working on. Data warehousing is still a task that is not yet deeply ingrained in my brain's muscle memory. On the other hand, relational database design is something that, no matter how much or how little I may get to do it, I am comfortable doing. I have done it as a profession now for well over a decade, I teach classes on it, and I also have done (and continue to do) a lot of mental training beyond the work day. Sometimes the training is just basic education, some of it reading blogs and attending sessions at PASS events. My best training comes from spending time working on other people's design issues in forums (though not nearly as much as I would like to lately). Working through other people's problems is a great way to exercise your brain on problems with which you're not immediately familiar. The final bit of exercise I find useful for cultivating mental fitness as a data professional is also probably the nerdiest thing that I will ever suggest you do. Akin to running in place, the idea is to work through designs in your head.
    I have designed more than one database system that would revolutionize grocery store operations, sales at my local Target store, the ordering process at Amazon, and ways to improve Disney World operations to get me through a line faster (some of which they are starting to implement without any of my help). Never are the designs truly fleshed out, but enough to work through structures and processes. On "paper", I have designed database systems to catalog things as trivial as my Lego creations, rental car companies, and my audio and video collections. Once I get the database designed mentally, sometimes I will create the database, add some data (often using Red Gate's Data Generator), and write a few queries to see if a concept was realistic, but I will rarely fully flesh out the database since I have no desire to do any user interface programming anymore. The mental training allows me to keep in practice for when the time comes to do the work I love the most for real…even if I have been spending most of my work time lately building data warehouses. If you are really strong of mind and body, perhaps you can mix a mental run with a physical run; though don't run off of a cliff while contemplating how you might design a database to catalog the trees on a mountain…that would be contradictory to the purpose of both types of exercise.

    Read the article

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More: Part I: Evolution, and death to WCF. After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data. CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, to treat one that has moved between parks as a single coaster with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name. In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that: I mostly wasn't interested in ORMs. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of nHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic CRUD operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework. Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw in some partial querying of certain tables (where you'll find image data), and you're splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there. The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time-consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver. Not everything is easy, though.
When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF, it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities because EF doesn't know what you're changing. Not really a big deal. There are a number of take-aways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORM's. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework into an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch. Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.
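    To give a flavor of the "code first" mapping being described, here is a minimal sketch; the entity names, table name, and convention tweak are invented for illustration and are not the actual CoasterBuzz schema.

        using System.Collections.Generic;
        using System.Data.Entity;
        using System.Data.Entity.ModelConfiguration.Conventions;

        public class Park
        {
            public int ParkId { get; set; }
            public string Name { get; set; }
        }

        public class Coaster
        {
            public int CoasterId { get; set; }
            public string Name { get; set; }

            // Navigation properties: EF wires up the related rows for you.
            public virtual Park Park { get; set; }
            public virtual ICollection<CoasterInstance> Instances { get; set; }
        }

        public class CoasterInstance
        {
            public int CoasterInstanceId { get; set; }
            public string Name { get; set; }
            public virtual Coaster Coaster { get; set; }
        }

        public class CoasterBuzzContext : DbContext
        {
            public DbSet<Park> Parks { get; set; }
            public DbSet<Coaster> Coasters { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Deviating from EF's default conventions, as the post describes,
                // starts here; table names are one example.
                modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
                modelBuilder.Entity<Coaster>().ToTable("tbl_Coaster");
            }
        }

    The "one line of code in your repository class" then amounts to something like context.Coasters.Include("Instances").FirstOrDefault(c => c.CoasterId == id), which hands the view a hydrated object graph.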

    Read the article

  • Farseer tutorial for the absolute beginners

    - by Bil Simser
    This post is inspired by (and somewhat a direct copy of) a couple of posts Emanuele Feronato wrote back in 2009 about Box2D (his tutorial was ActionScript 3 based for Box2D; this is C# XNA for the Farseer Physics Engine). Here's what we're building:

    What is Farseer

    The Farseer Physics Engine is a collision detection system with realistic physics responses to help you easily create simple hobby games or complex simulation systems. Farseer was built as a .NET version of Box2D (based on the Box2D.XNA port of Box2D). While the constructs and syntax have changed over the years, the principles remain the same. This tutorial will walk you through exactly what Emanuele created for Flash, but we'll be doing it using C#, XNA, and the Windows Phone platform. The first step is to download the library from its home on CodePlex. If you have NuGet installed, you can install the library itself using the NuGet package, but we'll also be using some code from the Samples source that can only be obtained by downloading the library. Once you have downloaded and unpacked the zip file into a folder and opened the solution, this is what you will get:

    The Samples XNA WP7 project (and content) has all the demos for Farseer. There's a wealth of info here and great examples to look at to learn from. The Farseer Physics XNA WP7 project contains the core libraries that do all the work. DebugView XNA contains an XNA-ready class to let you view debug data and information in the game draw loop (which you can copy into your project, or build the source and reference the assembly). The downloaded version has to be compiled, as it's only available in source format, so you can do that now if you want (open the solution file and rebuild everything). If you're using the NuGet package you can just install that. We only need the core library, and we'll be copying in some code from the samples later.

    Your first Farseer experiment

    Start Visual Studio and create a new project using the Windows Phone template, and call it whatever you want. It's time to edit Game1.cs:

        1 public class Game1 : Game
        2 {
        3     private readonly GraphicsDeviceManager _graphics;
        4     private DebugViewXNA _debugView;
        5     private Body _floor;
        6     private SpriteBatch _spriteBatch;
        7     private float _timer;
        8     private World _world;
        9
        10     public Game1()
        11     {
        12         _graphics = new GraphicsDeviceManager(this)
        13         {
        14             PreferredBackBufferHeight = 800,
        15             PreferredBackBufferWidth = 480,
        16             IsFullScreen = true
        17         };
        18
        19         Content.RootDirectory = "Content";
        20
        21         // Frame rate is 30 fps by default for Windows Phone.
        22         TargetElapsedTime = TimeSpan.FromTicks(333333);
        23
        24         // Extend battery life under lock.
        25         InactiveSleepTime = TimeSpan.FromSeconds(1);
        26     }
        27
        28     protected override void LoadContent()
        29     {
        30         // Create a new SpriteBatch, which can be used to draw textures.
        31         _spriteBatch = new SpriteBatch(_graphics.GraphicsDevice);
        32
        33         // Load our font (DebugViewXNA needs it for the DebugPanel)
        34         Content.Load<SpriteFont>("font");
        35
        36         // Create our World with a gravity of 10 vertical units
        37         if (_world == null)
        38         {
        39             _world = new World(Vector2.UnitY*10);
        40         }
        41         else
        42         {
        43             _world.Clear();
        44         }
        45
        46         if (_debugView == null)
        47         {
        48             _debugView = new DebugViewXNA(_world);
        49
        50             // default is shape, controller, joints
        51             // we just want shapes to display
        52             _debugView.RemoveFlags(DebugViewFlags.Controllers);
        53             _debugView.RemoveFlags(DebugViewFlags.Joint);
        54
        55             _debugView.LoadContent(GraphicsDevice, Content);
        56         }
        57
        58         // Create and position our floor
        59         _floor = BodyFactory.CreateRectangle(
        60             _world,
        61             ConvertUnits.ToSimUnits(480),
        62             ConvertUnits.ToSimUnits(50),
        63             10f);
        64         _floor.Position = ConvertUnits.ToSimUnits(240, 775);
        65         _floor.IsStatic = true;
        66         _floor.Restitution = 0.2f;
        67         _floor.Friction = 0.2f;
        68     }
        69
        70     protected override void Update(GameTime gameTime)
        71     {
        72         // Allows the game to exit
        73         if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
        74             Exit();
        75
        76         // Create a random box every second
        77         _timer += (float) gameTime.ElapsedGameTime.TotalSeconds;
        78         if (_timer >= 1.0f)
        79         {
        80             // Reset our timer
        81             _timer = 0f;
        82
        83             // Determine a random size for each box
        84             var random = new Random();
        85             var width = random.Next(20, 100);
        86             var height = random.Next(20, 100);
        87
        88             // Create it and store the size in the user data
        89             var box = BodyFactory.CreateRectangle(
        90                 _world,
        91                 ConvertUnits.ToSimUnits(width),
        92                 ConvertUnits.ToSimUnits(height),
        93                 10f,
        94                 new Point(width, height));
        95
        96             box.BodyType = BodyType.Dynamic;
        97             box.Restitution = 0.2f;
        98             box.Friction = 0.2f;
        99
        100             // Randomly pick a location along the top to drop it from
        101             box.Position = ConvertUnits.ToSimUnits(random.Next(50, 400), 0);
        102         }
        103
        104         // Advance all the elements in the world
        105         _world.Step(Math.Min((float) gameTime.ElapsedGameTime.TotalMilliseconds*0.001f, (1f/30f)));
        106
        107         // Clean up any boxes that have fallen offscreen
        108         foreach (var box in from box in _world.BodyList
        109                             let pos = ConvertUnits.ToDisplayUnits(box.Position)
        110                             where pos.Y > _graphics.GraphicsDevice.Viewport.Height
        111                             select box)
        112         {
        113             _world.RemoveBody(box);
        114         }
        115
        116         base.Update(gameTime);
        117     }
        118
        119     protected override void Draw(GameTime gameTime)
        120     {
        121         GraphicsDevice.Clear(Color.FromNonPremultiplied(51, 51, 51, 255));
        122
        123         _spriteBatch.Begin();
        124
        125         var projection = Matrix.CreateOrthographicOffCenter(
        126             0f,
        127             ConvertUnits.ToSimUnits(_graphics.GraphicsDevice.Viewport.Width),
        128             ConvertUnits.ToSimUnits(_graphics.GraphicsDevice.Viewport.Height), 0f, 0f,
        129             1f);
        130         _debugView.RenderDebugData(ref projection);
        131
        132         _spriteBatch.End();
        133
        134         base.Draw(gameTime);
        135     }
        136 }
        137

    Lines 4: Declare the debug view we'll use for rendering (more on that later).

    Lines 8: Declare the _world variable of type World. World is the main object to interact with the Farseer engine. It stores all the joints and bodies, and is responsible for stepping through the simulation.

    Lines 12-17: Create the graphics device we'll be rendering on. This is an XNA component and we're just setting it to be the same size as the phone and toggling it to be full screen (no system tray).
    Lines 34: We create a SpriteFont here by adding it to the project. It's called "font" because that's what the DebugView uses, but you can name it whatever you want (and if you're not using DebugView for your production app you might have several fonts).

    Lines 37-44: We create the physics environment that Farseer uses to contain all the objects by specifying it here. We're using Vector2.UnitY*10 to represent the gravity to be used in the environment. In other words, 10 units going in a downward motion.

    Lines 46-56: We create the DebugViewXNA here. This is copied from the […] from the code you downloaded and provides the ability to render all entities onto the screen. In a production release you'll be doing the rendering yourself of each object, but we cheat a bit for the demo and let the DebugView do it for us. The other thing it can provide is to render out a panel of debugging information while the simulation is going on. This is useful in tracking down objects, figuring out how something works, or just keeping track of what's in the engine.

    Lines 58-67: Here we create a rigid body (Farseer only supports rigid bodies) to represent the floor that we'll drop objects onto. We create it by using one of the Farseer factories and specifying the width and height. The ConvertUnits class is copied from the samples code as-is and lets us toggle between display units (pixels) and simulation units (usually metres). We're creating a floor that's 480 pixels wide and 50 pixels high (converting them to SimUnits for the engine to understand). We also position it near the bottom of the screen. Values are in metres, and when specifying values they refer to the centre of the body object.

    Lines 77-78: The game Update method fires 30 times a second, too fast to be creating objects this quickly. So we use a variable to track the elapsed seconds since the last update, accumulate that value, then create a new box to drop when 1 second has passed.

    Lines 89-94: We create a box the same way we created our floor (coming up with a random width and height for the box).

    Lines 96-101: We set the box to be Dynamic (rather than Static like the floor object) and position it somewhere along the top of the screen. And now you created the world. Gravity does the rest and the boxes fall to the ground. Here's the result: Farseer Physics Engine Demo using XNA

    Lines 105: We must update the world at every frame. We do this with the Step method, which takes in the time interval.

    Lines 108-114: Body objects are added to the world but never automatically removed (because Farseer doesn't know about the display world, it has no idea if an item is on the screen or not). Here we just loop through all the entities, and anything that's dropped off the screen (below the bottom) gets removed from the World. This keeps our entity count down (the simulation never has more than 30 or 40 objects in the world no matter how long you run it for). Too many entities and the app will grind to a halt.

    Lines 125-130: Farseer knows nothing about the UI, so how to draw things is entirely up to you. Farseer is just tracking the objects and moving them around using the physics engine and its rules. You'll still use XNA to draw items (using the SpriteBatch.Draw method), so you can load up your usual textures and draw items and pirates and dancing zombies all over the screen. Instead, in this demo we're going to cheat a little. In the sample code for Farseer you can download there's a project called DebugView XNA.
    This project contains the DebugViewXNA class, which just handles iterating through all the bodies in the world and drawing the shapes. So we call the RenderDebugData method of that class here to draw everything correctly. In the case of this demo we just want to draw shapes, so take a look at the source code for the DebugViewXNA class to see how it extracts all the vertices for the shapes created (in this case simple boxes) and draws them. You'll learn a *lot* about how Farseer works just by looking at this class. That's it, that's all. Simple, huh? Hope you enjoy the code and library. Physics is hard and requires some math skills to really grok. The Farseer Physics Engine makes it pretty easy to get up and running and start building games. In future posts we'll get more in-depth with things you can do with the engine, so this is just the beginning. Enjoy!
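    For completeness, here is a hedged sketch of the "render it yourself" path the post mentions, drawing each box with SpriteBatch instead of DebugViewXNA. It assumes a _pixel field holding a 1x1 white Texture2D created in LoadContent(); that field is mine, not part of the tutorial code.

        // Inside Draw(), in place of the RenderDebugData call:
        _spriteBatch.Begin();
        foreach (var body in _world.BodyList)
        {
            if (body.UserData == null) continue;   // the floor was created without UserData

            var size = (Point)body.UserData;       // box size stored at creation (line 94)
            var position = ConvertUnits.ToDisplayUnits(body.Position);

            _spriteBatch.Draw(
                _pixel,                            // assumed 1x1 white texture
                position,                          // body position is the centre of the box
                null,                              // no source rectangle
                Color.White,
                body.Rotation,                     // rotation in radians
                new Vector2(0.5f, 0.5f),           // origin: centre of the 1x1 texture
                new Vector2(size.X, size.Y),       // scale the pixel up to the box size
                SpriteEffects.None,
                0f);
        }
        _spriteBatch.End();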

    Read the article

  • How to Implement Single Sign-On between Websites

    - by hmloo
    Introduction

    Single sign-on (SSO) is a way to control access to multiple related but independent systems: a user only needs to log in once and gains access to all the other systems. There are a lot of commercial systems that provide a single sign-on solution, and you can also choose open source solutions like OpenSSO, CAS, etc. Both of these use centralized authentication and provide a more robust authentication mechanism. But if each system has its own authentication mechanism, how do we provide a seamless transition between them? Here I will show you the case.

    How it Works

    The method we'll use is based on a secret key shared between the sites. The origin site has a method to build up a hashed authentication token with some other parameters and redirect the user to the target site.

        Variable    | Status   | Description
        ssoEncode   | required | hash(ssoSharedSecret + "," + ssoTime + "," + ssoUserName)
        ssoTime     | required | timestamp in YYYYMMDDHHMMSS format, used to prevent playback attacks
        ssoUserName | required | unique username; required when a user is logged in

    Note: the variables will be sent via POST for security reasons.

    Building a Single Sign-On Solution

    The origin site has a function to: 1. Create the URL for your request. 2. Generate the required authentication parameters. 3. Redirect to the target site.

        using System;
        using System.Web.Security;
        using System.Text;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string postbackUrl = "http://www.targetsite.com/sso.aspx";
                string ssoTime = DateTime.Now.ToString("yyyyMMddHHmmss");
                string ssoUserName = User.Identity.Name;
                string ssoSharedSecret = "58ag;ai76"; // get this from config or similar
                string ssoHash = FormsAuthentication.HashPasswordForStoringInConfigFile(
                    string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5");
                string value = string.Format("{0}:{1},{2}", ssoHash, ssoTime, ssoUserName);

                Response.Clear();
                StringBuilder sb = new StringBuilder();
                sb.Append("<html>");
                sb.AppendFormat(@"<body onload='document.forms[""form""].submit()'>");
                sb.AppendFormat("<form name='form' action='{0}' method='post'>", postbackUrl);
                sb.AppendFormat("<input type='hidden' name='t' value='{0}'>", value);
                sb.Append("</form>");
                sb.Append("</body>");
                sb.Append("</html>");
                Response.Write(sb.ToString());
                Response.End();
            }
        }

    The target site has a function to: 1. Get the authentication parameters. 2. Validate the parameters with the shared secret. 3. If the user is valid, authenticate and redirect to the target page. 4. If the user is invalid, show errors and return.
        using System;
        using System.Web.Security;
        using System.Text;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                if (!IsPostBack)
                {
                    if (User.Identity.IsAuthenticated)
                    {
                        Response.Redirect("~/Default.aspx");
                    }
                }

                if (Request.Params.Get("t") != null)
                {
                    string ticket = Request.Params.Get("t");
                    char[] delimiters = new char[] { ':', ',' };
                    string[] ssoVariable = ticket.Split(delimiters, StringSplitOptions.None);
                    string ssoHash = ssoVariable[0];
                    string ssoTime = ssoVariable[1];
                    string ssoUserName = ssoVariable[2];
                    DateTime appTime = DateTime.MinValue;
                    int offsetTime = 60; // get this from config or similar

                    try
                    {
                        appTime = DateTime.ParseExact(ssoTime, "yyyyMMddHHmmss", null);
                    }
                    catch
                    {
                        //show error
                        return;
                    }

                    if (Math.Abs(appTime.Subtract(DateTime.Now).TotalSeconds) > offsetTime)
                    {
                        //show error
                        return;
                    }

                    bool isValid = false;
                    string ssoSharedSecret = "58ag;ai76"; // get this from config or similar
                    string hash = FormsAuthentication.HashPasswordForStoringInConfigFile(
                        string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5");

                    if (string.Compare(ssoHash, hash, true) == 0)
                    {
                        if (Math.Abs(appTime.Subtract(DateTime.Now).TotalSeconds) > offsetTime)
                        {
                            //show error
                            return;
                        }
                        else
                        {
                            isValid = true;
                        }
                    }

                    if (isValid)
                    {
                        //Do authenticate;
                    }
                    else
                    {
                        //show error
                        return;
                    }
                }
                else
                {
                    //show error
                }
            }
        }

    Summary

    This is a very simple and basic SSO solution, and its main advantage is its simplicity: it only requires adding a single page to do SSO authentication, with no changes to the existing system infrastructure.
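    One design note on the hashing: the sample builds the token with an MD5 digest via FormsAuthentication.HashPasswordForStoringInConfigFile. If you wanted a keyed hash instead, an HMAC-SHA256 variant of the token builder might look like this sketch; the SsoToken class and its method are mine, not part of the original sample.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        public static class SsoToken
        {
            // Builds the same "hash:time,user" token, but keyed with
            // HMAC-SHA256 instead of an MD5 digest of the concatenated secret.
            public static string Create(string sharedSecret, string ssoTime, string userName)
            {
                string payload = string.Format("{0},{1}", ssoTime, userName);
                using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(sharedSecret)))
                {
                    byte[] digest = hmac.ComputeHash(Encoding.UTF8.GetBytes(payload));
                    var hex = new StringBuilder(digest.Length * 2);
                    foreach (byte b in digest)
                        hex.Append(b.ToString("x2"));   // hex, like the original hash format
                    return string.Format("{0}:{1},{2}", hex, ssoTime, userName);
                }
            }
        }

    The target site would recompute the HMAC over the received time and username and compare; the time-window check stays exactly the same.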

    Read the article

  • CodePlex Daily Summary for Sunday, July 21, 2013

    Popular Releases

    - Magick.NET: Magick.NET 6.8.6.601: Magick.NET linked with ImageMagick 6.8.6.6.
    - MISAO: Ver. 5.33: Latest app and add-ins.
    - C# Intellisense for Notepad++: Initial release: Members auto-complete. Integration with native Notepad++ auto-completion. Auto "open bracket" for methods. Right-arrow to accept suggestions.
    - 51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.19.4: One Click Install from NuGet. This release introduces the 51Degrees.mobi IIS Vary Header Fix. When compression and caching are used in IIS, the Vary header is overwritten, making intelligent caching with dynamic content impossible. Find out more about installing the Vary Header fix. Changes to Version 2.1.19.4: Handlers now have a 'Count' property. This is an integer value that shows how many devices in the dataset use that handler. Provider.cs -> GetDeviceInfoByID to address a problem w...
    - SalarDbCodeGenerator: SalarDbCodeGenerator v2.1.2013.0719: Version 2.1.2013.0719, 2013/7/19. Pattern changes: DapperContext pattern is added. All patterns are updated to work with one-to-one relations. Changes: One-to-one relation is supported. Minor bug fixes.
    - Player Framework by Microsoft: Player Framework for Windows and WP (v1.3 beta 2): Includes all changes in v1.3 beta 1. Additional support for Windows 8.1 Preview. New API (JS): addTextTrack. New API (JS): msKeys. New API (JS): msPlayToPreferredSourceUri. New API (JS): msSetMediaKeys. New API (JS): onmsneedkey. New API (Xaml): SetMediaStreamSource method. New API (Xaml): Stretch property. New API (Xaml): StretchChanged event. New API (Xaml): AreTransportControlsEnabled property. New API (Xaml): IsFullWindow property. New API (Xaml): PlayToPreferredSourceUri proper...
    - Outlook 2013 Add-In: Multiple Calendars: As per popular request, this new version includes: Support for multiple calendars. This can be enabled in the configuration by choosing which ones to show/hide appointments from. In some cases (public folders) it may time out and crash, and so far it only supports "My Calendars", so not shared ones yet. Also, they're currently shown in the same font/color so there are no confusions with color categories, but please drop me a line on any suggestions you'd like to see implemented. Added fri...
    - Circuit Diagram: Circuit Diagram 2.0 Beta 2: New in this release: Show grid in editor. Cut/copy/paste support. Bug fixes.
    - DaRenamer: Renamer 2.1.0.5: Version 2.1.0.5: fixed minor bug.
    - Install Verify Tool: Install Verify Tool V 1.0: Win Service, Web Service, Win Service Client, Web Service Client.
    - Orchard Project: Orchard 1.7 RC: Planning released.
    - Terminals: Version 3.1 - Release: Changes since version 3.0: 15992 Unified usage of icons in user interface. Added context menu in Organize favorites grid. Fixed: 34219, 34210, 34223, 33981, 34209. Install notes: No changes in database (use database from release 3.0). No upgrade of configuration, passwords, credentials or favorites. See also upgrade notes for release 3.0.
    - PMU Connection Tester: PMU Connection Tester v4.4.0: This is the current release build of the PMU Connection Tester, version 4.4.0. This version of the connection tester was released with openPDC 1.5 SP1 and openPDC 2.0 BETA. This application requires that .NET 4.0 already be installed on your system. Note: this is the last release of the PMU Connection Tester that will be built on .NET 4.0 using the TVA Code Library and the Time-series Framework.
Future releases of the PMU Connection Tester will be built on .NET 4.5 (or later) using the Grid Sol...HiUpdateTools - easy publish and update your app: HiUpdateTools Add-in 1.0.0.5: - Generate ClientConfig.xml and adding to the project - Set ClientConfig.xml option "CopyToOutputDirectory"= Copy if newer - Fix client path not ending the backslash - Add Client assembly to VSX package - On first use, the tool is added to the reference to the client assembly - Fix client application - Multi-instance application - Run single instance of update applicationopen gaze and mouse analyzer: Ogama 4.4 BETA: This beta was published on 16.07.2013 and includes fixes and improvements since last 4.3 release, mainly in the recording section which solves problems with tobii and mirametrix devices, see the source code tab for details. Please test it, if you have one of this devices and give me feedback using the issue tracker or discussion tabs. Don´t forget to install .Net 4 framework and SQL Express before installing Ogama. When using Tobii tracking devices, you have to install apple bonjour also. On...SpaceFlight: SpaceFlight_v1.1: Added VCRedist.exe , run this first if you get the "MSVCP100.dll is missing" issueAdvanced Resource Tab for Blend: Advanced Resource Tab 2.0: Added filtering of (sub)-resource items and collapsing / expanding of all resource dictionaries.Media Companion: Media Companion MC3.573b: XBMC Link - Let MC update your XBMC Library Fixes in place, Enjoy the XBMC Link function Well, Phil's been busy in the background, and come up with a Great new feature for Media Companion. Currently only implemented for movies. Once we're happy that's working with no issues, we'll extend the functionality to include TV shows. All the help for this is build into the application. Go to General Preferences - XBMC Link for details. Help us make it better* Currently only tested on local and ...Wsus Package Publisher: Release v1.2.1307.15: Fix a bug where WPP crash if 'ShowPendingUpdates' is start with wrong credentials. Fix a bug where WPP crash if ArrivalDateAfter and ArrivalDateBefore is equal in the ComputerView. Add a filter in the ComputerView. (Thanks to NorbertFe for this feature request) Add an option, when right-clicking on a computer, you can ask for display the current logon user of the remote computer. Add an option in settings to choose if WPP ping remote computers using IPv4, IPv6 or IPv6 and, if fail, IP...Lab Of Things: vBeta1: Initial release of LoTNew ProjectsBlindspot: This project aims to create a fully-functional windowless desktop application allowing blind/visually impaired music fans the chance to access Spotify.HelloReading: ???????MonteMediaCC: The Monte Media Library is a Java library for processing media data. Supported media formats include still images, video, audio and meta-data. NETDeob: Deobfuscator and Unpacker for .NET Files.NthCatalanNumber: Write a program to calculate the Nth Catalan number by given N. http://en.wikipedia.org/wiki/Catalan_numberPokemon Battle Online 0791: ???????project site: the-west minimapPSeG Server FIles: PSeG Server FilesSample VariableSizedWrapGrid: This is an example of the use of particular VariableSizedWrapGrid GridView control. Where we can set the size of each item as needed It can make an appearance oServices Monitoring Management Pack: La supervision des services automatiques est un élément qui est déficient dans Operations Manager. 
Ce « Management Pack » sert à surveiller les services automatSinaIsTestingHisNewProject: this project is only for testing this site and any copying without my permission will be sewed by me and will be tracked by CIA and FBI and will be HeadShotSum of a sequence: Write a program that, for a given two integer numbers N and X, calculates the sum S = 1 + 1!/X + 2!/X2 + … + N!/XN Synchrophasor Analytics: Synchrophasor Analytics is a front end data processing and conditioning for downstream phasor based applications and an extension for development and analysis.tcp-bridge: a tcp bridge service to redirect incoming connections to another machine by using another incoming or outgoing connectionTypeScript Class Library: The TypeScript Class Library WPDialog: Library for developing app dialogs for Windows phone similar to Monotouch.DialogWPF File Renamer: Simple file renaming application made to brush up on my WPF data binding and MVVM skills.

    Read the article

  • Cocos2d: Moving background on update: offset issue

    - by mm24
    Working with Objective-C, iOS and Cocos2d, I am developing a vertical-scrolling shooter game for iPhone (Retina display models with 640 × 960 pixel resolution). My basic algorithm works as follows: I create two instances of an image that is exactly 640 × 960 pixels, which I will call imageA and imageB. I then place the two images exactly 480.0f apart, as the screen size of a CCScene defaults to 480.0f. On each update call I move both images by the same value and make sure their offset stays at 480.0f. However, when running the game I see a one-pixel-high line between the two images. This literally bugs me and I would like to fix it. What am I doing wrong? A zoom-in on the background shows the "offset line": the white line dividing the two background images is not meant to exist, as both images are completely black. If I change the yPositionOfSecondElement value to 479.0f, the two images overlap correctly until the first loop, but as soon as the loop starts they end up with an offset of -1.0f. Here is the initialization code:

    -(void) init {
        //...
        screenHeight = 480.0f;
        // I tried subtracting an offset of -1, but eventually the image would go wrong again
        yPositionOfSecondElement = screenHeight;
        yPositionOfFirstElement = 0.0f;

        loopedBackgroundImageInstanceA = [BackgroundLoopedImage loopImageForLevel:levelName];
        loopedBackgroundImageInstanceA.anchorPoint = CGPointMake(0.5f, 0.0f);
        loopedBackgroundImageInstanceA.position = CGPointMake(160.0f, yPositionOfFirstElement);
        [node addChild:loopedBackgroundImageInstanceA z:zLevelBackground];
        //loopedBackgroundImageInstanceA.color = ccRED;

        loopedBackgroundImageInstanceB = [BackgroundLoopedImage loopImageForLevel:levelName];
        loopedBackgroundImageInstanceB.anchorPoint = CGPointMake(0.5f, 0.0f);
        loopedBackgroundImageInstanceB.position = CGPointMake(160.0f, yPositionOfSecondElement);
        [node addChild:loopedBackgroundImageInstanceB z:zLevelBackground];
        //....
    }

    And here is the move code, called at each update:

    -(void) moveBackgroundSprites:(BackgroundLoopedImage*)imageA
                                 :(BackgroundLoopedImage*)imageB
                                 :(ccTime)delta {
        isEligibleToMove = false;

        // Round the step to two decimals to avoid rounding errors
        float yStep = delta * [GameController sharedGameController].currentBackgroundSpeed;
        NSString* formattedNumber = [NSString stringWithFormat:@"%.02f", yStep];
        yStep = atof([formattedNumber UTF8String]);

        // First adjust the position of the images
        [self adjustPosition:imageA :imageB];

        // Then read the actual image positions
        CGPoint posA = imageA.position;
        CGPoint posB = imageB.position;

        // Verify that the offset between the images is still the required difference
        // (should be 479.0f)
        if (![self verifyCheckSum:posA :posB]) {
            CCLOG(@"does not comply A");
        }

        // Compute the hypothetical new positions
        CGPoint newPosA = CGPointMake(posA.x, posA.y - yStep);
        CGPoint newPosB = CGPointMake(posB.x, posB.y - yStep);

        // Reposition the stripes when they're out of bounds
        if (newPosA.y <= -yPositionOfSecondElement) {
            newPosA.y = yPositionOfSecondElement;
            [imageA shuffle];
            if (timeElapsed >= endTime && hasReachedEndLevel == FALSE) {
                hasReachedEndLevel = TRUE;
                shouldMoveImageEnd = TRUE;
            }
        }
        else if (newPosB.y <= -yPositionOfSecondElement) {
            newPosB.y = yPositionOfSecondElement;
            [imageB shuffle];
            if (timeElapsed >= endTime && hasReachedEndLevel == FALSE) {
                hasReachedEndLevel = TRUE;
                shouldMoveImageEnd = TRUE;
            }
        }

        // Note: this check (and check C below) still compares the stale
        // posA/posB values, not the freshly computed newPosA/newPosB
        if (![self verifyCheckSum:posA :posB]) {
            CCLOG(@"does not comply B");
        }

        imageA.position = newPosA;
        imageB.position = newPosB;

        if (![self verifyCheckSum:posA :posB]) {
            CCLOG(@"does not comply C");
        }

        isEligibleToMove = true;
    }

    -(BOOL) verifyCheckSum:(CGPoint)posA :(CGPoint)posB {
        float sum = 0.0f;
        if (posA.y > posB.y) {
            sum = posA.y - posB.y;
        }
        else if (posB.y > posA.y) {
            sum = posB.y - posA.y;
        }
        else {
            return false;
        }
        return (sum == yPositionOfSecondElement);
    }

    And here is what happens on the update:

    if (shouldMoveImageA && shouldMoveImageB) {
        if (isEligibleToMove) {
            [self moveBackgroundSprites:loopedBackgroundImageInstanceA
                                       :loopedBackgroundImageInstanceB
                                       :delta];
        }
    }

    Forget about shouldMoveImageA and shouldMoveImageB; they only matter when the background reaches the end of the level, and that part works.
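
    A side note on the wrap step, since this is where one-pixel gaps typically come from: when an image scrolls out of bounds and is reset to the fixed constant yPositionOfSecondElement, the sub-pixel amount by which it overshot is discarded, so the pair can drift apart by that remainder. A common remedy is to reposition the wrapped image relative to its sibling instead of relative to a constant. The sketch below shows the general technique with illustrative names; it is written in C# as a language-neutral sketch and is not the poster's code:

    // A minimal sketch of drift-free wrapping for a two-image scrolling background.
    // Names are illustrative; 480 matches the screen height in the question.
    public class ScrollingBackground
    {
        const float ScreenHeight = 480f;
        float yA = 0f;            // vertical position of image A
        float yB = ScreenHeight;  // vertical position of image B

        public void Update(float yStep)
        {
            yA -= yStep;
            yB -= yStep;

            // Reposition relative to the sibling image, not to a constant:
            // any sub-pixel overshoot is then shared by both images,
            // so no gap can accumulate between them.
            if (yA <= -ScreenHeight)      yA = yB + ScreenHeight;
            else if (yB <= -ScreenHeight) yB = yA + ScreenHeight;
        }
    }

    Because the wrapped image lands exactly one screen height above the other, any fractional overshoot is inherited by both images and the seam stays closed.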

    Read the article

  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
    Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger an on-premise Oracle E-Business Suite Service Request creation for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications; the use cases above are the reverse, i.e. receiving data FROM cloud applications (SaaS) TO on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise, with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are security, i.e. shielding enterprise resources, and scalability, i.e. minimizing firewall latency. Let me use an analogy to help visualize the patterns: the on-premise system is your home - with your most valuable possessions - and the SaaS app is your favorite online store, which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safeguard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns.

    Pattern: Pull from Cloud. The on-premise system polls the SaaS application and picks up messages instead of having them delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. It is particularly suited to integration approaches where messages trickle in and can be centralized and batched, e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service. In the home analogy, you avoid any deliveries to your home and instead go to the post office/UPS/FedEx store to pick up your parcel. Every time. Pros: on-premise assets are not exposed to the Internet, and firewall issues are avoided by only initiating outbound connections. Cons: polling mechanisms may affect performance and may not satisfy near-real-time requirements.

    Pattern: Open Firewall Ports. The on-premise system exposes the web services that need to be invoked by the cloud application. This requires opening up firewall ports and routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large-volume data integration. In the home analogy, you have now decided to receive parcels instead of going to the post office every time. A mail slot cut in the door lets the postman drop small parcels, but there is still concern about cutting new holes for larger packages. Pros: the optimal pattern for near-real-time needs; simpler administration once the service is provisioned. Cons: needs firewall ports to be opened up for new services; may not suffice for batch integration requiring direct database access.

    Pattern: Virtual Private Networking. The on-premise network is "extended" to the cloud (or to an intermediary on-demand/managed service offering) using Virtual Private Networking (VPN), so that messages are delivered to the on-premise system over a trusted channel. In the home analogy, you entrust a set of keys to a neighbor or property manager who receives the packages and then drops them inside your home.

    Pros: individual firewall ports don't need to be opened; more suited to high-scalability needs; can support large-volume data integration; easier to manage one connection than a multitude of open ports. Cons: VPN setup; specific hardware support; requires the cloud provider to support virtual private computing.

    Pattern: Reverse Proxy / API Gateway. The on-premise system uses reverse proxy "API gateway" software in the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms, e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing and throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations in the DMZ. Custom-built implementations are also possible if specific functionality (such as message store-and-forward) is needed. In the home analogy, this pattern sits between cutting mail slots and handing over keys: you install (and maintain) a mailbox on your premises outside your door. The post office delivers the parcels to your mailbox, from where you can securely retrieve them. Pros: very secure, very flexible. Cons: introduces a new software component; needs DMZ deployment and management.

    Pattern: On-Premise Agent (Tunneling). A lightweight "agent" sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection using either pull- or push-based approaches, using (or abusing, depending on your viewpoint) the HTTP protocol. Programming protocols such as Comet, WebSockets, HTTP CONNECT and HTTP SSH tunneling are possible implementation options. In the home analogy, a resident receives the parcel from the postal worker by opening the door, but still takes precautions with chain locks and package inspections. Pros: lightweight software; IT doesn't need to set anything up. Cons: may bypass critical firewall checks, e.g. virus scans; separate software download; proliferation of non-IT-managed software.

    Conclusion. The patterns above are some of the most commonly encountered for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, synchronous vs. asynchronous implementation, near-real-time vs. batch expectations, cloud provider capabilities, budget, and more. In some cases the basic "Pull from Cloud" may be acceptable, whereas in others an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.
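
    As a concrete illustration of the first pattern, here is a minimal sketch of an on-premise "Pull from Cloud" poller. The endpoint URL, authorization header and hourly interval are placeholders for illustration only, not an actual Oracle RightNow or Oracle Messaging Service API:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    // Sketch of the "Pull from Cloud" pattern: the on-premise side opens an
    // outbound HTTPS connection on a schedule and drains pending events.
    class CloudPoller
    {
        static async Task Main()
        {
            using (var client = new HttpClient())
            {
                // Hypothetical credential; real SaaS APIs vary.
                client.DefaultRequestHeaders.Add("Authorization", "Bearer <token>");

                while (true)
                {
                    // Outbound call only: no inbound firewall port is opened.
                    string batch = await client.GetStringAsync(
                        "https://saas.example.com/events/pending");
                    Console.WriteLine("Fetched batch: " + batch);

                    // Hand the batch to the on-premise mediation layer here
                    // (e.g. a queue consumed by Oracle Service Bus).

                    await Task.Delay(TimeSpan.FromMinutes(60)); // hourly schedule
                }
            }
        }
    }

    The key property of the pattern is visible in the code: the connection is always initiated from inside the firewall, which is exactly why on-premise assets stay unexposed, and also why latency is bounded by the polling interval.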

    Read the article

< Previous Page | 432 433 434 435 436 437 438 439 440 441 442 443  | Next Page >