Search Results

Search found 3870 results on 155 pages for 'fill'.


  • FY12 Partner Kickoff – Are you Ready?

    - by user715249
    Want to know what Oracle has up its sleeve for FY12? Join us on June 28th (or 29th depending on your region) as we kick off the new year and fill you in on our latest plans. This live, interactive session will be hosted by Judson Althoff and he’s bringing some top Oracle executives into the studio with him. Take a peek below as Lydia Smyers, Oracle WW A&C vice president, talks about this upcoming event. As if that weren’t enough, Oracle President Mark Hurd will update you on his focus for partners in FY12 and OPN will be making a special announcement for our ISV partners highlighting some exciting new offerings on how we will go to market together. But wait, there’s more – you’ll also hear from Oracle product executives and regional sales leaders outlining their priorities for the upcoming year. Phew – we’re tired just thinking about all of the great content that will be shared on June 28th/29th. There are a lot of exciting announcements in store for you later this month, so tune in for the latest updates! Register now to get your name on ‘the list’. The OPN Communications Team

    Read the article

  • Selling Visual Studio ALM

    - by Tarun Arora
    Introduction As a consultant I have been selling Application Lifecycle Management services using Visual Studio and Team Foundation Server. I’ve been contacted various times by friends working in organizations telling me that the ALM processes in their company were benchmarked when dinosaurs walked the earth. Most of these individuals already know the great features Microsoft ALM tools offer and are keen to start a conversation with the CIO but don’t exactly know where to start. It is very important how you engage in your first conversation. If you start the conversation with ‘There is this great tooling from Microsoft which offers amazing features to boost developer productivity, … ‘ from experience I can tell you the reply from your CIO would be ‘I already know! Our existing landscape has a combination of bleeding edge open source and cutting edge licensed tools which already cover these features quite well; moreover, Microsoft products have a high licensing cost associated with them.’ You will always find it harder to sell by feature; the trick is to highlight the gap in the existing processes & tools and then highlight the impact of these gaps on the overall development process. By now you will have captured enough attention to show off how the ALM tooling offered by Microsoft not only fills those gaps but offers great value adds to take their development practices to the next level. Rangers ALM Assessment Guide Image 1 – Welcome! First look at the Rangers ALM assessment guide Most organizations already have some processes in place to cover aspects of ALM. How do you go about proving that there isn’t enough cover in place? This is where the Visual Studio ALM Rangers ALM Assessment guide can help. The ALM assessment guide is really a tool that helps you gather information about development practices and processes within a customer's environment. Several questionnaires are used to identify the current state of individual development lifecycle areas and decide on a desired state for those processes. It also presents guidance and roll-up summaries to help with recommendations moving forward. The ALM Rangers assessment guide can be downloaded from here. Image 2 – ALM Assessment guide divided into different functions of SDLC The assessment guide is divided into the different functions of the Software Development Lifecycle (listed below); this gives you the ability to assess how mature the company is in different areas of the SDLC. Architecture & Design Requirement Engineering & UX Development Software Configuration Management Governance Deployment & Operations Testing & Quality Assurance Project Planning & Management Each section has a set of questions; fill in the assessment by selecting “Never/Sometimes/Always” from the Answer column in the question sheets. Each answer has a weightage toward the overall score. Each question has a link next to it; clicking the link takes you to the Reference sheet, which gives you more details about the question along with a reason for “why you need to ask this question?”, “other ways to phrase the question” and “what to expect as an answer from the customer”. The trick is to engage the customer in a discussion. You need to probe a lot, listen to the customer and have a discussion with several team members, preferably without management, to ensure that you receive candid feedback.
This reminds me of a funny incident: during an ALM review a customer told me that they had a sophisticated semi-automated application deployment process; further discussions revealed that deployment actually involved 72 manual configuration steps per production node. Such observations can be recorded in the Issue Brainstorming worksheet for further consideration later. It is also worth mentioning the different levels of ALM maturity to the customer. By default the desired state of ALM maturity is set to Standard, but it is possible to set a desired state by area. You should strive for Advanced or Dynamic, and it always helps to explain the classification and its advantages. Image 3 – ALM levels by description The ALM assessment guide helps you arrive at a quantitative measure of the company’s ALM maturity. The resultant graph, plotted on a spider’s web, shows you the company’s current state of ALM maturity and the desired state of ALM maturity. Further, since the results are classified by area, you can immediately spot the areas where the customer needs immediate help. Image 4 – The spider’s web! The red cross icons are areas shouting out for immediate attention; the yellow exclamation icons are areas that need improvement. These icons are calculated from the difference between the current state of ALM maturity and the desired state of ALM maturity. Image 5 – Results by area Conclusion To conclude, the Rangers ALM assessment guide gives you the ability to: measure the customer’s current ALM maturity level, understand the ALM maturity level the customer desires to achieve, and capture a healthy list of issues the customer wants to brainstorm further. Now what’s next…? Download and get started with the Rangers ALM Assessment Guide. If you have successfully captured the three pieces of information listed above you are in a great position to make recommendations on the identified areas, highlighting the benefits that Visual Studio ALM tools would offer. In the next post I will be covering how to take the ALM assessment results as the basis to actually convert your recommendation into a sale. Remember to subscribe to http://feeds.feedburner.com/TarunArora. I would love to hear your feedback! If you have any recommendations on things that I should consider or any questions or feedback, feel free to leave a comment. *** A special thanks goes out to fellow rangers Willy, Ethem and Philip for reviewing the blog post and providing valuable feedback. ***

    Read the article

  • Advanced donut caching: using dynamically loaded controls

    - by DigiMortal
    Yesterday I solved a caching problem with a local community portal. I enabled output cache on SharePoint Server 2007 to make the site faster. Although caching works fine I needed to do some additional work, because there are some controls that show different content to different users. In this example I will show you how to use “donut caching” with user controls – a powerful way to drive some content around the cache. About donut caching Donut caching means that although you are caching your content you have some holes in it, so you can still affect the output that goes to the user. For example, you can cache the front page of your site and still show a welcome message that contains the correct user name. To get a better idea about donut caching I suggest you read ScottGu’s posting Tip/Trick: Implement "Donut Caching" with the ASP.NET 2.0 Output Cache Substitution Feature. Basically, donut caching uses the ASP.NET substitution control. In the output this control is replaced by the string you return from the static method bound to the substitution control. Again, take a look at the ScottGu blog posting I referred to above. Problem If you look at Scott’s example, its output is pretty plain and simple. All it does is write out the current user name as a string. Here are examples of my login area for anonymous and authenticated users: It is clear that outputting the mark-up for these views as a string built in code is pretty lame. Every little change in design will end up with a new version of the controls library, because some parts of the design “live” there. Solution: using user controls I worked out an easy solution to my problem. I used cache substitution and user controls together. I have three user controls: LogInControl – this is the proxy control that checks which “real” control to load. AnonymousLogInControl – the template and logic for the anonymous users’ login area. AuthenticatedLogInControl – the template and logic for the authenticated users’ login area. This is the control we render for each user separately, because it contains the user name and user profile fill percentage. The anonymous control is not very interesting because it is only about keeping mark-up in a separate file. The interesting parts are LogInControl and AuthenticatedLogInControl. Creating the proxy control The first thing was to create a control that has a substitution area where the “real” control is loaded. This proxy control should also be able to decide which control to load. The definition of the control is very primitive. <%@ Control EnableViewState="false" Inherits="MyPortal.Profiles.LogInControl" %> <asp:Substitution runat="server" MethodName="ShowLogInBox" /> But the code is a little bit tricky. Based on the current user instance we decide which login control to load. Then we create a page instance and load our control through it. When the control is loaded we call the DataBind() method. In this method we evaluate all fields in the loaded control (it was the best choice, as Load and other events will not be fired). Take a look at the code.
public static string ShowLogInBox(HttpContext context)
{
    var user = SPContext.Current.Web.CurrentUser;
    string controlName;

    if (user != null)
        controlName = "AuthenticatedLogInControl.ascx";
    else
        controlName = "AnonymousLogInControl.ascx";

    var path = "~/_controltemplates/" + controlName;
    var output = new StringBuilder(10000);

    using(var page = new Page())
    using(var ctl = page.LoadControl(path))
    using(var writer = new StringWriter(output))
    using(var htmlWriter = new HtmlTextWriter(writer))
    {
        ctl.DataBind();
        ctl.RenderControl(htmlWriter);
    }
    return output.ToString();
}

When the control is bound to data we ask it to render its contents to the StringBuilder. Now we have the output of the control as a string and we can return it from our method. Of course, notice how careful I am with disposing resources. :) The method that returns the contents for the substitution control is a static method that has no connection to a control instance, because when the page is read from the cache there are no control instances available. Conclusion As you saw, it was not very hard to use donut caching with user controls. Instead of writing the mark-up of the controls as strings in the static method that is bound to the substitution control, we can still use our user controls.
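A quick aside that is not part of the original post: the same callback can also be wired up programmatically with Response.WriteSubstitution instead of the declarative <asp:Substitution> control, since ShowLogInBox already matches the HttpResponseSubstitutionCallback signature (a static string method taking an HttpContext). A minimal sketch, assuming a plain ASP.NET WebForms page rather than the SharePoint control template used above:

protected void Page_Load(object sender, EventArgs e)
{
    // Registers ShowLogInBox as a post-cache substitution callback.
    // The callback runs on every request, even when the rest of the
    // page is served from the output cache.
    Response.WriteSubstitution(new HttpResponseSubstitutionCallback(ShowLogInBox));
}

Whether you use the declarative control or the programmatic call is mostly a matter of taste; the declarative form keeps the cache “hole” visible in the mark-up, which fits the user-control approach described here.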

    Read the article

  • Understanding each other in web development

    - by Pete Hotchkin
    During my career I have been lucky enough to work in several different roles within web development with many extremely talented people, from incredible designers who were passionate about the placement of every pixel right through to server administrators and DBAs who were always measuring the improvements they were making to their queries in the smallest possible unit. The problem I always faced was that more often than not I was stuck in the middle trying to mediate between these different functions and enable each side to understand the other’s point of view. In my experience, the main areas of contention between these functional groups have been at two key points: during the build phase and then when there is a problem post-build. During both of these times it is often easier for someone to pass the buck onto someone else than spend the time to understand the other person’s perspective. Below is a quick look at two upcoming tools that will not only speed up the build phase for each function, but also help when it comes to the issues faced once a site has been pushed live. In my experience a web project goes through several phases of development. The first of these is design, generally handled as Photoshop files which are then passed on to a front-end developer. This is the first point at which heated discussions can arise. One problem I’ve seen several times is that the designer doesn’t fully understand the platform constraints that need to be considered, and as a result has designed something that does not translate very well or is simply not possible. Working at Red Gate, I am lucky enough to be able to meet some amazing people, and this happened just the other day when I was introduced to Neil Kinnish and Pete Nelson, the creators of what I believe could be a great asset in this designer-developer relationship, Mixture. Mixture allows the front-end developer to quickly prototype a web page with built-in frameworks such as Bootstrap. It’s not an IDE, however; it just sits there in the background and monitors the project files, so every time you save a file from your favorite IDE it will compile things like LESS, compact your JavaScript and then automatically refresh your test browser so you can see the changes instantly. I think one of the best parts of this, however, is a single button that pushes the changed files up to the web so the designer can instantly see how far the developer has got and the problem that he is facing at that time, without the need to spend time setting up a remote server. I can see this being a real asset to remote teams where there needs to be a compromise between the designer and the front-end developer, or just to allow the designer to see how the build is progressing and suggest small alterations. Once the design has been built into the front end the designer’s job is generally done and there are no other points of contention between the designer and the other functions involved in building these web projects. As the project moves into the stage of integrating it into the back end and deploying it to the production server, other functions start to be pulled in and other issues arise, such as whether the back-end developer understands the frameworks they are using, like the routes that are in place in an MVC application or the number of database calls that the ORM layer is actually making.
There are many tools out there that can actually help with these problems, such as MiniProfiler, which gives you a quick snapshot of what is going on directly in the browser. For a slightly more in-depth look at what is happening, and to gain a deeper understanding of an application you may be working on, you may want to consider Glimpse. Created by Nik and Anthony, it is an application that sits at the bottom of your browser (installed via NuGet) which can show you information about how your application is pieced together and how the information on screen is being delivered as it happens. With a wealth of community-built plugins, such as ones for nHibernate and linq2SQL (full list of plugins on NuGet), it can be customized directly to your own setup to truly delve into the code and see what is happening, and it can help to reduce the number of confusing moments about whether it is your code that is going wrong or whether there is something more sinister happening directly on the server. All the tools that I have mentioned in this post help to do one thing above all, and that is to ease the barrier of understanding between the different functions that are involved in building and maintaining a web application. In my experience it is very easy to say “Well, that’s not my problem”, simply because the two functions involved don’t truly understand the other’s point of view. Software should not only be seen as a way to streamline our own working process or as a debugging tool, but also as a communication aid to improve the entire lifecycle of a web project. Glimpse is actually the project that I am the designer on, and I would love to get your feedback if you do decide to try it out. If you would like to share your own experiences of working on web projects, please fill in your details at https://www.surveymk.com/s/joinGlimpse or add a comment below and I will get in touch with you.

    Read the article

  • How to install Sweetcron on XAMPP

    - by Sushaantu
    This tutorial will take you through the installation steps required to install Sweetcron in XAMPP. I am taking the liberty of assuming that you have already installed XAMPP. First of all, download Sweetcron and copy the extracted “sweetcron” folder inside the htdocs folder in the XAMPP directory. You have to get a few things in place before installing Sweetcron: 1. Create a sweetcron database using MySQL. You can just use the name “sweetcron” and leave all the other settings, such as the MySQL connection collation, as they are. Now that you have the database ready you can configure a few settings to make Sweetcron work on XAMPP. 2. Open the config-sample PHP file with your text editor (something like Notepad++ or Komodo Edit is recommended), which is located in Sweetcron/system/application/config. You have to make a few changes in it. a. In the first settings you have to edit the value of $config ['base_url']. The default value is “http://www.your-site.com”; and you have to change that into “http://localhost/sweetcron/”; b. You also have to change the default setting of $config ['uri_protocol']. The value that you will see is “REQUEST_URI”; but you need to change that into “AUTO”; c. Now that we have made all the changes you can rename the file from config_sample to just config. 3. Open database-sample.php in the text editor. You need to make the edits regarding the database in here. a. The values of the database at the moment are like this: $db['default']['hostname'] = “localhost”; $db['default']['username'] = “”; $db['default']['password'] = “”; $db['default']['database'] = “”; $db['default']['dbdriver'] = “mysql”; $db['default']['dbprefix'] = “”; $db['default']['pconnect'] = TRUE; $db['default']['db_debug'] = TRUE; $db['default']['cache_on'] = FALSE; $db['default']['cachedir'] = “”; $db['default']['char_set'] = “utf8″; $db['default']['dbcollat'] = “utf8_general_ci”; You have to change that into $db['default']['hostname'] = “localhost”; $db['default']['username'] = “root”; $db['default']['password'] = “”; $db['default']['database'] = “sweetcron”; $db['default']['dbdriver'] = “mysql”; $db['default']['dbprefix'] = “”; $db['default']['pconnect'] = TRUE; $db['default']['db_debug'] = TRUE; $db['default']['cache_on'] = FALSE; $db['default']['cachedir'] = “”; $db['default']['char_set'] = “utf8″; $db['default']['dbcollat'] = “utf8_general_ci”; We have written the username as root along with the name of the database (sweetcron in my case). Since I was not using any password in XAMPP for the sweetcron database, I have left the password option empty. You can make suitable changes according to your system. Write down your password in the third line if you are using one in XAMPP. b. Now that we have made all the edits we can change the file name from database-sample to just database. 4. That leaves us with only one setting, and that is editing values in the .htaccess file with our text editor. The default values you will have in the .htaccess file are: Options +FollowSymLinks RewriteEngine On RewriteBase / RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?/$1 [L] and you just have to add “sweetcron” after RewriteBase in the third line. Options +FollowSymLinks RewriteEngine On RewriteBase /sweetcron RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?/$1 [L] 5. Now you are all set and done.
You can access Sweetcron on localhost by going to http://localhost/sweetcron/. You will see some text at the top which will prompt you to click on one script. Click that script and behold, you have your Sweetcron installation on XAMPP ready. Further, it will ask you to add details such as the Lifestream name, username and email address. Fill in those details and you will reach the admin panel.

    Read the article

  • My Right-to-Left Foot (T-SQL Tuesday #13)

    - by smisner
    As a business intelligence consultant, I often encounter the situation described in this month's T-SQL Tuesday, hosted by Steve Jones ( Blog | Twitter) – “What the Business Says Is Not What the  Business Wants.” Steve posed the question, “What issues have you had in interacting with the business to get your job done?” My profession requires me to have one foot firmly planted in the technology world and the other foot planted in the business world. I learned long ago that the business never says exactly what the business wants because the business doesn't have the words to describe what the business wants accurately enough for IT. Not only do technological-savvy barriers exist, but there are also linguistic barriers between the two worlds. So how do I cope? The adage "a picture is worth a thousand words" is particularly helpful when I'm called in to help design a new business intelligence solution. Many of my students in BI classes have heard me explain ("rant") about left-to-right versus right-to-left design. To understand what I mean about these two design options, let's start with a picture: When we design a business intelligence solution that includes some sort of traditional data warehouse or data mart design, we typically place the data sources on the left, the new solution in the middle, and the users on the right. When I've been called in to help course-correct a failing BI project, I often find that IT has taken a left-to-right approach. They look at the data sources, decide how to model the BI solution as a _______ (fill in the blank with data warehouse, data mart, cube, etc.), and then build the new data structures and supporting infrastructure. (Sometimes, they actually do this without ever having talked to the business first.) Then, when they show what they've built to the business, the business says that is not what we want. Uh-oh. I prefer to take a right-to-left approach. Preferably at the beginning of a project. But even if the project starts left-to-right, I'll do my best to swing it around so that we’re back to a right-to-left approach. (When circumstances are beyond my control, I carry on, but it’s a painful project for everyone – not because of me, but because the approach just doesn’t get to what the business wants in the most effective way.) By using a right to left approach, I try to understand what it is the business is trying to accomplish. I do this by having them explain reports to me, and explaining the decision-making process that relates to these reports. Sometimes I have them explain to me their business processes, or better yet show me their business processes in action because I need pictures, too. I (unofficially) call this part of the project "getting inside the business's head." This is starting at the right side of the diagram above. My next step is to start moving leftward. I do this by preparing some type of prototype. Depending on the nature of the project, this might mean that I simply mock up some data in a relational database and build a prototype report in Reporting Services. If I'm lucky, I might be able to use real data in a relational database. I'll either use a subset of the data in the prototype report by creating a prototype database to hold the sample data, or select data directly from the source. It all depends on how much data there is, how complex the queries are, and how fast I need to get the prototype completed. If the solution will include Analysis Services, then I'll build a prototype cube. 
Analysis Services makes it incredibly easy to prototype. You can sit down with the business, show them the prototype, and have a meaningful conversation about what the BI solution should look like. I know I've done a good job on the prototype when I get knocked out of my chair so that the business user can explore the solution further independently. (That's really happened to me!) We can talk about dimensions, hierarchies, levels, members, measures, and so on with something tangible to look at and without using those terms. It's not helpful to use sample data like Adventure Works or to use BI terms that they don't really understand. But when I show them their data using the BI technology and talk to them in their language, then they truly have a picture worth a thousand words. From that, we can fine tune the prototype to move it closer to what they want. They have a better idea of what they're getting, and I have a better idea of what to build. So right to left design is not truly moving from the right to the left. But it starts from the right and moves towards the middle, and once I know what the middle needs to look like, I can then build from the left to meet in the middle. And that’s how I get past what the business says to what the business wants.

    Read the article

  • Open World Day 3

    - by Antony Reynolds
    A Day in the Life of an Oracle OpenWorld Attendee Part IV My third day was exhibition day for me! I took the opportunity to wander around the JavaOne and OpenWorld exhibitions to see what might be useful for me when selling WebLogic, Coherence & SOA Suite. I found a number of interesting vendors and thought I would share what I found here. These are not necessarily endorsements, but observations on companies that I thought had interesting-looking products that fill a need I have seen at customers. Highly Available EBS Upgrades A few years ago I worked with a customer that was a port authority. They wanted to tie E-Business Suite into their operations to provide faster processing of cargo and passengers. However they only had a 2-hour downtime window to perform upgrades. This was not a problem for core database and middleware technology, which could accommodate those upgrade timescales easily. It was a problem for EBS, however, so I was intrigued to find Rapid E-Suite Inc offering an 11i to 12i upgrade service that claims to require no outage. This could be a real boon to EBS customers like my port friends that need to upgrade without disruption to their business. Mobile on WebLogic I have come across a number of customers who want a comprehensive mobile solution, connected and disconnected operation and so forth. ADF only addresses part of these requirements currently, so I was excited to discover mFrontiers Inc offering an apparently comprehensive solution that should integrate easily with Oracle SOA Suite to mobile-enable a SOA infrastructure. The ability to operate without a network is important for many applications, particularly in industries that require their engineers to enter buildings to perform maintenance or repairs, because network access is not always available – many of my colleagues don’t have mobile access from their homes because they live in the middle of nowhere – and disconnected support is crucial in these situations. SharePoint Connector for WebCenter Content Obviously SharePoint is an evil, pernicious intrusion into a company’s IT estate, but it is widely deployed, many people like it, and they would also like to take advantage of Oracle products such as WebCenter Content. So I was encouraged to see that Fishbowl Solutions have created a connector for SharePoint that allows it to bring in content from WebCenter. It looks like a valuable way to maintain the SharePoint interface end users are used to while extending the range of content by pulling stuff (technical term for content) from WebCenter. Load Balancing The Enterprise Deployment Guides are Oracle’s bible on building highly available FMW environments, and each of them requires a front-end load balancer. I have been asked to help configure F5 Load Balancers on a number of occasions over my time at Oracle, and each time I come back to it I find more useful features have been added to the BigIP line of load balancers that F5 sell; many of their documents are tailored to FMW. I like F5: they provide (relatively) easy-to-use products that do what they say on the side of the box. They may not have all the bells and whistles of some of their more expensive competitors but they do the job and do it well! Besides which, I like their logo! Other Stuff I saw lots of other interesting products and services, such as a lightweight monitoring tool for Coherence, Forms migration services, JCAPS migration services and lots of cool freebies to take home to the children!
A Quiet Night Wednesday night was the partner appreciation event and I had decided to go back to the hotel and have an early night. I decided to attend the last session of the day – a Maven/Hudson/WebLogic tutorial. I went to the wrong hotel for the session, snuck in 20 minutes late at the back and started working on the hands-on workshop. One of my co-attendees raised his hand for help, and as the presenter came over to help he suddenly stopped and yelled – “Is that Antony?!” It was my old friend Steve Button, who used to be based in Redwood Shores but is now a WebLogic guru PM in Australia. It was good to catch up with him. As he yelled out, a guy with really bad posture turned around to see who he was talking to; this turned out to be my friend Simon Haslan, an Oracle ACE from the UK. After the tutorial Simon and I retired to the coffee shop to catch up and share stories. Two and a half hours later we decided it was time to retire. So much for an early night, but it was great to renew old friendships and find out what real customers are worrying about.

    Read the article

  • Off center projection

    - by N0xus
    I'm trying to implement the code that was freely given by a very kind developer at the following link: http://forum.unity3d.com/threads/142383-Code-sample-Off-Center-Projection-Code-for-VR-CAVE-or-just-for-fun Right now, all I'm trying to do is bring it in on one camera, but I have a few issues. My class, looks as follows: using UnityEngine; using System.Collections; public class PerspectiveOffCenter : MonoBehaviour { // Use this for initialization void Start () { } // Update is called once per frame void Update () { } public static Matrix4x4 GeneralizedPerspectiveProjection(Vector3 pa, Vector3 pb, Vector3 pc, Vector3 pe, float near, float far) { Vector3 va, vb, vc; Vector3 vr, vu, vn; float left, right, bottom, top, eyedistance; Matrix4x4 transformMatrix; Matrix4x4 projectionM; Matrix4x4 eyeTranslateM; Matrix4x4 finalProjection; ///Calculate the orthonormal for the screen (the screen coordinate system vr = pb - pa; vr.Normalize(); vu = pc - pa; vu.Normalize(); vn = Vector3.Cross(vr, vu); vn.Normalize(); //Calculate the vector from eye (pe) to screen corners (pa, pb, pc) va = pa-pe; vb = pb-pe; vc = pc-pe; //Get the distance;; from the eye to the screen plane eyedistance = -(Vector3.Dot(va, vn)); //Get the varaibles for the off center projection left = (Vector3.Dot(vr, va)*near)/eyedistance; right = (Vector3.Dot(vr, vb)*near)/eyedistance; bottom = (Vector3.Dot(vu, va)*near)/eyedistance; top = (Vector3.Dot(vu, vc)*near)/eyedistance; //Get this projection projectionM = PerspectiveOffCenter(left, right, bottom, top, near, far); //Fill in the transform matrix transformMatrix = new Matrix4x4(); transformMatrix[0, 0] = vr.x; transformMatrix[0, 1] = vr.y; transformMatrix[0, 2] = vr.z; transformMatrix[0, 3] = 0; transformMatrix[1, 0] = vu.x; transformMatrix[1, 1] = vu.y; transformMatrix[1, 2] = vu.z; transformMatrix[1, 3] = 0; transformMatrix[2, 0] = vn.x; transformMatrix[2, 1] = vn.y; transformMatrix[2, 2] = vn.z; transformMatrix[2, 3] = 0; transformMatrix[3, 0] = 0; transformMatrix[3, 1] = 0; transformMatrix[3, 2] = 0; transformMatrix[3, 3] = 1; //Now for the eye transform eyeTranslateM = new Matrix4x4(); eyeTranslateM[0, 0] = 1; eyeTranslateM[0, 1] = 0; eyeTranslateM[0, 2] = 0; eyeTranslateM[0, 3] = -pe.x; eyeTranslateM[1, 0] = 0; eyeTranslateM[1, 1] = 1; eyeTranslateM[1, 2] = 0; eyeTranslateM[1, 3] = -pe.y; eyeTranslateM[2, 0] = 0; eyeTranslateM[2, 1] = 0; eyeTranslateM[2, 2] = 1; eyeTranslateM[2, 3] = -pe.z; eyeTranslateM[3, 0] = 0; eyeTranslateM[3, 1] = 0; eyeTranslateM[3, 2] = 0; eyeTranslateM[3, 3] = 1f; //Multiply all together finalProjection = new Matrix4x4(); finalProjection = Matrix4x4.identity * projectionM*transformMatrix*eyeTranslateM; //finally return return finalProjection; } // Update is called once per frame public void FixedUpdate () { Camera cam = camera; //calculate projection Matrix4x4 genProjection = GeneralizedPerspectiveProjection( new Vector3(0,1,0), new Vector3(1,1,0), new Vector3(0,0,0), new Vector3(0,0,0), cam.nearClipPlane, cam.farClipPlane); //(BottomLeftCorner, BottomRightCorner, TopLeftCorner, trackerPosition, cam.nearClipPlane, cam.farClipPlane); cam.projectionMatrix = genProjection; } } My error lies in projectionM = PerspectiveOffCenter(left, right, bottom, top, near, far); The debugger states: Expression denotes a `type', where a 'variable', 'value' or 'method group' was expected. 
Thus, I changed the line to read: projectionM = new PerspectiveOffCenter(left, right, bottom, top, near, far); But then the error is changed to: The type 'PerspectiveOffCenter' does not contain a constructor that takes '6' arguments. For reasons that are obvious. So, finally, I changed the line to read: projectionM = new GeneralizedPerspectiveProjection(left, right, bottom, top, near, far); And the error I get is: is a 'method' but a 'type' was expected. With this last error, I'm not sure what it is I should do / missing. Can anyone see what it is that I'm missing to fix this error?
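For context, the errors all point at the same root cause: the snippet never defines a PerspectiveOffCenter(left, right, bottom, top, near, far) method, so the compiler can only resolve that name to the class itself, hence “Expression denotes a `type'”. The original forum code presumably included a static helper that builds the off-center frustum matrix from those six values. Below is a minimal sketch of what such a helper might look like, using the standard off-center perspective matrix; the name PerspectiveOffCenterFrustum is made up here so it does not collide with the class name.

static Matrix4x4 PerspectiveOffCenterFrustum(float left, float right, float bottom, float top, float near, float far)
{
    // Standard off-center perspective (frustum) projection matrix.
    float x = 2.0f * near / (right - left);
    float y = 2.0f * near / (top - bottom);
    float a = (right + left) / (right - left);
    float b = (top + bottom) / (top - bottom);
    float c = -(far + near) / (far - near);
    float d = -(2.0f * far * near) / (far - near);

    Matrix4x4 m = new Matrix4x4();
    m[0, 0] = x; m[0, 1] = 0; m[0, 2] = a;  m[0, 3] = 0;
    m[1, 0] = 0; m[1, 1] = y; m[1, 2] = b;  m[1, 3] = 0;
    m[2, 0] = 0; m[2, 1] = 0; m[2, 2] = c;  m[2, 3] = d;
    m[3, 0] = 0; m[3, 1] = 0; m[3, 2] = -1; m[3, 3] = 0;
    return m;
}

With a helper like this in place, the failing line becomes projectionM = PerspectiveOffCenterFrustum(left, right, bottom, top, near, far); and all three errors above disappear.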

    Read the article

  • WebCenter Content (WCC) Trace Sections

    - by Kevin Smith
    Kyle has a good post on how to modify the size and number of WebCenter Content (WCC) trace files. His post reminded me I have been meaning to write a post on WCC trace sections for a while. searchcache - Tells you if you query was found in the WCC search cache. searchquery - Shows the processing of the query as it is converted form what the user submitted to the end query that will be sent to the database. Shows conversion from the universal query syntax to the syntax specific to the search solution WCC is configured to use. services (verbose) - Lists the filters that are called for each service. This will let you know what filters are available for each service and will also tell you what filters are used by WCC add-on components and any custom components you have installed. The How To Component Sample has a list of filters, but it has not been updated since 7.5, so it is a little outdated now. With each new release WCC adds more filters. If you have a filter that has no code attached to it you will see output like this: services/6    09.25 06:40:26.270    IdcServer-423    Called filter event computeDocName with no filter plugins registered When a WCC add-on or custom component uses a filter you will see trace output like this: services/6    09.25 06:40:26.275    IdcServer-423    Calling filter event postValidateCheckinData on class collections.CollectionValidateCheckinData with parameter postValidateCheckinDataservices/6    09.25 06:40:26.275    IdcServer-423    Calling filter event postValidateCheckinData on class collections.CollectionFilters with parameter postValidateCheckinData As you can see from this sample output it is possible to have multiple code points using the same filter. systemdatabase - Dumps the database call AFTER it executes. This can be somewhat troublesome if you are trying to track down some weird database problems. We had a problem where WCC was getting into a deadlock situation. We turned on the systemdatabase trace section and thought we had the problem database call, but it turned out since it printed out the database call after it was executed we were looking at the database call BEFORE the one causing the deadlock. We ended up having to turn on tracing at the database level to see the database call WCC was making that was causing the deadlock. socketrequests (verbose) - dumps the actual messages received and sent over the socket connection by WCC for a service. If you have gzip enabled you will see junk on the response coming back from WCC. For debugging disable the gzip of the WCC response.Here is an example of the dump of the request for a GET_SEARCH_RESULTS service call. 
socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: REMOTE_USER=sysadmin.USER-AGENT=Java;.Stel socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: lent.CIS.11g.CONTENT_TYPE=text/html.HEADER socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: _ENCODING=UTF-8.REQUEST_METHOD=POST.CONTEN socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: T_LENGTH=270.HTTP_HOST=CIS.$$$$.NoHttpHead socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: ers=0.IsJava=1.IdcService=GET_SEARCH_RESUL socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: [email protected] socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: calData.SortField=dDocName.ClientEncoding= socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: UTF-8.IdcService=GET_SEARCH_RESULTS.UserTi socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: meZone=UTC.UserDateFormat=iso8601.SortDesc socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: =ASC.QueryText=dDocType..matches..`Documen socketrequests/6 09.25 06:46:02.501 IdcServer-6 request: t`.@end. userstorage, jps - Provides trace details for user authentication and authorization. Includes information on the determination of what roles and accounts a user has access to. In 11g a new trace section, jps, was added with the addition of the JpsUserProvider to communicate with WebLogic Server. The WCC developers decide when to use the verbose option for their trace output, so sometime you need to try verbose to see what different information you get. One of the things I would always have liked to see if the ability to turn on verbose output selectively for individual trace sections. When you turn on verbose output you get it for all trace sections you have enabled. This can quickly fill up your trace files with a lot of information if you have the socket trace section turned on.

    Read the article

  • 5 Things I Learned About the IT Labor Shortage

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein | Sr. Principal Product Marketing Director | Oracle Midsize Programs | @JimLein 5 Things I Learned About the IT Labor Shortage A gentle autumn breeze is nudging the last golden leaves off the aspen trees. It’s time to wrap up the series that I started back in April, “The Growing IT Labor Shortage: Are You Feeling It?” Even in a time of relatively high unemployment, labor shortages exist depending on many factors, including location, industry, IT requirements, and company size. According to Manpower Group’s 2013 Talent Shortage Survey, 35% of hiring managers globally are having difficulty filling jobs. Their top three challenges in filling jobs are: 1. Lack of technical competencies (hard skills), 2. Lack of available applicants, 3. Lack of experience. The same report listed Technicians as the most difficult position to fill in the United States. For most companies, Human Capital and Talent Management have never been more strategic, and they are striving for ways to streamline processes, reduce turnover, and lower costs (see this Oracle whitepaper, “Simplify Workforce Management and Increase Global Agility”). Everyone I spoke to—partner, customer, and Oracle experts—agreed that it can be extremely challenging to hire and retain IT talent in today’s labor market. And they generally agreed on the causes: a. IT is so pervasive that there are myriad moving parts requiring support and expertise, b. thus, it’s hard for university graduates to step in and contribute immediately without experience and specialization, c. big IT companies generally aren’t the talent incubators that they were in the freewheeling 90’s due to bottom line pressures that require hiring talent that can hit the ground running, and d. it’s often too expensive for resource-strapped midsize companies to invest the time and money required to get graduates up to speed. Here are my top lessons learned from my conversations with the experts. 1. A Better Title Would Have Been, “The Challenges of Finding and Retaining IT Talent That Matches Your Requirements” There are more applicants than jobs, but it’s getting tougher and tougher to find individuals that perfectly fit each and every role. Top performing companies are increasingly looking to hire the “almost ready”, striving to keep their existing talent more engaged, and leveraging their employees’ social and professional networks to quickly narrow down candidate searches (here’s another whitepaper, “A Strategic Approach to Talent Management”). 2. Size Matters—But So Does Location Midsize companies must strive to build cultures that compete favorably with what large enterprises can offer, especially when they aren’t within commuting distance of IT talent strongholds. They can’t always match the compensation and benefits offered by large enterprises so it's paramount to offer candidates high quality of life and opportunities to build their resumes in alignment with their long term career aspirations. 3.
Get By With a Little Help From Your Friends It doesn’t always make sense to invest time and money in training an employee on a task they will not perform frequently. Or get in a bidding war for talent with skills that are rare and in high demand. Many midsize companies are finding that it makes good economic sense to contract with partners for remote support rather than trying to divvy up each and every role amongst their lean staff. Internal staff can be assigned to roles that will have the highest positive impact on achieving organizational goals. 4. It’s Actually Both “What You Know” AND “Who You Know” If I was hiring someone today I would absolutely leverage the social and professional networks of my co-workers. Period. Most research shows that hiring in this manner is less expensive and time consuming AND produces better results. There is also some evidence that suggests new hires from employees’ networks have higher job performance and retention rates. 5. I Have New Respect for Recruiters and Hiring Managers My hats off to them—it’s not easy hiring and retaining top talent with today’s challenges. Check out the infographic, “A New Day: Taking HR from Chaos to Control”, on Oracle’s Human Capital Management solutions home page. You can also explore all of Oracle’s HCM solutions from that page based on your role. You can read all the posts in this series by clicking on the links in the right sidebar. Stay tuned…we’ll continue to post thought leadership on HCM and Talent Management topics.

    Read the article

  • Say What? Podcasting As Part of Your Content Marketing

    - by Mike Stiles
    What do you usually do in your car on the way to work?  Sing along to radio? Stream Pandora or iHeartRadio? Talk on the phone? Sit in total silence? Whatever it is you do, you could be using that time to make yourself an expert in any range of topics…using podcasts. We invite you to follow or subscribe to the daily Oracle Social Spotlight podcast, a quick roundup of the day’s top stories around social marketing and the social networks. After podcasts arrived in 2004, growth was steady but slow. The concept was strong: anyone with a passion for any subject could make a show for anyone who cared to listen. Enter the smartphone, iTunes, new podcasting platforms, and social, and podcasting became easier than ever and made more sense for both podcasters and listeners. Stats show 1 in 5 smartphone owners are podcast consumers and 29% of Americans have listened to a podcast. The potential audience is also larger than ever. “Baked in” podcast apps on over 200 million devices expose users to volumes of audio content with just a tap. 97 million Americans are driving to work every day by themselves. And 38% of Americans listen to audio on a digital device each week, a number that’s projected to double by 2015. Does that mean your brand should be podcasting? That’s part of a larger discussion about your overall content strategy, provided you have one. But if you do and podcasting is a component of it, here are some things to keep in mind: Don’t podcast just to do it. Podcast because you thought of a show customers and prospects will like that they can’t get anywhere else. Sound quality matters. Good microphones are not expensive. Bad sound is annoying, makes your brand feel cheap, and will turn today’s sophisticated ears off. The host matters. Many think they belong on the radio. Few actually do. Your brand’s host should be comfortable & likeable. A top advantage of a podcast is people can bond with a real person. It’s a trust opportunity, so don’t take it lightly. The content matters. “All killer, no filler” means don’t allow babbling just to fill enough time for an episode. Value the listeners’ time, because that time is hard to get. Put time, effort and creativity into it. Sure you’re a business, but you’re competing with content from professional media and showbiz producers. If you can include music, sound effects, and things that amuse the ears, do it. If you start, be consistent. The #1 flaw in podcasting is when listeners can’t count on another episode or don’t know when it’s coming. Don’t skip doing shows just because you can. Get committed. Get your cover art right. Podcasting is about audio, but people shop for podcasts by glancing through graphics. Yours has to be professional, cool, and informative to get listeners interested. Cross-promote your podcast on all your channels. The competition for listeners is fierce, so if you have existing audiences you can leverage to launch your show, use them. Optimize it for mobile. Assume that’s where most listening will take place. If you’re using one of the podcast platform apps, you should be in good shape. Frankly, the percentage of brands that are podcasting is quite low, and that’s okay. Once you move beyond blogging and start connecting with real voices, poor execution can do damage. But more (32%) marketers want to learn how to use podcasting, and more (23%) were increasing their podcasting throughout this year. 
Bottom line, you want to share your brand’s message and stories wherever your audience might be and in whatever way they prefer to take in content. Many prefer to do that while driving or working out, using the eyes and hands-free medium of audio. @mikestilesPhoto: stock.xchng

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements for the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions. The work on these was conceived, implemented and contributed by the engineers at Facebook. Before we plunge into the details let us familiarize ourselves with some of the key concepts surrounding InnoDB compression. In InnoDB compressed pages are fixed size. Supported sizes are 1, 2, 4, 8 and 16K. The compressed page size is specified at table creation time. InnoDB uses zlib for compression. InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well i.e.: we can have a page in the buffer pool in compressed only form or in a state where we have both the compressed page and uncompressed version but we'll never have a page in uncompressed only form. On-disk we'll always only have the compressed page. When both compressed and uncompressed images are present in the buffer pool they are always kept in sync i.e.: changes are applied to both atomically. Recompression happens when changes are made to the compressed data. In order to minimize recompressions InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression and it is used to log modifications to the compressed data, thus avoiding recompressions. DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because UPDATE of a key is mapped to INSERT+DELETE+purge. A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first try to reorganize the page and attempt to recompress, and if that fails as well then we split the page into two and recompress both pages. Now let's talk about the three major improvements that we made in MySQL 5.6. Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB we have to be sure that all recompress attempts succeed without causing a btree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion and when we generate much more than normal redo we fill up the space much more quickly, and in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6 there is a new global configuration parameter, innodb_log_compressed_pages. The default value is true, which is the same as the current behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib then you should set this parameter to false.
This is a dynamic parameter. Compression Level: You can now set the compression level that zlib should choose to compress the data. The global parameter is innodb_compression_level - the default value is 6 (the zlib default) and allowed values are 1 to 9. Again the parameter is dynamic i.e.: you can change it on the fly. Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failure depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures then we should try to pack the 16K uncompressed version of the page less densely i.e.: we let some space in the 16K page go unused in an attempt to ensure that the recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page till we get compression failures within an agreeable range. It works the other way as well, that is we'll keep removing the pad if the failure rate is fairly low. To tune the padding effort two configuration variables are exposed. innodb_compression_failure_threshold_pct: default 5, range 0 - 100, dynamic, implies the percentage of compress ops to fail before we start using padding. Value 0 has a special meaning of disabling the padding. innodb_compression_pad_pct_max: default 50, range 0 - 75, dynamic, the maximum percentage of an uncompressed data page that can be reserved as pad.

    Read the article

  • My Red Gate Experience

    - by Colin Rothwell
    I’m Colin, and I’ve been an intern working with Mike in publishing on Simple-Talk and SQLServerCentral for the past ten weeks. I’ve mostly been working “behind the scenes”, making improvements to the spam filtering, along with various other small tweaks. When I arrived at Red Gate, one of the first things Mike asked me was what I wanted to get out of the internship. It wasn’t a question I’d given a great deal of thought to, but my immediate response was the same as almost anybody: to support my growing family. Well, ok, not quite that, but money was certainly a motivator, along with simply making sure that I didn’t get bored over the summer. Three months is a long time to fill, and many of my friends end up getting bored, or worse, knitting obsessively. With the arrogance which seems fairly common among Cambridge people, I wasn’t expecting to really learn much here! In my mind, the part of the year where I am at Uni is the part where I learn things, whilst Red Gate would be an opportunity to apply what I’d learnt. Thankfully, the opposite is true: I’ve learnt a lot during my time here, and there has been a definite positive impact on the way I write code. The first thing I’ve really learnt is that test-driven development is, in general, a sensible way of working. Before coming, I didn’t really get it: how could you test something you hadn’t yet written? It didn’t make sense! My problem was seeing a test as having to test all the behaviour of a given function. Writing tests which test the bare minimum possible and building them up is a really good way of crystallising the direction the code needs to grow in, and ensures you never attempt to write too much code at time. One really good experience of this was early on in my internship when Mike and I were working on the query used to list active authors: I’d written something which I thought would do the trick, but by starting again using TDD we grew something which revealed that there were several subtle mistakes in the query I’d written. I’ve also been awakened to the value of pair programming. Whilst I could sort of see the point before coming, I also thought that it was impossible that two people would ever get more done at the same computer than if they were working separately. I still think that this is true for projects with pieces that developers can easily work on independently, and with developers who both know the codebase, but I’ve found that pair programming can be really good for learning a code base, and for building up small projects to the point where you can start working on separate components, as well as solving particularly difficult problems. Later on in my internship, for my down tools week project, I was working on adding Python support to Glimpse. Another intern and I we pair programmed the entire project, using ping pong pair programming as much as possible. One bonus that this brought which I wasn’t expecting was that I found myself less prone to distraction: with someone else peering over my shoulder, I didn’t have the ever-present temptation to open gmail, or facebook, or yammer, or twitter, or hacker news, or reddit, and so on, and so forth. I’m quite proud of this project: I think it’s some of the best code I’ve written. I’ve also been really won over to the value of descriptive variables names. In my pre-Red Gate life, as a lone-ranger style cowboy programmer, I’d developed a tendency towards laziness in variable names, sometimes abbreviating or, worse, using acronyms. 
I’ve swiftly realised that this is a bad idea when working with a team: saving a few keystrokes is inevitably not worth it when it comes to reading code again in the future. Longer names also mean you can do away with a majority of comments. I appreciate that if you’ve come up with an O(n*log n) algorithm for something which seemed O(n^2), you probably want to explain how it works, but explaining what a variable name means is a big no-no: it’s so very easy to change the behaviour of the code, whilst forgetting about the comments. Whilst at Red Gate, I took the opportunity to attend a code retreat, which really helped me to solidify all the things I’d learnt. To be completely free of any existing code base really lets you focus on best practices and think about how you write code. If you get a chance to go on a similar event, I’d highly recommend it! Cycling to Red Gate, I’ve also become much better at fitting inner tubes: if you’re struggling to get the tube out, or re-fit the tire, letting a bit of air out usually helps. I’ve also become quite a bit better at foosball and will miss having a foosball table! I’d like to finish off by saying thank you to everyone at Red Gate for having me. I’ve really enjoyed working with, and learning from, the team that brings you this web site. If you meet any of them, buy them a drink!

    Read the article

  • Drawing on a webpage – HTML5 - IE9

    - by nmarun
    So I upgraded to IE9 and continued exploring HTML5. Now there’s this ‘thing’ called Canvas in HTML5 with which you can do some cool stuff. Alright, what IS this Canvas thing anyways? The Web Hypertext Application Technology Working Group says this: “The canvas element provides scripts with a resolution-dependent bitmap canvas, which can be used for rendering graphs, game graphics, or other visual images on the fly.” The Canvas element has only two attributes, width and height, and when not specified they take up the default values of 300 and 150 respectively. Below is what my HTML file looks like: 1: <!DOCTYPE html> 2: <html lang="en-US"> 3: <head> 4: <script type="text/javascript" src="CustomScript.js"></script> 5: <script src="jquery-1.4.4.js" type="text/javascript"></script> 6:  7: <title>Draw on a webpage</title> 8: </head> 9: <body> 10: <canvas id="canvas" width="500" height="500"></canvas> 11: <br /> 12: <input type="submit" id="submit" value="Clear" /> 13: <h4 id="currentPosition"> 14: 0, 0 15: </h4> 16: <div id="mousedownCoords"></div> 17: </body> 18: </html> In case you’re wondering, this is not an MVC or any other kind of web application. This is plain ol’ HTML even though I’m writing all this in VS 2010. You see this is a very simple, ‘gimmicks-free’ html page. I have declared a Canvas element on line 10 and a button on line 12 to clear the drawing board. I’m using jQuery / JavaScript to show the current position of the mouse on the screen. This will get updated in the ‘currentPosition’ <h4> tag and I’m using the ‘mousedownCoords’ to write all the places where the mouse was clicked. This is what my page renders as: The rectangle with a background is our canvas. The coloring is due to some javascript (which we’ll see in a moment). Now let’s get to our CustomScript.js file. 1: jQuery(document).ready(function () { 2: var isFirstClick = true; 3: var canvas = document.getElementById("canvas"); 4: // getContext: Returns an object that exposes an API for drawing on the canvas 5: var canvasContext = canvas.getContext("2d"); 6: fillBackground(); 7:  8: $("#submit").click(function () { 9: clearCanvas(); 10: fillBackground(); 11: }); 12:  13: $(document).mousemove(function (e) { 14: $('#currentPosition').html(e.pageX + ', ' + e.pageY); 15: }); 16: $(document).mouseup(function (e) { 17: // on the first click 18: // set the moveTo 19: if (isFirstClick == true) { 20: canvasContext.beginPath(); 21: canvasContext.moveTo(e.pageX - 7, e.pageY - 7); 22: isFirstClick = false; 23: } 24: else { 25: // on subsequent clicks, draw a line 26: canvasContext.lineTo(e.pageX - 7, e.pageY - 7); 27: canvasContext.stroke(); 28: } 29:  30: $('#mousedownCoords').text($('#mousedownCoords').text() + '(' + e.pageX + ',' + e.pageY + ')'); 31: }); 32:  33: function fillBackground() { 34: canvasContext.fillStyle = '#a1b1c3'; 35: canvasContext.fillRect(0, 0, 500, 500); 36: canvasContext.fill(); 37: } 38:  39: function clearCanvas() { 40: // wipe-out the canvas 41: canvas.width = canvas.width; 42: // set the isFirstClick to true 43: // so the next shape can begin 44: isFirstClick = true; 45: // clear the text 46: $('#mousedownCoords').text(''); 47: } 48: })   The script only looks long and complicated, but it is not. I’ll go over the main steps. Get a ‘hold’ of your canvas object and retrieve the ‘2d’ context out of it. On the mousemove event, write the current x and y coordinates to the ‘currentPosition’ element. On the mouseup event, check if this is the first time the user has clicked on the canvas.
The coloring of the canvas is done in the fillBackground() function. We first need to start a new path. This is done by calling the beginPath() function on our context. The moveTo() function sets the starting point of our path. The lineTo() function sets the end point of the line to be drawn. The stroke() function is the one that actually draws the line on our canvas. So if you want to play with the demo, here’s how you do it. First click on the canvas (nothing visible happens on the canvas). The second click draws a line from the first click to the current coordinates and so on and so forth. Click on the ‘Clear’ button to reset the canvas and to give your creativity a clean slate. Here’s a sample output: Happy drawing! Verdict: HTML5 and IE9 – I think we’re on to something big and great here!

    Read the article

  • Build 2012, some thoughts..

    - by Dennis Vroegop
    I think you probably read my rant about the logistics at Build 2012, as posted here, so I am not going into that anymore. Instead, let’s look at the content. (BTW If you did read that post and want some more info then read Nia Angelina’s post about Build. I have nothing to add to that.) As usual, there were good speakers and some speakers who could benefit from some speaker training. I find it hard to understand why Microsoft allows certain people on stage, people who speak English with such strong accents that it’s hard for people, especially those from abroad, to understand them. Some basic training might be useful for some of them. However, it is nice to see that most speakers are project managers, program managers or even devs on the teams that build the stuff they talk about: there was a lot of knowledge on stage! And that means when you ask questions you get very relevant information. I realize I am not the average audience member here, I am a regular speaker myself so I tend to look for different things when I am in a room than most audience members do, so my opinion might differ from others. All in all the knowledge of the speakers was above average but the presentation skills were most of the time below what I would describe as adequate. But let us look at the contents. Since the official name of the conference is Build Windows 2012 it is not surprising most of the talks were focused on building Windows 8 apps. Next to that, there was a lot of focus on Azure and of course Windows Phone 8 that launched the day before Build started. Most sessions dealt with C# and JavaScript although I did see a tendency to use C++ more. Touch. Well, that was the focus of a lot of sessions, that goes without saying. Microsoft is really betting on Touch these days and being a Touch-oriented developer I can only applaud this. The term NUI is getting a bit outdated but the principles behind it certainly aren’t. The sessions did cover quite a lot on how to make your applications easy to use and easy to understand. However, not all is touch nowadays; still the majority of people use keyboard and mouse to interact with their machines (or, as I do, use keyboard, mouse AND touch at the same time). Microsoft understands this and has spent some serious thought on this as well. It was all about making your apps run everywhere on all sorts of devices and in all sorts of scenarios. I have seen a couple of sessions focusing on the portable class library and on sharing code between Windows 8 and Windows Phone 8. You get the feeling Microsoft is enabling us devs to write software that will be ubiquitous. They want your stuff to be all over the place and they do anything they can to help. To achieve that goal they provide us with brilliant SDKs, great tooling, a very, very good backend in the form of Windows Azure (I was particularly impressed by the Mobility part of Azure) and some fantastic hardware. And speaking of hardware: the partners such as Acer, Lenovo and Dell are making hardware that puts Apple to shame nowadays. To illustrate: in Bellevue (very close to Redmond where Microsoft HQ is) they have the Microsoft Store located very close to the Apple Store, so it’s easy to compare devices. And I have to say: the Microsoft offerings are much, much more appealing than what the Cupertino guys have to offer. That was very visible by the number of people visiting the stores: even on the day that Apple launched the iPad Mini there were more people in the Microsoft store than in the Apple store.
So, the future looks like it’s going to be fun. Great hardware (did I mention the Nokia Lumia 920? No? It’s brilliant), great software (Windows 8 is in a league of its own), the best dev tools (Visual Studio 2012 is still the champion here) and a fantastic backend (Azure.. need I say more?). It’s up to us devs to fill up the stores with applications that match this. To summarize: it is great to be a Windows developer. PS. Did I mention Surface RT? Man….. People were drooling all over it wherever I went. It is fantastic :-) Technorati Tags: Build,Windows 8,Windows Phone,Lumia,Surface,Microsoft

    Read the article

  • Detecting Duplicates Using Oracle Business Rules

    - by joeywong-Oracle
    Recently I was involved with a Business Process Management Proof of Concept (BPM PoC) where we wanted to show how customers could use Oracle Business Rules (OBR) to easily define some rules to detect certain conditions, such as duplicate account numbers, duplicate names, high transaction amounts, etc., in a set of transactions. Traditionally you would have to loop through the transactions and compare each transaction with the others to find matching conditions. This is not particularly nice as it relies on more traditional approaches (coding) and is not the most efficient way. OBR is a great place to house these types of rules as it allows users/developers to externalise the rules from the message flows in a simpler manner and allows users to change them when required. So I went ahead looking for some examples. After quite a bit of time spent Googling, I did not find much out in the blogosphere. In fact the best example was actually from...... wait for it...... Oracle Documentation! (http://docs.oracle.com/cd/E28271_01/user.1111/e10228/rules_start.htm#ASRUG228) However, if you followed the link there was not much explanation provided with the example. So the aim of this article is to provide a little more explanation to the example so that it can be better understood. Note: I won’t be covering the BPM parts in great detail. Use case: A payment instruction file is required to be processed. Before the instruction file can be processed it needs to be approved by a business user. Before the approval process, it would be useful to run the payment instruction file through OBR to look for transactions of interest. The output of the OBR can then be used to flag the transactions for the approvers to investigate. Example BPM Process So let’s start defining the Business Rules Dictionary. For the input into our rules, we will be passing in an array of payments which contain some basic information for our demo purposes. Input to Business Rules And for our output we want to have an array of rule output messages. Note that the element I am using for the output is only for one rule message element and not an array. We will configure the Business Rules component later to return an array instead. Output from Business Rules Business Rule – Create Dictionary Fill in all the details and click OK. Open the Business Rules component and select Decision Functions from the side. Modify the Decision Function Configuration Select the decision function and click on the edit button (the pencil); don’t worry that JDeveloper indicates that there is an error with the decision function. Then click the Outputs tab and make sure the checkbox under the List column is checked; this is to tell the Business Rules component that it should return an array of rule message elements. Updating the Decision Service Next we will define the actual rules. Click on Ruleset1 on the side and then Create Rule in the IF/THEN Rule section. Creating new rule in ruleset Ok, this is where some detailed explanation is required. Remember that the input to this Business Rules dictionary is a list of payments, each of which is of the complex type PaymentType. Each of those payments in the Oracle Business Rules engine is treated as a fact in its working memory. Implemented rule So in the IF/THEN rule, the first task is to grab two PaymentType facts from the working memory and assign them to temporary variable names (payment1 and payment2 in our example).
Matching facts Once we have them in the temporary variables, we can then start comparing them to each other. For our demonstration we want to find payments where the account numbers were the same but the account name was different. Suspicious payment instruction And to stop the rule from comparing the same facts to each other, over and over again, we have to include the last test. Stop rule from comparing endlessly And that’s it! No for loops, no need to keep track of what you have or have not compared; OBR handles all that for you because everything is done in its working memory. And once all the tests have been satisfied we need to assert a new fact for the output. Assert the output fact Save your Business Rules. The next step is to complete the data association in the BPM process. Pay extra care to use Copy List instead of the default Copy when doing data association at an array level. Input and output data association Deploy and test. Test data Rule matched Parting words: Ideally you would then use the output of the Business Rules component to display/flag the transactions which triggered the rule so that the approver can investigate. Link: SOA Project Archive [Download]
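    For contrast, here is a rough sketch, in plain Java, of the hand-coded comparison loop that the article says OBR spares you from writing: a nested pass over the payment list that flags the same condition (same account number, different account name). This is not part of the original sample; the Payment class and its field names are hypothetical stand-ins for the PaymentType element used above.

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical stand-in for the PaymentType element used as the dictionary input.
        class Payment {
            String accountNumber;
            String accountName;
            Payment(String accountNumber, String accountName) {
                this.accountNumber = accountNumber;
                this.accountName = accountName;
            }
        }

        public class DuplicateCheck {
            // Hand-coded equivalent of the rule: same account number but a different account name.
            static List<String> findSuspiciousPayments(List<Payment> payments) {
                List<String> messages = new ArrayList<>();
                for (int i = 0; i < payments.size(); i++) {
                    for (int j = i + 1; j < payments.size(); j++) {   // j > i avoids re-comparing pairs or comparing a payment with itself
                        Payment p1 = payments.get(i);
                        Payment p2 = payments.get(j);
                        if (p1.accountNumber.equals(p2.accountNumber)
                                && !p1.accountName.equals(p2.accountName)) {
                            messages.add("Account " + p1.accountNumber
                                    + " appears with different names: "
                                    + p1.accountName + " / " + p2.accountName);
                        }
                    }
                }
                return messages;
            }

            public static void main(String[] args) {
                List<Payment> payments = new ArrayList<>();
                payments.add(new Payment("12345", "Alice Smith"));
                payments.add(new Payment("12345", "A. Smyth"));
                payments.add(new Payment("67890", "Bob Jones"));
                findSuspiciousPayments(payments).forEach(System.out::println);
            }
        }

    Note how the j > i loop bound plays the same role as the rule's last test: it stops facts being compared with themselves and stops the same pair being evaluated endlessly, which OBR's working memory otherwise handles for you.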

    Read the article

  • Updating a database connection password using a script

    - by Tim Dexter
    An interesting customer requirement that I thought was worthy of sharing today. Thanks to James for the requirement and Bryan for the proposed solution and me for testing the solution and proving it works :0) A customer's implementation of Sarbanes-Oxley requires them to change all database account passwords every 90 days. This is scripted leveraging shell scripts today for most of their environments. But how can they manage the BI Publisher connections? Now, the customer is running 11g and therefore using WebLogic on the middle tier, which is the first clue to Bryan's proposed solution. To paraphrase and embellish Bryan's solution a little; why not use a JNDI connection from BIP to the database, then employ the WebLogic scripting engine to make updates to the JNDI as needed? BIP is completely uninvolved and with a little 'timing' users will be completely unaware of the password updates i.e. change the password when reports are not being executed. Perfect! James immediately tracked down the WLST script that could be used here, http://middlewaremagic.com/weblogic/?p=4261 (thanks Ravish) Now it was just a case of testing the theory. Some steps: Create the JNDI connection in WLS Create the JNDI connection in BI Publisher pointing to the WLS connection Build new data models using or re-point data sources to use the JNDI connection. Create the WLST script to update the WLS JNDI password as needed. Test! Some details. Creating the JNDI connection in WebLogic is pretty straightforward. Log into the console and look for Data Sources under the Services section of the home page and click it. Click New >> Generic Datasource Give the connection a name. For the JNDI name, prefix it with 'jdbc/' so I have 'jdbc/localdb' - this name is important, you'll need it on the BIP side. Select your db type - this will influence the drivers and information needed on the next page. Being a company man, I'm using an Oracle db. Click Next Select the driver of choice, there's lots I know, you can read about them; I just chose 'Oracle's Driver (Thin) for Instance connections; Versions 9.0.1 and later' Click Next >> Next Fill out the db name (SID), server, port, username to connect and password >> Next Test the config to ensure you can connect. >> Next Now you need to deploy the connection to your BI server, select it and click Next. You're done with the JNDI config. Creating the JNDI connection on the Publisher side is covered here. Just remember to use the connection name you created in WLS e.g. 'jdbc/localdb' Not gonna tell you how to do this, go read the user guide :0) Suffice to say, it works. This requires a little reading around the subject to understand the scripting engine and how to execute scripts. Nicely covered here. However a bit of googlin' and I found an even easier way of running the script. ${ServerHome}/common/bin/wlst.sh updatepwd.py Where updatepwd.py is my script file, it can be in another directory. As part of the wlst.sh script your environment is set up for you so it's very simple to execute. The nitty gritty: you need to take Ravish's script above and create a file with a .py extension. It's going to need some modification, as he explains on the web page, to make it work in your environment. I played around with it for a while but kept running into errors. The script, as is, tries to loop through all of your connections and modify the user and passwords for each. Not quite what we are looking for. Remember our requirement is to just update the password for a given connection.
I also found another issue with the script. WLS 10.x does not allow updates to passwords using clear text, i.e. un-encrypted text, while the server is in production mode. It's a bit much to set it back to development mode, bounce it, change the passwords, bounce again, change back to production mode and bounce once more. After lots of messing about I finally came up with the following: ############################################################################# # # Update password for JNDI connections # ############################################################################# print("*** Trying to Connect.... *****") connect('weblogic','welcome1','t3://localhost:7001') print("*** Connected *****") edit() startEdit() print ("*** Encrypt the password ***") en = encrypt('hr') print "Encrypted pwd: ", en print ("*** Changing pwd for LocalDB ***") dsName = 'LocalDB' print 'Changing Password for DataSource ', dsName cd('/JDBCSystemResources/'+dsName+'/JDBCResource/'+dsName+'/JDBCDriverParams/'+dsName) set('PasswordEncrypted',en) save() activate() It's pretty simple and you can expand on it to loop through the data sources and change each as needed. I have hardcoded the password into the file but you can pass it as a parameter as needed using the properties file method. I'm not going to get into the detail of that here but it's covered with an example here. A couple of points to note: 1. The change to the password requires a server bounce to get the changes picked up. You can add that to the shell script you will use to call the script above. 2. The script above needs to be run from the MW_HOME\user_projects\domains\bifoundation_domain directory to get the encryption libraries set correctly. My command to run the whole script was: d:\oracle\bi_mw\wlserver_10.3\common\bin\wlst.cmd updatepwd.py - where wlst.cmd is the scripting command line and updatepwd.py was my update password script above. I have not quite spoon-fed everything you need to make it a robust script but at least you know you can do it and you can work out the rest I think :0)

    Read the article

  • Building a plug-in for Windows Live Writer

    - by mbcrump
    This tutorial will show you how to build a plug-in for Windows Live Writer. Windows Live Writer is a blogging tool that Microsoft provides for free. It includes an open API for .NET developers to create custom plug-ins. In this tutorial, I will show you how easy it is to build one. Open VS2008 or VS2010 and create a new project. Set the target framework to 2.0, Application Type to Class Library and give it a name. In this tutorial, we are going to create a plug-in that generates a Twitter message with your blog post name and a TinyUrl link to the blog post. It will do all of this automatically after you publish your post. Once we have a new project created, we need to set up the references. Add a reference to the WindowsLive.Writer.Api.dll located in the C:\Program Files (x86)\Windows Live\Writer\ folder, if you are using the x64 version of Windows. You will also need to add a reference to System.Windows.Forms and System.Web from the .NET tab as well. Once that is complete, add your “using” statements so that it looks like what’s shown below: Live Writer Plug-In "Using" using System; using System.Collections.Generic; using System.Text; using WindowsLive.Writer.Api; using System.Web; Now, we are going to set up some build events to make it easier to test our custom class. Go into the Properties of your project, select Build Events, edit the Post-build event and copy/paste the following line: XCOPY /D /Y /R "$(TargetPath)" "C:\Program Files (x86)\Windows Live\Writer\Plugins\" Your screen should look like the one pictured below: Next, we are going to launch an external program on debug. Click the debug tab and enter C:\Program Files (x86)\Windows Live\Writer\WindowsLiveWriter.exe Your screen should look like the one pictured below: Now we have a blank project and we need to add some code. We start with adding the attributes for the Live Writer Plugin. Before we get started creating the Attributes, we need to create a GUID. This GUID will uniquely identify our plug-in. So, to create a GUID follow the steps in VS2008/2010. Click Tools from the VS Menu -> Create GUID It will generate a GUID like the one listed below: GUID <Guid("56ED8A2C-F216-420D-91A1-F7541495DBDA")> We only want what’s inside the quotes, so your final product should be: "56ED8A2C-F216-420D-91A1-F7541495DBDA". Go ahead and paste this snippet into your class just above the public class. Live Writer Plug-In Attributes [WriterPlugin("56ED8A2C-F216-420D-91A1-F7541495DBDA",    "Generate Twitter Message",    Description = "After your new post has been published, this plug-in will attempt to generate a Twitter status message with the Title and TinyUrl link.",    HasEditableOptions = false,    Name = "Generate Twitter Message",    PublisherUrl = "http://michaelcrump.net")] [InsertableContentSource("Generate Twitter Message")] So far, it should look like the following: Next, we need to implement the PublishNotificationHook class and override the OnPostPublish method. I’m not going to dive into what the code is doing as you should be able to follow pretty easily. The code below is the entire code used in the project.
PublishNotificationHook public class Class1 :  PublishNotificationHook  {      public override void OnPostPublish(System.Windows.Forms.IWin32Window dialogOwner, IProperties properties, IPublishingContext publishingContext, bool publish)      {          if (!publish) return;          if (string.IsNullOrEmpty(publishingContext.PostInfo.Permalink))          {              PluginDiagnostics.LogError("Live Tweet didn't execute, due to blank permalink");          }          else          {                var strBlogName = HttpUtility.UrlEncode("#blogged : " + publishingContext.PostInfo.Title);  //Blog Post Title              var strUrlFinal = getTinyUrl(publishingContext.PostInfo.Permalink); //Blog Permalink URL Converted to TinyURL              System.Diagnostics.Process.Start("http://twitter.com/home?status=" + strBlogName + strUrlFinal);            }      } We are going to go ahead and create a method to create the short url (tinyurl). TinyURL Helper Method private static string getTinyUrl(string url) {     var cmpUrl = System.Globalization.CultureInfo.InvariantCulture.CompareInfo;     if (!cmpUrl.IsPrefix(url, "http://tinyurl.com"))     {         var address = "http://tinyurl.com/api-create.php?url=" + url;         var client = new System.Net.WebClient();         return (client.DownloadString(address));     }     return (url); } Go ahead and build your project; it should have copied the .DLL into the Windows Live Writer Plugin directory. If it did not, then you will want to check your configuration. Once that is complete, open Windows Live Writer and select Tools -> Options -> Plug-ins and enable the plug-in that you just created. Your screen should look like the one pictured below: Go ahead and click OK and publish your blog post. You should get a pop-up with the following: Hit OK and it should open Twitter and either ask for a login or fill in your status as shown below: That should do it. You can do so many other things with the API, and I suggest that if you want to build something really useful, you consult the MSDN pages. This plug-in that I created was perfect for what I needed and I hope someone finds it useful.

    Read the article

  • To My 24 Year Old Self, Wherever You Are…

    - by D'Arcy Lussier
    A decade is a milestone in one’s life, regardless of when it occurs. 2011 might seem like a weird year to mark a decade, but 2001 was a defining year for me. It marked my emergence into the technology industry, an unexpected loss of innocence, and triggered an ongoing struggle with faith and belief. Once you go through a valley, climbing the mountain and looking back over where you travelled, you can take in the entirety of the journey. Over the last 10 years I kept journals, and in this new year I took some time to review them. For those today that are me a decade ago, I share with you what I’ve gleaned from my experiences. Take it for what it’s worth, and safe travels on your own journeys through life. Life is a Performance-Based Sport Have confidence, believe you’re capable, but realize that life is a performance-based sport. Everything you get in life is based on whether you can show that you deserve it. Performance is also your best defense against personal attacks. Just make sure you know what standards you’re expected to hit, and if people want to poke holes at you, let them do the work of trying to find them. Sometimes performance won’t matter though. Good things will happen to bad people, and bad things to good people. What’s important is that you do the right things and ensure the good and bad even out in your own life. How you finish is just as important as how you start. Start strong, end strong. Respect is Your Most Prized Reward Respect is more important than status or ego. The formula is simple: Performing Well + Building Trust + Showing Dedication = Respect Focus on perfecting your craft and helping your team and respect will come. Life is a Team Sport Whatever aspect of your life, you can’t do it alone. You need to rely on the people around you and ensure you’re a positive aspect of their lives; even those that may be difficult or unpleasant. Avoid criticism and instead find ways to help colleagues and superiors better whatever environment you’re in (work, home, etc.). Don’t just highlight gaps and issues, but also come to the table with solutions. At the same time though, stand up for yourself and hold others accountable for the commitments they make to the team. A healthy team needs accountability. Give feedback early and often, and make it verbal. Issues should be dealt with immediately, and positives should be celebrated as they happen. Life is a Contact Sport Difficult moments will happen. Don’t run from them or shield yourself from experiencing them. Embrace them. They will further mold you and reveal who you will become. Find Your Tribe and Embrace Your Community We all need a tribe: a group of people that we gravitate to for support, guidance, wisdom, and friendship. Discover your tribe and immerse yourself in them. Don’t look for a non-existent tribe just to fill the need of belonging, though; that will leave you empty and bitter when they don’t meet your unrealistic expectations. Try to associate with people more experienced and more knowledgeable than you. You’ll always learn, and you’ll always remember you have much to learn. Put yourself out there, get involved with the community. Opportunities will present themselves. When we open ourselves up to be vulnerable, we also give others the chance to do the same. This helps us all to grow and help each other, and it’s very important. And listen to your wife. (Easter *is* a romantic holiday btw, regardless of what you may think.)
Don’t Believe Your Own Press Clippings (and by that I mean the ones you write) Until you have a track record of performance to refer to, any notions of grandeur are just that: notions. You lose your rookie status through trials and tribulations, not by the number of stamps in your passport. Be realistic about your own “experience and leadership” and be honest when you aren’t ready for something. And always remember: nobody really cares about you as much as you think they do. Don’t Let Assholes Get You Down The world isn’t evil, but there is evil in the world. Know the difference and don’t paint all people with the same brush. Do be wary of those that use personal beliefs to describe their business (i.e. “We’re a [religion] company”). What matters is the culture of the organization, and that will tell you the moral compass and what is truly valued. Don’t make someone or something a priority that only makes you an option. Life is unfair and enemies/opponents will succeed when you fail. Don’t waste your energy getting upset at this; the only one that will lose out is you. As mentioned earlier, nobody really cares about you as much as you think they do. Misc Ecclesiastes is bullshit. Everything is certainly *not* meaningless. Software development is about delivery, not the process. Having a great process means nothing if you don’t produce anything. Watch “The Weatherman” (“It’s not easy, but easy doesn’t enter into grownup life.”). Read Tony Dungy’s autobiography, even if you don’t like football, and even if you aren’t a Christian. Say no; don’t feel like you have to commit right away when someone asks you to.

    Read the article

  • Add SQL Azure database to Azure Web Role and persist data with entity framework code first.

    - by MagnusKarlsson
    In my last post I went for a warts n all approach to set up a web role on Azure. In this post I’ll describe how to add an SQL Azure database to the project. This will be described with as little code and as few screen dumps as possible. All questions are welcome in the comments area. Please don’t email, since questions answered in the comments field are made available to other visitors. As an example we will add a comments section to the site we used in the previous post (link here). Steps: 1. Create a Comments entity and then use Scaffolding to set up controller and view, and add ConnectionString to web.config. 2. Create SQL Azure database in Management Portal and link the new database 3. Test it online!   1. Right click the Models folder, choose Add, choose “Class…”. Name the class Comment. 1.1 Replace the code in the class with the following: using System.Data.Entity; namespace MvcWebRole1.Models { public class Comment {    public int CommentId { get; set; }    public string Name { get; set; }      public string Content { get; set; } } public class CommentsDb : DbContext { public DbSet<Comment> CommentEntries { get; set; } } } Now Entity Framework can create a database and a table named Comment. Build your project to verify there are no build errors.   1.2 Right click the Controllers folder, choose Add, choose “Class…”. Name the class CommentController and fill out the values as in the example below.     1.3 Click Add. Visual Studio now creates default Views for CRUD operations and a Controller adhering to these and opens them. 1.4 Open Web.config and add the following connection string in the <connectionStrings> node. <add name="CommentsDb" connectionString="data source=(LocalDB)\v11.0;Integrated Security=SSPI;AttachDbFileName=|DataDirectory|\CommentsDb.mdf;Initial Catalog=CommentsDb;MultipleActiveResultSets=True" providerName="System.Data.SqlClient" />   1.5 Save All and press F5 to start the application. 1.6 Go to http://127.0.0.1:81/Comments which will redirect you through CommentsController to the Index View which looks like this:     Click Create new. In the Create view, add name and content and press Create.   1: // 2: // POST: /Comments/Create 3:  4: [HttpPost] 5: public ActionResult Create(Comment comment) 6: { 7: if (ModelState.IsValid) 8: { 9: db.CommentEntries.Add(comment); 10: db.SaveChanges(); 11: return RedirectToAction("Index"); 12: } 13:  14: return View(comment); 15: } 16:    The default View() is Index so that is the View you will come to. Looking like this: 1: // 2: // GET: /Comments/ 3: 4: public ActionResult Index() 5: { 6: return View(db.CommentEntries.ToList()); 7: } Resulting in the following screen dump (success!):   2. Now, go to the Management portal and create a new db.   2.1 With the new database created, click the DB icon in the leftmost menu. Then click the newly created database. Click DASHBOARD in the top menu. Finally click Connection strings in the right menu to get the connection string we need to add in our web.debug.config file.   2.2 Now, take a copy of the connection string earlier added to the web.config and paste it into web.debug.config in the connectionStrings node. Replace everything within “ ” in the copied connection string with what you got from SQL Azure. You will have something like this:   2.3 Rebuild the application, right click the cloud project and choose “Package…” (if you haven’t set up a publishing profile, which we will do in our next blog post).
Remember to choose the right config file, use debug for staging and release for production so your databases won’t collide. You should see something like this:   2.4 Go to Management Portal and click the Web Services menu, choose your service and click update in the bottom menu.   2.5 Link the newly created database to your application. Click the LINKED RESOURCES in the top menu and then click “Link” in the bottom menu. You should get something like this. 3. Alright then. Under the Dashboard you can find the link to your application. Click it to open it in a browser and then go to ~/Comments to try it out just the way we did locally. Success and end of this story!

    Read the article

  • How to properly add texture to multi-fixture/shape b2Body

    - by Blazej Wdowikowski
    Hello everyone, this is my first post here; I hope it will not be a false start. To start, I must say I worked through Part 1 of Ray's tutorial "How To Make A Game Like Fruit Ninja With Box2D and Cocos2D". But what about when I want to make a more complex body with a texture? Simple: just add n b2FixtureDefs to the same body. OK, but what about the texture? If I take the code from that tutorial, it only fills the last fixture. Probably it does not take every b2Vec2 point. I was right, it did not. So, a quick refactor, and from this -(id)initWithTexture:(CCTexture2D*)texture body:(b2Body*)body original:(BOOL)original { // gather all the vertices from our Box2D shape b2Fixture *originalFixture = body->GetFixtureList(); b2PolygonShape *shape = (b2PolygonShape*)originalFixture->GetShape(); int vertexCount = shape->GetVertexCount(); NSMutableArray *points = [NSMutableArray arrayWithCapacity:vertexCount]; for(int i = 0; i < vertexCount; i++) { CGPoint p = ccp(shape->GetVertex(i).x * PTM_RATIO, shape->GetVertex(i).y * PTM_RATIO); [points addObject:[NSValue valueWithCGPoint:p]]; } if ((self = [super initWithPoints:points andTexture:texture])) { _body = body; _body->SetUserData(self); _original = original; // gets the center of the polygon _centroid = self.body->GetLocalCenter(); // assign an anchor point based on the center self.anchorPoint = ccp(_centroid.x * PTM_RATIO / texture.contentSize.width, _centroid.y * PTM_RATIO / texture.contentSize.height); } return self; } I came up with this -(id)initWithTexture:(CCTexture2D*)texture body:(b2Body*)body original:(BOOL)original { int vertexCount = 0; //gather total number of b2Vect2 points b2Fixture *currentFixture = body->GetFixtureList(); while (currentFixture) { //new b2PolygonShape *shape = (b2PolygonShape*)currentFixture->GetShape(); vertexCount += shape->GetVertexCount(); currentFixture = currentFixture->GetNext(); } NSMutableArray *points = [NSMutableArray arrayWithCapacity:vertexCount]; // gather all the vertices from our Box2D shape b2Fixture *originalFixture = body->GetFixtureList(); while (originalFixture) { //new NSLog((NSString*)@"-"); b2PolygonShape *shape = (b2PolygonShape*)originalFixture->GetShape(); int currentVertexCount = shape->GetVertexCount(); for(int i = 0; i < currentVertexCount; i++) { CGPoint p = ccp(shape->GetVertex(i).x * PTM_RATIO, shape->GetVertex(i).y * PTM_RATIO); [points addObject:[NSValue valueWithCGPoint:p]]; } originalFixture = originalFixture->GetNext(); } if ((self = [super initWithPoints:points andTexture:texture])) { _body = body; _body->SetUserData(self); _original = original; // gets the center of the polygon _centroid = self.body->GetLocalCenter(); // assign an anchor point based on the center self.anchorPoint = ccp(_centroid.x * PTM_RATIO / texture.contentSize.width,_centroid.y * PTM_RATIO / texture.contentSize.height); } return self; } It was working for a simple two-fixture body like b2BodyDef bodyDef; bodyDef.type = b2_dynamicBody; bodyDef.position = position; bodyDef.angle = rotation; b2Body *body = world->CreateBody(&bodyDef); b2FixtureDef fixtureDef; fixtureDef.density = 1.0; fixtureDef.friction = 0.5; fixtureDef.restitution = 0.2; fixtureDef.filter.categoryBits = 0x0001; fixtureDef.filter.maskBits = 0x0001; b2Vec2 vertices[] = { b2Vec2(0.0/PTM_RATIO,50.0/PTM_RATIO), b2Vec2(0.0/PTM_RATIO,0.0/PTM_RATIO), b2Vec2(50.0/PTM_RATIO,30.1/PTM_RATIO), b2Vec2(60.0/PTM_RATIO,60.0/PTM_RATIO) }; b2PolygonShape shape; shape.Set(vertices, 4); fixtureDef.shape = &shape; body->CreateFixture(&fixtureDef); b2Vec2 vertices2[] = { 
b2Vec2(20.0/PTM_RATIO,50.0/PTM_RATIO), b2Vec2(20.0/PTM_RATIO,0.0/PTM_RATIO), b2Vec2(70.0/PTM_RATIO,30.1/PTM_RATIO), b2Vec2(80.0/PTM_RATIO,60.0/PTM_RATIO) }; shape.Set(vertices2, 4); fixtureDef.shape = &shape; body->CreateFixture(&fixtureDef); But if I try to put the second shape higher than the first, it starts looking weird; the texture goes crazy. Not to mention more complex shapes. What's more, if the shapes have one common point, the texture will not render for them at all. [For that I use PhysicsEditor, as in tutorial part 1.] BTW, I use PolygonSprite and, in the createWithWorld... method, other shapes. Uff.. Question: so my question is, why are the texture coords in such a mess? Is it my modified method, or just the wrong approach? Maybe I should remove duplicates from the points array?

    Read the article

  • Adaptive Layout for ADF Faces on Tablets

    - by Shay Shmeltzer
    In the 11.1.1.6 version of Oracle ADF we started adding specific features to the ADF Faces components so they'll work better on iPad tablets. In this entry I'm going to highlight some new capabilities that we have added to the 11.1.2.3 release. (Note: if you are still on the 11.1.1.* branch, you'll need to wait for 11.1.1.7 to get the features discussed here.) The two key additions in the 11.1.2.3 version compared to the 11.1.1.6 features for iPad support include: pagination for tables and adaptive flow layout. The pagination for tables is self-explanatory: basically, since iPads don't support scroll bars, we automatically switch the table component to render with a pagination toolbar that allows you to scroll a set of records or directly jump to a specific set. See the image below. The adaptive flow layout takes a bit more explanation. On regular desktops the UI that you usually build for ADF Faces screens is going to use stretch layout - meaning that it stretches to fill the whole area of the browser window. If you resize the browser window, the ADF Faces page resizes with it. If your browser window is too small, scroll bars will appear to allow you to scroll to areas that are "hidden". However on an iPad, this is probably not the type of layout you want - you would rather have a flow layout that eliminates scroll bars and instead allows you to scroll down the page. Basically you want the page to be sized based on its content, rather than based on the browser window size. In ADF Faces terminology this can be done with the dimensionsFrom property set to "children". And here comes the tricky part, since in the past (and also today) when you create an ADF Faces page and add a stretchable component to it, the dimensionsFrom property is set to parent by default. This will be true for other layout components you'll add as well. At this point you might be wondering "Does this mean I'll need to go to each of the layout components in my page and modify the dimensionsFrom property value to be children?" ADF Faces to the rescue... To eliminate the need to make these tedious manual changes, we introduced a new web.xml parameter "oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS" You'll basically add the following to your web.xml <context-param>    <description>      This parameter controls the default value for component geometry on the page.      Supported values are:        legacy - component attributes use the default values as specified for the attributes                 in the tag documentation (default value)        auto   - component attributes use the correct default value given the value of their                 parent component. For example, with this setting, the panelStretchLayout                 will use "auto" as the default value for its "dimensionsFrom" attribute                 instead of "parent".    </description>    <param-name>oracle.adf.view.rich.geometry.DEFAULT_DIMENSIONS</param-name>    <param-value>auto</param-value>  </context-param> Once you set this parameter, you only need to set the dimensionsFrom attribute for the top level layout component on your page, and the rest of the components will adjust accordingly. One trick that you can use, and that is used in the demo below, is to have the dimensionsFrom property depend on the type of client that accesses your application. This way you can switch between stretch or flow layout based on the device accessing your application.
For example I use the following in my page: <af:panelStretchLayout topHeight="70px" startWidth="0px" endWidth="0px"                                       dimensionsFrom="#{adfFacesContext.agent.capabilities['touchScreen'] eq 'none'  ? 'parent' : 'children' }"> Which results in a flow layout for iPads and a stretch layout for regular browsers. Check out the result in the demo below.

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [concepts]

    - by Your DisplayName here!
    ACS v2 supports two fundamental types of client identities – I like to call them “enterprise identities” (WS-*) and “web identities” (Google, LiveID, OpenId in general…). I also see two different “mind sets” when it comes to application design using the above identity types: Enterprise identities – often the fact that a client can present a token from a trusted identity provider means he is a legitimate user of the application. Trust relationships and authorization details have been negotiated out of band (often on paper). Web identities – the fact that a user can authenticate with Google et al does not necessarily mean he is a legitimate (or registered) user of an application. Typically additional steps are necessary (like filling out a form, email confirmation etc). Sometimes a mixture of both approaches also exists; for the sake of this post, I will focus on the web identity case. I got a number of questions about how to implement the web identity scenario and after some conversations it turns out it is the old authentication vs. authorization problem that gets in the way. Many people use the IsAuthenticated property on IIdentity to make security decisions in their applications (or deny user=”?” in ASP.NET terms). That’s a very natural thing to do, because authentication was done inside the application and we knew exactly when the IsAuthenticated condition is true. Been there, done that. Guilty ;) The fundamental difference between these “old style” apps and federation is that authentication is not done by the application anymore. It is done by a third party service, and in the case of web identity providers, in services that are not under our control (nor do we have a formal business relationship with these providers). Now the issue is, when you switch to ACS, and someone with a Google account authenticates, indeed IsAuthenticated is true – because that’s what he is! This does not mean that he is also authorized to use the application. It just proves he was able to authenticate with Google. Now this obviously leads to confusion. How can we solve that? Easy answer: We have to deal with authentication and authorization separately. Job done ;) For many application types I see this general approach: Application uses ACS for authentication (maybe both enterprise and web identities, we focus on web identities but you could easily have a dual approach here) Application offers to authenticate (or sign in) via web identity accounts like LiveID, Google, Facebook etc. Application also maintains a database of its “own” users. Typically you want to store additional information about the user. In such an application type it is important to have a unique identifier for your users (think the primary key of your user database). What would that be? Most web identity providers (and all the standard ACS v2 supported ones) emit a NameIdentifier claim. This is a stable ID for the client (scoped to the relying party – more on that later). Furthermore ACS emits a claim identifying the identity provider (like the original issuer concept in WIF). When you combine these two values together, you can be sure to have a unique identifier for the user, e.g.: Facebook-134952459903700\799880347 You can now check on incoming calls if the user is already registered and, if yes, swap the ACS claims with claims coming from your user database. One claim would maybe be a role like “Registered User” which can then be easily used to do authorization checks in the application.
The WIF claims authentication manager is a perfect place to do the claims transformation. If the user is not registered, show a registration form. Maybe you can use some claims from the identity provider to pre-fill form fields. (See here where I show how to use the Facebook API to fetch additional user properties.) After successful registration (which may include other mechanisms like a confirmation email), flip the bit in your database to make the web identity a registered user. This is all very theoretical. In the next post I will show some code and provide a download link for the complete sample. More on NameIdentifier Identity providers “guarantee” that the name identifier for a given user in your application will always be the same. But different applications (in the case of ACS – different ACS namespaces) will see different name identifiers. This is by design to protect the privacy of users because identical name identifiers could be used to create “profiles” of some sort for that user. In technical terms they create the name identifier approximately like this: name identifier = Hash((Provider Internal User ID) + (Relying Party Address)) Why is this important to know? Well – when you change the name of your ACS namespace, the name identifiers will change as well and you will lose your “connection” to your existing users. Oh and btw – never use any other claims (like email address or name) to form a unique ID – these can often be changed by users.
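    The follow-up post contains the actual code; purely as an illustration of the registration check described above (sketched here in Java rather than the .NET/WIF stack the post targets, and using a hypothetical in-memory user store rather than a real claims pipeline), the lookup-and-swap logic amounts to something like this:

        import java.util.HashMap;
        import java.util.Map;

        public class RegistrationCheck {

            // Hypothetical application user record kept in the application's own database.
            static class AppUser {
                final String role;                      // e.g. "Registered User"
                AppUser(String role) { this.role = role; }
            }

            // Hypothetical user store keyed by "identity provider + name identifier".
            private final Map<String, AppUser> userStore = new HashMap<>();

            // Builds the application's unique key, e.g. "Facebook-134952459903700\799880347".
            static String uniqueId(String identityProvider, String nameIdentifier) {
                return identityProvider + "\\" + nameIdentifier;
            }

            // Returns the application role to swap in for a registered user, or null if the
            // caller authenticated successfully but has not registered yet (show the form).
            String resolveRole(String identityProvider, String nameIdentifier) {
                AppUser user = userStore.get(uniqueId(identityProvider, nameIdentifier));
                return (user != null) ? user.role : null;
            }

            // Called after the registration form (and e.g. email confirmation) completes.
            void register(String identityProvider, String nameIdentifier) {
                userStore.put(uniqueId(identityProvider, nameIdentifier), new AppUser("Registered User"));
            }
        }

    The design point to notice is that the key combines the identity-provider claim with the NameIdentifier claim, so the same NameIdentifier issued by two different providers can never collide, and IsAuthenticated alone is never treated as proof of registration.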

    Read the article

  • Speakers, Please Check Your Time

    - by AjarnMark
    Woodrow Wilson was once asked how long it would take him to prepare for a 10 minute speech. He replied "Two weeks". He was then asked how long it would take for a 1 hour speech. "One week", he replied. 2 hour speech? "I'm ready right now," he replied.  Whether that is a true story or an urban legend, I don’t really know, but either way, it is a poignant reminder for all speakers, and particularly apropos this week leading up to the PASS Community Summit. (Cross-posted to the PASS Professional Development Virtual Chapter blog #PASSProfDev.) What’s the point of that story?  Simply this…if you have plenty of time to do your presentation, you don’t need to prepare much because it is easy to throw in more and more material to stretch out to your allotted time.  But if you are on a tight time constraint, then it will take significant preparation to distill your talk down to only the essential points. I have attended seven of the last eight North American Summit events, and every one of them has been fantastic.  The speakers are great, the material is timely and relevant, and the networking opportunities are awesome.  And every year, there is one little thing that just bugs me…speakers going over their allotted time.  Why does it bother me so?  Well, if you look at a typical schedule for a Summit, you’ll see that there are six or more sessions going on at the same time, and only 15 minutes to move from one to another.  If you’re trying to maximize your training dollar by attending something during every session time slot, and you don’t want to be the last guy trying to squeeze into the middle of the row, then those 15 minutes can be critical.  All the more so if you need to stop and use the bathroom or if you have to hike to the opposite end of the convention center.  It is really a bad position to find yourself having to choose between learning the last key points of Speaker A who is going over time, and getting over to Speaker B on time so you don’t miss her key opening remarks. And frankly, I think it is just rude.  Yes, the speakers are the function, after all they are bringing the content that the rest of us are paying to learn.  But it is also an honor to be given the opportunity to speak at a conference like this, and no one speaker is so important that the conference would be a disaster without him.  Speakers know when they submit their abstract, long before the conference, how much time they will have.  It has been the same pattern at the Summit for at least the last eight years.  Program Sessions are 75 minutes long.  Some speakers who have a good track record, and meet other qualifying criteria, are extended an invitation to present a Spotlight Session which is 90 minutes (a 20% increase).  So there really is no excuse.  It’s not like you were promised a 2-hour segment and then discovered when you got here that it was only 75 minutes.  In fact, it’s not like PASS advertised 90-minute sessions for everyone and then a select few were cut back to only 75.  As a speaker, you know well before you get here which type of session you are doing and how long it is, so as a professional, you should plan accordingly. Now you might think that this only happens to rookies, but I’ll tell you that some of the worst offenders are big-name veterans who draw huge attendance numbers for their sessions.  
Some attendees blow this off as, “Hey, it’s so-and-so, and I’d stay here for hours and listen to him/her talk.”  To which I would reply, “Then they should have submitted for a pre- or post-conference day-long seminar instead, but don’t try to squeeze your day-long talk into a 90-minute session.”  Now I don’t really believe that these speakers are being malicious or just selfishly trying to extend their time in the spotlight.  I think that most of them are merely being undisciplined and did not trim their presentation sufficiently, or allowed themselves to get off-track (often in a generous attempt to help someone in the audience with a question or problem that really should have been noted for further discussion after the session). So here is my recommendation…my plea, even.  TRIM THE FAT!  Now.  Before it’s too late.  Before you even get on the airplane, take a long, hard look at your presentation and eliminate some of the points that you originally thought you had to make, but in reality are not truly crucial to your main topic.  Delete a few slides.  Test your demos and have them already scripted rather than typing them during your talk.  It is better to cut out too much and end up with plenty of time at the end for Questions & Answers.  And you can always keep some notes on the stuff that you cut out so that you could fill it back in at the end as bonus material if you really do end up with a whole bunch of time on your hands.  But I don’t think you will.  And if you do, that will look even better to the audience as it will look like you’re giving them something extra that not every audience gets.  And they will thank you for that.

    Read the article

  • Web Services Example - Part 2: Programmatic

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 2 of our Web Service examples.  In this posting we'll take a look at using a SOAP Web Service but calling it programmatically in code and parsing the return into a bean. Getting the sample code: Just click here to download a zip of the entire project.  You can unzip it and load it into JDeveloper and deploy it either to iOS or Android.  Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed.  Note: This is a different workspace from WS-Part1. Defining our Web Service: Just like our first installment, we are using the same public weather forecast web service provided free by CDYNE Corporation.  Sometimes this service goes down so please ensure you know it's up before reporting this example isn't working. We're going to concentrate on the same two web service methods, GetCityForecastByZIP and GetWeatherInformation. Defining the Application: The application setup is identical to the Weather1 version.  There are some improvements to the data that is displayed as part of this example though.  Now we are able to show the associated image along with each forecast line when using the Forecast By Zip feature.  We've also added the temperature Hi/Low values into the UI. Summary of Fundamental Changes In This Application The most fundamental change is that we're binding the UI to the Bean Data Controls instead of directly to the Web Service Data Controls.  This gives us much more flexibility to control the shape of the data and allows us to do caching of the data outside of the Web Service.  This way if your application is, say, offline, your bean could still populate with data from a local cache and still show you some UI as opposed to completely failing because you don't have any connectivity. In general we promote this type of programming technique with ADF Mobile to insulate your application from any issues with network connectivity. What's different with this example? We have set up the Web Service DC the same way but now we have managed beans to process the data.  The following classes define the "Model" of our application:  CityInformation-CityForecast-Forecast, WeatherInformation-WeatherDescription.  We use WeatherBean for UI interaction with the model layer.  If you look through this example, we don't really do that much with the Java code except use it to grab the image URL from the weather description.  In a more realistic example, you might be using some JDBC classes to persist the data to a local database. To have a good architecture it is always good to keep your model and UI layers separate.  This gets muddied if you start to use bindings on a page invoked from Java code and this Java code starts to become your "model" layer.  Since bindings are page specific, your model layer starts to become entwined with your UI.  Not good!  To help with this, we've added some utility functions that let you invoke DC methods without having a binding and thus execute methods from your "model" layer without requiring a binding in your page definition.  We do this with the invokeDataControlMethod method of the AdfmfJavaUtilities class.  An example of this method call is available in line 95 of WeatherInformation.java and line 93 of CityInformation.java. What's a GenericType? Because Web Service Data Controls (and also URL Data Controls AKA REST) use generic name/value pairs to define their structure and don't have strongly typed objects, these are actually stored internally as GenericType objects.
The GenericType class is simply a property map of name/value pairs that can be hierarchical.  There are methods like getAttribute where you supply the index of the attribute or its string property name.  Why is this important to know?  Because invokeDataControlMethod returns GenericType objects and developers either need to parse these GenericType objects themselves or use one of our helper functions. GenericTypeBeanSerializationHelper This class does exactly what its name implies.  It's a helper class for developers to aid in serialization of GenericTypes to/from Java objects.  This is extremely handy if you have a large GenericType object with many attributes (or you're just lazy like me!) and you just want to parse it out into a real Java object you can use more easily.  Here you would use the fromGenericType method.  This method takes the class of the Java object you wish to return and the GenericType as parameters.  The method then parses through each attribute in the GenericType and uses reflection to set that same attribute in the Java class.  Then the method returns that new object of the class you specified.  This is obviously very handy to avoid a lot of shuffling code between GenericType and your own Java classes.  The reverse method, toGenericType, is also available when you want to go the other way.  In this case you supply the string that represents the package location in the DataControl definition (Example: "MyDC.myParams.MyCollection") and then pass in the Java object you have that holds the data, and a GenericType is returned to you.  Again, it will use reflection to calculate the attributes that match between the Java class and the GenericType and call the getters/setters on those. Issues and Possible Improvements: In the next installment we'll show you how to make your web service calls asynchronously so your UI will fill dynamically when the service call returns, but in the meantime you show the data you have locally in your bean, fed from some local cache.  This gives your users instant delivery of some data while you fetch other data in the background.
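    To make the two helpers above concrete, here is a rough sketch (not taken from the downloadable sample) of calling GetCityForecastByZIP through invokeDataControlMethod and converting the result into the sample's CityInformation bean. The data control name ("WeatherDC"), the parameter name ("ZIP"), the exact overload of invokeDataControlMethod and the import paths are assumptions from memory rather than quotes from the project, so verify them against the sample and the ADF Mobile javadoc before reusing this:

        import java.util.ArrayList;
        import java.util.List;

        import oracle.adfmf.framework.api.AdfmfJavaUtilities;                 // assumed package path
        import oracle.adfmf.framework.api.GenericTypeBeanSerializationHelper; // assumed package path
        import oracle.adfmf.util.GenericType;                                 // assumed package path

        // Stand-in for the sample's CityInformation bean; the real class declares getters/setters
        // whose names match the attributes of the GenericType returned by the service.
        class CityInformation { /* fields/getters/setters omitted in this sketch */ }

        public class ForecastFetcher {

            // Calls the web service data control operation without needing a page binding.
            public CityInformation fetchForecast(String zip) throws Exception {
                List names  = new ArrayList();  // parameter names as declared on the DC operation
                List params = new ArrayList();  // parameter values
                List types  = new ArrayList();  // parameter types

                names.add("ZIP");               // assumed parameter name on GetCityForecastByZIP
                params.add(zip);
                types.add(String.class);

                // "WeatherDC" is a placeholder for the data control name in your workspace.
                GenericType result = (GenericType) AdfmfJavaUtilities.invokeDataControlMethod(
                        "WeatherDC", null, "GetCityForecastByZIP", names, params, types);

                // Let the helper copy matching attributes into the strongly typed bean via reflection.
                return (CityInformation) GenericTypeBeanSerializationHelper.fromGenericType(
                        CityInformation.class, result);
            }
        }

    Because the AMX pages bind to the bean rather than to the web service data control, swapping this call for a cached read later on (as the next installment promises) would not touch the UI at all.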

    Read the article

< Previous Page | 128 129 130 131 132 133 134 135 136 137 138 139  | Next Page >