Search Results

Search found 15912 results on 637 pages for 'cross language'.


  • Simple tips to design a Customer Journey Map

    - by Isabel F. Peñuelas
    “A model can abstract to a level that is comprehensible to humans, without getting lost in details.” -The Unified Modeling Language Reference Manual.

    Inception using Post-it, Storyboards, Lego or Mind-Mapping Techniques

    The first step in a Customer Experience project is to describe customer interactions by creating a customer journey map. Modeling is never easy, so to succeed in this effort it is very helpful if your CX team has some “abstract thinking” skills. It also helps to bring in a business service design practice offered by an interactive agency to lead your inception process. Initially, you may start with a free discussion using Post-it cards, storyboards, or even Lego or any other brainstorming technique you like. This will help you get your mind into the path the customer follows to purchase your product or to consume any business service you actually offer to your customers, or plan to offer in the near future. (from www.servicedesigntools.org)

    Colorful mind maps are very useful to document and share meeting ideas. Some mind-mapping software providers such as ThinkBuzzan provide trial versions, and you will find more mind-mapping options in this post by Mashable. Finally, to produce a quick one, I recommend Wise, an entirely online mind-mapping service. In my view, the best results in terms of communication will always come from an artistic hand-made drawing. (Customer Experience Mind Map Example)

    Making your first Customer Journey Map

    To add some more formalization to your thoughts, there is a wide range of offerings for designing Customer Journey Maps. A customer journey map can be represented as an oriented graph in which each step is followed by the next. The one below is the simplest Customer Journey you can draw: nothing more than a couple of pictures, numbers and lines to lay out the sequence of customer steps in the purchase process. (Very simple Customer Journey for Social Mobile Shopping) There are many more sophisticated Customer Journey templates available on the Web in a variety of styles, for example this one with a focus on underlining the emotional experience, or this other worksheet template. Representing the different interaction devices on the vertical axis, and touchpoints / requirements and existing gaps horizontally, is today's most common format for Customer Journeys.

    From Customer Journey Maps to CX Technology Adoption Plans

    Once you have your map ready, you can start to identify the IT infrastructure requirements for your CX project. By analyzing customer problems and improvement opportunities with the maps, you will then identify the technology gaps and the new investment requirements in your IT infrastructure. Going deeper step by step, from the more abstract to the more concrete, is the best guarantee that you will make the right IT investment decisions. Remember to keep your initial customer journey map in your pocket at every one of your CX project meetings - that's your map to success!

    Read the article

  • Analysis Services Tabular books #ssas #tabular

    - by Marco Russo (SQLBI)
    Many people are looking for books about Analysis Services Tabular. Today there are two books available, and they complement each other:

    - Microsoft SQL Server 2012 Analysis Services: The BISM Tabular Model by Marco Russo, Alberto Ferrari and Chris Webb
    - Applied Microsoft SQL Server 2012 Analysis Services: Tabular Modeling by Teo Lachev

    The book I wrote with Alberto and Chris is a complete guide to creating tabular models and has good coverage of DAX, including how to use it to enrich a semantic model with calculated columns and measures and how to use it to query a Tabular model. In my experience, DAX as a query language is a very interesting option for custom analytical applications that require a fast calculation engine, or simply for standard reports running in Reporting Services and accessing a Tabular model. You can freely preview the table of contents and read some excerpts from the book on Safari Books Online. The book is in printing and should ship by mid-July, so it will very soon be on the shelves of all the people who already preordered it!

    Teo Lachev's book covers the full spectrum of Tabular models provided by Microsoft: starting with self-service BI, you have users creating a model with PowerPivot for Excel, publishing it to PowerPivot for SharePoint and exploring data by using Power View; then, the PowerPivot for Excel model can be imported into a Tabular model and published in Analysis Services, adding more control over the model through row-level security and partitioning, for example. Teo's book follows a step-by-step approach describing each feature, which is very good for a beginner who is new to PowerPivot and/or to BISM Tabular. If you need to get the big picture and to start using the products that are part of the new Microsoft wave of BI products, Teo's book is for you.

    After you read Teo's book, or if you already have a certain confidence with PowerPivot or BISM Tabular and you want to go deeper into internals, best practices and design patterns in just BISM Tabular, then our book is a suggested read: it contains several chapters about DAX, includes discussions about the new opportunities in data model design offered by Tabular models, and also provides examples of optimizations you can obtain in DAX and best practices in data modeling and queries. It might seem strange that an author writes a review of a book that might seem to compete with his own, but in reality these two books complement each other and are not alternatives. If you have any doubt, buy both: you will not be disappointed! Moreover, Amazon usually offers you a deal to buy three books, including Visualizing Data with Microsoft Power View, another good choice for getting all the details about Power View.

    Read the article

  • IIS Logfile Visualization with XNA

    - by BobPalmer
    In my office, I have a wall-mounted monitor whose whole purpose in life is to display perfmon stats from our various servers. And on a fairly regular basis, I have folks walk by asking what the lines mean. After providing the requisite explanation about CPU utilization, disk I/O bottlenecks, etc., this is usually followed by some blank stares from the user in question, and a distillation of all of our engineering wizardry down to the phrase 'So when the red line goes up that's bad then?'

    This of course would not do. So I talked to my friends and our network admin about an option to show something more eye-catching and visual, with which we could catch at a glance a feel for what was up with our site. He initially pointed me to a video showing GLTail and Chipmunk done in Ruby. Realizing both that this was awesome and that I needed an excuse to do something in XNA, I decided to knock out a proof of concept for something very similar, but with a few tweaks.

    Here's a link to a video of the current prototype: http://www.youtube.com/watch?v=jM_PWZbtH2I

    Essentially this app opens up a log file (even an active one) and begins pulling out the lines of text. (Here's a good Code Project link that covers how to do tail reading from an active text file: http://www.codeproject.com/KB/files/tail.aspx - a minimal sketch of this approach appears at the end of this post.) As new data is added, a bubble is generated in the application - a GET statement comes from the left, and a POST from the right. I then run it through a series of expression checkers, and based on the kind of statement and the pattern, a bubble of an appropriate color is generated. For example, if I get a 500, a huge red bubble pops out. Others are based on the part of the system the page is from - i.e. green bubbles are from our claims management subsystem, and blue bubbles are from the pages our scheduling staff use to schedule patients. Others include the purple bubbles for security and login, and yellow bubbles for some miscellaneous pages. The little grey bubbles represent things like images, JS, CSS, etc. - and their small size makes them work like grease to keep the larger page bubbles moving. The app is also smart enough that if it is starting to bog down with handling the physics and interactions, it will suspend new bubbles until enough have dropped off that performance can resume (you can see this slight stuttering in the sample video).

    The net result is that anyone will be able to look up at the wall monitor and instantly get a quick feel for how things are going on the floor. Website slow? You can get a feel for both volume and utilized modules with one glance. Website crashing? Look for a wall of giant red bubbles. No activity at all? Maybe the site is down. Now couple this with utilization within a farm, and cross-reference it with a second app showing the same kind of data from your SQL database...

    As for the app itself, it's a Windows XNA project with the code in C#. The physics are handled by the Farseer physics engine for XNA (http://www.codeplex.com/FarseerPhysics), which is just pure goodness. The samples are great, and I had the app up and working in two evenings (half of that was fine tuning, and the other was me coding with a kid in my lap). My next steps include wiring this to SQL (I have some ideas...), and adding a nice configuration module. For example, you could use polygons, etc. to tie to your regex - or more entertaining things like having a little human ragdoll to represent a user login.

    Once that's wrapped up and I have a chance to complete some hardening, I will be releasing the whole thing into the wild as open source. Feel free to ping me if you have any questions! -Bob
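    For anyone curious what the tail-reading and classification piece might look like, here is a minimal C# sketch of the approach described above. This is not Bob's code; the log path, regex patterns and the SpawnBubble placeholder are purely illustrative, and the polling loop follows the pattern from the Code Project article linked in the post:

        using System;
        using System.IO;
        using System.Text.RegularExpressions;
        using System.Threading;

        class LogTailDemo
        {
            static void Main()
            {
                // Hypothetical IIS log path - replace with a real one.
                string path = @"C:\inetpub\logs\LogFiles\W3SVC1\current.log";
                Regex serverError = new Regex(@"\s500\s");   // HTTP 500 -> big red bubble
                Regex postRequest = new Regex(@"\sPOST\s");  // POST -> enters from the right

                using (FileStream stream = new FileStream(path, FileMode.Open,
                           FileAccess.Read, FileShare.ReadWrite))
                using (StreamReader reader = new StreamReader(stream))
                {
                    long lastOffset = stream.Length;          // start at the end, like tail -f
                    while (true)
                    {
                        Thread.Sleep(250);
                        if (stream.Length == lastOffset) continue;   // nothing new yet

                        stream.Seek(lastOffset, SeekOrigin.Begin);
                        reader.DiscardBufferedData();
                        string line;
                        while ((line = reader.ReadLine()) != null)
                        {
                            if (serverError.IsMatch(line))
                                SpawnBubble("red", large: true);
                            else
                                SpawnBubble(postRequest.IsMatch(line) ? "post" : "get", large: false);
                        }
                        lastOffset = stream.Position;
                    }
                }
            }

            // Stand-in for whatever the XNA scene would do with a classified line.
            static void SpawnBubble(string kind, bool large)
            {
                Console.WriteLine("bubble: " + kind + (large ? " (large)" : ""));
            }
        }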

    Read the article

  • Unable to update/ install any files [closed]

    - by Surya
    Possible Duplicate: “Problem with MergeList” error when trying to do an update

    I just installed Ubuntu 12.04 on my Lenovo G570 laptop. First I got an error during installation (I don't know what it was), so I restarted the system and the second time it went well. After installing, the problems started. There was an error with "Language recognition" and I tried to fix it, but that didn't work. I tried to install powertop to check the status of power management. At the terminal: sudo apt-get install powertop. This is the error I got:

    surya@surya-Lenovo-G570:~$ sudo apt-get powertop install
    [sudo] password for surya:
    E: Invalid operation powertop
    surya@surya-Lenovo-G570:~$ sudo apt-get install powertop
    Reading package lists... Error!
    E: Encountered a section with no Package: header
    E: Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages
    E: The package lists or status file could not be parsed or opened.
    surya@surya-Lenovo-G570:~$ ^C
    surya@surya-Lenovo-G570:~$ ^C
    surya@surya-Lenovo-G570:~$ ^C
    surya@surya-Lenovo-G570:~$

    I downloaded the Google Chrome .deb package and tried to install it, but that is not working either. The Software Center opens but does not load. There was a notification in the status bar which says: "An error occurred, please run the package manager from the right-click menu ... E: Encountered a section with no Package: header E: Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages". Copy and paste from the terminal is not really working: when I press Ctrl+C it shows ^C in the terminal, but nothing gets copied. The most important error: I cannot see the "chip" icon in the status bar that would let me install the proprietary drivers for my ATI card. The interesting part is that powertop worked well on the live CD and it even detected my ATI card.

    Update: When I opened "Software Up to Date", it showed this error: Could not initialize the package information. An unresolvable problem occurred while initializing the package information. Please report this bug against the 'update-manager' package and include the following error message: 'E:Encountered a section with no Package: header, E:Problem with MergeList /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages, E:The package lists or status file could not be parsed or opened.'

    My laptop details: Lenovo G570; Intel 2nd-gen Core i5 processor; 4 GB DDR3 RAM; Intel integrated graphics + AMD Radeon HD 6370M 1 GB graphics. I need help ASAP.
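    The error itself points at a corrupted package index file. A commonly suggested recovery (not taken from the question above, so treat it as a hint rather than a confirmed fix) is to remove the offending list file named in the message and rebuild the index:

        sudo rm /var/lib/apt/lists/extras.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages
        sudo apt-get update

    After that, apt-get install and the Software Center should be able to parse the package lists again.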

    Read the article

  • Announcement: Employee Info Starter Kit (v5.0) is Released

    - by Mohammad Ashraful Alam
    Ever wanted to have a simple jQuery menu bound to an ASP.NET web site map file? Ever wanted to have cool CSS design applied to your ASP.NET data-bound controls? Ever wanted to let Visual Studio generate logical layers for you, which can be easily tested, customized and bound to ASP.NET data controls? If your answer to the questions above is 'yes', then you will probably be happy to try out the latest release (v5.0) of the Employee Info Starter Kit, which is intended to address the different types of real-world challenges faced by web application developers when performing common CRUD operations. Using a single database table 'Employee', the current release illustrates how to utilize Microsoft ASP.NET 4.0 Web Form Data Controls, Entity Framework 4.0 and Visual Studio 2010 effectively in that context.

    Employee Info Starter Kit is an open source ASP.NET project template that is highly influenced by the 'Pareto Principle', or 80-20 rule: it aims to let a web developer gain 80% of the productivity with 20% of the effort in terms of learning curve and production. The project template, titled "Employee Info Starter Kit", was initially hosted on Microsoft Code Gallery and has been downloaded 150,000+ times since then. The latest version of this starter kit is hosted on CodePlex.

    Release Highlights

    User End Functional Specification
    The user-end functionality of this starter kit is pretty simple and straightforward, focused on performing CRUD operations on employee records, as described below:
    - Creating a new employee record
    - Reading existing employee records
    - Updating an existing employee record
    - Deleting existing employee records

    Architectural Overview
    - Simple 3-layer architecture (presentation, business logic and data access layer)
    - ASP.NET web form based user interface
    - Built-in code generators for the logical layers, implemented in the Visual Studio default template engine (T4)
    - Built-in Entity Framework entities as business entities (aka data containers)
    - Data Mapper design pattern based Data Access Layer, implemented in C# and Entity Framework (a rough sketch follows after the Getting Started Guide)
    - Domain Model design pattern based Business Logic Layer, implemented in C#
    - Object Model for cross-cutting concerns (such as validation, logging, exception management)

    Minimum System Requirements
    - Visual Studio 2010 (Web Developer Express Edition) or higher
    - SQL Server 2005 (Express Edition) or higher

    Technology Utilized
    Programming languages/scripts: browser side: JavaScript; web server side: C#; code generation template: T4 Template.
    Frameworks: .NET Framework 4.0; JavaScript framework: jQuery 1.5.1; CSS framework: 960 grid system.
    .NET Framework components: .NET Entity Framework, .NET Optional/Named Parameters (new in .NET 4.0), .NET Tuple (new in .NET 4.0), .NET Extension Method, .NET Lambda Expressions, .NET Anonymous Type, .NET Query Expressions, .NET Automatically Implemented Properties, .NET LINQ, .NET Partial Classes and Methods, .NET Generic Type, .NET Nullable Type, ASP.NET Meta Description and Keyword Support (new in .NET 4.0), ASP.NET Routing (new in .NET 4.0), ASP.NET Grid View (CSS support for sorting - new in .NET 4.0), ASP.NET Repeater, ASP.NET Form View, ASP.NET Login View, ASP.NET Site Map Path, ASP.NET Skin, ASP.NET Theme, ASP.NET Master Page, ASP.NET Object Data Source, ASP.NET Role Based Security.

    Getting Started Guide
    Seeing Employee Info Starter Kit in action is pretty easy! Download the latest version, extract the file, and from the extracted folder click the C# project file (Eisk.Web.csproj) to open it in Visual Studio 2010. Hit Ctrl+F5!
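    To make the layering described above concrete, here is a rough C# sketch of how a Data Mapper style data access layer and a thin business logic facade over an Employee entity might look. This is not code from the starter kit (the kit generates its layers with T4 over Entity Framework 4.0); the class names are hypothetical and the sketch uses the later DbContext API purely for brevity:

        using System.Collections.Generic;
        using System.Data.Entity;
        using System.Linq;

        // Business entity (in the starter kit, entities come from Entity Framework).
        public class Employee
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class EmployeeContext : DbContext
        {
            public DbSet<Employee> Employees { get; set; }
        }

        // Data access layer: a Data Mapper style class that hides EF from callers.
        public class EmployeeDataMapper
        {
            public IList<Employee> GetAll()
            {
                using (var db = new EmployeeContext())
                    return db.Employees.OrderBy(e => e.Name).ToList();
            }

            public void Create(Employee employee)
            {
                using (var db = new EmployeeContext())
                {
                    db.Employees.Add(employee);
                    db.SaveChanges();
                }
            }
        }

        // Business logic layer: ASP.NET data controls bind to this facade,
        // never to Entity Framework directly.
        public class EmployeeFacade
        {
            private readonly EmployeeDataMapper mapper = new EmployeeDataMapper();

            public IList<Employee> GetAllEmployees() { return mapper.GetAll(); }
            public void AddEmployee(Employee employee) { mapper.Create(employee); }
        }

    An ObjectDataSource on a web form would then point at a facade like this for its select and insert methods, keeping the presentation layer unaware of the persistence details.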
    The current release (v5.0) of Employee Info Starter Kit is properly packaged, fully documented and well tested. If you want to learn more about it in detail, just check the following links: Release Home Page, Installation Walkthrough, Hands-on Coding Walkthrough, Technical Reference. Enjoy!

    Read the article

  • Knowledge Pathways Designer - Recommended Settings

    - by ted.henson
    The General page of the Options dialog box contains the application preferences for Knowledge Pathways Designer. It is recommended that you leave certain settings as they are, unless you have a specific reason for changing them. The following are a few of the settings on the General page, with an explanation of the recommended setting. They are in the order they appear on the page:

    Allow version 2.0 style links: This option should remain disabled unless you are using content that was created with version 2.0 of Knowledge Pathways and you want the same linking functionality that existed in that version. This feature enables you to reuse parts of titles that contain no AUs. However, keep in mind that this type of link is not a true link, but a cross between a copy and a link. To create a 2.0 style link, you drag and drop sections between titles. You can only create 2.0 style links to sections that belong to the Title AU. When creating a version 2.0 style link, your mouse pointer will change to indicate a 2.0 link is being created.

    Confirm deletion of outline items and Confirm deletion of titles: It is recommended that these options remain enabled to avoid deleting something by accident.

    Display tracking data loss warning when opening a published title: It is recommended that this option be enabled so you will receive the warning message when you open the development copy of a title, reminding you of the implications of your changes.

    Copy files when converting a Section to an Assignable Unit: This option should remain enabled unless you have a specific reason for not copying the files. If it is disabled, you will (in effect) lose your content files upon converting, because they will not be copied to the new AU directory on the content root. In that case, you would need to use Windows Explorer to copy your files manually.

    Working with Spelling Options
    All of the spelling options are enabled by default. Your design team can review these options to determine whether you want to make changes, depending upon your specific needs.

    Understanding Dictionary Options
    You should leave the dictionary options as they are, unless you have a specific reason for changing them. While you can delete the user (customizable) dictionary, doing so is not recommended.

    Setting Check In/Check Out Options
    The ability to check in and check out titles and AUs will impact the efficiency of your design team. Decide what your check in and check out processes are before you start developing titles. The Check In/Check Out page of the Options dialog box contains two options that affect what happens when you open a title using the Open Title dialog box. Both of these options are enabled by default and are described below:

    Check Out for editing enabled: This option ensures that the Check Out for editing option will be selected when you open the development copy of a title from the Open Title dialog box. If this option is disabled, you must select the Check Out for editing option every time you want to check out a title for editing.

    Attempt to Check Out for entire branch: When this option is enabled, Designer checks out the selected title and all AUs and sections that are part of that title, provided they are available for check out. If this option is disabled, you will only check out the Title AU and anything that belongs to that Title AU (e.g., sections, questions, etc.), but not other AUs.

    The Check In/Check Out page of the Options dialog box also contains options that control what happens when you close a title. You can choose one option in the Check In when Closing a Title area. The option selected is a matter of preference, and you should determine which option is most appropriate for your design team.

    Read the article

  • A Guide to Fusion SCM at Oracle OpenWorld 2012

    - by Pam Petropoulos
    Are you attending next week’s Oracle OpenWorld 2012 conference? Then you won’t want to miss the Fusion SCM activities and customer presenters from leading companies like Boeing and Fideltronik. Below you’ll find a day by day guide of the various Fusion SCM sessions, demos and activities during OpenWorld 2012, September 30 – October 4 in San Francisco, CA. Tuesday, October 2 All of the Fusion SCM sessions during OpenWorld will take place in various rooms at Moscone West, a convenience you are sure to appreciate, as will your feet.   The first session at 10:15 – 11:15 am (Moscone West, Room 2006), entitled “Oracle Fusion Supply Chain Management: Overview, Strategy, Customer Experiences, and Roadmap”, provides an overview of Fusion Supply Chain Management applications and will discuss Fusion SCM strategy, future roadmap, and highlights of customer examples. The next session at 11:45 am – 12:45 pm (Moscone West, Room 2022), entitled “Enabling Trusted Enterprise Product Data with Oracle Fusion Product Hub”, may be the session for you if you’re struggling with achieving consistent, high-quality product data that provides significant business value. This session will discuss how Oracle Fusion Product Hub and Oracle Enterprise Data Quality can help you to achieve this vision. A customer presenter from Fideltronik will share their experiences with Oracle Fusion Product Hub. At the end of the day unwind at the Supply Chain Management customer reception from 6:00 – 8:00 pm at the Roe Lounge, located at 651 Howard Street. Registration is required. Click here for details. Wednesday, October 3 Wednesday is a busy day with three Fusion SCM sessions on the agenda. Start your day at 10:15 am at the “Oracle Fusion Supply Chain Management: Customer Adoption and Experiences” session (Moscone West, Room 2003).  This must see session will showcase customer speakers from The Boeing Company and Fideltronik, each of whom will share their company’s experiences in selecting and implementing Fusion SCM applications. If you’re wondering how Fusion SCM applications can co-exist with your existing Oracle applications, then you’ll want to sit in on the 3:30 pm session entitled “Oracle Fusion Supply Chain Management: Coexistence with Other Oracle Applications” (Moscone West, Room 2003). Stick around until 5:00 pm for the final Fusion SCM session of the day entitled “Responsive Fulfillment with Oracle Fusion Supply Chain Management” (Moscone West, Room 2001).  This session will showcase Oracle Fusion Distributed Order Orchestration and Oracle Fusion Global Order Promising and how they are changing the way companies manage order fulfillment in environments. In addition to discussing the current business challenges, product capabilities, value propositions, industry applicability, and future roadmap this session will also feature a customer presenter from The Boeing Company. Thursday, October 4 If you are a retail customer we highly recommend that you attend the final Fusion SCM session of the week at 12:45 pm, entitled “Multichannel Fulfillment Excellence in the Direct-to-Consumer Market” (Moscone West, Room 2024).  Retailers will learn how they can transform their supply chains to meet the ever-increasing demands of buy anywhere/get anywhere cross-channel requirements with Fusion Distributed Order Orchestration and Oracle Fusion Product Hub. Throughout the week, you’ll also want to visit the Fusion SCM demo pods at the Demogrounds in Moscone West so you can see demos of these Fusion applications. 
Visit pod W-005 for Fusion Distributed Order Orchestration, W-008 for Fusion Inventory and Cost Management, and W-006 for Fusion Product Hub. Click here for the Demogrounds map. A reminder that you can also pre-register for these sessions to secure your spot. Visit the Schedule Builder to pre-enroll for these sessions. Finally, you'll also want to check out the Fusion SCM FocusOn document which includes additional keynote and general sessions that you may want to attend throughout the week.   We look forward to seeing you in San Francisco next week.

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object.

    You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.

    The Long Road To Stubs

    This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled:

    4631488 lib/Makefile is too patient: .WAITs should be reduced

    This CR encapsulates a number of chronic issues with Solaris builds:

    We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware.

    Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel.

    To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand-written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach:

    To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available.

    The analysis will take time, and remember that we're constantly trying to make builds faster, not slower.

    By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand-written rules described above. The hand-written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand-written approach.

    Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.

    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations:

    The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime.

    If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object.

    In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object?

    It ought to be very fast to build stub objects, as there are no input objects to process.

    Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel.

    When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following:

    Present the same set of global symbols, with the same ELF versioning, as the real object.

    Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment.

    Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object.

    If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.

    We imagined the stub library feature working as follows:

    A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored.

    The extra information needed (function or data, size, and bss details) would be added to the mapfile.

    When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match.

    In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:

        # DATA(i386) __iob 0x3c0
        # DATA(amd64,sparcv9) __iob 0xa00
        # DATA(sparc) __iob 0x140
        iob;

    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan:

    A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code.

    Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface.

    By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues:

    The use of stylized comments was fine for a prototype, but not close to professional enough for a shipping product. The idea of having to document and support it was a large concern.

    The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code.

    A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work.

    A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one.

    At that point, we needed to apply this prototype to building Solaris. As you might imagine, modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.

    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward.

    The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010:

    6916788 ld version 2 mapfile syntax
    PSARC/2009/688 Human readable and extensible ld mapfile syntax

    In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010:

    6916796 OSnet mapfiles should use version 2 link-editor syntax

    That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this:

    We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:

        % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
            -ztext -zdefs -Bdirect ...

        real        0.019708910
        user        0.010101680
        sys         0.008528431

    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished.

    Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.

    Without stubs, the following gives a simplified high level view of how Solaris is built:

    An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution.

    A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area.

    Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built.

    Subsequent passes run lint, and do packaging.

    Given this structure, the additions to use stub objects are:

    A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them.

    A new target is added to the Makefiles called stubinstall, which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule.

    The setup rule runs stubinstall over the entire lib subtree as part of its initialization.

    All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization.

    There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them.

    After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down:

    Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel.

    It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical.

    Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up.

    Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust.

    And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with:

    6993877 ld should produce stub objects
    PSARC/2010/397 ELF Stub Objects

    followed by the work to convert the ON consolidation in snv_161 (February 2011) with:

    7009826 OSnet should use stub objects
    4631488 lib/Makefile is too patient: .WAITs should be reduced

    This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.

    Conclusions and Looking Forward

    Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Develop DBA skills with MySQL for Database Administrators course

    - by Antoinette O'Sullivan
    MySQL is the world's number one open source database and the number one database for the Web. Join top companies by developing your MySQL Database Administrator skills. The MySQL for Database Administrators course is for DBAs and other database professionals who want to install the MySQL Server, set up replication and security, perform database backups and performance tuning, and protect MySQL databases. You can take this 5-day course as:

    Training on Demand: Start training within 24 hours of registration. You will follow the lecture material via streaming video and perform hands-on activities at a date and time that suits you.

    Live-Virtual Event: Take this instructor-led course from your own desk. Choose from the 19 events currently on the schedule and find an event that suits you in terms of timezone and date.

    In-Class Event: Travel to an education center. Here is a sample of events on the schedule:

    Location                     | Date             | Delivery Language
    Mechelen, Belgium            | 25 February 2013 | English
    London, England              | 26 November 2012 | English
    Nice, France                 | 3 December 2012  | French
    Paris, France                | 11 February 2013 | French
    Budapest, Hungary            | 26 November 2012 | Hungarian
    Belfast, Ireland             | 24 June 2013     | English
    Milan, Italy                 | 14 January 2013  | Japanese
    Rome, Italy                  | 18 February 2013 | Japanese
    Amsterdam, Netherlands       | 24 June 2013     | Dutch
    Nieuwegein, Netherlands      | 8 April 2013     | Dutch
    Warsaw, Poland               | 10 December 2012 | Polish
    Lisbon, Portugal             | 21 January 2013  | European Portuguese
    Porto, Portugal              | 21 January 2013  | European Portuguese
    Barcelona, Spain             | 4 February 2013  | Spanish
    Madrid, Spain                | 21 January 2013  | Spanish
    Nairobi, Kenya               | 26 November 2012 | English
    Johannesburg, South Africa   | 9 December 2013  | English
    Tokyo, Japan                 | 10 December 2012 | Japanese
    Singapore                    | 28 January 2013  | English
    Brisbane, Australia          | 10 December 2012 | English
    Edmonton, Canada             | 7 January 2013   | English
    Montreal, Canada             | 28 January 2013  | English
    Ottawa, Canada               | 28 January 2013  | English
    Toronto, Canada              | 28 January 2013  | English
    Vancouver, Canada            | 7 January 2013   | English
    Mexico City, Mexico          | 10 December 2012 | Spanish
    Sao Paolo, Brazil            | 10 December 2012 | Brazilian Portuguese

    For more information on this course or on other courses on the authentic MySQL curriculum, go to http://oracle.com/education/mysql. Note that many organizations deploy both Oracle Database and MySQL side by side to serve different needs, and as a database professional you can find training courses on both topics at Oracle University! Check out the upcoming Oracle Database training courses and MySQL training courses. Even if you're only managing Oracle Databases at this point in time, getting familiar with MySQL will broaden your career path as job demand grows.

    Read the article

  • on coding style

    - by user12607414
    I vastly prefer coding to discussing coding style, just as I would prefer to write poetry instead of talking about how it should be written. Sometimes the topic cannot be put off, either because some individual coder is messing up a shared code base and needs to be corrected, or (worse) because some officious soul has decided, "what we really need around here are some strongly enforced style rules!" Neither is the case at the moment, and yet I will venture a post on the subject. The following are not rules, but suggested etiquette. The idea is to allow a coherent style of coding to flourish safely and sanely, as a humane, inductive, social process. Maxim M1: Observe, respect, and imitate the largest-scale precedents available. (Preserve styles of whitespace, capitalization, punctuation, abbreviation, name choice, code block size, factorization, type of comments, class organization, file naming, etc., etc., etc.) Maxim M2: Don't add weight to small-scale variations. (Realize that Maxim M1 has been broken many times, but don't take that as license to create further irregularities.) Maxim M3: Listen to and rely on your reviewers to help you perceive your own coding quirks. (When you review, help the coder do this.) Maxim M4: When you touch some code, try to leave it more readable than you found it. (When you review such changes, thank the coder for the cleanup. When you plan changes, plan for cleanups.) On the Hotspot project, which is almost 1.5 decades old, we have often practiced and benefited from such etiquette. The process is, and should be, inductive, not prescriptive. An ounce of neighborliness is better than a pound of police-work. Reality check: If you actually look at (or live in) the Hotspot code base, you will find we have accumulated many annoying irregularities in our source base. I suppose this is the normal condition of a lived-in space. Unless you want to spend all your time polishing and tidying, you can't live without some smudge and clutter, can you? Final digression: Grammars and dictionaries and other prescriptive rule books are sometimes useful, but we humans learn and maintain our language by example not grammar. The same applies to style rules. Actually, I think the process of maintaining a clean and pleasant working code base is an instance of a community maintaining its common linguistic identity. BTW, I've been reading and listening to John McWhorter lately with great pleasure. (If you end with a digression, is it a tail-digression?)

    Read the article

  • Ubuntu Preseed set Norwegian Keyboard?

    - by Vangelis Tasoulas
It's been a couple of days now that I have been trying to build a fully automated unattended installation. I managed to make it work with Ubuntu/Cobbler and a preseed file, but I cannot set the correct keyboard layout, which is Norwegian in this case. I am doing the tests on a virtual machine, and when I go through a normal manual installation (no preseed) everything works fine. When I use the preseed file, I always end up with an "English (US)" keyboard no matter which of the many different options I have tried. I can change it manually with the "dpkg-reconfigure keyboard-configuration" command, but that defeats the purpose; it should be handled automatically by the preseed file. I am using DEBCONF_DEBUG=5 when GRUB is loading, and as far as I can see in the "/var/log/installer/syslog" file after the installation has finished, the preseeding commands are accepted. Can anyone help with this? The preseed file I am using is the following:
d-i debian-installer/country string NO
d-i debian-installer/language string en_US:en
d-i debian-installer/locale string en_US.UTF-8
d-i console-setup/ask_detect boolean false
d-i keyboard-configuration/layout select Norwegian
d-i keyboard-configuration/variant select Norwegian
d-i keyboard-configuration/modelcode string pc105
d-i keyboard-configuration/layoutcode string no
d-i keyboard-configuration/xkb-keymap select no
d-i netcfg/choose_interface select auto
d-i netcfg/get_hostname string myhostname
d-i netcfg/get_domain string simula.no
d-i hw-detect/load_firmware boolean true
d-i mirror/country string manual
d-i mirror/http/hostname string ftp.uninett.no
d-i mirror/http/directory string /ubuntu
d-i mirror/http/proxy string http://10.0.1.253:3142/
d-i mirror/codename string precise
d-i mirror/suite string precise
d-i clock-setup/utc boolean true
d-i time/zone string Europe/Oslo
d-i clock-setup/ntp boolean true
d-i clock-setup/ntp-server string 10.0.1.254
d-i partman-auto/method string lvm
partman-auto-lvm partman-auto-lvm/new_vg_name string vg0
d-i partman-auto/purge_lvm_from_device boolean true
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-md/device_remove_md boolean true
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
d-i partman-auto-lvm/guided_size string max
d-i partman-auto/choose_recipe select 30atomic
d-i partman/default_filesystem string ext4
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman/mount_style select uuid
d-i passwd/root-login boolean false
d-i passwd/make-user boolean true
d-i passwd/user-fullname string vangelis
d-i passwd/username string vangelis
d-i passwd/user-password-crypted password $6$asdafdsdfasdfasdf
d-i passwd/user-uid string
d-i user-setup/allow-password-weak boolean false
d-i passwd/user-default-groups string adm cdrom dialout lpadmin plugdev sambashare
d-i user-setup/encrypt-home boolean false
d-i apt-setup/restricted boolean true
d-i apt-setup/universe boolean true
d-i apt-setup/backports boolean true
d-i apt-setup/services-select multiselect security
d-i apt-setup/security_host string security.ubuntu.com
d-i apt-setup/security_path string /ubuntu
tasksel tasksel/first multiselect Basic Ubuntu server, OpenSSH server
d-i pkgsel/include string build-essential htop vim nmap ntp
d-i pkgsel/upgrade select safe-upgrade
d-i pkgsel/update-policy select none
d-i pkgsel/updatedb boolean true
d-i grub-installer/only_debian boolean true
d-i grub-installer/with_other_os boolean true
d-i finish-install/keep-consoles boolean false
d-i finish-install/reboot_in_progress note
d-i cdrom-detect/eject boolean true
d-i debian-installer/exit/halt boolean false
d-i debian-installer/exit/poweroff boolean false
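One way to double-check which keyboard values the installer actually expects, offered here as a suggestion rather than something from the original post, is to run a manual installation that ends up with the Norwegian layout and then dump the debconf database with debconf-get-selections (from the debconf-utils package), comparing its keyboard-configuration and console-setup entries against the preseed file above:

# on a manually installed system that has the correct Norwegian layout
sudo apt-get install debconf-utils
sudo debconf-get-selections --installer > manual-install.cfg   # answers recorded by the installer
sudo debconf-get-selections >> manual-install.cfg              # answers from the installed system
grep -E 'keyboard-configuration|console-setup' manual-install.cfg

Any entry that shows up there but is missing from the preseed file, or has a different value, is a good candidate for what the unattended install is missing.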

    Read the article

  • org.apache.sling.scripting.jsp.jasper.JasperException: Unable to load tag handler class [migrated]

    - by Babak Behzadi
I'm developing an Apache Sling WCMS and using Java tag libs to render some data. I defined a JSP tag lib with the following descriptor and handler class:
TLD file contains:
<?xml version="1.0" encoding="UTF-8"?>
<taglib version="2.1" xmlns="http://java.sun.com/xml/ns/javaee"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-jsptaglibrary_2_1.xsd">
  <tlib-version>1.0</tlib-version>
  <short-name>taglibdescriptor</short-name>
  <uri>http://bob/taglibs</uri>
  <tag>
    <name>testTag</name>
    <body-content>tagdependent</body-content>
    <tag-class>org.bob.taglibs.test.TestTagHandler</tag-class>
  </tag>
</taglib>
Tag handler class:
package org.bob.taglibs.test;

import javax.servlet.jsp.tagext.TagSupport;

public class TestTagHandler extends TagSupport {
    @Override
    public int doStartTag() {
        try {
            pageContext.getOut().print("<h1>Helloooooooo</h1>");
        } catch (Exception e) {
            return SKIP_BODY;
        }
        return EVAL_BODY_INCLUDE;
    }
}
I packaged the tag lib as BobTagLib.jar and deployed it as a bundle using the Sling Web Console. I used this tag lib in a JSP page deployed in my Sling repository:
index.jsp:
<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<%@ taglib prefix="bob" uri="http://bob/taglibs" %>
<html>
<head><title>Simple jsp page</title></head>
<body>
    <bob:testTag/>
</body>
</html>
Calling the page causes the following exception:
org.apache.sling.scripting.jsp.jasper.JasperException: /apps/TagTest/index.jsp(7,5) Unable to load tag handler class "org.bob.taglibs.test.TestTagHandler" for tag "bob:testTag" ...
Can anyone suggest a solution? Any help is appreciated.
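A guess, not something stated in the question: this error usually means the JSP compiler's class loader cannot see the tag handler class, which in an OSGi deployment typically comes down to the bundle not exporting the package that contains it (and the .tld needing to live inside the bundle, for example under META-INF). If the jar is built with the maven-bundle-plugin, a sketch of the kind of instruction that would expose the package looks like this; the surrounding pom.xml and resource paths are made up for the example:

<!-- hypothetical pom.xml fragment; package and bundle names follow the question -->
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <Export-Package>org.bob.taglibs.test</Export-Package>
      <!-- keep the TLD inside the bundle, e.g. src/main/resources/META-INF/bob.tld -->
    </instructions>
  </configuration>
</plugin>

After redeploying, the bundle's Exported Packages view in the Sling/Felix Web Console shows whether the tag handler package is actually visible to other bundles.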

    Read the article

  • Silverlight Cream for February 07, 2011 -- #1043

    - by Dave Campbell
    In this Issue: Roy Dallal, Kevin Dockx, Gill Cleeren, Oren Gal, Colin Eberhardt, Rudi Grobler, Jesse Liberty, Shawn Wildermuth, Kirupa Chinnathambi, Jeremy Likness, Martin Krüger(-2-), Beth Massi, and Michael Crump. Above the Fold: Silverlight: "A Circular ProgressBar Style using an Attached ViewModel" Colin Eberhardt WP7: "Isolated Storage" Jesse Liberty Lightswitch: "How To Create Outlook Appointments from a LightSwitch Application" Beth Massi Shoutouts: Gergely Orosz has a summary of his 4-part series on Styles in Silverlight: Everything a Developer Needs To Know From SilverlightCream.com: Silverlight Memory Leak, Part 2 Roy Dallal has part 2 of his memory leak posts up... and discusses the results of runnin VMMap and some hints on how to make best use of it. Using a Channel Factory in Silverlight (instead of adding a Service Reference). With cows. Kevin Dockx has a post up for those of you that don't like the generated code that comes about when adding a service reference, and the answer is a Channel Factory... and he has an example app in the post that populates a list of cows... honest ... check it out. Getting ready for Microsoft Silverlight Exam 70-506 (Part 4) Gill Cleeren has Part 4 of his deep-dive into studying for the Silverlight Certification exam. This time out he's got probably half a gazillion links for working with data... seriously! Sync unlimited instances of one Silverlight application How about a cross-browser sync of an unlimited number of instances of the same Silverlight app... Oren Gal has just that going on, and discusses his first two attempts and how he finally honed in on the solution. A Circular ProgressBar Style using an Attached ViewModel Wow... check out what Colin Eberhardt's done with the "Progress Bar" ... using an Attached View Model which he discussed in a post a while back... these are awesome! WP7 - Professional Audio Recorder Rudi Grobler discusses an audio recorder for WP7 that uses the NAudio audio library for not only the recording but visualization. Isolated Storage Jesse Liberty's got his 30th 'From Scratch' post up and this time he's talking about Isolated Storage. Learning OData? MSDN and Shawn Wildermuth has the videos for you! Shawn Wildermuth produced a couple series of videos for MSDN on OData: Getting Started and Consuming OData... get the link on Shawn's post. Creating Sample Data from a Class - Page 1 Kirupa Chinnathambi shows us how to use a schema of your own design in Blend... yet still have Blend produce sample data A Pivot-Style Data Grid without the DataGrid Jeremy Likness discusses the lack of an open-source grid with dynamic columns ... let him know if you've done one! ... and then he continues on to demonstrate his build-out of the same. Synchronize a freeform drawing and a real path creation Martin Krüger has a few new samples up in the Expression Gallery. This first is taking mouse movement in an InkPresenter and creating path statements from it in a canvas and playing them back. How to: use Storyboard completed behaviors Martin Krüger's next post is about Storyboards and firing one off the end of another, in Blend... so he ended up producing a behavior for doing that... and it's in the Expression Gallery How To Create Outlook Appointments from a LightSwitch Application Beth Massi has a new Lightswitch post up... her previous was email from Lightswitch... this is Outlook appointments... pretty darn cool. 
Quick run through of the WP7 Developer Tools January 2011 Michael Crump has a really good Quick look at the new WP7 Dev Tools that were released last week posted on his blog Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • ArchBeat Link-o-Rama for August 1, 2013

    - by OTN ArchBeat
Performance Tuning – Systems Running BPEL Processes | Ravi Saraswathi and Jaswant Singh Ravi Saraswathi and Jaswant Singh, the authors of "Oracle SOA BPEL Process Manager 11gR1 - A Hands-on Tutorial" explain performance tuning of SOA composite applications for optimal performance and scalability. Steps to configure SAML 2.0 with Weblogic Server | Puneeth The blogger known only as Puneeth shares an illustrated technical post that will be of interest to those working with Oracle WebLogic and the Security Assertion Markup Language (SAML). Video: Planning and Getting Started - Developer PCs | Chris Muir Tune in to the latest episode of ADF Architecture TV to see Chris Muir explain why you don't have to buy the most expensive PCs in order to run JDeveloper. Key User Experience Design Principles for working with Big Data | John Fuller User Experience Designer John Fuller shares 6 core design principles for working with big data that focus on "helping people bring together a variety of data types in a fast and flexible way." Event: OTN Developer Day: ADF Mobile - Burlington, MA - Aug 28 Through six sessions, including a hands-on workshop, you'll learn a simpler way to leverage your existing skills to develop enterprise mobile applications using Oracle ADF Mobile. Registration is free, but seating is limited. Optimizing WebCenter Portal Mobile Delivery | Jeevan Joseph FMW solution architect Jeevan Joseph "walks you through identifying and analyzing some common WebCenter Portal performance bottlenecks related to page weight and describes a generic approach that can streamline your portal while improving the performance and response times." Customizing specific instances of a WebCenter task flow | Jeevan Joseph Fusion Middleware A-Team solution architect Jeevan Joseph strikes again with this article that explains "how to set up parameters on MDS customization so that it is applied only under certain conditions...making it possible to customize individual instances of task flows." Exalogic Virtual Tea Break Snippets – Modifying Memory, CPU and Storage on a vServer | Andrew Hopkinson FMW solution architect Andrew Hopkinson walks you through "the simple process of resizing the resources associated with an already existing Exalogic vServer." Oracle ADF Mobile Virtual Developer Day - Next Week | Shay Shmeltzer JDeveloper product team lead Shay Shmeltzer shares agenda information for the OTN Virtual Developer Day event covering Mobile Application Development for iOS and Android, coming up one week from today, on August 7, 2013, 9am PT/Noon ET/1pm BRT. What's New In Oracle Enterprise Pack for Eclipse 12.1.2.1.0? New features and updates on the newly-released Oracle Enterprise Pack for Eclipse 12.1.2.1.0, now available for download from OTN. IOUG Cloud Builders Unite | Jeff Erickson Check out this great Oracle Magazine article by Jeff Erickson about IOUG members organizing around their common interest in building private clouds. Thought for the Day "Stuff that's hidden and murky and ambiguous is scary because you don't know what it does." — Jerry Garcia (August 1, 1942 – August 9, 1995) Source: brainyquote.com

    Read the article

  • SQL SERVER – Preserve Leading Zero While Coping to Excel from SSMS

    - by pinaldave
Earlier I wrote two articles about how to efficiently copy data from SSMS to Excel. Since I wrote those posts there has been plenty of interest in this subject, and there are a few questions I keep getting about it. One of the questions is how to preserve leading zeros while copying data from SSMS to Excel. Well, it is handled almost the same way as in my earlier post SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet. The key here is in Excel and not in SQL Server. The step is to change the format of the Excel cell from Number to Text, and that will preserve the value with its leading or trailing zeros in Excel. However, I assume this is done for display purposes only, because once you convert a column to Text you may find it difficult to do numeric operations over the column, for example Aggregation, Average etc. If you need those, you should either convert the columns back to Numeric in Excel or do the processing in the database and export the numeric value alongside the formatted one. That said, I have seen requirements in the real world where the user has to have a numeric value with leading zeros in it for display purposes. Here is my suggestion: instead of manipulating the numeric value in the database and converting it to a character value, the ideal thing to do is to store it as a numeric value only in the database. Whatever changes you want to make for display purposes should be handled at display time using the format functions of SQL or of the application language (see the T-SQL sketch at the end of this post). Honestly, the database is the data layer and presentation is the presentation layer – they are two different things and, if possible, they should not be mixed. If for any reason you cannot follow the above advice and you need to append leading zeros in the database itself, here are two of my previous articles I suggest you refer to. I am open to learning new tricks, as these articles are almost three years old. Please share your opinion and suggestions in the comments area. SQL SERVER – Pad Ride Side of Number with 0 – Fixed Width Number Display SQL SERVER – UDF – Pad Ride Side of Number with 0 – Fixed Width Number Display Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Excel
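As an illustration of the "format at display time" advice above, here is a minimal T-SQL sketch; the table, column and the width of 8 are made up for the example, and the stored column stays numeric:

SELECT RIGHT(REPLICATE('0', 8) + CAST(AccountNumber AS VARCHAR(8)), 8) AS PaddedAccountNumber
FROM dbo.Accounts;  -- hypothetical table and column

Because the underlying column remains numeric, aggregation and averaging still work; only the projected value is dressed up with leading zeros.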

    Read the article

  • “Play Now” via website vs. download & install

    - by Inside
    I've spent some time looking over the various threads here on GDSE and also on the regular Stackoverflow site, and while I saw a lot of posts and threads regarding various engines that could be used in game development, I haven't seen very much discussion regarding the various platforms that they can be used on. In particular, I'm talking about browser games vs. desktop games. I want to develop a simple 3D networked multiplayer game - roughly on the graphics level of Paper Mario and gameplay with roughly the same level of interaction as a hack & slash action/adventure game - and I'm having a hard time deciding what platform I want to target with it. I have some experience with using C++/Ogre3D and Python/Panda3D (and also some synchronized/networked programming), but I'm wondering if it's worth it to spend the extra time to learn another language and another engine/toolkit just so that the game can be played in a browser window (I'm looking at jMonkeyEngine right now). Is it worth it to go with engines that are less-mature, have less documentation, have fewer features, and smaller communities* just so that a (possibly?) larger audience can be reached? Does it make sense to even go with a web-environment for the kind of game that I want to make? Does anyone have any experiences with decisions like this? (* With the exception of Flash-based engines it seems like most of the other approaches have downsides when compared to what is available for desktop-based environments. I'd go with Flash, but I'm worried that Flash's 3D capabilities aren't mature enough right now to do what I want easily. There's also Unity3D, but I'm not sure how I feel about that at all. It seems highly polished, but requires a plugin to be downloaded for the game to be played -- at that rate I might as well have players download my game.) For simple & short games the Newgrounds approach (go to the site, click "play now", instant gratification) seems to work well. What about for more complex games? Is there a point where the complexity of a game is enough for people to say "OK, I'm going to download and play that"?

    Read the article

The enterprise vendor con - connecting SSD's using SATA 2 (3Gbits) thus limiting their performance

    - by tonyrogerson
When comparing SSD against Hard drive performance it really makes me cross when folk think comparing an array of SSD running on 3GBits/sec to hard drives running on 6GBits/second is somehow valid. In a paper from DELL (http://www.dell.com/downloads/global/products/pvaul/en/PowerEdge-PowerVaultH800-CacheCade-final.pdf) on increasing database performance using the DELL PERC H800 with Solid State Drives they compare four SSD drives connected at 3Gbits/sec against ten 10Krpm drives connected at 6Gbits [Tony slaps forehead while shouting DOH!]. It is true in the case of hard drives it probably doesn’t make much difference 3Gbit or 6Gbit because SAS and SATA are both end to end protocols rather than shared bus architecture like SCSI, so the hard drive doesn’t share bandwidth and probably can’t get near the 600MiBytes/second throughput that 6Gbit gives unless you are doing contiguous reads, in my own tests on a single 15Krpm SAS disk using IOMeter (8 worker threads, queue depth of 16 with a stripe size of 64KiB, an 8KiB transfer size on a drive formatted with an allocation size of 8KiB for a 100% sequential read test) I only get 347MiBytes per second sustained throughput at an average latency of 2.87ms per IO equating to 44.5K IOps, ok, if that was 3GBits it would be less – around 280MiBytes per second, oh, but wait a minute [...fingers tap desk] You’ll struggle to find in the commodity space an SSD that doesn’t have the SATA 3 (6GBits) interface, SSD’s are fast – not only low latency and high IOps, but they also offer a very large sustained transfer rate, consider the OCZ Agility 3: it so happens that in my masters dissertation I did the same test but on a different box, I got 374MiBytes per second at an average latency of 2.67ms per IO equating to 47.9K IOps – cost of a 240GB Agility 3 is £174.24 (http://www.scan.co.uk/products/240gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-500mb-s-85k-iops), but that same drive set in a box connected with SATA 2 (3Gbits) would only yield around 280MiBytes per second thus losing almost 100MiBytes per second throughput and a ton of IOps too. So why the hell are “enterprise” vendors still only connecting SSD’s at 3GBits? Well, my conspiracy states that they have no interest in you moving to SSD because they’ll lose so much money, the argument that they use SATA 2 doesn’t wash, SATA 3 has been out for some time now and all the commodity stuff you buy uses it now. Consider the cost, not in terms of price per GB but price per IOps, SSD absolutely thrash Hard Drives on that, it was true that the opposite was also true that Hard Drives thrashed SSD’s on price per GB, but is that true now, I’m not so sure – a 300GByte 2.5” 15Krpm SAS drive costs £329.76 ex VAT (http://www.scan.co.uk/products/300gb-seagate-st9300653ss-savvio-15k3-25-hdd-sas-6gb-s-15000rpm-64mb-cache-27ms) which equates to £1.09 per GB compared to a 480GB OCZ Agility 3 costing £422.10 ex VAT (http://www.scan.co.uk/products/480gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-410mb-s-30k-iops) which equates to £0.88 per GB. Ok, I compared an “enterprise” hard drive with a “commodity” SSD, ok, so things get a little more complicated here, most “enterprise” SSD’s are SLC and most commodity are MLC, SLC gives more performance and wear, I’ll talk about that another day. For now though, don’t get sucked in by vendor marketing, SATA 2 (3Gbit) just doesn’t cut it, SSD need 6Gbit to breathe and even that SSD’s are pushing. 
Alas, SSD’s are connected using SATA so all the controllers I’ve seen thus far from HP and DELL only do SATA 2 – deliberate? Well, I’ll let you decide on that one.
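A quick back-of-the-envelope check of the interface ceilings being argued about here (my own arithmetic, assuming SATA's standard 8b/10b encoding, so only 8 of every 10 bits on the wire carry payload):

SATA 2: 3 Gbit/s x 8/10 = 2.4 Gbit/s of payload, roughly 300 MB/s
SATA 3: 6 Gbit/s x 8/10 = 4.8 Gbit/s of payload, roughly 600 MB/s

which is exactly why a drive quoted at around 525 MB/s sequential read is wasted behind a 3Gbit port.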

    Read the article

  • Victor Grazi, Java Champion!

    - by Tori Wieldt
    Congratulations to Victor Grazi, who has been made a Java Champion! He was nominated by his peers and selected as a Java Champion for his experience as a developer, and his work in the Java and Open Source communities. Grazi is a Java evangelist and serves on the Executive Committee of the Java Community Process, representing Credit Suisse - the first non-technology vendor on the JCP. He also arranges the NY Java SIG meetings at Credit Suisse's New York campus each month, and he says it has been a valuable networking opportunity. He also is the spec lead for JSR 354, the Java Money and Currency API. Grazi has been building real time financial systems in Java since JDK version 1.02! In 1996, the internet was just starting to happen, Grazi started a dot com called Supermarkets to Go, that provided an on-line shopping presence to supermarkets and grocers. Grazi wrote most of the code, which was a great opportunity for him to learn Java and UI development, as well as database management. Next, he went to work at Bank of NY building a trading system. He studied for Java certification, and he noted that getting his certification was a game changer because it helped him started to learn the nuances of the Java language. He has held other development positions, "You may have noticed that you don't get as much junk mail from Citibank as you used to - that is thanks to one of my projects!" he told us. Grazi joined Credit Suisse in 2005 and is currently Vice President on the central architecture team. Grazi is proud of his open source project, Java Concurrent Animated, a series of animations that visualize the functionality of the components in the java.util.concurrent library. "It has afforded me the opportunity to speak around the globe" and because of it, has discovered that he really enjoys doing public presentations. He is a fine addition to the Java Champions program. The Java Champions are an exclusive group of passionate Java technology and community leaders who are community-nominated and selected under a project sponsored by Oracle. Nominees are named and selected through a peer review process. Java Champions get the opportunity to provide feedback, ideas, and direction that will help Oracle grow the Java Platform. This interchange may be in the form of technical discussions and/or community-building activities with Oracle's Java Development and Developer Program teams.

    Read the article

  • Technology vs. Antiquated Methods

    - by AreYouSerious
So here I am talking with my program lead about technology, and how, while my father is the VP of a major company, he still doesn't have a BlackBerry or a smartphone, and I think it's funny. Most people would say it's a generational thing: that because he's older, he doesn't accept technology, and that's why. I have trouble swallowing that, because this is the same man who bought a satellite radio for his car, and made sure that the printer for the house was networked so that his and my mom's laptops could print wirelessly from the living room through their wireless network. I think it has more to do with necessity, and partially with financial responsibility. My father is very financially conscientious. Think about it yourself: you pay for internet at your home. You have internet access at your office. But if you get a smartphone you're going to pay almost the same amount again just for that access. A lot of people take it as just another fixed cost... I'm one of those. I don't even think about it, as I check my Facebook from the bus, train, or even while sitting in traffic... The convenience of having a connection everywhere outweighs the financially responsible person screaming in the back of my mind. However, this conversation led us to another venue of discussion... what happens when the power dies? If you left your charger at home, or your phone or navi just stops working... are you going to be able to continue on as you did when it was working? Let's take the navi as an example... if your navi stops working, how many of you know how to use a map and navigate? Can you even find where you are on a map using the cross streets that you're stopped at? This is a skill that unfortunately is overlooked these days in the child-rearing process. Most people don't see the value, while some others can't do it themselves, so how can they teach their offspring? Take another example... what if your phone gets lost... or stolen, or you drive over it? Do you have those numbers memorized? Are they recorded somewhere? I know that if it weren't for Google Sync I wouldn't have them backed up... not sufficiently. And what good does that do if you're in Timbuktu and your phone dies? Think you can get on the internet to look up those numbers? Don't get me wrong. I'm the first to see the value in technology, and am willing to pay the price to not have to wait for prices to come down. I will pay extra to have the newest thing right now. But let me tell you what... I know that should I ever procreate, it will be a requirement for my offspring (children) to learn how to do something manually before I'll let them use technology. Food for thought?? Let everyone else know what you think.... just sayin'

    Read the article

  • WebCenter Spaces 11g PS2 Template Customization

    - by javier.ductor(at)oracle.com
Recently, we have been involved in a WebCenter Spaces customization project. The customer sent us a prototype website in HTML, and we had to transform Spaces to give it the same look&feel as the prototype. Prototype: First of all, we downloaded a Virtual Machine with WebCenter Spaces 11g PS2, the same version as the customer's. The next step was to download the ExtendWebCenterSpaces application. This is a WebCenter application that customizes several elements of WebCenter Spaces: templates, skins, landing page, etc. JDeveloper configuration is needed; we followed the steps described in the Extended Guide, and this blog was helpful too. After that, we deployed the application using the WebLogic console, created a new group space and assigned the ExtendedWebCenterSpaces template (portalCentricSiteTemplate) to it. The result was this: As you may see, there is a big difference from the prototype, which meant a lot of work on our side. So we needed to create a new Spaces template with its own skin. Using JDeveloper, we created a new template based on the default template. We removed all the menus we did not need and inserted 'include' tags for the header, breadcrumb and footers. Each of these elements was defined in an isolated .jspx file. In the beginning, we faced a problem: we had to add code from the prototype (in plain HTML) to the .jspx templates (JSF language). We managed to work around this issue by using 'verbatim' tags with 'CDATA' surrounding the HTML code in the header, breadcrumb and footers (a minimal sketch of this wrapper is shown at the end of this post). Once the template was modified, we added CSS styles to the default skin, so we had some styles from portalCentricSiteTemplate plus styles from the customer's prototype. Then we re-deployed the application, assigned this template to a new group space and checked the result. After testing we usually found issues, so we had to make some modifications in the application, then re-deploy it and restart the Spaces server; because of this, the testing process took quite a bit of time. In addition to the template and skin customization, we also customized the Landing Page using this application, and the Members task flow using another application: SpacesTaskflowCustomizationApplication. I will talk about this task flow customization in a future entry. After some issues and workarounds, we eventually managed to get this look&feel in Spaces: P.S. In this customization I was working with Francisco Vega and Jose Antonio Labrada, consultants from the Oracle Malaga International Consulting Centre.
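For readers who have not used this trick before, here is a minimal sketch (my own illustration, not the project's actual template) of what such a wrapper can look like inside a .jspx fragment; it assumes the JSF core namespace is declared as xmlns:f="http://java.sun.com/jsf/core", and the HTML inside the CDATA block is just placeholder markup:

<f:verbatim>
  <![CDATA[
    <div id="header">
      <img src="/images/prototype-logo.png" alt="Company logo"/>
    </div>
  ]]>
</f:verbatim>

The CDATA section keeps the raw prototype HTML from being parsed as part of the JSPX XML document, and f:verbatim emits it untouched at render time.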

    Read the article

  • My Take on Hadoop World 2011

    - by Jean-Pierre Dijcks
I’m sure some of you have read pieces about Hadoop World and I did see some headlines which were somewhat, shall we say, interesting? I thought the keynote by Larry Feinsmith of JP Morgan Chase & Co was one of the highlights of the conference for me. The reason was very simple, he addressed some real use cases outside of internet and ad platforms. The following are my notes, since the keynote was recorded I presume you can go and look at Hadoopworld.com at some point… On the use cases that were mentioned: ETL – how can I do complex data transformation at scale Doing Basel III liquidity analysis Private banking – transaction filtering to feed [relational] data marts Common Data Platform – a place to keep data that is (or will be) valuable some day, to someone, somewhere 360 Degree view of customers – become pro-active and look at events across lines of business. For example make sure the mortgage folks know about direct deposits being stopped into an account and ensure the bank is pro-active to service the customer Treasury and Security – Global Payment Hub [I think this is really consolidation of data to cross reference activity across business and geographies] Data Mining Bypass data engineering [I interpret this as running on all of a large data set rather than on samples] Fraud prevention – work on event triggers, say a number of failed log-ins to the website. When they occur grab web logs, firewall logs and rules and start to figure out who is trying to log in. Is this me, who forgot his password, or is it someone in some other country trying to guess passwords Trade quality analysis – do a batch analysis of all trades done and run them through an analysis or comparison pipeline One of the key requests – if you can say it like that – was for vendors and entrepreneurs to make sure that new tools work with existing tools. JPMC has a large footprint of BI Tools and Big Data reporting and tools should work with those tools, rather than be separate. Security and Entitlement – how to protect data within a large cluster from unwanted snooping was another topic that came up. I thought his Elephant ears graph was interesting (couldn’t actually read the points on it, but the concept certainly made some sense) and it was interesting – when asked to show hands – how the audience did not (!) think that RDBMS and Hadoop technology would overlap completely within a few years. Another interesting session was the session from Disney discussing how Disney is building a DaaS (Data as a Service) platform and how Hadoop processing capabilities are mixed with Database technologies. I thought this was one of the best sessions I have seen in a long time. It discussed real use cases, where problems existed, how they were solved and how Disney planned some of it. The planning focused on three things/phases: Determine the Strategy – Design a platform and evangelize this within the organization Focus on the people – Hire key people, grow and train the staff (and do not overload what you have with new things on top of their day-to-day job), leverage a partner with experience Work on Execution of the strategy – Implement the platform Hadoop next to the other technologies and work toward the DaaS platform This kind of fitted with some of the Linked-In comments, best summarized in “Think Platform – Think Hadoop”. In other words [my interpretation], step back and engineer a platform (like DaaS in the Disney example), then layer the rest of the solutions on top of this platform. 
One general observation, I got the impression that we have knowledge gaps left and right. On the one hand are people looking for more information and details on the Hadoop tools and languages. On the other I got the impression that the capabilities of today’s relational databases are underestimated. Mostly in terms of data volumes and parallel processing capabilities or things like commodity hardware scale-out models. All in all I liked this conference, it was great to chat with a wide range of people on Oracle big data, on big data, on use cases and all sorts of other stuff. Just hope they get a set of bigger rooms next time… and yes, I hope I’m going to be back next year!

    Read the article

  • Visual Studio 2010 RC &ndash; Silverlight 4 and WCF RIA Services Development - Updates from MIX Anno

    - by Harish Ranganathan
MIX is happening and there is a lot of excitement around the various releases such as the Windows Phone 7 Developer Preview, the IE9 Platform Preview and a few other announcements that have been made. Clearly, the Windows Phone 7 Developer Preview has generated the maximum interest and opened a plethora of opportunities for .NET Developers. It also takes mobile development to a new generation and doesn’t force developers to learn a different programming language. Along with this, a few other releases have been out. The much anticipated Silverlight 4 RC is out and its corresponding templates are also out there for you to download. Once VS 2010 RC was released, it was a bit of a disappointment that it didn’t support SL4 development as well as SL4 Business Application Development (a.k.a. WCF RIA Services). There were a few workarounds, though nothing concrete. Earlier I had written about how the WCF RIA Services Preview does work with ASP.NET Development using VS 2010 RC. However, with the release of SL4 RC and the corresponding tooling updates, one can develop for both SL4 as well as SL4 + WCF RIA Services using VS 2010 RC. This is kind of important and keeps the continuum going until VS 2010 RTMs. So, the purpose of this post is to quickly give the updates and links to install the relevant tools. Silverlight 4 RC Runtime Windows Runtime or the Mac Runtime Silverlight 4 RC Developer Tools for Visual Studio 2010 RC Silverlight 4 Tools for Visual Studio 2010 (this would install the Runtime as well automatically) Expression Blend 4 Beta Expression Blend 4 Beta If you install the SL4 RC Developer Tools, it also installs the WCF RIA Services Preview automatically. You just need to install the WCF RIA Services Toolkit that can be downloaded from Install the WCF RIA Services Toolkit Of course you can also just install the WCF RIA Services for VS 2010 RC separately (without SL4 Tools) from here Kindly note, all the above mentioned links are with respect to the Visual Studio 2010 RC edition. If you are developing with VS 2008, then you can just target SL3 (as I write this, there seems to be no official support for developing SL4 with VS 2008) and the related tools can be downloaded from http://www.silverlight.net/getstarted/ Basically you need to download the SL 3 Runtime, SDK, Expression Blend 3 and the Silverlight Toolkit. All the links for the download are available in the above mentioned page. Also, a version of WCF RIA Services that is supported in VS 2008 is available for download at WCF RIA Services Beta for VS 2008 I know there are far too many things to keep in mind. So, I put together a flowchart that could help with depicting it pictorially. Note that this is just my own imagination and doesn’t cover all scenarios. For example, if you are developing for neither WebForms nor Silverlight, you end up nowhere, whereas in an actual scenario you may want to develop Desktop, Services, Console, Game and what not. So, keep in mind this is just Web. Cheers !!!

    Read the article

  • Harness MySQL's Continued Performance Tuning Improvements

    - by Antoinette O'Sullivan
To fully harness the continued improvements in performance tuning you get with MySQL, take the MySQL Performance Tuning course. This 4-day class teaches you practical, safe, highly efficient ways to optimize performance for the MySQL Server. You will learn the skills needed to use tools for monitoring, evaluating and tuning. You can take this course in the following three ways:
Training-on-Demand: Follow this course at your own pace and from your own desk, with streaming video of instructor delivery and the option to book time for the hands-on exercises at your own convenience.
Live-Virtual: Attend a live instructor-led event from your own desk. Choose from the numerous events on the schedule.
In-Class: Travel to an education center to follow this class.
A sample of events on the schedule is shown below:
Location | Date | Delivery Language
Tokyo, Japan | 19 November 2012 | Japanese
Mechelen, Belgium | 4 February 2013 | English
London, England | 19 November 2012 | English
Budapest, Hungary | 21 May 2013 | Hungarian
Milan, Italy | 14 January 2013 | Italian
Rome, Italy | 3 December 2012 | Italian
Riga, Latvia | 10 December 2012 | Latvian
Amsterdam, Netherlands | 7 January 2013 | Dutch
Nieuwegein, Netherlands | 26 November 2012 | Dutch
Warsaw, Poland | 3 December 2012 | Polish
Lisbon, Portugal | 4 February 2013 | European Portuguese
Porto, Portugal | 4 February 2013 | European Portuguese
Barcelona, Spain | 25 March 2013 | Spanish
Madrid, Spain | 17 December 2012 | Spanish
Sydney, Australia | 26 November 2012 | English
Edmonton, Canada | 10 December 2012 | English
Montreal, Canada | 26 November 2012 | English
Ottawa, Canada | 26 November 2012 | English
Toronto, Canada | 26 November 2012 | English
Vancouver, Canada | 10 December 2012 | English
Sao Paulo, Brazil | 26 November 2012 | Brazilian Portuguese
For more information on this class, or to learn more about other courses on the authentic MySQL curriculum, see http://oracle.com/education/mysql. Note that many organizations deploy both Oracle Database and MySQL side by side to serve different needs, and as a database professional you can find training courses on both topics at Oracle University! Check out the upcoming Oracle Database training courses and MySQL training courses. Even if you're only managing Oracle Databases at this point in time, getting familiar with MySQL will broaden your career path, given the growing job demand.

    Read the article

  • Implement Negascout Algorithm with stack

    - by Dan
I'm not familiar with how these stack exchange accounts work so if this is double posting I apologize. I asked the same thing on stackoverflow. I have added an AI routine to a game I am working on using the Negascout algorithm. It works great, but when I set a higher maximum depth it can take a few seconds to complete. The problem is it blocks the main thread, and the framework I am using does not have a way to deal with multi-threading properly across platforms. So I am trying to change this routine from recursively calling itself, to just managing a stack (vector) so that I can progress through the routine at a controlled pace and not lock up the application while the AI is thinking. I am getting hung up on the second recursive call in the loop. It relies on a returned value from the first call, so I don't know how to add these to a stack. My Working c++ Recursive Code:
MoveScore abNegascout(vector<vector<char> > &board, int ply, int alpha, int beta, char piece) {
    if (ply==mMaxPly) {
        return MoveScore(evaluation.evaluateBoard(board, piece, oppPiece));
    }
    int currentScore;
    int bestScore = -INFINITY;
    MoveCoord bestMove;
    int adaptiveBeta = beta;
    vector<MoveCoord> moveList = evaluation.genPriorityMoves(board, piece, findValidMove(board, piece, false));
    if (moveList.empty()) {
        return MoveScore(bestScore);
    }
    bestMove = moveList[0];
    for(int i=0;i<moveList.size();i++) {
        MoveCoord move = moveList[i];
        vector<vector<char> > newBoard;
        newBoard.insert( newBoard.end(), board.begin(), board.end() );
        effectMove(newBoard, piece, move.getRow(), move.getCol());
        // First Call ******
        MoveScore current = abNegascout(newBoard, ply+1, -adaptiveBeta, -max(alpha,bestScore), oppPiece);
        currentScore = - current.getScore();
        if (currentScore>bestScore){
            if (adaptiveBeta == beta || ply>=(mMaxPly-2)){
                bestScore = currentScore;
                bestMove = move;
            } else {
                // Second Call ******
                current = abNegascout(newBoard, ply+1, -beta, -currentScore, oppPiece);
                bestScore = - current.getScore();
                bestMove = move;
            }
            if(bestScore>=beta){
                return MoveScore(bestMove,bestScore);
            }
            adaptiveBeta = max(alpha, bestScore) + 1;
        }
    }
    return MoveScore(bestMove,bestScore);
}
If someone can please help by explaining how to get this to work with a simple stack. Example code would be much appreciated. While c++ would be perfect, any language that demonstrates how would be great. Thank You.
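One way to think about it, offered here as a rough sketch rather than a drop-in answer: give each stack frame a little state, including which child move it is waiting on, and drive the whole search from a loop that you can leave and re-enter whenever you want to hand control back to the UI. The sketch below shows the pattern on a plain negamax with alpha-beta cut-offs, not the full Negascout re-search logic and not the asker's MoveScore/evaluation classes; Board, Move, evaluate, generateMoves and applyMove are placeholder stand-ins for the asker's own types and functions. The second, full-window call in Negascout would become one more stage flag in the same frame.

#include <vector>
#include <cstddef>

using Board = std::vector<std::vector<char>>;
struct Move { int row, col; };

// Placeholder stand-ins for the asker's evaluation/move-generation code.
int evaluate(const Board&) { return 0; }
std::vector<Move> generateMoves(const Board&) { return {}; }
void applyMove(Board&, const Move&) {}

const int INF = 1000000000;

struct Frame {
    Board board;
    int ply;
    int alpha, beta;
    int bestScore;
    std::vector<Move> moves;
    std::size_t next;   // index of the next child move to expand
    bool started;       // false until moves have been generated
};

// Advances the search by one step. Call it repeatedly (e.g. a few thousand
// times per UI tick); it returns true when the root score is available in 'returned'.
bool stepSearch(std::vector<Frame>& stack, int maxPly, int& returned, bool& haveReturn)
{
    if (stack.empty()) return true;
    Frame& f = stack.back();

    if (!f.started) {
        f.started = true;
        if (f.ply == maxPly) {                  // leaf: evaluate and "return"
            returned = evaluate(f.board);
            haveReturn = true;
            stack.pop_back();
            return stack.empty();
        }
        f.moves = generateMoves(f.board);
        if (f.moves.empty()) {                  // no moves: also "return"
            returned = evaluate(f.board);
            haveReturn = true;
            stack.pop_back();
            return stack.empty();
        }
    } else if (haveReturn) {
        haveReturn = false;                     // a child frame just finished
        int score = -returned;                  // negamax: negate the child's score
        if (score > f.bestScore) f.bestScore = score;
        if (f.bestScore > f.alpha) f.alpha = f.bestScore;
        if (f.alpha >= f.beta) {                // beta cut-off: this frame "returns"
            returned = f.bestScore;
            haveReturn = true;
            stack.pop_back();
            return stack.empty();
        }
    }

    if (f.next < f.moves.size()) {              // expand the next child
        Board child = f.board;
        applyMove(child, f.moves[f.next++]);
        stack.push_back({child, f.ply + 1, -f.beta, -f.alpha, -INF, {}, 0, false});
    } else {                                    // all children done: this frame "returns"
        returned = f.bestScore;
        haveReturn = true;
        stack.pop_back();
    }
    return stack.empty();
}

The caller seeds the stack with a single root frame (started = false, bestScore = -INF) plus haveReturn = false, then calls stepSearch a bounded number of times per tick, yielding back to the UI between batches; when it returns true, 'returned' holds the root's negamax score. Extending this to Negascout means adding a stage flag to Frame so that, after the first null-window child result comes back, the frame can decide whether to push the same child again with the full window before moving on to the next move.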

    Read the article

  • Java Spotlight Episode 85: Migrating from Spring to JavaEE 6

    - by Roger Brinkley
Interview with Bert Ertman and Paul Bakker on migrating from Spring to JavaEE 6. Joining us this week on the Java All Star Developer Panel is Arun Gupta, Java EE Guy. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes. Show Notes News Transactional Interceptors in Java EE 7 Larry Ellison and Mark Hurd on Oracle Cloud Duke’s Choice Award submissions open until June 15 Registration for the 2012 JVM Language Summit now open Events June 11-14, Cloud Computing Expo, New York City June 12, Boulder JUG June 13, Denver JUG June 13, Eclipse Juno DemoCamp, Redwood Shores June 13, JUG Münster June 14, Java Klassentreffen, Vienna, Austria June 18-20, QCon, New York City June 20, 1871, Chicago June 26-28, Jazoon, Zurich, Switzerland July 5, Java Forum, Stuttgart, Germany July 30-August 1, JVM Language Summit, Santa Clara Feature Interview Bert Ertman is a Fellow at Luminis in the Netherlands. Next to his customer assignments he is responsible for stimulating innovation, knowledge sharing, coaching, technology choices and presales activities. Besides his day job he is a Java User Group leader for NLJUG, the Dutch Java User Group. A frequent speaker on Enterprise Java and Software Architecture related topics at international conferences (e.g. Devoxx, JavaOne, etc) as well as an author and member of the editorial advisory board for the Dutch software development magazine Java Magazine. In 2008, Bert was honored by being awarded the coveted title of Java Champion by an international panel of Java leaders and luminaries. Paul Bakker is a senior software engineer at Luminis Technologies where he works on the Amdatu platform, an open source, service-oriented application platform for web applications. He has a background as a trainer, where he taught various Java-related subjects. Paul is also a regular speaker at conferences and an author for the Dutch Java Magazine. Tutorials Part 1: http://howtojboss.com/2012/04/17/article-series-migrating-spring-applications-to-java-ee-6-part-1/ Part 2: http://howtojboss.com/2012/04/17/article-series-migrating-spring-applications-to-java-ee-6-part-2/ Part 3: http://howtojboss.com/2012/05/10/article-series-migrating-from-spring-to-java-ee-6-part-3/ Mail Bag What’s Cool Sang Shin in EE team @larryellison JavaOne content selection is almost complete - Notifications coming soon

    Read the article

< Previous Page | 503 504 505 506 507 508 509 510 511 512 513 514  | Next Page >