Search Results



  • What Color is your Jetpack?

    - by JoshReuben
    I'm a programmer, I'm approaching 40, and I'm fairly decent at my job – I'll keep doing what I'm doing for as long as they let me! So what are your career options if you know how to code? A programmer could be...

    An Algorithm Developer
      Pros: Interesting. High barriers to entry – a potential competitive factor for a startup.
      Cons: Do you have the skills and qualifications? What are the working conditions in this mystery niche? Micro-focus.

    An Academic
      Pros: Low pressure. Job security – or is this an illusion?
      Cons: Low pay. Need a PhD.

    A Software Architect
      Pros: Strategic, rather than tactical. Setting the technology platform and high-level vision. You say how it should work; others have to figure out why it's not working the way it's supposed to! Broad view – you are paid to learn (how do you con people into paying for you to learn??).
      Cons: Glorified developer – more often than not! Competitive – everyone wants to do it! Losing touch with the underlying tech. In tough times, the first guy to get the axe!

    A Software Engineer
      Pros: Interesting, always more to learn. Fun. I can do it. Fallback.
      Cons: Nothing new under the sun – been there, done that. Dealing with poor requirements, deadlines, other people's code, overtime. C#, XAML, Web – low barriers to entry → a race to the bottom.

    A Team Leader
      Pros: Setting code standards and proposing technology choices.
      Cons: Glorified developer – more often than not! Inspecting other people's code and debugging the problems they cannot fix. Dealing with mugbies and prima donnas. Responsible for the QA of others.

    A Project Manager
      Pros: No need to debug other people's code.
      Cons: Low barrier to entry. High pressure. Responsible for the QA of others. Losing touch with technology. A lot of bullshit meetings. Have to be an asshole.

    A Product Manager
      Pros: No need to debug other people's code. Learning the new skillset of sales and marketing.
      Cons: Travel (I'm a family man). May need to know the bs details of an uninteresting product.

    Things I want to work with: AI, algorithms, numerical computing, Mathematica; C++ AMP – unfortunately, the work here is few and far between; VS & TFS extensibility, DSLs (Workflow, LightSwitch), code generation – one day, code will write code!; Unity3D, WebGL – fun, fun, fun!; modern web – Knockout, SignalR, MVC, Node.js??? (tentative – I'll wait until things stabilize, as this area is undergoing a pre-Cambrian explosion).

    Things I don't want to work with (but will if I'm asked to!): C# – same old, same old – not learning anything new here. Old code – blech! Environments with a code-and-fix mentality, ad hoc requirements, excessive overtime. PC support and system administration – even after 20 years, people still ask you to do this sometimes! Debugging – my skills are just not there yet. Oracle. Old tech: VB 6, XSLT, WinForms, .NET 3.5 or less. Old-style web dev. Information systems: ASP.NET WebForms, Reporting Services / Crystal Reports, SQL Server CRUD with a manual data layer, XAML MVVM – variations of the same concept, ad nauseam; low barriers to entry → a race to the bottom. Metro – an elegant API coupled to a horrendous UX – I'll wait for market-penetration viability before investing further in this.

    Conclusion: So if you are in a slump, take heart: programming is a great career choice compared to every other job!

    Read the article

  • Web Developer - How to enhance my skillset?

    - by atif089
    First of all, pardon my English – I am not a native English speaker. I have been a web developer for the past 4 years, and in those 4 years I have used the internet to learn things. My current skillset comprises HTML, CSS, PHP, MySQL and jQuery (I would not say JS – rather jQuery – because I am good at using jQuery and bad with plain JavaScript). These seemed like an easier part of my life, as I learned them quickly. But now I would really like to enhance my skillset, and I am pretty confused about which way to move ahead, considering that I have to learn things on my own using the web and references.

    Design
    My first option is design. Shall I get started with design and start using Adobe Illustrator, Photoshop, Flash and Flex? Design, along with my previous skills, looks like a money maker to me, as the two are closely related where web design is concerned. It's easier to learn the first two, and I hope I can find tutorials for the last two as well.

    Marketing
    A lot of my existing clients have asked me if I do SEO, so this looks like a good field as well. I cannot estimate the scope of SEO, but I assume it has a long future. Since I am business-minded and there are a lot of tutorials around, should I start with SEO, SEM, social media, PPC, or whatever else it consists of?

    Software Development
    The most complex and hardest option (perhaps), but the easiest way to find a decent job in my location. If I go for software development, what platform should I ideally go after? Should it be C# for Windows development, ASP.NET (which once again builds on my skill set), J2EE (there are a lot of jobs for J2EE developers here), or plain C and C++? Also, is it difficult to learn these languages right from "Hello World" using the internet? I have no clue how I learned PHP, but I am sort of a pro now, yet these other languages seem like a disaster to me. I can't figure out whether that's because PHP is easier or because there were a lot of tutorials around for PHP. Anyway, is it also possible to learn software development right from "Hello World" using the web?

    Database / Server (Linux) / Network Administration
    Seems like a job with decent pay, but there are fewer jobs and it's a bit harder to learn online (not sure).

    What is the right track for me to move ahead? P.S. – Age is not a constraint, as I am between 20 and 21 and I come from an IT background. I know quite a few basics: C (up to structures), C++ (up to objects; I was not able to understand templates), core Java (some basics and OOP concepts), RDBMS, Visual Basic 6 (used to do this long ago), and UNIX (a bunch of commands like who, finger, chmod, ls and a bit of bash). Or is there anything else that I have left out? Please give me feedback and the reason why I should pick that field.

    Read the article

  • Get Fanatical About Your Followers

    - by Mike Stiles
    In the fourth of our series of discussions with Aberdeen's Trip Kucera, we touch on what fans of your brand have come to expect in exchange for their fandom.

    Spotlight: Around the Oracle Social office, we live for football. So when we think of a true "fan" of a brand, something on the level of a football fan is what comes to mind. But are brands trying to invest in fans on that same level?

    Trip: Yeah, if you're a football fan, this is definitely your time of year. And if you've been to any NFL games recently, especially if you hadn't been for a few years previously, you may have noticed that from the cup holders to in-stadium Wi-Fi, there's an increasing emphasis being placed on "fan-focused" accommodations. That's what they're known as in the stadium business.

    Spotlight: How are brands doing in that fan-focused arena?

    Trip: Remember, fan is short for "fanatical." Brands can definitely learn from the way teams have become fanatical about their fans, or in the social media world, their followers. Many companies consider a segment of their addressable social audience as true fans; I've even heard the term "super-fans" used. So just as fans know and can tell you nearly everything about their favorite team, our research shows that there's a lot of value in getting to know your social audience – your followers – at a deeper level.

    Spotlight: So did your research show there's a lot to be gained by making fandom a two-way street?

    Trip: Aberdeen's new social relationship management research suggests that companies should develop capabilities to better analyze their social audience at a more granular level. Countless "ripped from the headlines" examples, from "United Breaks Guitars" to the most recent British Airways social fiasco we talked about a few weeks ago, show how social can magnify the impact of a single customer voice.

    Spotlight: So how do the companies who are executing social most successfully do that?

    Trip: Leaders, which are the top-performing companies in Aberdeen's study, are showing the value of identifying and categorizing your social audience. You should certainly treat every customer as if they have 10,000 followers, because they just might, but you can also proactively engage with high-value customers and high-value influencers. Getting back to the football analogy, it's like how teams strive to give every guest a great experience, but they really roll out the red carpet for those season ticket and luxury box holders.

    Spotlight: I'm not allowed in luxury boxes, so you'll have to tell me what that's like. But what is the brand equivalent of rolling out the red carpet?

    Trip: Leaders are nearly three times more likely than Followers to have a process in place that identifies key social influencers for engagement, and more than twice as likely to identify customer advocates for social outreach. This is the kind of knowledge that gives companies the ability to better target social messaging and promotions, like we talked about in our last discussion, as well as a basis for understanding how to measure the impact of their social media programs. I'll give you an example. I hosted an event at one of my favorite restaurants recently. I had mentioned them in a Tweet several weeks before the event, and on the day of the event they Tweeted out that they were looking forward to seeing me that evening. It's a small thing, but it had a big impact and I'd certainly go back as a result.

    Spotlight: So what specifically can brands use and look at to determine where their potential super-fans are?

    Trip: Social graph analysis, which looks at both demographic/psychographic trends and behavioral connections, can surface important brand value. Aberdeen's PR and Brand Management research indicated that top-performing companies are more than three times more likely than Followers both to determine demographic trends through social listening (44% vs. 13%) and to identify meaningful customer segments through social (44% vs. 12%). This kind of brand-level insight can complement and enrich traditional market research. But perhaps even more importantly, it can serve as an early warning system for customer experience failures.

    @mikestiles
    Photo: freedigitalphotos.net

    Read the article

  • Antenna Aligner Part 4: Role'ing in the deep

    - by Chris George
    Since last time I've been trying to sort out the general workflow of the app. It's fundamentally not hard: there is a list of transmitters, you select a transmitter and it shows the compass view. Having done quite a bit of ajax/asp.net/html in the past, I immediately started off by creating two divs within my 'page', one for the list, one for the compass, then using the onClick event in the list to switch the display attribute on the divs. This seemed to work, but did lead to some dodgy transitional redrawing artefacts which I was not happy with. So after some Googling I realised I was doing it all wrong!

    jQuery Mobile has the concept of giving an object in html a data-role. By giving a div the attribute data-role="page" it is then treated as a separate page on the mobile device. Within the code, this is referenced like an html anchor in the form #mypage. Using this system, page transitions such as fade or slide are automatically applied, which adds to the whole authenticity of the app! Here is a simple example:

        <a href="#compasspage">compass</a>

        <div data-role="page" id="compasspage" data-add-back-btn="true">

    But I don't want just a static link, I want to dynamically create my list and get each list element to switch to the compass page with the right information. So here is the jQuery that I used to dynamically inject new <li> rows into the <ul> block:

        $('ul').append($('<li/>', {           // appending <li>
            'data-role': "list-divider"
        }).append($('<a/>', {                 // appending <a> into <li>
            'href': '#compasspage',
            'data-transition': 'none',
            'onclick': 'selectTx(' + i + ')',
            'html': buttonHtml
        })));
        $('ul').listview('refresh');

    This is called within a for loop so the first 5 appropriate transmitters are used. There are several things of interest to note here. Firstly, I could not find a more elegant way to tell the target page which transmitter I've clicked on, so I have used the onclick event as well as the href attribute. The onclick event fires 'selectTx', which simply sets a global member variable to the specific index number I've clicked on. Yes, it's not nice, but it works. Secondly, the data-transition attribute is set to 'none'. I wanted the transition between the pages to be a whooshy, slidey effect; this worked going to the compass page, but returning to the list page gave some undesirable visual artefacts (flickering, redrawing etc.), so I decided to remove the transitions altogether, which was a shame. Thirdly, rather than embedding loads of html into the append command, I moved it out into a variable 'buttonHtml'. Doing this really tidied up my code. Until next time!

    Read the article

  • Would this be a good web application architecture?

    - by Gustav Bertram
    My problem
    Our MVC-based framework does not allow us to cache only part of our output. Ideally we want to cache static and semi-static bits, and run dynamic bits. In addition, we need to consider data caching that reacts to database changes.

    My idea
    The concept I came up with was to represent a page as a tree of XML fragment objects. (I say XML, but I mean XHTML.) Some of the fragments are dynamic and can pull their data directly from models or other sources, but most of the fragments are static scaffolding. If a subtree of fragments is completely static, then I imagine that it could unfold into pure XML that would then be cached as the text representation of its parent element. This process would ideally continue until we are left with a root element that contains all of the static XML and has a couple of dynamic XML fragments that are resolved and attached to the relevant nodes of the XML tree just before the page is displayed.

    In addition to separating content into dynamic and static fragments, some fragments could be dynamic and cached. A simple expiry time that propagates up through the XML fragment tree would indicate that a specific fragment should periodically be refreshed. A newspaper section or front page does not need to be updated each second; minutes, or sometimes even longer, is sufficient. Other fragments would be dynamic and uncached. Typically too many articles are viewed for them all to be cached – the cache would overflow – though some individual articles may be cached if they are extremely popular.

    Functional notes
    The folding mechanism would need to be smart enough to judge when it would be more profitable to fold a dynamic cached fragment and propagate the expiry date to the parent fragment, or to keep it separate and simply attach it to the XML tree when resolving the page. If some dynamic cached fragments are associated with database objects through mechanisms like a globally unique content id, then changes to the database could trigger changes to the output cache. If fragments store the identifiers of parent fragments, then they could trigger a refolding process that would then include the updated data. (A rough sketch of the fragment idea appears at the end of this post.)

    A set of pure XML with an ordered array of fragment objects (each storing the identifying information of the node to which it should be attached) can be resolved in a fairly simple way by walking the XML tree and merging in the data from the fragments. Because it is not necessary to parse and construct the entire tree in memory before attaching nodes, processing should be fairly fast. The identifier of each fragment would be a combination of relevant identity data and the type of fragment object. Cached parent fragments would contain references to these identifiers, in order to then either pull them from the fragment cache or run their code. The controller's responsibility is reduced to making changes to the database and telling the root XML fragment object to render itself.

    The Question
    My question has two parts: Is this a good design? Are there any obvious flaws I'm missing? Has somebody else thought of this before? References? Is there an existing alternative that I should consider? A cool templating engine maybe?
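
    To make the folding and expiry idea above concrete, here is a minimal sketch in Python. Every class, method and field name is hypothetical, invented purely for illustration (it is not from any existing framework), and it only shows two behaviours: fully static subtrees folding into cached text, and a per-fragment expiry time controlling when a dynamic-but-cached fragment re-renders.

        import time

        class Fragment:
            """A node in the page tree: static text, a dynamic renderer, or both,
            plus optional children and an optional cache lifetime (ttl, seconds)."""

            def __init__(self, static_text=None, render_fn=None, ttl=None, children=None):
                self.static_text = static_text
                self.render_fn = render_fn
                self.ttl = ttl
                self.children = children or []
                self._cached = None
                self._expires_at = 0.0

            def is_static(self):
                return self.render_fn is None and all(c.is_static() for c in self.children)

            def fold(self):
                """Collapse fully static subtrees into a single cached text blob."""
                if self.is_static():
                    self.static_text = (self.static_text or "") + "".join(
                        c.fold() or "" for c in self.children)
                    self.children = []
                    return self.static_text
                for c in self.children:
                    c.fold()
                return None

            def render(self):
                now = time.time()
                if self._cached is not None and now < self._expires_at:
                    return self._cached                      # still-fresh cached fragment
                body = self.static_text or ""
                if self.render_fn is not None:
                    body += self.render_fn()                 # dynamic part, resolved now
                body += "".join(c.render() for c in self.children)
                if self.ttl is not None:                     # dynamic-but-cached fragment
                    self._cached, self._expires_at = body, now + self.ttl
                return body

        # A static scaffold, a semi-static headline block cached for 60s,
        # an uncached article body, and a static footer.
        page = Fragment(static_text="<html><body>", children=[
            Fragment(render_fn=lambda: "<ul><li>headline</li></ul>", ttl=60),
            Fragment(render_fn=lambda: "<div>article text</div>"),
            Fragment(static_text="</body></html>"),
        ])
        page.fold()        # pre-folds any fully static subtrees
        print(page.render())

    In the real design the fold step would also decide whether to propagate an expiry upwards, and cache invalidation would be keyed by content ids; that bookkeeping is omitted here.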

    Read the article

  • PowerShell Script To Find Where SharePoint 2007 Features Are Activated

    - by Brian T. Jackett
    Recently I posted a script to find where SharePoint 2010 features are activated.  I built the original version to use SharePoint 2010 PowerShell commandlets, as that saved me a number of steps for filtering and gathering features at each level.  If there was ever demand for a 2007 version I could modify the script to handle that by using the object model instead of commandlets.  Just the other week a fellow SharePoint PFE, Jason Gallicchio, had a customer asking about a version for SharePoint 2007.  With a little bit of work I was able to convert the script to work against SharePoint 2007.

    Solution
    Below is the converted script that works against a SharePoint 2007 farm.  Note: There appears to be a bug with the 2007 version that does not give accurate results against a SharePoint 2010 farm.  I ran the 2007 version against a 2010 farm and got fewer results than with my 2010 version of the script.  Discussing with some fellow PFEs, I think the discrepancy may be due to sandboxed features, a new concept in SharePoint 2010.  I have not had enough time to test or confirm.  For the time being, only use the 2007 version of the script against SharePoint 2007 farms and the 2010 version against SharePoint 2010 farms.

    Note: This script is not optimized for medium to large farms.  In my testing it took 1-3 minutes to recurse through my demo environment.  This script is provided as-is with no warranty.  Run this in a smaller dev / test environment first.

        function Get-SPFeatureActivated
        {
            # see full script for help info, removed for formatting
            [CmdletBinding()]
            param(
                [Parameter(position = 1, valueFromPipeline=$true)]
                [string]
                $Identity
            ) #end param

            Begin
            {
                # load SharePoint assembly to access object model
                [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

                # declare empty array to hold results. Will add custom member for Url to show where activated at on objects returned from Get-SPFeature.
                $results = @()

                $params = @{}
            }
            Process
            {
                if([string]::IsNullOrEmpty($Identity) -eq $false)
                {
                    $params = @{Identity = $Identity}
                }

                # create hashtable of farm features to lookup definition ids later
                $farm = [Microsoft.SharePoint.Administration.SPFarm]::Local

                # check farm features
                $results += ($farm.FeatureDefinitions |
                    Where-Object {$_.Scope -eq "Farm"} |
                    Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                    % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value ([string]::Empty) -PassThru} |
                    Select-Object -Property Scope, DisplayName, Id, Url)

                # check web application features
                $contentWebAppServices = $farm.services | ? {$_.typename -like "Windows SharePoint Services Web Application"}

                foreach($webApp in $contentWebAppServices.WebApplications)
                {
                    $results += ($webApp.Features |
                        Select-Object -ExpandProperty Definition |
                        Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                        % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $webApp.GetResponseUri(0).AbsoluteUri -PassThru} |
                        Select-Object -Property Scope, DisplayName, Id, Url)

                    # check site collection features in current web app
                    foreach($site in ($webApp.Sites))
                    {
                        $results += ($site.Features |
                            Select-Object -ExpandProperty Definition |
                            Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                            % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $site.Url -PassThru} |
                            Select-Object -Property Scope, DisplayName, Id, Url)

                        # check site features in current site collection
                        foreach($web in ($site.AllWebs))
                        {
                            $results += ($web.Features |
                                Select-Object -ExpandProperty Definition |
                                Where-Object {[string]::IsNullOrEmpty($Identity) -or ($_.DisplayName -eq $Identity)} |
                                % {Add-Member -InputObject $_ -MemberType noteproperty -Name Url -Value $web.Url -PassThru} |
                                Select-Object -Property Scope, DisplayName, Id, Url)

                            $web.Dispose()
                        }
                        $site.Dispose()
                    }
                }
            }
            End
            {
                $results
            }
        } #end Get-SPFeatureActivated

        Get-SPFeatureActivated

    Conclusion
    I have posted this script to the TechNet Script Repository (click here).  As always I appreciate any feedback on scripts.  If anyone is motivated to run this 2007 version script against a SharePoint 2010 farm to see if they find any differences in the number of features reported versus what they get with the 2010 version script, I'd love to hear from you.

    -Frog Out

    Read the article

  • Learn Many Languages

    - by Jeff Foster
    My previous blog, Deliberate Practice, discussed the need for developers to "sharpen their pencil" continually, by setting aside time to learn how to tackle problems in different ways. However, the Sapir-Whorf hypothesis, a contested and somewhat-controversial concept from language theory, seems to hold reasonably true when applied to programming languages. It states that:

        "The structure of a language affects the ways in which its speakers conceptualize their world."

    If you're constrained by a single programming language, the one that dominates your day job, then you only have the tools of that language at your disposal to think about and solve a problem. For example, if you've only ever worked with Java, you would never think of passing a function to a method. A good developer needs to learn many languages. You may never deploy them in production, you may never ship code with them, but by learning a new language, you'll have new ideas that will transfer to your current "day-job" language. With the abundant choices in programming languages, how does one choose which to learn? Alan Perlis sums it up best:

        "A language that doesn't affect the way you think about programming is not worth knowing."

    With that in mind, here's a selection of languages that I think are worth learning and that have certainly changed the way I think about tackling programming problems.

    Clojure
    Clojure is a Lisp-based language running on the Java Virtual Machine. The unique property of Lisp is homoiconicity, which means that a Lisp program is a Lisp data structure, and vice-versa. Since we can treat Lisp programs as Lisp data structures, we can write our code generation in the same style as our code. This gives Lisp a uniquely powerful macro system, and makes it ideal for implementing domain-specific languages. Clojure also makes software transactional memory a first-class citizen, giving us a new approach to concurrency and dealing with the problems of shared state.

    Haskell
    Haskell is a strongly typed, functional programming language. Haskell's type system is far richer than C#'s or Java's, and allows us to push more of our application logic to compile-time safety. If it compiles, it usually works! Haskell is also a lazy language – we can work with infinite data structures. For example, in a board game we can generate the complete game tree, even if there are billions of possibilities, because the values are computed only as they are needed.

    Erlang
    Erlang is a functional language with a strong emphasis on reliability. Erlang's approach to concurrency uses message passing instead of shared variables, with strong support from both the language itself and the virtual machine. Processes are extremely lightweight, and garbage collection doesn't require all processes to be paused at the same time, making it feasible for a single program to use millions of processes at once, all without the mental overhead of managing shared state.

    The Benefits of Multilingualism
    By studying new languages, even if you won't ever get the chance to use them in production, you will find yourself open to new ideas and ways of coding in your main language. For example, studying Haskell has taught me that you can do so much more with types, and it has changed my programming style in C#. A type represents some state a program should have, and a type should not be able to represent an invalid state. I often find myself refactoring methods like this...

        void SomeMethod(bool doThis, bool doThat)
        {
            if (!(doThis ^ doThat))
                throw new ArgumentException("At least one arg should be true");
            if (doThis)
                DoThis();
            if (doThat)
                DoThat();
        }

    ...into a type-based solution, like this:

        enum Action { DoThis, DoThat, Both };

        void SomeMethod(Action action)
        {
            if (action == Action.DoThis || action == Action.Both)
                DoThis();
            if (action == Action.DoThat || action == Action.Both)
                DoThat();
        }

    At this point, I've removed the runtime exception in favor of a compile-time check. This is a trivial example, but it is just one of many ideas that I've taken from one language and implemented in another.

    Read the article

  • Olympics data available for all on Windows Azure SQL Database and Power View

    - by jamiet
    Are you looking around for some decent test data for your BI demos? Well, if so, Microsoft have provided some data about all medals won at the Olympic Games (1900 to 2008) in the OlympicsData workbook - Excel, SSIS, Azure sample; it provides analysis over athletes, countries, medal type, sport, discipline and various other dimensions. The data has been provided in an Excel workbook along with instructions on how to load the data into a Windows Azure SQL Database using SQL Server Integration Services (SSIS).

    Frankly though, the rigmarole of standing up your own Windows Azure SQL Database (ok, SQL Azure database) is both costly (SQL Azure isn't free) and time consuming (the provided instructions aren't exactly an idiot's guide, and getting SSIS to work properly with Excel isn't a barrel of laughs either). To ease the pain for all you BI folks out there that simply want to party on the data, I have loaded it all into the SQL Azure database that I use for hosting AdventureWorks on Azure. You can read more about AdventureWorks on Azure below; I'll summarise here by saying it is a SQL Azure database provided for the use of the SQL Server community and which is supported by voluntary donations. To view the data, the credentials you need are:

        Server:   mhknbn2kdz.database.windows.net
        Database: AdventureWorks2012
        User:     sqlfamily
        Password: sqlf@m1ly

    Type those into SSMS and away you go; the data is provided in four tables: [olympics].[Sport], [olympics].[Discipline], [olympics].[Event] & [olympics].[Medalist]. (A minimal connection sketch appears at the end of this post.)

    I figured this would be a good candidate for a Power View report, so I fired up Excel 2013 and built such a report to slice'n'dice through the data. The screenshots I took give a flavour of what is available: a view of all the available data; where do all the gymnastics medals go?; and which countries do the top ten all-time medal winners come from? You get the idea. There is masses of information here, and if you have Excel 2013 handy Power View provides a quick and easy way of surfing through it. To save you the bother of setting up the Power View report yourself, you can have the one that I took these screenshots from; it is available on my SkyDrive at OlympicsAnalysis.xlsx, so just hit the link and download to play to your heart's content. Party on, people!

    As I said above, the data is hosted on a SQL Azure database that I use for hosting "AdventureWorks on Azure", which I first announced in March 2013 in AdventureWorks2012 now available for all on SQL Azure. I'll repeat the pertinent parts of that blog post here:

        I am pleased to announce that as of today ... [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use for their own means. This database is free for you to use, but SQL Azure is of course not free, so before I give you the credentials please lend me your ears eyes for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use, and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year, please donate via PayPal to [email protected]. Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2 then that would just about be enough to keep this up for a year.

        If the community contributes more than we need then there are a number of additional things that could be done: host additional databases (Northwind anyone??), host in more datacentres (this first one is in Western Europe), or make a charitable donation. That last one, a charitable donation, is something I would really like to do. The SQL community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book, and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting, please make a contribution.

    I'd like to emphasize that last point. If my hosting this Olympics data is useful to you, please support this initiative by donating. Thanks in advance.

    @Jamiet
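
    If you would rather poke at these tables from code than from SSMS, here is a minimal sketch using Python and the pyodbc package. It assumes a SQL Server ODBC driver is installed locally (the driver name below is an assumption, not something from this post); the connection details are the published ones above, and the query is deliberately generic because the column names are not listed here.

        import pyodbc

        # Connection details as published in the post above.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=mhknbn2kdz.database.windows.net;"
            "DATABASE=AdventureWorks2012;"
            "UID=sqlfamily;PWD=sqlf@m1ly"
        )
        cursor = conn.cursor()

        # Peek at the medalist table; SELECT TOP avoids needing to know column names.
        cursor.execute("SELECT TOP 10 * FROM [olympics].[Medalist]")
        for row in cursor.fetchall():
            print(row)

        conn.close()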

    Read the article

  • JavaOne 2012: Nashorn Edition

    As with my JavaOne 2012: OpenJDK Edition post a while back (now updated to reflect the schedule of the talks), I find it convenient to have my JavaOne schedule ordered by subjects of interest. Besides OpenJDK in all its flavors, another subject I find very exciting is Nashorn. I blogged about the various material on Nashorn in the past, and we interviewed Jim Laskey, the Project Lead on Project Nashorn, in the Java Spotlight podcast. So without further ado, here are the JavaOne 2012 talks and BOFs with Nashorn in their title or abstract:

    CON5390 - Nashorn: Optimizing JavaScript and Dynamic Language Execution on the JVM - Monday, Oct 1, 8:30 AM - 9:30 AM
    There are many implementations of JavaScript, meant to run either on the JVM or standalone as native code. Both approaches have their respective pros and cons. The Oracle Nashorn JavaScript project is based on the former approach. This presentation goes through the performance work that has gone on in Oracle's Nashorn JavaScript project to date in order to make JavaScript-to-bytecode generation for execution on the JVM feasible. It shows that the new invokedynamic bytecode gets us part of the way there but may not quite be enough. What other tricks did the Nashorn project use? The presentation also discusses future directions for increased performance for dynamic languages on the JVM, covering proposed enhancements to both the JVM itself and to the bytecode compiler.

    CON4082 - Nashorn: JavaScript on the JVM - Monday, Oct 1, 3:00 PM - 4:00 PM
    The JavaScript programming language has been experiencing a renaissance of late, driven by the interest in HTML5. Nashorn is a JavaScript engine implemented fully in Java on the JVM. It is based on the Da Vinci Machine (JSR 292) and will be available with JDK 8. This session describes the goals of Project Nashorn, gives a top-level view of how it all works, provides the current status, and demonstrates examples of JavaScript and Java working together.

    BOF4763 - Meet the Nashorn JavaScript Team - Tuesday, Oct 2, 4:30 PM - 5:15 PM
    Come to this session to meet the Oracle JavaScript (Project Nashorn) language team.

    BOF6661 - Nashorn, Node, and Java Persistence - Tuesday, Oct 2, 5:30 PM - 6:15 PM
    With Project Nashorn, developers will have a full and modern JavaScript engine available on the JVM. In addition, they will have support for running Node applications with Node.jar. This unique combination of capabilities opens the door for best-of-breed applications combining Node with Java SE and Java EE. In this session, you'll learn about Node.jar and how it can be combined with Java EE components such as EclipseLink JPA for rich Java persistence. You'll also hear about all of Node.jar's mapping, caching, querying, performance, and scaling features.

    CON10657 - The Polyglot Java VM and Java Middleware - Thursday, Oct 4, 12:30 PM - 1:30 PM
    In this session, Red Hat and Oracle discuss the impact of polyglot programming from their own unique perspectives, examining non-Java languages that utilize Oracle's Java HotSpot VM. You'll hear a discussion of topics relating to Ruby, Lisp, and Clojure and the intersection of other languages where they may touch upon individual frameworks and projects, and you'll get perspectives on JavaScript via the Nashorn Project, an upcoming JavaScript engine, developed fully in Java.

    CON5251 - Putting the Metaobject Protocol to Work: Nashorn's Java Bindings - Thursday, Oct 4, 2:00 PM - 3:00 PM
    Project Nashorn is Oracle's new JavaScript runtime in Java 8. Being a JavaScript runtime running on the JVM, it provides integration with the underlying runtime by enabling JavaScript objects to manipulate Java objects, implement Java interfaces, and extend Java classes. Nashorn is invokedynamic-based, and for its Java integration it does away with the concept of wrapper objects in favor of direct virtual machine linking to Java objects' methods provided by a metaobject protocol, providing much higher performance than what could be expected from a scripting runtime. This session looks at the details of the integration, a topic of interest to other language implementers on the JVM and a wider audience of developers who want to understand how Nashorn works.

    That's 6 sessions tooting the Nashorn this year at JavaOne, up from 2 last year.

    Read the article

  • Introduction to WebCenter Personalization: "The Conductor"

    - by Steve Pepper
    There are some new faces in the town of WebCenter with the latest 11g PS3 release.  A new component has introduced itself as "Oracle WebCenter Personalization", a.k.a. WCP, to simplify delivery of a personalized experience and content to end users.  This posting reviews one of the primary components within WCP: "The Conductor".

    The Conductor: This ain't just an ordinary cloud...
    One of the founding principles behind WebCenter Personalization was to provide an open client-side API that remains independent of the technology invoking it, in addition to independence from the architecture running it.  The Conductor delivers this, and much, much more. The Conductor is the engine behind WebCenter Personalization that allows flow-based documents, called "Scenarios", to be managed and executed on the server side through a well-published and RESTful API.  The Conductor also supports an extensible model for custom provider integration that can be easily invoked within a Scenario to promote seamless integration with existing business assets.

    Introducing the Scenario Conductor
    Scenarios are declarative, offline-authored documents using the custom Personalization JDeveloper bundle included with WebCenter.  A Scenario contains one (or more) statements that can:
      - Create variables that are scoped to the current execution context
      - Iterate over collections, or loop until a specific condition is met
      - Execute one or more statements when a condition is met
      - Invoke other scenarios that exist within the same namespace
      - Invoke a data provider that integrates with custom applications
    Once a variable is assigned within the Scenario's execution context, it can be referenced anywhere within the same Scenario using the common Expression Language syntax used in J2EE web containers. Scenarios are then published and tested to the Integrated WebLogic Server domain, or published remotely to other domains running WebCenter Personalization.

    Various Client-side Models
    The Conductor server API is built upon RESTful services that support a wide variety of clients able to communicate over HTTP.  The Conductor supports the following client-side models:
      - REST: Popular browser-based languages can be used to manage and execute Conductor Scenarios.  There are other public methods to retrieve configured provider metadata that can be used by custom applications. The Conductor currently supports XML and JSON for its API syntax.
      - Java: WebCenter Personalization delivers a robust and lightweight Java client with the popular Jersey framework as its foundation.  It has never been easier to write a remote Java client to manage remote RESTful services.
      - Expression Language (EL): Allows the results of Scenario execution to control your user interface or embed personalized content using the session-scoped managed bean.  The EL client can also be used in straight JSP pages with minimal configuration.

    Extensible Provider Framework
    The Conductor supports a pluggable provider framework for integrating custom code with Scenario execution.  There are two types of providers supported by the Conductor:
      - Function Provider: Function Providers are simple Java-annotated classes with static methods that are meant to be served as utilities.  Some common uses would include object creation or instantiation, data transformation, and the like.  Function Providers can be invoked using the common EL syntax from variable assignments, conditions, and loops.  For example: ${myUtilityClass:doStuff(arg1,arg2)}.  If you are familiar with EL Functions, Function Providers are based on the same concept.
      - Data Provider: Like Function Providers, Data Providers are annotated Java classes, but they must adhere to a much stricter object model.  Data Providers have access to a wealth of Conductor services, such as a namespace-scoped configuration API that can be managed by Oracle Enterprise Manager, the Scenario execution context for expression resolution, and more.  Oracle ships with three out-of-the-box data providers that support integration with standardized content servers (CMIS), federated profile properties through the Properties Service, and WebCenter Activity Graph.

    Useful References
    If you are looking to immediately get started writing your own application using WebCenter Personalization Services, you will find the following references helpful in getting you on your way:
      - Personalizing WebCenter Applications
      - Authoring Personalized Scenarios in JDeveloper
      - Using Personalization APIs Externally
      - Implementing and Calling Function Providers
      - Implementing and Calling Data Providers

    Read the article

  • Fast Data - Big Data's Achilles heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database.  Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions.

    Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity.  The software in the Fast Data solution stack involves three key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database.  All three of these technologies address a high-throughput, low-latency data management requirement.

    Oracle Event Processing enables continuous query to filter the Big Data fire hose, enables intelligent chained events to real-time service invocation, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time so that SQL statements can look for triggers on events like a breach of a weighted moving average on a real-time data stream.

    Oracle Coherence is a distributed grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory-latency-speed access.  It also has some special capabilities to deploy remote behavioral execution for "near data" processing.

    The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent, low-latency reads.  For example, large sensor networks generate data that needs to be captured while analysts are simultaneously extracting the data using range-based queries for upstream analytics.  Another example might be storing cookies from user web sessions for ultra-low-latency user profile management, also leveraging that data using holistic MapReduce operations with your Hadoop cluster to do segmented site analysis.  Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low-latency and scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry-specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location-based personalization service.

    The question then becomes how these things work together to deliver an end-to-end Fast Data solution.  The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together.  You may have the need for the memory latencies of the Coherence cache, but just have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme-speed cache overflow and retrieval.  Here is a great reference to how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database.

    On the stream processing side, it is similar to the Coherence case.  As your sliding windows get larger, holding all the data in the stream can become difficult, and out-of-band data may need to be offloaded into persistent storage.  OEP needs an extreme-speed database like Oracle NoSQL Database to help it continue to perform for the real-time loop while dealing with persistent spill in the data stream.  Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together: OEP & Oracle NoSQL Database.

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery - Part 2

    - by Bob Zurek
    As discussed in my last blog posting on this topic, Information Discovery, a core capability of the Oracle Endeca Information Discovery solution, enables businesses to search, discover and navigate through a wide variety of big data including structured, unstructured and semi-structured data. With search as a core advanced capability of our product, it is important to understand some of the key differences and capabilities in the underlying data store of Oracle Endeca Information Discovery, and that is our Endeca Server. In the last post on this subject, we talked about exploratory search capabilities along with support for cascading relevance. Additional search capabilities in the Endeca Server, which differentiate it from simple keyword-based "search boxes" in other Information Discovery products, also include:

    The Endeca Server supports set search. The Endeca Server is organized around set retrieval, which means that it looks at groups of results (all the documents that match a search), as well as the relationship of each individual result to the set. Other approaches only compute the relevance of a document by comparing the document to the search query – not by comparing the document to all the others. For example, a search for "U.S." in another approach might match the title of a document and get a high ranking. But what if it were a collection of government documents in which "U.S." appeared in many titles, making that clue less meaningful? A set analysis would reveal this and be used to adjust relevance accordingly.

    The Endeca Server supports second-order relevance. Unlike simple search interfaces in traditional BI tools, which provide limited relevance ranking, such as a list of results based on keyword matching, Endeca enables users to determine the most salient terms to divide up the result. Determining this second-order relevance is the key to providing effective guidance.

    Support for queries and filters. Search is the most common query type, but hardly complete, and users need to express a wide range of queries. Oracle Endeca Information Discovery also includes navigation, interactive visualizations, analytics, range filters, geospatial filters, and other query types that are more commonly associated with BI tools. Unlike other approaches, these queries operate across structured, semi-structured and unstructured content stored in the Endeca Server. Furthermore, this set is easily extensible because the core engine allows for pluggable features to be added. Like a search engine, queries are answered with a results list, ranked to put the most likely matches first. Unlike "black box" relevance solutions, which generalize one strategy for everyone, we believe that optimal relevance strategies vary across domains. Therefore, the product provides line-of-business owners with a set of relevance modules that let them tune the best results based on their content.

    The Endeca Server query result sets are summarized, which gives users guidance on how to refine and explore further. Summaries include Guided Navigation® (a form of faceted search), maps, charts, graphs, tag clouds, concept clusters, and clarification dialogs. Users don't explicitly ask for these summaries; Oracle Endeca Information Discovery analytic applications provide the right ones, based on configurable controls and rules. For example, the analytic application might guide a procurement agent filtering for in-stock parts by visualizing the results on a map and calculating their average fulfillment time. Furthermore, the user can interact with summaries and filters without resorting to writing complex SQL queries; the user can simply click to add filters. Within Oracle Endeca Information Discovery, all parts of the summaries are clickable and searchable.

    We are living in a search-driven society where business users really seem to enjoy entering information into a search box. We do this every day as consumers, and therefore we have gotten used to looking for that box. However, the key to getting the right results is to guide the user in a way that provides additional discovery, beyond what they may have anticipated. This is why these important and advanced features of search inside the Endeca Server have been so important. They have helped to guide our great customers to success.

    Read the article

  • What common interface would be appropriate for these game object classes?

    - by Jefffrey
    Question
    A component-based system's goal is to solve the problems that derive from inheritance: for example, the fact that some parts of the code (called components) are reused by very different classes that, hypothetically, would lie in very different branches of the inheritance tree. That's a very nice concept, but I've found that CBS is often hard to accomplish without using ugly hacks. Implementations of this system are often far from clean. But I don't want to discuss this any further. My question is: how can I solve the same problems a CBS tries to solve with a very clean interface? (Possibly with examples; there are a lot of abstract talks about the "perfect" design already.)

    Context
    Here's an example I was going for before realizing I was just reinventing inheritance again:

        class Human {
        public:
            Position position;
            Movement movement;
            Sprite sprite;
            // other human specific components
        };

        class Zombie {
            Position position;
            Movement movement;
            Sprite sprite;
            // other zombie specific components
        };

    After writing that, I realized I needed an interface, otherwise I would have needed N containers for N different types of objects (or to use boost::variant to gather them all together). So I thought of polymorphism (move what systems do in a CBS design into class-specific functions):

        class Entity {
        public:
            virtual void on_event(Event) {} // not pure virtual on purpose
            virtual void on_update(World) {}
            virtual void on_draw(Window) {}
        };

        class Human : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };

        class Zombie : public Entity {
        private:
            Position position;
            Movement movement;
            Sprite sprite;
        public:
            virtual void on_event(Event) { ... }
            virtual void on_update(World) { ... }
            virtual void on_draw(Window) { ... }
        };

    Which was nice, except for the fact that now the outside world would not even be able to know where a Human is positioned (it does not have access to its position member). That would be useful for tracking the player position for collision detection, or if on_update the Zombie wanted to track down its nearest human to move towards him. So I added const Position& get_position() const; to both the Zombie and Human classes. And then I realized that this functionality was shared, so it should have gone to the common base class: Entity. Do you notice anything? Yes, with that methodology I would end up with a god Entity class full of common functionality (which is the thing I was trying to avoid in the first place).

    Meaning of "hacks" in the implementations I'm referring to
    I'm talking about the implementations that define entities as simple IDs to which components are dynamically attached. Their implementation can vary from C-style:

        int last_id;
        Position* positions[MAX_ENTITIES];
        Movement* movements[MAX_ENTITIES];

    where positions[i], movements[i], component[i], ... make up the entity, to a more C++ style:

        int last_id;
        std::map<int, Position> positions;
        std::map<int, Movement> movements;

    from which systems can detect whether an entity/id has attached components.
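
    For reference, a compact sketch of that "entities are just ids, components live in maps" approach – written in Python purely for brevity, with all names invented for illustration – showing how a "system" operates only on the entities that have every component it needs:

        # Component stores: entity id -> component value.
        stores = {
            "position": {},   # (x, y)
            "movement": {},   # (dx, dy)
            "sprite":   {},   # sprite name
        }
        next_id = 0

        def create_entity(**components):
            """Allocate an id and attach whichever components were supplied."""
            global next_id
            eid, next_id = next_id, next_id + 1
            for name, value in components.items():
                stores[name][eid] = value
            return eid

        def movement_system():
            """A 'system' touches only the entities that have both components it needs."""
            pos, mov = stores["position"], stores["movement"]
            for eid in pos.keys() & mov.keys():
                x, y = pos[eid]
                dx, dy = mov[eid]
                pos[eid] = (x + dx, y + dy)

        human   = create_entity(position=(0, 0), movement=(1, 0), sprite="human.png")
        zombie  = create_entity(position=(5, 5), movement=(0, -1), sprite="zombie.png")
        scenery = create_entity(position=(9, 9), sprite="tree.png")   # no movement component

        movement_system()
        print(stores["position"])   # {0: (1, 0), 1: (5, 4), 2: (9, 9)}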

    Read the article

  • Monitoring Windows Azure Service Bus Endpoint with BizTalk 360?

    - by Michael Stephenson
    I'm currently working with a customer who is undergoing an initiative to expose some of their line-of-business applications to external partners and SAAS applications, and as part of this we have been looking at using the Windows Azure Service Bus. For the first part of the project we were focused on some synchronous request-response scenarios where an external application would use the Service Bus relay functionality to get data from some internal applications.

    When we were looking at the operational monitoring side of the solution, it was obvious that although most of the normal server monitoring capabilities would be required for the on-premise components, we would have to look at new approaches to validate that the operation of the service from outside of the organization was working as expected. A number of months ago one of my colleagues, Elton Stoneman, wrote about an approach I have introduced with a number of clients in the past, where we implement a diagnostics service in each service component we build. This service allows us to make a call which flexes some of the working parts of the system to prove it is working within any SLA. This approach is discussed in the following article: http://geekswithblogs.net/EltonStoneman/archive/2011/12/12/the-value-of-a-diagnostics-service.aspx

    In our solution we wanted to take the same approach, but we had to consider that the service clients were external to the service. We also had to consider that by going through Windows Azure Service Bus it's not that easy to get most standard monitoring solutions to give you an easy way to do this. In a previous article I described how you can use BizTalk 360 to monitor things using a custom extension to the Web Endpoint Manager, and I felt that we could use this approach to provide an excellent way to monitor our Service Bus endpoint. The previous article is available at the following link: http://geekswithblogs.net/michaelstephenson/archive/2012/09/12/150696.aspx

    The Monitoring Solution
    BizTalk 360 currently has an easy way to hook up the endpoint manager to a url which it will then call; if a successful response is returned, it considers the endpoint to be in a healthy state. We would take advantage of this by creating an ASP.net web page which would be called by BizTalk 360, and behind this page we would implement the functionality to call the diagnostics service on our Service Bus endpoint. The ASP.net page could include logic to work out how to handle the response from the diagnostics service. For example, if the overall result of the diagnostics service was successful but the call took longer than a certain amount of time, then we could return an error and indicate the service is taking too long. The following diagram illustrates the monitoring pattern.

    The diagnostics service, which is hosted in the line-of-business application, allows us to ping a simple message through the Azure Service Bus relay to the WCF services in the LOB application, and we then get a response back indicating that the service is working fine. To implement this I used the exact same approach I described in my previous post to create a custom web page which calls the diagnostics service and then returns an HTTP response code which depends on the error condition returned, or a 200 if it was successful (a rough sketch of this probe logic appears at the end of this post).

    One of the limitations of this approach is that the competing-consumer pattern for listening to messages from Service Bus means that you cannot guarantee which server will process your diagnostics check message, but with BizTalk 360 you could simply add multiple endpoint checks so that it can access the individual on-premise web servers directly to ensure that each server is working fine, and then check that messages can also be processed through the cloud.

    Conclusion
    It took me about 15 minutes to get a proof of concept of this up and running which was able to monitor our web services that had been exposed via Windows Azure Service Bus. I was then able to inherit all of the monitoring benefits of BizTalk 360 to provide an enterprise-class monitoring solution for our cloud-enabled API.
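
    The probe logic itself is only a few lines. The original sits behind an ASP.net page that BizTalk 360 polls; purely as an illustration of the same pattern, here is a minimal sketch in Python using the requests package. The diagnostics URL and the SLA threshold are made-up placeholders, not values from the actual solution.

        import requests

        # Hypothetical URL of the diagnostics endpoint exposed through the Service Bus relay.
        DIAGNOSTICS_URL = "https://example.servicebus.windows.net/lobapp/diagnostics"
        MAX_SECONDS = 5.0   # SLA threshold for the round trip


        def probe():
            """Return the HTTP status code the monitoring page should emit:
            200 when the diagnostics call succeeds within the SLA, 500 otherwise."""
            try:
                response = requests.get(DIAGNOSTICS_URL, timeout=MAX_SECONDS * 2)
            except requests.RequestException:
                return 500                      # endpoint unreachable
            if response.status_code != 200:
                return 500                      # diagnostics reported a failure
            if response.elapsed.total_seconds() > MAX_SECONDS:
                return 500                      # worked, but slower than the SLA allows
            return 200


        if __name__ == "__main__":
            print(probe())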

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their root in functional programming). One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs.

    All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked. Unfortunately, type systems can only prevent some bugs. To take a classic problem of retrieving an index value from an array: since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place, and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid (the short sketch at the end of this post illustrates the idea). The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked.

    In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard, and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer).

    All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advance quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine".

    What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost.
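
    As a tiny, purely illustrative aside (not from the article), here is what the array example looks like in a language whose types can talk about lengths. The sketch below is Lean 4: an index access demands a proof that the index is within bounds, so the "unsafe" call is rejected at compile time rather than caught at run time.

        -- Lean 4. `ok` is a *proof* that the index is in range; without it the
        -- program does not compile, so "index 4 of a two-element array" can never run.
        def pick (xs : Array String) (i : Nat) (ok : i < xs.size) : String :=
          xs[i]'ok

        example : String := pick #["aardvark", "badger"] 1 (by decide)
        -- pick #["aardvark", "badger"] 4 (by decide)  -- rejected: no proof that 4 < 2
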
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.

    Read the article

  • Extreme Makeover, Phone Edition: Comcast's xfinity

    Mobile Makeover

    For many companies the first foray into Windows Phone 7 (WP7) may be porting their existing mobile apps. It is tempting to simply transfer existing functionality, avoiding the additional design costs, but readdressing business needs and taking advantage of the WP7 platform can reduce cost and is essential to a successful re-launch. To better understand the advantage of new development, let's examine a conceptual upgrade of Comcast's existing mobile app.

    Before

    Comcast has a great mobile app that provides several key features: the ability to browse the lineup using a guide, a client for Comcast email accounts, an On Demand gallery, and much more. We will leverage these and build on them using some of the incredible WP7 features.

    After

    With the proliferation of DVRs (digital video recorders) and a variety of media devices (TV, PC, mobile), content providers are challenged to find creative ways to build their brands. Every client touch point must provide both value-added services and opportunities for marketing and up-sale; WP7 makes it easy to focus on those opportunities. The new app is an excellent vehicle for presenting Comcast's newly rebranded TV, Voice, and Internet services. These services now fly under the banner of xfinity and have been expanded to provide the best experience for Comcast customers. The Windows Phone 7 app will increase the surface area of this service revolution.

    The home menu is simplified and highlights Comcast's Triple Play: Voice, TV, and Internet. The inbox has been replaced with a messages view, and message management is handled by a WP7 hub. The hub presents emails, tweets, and IMs from Comcast and other viewers the user follows on Twitter. The popular view orders shows based on the user's viewing history and current cable package. The first show, Glee, is both popular and participating in a conceptual co-marketing effort, so it receives prime positioning. The second spot goes to a hit show on a premium channel, in this example HBO's The Pacific, encouraging viewers to upgrade for this premium content. The remaining spots are ordered based on viewing history and popularity. Tapping the play button moves the user to the theatre where they can watch previews or full episodes streaming from Fancast. Tapping an extra presents the user with show details as well as interactive content that may be included as part of co-marketing efforts.

    Co-Marketing with Dynamic Content

    The success of Comcast's services is tied to the success of the networks and shows it purveys, making co-marketing efforts essential. In this concept FOX is co-marketing its popular show Glee. A customized panorama is updated with the latest gleeks' tweets, streaming HD episodes, and extras featuring photos and video of the cast. If WP7 apps can be dynamically extended with web-hosted .xap files, including sandboxed partner experiences would enable interactive features such as the Gleek Peek, in which a viewer can select a character from a panorama to view the actor's profile. This dynamic inline experience has a tailored appeal to aspiring creatives and is technically possible with Windows Phone 7.

    Summary

    The conceptual Comcast mobile app for Windows Phone 7 highlights just a few of the incredible experiences and business opportunities that can be unlocked with this latest mobile solution. It is critical that organizations recognize and take full advantage of these new capabilities.
Simply porting existing mobile applications does not leverage these powerful tools; re-examining existing applications and upgrading them to Windows Phone 7 will prove essential to the continued growth and success of your brand.

    Read the article

  • HERMES Medical Solutions Helps Save Lives with MySQL

    - by Bertrand Matthelié
    HERMES Medical Solutions was established in 1976 in Stockholm, Sweden, and is a leading innovator in medical imaging hardware/software products for health care facilities worldwide. HERMES delivers a plethora of different medical imaging solutions to optimize hospital workflow. HERMES advanced algorithms make it possible to detect the smallest changes under therapies, which is important and necessary to optimize different therapeutic methods and doses.

    Challenges

    Fighting illness and disease requires state-of-the-art imaging modalities and software in order to diagnose accurately, stage disease appropriately and select the best treatment available. Selecting and implementing a new database platform that would deliver the needed performance, reliability, security and flexibility required by the high-end medical solutions offered by HERMES.

    Solution

    Decision to migrate from an in-house database to an embedded SQL database powering the HERMES products, delivered either as software, integrated hardware and software solutions, or via the cloud in a software-as-a-service configuration. Evaluation of several databases and selection of MySQL based on its high performance, ease of use and integration, and low Total Cost of Ownership.

    On average, between 4 and 12 terabytes of data are stored in MySQL databases underpinning the HERMES solutions. The data generated by each medical study is stored for 10 years or more after the treatment was performed. MySQL-based HERMES systems also allow doctors worldwide to conduct new drug research projects leveraging the large amount of medical data collected. Hospitals and other HERMES customers worldwide highly value the “zero administration” capabilities and reliability of MySQL, enabling them to perform medical analysis without any downtime. Relying on MySQL as their embedded database, the HERMES team has been able to increase their focus on further developing their clinical applications. HERMES Medical Solutions could also leverage the Oracle Financing payment plan to spread its investment over time and make the MySQL choice even more valuable.

    “MySQL has proven to be an excellent database choice for us.
We offer high-end medical solutions, and MySQL delivers the reliability, security and performance such solutions require.” Jan Bertling, CEO.

    Read the article

  • When things go awry

    - by Phil Factor
    The moment the Entrepreneur opened his mouth on prime-time national TV, spelled out the URL and waxed big on how exciting ‘his’ new website was, I knew I was in for a busy night. I’d designed and built it. All at once, half a million people tried to log into the website. Although all my stress-testing paid off, I have to admit that the network locked up tight long before there was any danger of a database or website problem.

    Soon afterwards, the Entrepreneur and the Big Boss were there in the autopsy meeting. We picked through all our systems in detail to see how they’d borne the unexpected strain. Mercifully, in view of the sour mood of the Big Boss, it turned out that the only thing we could have done better was buy a bigger pipe to and from the internet. We’d specified that ‘big pipe’ when designing the system. The Big Boss had then railed at the cost and so we’d subsequently compromised. I felt that my design decisions were vindicated.

    The Big Boss brooded for a while. Then he made the significant comment: “What really ****** me off is the fact that, for ten minutes, we couldn’t take people’s money.” At that point I stopped feeling smug. Had the internet connection been better, the system would have reached its limit and failed rather precipitously, and that wasn’t what he wanted. Then it occurred to me that what had gummed up the connection was all those images on the site, that had made it so impressive for the visitors. If there had been a way to automatically pare down the site to the bare essentials under stress… Hmm.

    I began to consider disaster-recovery in the broadest sense – maintaining a service in spite of unusual or unexpected events. What he said makes a lot of sense: sacrifice whatever isn’t essential to keep the core service running when we approach the capacity limits. Maybe in IT we should borrow (or revive) the business concept of the ‘Skeleton service’, maintaining only the priority parts under stress, using a process that is well-prepared and carefully rehearsed.

    How might this work? Whatever the event we have to prepare for, it is all about understanding the priorities; knowing what one can dispense with when the going gets tough. In the event of database disaster, it’s much faster to deploy a skeletal system with only the essential data than to restore the entire system, though there would have to be a reconciliation process to update the revived database retrospectively, once the emergency was over. It isn’t just the database that could be designed for resilience. One could prepare for unusually high traffic in a website by designing a system that degraded gradually to a ‘skeletal’ site, one that maintained the commercial essentials without fat images, JavaScript libraries and razzmatazz. This is all what the Big Boss scathingly called ‘a mere technicality’.

    It seems to me that what is needed first is a culture of application and database design which acknowledges that we live in a very imperfect world, and react accordingly when things go awry.
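    As a rough illustration of the ‘Skeleton service’ idea, here is a minimal sketch in Java (the thresholds, class and method names, and the load measure are invented for illustration, not taken from the story above) of a gate that sheds non-essential features as load rises, while the core service – taking people’s money – is never switched off:

        import java.util.concurrent.atomic.AtomicInteger;

        /** Minimal sketch of a "skeleton service" gate: shed non-essential
         *  features as the number of in-flight requests rises. */
        public class SkeletonMode {
            private final AtomicInteger inFlight = new AtomicInteger();
            private final int softLimit;   // above this, drop the razzmatazz
            private final int hardLimit;   // above this, essentials only

            public SkeletonMode(int softLimit, int hardLimit) {
                this.softLimit = softLimit;
                this.hardLimit = hardLimit;
            }

            public void requestStarted()  { inFlight.incrementAndGet(); }
            public void requestFinished() { inFlight.decrementAndGet(); }

            /** Heavy extras: large images, script libraries, recommendations. */
            public boolean serveExtras()    { return inFlight.get() < softLimit; }

            /** Nice-to-have features that are cheaper than the extras. */
            public boolean serveSecondary() { return inFlight.get() < hardLimit; }

            // The checkout path never consults this class at all.
        }

    A page template would then ask serveExtras() before emitting the fat images and JavaScript, so the site degrades gradually to its commercial essentials instead of failing precipitously.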

    Read the article

  • New Oracle BI Applications released

    - by THE
    Oracle has just released two new Applications for Oracle Business Intelligence Analytics with the 7.9.6.x Extension Pack:

    · Oracle Manufacturing Analytics, part of the Oracle BI Applications product family, helps discrete and process manufacturing organizations optimize their supply networks by integrating data from across the enterprise value chain, thereby enabling executives, operations managers, cost accountants and production supervisors to make informed and actionable decisions related to manufacturing execution.

    · Oracle Enterprise Asset Management Analytics, part of the Oracle BI Applications product family, offers complete and enhanced visibility to enterprise-wide maintenance information. Pre-built reports covering Maintenance History, Maintenance Cost Analysis and Maintenance Work Orders provide Maintenance Managers information to maximize performance, identify potential issues much in advance, and address them before they escalate into serious problems.

    More information about the existing Business Intelligence Analytics Applications can be found on this page: http://www.oracle.com/us/solutions/ent-performance-bi/bi-applications-066544.html

    If you are not familiar with Oracle Manufacturing or Oracle Enterprise Asset Management, these PDFs might get you started:
    http://www.oracle.com/us/products/applications/060289.pdf
    http://www.oracle.com/us/products/applications/057127.pdf

    Read the article

  • PCI Encryption Key Management

    - by Unicorn Bob
    (Full disclosure: I'm already an active participant here and at StackOverflow, but for reasons that should hopefully be obvious, I'm choosing to ask this particular question anonymously.)

    I currently work for a small software shop that produces software that's sold commercially to manage small- to mid-size businesses in a couple of fairly specialized industries. Because these industries are customer-facing, a large portion of the software is related to storing and managing customer information – in particular, the storage (and securing) of customer credit card information. With that, of course, comes PCI compliance.

    To make a long story short, I'm left with a couple of questions about why certain things were done the way they were, and I'm unfortunately without much of a resource at the moment. This is a very small shop (I report directly to the owner, as does the only other full-time employee), the owner doesn't have an answer to these questions, and the previous developer is...err...unavailable.

    Issue 1: Periodic Re-encryption

    As of now, the software prompts the user to do a wholesale re-encryption of all of the sensitive information in the database (basically credit card numbers and user passwords) if either of these conditions is true:

    1. There are any non-encrypted pieces of sensitive information in the database (added through a manual database statement instead of through the business object, for example). This should not happen during the ordinary use of the software.
    2. The current key has been in use for more than a particular period of time. I believe it's 12 months, but I'm not certain of that. The point here is that the key "expires".

    This is my first foray into commercial solution development that deals with PCI, so I am unfortunately uneducated on the practices involved. Is there some aspect of PCI compliance that mandates (or even just strongly recommends) periodic key updating? This isn't a huge issue for me, other than that I don't currently have a good explanation to give to end users if they ask why they are being prompted to run it.

    Question 1: Is the concept of key expiration standard, and, if so, is it simply industry practice or an element of PCI?

    Issue 2: Key Storage

    Here's my real issue: the encryption key is stored in the database, just obfuscated. The key is padded on the left and right with a few garbage bytes and some bits are twiddled, but fundamentally there's nothing stopping an enterprising person from examining our (dotfuscated) code, determining the pattern used to turn the stored key into the real key, then using that key to run amok. This seems like a horrible practice to me, but I want to make sure that this isn't just one of those "grin and bear it" practices that people in this industry have taken to. I have developed an alternative approach that would prevent such an attack, but I'm just looking for a sanity check here.

    Question 2: Is this method of key storage – namely, storing the key in the database using an obfuscation method that exists in client code – normal or crazy?

    Believe me, I know that free advice is worth every penny that I've paid for it, nobody here is an attorney (or at least isn't offering legal advice), caveat emptor, etc. etc., but I'm looking for any input that you all can provide. Thank you in advance!
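    Regarding Issue 2, one commonly described alternative (offered here only as a hedged sketch, not as what this particular product does or should do) is envelope encryption: the database stores only a wrapped copy of the data-encryption key, and the key-encryption key (KEK) lives outside the database – in an HSM, an OS key store, or at minimum a protected configuration file. The javax.crypto calls below are standard; the EnvelopeCrypto class and its method names are invented for the illustration:

        import javax.crypto.Cipher;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;
        import javax.crypto.spec.GCMParameterSpec;
        import javax.crypto.spec.SecretKeySpec;
        import java.security.SecureRandom;

        /** Sketch of envelope encryption: only the *wrapped* data key is stored
         *  in the database; the key-encryption key (KEK) never goes near it. */
        public class EnvelopeCrypto {
            private static final SecureRandom RNG = new SecureRandom();

            /** Generate a fresh AES data key. */
            static SecretKey newAesKey() throws Exception {
                KeyGenerator kg = KeyGenerator.getInstance("AES");
                kg.init(256);
                return kg.generateKey();
            }

            /** Encrypt the data key under the KEK; the result (iv || ciphertext)
             *  is what gets stored in the database. */
            static byte[] wrapDataKey(SecretKey dataKey, SecretKey kek) throws Exception {
                byte[] iv = new byte[12];
                RNG.nextBytes(iv);
                Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
                c.init(Cipher.ENCRYPT_MODE, kek, new GCMParameterSpec(128, iv));
                byte[] wrapped = c.doFinal(dataKey.getEncoded());
                byte[] out = new byte[iv.length + wrapped.length];
                System.arraycopy(iv, 0, out, 0, iv.length);
                System.arraycopy(wrapped, 0, out, iv.length, wrapped.length);
                return out;
            }

            /** Recover the data key at runtime, given the KEK loaded from outside the DB. */
            static SecretKey unwrapDataKey(byte[] stored, SecretKey kek) throws Exception {
                Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
                c.init(Cipher.DECRYPT_MODE, kek, new GCMParameterSpec(128, stored, 0, 12));
                byte[] raw = c.doFinal(stored, 12, stored.length - 12);
                return new SecretKeySpec(raw, "AES");
            }
        }

    On Question 1, the PCI DSS does (as far as I recall) expect keys to be changed at the end of a defined cryptoperiod, which is probably where the 12-month expiry came from; with the wrapped-key approach above, rotation means generating a new key, re-encrypting, and keeping the old wrapped key around until every row has been migrated.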

    Read the article

  • Collision 2D Quads

    - by Vico Pelaez
    I want to detect a collision between two 2D squares: one square is static and the other moves according to the keyboard arrows. I have implemented some code to detect when they overlap, but nothing happens when they do overlap. I think I either don't understand the concept well, or the fact that one of the squares is moving is what breaks it. I would really appreciate your help. Thank you!

        #include <GL/glut.h>

        float x1 = 0.05f, Y1 = 0.05f;      // half-size of quad 1
        float x2 = 0.05f, Y2 = 0.05f;      // half-size of quad 2
        float posX1 = 0.5f, posY1 = 0.5f;  // position of the static quad
        float movX2 = 0.0f, movY2 = 0.0f;  // offset of the moving quad
        bool collisionB = false;

        struct box {
            float width  = 0.1f;
            float height = 0.1f;
        };

        box boxA, boxB;

        void init() {
            glClearColor(0.0, 0.0, 0.0, 0.0);
            glColor3f(1.0, 1.0, 1.0);
        }

        void quad1() {
            glTranslatef(posX1, posY1, 0.0);
            glBegin(GL_POLYGON);
            glColor3f(0.5, 1.0, 0.5);
            glVertex2f(-x1, -Y1);
            glVertex2f(-x1,  Y1);
            glVertex2f( x1,  Y1);
            glVertex2f( x1, -Y1);
            glEnd();
        }

        void quad2() {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glPushMatrix();
            glTranslatef(movX2, movY2, 0.0);
            glBegin(GL_POLYGON);
            glColor3f(1.5, 1.0, 0.5);
            glVertex2f(-x2, -Y2);
            glVertex2f(-x2,  Y2);
            glVertex2f( x2,  Y2);
            glVertex2f( x2, -Y2);
            glEnd();
            glPopMatrix();
        }

        void reset() {
            // Reset position of the moving square
            movX2 = 0.0f;
            movY2 = 0.0f;
            collisionB = false;
        }

        bool collision(box A, box B) {
            float leftA, leftB, rightA, rightB;
            float topA, topB, bottomA, bottomB;

            // Calculate the sides of box A
            leftA   = x1;
            rightA  = x1 + A.width;
            topA    = Y1;
            bottomA = Y1 + A.height;

            // Calculate the sides of box B
            leftB   = x2;
            rightB  = x2 + B.width;
            topB    = Y2;
            bottomB = Y2 + B.height;

            if (bottomA <= topB) return false;
            if (topA >= bottomB) return false;
            if (rightA <= leftB) return false;
            if (leftA >= rightB) return false;
            return true;
        }

        float move_unit = 0.1f;

        void keyboardown(int key, int x, int y) {
            switch (key) {
                case GLUT_KEY_UP:    movY2 += move_unit; break;
                case GLUT_KEY_RIGHT: movX2 += move_unit; break;
                case GLUT_KEY_LEFT:  movX2 -= move_unit; break;
                case GLUT_KEY_DOWN:  movY2 -= move_unit; break;
                default: break;
            }
            glutPostRedisplay();
        }

        void display() {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            quad1();
            if (!collision(boxA, boxB)) {
                quad2();
            } else {
                reset();
            }
            glFlush();
        }

        int main(int argc, char** argv) {
            glutInit(&argc, argv);
            glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
            glutInitWindowSize(500, 500);
            glutInitWindowPosition(0, 0);
            glutCreateWindow("Collision Practice");
            glutSpecialFunc(keyboardown);
            glutDisplayFunc(display);
            init();
            glutMainLoop();
        }
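    One thing that may help with the concept (a hedged sketch in Java rather than a fix for the exact program above): an axis-aligned bounding-box test has to be fed each box's current position as well as its size, so the moving quad's translation (movX2, movY2) and the static quad's translation (posX1, posY1) must appear in the numbers being compared:

        /** Axis-aligned bounding-box overlap test: x/y are a box's current
         *  lower-left corner, w/h its width and height. */
        public final class Aabb {
            public static boolean overlaps(float ax, float ay, float aw, float ah,
                                           float bx, float by, float bw, float bh) {
                return ax < bx + bw && ax + aw > bx    // overlap on the x axis
                    && ay < by + bh && ay + ah > by;   // overlap on the y axis
            }
        }

    In the code above, collision() compares only the half-sizes x1, Y1, x2 and Y2, which never change, so its result is the same whether or not the squares have moved.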

    Read the article

  • Profiling Startup Of VS2012 – JustTrace Profiler

    - by Alois Kraus
    JustTrace is made by Telerik, which is mainly known for its collection of UI controls. The current version (2012.3.1127.0) includes a performance and memory profiler which costs 614€ and is currently on sale, with a special offer, for 306€. It includes one year of free upgrades. The uneven € numbers are calculated from the 799€ price and the 50% discount. The UI is already in Metro style and simple to use. Multi-process profiling, attach, and method recording filters are not supported.

    It looks like JustTrace is, like ANTS, a "Just My Code" profiler. For stuff where you do not have the pdbs, or where you want to dig deeper into the BCL code, you will not get far. After getting the profile data you get in the All Methods grid a plain list with hit count and own time. The method list for all methods is also suspiciously short, which is a clear sign that you will not get far during the analysis of foreign code.

    But at least there is also a memory profiler included. For this I have to choose "Memory Profiler" as the Profiling Type in the first window to check the memory consumption of VS. There are some interesting numbers to see, but I do really miss the thread stack window from YourKit. How am I supposed to get a clue, when much memory is allocated and the CPU consumption is high, in which places I should look? The Snapshot summary gives a rough overview, which is OK for a first impression.

    Next is the Assemblies view. This gives you a list of all loaded assemblies. Not terribly useful.

    The By Type view gives you exactly what it is supposed to do. You have to keep in mind that this list is filtered by the types you checked in the Assemblies list. The By Type instance list only shows types from assemblies which do not originate from Microsoft. By default mscorlib and System are not checked. That is the reason why, the first time, my By Type window looked like this: The idea behind this feature is to show only your instances, because you are ultimately responsible for the overall memory consumption. I am not sure if I like this feature, because by default it hides too much. I do want to see at least how many strings and arrays are allocated. A simple namespace filter would also do it, in my opinion.

    Now you can examine all string instances and look at who in the object graph keeps a reference on them. That is nice, but YourKit has the big plus that you can also look into the string contents. I am also not sure how cycles in the graph are visualized, and what will happen if you have thousands of objects referencing you.

    That's pretty much it about JustTrace. It can help the average developer to pinpoint performance and memory issues by looking only at his own code and instances. Showing them more will not help them, because the sheer amount of information would overwhelm them, and you need a pretty good understanding of how the GC and the CLR work. When you have a performance issue on a customer machine, it is sometimes very helpful to be able to bring a profiler onto the machine (no pdbs, …) and to get a full snapshot of all processes involved in the problematic use case. For these more advanced use cases JustTrace is certainly the wrong tool.

    Next: SpeedTrace

    Read the article

  • Squibbly: LibreOffice Integration Framework for the Java Desktop

    - by Geertjan
    Squibbly is a new framework for Java desktop applications that need to integrate with LibreOffice or, more generally, need office features as part of a Java desktop solution that could also include, for example, JavaFX components. Here's what it looks like, right now, on Ubuntu 13.04:

    Why is the framework called Squibbly? Because I needed a unique-ish name, because "squibble" sounds a bit like "scribble" (which is what one does with text documents, etc.), and because of the many absurd definitions in the Urban Dictionary for the apparently real word "squibble", e.g., "A name for someone who is squibblish in nature." And, another e.g., "A squibble is a small squabble. A squabble is a little skirmish." But the real reason is the first definition (and definitely not the fourth definition): "Taking a small portion of another persons something, such as a small hit off of a pipe, a bite of food, a sip of a drink, or drag of a cigarette." In other words, I took (or "squibbled") a small portion of LibreOffice, i.e., the OfficeBean, and integrated it into a NetBeans Platform application. Now anyone can add new features to it, to do anything they need, such as create a legislative software system as Propylon has done with their own solution on the NetBeans Platform:

    For me, the starting point was Chuk Munn Lee's similar solution from some years ago. However, he uses reflection a lot in that solution, because he didn't want to bundle the related JARs with the application. I understand that benefit, but I find it even more beneficial not to require the user to specify the location of the LibreOffice installation, since all the necessary JARs and native libraries (currently 32-bit Linux only, by the way) are bundled with the application. Plus, hundreds of lines of reflection code, as in Chuk's solution, are not fun to work with at all.

    Switching between applications is done like this:

    It's a work in progress, a proof of concept only – just the result of a few hours of work to get the basic integration to work. Several problems remain, some of them potentially unsolvable, starting with these, but others will be added here as I identify them:

    Window management problems. I'd like to let the user have multiple LibreOffice applications and documents open at the same time, each in a new TopComponent. However, I haven't figured out how to do that. Right now, each application is opened into the same TopComponent, replacing the currently open application. I don't know the OfficeBean API well enough, e.g., should a single OfficeBean be shared among multiple TopComponents or should each of them have its own instance?

    Focus problems. When putting the application behind other applications and then switching back to it, typing text becomes impossible. When closing a TopComponent and reopening it, the content is lost completely. Somehow the loss of focus, and then the return of focus, disables something. No idea how to fix that.

    The project is checked into this location, which isn't public yet, so you can't access it yet. Once it's publicly available, it would be great to get some code contributions and tweaks, etc. https://java.net/projects/squibbly

    Here's the source structure, showing especially how the OfficeBean JARs and native libraries (currently for Linux 32-bit only) fit in:

    Ultimately, it would be cool to integrate or share code with http://joeffice.com!
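    For anyone curious what the non-reflection approach looks like in the small, here is a rough sketch of embedding the OfficeBean in a plain Swing window. It is a sketch only: it assumes the OfficeBean JARs (officebean.jar, unoil.jar, jurt.jar, juh.jar, ridl.jar) and a matching LibreOffice installation are reachable at runtime, and the exact OOoBean method signatures should be checked against the version you bundle:

        import com.sun.star.beans.PropertyValue;
        import com.sun.star.comp.beans.OOoBean;
        import javax.swing.JFrame;
        import javax.swing.SwingUtilities;

        /** Sketch: host a blank Writer document inside a Swing frame via the OfficeBean. */
        public class OfficeBeanDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        JFrame frame = new JFrame("OfficeBean sketch");
                        OOoBean bean = new OOoBean();
                        frame.add(bean);   // OOoBean is an AWT container
                        frame.setSize(800, 600);
                        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                        frame.setVisible(true);
                        try {
                            // "private:factory/swriter" asks the office for a new, empty Writer document
                            bean.loadFromURL("private:factory/swriter", new PropertyValue[0]);
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
            }
        }

    In Squibbly the same bean sits inside a NetBeans Platform TopComponent instead of a JFrame, which is where the window-management and focus questions above come from.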

    Read the article

  • Generic Adjacency List Graph implementation

    - by DmainEvent
    I am trying to come up with a decent adjacency-list graph implementation so I can start tooling around with all kinds of graph problems and algorithms, like travelling salesman and other problems... But I can't seem to come up with a decent implementation. This is probably because I am trying to dust the cobwebs off my data structures class.

    What I have so far – and this is implemented in Java – is basically an edgeNode class that has a generic type and a weight, in the event the graph is indeed weighted:

        public class edgeNode<E> {
            private E y;
            private int weight;
            //... getters and setters as well as constructors...
        }

    I have a graph class that has a list of edges, a value for the number of vertices, an int value for edges, as well as a boolean value for whether or not it is directed. That brings up my first question: if the graph is indeed directed, shouldn't I have a value for that in my edgeNode class? Or would I just need to add another vertex to my LinkedList? That would imply that a directed graph is 2X as big as an undirected graph, wouldn't it?

        public class graph {
            private List<edgeNode<?>> edges;
            private int nVertices;
            private int nEdges;
            private boolean directed;
            //... getters and setters as well as constructors...
        }

    Finally, does anybody have a standard way of initializing their graph? I was thinking of reading in a pipe-delimited file, but that is so 1997.

        public graph GenereateGraph(boolean directed, String file) {
            List<edgeNode<?>> edges;
            graph g = new graph();
            try {
                int count = 0;
                String line;
                FileReader input = new FileReader("C:\\Users\\derekww\\Documents\\JavaEE Projects\\graphFile");
                BufferedReader bufRead = new BufferedReader(input);
                line = bufRead.readLine();
                count++;
                edges = new ArrayList<edgeNode<?>>();
                while (line != null) {
                    line = bufRead.readLine();
                    // "|" must be escaped because split() takes a regular expression
                    Object edgeInfo = line.split("\\|")[0];
                    int weight = Integer.parseInt(line.split("\\|")[1]);
                    edgeNode<String> e = new edgeNode<String>((String) edgeInfo, weight);
                    edges.add(e);
                }
                return g;
            } catch (Exception e) {
                return null;
            }
        }

    I guess when I am adding edges, if the boolean is true I would be adding a second edge. So far, this all depends on the file I write. So if I wrote a file with the following vertices and weights...

        Buffalo | 18
        Pittsburgh | 20
        New York | 15
        D.C | 45

    ...I would obviously load them into my list of edges, but how can I represent one vertex connected to the other, and so on? I would need the opposite vertex? Say I was representing highways connected to each city, weighted and un-directed (each edge is bi-directional with weights in some fictional distance unit)... Would my implementation be the best way to do that?

    I found this tutorial online (Graph Tutorial) that has a connector object. This appears to me to be a collection of vertices pointing to each other. So you would have A and B, each with their weights and so on, and you would add this to a list, and this list of connectors to your graph... That strikes me as somewhat cumbersome and a little dismissive of the adjacency list concept. Am I wrong, and is that a novel solution?

    This is all inspired by Steve Skiena's Algorithm Design Manual, which I have to say is pretty good so far. Thanks for any help you can provide.
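    For comparison, one common shape for an adjacency list (a sketch only – the class and method names here are my own, not from the book) keeps a map from each vertex to its outgoing edges and lets the graph, rather than the edge class, handle the directed/undirected distinction; an undirected edge is simply stored in both lists, so the lists roughly double in size but each edge object stays the same:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        /** Sketch of a generic adjacency-list graph. */
        public class AdjacencyListGraph<V> {
            /** An outgoing edge: the vertex it points to plus a weight. */
            public static final class Edge<T> {
                public final T to;
                public final int weight;
                public Edge(T to, int weight) { this.to = to; this.weight = weight; }
            }

            private final Map<V, List<Edge<V>>> adjacency = new HashMap<V, List<Edge<V>>>();
            private final boolean directed;

            public AdjacencyListGraph(boolean directed) { this.directed = directed; }

            public void addVertex(V v) {
                if (!adjacency.containsKey(v)) {
                    adjacency.put(v, new ArrayList<Edge<V>>());
                }
            }

            /** Adds from -> to; for an undirected graph the reverse edge is stored as well. */
            public void addEdge(V from, V to, int weight) {
                addVertex(from);
                addVertex(to);
                adjacency.get(from).add(new Edge<V>(to, weight));
                if (!directed) {
                    adjacency.get(to).add(new Edge<V>(from, weight));
                }
            }

            public List<Edge<V>> neighbors(V v) {
                List<Edge<V>> list = adjacency.get(v);
                return list == null ? Collections.<Edge<V>>emptyList() : list;
            }
        }

    Loading the pipe-delimited file then becomes a loop that calls addEdge("Buffalo", "Pittsburgh", 18) and so on – note the file would need both endpoints per line, since a single city and a weight alone don't say which two vertices the edge connects.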

    Read the article

  • Use a custom value object or a Guid as an entity identifier in a distributed system?

    - by Kazark
    tl;dr I've been told that in domain-driven design, an identifier for an entity could be a custom value object, i.e. something other than a Guid, string, int, etc. Can this really be advisable in a distributed system?

    Long version

    I will invent a situation analogous to the one I am currently facing. Say I have a distributed system in which a central concept is an egg. The system allows you to order eggs and see spending reports and inventory-centric data such as quantity on hand, usage, valuation and what have you. There are a variety of services backing these behaviors. And say there is also another app which allows you to compose recipes that link to a particular egg type.

    Now egg type is broken down by species – ostrich, goose, duck, chicken, quail. This is fine and dandy because it means that users don't end up with ostrich eggs when they wanted quail eggs and whatnot. However, we've been getting complaints because jumbo chicken eggs are not even close to equivalent to small ones. The price is different, and they really aren't substitutable in recipes. And here we thought we were doing users a favor by not overwhelming them with too many options.

    Currently each of the services (say, OrderSubmitter, EggTypeDefiner, SpendingReportsGenerator, InventoryTracker, RecipeCreator, RecipeTracker, or whatever) is identifying egg types with an industry-standard integer representation of the species (let's call it speciesCode). We realize we've goofed up because this change could affect every service. There are two basic proposed solutions:

    1. Use a predefined identifier type like Guid as the eggTypeID throughout all the services, but make EggTypeDefiner the only service that knows that this maps to a speciesCode and eggSizeCode (and potentially to an isOrganic flag in the future, or whatever).

    2. Use an EggTypeID value object which is a combination of speciesCode and eggSizeCode in every service.

    I've proposed the first solution because I'm hoping it better encapsulates the definition of what an egg type is in the EggTypeDefiner and will be more resilient to changes, say if some people now want to differentiate eggs by whether or not they are "organic". The second solution is being suggested by some people who understand DDD better than I do, in the hope that less enrichment and lookup will be necessary that way, with the justification that in DDD using a value object as an ID is fine. Also, they are saying that EggTypeDefiner is not a domain and EggType is not an entity and as such should not have a Guid for an ID.

    However, I'm not sure the second solution is viable. This "value object" is going to have to be serialized into JSON and URLs for GET requests and used with a variety of technologies (C#, JavaScript...), which breaks encapsulation and thus removes any behavior of the identifier value object (is either of the fields optional? etc.). Is this a case where we want to avoid something that would normally be fine in DDD because we are trying to do DDD in a distributed fashion?

    Summary

    Can it be a good idea to use a custom value object as an identifier in a distributed system (solution #2)?
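    To make solution #2 concrete, the kind of value object being proposed might look roughly like the following sketch. The field names are taken from the question; the class itself, its canonical string form and everything else are invented for illustration, and it is written in Java purely as a sketch even though the services in question use C# and JavaScript:

        import java.util.Objects;

        /** Sketch of a composite value-object identifier (solution #2). */
        public final class EggTypeId {
            private final int speciesCode;
            private final int eggSizeCode;

            public EggTypeId(int speciesCode, int eggSizeCode) {
                this.speciesCode = speciesCode;
                this.eggSizeCode = eggSizeCode;
            }

            public int getSpeciesCode() { return speciesCode; }
            public int getEggSizeCode() { return eggSizeCode; }

            /** Canonical form for URLs and JSON payloads, e.g. "12-3". */
            public String asString() { return speciesCode + "-" + eggSizeCode; }

            public static EggTypeId parse(String s) {
                String[] parts = s.split("-");
                return new EggTypeId(Integer.parseInt(parts[0]), Integer.parseInt(parts[1]));
            }

            @Override public boolean equals(Object o) {
                if (this == o) return true;
                if (!(o instanceof EggTypeId)) return false;
                EggTypeId other = (EggTypeId) o;
                return speciesCode == other.speciesCode && eggSizeCode == other.eggSizeCode;
            }

            @Override public int hashCode() { return Objects.hash(speciesCode, eggSizeCode); }

            @Override public String toString() { return asString(); }
        }

    The serialization worry in the question then boils down to whether every service can be trusted to round-trip that canonical string without looking inside it; the moment another service starts parsing out the speciesCode for itself, you are back to the coupling that solution #1 was meant to avoid.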

    Read the article

< Previous Page | 157 158 159 160 161 162 163 164 165 166 167 168  | Next Page >