Search Results

Search found 13177 results on 528 pages for 'jboss tools'.


  • Five Key Trends in Enterprise 2.0 for 2011

    - by kellsey.ruppel(at)oracle.com
    We recently sat down with Andy MacMillan, an industry veteran and vice president of product management for Enterprise 2.0 at Oracle, to get his take on the year ahead in Enterprise 2.0 (E2.0). He offered us his five predictions about the ways he believes E2.0 technologies will transform business in 2011. 1. Forward-thinking organizations will achieve an unprecedented level of organizational awareness. Enterprise 2.0 and Web 2.0 technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. But this year we are anticipating that organizations will go to the next step and integrate social activities with business applications to deliver rich contextual "activity streams." Activity streams are a new way for enterprise users to get relevant information as quickly as it happens, by navigating to that information in context directly from their portal. We don't mean syndicating social activities limited to a single application. Instead, we believe back-office systems will be combined with social media tools to drive how users make informed business decisions in brand new ways. For example, an account manager might log into the company portal and automatically receive notification that colleagues are closing business around a certain product in his market segment. With a single click, he can reach out instantly to these colleagues via social media and learn from their successes to drive new business opportunities in his own area. 2. Online customer engagement will become a high priority for CMOs. A growing number of chief marketing officers (CMOs) have created a new direct report called "head of online"--a senior marketing executive responsible for all engagements with customers and prospects via the Web, mobile, and social media. This new field has been dubbed "Web experience management" or "online customer engagement" by firms and analyst organizations. It is likely to rapidly increase demand for a host of new business objectives and metrics from Web content management solutions. As companies interface with customers more and more over the Web, Web experience management solutions will help deliver more targeted interactions to ensure increased customer loyalty while meeting sales and business objectives. 3. Real composite applications will be widely adopted. We expect organizations to move from the concept of a single "uber-portal" that encompasses all the necessary features to a more modular, component-based concept for composite applications. This approach is now possible as IT and power users are empowered to assemble new, purpose-built composite applications quickly from existing components. 4. Records management will drive ECM consolidation. We continue to see a significant shift in the approach to records management. Several years ago initiatives were focused on overlaying records management across a set of electronic repositories and physical storage locations. We believe federated records management will continue, but we also expect to see records management driving conversations around single-platform content management consolidation. 5. Organizations will demand ECM at extreme scale. We have already seen a trend within IT organizations to provide a common, highly scalable infrastructure to consolidate and support content and information needs. But as data sizes grow exponentially, ECM at an extreme scale is likely to spread at unprecedented speeds this year. 
This makes sense as regulations and transparency requirements rise. The model in which ECM and lightweight CMS systems provide basic content services such as check-in, update, delete, and search has converged around a set of industry best practices and has even been coded into new industry standards such as Content Management Interoperability Services (CMIS). As these services converge and the demand for them accelerates, organizations are beginning to rationalize investments into a single, highly scalable infrastructure. Is your organization ready for Enterprise 2.0 in 2011? Learn more.

    Read the article

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    “The packages are running slower in Prod than they are in Dev” My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running on average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date so I had the time, and having recently been binge-watching CSI: New York, I also had the inclination. An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid: that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations, monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion. Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks were caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and the wait statistics DMVs, I had a lead in the shape of a query that joined three tables containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test I saw the culprit: an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers, who quickly resolved the data type mismatches and found and fixed “several” others as well. After the schema changes the same query with the same databases ran in under 1 second on all systems and the logical reads dropped to fewer than 300. The analysis also revealed that on Dev the ETL task was pulling data across a LAN, whereas Prod and Test were connected across a slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance. As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving in to the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one.
It can seem a daunting prospect to return to the fundamentals of the code so close to roll out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.
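To make the culprit concrete, here is a small, hypothetical T-SQL repro of the kind of data type mismatch described above. The table and column names are invented for illustration; the article does not show the original system's schema.

-- Dimension table keyed on a numeric value.
CREATE TABLE dbo.DimAccount
(
    AccountKey  numeric(18,0) NOT NULL PRIMARY KEY,
    AccountName nvarchar(100) NOT NULL
);

-- Staging table that stored the same key as varchar(50) by mistake.
CREATE TABLE dbo.FactLoad
(
    LoadID      int IDENTITY(1,1) PRIMARY KEY,
    AccountKey  varchar(50) NOT NULL
);

-- This join forces SQL Server to implicitly convert f.AccountKey on every row,
-- which shows up as a CONVERT_IMPLICIT warning in the execution plan and drives
-- up logical reads by preventing an efficient index seek on the join predicate.
SELECT f.LoadID, d.AccountName
FROM dbo.FactLoad AS f
JOIN dbo.DimAccount AS d
    ON f.AccountKey = d.AccountKey;

-- Aligning the data types, as the BI developers did, removes the implicit
-- conversion and lets the optimizer seek on the key again.
ALTER TABLE dbo.FactLoad
ALTER COLUMN AccountKey numeric(18,0) NOT NULL;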

    Read the article

  • Single page not appearing in Google Search

    - by Dan
    Description
I have a static franchise website which has various sub-pages, each dedicated to an individual franchisee. For each franchisee page, the only thing slightly similar between all of them is the page title; they follow this structure: <title> Welcome to THE_COMPANY - PRODUCT_DESCRIPTION Services, THE_LOCATION </title> THE_COMPANY and PRODUCT_DESCRIPTION are the same across all franchisees, however THE_LOCATION changes depending on where they are located in the UK. Each franchisee page has the following <meta /> tags:
<meta name="DC.creator" content="user"/> <meta name="DC.format" content="text/html"/> <meta name="DC.language" content="en"/> <meta name="DC.date.modified" content="2014-01-23T11:22:31+00:00"/> <meta name="DC.date.created" content="2014-01-23T11:22:09+00:00"/> <meta name="DC.type" content="Page"/> <meta name="DC.distribution" content="Global"/> <meta name="robots" content="ALL"/> <meta name="distribution" content="Global"/>
The main content on each franchisee page is completely different.
The Problem
There is one particular franchisee page, located in Area A, which will not appear in Google Search results at all. However, every single other franchisee page is number 1 if you Google Search for "THE_COMPANY, THE_LOCATION". And if I do the same search on Bing, Yahoo or DuckDuckGo, the Area A franchisee is the first result on all of them. Has Google for some reason blacklisted one page on the site?
What I Have Tried
- Ensuring the page is referenced in my sitemap.xml file
- 'Fetching as Google Bot' the link www.the_company.co.uk/areaa; when that came back as OK I would submit it to the index
- Resubmitting the sitemap.xml file in Webmaster Tools
- Linking to the Area A page from another page's content; for this I also waited about 3 weeks before checking again to give Google time to re-index
- Making a change to the page content and waiting another 2 / 3 weeks
- Removing the page completely and recreating it with an alternative URL
The closest thing I have found to this issue is this StackOverflow question, but this particular franchisee has existed for almost a year; it used to appear in Google searches but no longer does. I'm guessing the Panda update wasn't too happy with something on the page, but it hasn't affected anything else on the site and I am at a loss for things to try. I would greatly appreciate any information or thoughts as to what could have caused this. Thanks.
Update
In line with Daniel Fukuda's answer below, I have followed some of his steps but everything seems to check out alright:
HTTP Headers: HTTP/1.1 200 OK => Date => Tue, 25 Feb 2014 16:31:29 GMT Server => Zope/(2.12.16, python 2.6.6, linux2) ZServer/1.1 Content-Length => 40078 Expires => Sat, 01 Jan 2000 00:00:00 GMT Content-Type => text/html;charset=utf-8 Content-Language => en Vary => Accept-Encoding Connection => close
Robots <meta /> tag: <meta name="robots" content="ALL"/> (I have updated this <meta /> tag to read content="INDEX" instead now.)
robots.txt: User-agent: * Disallow: User-Agent: Googlebot Disallow: /*sendto_form$ Disallow: /*folder_factories$
Using site:THE_COMPANY.co.uk: Searching for 'AREA A site:THE_COMPANY.co.uk' does not return the page, but regardless of that, searching just for site:THE_COMPANY.co.uk will not necessarily return every indexed page, or so I understand...
Update
It appears Google likes to drop pages from the index every now and then; despite my steps above, I left the site alone and the page appeared back in the SERPs by itself.

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and there is so little time we all have. As we have little time, we need to be selective about whatever we learn. I believe I know quite a lot of things in SQL but I still do not know what is around SQL. I have started to learn about NewSQL recently. If you wonder what NewSQL is, I encourage all of you to read my blog post about NewSQL over here Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular – providing the scale of NoSQL with the SQL features and transactions. As a part of learning NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), available only in warehousing databases, to the transactional database. The reason I am more intrigued about learning ClustrixDB is their recent announcement on Oct 31. ClustrixDB was only available as an appliance, but now with their software release on Oct 31, everyone can use it. It is now available as forever free for up to 12 cores with community support, and there is a 45 day trial for unlimited cluster sizes. With the forever free offer, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use them for their transactional database. Here are a few of the details I have quickly noted for ClustrixDB. ClustrixDB allows users to:
- Scale by simply adding nodes to the cluster with a single command
- Run billions of transactions a day
- Run fast real-time analytics
- Achieve high availability with recovery from node failure
- Manage itself
- Easily migrate from MySQL, as it is nearly plug-and-play compatible and uses MySQL drivers, tools and replication
While I was going through the documentation I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution. ClustrixDB indeed sounds like a very promising solution, so I decided to dig a bit deeper to understand who the current customers of Clustrix are, as they have existed in the industry for quite a few years. Their client list is indeed very interesting and here is my quick research about them.
- Twoo.com – Europe’s largest social discovery (dating) site runs 4.4 billion transactions a day with table sizes over a terabyte, on a 168-core cluster.
- EngageBDR – Top 3 in the online advertising category, uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds.
- NoMoreRack – Top 2 fastest growing e-commerce company in the US, used ClustrixDB for high availability and fast growth through the Amazon cloud.
- MakeMyTrip – India’s leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.
Many enterprises such as AOL, CSC, Rakuten and Symantec use ClustrixDB when their applications need scale. I must accept that I am impressed with the information I have learned so far and now is the time to get some hands-on experience with their product. I want to learn this technology so that in the future, when it is about NewSQL, I know what I am talking about.
Read more about why Clustrix believes ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so that in the future when we discuss its technical aspects, we are all on the same page. The software can be downloaded here. Reference: Pinal Dave (http://blog.SQLAuthority.com). Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Clustrix

    Read the article

  • Advanced Record-Level Business Intelligence with Inner Queries

    - by gt0084e1
    While business intelligence is generally applied at an aggregate level to large data sets, it's often useful to provide a more streamlined insight into individual records or to be able to sort and rank them. For instance, a salesperson looking at a specific customer could benefit from basic stats on that account. A marketer trying to define an ideal customer could pull the top entries and look for insights or patterns. Inner queries let you do sophisticated analysis without the overhead of traditional BI or OLAP technologies like Analysis Services.
Example - Order History Constancy
Let's assume that management has realized that the best thing for our business is to have customers ordering every month. We'll need to identify and rank customers based on how consistently they buy and when their last purchase was, so sales & marketing can respond accordingly. Our current application may not be able to provide this, and adding an OLAP server like SSAS may be overkill for our needs. Luckily, SQL Server provides the ability to do relatively sophisticated analytics via inner queries. Here's the kind of output we'd like to see.
Creating the Queries
Before you create a view, you need to create the SQL query that does the calculations. Here we are calculating the total number of orders as well as the number of months since the last order. These fields might be very useful to sort by but may not be available in the app. This approach provides a very streamlined and high-performance method of delivering actionable information without radically changing the application. It also works very well with self-service reporting tools like Izenda.
SELECT CustomerID, CompanyName,
    (SELECT COUNT(OrderID) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) As Orders,
    DATEDIFF(mm, (SELECT Max(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID), getdate()) AS MonthsSinceLastOrder
FROM Customers
Creating Views
To turn this or any query into a view, just put CREATE VIEW AS before it. If you want to change it later, use the statement ALTER VIEW AS (see the sketch after the ranking query below).
Creating Computed Columns
If you'd prefer not to create a view, inner queries can also be applied by using computed columns. Place your SQL in the (Formula) field of the Computed Column Specification or check out this article here.
Advanced Scoring and Ranking
One of the best uses for this approach is to score leads based on multiple fields. For instance, you may be in a business where customers that don't order every month require more persistent follow-up. You could devise a simple formula that shows the continuity of an account. If they ordered every month since their first order, they would be at 100, indicating that they have been ordering 100% of the time. Here's the query that would calculate that. It uses a few SQL tricks to make this happen. We are extracting the count of unique months and then dividing by the months since the initial order. This query will give you the following information, which can be used to help sales and marketing know where to focus. You could sort by this percentage to know where to start calling or to find patterns describing your best customers.
- Number of orders
- First order date
- Last order date
- Percentage of months in which an order was placed since the first order
SELECT CustomerID,
    (SELECT COUNT(OrderID) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) As Orders,
    (SELECT Max(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) AS LastOrder,
    (SELECT Min(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) AS FirstOrder,
    DATEDIFF(mm, (SELECT Min(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID), getdate()) AS MonthsSinceFirstOrder,
    100 * (SELECT COUNT(DISTINCT 100*DATEPART(yy,OrderDate) + DATEPART(mm,OrderDate)) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID)
        / DATEDIFF(mm, (SELECT Min(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID), getdate()) As OrderPercent
FROM Customers
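As a concrete illustration of the "Creating Views" step described above, here is a minimal sketch that wraps the first query in a view. The view name dbo.CustomerOrderStats is invented for the example; Customers and Orders are the same Northwind-style tables used in the queries above.

CREATE VIEW dbo.CustomerOrderStats
AS
SELECT CustomerID, CompanyName,
       (SELECT COUNT(OrderID)
        FROM Orders
        WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
       DATEDIFF(mm,
                (SELECT MAX(OrderDate)
                 FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceLastOrder
FROM Customers;

-- A later change to the definition would use ALTER VIEW dbo.CustomerOrderStats AS ...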

    Read the article

  • Social HCM: Is Your Team Listening?

    - by Mike Stiles
    Does integrating Social HCM into your enterprise make sense? Consider Sam and Christina. Sam is a new hire at a big company. On the job 3 weeks, a question has come up on how to properly file an expense report to get reimbursed. It was covered in the onboarding session, but shockingly enough, Sam didn’t memorize or write down every word of the session. The answer is probably in a handout, in a stack of handouts 2 inches thick. It also might be on the employee web site…somewhere. Christina is a new hire at a different big company. She has the same question. She logs into her company’s social network, goes to the “new hires” group, asks her question and gets an answer in seconds. Christina says, “Cool!” Sam says, “Grrrr.” It’s safe to say the qualified talent your company wants is accustomed to using social platforms to communicate and get quick answers. As such, Christina is comfortable at her new company, whereas Sam is wondering what he’s gotten himself into. Companies that cling to talent communication and management systems that don’t speak to talent’s needs or expectations put themselves at risk. Right from the recruiting stage, prospects can determine if a company has embraced the communications tools of the 21st century. If they don’t see it, alarm bells go off. With great talent more in demand than ever, enterprises should reconsider making “this is the way we do it, you adapt to us” their mantra. Other blogs have clearly outlined that apart from meeting top recruits’ expectations, Social HCM benefits the organization itself in terms of efficiency, talent performance & measurement.
Recruiting: Jobvite shows 64% of companies hired using social. 89% of job seekers are using social in their search. Social can give employers access to relevant communities of prospects and advance the brand. Nucleus Research found general hiring software can provide over 1,000% ROI by reducing churn and improving screening. Social talent acquisition should perform at least as well.
Learning & Development: Employees, learning from the company or from peers, can be kept on top of the latest needed skillsets and engage in self-paced training so as to advance within the company.
Performance Management: Just as gamers are egged on by levels and achievements, talent can reach for workplace kudos, be they shout-outs from peers & managers or formally established milestones. Plus employee reviews become consistent and fair as managers have access to the cumulative feedback social offers.
Workflow and Collaboration: With workforces dispersing in terms of physical location, social provides a platform that helps eliminate the drawbacks that distance would have brought just 10 years ago. Finding and connecting with just the right colleague to get the most relevant info at any given time has never been more possible…or expected.
While yes, marketing has taken the social lead inside the enterprise, HCM (with the word “human” right there in its name) is the obvious locale for the next big integration of social in business. The technology is there. At Oracle, Fusion HCM apps are deeply embedded with Social HCM…just one example of systems taking social across the enterprise. Christina’s company is communicating with her in ways she’s used to. Sam’s company may as well be trying to talk to him using signal flags. @mikestiles Photo via stock.xchng

    Read the article

  • Perm SSIS Developer Urgently Required

    - by blakmk
    Job Role
To provide dedicated data services support to the company by designing, creating, maintaining and enhancing database objects, ensuring data quality, consistency and integrity. Migrating data from various sources to a central SQL 2008 data warehouse will be the primary function.
- Migration of data from bespoke legacy databases to the SQL 2008 data warehouse.
- Understand key business requirements, liaising with various aspects of the company.
- Create advanced transformations of data, with focus on data cleansing, redundant data and duplication.
- Creating complex business rules regarding data services, migration, integrity and support (best practices).
Experience
- Minimum 3 years’ SSIS experience, in a project or BI development role, and involvement in at least 3 full ETL project life cycles, using the following methodologies and tools:
  o Excellent knowledge of ETL concepts including data migration & integrity, focusing on SSIS.
  o Extensive experience with SQL 2005 products; SQL 2008 desirable.
  o Working knowledge of SSRS and its integration with other BI products.
  o Extensive knowledge of T-SQL, stored procedures, triggers (table/database), views and functions, in particular coding and querying.
  o Data cleansing and harmonisation.
  o Understanding and knowledge of indexes, statistics and table structure.
  o SQL Agent – scheduling jobs, optimisation, multiple jobs, DTS.
  o Troubleshoot, diagnose and tune database and physical server performance.
  o Knowledge and understanding of locking, blocks, table and index design and SQL configuration.
- Demonstrable ability to understand and analyse business processes.
- Experience in creating business rules on best practices for data services.
- Experience in working with, supporting and troubleshooting MS SQL servers running enterprise applications.
- Proven ability to work well within a team and liaise with other technical support staff such as networking administrators, system administrators and support engineers.
- Ability to create formal documentation, work procedures, and service level agreements.
- Ability to communicate technical issues at all levels including to a non-technical audience.
- Good working knowledge of MS Word, Excel, PowerPoint, Visio and Project.
Location
Based in Crawley with possibility of some remote working.
Contact me for more info: http://sqlblogcasts.com/blogs/blakmk/contact.aspx

    Read the article

  • PASS Summit 2013 Review

    - by Ajarn Mark Caldwell
    As a long-standing member of PASS who lives in the greater Seattle area and has attended about nine of these Summits, let me start out by saying how GREAT it was to go to Charlotte, North Carolina this year.  Many of the new folks that I met at the Summit this year, upon hearing that I was from Seattle, commented that I must have been disappointed to have to travel to the Summit this year after 5 years in a row in Seattle.  Well, nothing could be further from the truth.  I cheered loudly when I first heard that the 2013 Summit would be outside Seattle.  I have many fond memories of trips to Orlando, Florida and Grapevine, Texas for past Summits (missed out on Denver, unfortunately).  And there is a funny dynamic that takes place when the conference is local.  If you do as I have done the last several years and saved my company money by not getting a hotel, but rather just commuting from home, then both family and coworkers tend to act like you’re just on a normal schedule.  For example, I have a young family, and my wife and kids really wanted to still see me come home “after work”, but there are a whole lot of after-hours activities, social events, and great food to be enjoyed at the Summit each year.  Even more so if you really capitalize on the opportunities to meet face-to-face with people you either met at previous summits or have spoken to or heard of, from Twitter, blogs, and forums.  Then there is also the lovely commuting in Seattle traffic from neighboring cities rather than the convenience of just walking across the street from your hotel.  So I’m just saying, there are really nice aspects of having the conference 2500 miles away. Beyond that, the training was fantastic as usual.  The SQL Server community has many outstanding presenters and experts with deep knowledge of the tools who are extremely willing to share all of that with anyone who wants to listen.  The opening video with PASS President Bill Graziano in a NASCAR race turned dream sequence was very well done, and the keynotes, as usual, were great.  This year I was particularly impressed with how well attended were the Professional Development sessions.  Not too many years ago, those were very sparsely attended, but this year, the two that I attended were standing-room only, and these were not tiny rooms.  I would say this is a testament to both the maturity of the attendees realizing how important these topics are to career success, as well as to the ever-increasing skills of the presenters and the program committee for selecting speakers and topics that resonated with people.  If, as is usually the case, you were not able to get to every session that you wanted to because there were just too darn many good ones, I encourage you to get the recordings. Overall, it was a great time as these events always are.  It was wonderful to see old friends and make new ones, and the people of Charlotte did an awesome job hosting the event and letting their hospitality shine (extra kudos to SQLSentry for all they did with the shuttle, maps, and other event sponsorships).  We’re back in Seattle next year (it is a release year, after all) but I would say that with the success of this year’s event, I strongly encourage the Board and PASS HQ to firmly reestablish the location rotation schedule.  I’ll even go so far as to suggest standardizing on an alternating Seattle – Charlotte schedule, or something like that. If you missed the Summit this year, start saving now, and register early, so you can join us!

    Read the article

  • Personal Financial Management – The need for resuscitation

    - by Salil Ravindran
    Until a year or so ago, PFM (Personal Financial Management) was the blue-eyed boy of every channel banking head. In an age when bank account portability is still fiction, PFM was expected to incentivise customers to switch banks. It still is, in some emerging economies, but if the state of PFM in mature markets is anything to go by, it is in a state of coma and badly requires resuscitation. Studies conducted over the year show an alarming decline and stagnation in PFM usage in mature markets. A Sept 2012 report by Aite Group – Strategies for PFM Success – shows that 72% of users hadn’t used PFM and, worse, 58% of them were not kicked about using it. Of the rest who had used it, only half did so on a bank site. While there are multiple reasons for this lack of adoption, some are glaringly obvious. While pretty graphs and pie charts are important to provide a visual representation of my income and expenses, they are simply not enough to encourage me to return. Static representation of data without any insightful analysis does not help me. Budgeting and cash flow are important, but when I have an operative account, a couple of savings accounts, a mortgage loan and a couple of credit cards, help me with what my affordability is in specific contexts rather than telling me I just busted my budget. Help me with the relative importance of each budget category so that I know it is fine to go over budget on books for my daughter as against going over budget on eating out. Budget overruns and spend analysis are post facto, and I am informed of my sins only when I return to online banking. That too, only if I decide to come to the PFM area. Fundamentally, PFM should be a part of my banking engagement rather than an analysis tool. It should be contextual so that I can make insight-based decisions. So what can be done to resuscitate PFM?
Amalgamation with banking activities – In most cases, PFM tools are integrated into online banking pages and they are like chapter 37 of a long story. PFM needs to be a way of banking rather than a tool. Available balances should shift to spendable balances. Budget and goal related insights should be integrated with transaction sessions to drive pre-event financial decisions.
Personal Financial Guidance – Banks need to think ground level and see if their PFM offering is really helping customers achieve self-actualisation. Banks need to recognise that most customers out there are not proficient at making the best use of their money. Customers return when they know that they are being guided rather than just informed about their finances. Integrating contextual financial offers and financial planning into PFM is one way ahead. Yet another way is to help customers tag unwanted spending, thereby encouraging sound savings habits.
Mobile PFM – Most banks have left all those numbers on online banking. With access mostly having moved to devices and the success of apps, moving PFM onto devices will give it a much needed shot in the arm. This is not only about presenting the same wine in a new bottle but also about leveraging the power of the device to push real-time notifications that support pre-purchase decisions. The pursuit should be to analyse spend, budgets and financial goals in real time and push them pre-event onto the device. So next time, I should know that I have overrun my eating-out budget before walking into that burger joint and not after.
Increase participation and collaboration – Peer group experiences and comments are valued above those offered by the bank. Integrating social media into PFM engagement will let customers share and solicit their financial management experiences with their peer group. Peer comparisons help benchmark one’s savings and spending habits against those of the peer group and increase stickiness.
While mature markets have gone through this learning in some way over the last year, banks in maturing digital banking economies increasingly seem to be falling into this trap. Best practices lie in profiling and segmenting customers, being where they are and contextually guiding them to identify and achieve their financial goals. Banks could look at the likes of Simple and Movenbank to draw inspiration from.

    Read the article

  • Heterogeneous Datacenter Management with Enterprise Manager 12c

    - by Joe Diemer
    The following is a guest blog, contributed by Bryce Kaiser, Product Manager at Blue Medora. When I envision a perfect datacenter, it would consist of technologies acquired from a single vendor across the entire server, middleware, application, network, and storage stack - Apps to Disk - that meets your organization’s every IT requirement with absolute best-of-breed solutions in every category. To quote a familiar motto, your datacenter would consist of "Hardware and Software, Engineered to Work Together". In almost all cases, practical realities dictate something far less than the IT Utopia mentioned above. You may wish to leverage multiple vendors to keep licensing costs down, a single vendor may not have an offering in the IT category you need, or your preferred vendor may quite simply not have the solution that meets your needs. In other words, your IT needs dictate a heterogeneous IT environment. Heterogeneity, however, comes with additional complexity. The following are two pretty typical challenges:
1) No end-to-end visibility into the enterprise-wide application deployment. Each vendor solution which is added to an infrastructure may bring its own tooling, creating different consoles for different vendor applications and platforms.
2) No visibility into performance bottlenecks. When multiple management tools operate independently, you lose diagnostic capabilities, including identifying cross-tier issues with database, hung requests, slowness, memory leaks and hardware errors/failures causing DB/MW issues.
As adoption of Oracle Enterprise Manager (EM) has increased, especially since the release of Enterprise Manager 12c, Oracle has seen an increase in the number of customers who want to leverage their investments in EM to manage non-Oracle workloads. Enterprise Manager provides a single pane of glass view into their entire datacenter. By creating a highly extensible framework via the Oracle EM Extensibility Development Kit (EDK), Oracle has provided the tooling for business partners such as my company Blue Medora, as well as customers, to easily fill gaps in the ecosystem and enhance existing solutions. As mentioned in the previous post on the Enterprise Manager Extensibility Exchange, customers have access to an assortment of Oracle and partner provided solutions through this Exchange, which is accessed at http://www.oracle.com/goto/emextensibility. Currently, there are over 80 Oracle and partner provided plug-ins across the EM 11g and EM 12c versions. Blue Medora is one of those contributing partners, for which you will find 3 of our solutions, including our flagship plug-in for VMware. Let's look at Blue Medora’s VMware plug-in as an example of what I'm trying to convey. Here is a common situation solved by true visibility into your entire stack:
Symptoms
- My database is bogging down; however, the database appears okay internally. Maybe it’s starved for resources?
- My OS tooling is showing everything is “OK”. Something doesn’t add up.
Root cause
- Through the VMware plug-in we can see the problem is actually on the virtualization layer.
Solution
- From within Enterprise Manager -- the same tool you use for all of your database tuning -- we can overlay the data of the database target, host target, and virtual machine target for a true picture of the true root cause.
Here is the console view: Perhaps your monitoring conditions are more specific to your environment. No worries, Enterprise Manager still has you covered.
With Metric Extensions you have the “Next Generation” of User-Defined Metrics, which easily bring the power of your existing management scripts into a single console while leveraging the proven Enterprise Manager framework. Simply put, Oracle Enterprise Manager boasts a growing ecosystem that provides the single pane of glass for your entire datacenter, from the database and beyond. Bryce can be contacted at [email protected]
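As a purely illustrative example (not taken from the post), the kind of existing management script a Metric Extension might wrap could be a simple SQL check against a monitored Oracle database, with the result surfaced as a metric in the Enterprise Manager console:

-- Hypothetical user-defined metric query: count the sessions currently
-- blocked by another session, using the standard V$SESSION view.
SELECT COUNT(*) AS blocked_sessions
FROM v$session
WHERE blocking_session IS NOT NULL;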

    Read the article

  • Oracle Spatial and Graph – A year in review

    - by Mandy Ho
    What a great year for Oracle Spatial! Or shall I now say, Oracle Spatial and Graph, with our official name change this summer. There were so many exciting events and updates we had this year, and this blog will review and link to some of the events you may have missed over the year. We kicked off 2012 with our webinar: Situational Analysis at OnStar with Oracle Spatial and Graph. We collaborated with OnStar’s Emergency Strategy and Outreach expert, Jeff Joyner, on how OnStar uses Google Earth Visualization, NAVTEQ data and Oracle Database to deliver fast, accurate emergency services to its customers. In the next webinar in our 2012 series, Oracle partner TARGUSinfo showcased how to build a robust, scalable and secure customer relationship management system – with built-in mapping and spatial analysis, and deployed in the cloud. This is a very cool system using all Oracle technologies, including Oracle Database and Fusion Middleware MapViewer. Attendees learned how to gather market insight, score prospects and customers and perform location analysis. The replay is available here. Our final webinar of the year focused on using Oracle Business Intelligence tools, along with Oracle Spatial and Graph, to perform location-aware predictive analysis. Watch the webcast here: In June, we joined up with the Location Intelligence conference in Washington, DC, and had a very successful 2012 Oracle Spatial User Conference. Customers and partners from the US, as well as from EMEA and Asia, flew in to share experiences and ideas, and get technical updates from Oracle experts. Users were excited to hear about spatial-Exadata performance, and advances in MapViewer and BI. Peter Doolan of Oracle Public Sector kicked off the event with a great keynote, and US Census, NOAA, and Ordnance Survey Great Britain were just a few of the presenters. Presentation archive here. We recognized some of the most exceptional partners and customers for their contributions to advancing mainstream solutions using geospatial technologies. Planning for 2013’s conference has already started. Please contribute your papers for consideration here. http://www.locationintelligence.net/ We also launched a new Oracle PartnerNetwork Spatial Specialization – to enable partners to get validated in the marketplace for their expertise in taking solutions to market. Individuals can also get individual certifications. Learn more here. Oracle Open World was not to disappoint, with news regarding our next Oracle Spatial and Graph release, as well as the announcement of our new Oracle Spatial and Graph SIG board! Join the SIG today. One more exciting event as we look to 2013. Spatial and location technologies have a dedicated track at the January BIWA SIG Summit – on January 9-10 in Redwood Shores, CA.
View the agenda and register here: www.biwasummit.org. We thank you for all your support during the year of 2012 and look towards an even more exciting 2013! Wishing you and your family a prosperous New Year and Happy Holidays!

    Read the article

  • SO-Aware Service Explorer – Configure and Export your services from VS 2010 into the repository

    - by cibrax
    We have introduced a new Visual Studio tool called “Service Explorer” as part of the new SO-Aware SDK version 1.3 to help developers configure and export any regular WCF service into the SO-Aware service repository. This new tool is a regular Visual Studio tool window that can be opened from “View –> Other Windows –> Services Explorer”. Once you open the Services Explorer, you will be able to see all the available WCF services in the Visual Studio solution. In the image above, you can see that a “HelloWorld” service was found in the solution and listed under the tool window on the left. There are two things you can do for a new service in the tool: you can either export it to the SO-Aware repository or associate it with an existing service version in the repository. Exporting the service to SO-Aware means that you want to create a new service version in the repository and associate the WCF service WSDL to that version. Associating the service means that you want to use a version already created in SO-Aware with the only purpose of managing and centralizing the service configuration in SO-Aware. The option for exporting a service will pop up a dialog like the one below, in which you can enter some basic information about the service version you want to create and the repository location. The option for associating a service will pop up a dialog in which you can pick any existing service version in the repository and the application configuration file that you want to keep in sync for the service configuration. Two options are available for configuring a service: WCF Configuration or SO-Aware. The WCF Configuration option just tells the tool that the service will use the standard WCF configuration section “system.serviceModel”, but that section must be updated and kept in sync with the configuration selected for the service in the repository. The SO-Aware configuration option will tell the tool that the service configuration will be resolved at runtime from the repository. For example, selecting SO-Aware will generate the following configuration in the selected application configuration file:
<configuration>
  <configSections>
    <section name="serviceRepository" type="Tellago.ServiceModel.Governance.ServiceConfiguration.ServiceRepositoryConfigurationSection, Tellago.ServiceModel.Governance.ServiceConfiguration" />
  </configSections>
  <serviceRepository url="http://localhost/soaware/servicerepository.svc">
    <services>
      <service name="ref:HelloWorldService(1.0)@dev" type="SOAwareSampleService.HelloWorldService" />
    </services>
  </serviceRepository>
</configuration>
As you can see, the tool represents a great addition to the toolset that any developer can use to manage and centralize configuration for WCF services. In addition, it can be combined with other useful tools like WSCF.Blue (Web Service Contract First) for generating the service artifacts like schemas, service code or the service WSDL itself.

    Read the article

  • Construction Paper, Legos, and Architectural Modeling

    I can remember as a kid playing with construction paper and Legos to explore my imagination. Through my exploration I was able to build airplanes, footballs, guns, and more, out of paper. Additionally, I could create entire cities, robots, or anything else I could imagine out of Legos. These toys, I now realize, were in fact tools that gave me an opportunity to explore my ideas in the physical world through the use of modeling. My imagination was allowed to run wild as I, unknowingly at the time, made design decisions that directly affected the models I was building from the raw materials. To prove my point further, I can remember building a paper airplane that seemed to go nowhere when I tried to throw it. So I decided to attach a paper clip to the plane before throwing it the next time, to test my concept that adding more weight to the plane would make it fly better and for longer distances. The paper airplane allowed me to model my design decision through the creation of an artifact: a paper airplane that carried extra weight through the incorporation of the paper clip into the design. Also, I remember using Legos to build all sorts of creations, and these creations became artifacts of my imagination. As I further and further defined my Lego creations through the process of playing, I was able to create ever more elaborate artifacts. In some form or fashion the artifacts I created as a kid are very similar to the artifacts that I create when I model a software architectural concept or a software design, in that the process of making decisions is directly translated into a tangible model in the form of an architectural model. Architectural models have been defined as artifacts that depict design decisions of a system’s architecture. The act of creating architectural models is the act of architectural modeling. Furthermore, architectural modeling is the process of creating a physical model based on architectural concepts and documenting these design decisions. In the process of creating models, the standard notation used is architectural modeling notation. This notation is the primary method of capturing the essence of design decisions regarding architecture. Modeling notations can vary based on the need and intent of a project; typically they range from natural language to a diagram-based notation. Currently, the Unified Modeling Language (UML) is the industry standard architectural modeling notation because it allows architectures to be defined through a series of boxes, lines, arrows and other basic symbols that encapsulate design decisions into virtual components, connectors, configurations and interfaces. Furthermore, UML allows for additional breakdown of models through the use of natural language to explain each section of the model in plain English. One of the major factors in architectural modeling is to define what is to be modeled. As a basic rule of thumb, I tend to model architecture based on the complexity of systems or sub-systems of the architecture. Another key factor is the level of detail that is actually needed for a model. For example, if I am modeling a system for a CEO to view, then the low-level details will be omitted.
In comparison, if I were modeling a system for another engineer to actually implement, I would include as much detailed information as I could to help the engineer implement my design.

    Read the article

  • How to make Ubuntu recognize an unknown external display (so I can adjust its resolution)?

    - by WagnerAA
    I have a Dell laptop with an external monitor attached (a Samsumg SyncMaster 931c). My laptop display was recognized, and I can adjust its optimum resolution. My external display is still unknown, thus I'm stuck at a lower resolution (1024x768): I tried the "Detect Displays" button, but it didn't work, nothing happens. I recently upgraded from Ubuntu 12.04 to 12.10. Things were working before. I don't know if I can actually change this configuration, or if this is a bug. I searched for an answer here and also in Launchpad's website, but found none. I even tried to install Nvidia drivers, and just messed things up. It seems I wasn't even using nvidia before, as I guessed by looking at my additional drivers configuration: My laptop has an Intel chipset, I guess: $ dpkg --get-selections | grep -i -e nvidia -e intel intel-gpu-tools install libdrm-intel1:amd64 install libdrm-intel1:i386 install nvidia-common install xserver-xorg-video-intel install I don't have an xorg.conf file (I think this is nvidia related, am I right?): $ cat /etc/X11/xorg.conf cat: /etc/X11/xorg.conf: No such file or directory $ ls -l /etc/X11/ total 76 drwxr-xr-x 2 root root 4096 Out 19 23:41 app-defaults drwxr-xr-x 2 root root 4096 Abr 25 2012 cursors -rw-r--r-- 1 root root 18 Abr 25 2012 default-display-manager drwxr-xr-x 4 root root 4096 Abr 25 2012 fonts -rw-r--r-- 1 root root 17394 Dez 3 2009 rgb.txt lrwxrwxrwx 1 root root 13 Mai 1 03:33 X -> /usr/bin/Xorg drwxr-xr-x 3 root root 4096 Out 19 23:41 xinit drwxr-xr-x 2 root root 4096 Jan 23 2012 xkb -rw-r--r-- 1 root root 0 Out 24 08:55 xorg.conf.nvidia-xconfig-original -rwxr-xr-x 1 root root 709 Abr 1 2010 Xreset drwxr-xr-x 2 root root 4096 Out 19 10:08 Xreset.d drwxr-xr-x 2 root root 4096 Out 19 10:08 Xresources -rwxr-xr-x 1 root root 3730 Jan 20 2012 Xsession drwxr-xr-x 2 root root 4096 Out 20 00:11 Xsession.d -rw-r--r-- 1 root root 265 Jul 1 2008 Xsession.options -rw-r--r-- 1 root root 13 Ago 15 06:43 XvMCConfig -rw-r--r-- 1 root root 601 Abr 25 2012 Xwrapper.config Here is some information I gathered by looking at other related posts: $ sudo lshw -C display; lsb_release -a; uname -a *-display:0 description: VGA compatible controller product: Mobile 4 Series Chipset Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 07 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:48 memory:f6800000-f6bfffff memory:d0000000-dfffffff ioport:1800(size=8) *-display:1 UNCLAIMED description: Display controller product: Mobile 4 Series Chipset Integrated Graphics Controller vendor: Intel Corporation physical id: 2.1 bus info: pci@0000:00:02.1 version: 07 width: 64 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: latency=0 resources: memory:f6100000-f61fffff LSB Version: 
core-2.0-amd64:core-2.0-noarch:core-3.0-amd64:core-3.0-noarch:core-3.1-amd64:core-3.1-noarch:core-3.2-amd64:core-3.2-noarch:core-4.0-amd64:core-4.0-noarch:cxx-3.0-amd64:cxx-3.0-noarch:cxx-3.1-amd64:cxx-3.1-noarch:cxx-3.2-amd64:cxx-3.2-noarch:cxx-4.0-amd64:cxx-4.0-noarch:desktop-3.1-amd64:desktop-3.1-noarch:desktop-3.2-amd64:desktop-3.2-noarch:desktop-4.0-amd64:desktop-4.0-noarch:graphics-2.0-amd64:graphics-2.0-noarch:graphics-3.0-amd64:graphics-3.0-noarch:graphics-3.1-amd64:graphics-3.1-noarch:graphics-3.2-amd64:graphics-3.2-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-3.2-amd64:printing-3.2-noarch:printing-4.0-amd64:printing-4.0-noarch:qt4-3.1-amd64:qt4-3.1-noarch Distributor ID: Ubuntu Description: Ubuntu 12.10 Release: 12.10 Codename: quantal Linux Batcave 3.5.0-17-generic #28-Ubuntu SMP Tue Oct 9 19:31:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux $ xrandr -q Screen 0: minimum 320 x 200, current 2304 x 800, maximum 32767 x 32767 LVDS1 connected 1280x800+0+0 (normal left inverted right x axis y axis) 286mm x 1790mm 1280x800 59.9*+ 1024x768 60.0 800x600 60.3 56.2 640x480 59.9 VGA1 connected 1024x768+1280+32 (normal left inverted right x axis y axis) 0mm x 0mm 1024x768 60.0* 800x600 60.3 56.2 848x480 60.0 640x480 59.9 DP1 disconnected (normal left inverted right x axis y axis) If there's anything else I can do, any other information I can post here, to help me configure this external display, please let me know. If this is actually a bug, I apologize (I know bugs are not allowed here), but I really wasn't sure. And I will promptly file a bug report in Launchpad if that's the case. Thanks a lot in advance. ;)

    Read the article

  • Type of computer for a developer on the road

    - by nabucosound
    Hi developers: I am planning to be traveling through Eurasia and Asia (Russia, China, Korea, Japan, South East Asia...) for a while and, although there are plenty of marvelous things to see and to do, I must keep on working :(. I am a Python developer, dedicated mainly to web projects. I use Django, sqlite3, browsers, and occasionally (only if I have no choice) I install Postgres, MySQL, Apache or any other servers commonly used in the internets. I do my coding in vim, use ssh to connect, lftp to transfer files, IRC, grep/ack... So I spend most of my time in terminal shells. But I also use IM or Skype to communicate with my clients and peers, as well as some other software (that after all is not mandatory for my day-to-day work). I currently work with a MacBook Pro (3 years old now) and so far I am very happy with the performance. But I don't want to carry it if I am going to be "in transit" for a long time; it is simply huge and heavy for what I am planning to load in my rather small backpack (while traveling, less is more, you know). So here I am reading all kinds of opinions about netbooks, because at first sight this is the kind of computer I thought I had to choose. I am going to use Linux on it; Microsoft is not my cup of tea and Mac OS is not available for them, unless I were to buy a MacBook Air, something that I won't do because if I am robbed or rain/dust/truck loaders break it I would burst into tears. I am concerned about wifi performance and connectivity, as I am going to use one of those Linux distros/tools to hack/test on "open" networks (if you know what I mean) in case I am not in a place with real free wifi access and I find myself in an emergency. CPU speed should be acceptable, but since I don't plan to run Photoshop or expensive IDEs, I guess most of the time I won't be overloading the machine. Apart from this, maybe (surely) I am missing other features to consider. With that said (sorry about the length), here comes my question, raised from a deep ignorance regarding the wars between netbooks and notebooks (I assume tablet PCs are not for programming yet): If I buy a netbook, will I have to throw it away after 1 month on the road and buy a notebook? Or will I be OK? Thanks! Hector
Update
I have received great feedback so far! I would like to insist on the fact that I will be traveling through many different countries and scenarios. I am sure that while in Japan I will be more than fine with anything related to technology, connectivity, etc. But consider that I will be, for example, on a train through Russia (Trans-Siberian) and will cross Mongolia as well. I will stay in friends' places sometimes, but most of the time I will have to work from hostel rooms, trains, buses, beaches (hey, this last one doesn't sound too bad hehe!). I think some of your answers seem to focus on the geek part but lose the point of this "on the road" fact. I am very aware and agree that netbooks suck compared to notebooks, but what I am trying to do here is to find a balance and discover your experiences with netbooks, to see first hand if a netbook will be a fail in the mid-to-long term of the trip for my purposes.
So I have summarized the main concepts expressed here in this small list, in no particular order:
- keyboard/touchpad feel: I use vim so no need to move the mouse pointer that much, unless I am browsing the web, but intensive use of keyboard
- screen real estate: again, terminal work for most of the time
- battery life: I think something very important
- weight/size: also very important
- looks not worth stealing it, don't give a shit if it is lost/stolen/broken: this may depend on the kind of person, your economy, etc. Also, to prevent losing work, I will upload EVERYTHING to the cloud whenever I have a chance.
- wifi: don't want to discover my wifi is one of those that cannot deal with half the routers on this planet or has poor connectivity
Thanks again for your answers and comments!

    Read the article

  • Oracle ties social, CRM, analytics products to customer experience

    - by Richard Lefebvre
    Oracle will embark on a new product strategy that centers on customer experience management, an approach driven by the company’s many recent acquisitions.  The new approach, announced by the company Monday night, will be seen in an expansive suite that features familiar Oracle products -- such as its Fusion CRM platform -- and offerings the company recently gained through acquisitions, including FatWire, RightNow and Vitrue. Billed as Oracle Customer Experience (CX), the suite enables businesses to respond to a market centered on the customer experience, said Anthony Lye, the company’s senior vice president of CRM. Companies “are very aware their products are commoditizing,” Lye said in an interview last week, referring to how the Web and social media channels have empowered customers. Customer experiences start and mature outside of CRM, and applications today need to reflect that shift, Lye said. Businesses thus need to step away from a pure CRM model, he said. Oracle claims CX will improve customer experience management by connecting businesses with customers across Web sites and social channels. Companies can create a single, real-time view of the customer and use predictive analytics of interactions to strengthen the customer experience, Oracle said. “Companies have to connect with their customers wherever, whenever and however they want,” Lye said. “They have to know and understand their customer.” Lye promoted Oracle CX as a suite that will work across channels to complement the company’s applications. A new strategy has been “cooking” for years now, but the acquisitions Oracle has made over the past two years made the time right for a “unique collaboration,” Lye said. CX includes basic Oracle CRM solutions such as Siebel and the new Fusion Apps. It also includes the company’s MDM products, Enterprise Data Quality, Customer Hub and Product Hub. And the suite is rounded out by the services that Oracle recently bought, transactions that created or enhanced the company’s presence in social, marketing, e-commerce and customer service. For instance, FatWire provides tools for marketing. ATG focuses on e-commerce. And RightNow specializes in customer service. Two recent acquisitions -- Collective Intellect and Vitrue -- gave Oracle a seat at the social table. Collective Intellect is a social intelligence program, and Vitrue is a social marketing and engagement platform. Those acquisitions have yet to be finalized. Oracle hopes to eventually integrate the two social offerings, as well as most of the other services, into the CX suite. CX can integrate on Oracle’s standard middleware, and can give users a lower TCO by leveraging it as a single stack on premise or as a cloud solution. Lye deferred questions about the pricing of CX, and instead pitched Oracle’s ability to offer multiple customer experience solutions in one suite. Businesses have struggled with the complexity of infrastructure and modern services that communicate with customers, Lye said. “They’ve struggled to pull all these things together. We’ve done that,” he said. Stephen Powers, a research director at Forrester Research Inc. in Cambridge, Mass., said it’s not surprising for Oracle to offer the CX suite and a related customer experience strategy.  “They’ve got CRM, ATG, FatWire. Clearly, it’s been the strategy for them,” he said. But the challenge for Oracle, and for any other vendor that has gone on an “acquisition spree,” is to connect its many products, Powers said. “The portfolio has to be more than the parts. 
They’ve got to realize the efficiencies and value of having these pieces to tie them together,” he said. “The proof is in the pudding. Adobe has done a nice job in its space with the products they’ve got. Now, Oracle has got to show it has something.” Albert McKeon (SearchCRM) Published: 25 Jun 2012 : http://searchcrm.techtarget.com/news/2240158644/Oracle-ties-social-CRM-analytics-products-to-customer-experience

    Read the article

  • Openmatics Revolutionizes Fleet Management with Standards-Based Vehicle Telematics Platform

    - by Michael Snow
    Openmatics s.r.o. was founded in 2010 as a subsidiary of ZF Friedrichshafen AG, a global player in driveline and chassis technology. Oracle Customer: Openmatics s.r.o. | Location: Pilsen, Czech Republic | Industry: Automotive | Employees: 70. Its goal was to develop and operate a flexible, open telematics platform for automotive applications that is independent from vehicle and component suppliers, recognizing that the fragmented telematics market was not meeting today's fleet management needs. Openmatics provides a rich product portfolio, and customers can extend the platform, as required, to meet their needs. Partners and third parties can develop their own applications using the Openmatics software development kit and can sell them via the Openmatics app shop. ZF Friedrichshafen AG is a global player in driveline and chassis technology. With 121 production companies and 650 service partners in 26 countries, ZF is among the top 10 largest automotive suppliers worldwide. Founded in 1915 to develop and produce transmissions for airships and vehicles, the group's product offerings now include transmissions and steering systems as well as chassis components and complete axle systems and modules. A word from Openmatics s.r.o.: "Oracle WebCenter Portal, together with the underlying Oracle Application Development Framework, provided the fundamental infrastructure for the Openmatics platform. Fleet managers can now reduce fuel consumption and operating costs, and more efficiently manage vehicle usage, maintenance, and safety. The standards-based platform allows third-party suppliers to deploy their own vehicle telematics services as Openmatics apps and creates a de facto standard for the automotive industry, independent from a single manufacturer or service provider." – Gero Strobel, Head of Development, Openmatics s.r.o.
Challenges: Create an industry standard for vehicle telematics by establishing a customizable platform that enables access to telematics information, such as current and past fuel consumption, through a web browser to better meet automotive market and customer needs. Reduce fleet-management costs by eliminating the need to invest in isolated telematics hardware and software solutions per vehicle brand and vehicle component manufacturer. Establish an open platform where third-party providers, such as original equipment manufacturers (OEMs), insurers, fleet operators, and individual developers, can deploy their own vehicle telematics services. Allow users to purchase targeted telematics services as single apps to reduce costs and ensure rapid growth of the telematics services available on the platform. Enable users to configure their telematics apps with ease to make sure the platform meets individual fleet management requirements, such as analyzing past and current fuel consumption of a truck fleet. Solutions: Deployed Oracle WebCenter Portal as a foundation for Openmatics, a standards-based automotive telematics platform that provides next-generation fleet management with unified digital communication from and to vehicles on the move. Used Oracle Application Development Framework as the development framework for Oracle WebCenter Portal's components and services, providing developers with ready-to-use software development kits with application programming interfaces, design templates, and visual tools that accelerated time to market. Used Oracle Enterprise Pack for Eclipse to simplify telematics application development in Java. Enabled fleet monitoring by recording vehicle data, such as fuel consumption information, through onboard units, delivering the information to Oracle Database, and making it accessible through a customizable app portfolio on any web browser. Stored vehicle telematics data, sent as encrypted information, in Oracle Database, ensuring data integrity and immediate availability for the platform's telematics applications. Enabled a wide range of telematics services suppliers, from vehicle component manufacturers to fleet application developers, to offer vehicle telematics services on the Openmatics platform, ensuring platform independence from OEMs. Provided Openmatics customers with the means to individually select the automotive telematics services that are relevant to their business requirements, eliminating the need to pay for superfluous information and reducing fleet management costs. Oracle Products & Services: Oracle Application Development Framework, Oracle WebCenter Portal, Oracle SOA Suite, Oracle Enterprise Pack for Eclipse, Oracle Database, and Oracle Consulting.
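The data flow described above (an onboard unit reporting fuel consumption, delivered encrypted to a central database and surfaced to browser-based apps) can be pictured with a small sketch. This is an illustration only, not the Openmatics API: the endpoint URL, field names, and bearer-token authentication are assumptions, and TLS stands in for the encryption in transit.

    # Illustrative sketch only: a hypothetical onboard unit posting one fuel-consumption
    # reading to a central telematics service over HTTPS (TLS provides encryption in transit).
    # The URL, field names, and authentication scheme are assumptions, not the Openmatics API.
    import json
    import time
    import urllib.request

    TELEMATICS_ENDPOINT = "https://telematics.example.com/api/vehicles/readings"  # hypothetical

    def report_fuel_consumption(vehicle_id, liters_per_100km, api_token):
        """Send one fuel-consumption reading and return the HTTP status code."""
        payload = {
            "vehicleId": vehicle_id,
            "metric": "fuel_consumption_l_per_100km",
            "value": liters_per_100km,
            "recordedAt": int(time.time()),
        }
        request = urllib.request.Request(
            TELEMATICS_ENDPOINT,
            data=json.dumps(payload).encode("utf-8"),
            headers={
                "Content-Type": "application/json",
                "Authorization": "Bearer " + api_token,  # hypothetical auth scheme
            },
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return response.status

    # Example usage (requires a reachable endpoint and a valid token):
    # status = report_fuel_consumption("truck-042", 28.4, "example-token")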

    Read the article

  • Apps UX Launches Blueprints for Mobile User Experiences

    - by mvaughan
    By Misha Vaughan, Oracle Applications User Experience. At Oracle OpenWorld 2012 this year, the Oracle Applications User Experience (Apps UX) team announced the release of Mobile User Experience Functional Design Patterns. These patterns are designed to work directly with Oracle's Fusion Middleware, specifically ADF Mobile, the Oracle Application Development Framework for mobile, which enables developers to build one application that can be deployed to multiple mobile device platforms. These same mobile design patterns provide the guidance for Oracle teams to develop Fusion Mobile Expenses. Application developers can use Oracle's mobile design patterns to design iPhone, Android, or browser-based smartphone applications. We are sharing our mobile design patterns and their baked-in, scientifically proven usability to enable Oracle customers and partners to build mobile applications quickly. A different way of thinking and designing: Lynn Rampoldi-Hnilo, Senior Manager of Mobile User Experiences for Apps UX, says mobile design has to be compelling. "It needs to be optimized for the device, and be visually rich and simple," she said. "What is really key is that you are designing for a user's most personal device, the device that they will have with them at all times of the day." Katy Massucco, director of the overall design patterns site, said: "You need to start with a simplified task flow. Everything should be a natural interaction. The action should be relevant and leveraging the device. It should be seamless." She suggests that developers identify the essential tasks that a user would want to do while mobile. "They need to understand the user and the context," she added. (The original post includes a sample inline action design pattern.) What people are saying: Reactions to the release of the design patterns have been positive. Debra Lilley, Oracle ACE Director and Fusion User Experience Advocate (FXA), has already demoed Fusion Mobile Expenses widely. Fellow Oracle ACE Director Ronald van Luttikhuizen called it a "cool demo by @debralilley of the new mobile expenses app." FXA member Floyd Teter says he is already cooking up some plans for using mobile design patterns. We hope to see those ideas at Collaborate or ODTUG in 2013. For another perspective on why user experience is such an important focus for mobile applications, check out this video by John King, Director, and Monty Latiolais, President, both from ODTUG, the Oracle Development Tools User Group. In a separate interview by e-mail, Latiolais wrote: "I enjoy the fact we can take something that, in the past, has been largely subjective, and now apply to it a scientifically proven look and feel. Trusting Oracle's UX Design Patterns, the presentation really can become one less thing to worry about. As someone with limited ADF experience, that is extremely beneficial." King, who was also interviewed by e-mail, wrote: "User Experience is about making the task at hand as easy and error-free as possible. Oracle's UX labs worked hard to make the User Experience in the new Fusion Applications as good as possible; ADF makes adding tested, consistent user experiences a declarative exercise by leveraging that work. As we move applications onto mobile platforms, user experience is the driving factor. Customers are "spoiled" by a bevy of fantastic applications, and ours cannot disappoint them. Creating applications that enable users to quickly and effectively accomplish whatever task is at hand takes thought and practice. 
Developers must become ’power users’ and then create applications that they and their users will love.”

    Read the article

  • Social Targeting: This One's Just for You

    - by Mike Stiles
    Think of social targeting in terms of the archery competition we just saw in the Olympics. If someone loaded up 5 arrows and shot them straight up into the air all at once, hoping some would land near the target, the world would have united in laughter. But sadly for hysterical YouTube video viewing, that's not what happened. The archers sought to maximize every arrow by zeroing in on the spot that would bring them the most points. Marketers have always sought to do the same. But they can only work with the tools that are available. A firm grasp of the desired target does little good if the ad products aren't there to deliver that target. On the social side, both Facebook and Twitter have taken steps to enhance targeting for marketers. And why not? As the demand to monetize only goes up, they're quite motivated to leverage and deliver their incredible user bases in ways that make economic sense for advertisers. You could already target keywords on Twitter with promoted accounts, and get promoted tweets into search. They would surface for your followers and some users that Twitter thought were like them. Now you can go beyond keywords and target Twitter users based on 350 interests in 25 categories. How does a user wind up in one of these categories? Twitter looks at that user's tweets, they look at whom they follow, and they run data through some sort of Twitter secret sauce. The result is, you have a much clearer shot at Twitter users who are most likely to welcome and be responsive to your tweets. And beyond the 350 interests, you can also create custom segments that find users who resemble followers of whatever Twitter handle you give it. That means you can now use boring tweets to sell like a madman, right? Not quite. This ad product is still quality-based, meaning if you're not putting out tweets that lead to interest and thus engagement, that tweet will earn a low quality score and wind up costing you more to maintain under Twitter's auction system. That means, as the old knight in "Indiana Jones and the Last Crusade" cautions, "choose wisely" when targeting based on these interests and categories to make sure your interests truly do line up with theirs. On the Facebook side, they're rolling out ad targeting that uses email addresses, phone numbers, game and app developers' user IDs, and eventually addresses for you bigger brands. Why? Because you marketers asked for it. Here you were with this amazing customer list but no way to reach those same customers should they be on Facebook. Now you can find and communicate with customers you gathered outside of social, and use Facebook to do it. Fair to say such users are a sensible target and will be responsive to your message, since they've already bought something from you. And no, you're not giving your customer info to Facebook. They'll use something called "hashing" to make sure you don't see Facebook user data (beyond email, phone number, address, or user ID), and Facebook can't see your customer data. The end result: social becomes far more workable and more valuable to marketers when it delivers on the promise that made it so exciting in the first place. That promise is the ability to move past casting wide nets to the masses and toward concentrating marketing dollars efficiently on the targets most likely to yield results.
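The "hashing" step mentioned above is easy to picture: customer identifiers are normalized and one-way hashed before upload, so the raw values never leave your systems. Below is a minimal sketch, assuming SHA-256 over a trimmed, lowercased email address (the commonly described approach for hashed audiences); treat the exact normalization rules as an assumption rather than Facebook's documented specification.

    # Minimal sketch of one-way hashing a customer email list before upload.
    # Assumes SHA-256 over a trimmed, lowercased address; exact platform rules may differ.
    import hashlib

    def hash_email(email):
        """Normalize an email address and return its SHA-256 hex digest."""
        normalized = email.strip().lower()
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    customer_emails = ["Jane.Doe@Example.com ", "buyer@example.org"]
    hashed_audience = [hash_email(e) for e in customer_emails]
    print(hashed_audience)  # only the digests leave your systems, not the raw addresses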

    Read the article

  • The Connected Company: WebCenter Portal - Feedback - Analytics and Polls

    - by Michael Snow
    Guest Post by: Mitchell Palski, Staff Sales Consultant. The importance of connecting peers has been widely recognized and socialized as a critical component of employee intranets. Organizations are striving to provide channels for sharing knowledge and improving awareness across their enterprise. Indirectly, the socialization of your enterprise should lead to cost savings and improved product/service quality. However, the direct effects of connecting an organization's leadership with its employees are often overlooked. Oracle WebCenter Portal can help you bridge that gap by gathering implicit and explicit feedback. Implicit Feedback Through Usage Analytics: Analytics allows administrators to track and analyze WebCenter Portal traffic and usage. Analytics provides the following basic functionality: Usage Tracking Metrics: Analytics collects and reports metrics of common WebCenter Portal functions, including community and portlet traffic. Behavior Tracking: Analytics can be used to analyze WebCenter Portal metrics to determine usage patterns, such as page visit duration and usage over time. User Profile Correlation: Analytics can be used to correlate metric information with user profile information. Usage tracking reports can be viewed and filtered by user profile data such as country, company or title. Usage analytics help measure how users interact with website content, allowing your IT staff and business analysts to make informed decisions when planning development for your next intranet enhancement. For example: If users are not accessing your Announcements page and are missing critical information that they need to be aware of, you may elect to use graphical links on the home page to direct more users to that page. As a result, the number of employee help requests to HR decreases. If users are not accessing your News page to read recent articles, you may elect to stop spending as much time updating the page with new stories and cut costs in your communications department. If you notice that there is a high volume of users accessing the Employee Dashboard page, your organization may decide to continue making personalization enhancements to the page and investing in the Portal tool that most users are accessing. Usage analytics aren't necessarily a new concept in the IT industry. What sets WebCenter Portal Analytics apart is that its reports are tailored for WebCenter-specific tools and that a report can be added to a page with a simple drag-and-drop. Explicit Feedback Through Polls: WebCenter Portal users can create, edit, take, and analyze online polls. With polls, you can survey your audience (such as their opinions and their experience level), check whether they can recall important information, and gather feedback and metrics. How many times have you been involved in a requirements discussion and someone has asked a question similar to "Well, how do you know that no one likes our home page?" and the response is "Everyone says they hate it! That's all anyone complains about." No one has any measurable, quantifiable metric to gauge user satisfaction. Analytics measure usage, but your organization also needs to measure the quality of your portal as defined by the actual people that use it. With that information, your leadership can make informed decisions that will not only match usage patterns but also relate to employees on a personal level. 
The end result is a connection between employees and leadership that gives everyone in the organization a sense of ownership of their Portal rather than the feeling of development decisions being segregated to leadership only. Polls can be created and edited through the Poll Manager: Polls and View Poll Results can easily be added to a page through drag-and-drop. What did we learn? Being a “connected” company doesn’t just mean helping employees connect with each other horizontally across your enterprise. It also means connecting those employees to the decisions that affect their everyday activities. Through WebCenter Portal Usage Analytics and Polls, any decision that is made to remove a Portal page, update a Portal page, or develop new Portal functionality, can be justified by quantifiable metrics. Instead of fielding complaints and hearing that your employees don’t have a voice, give those employees a voice and listen!
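As a rough illustration of the kind of question usage analytics answers (for example, how long users with a given job title spend on each page), here is a small, self-contained sketch over made-up visit records. It is not the WebCenter Portal Analytics API; it only shows the visit-duration-by-profile correlation described above.

    # Illustrative only: average page-visit duration grouped by a user profile attribute,
    # the kind of correlation WebCenter Portal Analytics reports. The records are made up.
    from collections import defaultdict

    page_visits = [
        # (user_title, page, duration_seconds)
        ("Engineer", "Employee Dashboard", 120),
        ("Engineer", "Announcements", 15),
        ("Manager", "Employee Dashboard", 300),
        ("Manager", "News", 45),
        ("Engineer", "Employee Dashboard", 180),
    ]

    totals = defaultdict(lambda: [0, 0])  # (page, title) -> [total_seconds, visit_count]
    for title, page, seconds in page_visits:
        totals[(page, title)][0] += seconds
        totals[(page, title)][1] += 1

    for (page, title), (total, count) in sorted(totals.items()):
        print("%s / %s: %.0fs average over %d visit(s)" % (page, title, total / count, count))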

    Read the article

  • Q&A: Drive Online Engagement with Intuitive Portals and Websites

    - by kellsey.ruppel
    We had a great webcast yesterday and wanted to recap the questions that were asked throughout. Can ECM distribute content to 3rd party sites? ECM, which is now called WebCenter Content, can distribute content to 3rd party sites via several means, as well as SSXA (Site Studio for External Applications). Will you be able to provide more information on these means and SSXA? If you have an existing JSP application, you can add the SSXA libraries to the IDE where your application was built (JDeveloper, for example). You can then drop some code into your 3rd party site/application that can both create and pull dynamically contributable content out of the Content Server for inclusion in your pages. If the 3rd party site is not a JSP application, there is also the option of leveraging two Site Studio (not SSXA) specific custom WebCenter Content services to pull Site Studio XML content into a page. More information on SSXA can be found here: http://docs.oracle.com/cd/E17904_01/doc.1111/e13650/toc.htm Is there another way than a "gadget" to integrate applications (like a loan simulator) in WebCenter Sites? There are some other ways, such as leveraging the Pagelet Producer, which is a core component of WebCenter Portal. Oracle WebCenter Portal's Pagelet Producer (previously known as Oracle WebCenter Ensemble) provides a collection of useful tools and features that facilitate dynamic pagelet development. A pagelet is a reusable user interface component. Any HTML fragment can be a pagelet, but pagelet developers can also write pagelets that are parameterized and configurable, to dynamically interact with other pagelets, and respond to user input. Pagelets are similar to portlets, but while portlets were designed specifically for portals, pagelets can be run on any web page, including within a portal or other web application. Pagelets can be used to expose platform-specific portlets in other web environments. More on the Pagelet Producer can be found here: http://docs.oracle.com/cd/E23943_01/webcenter.1111/e10148/jpsdg_pagelet.htm#CHDIAEHG Can you describe the mechanism available to achieve the context transfer of content? The primary goal of context transfer is to provide a uniform experience to customers as they transition from one channel to another; for instance, the use case discussed in the webcast involved a customer moving from the .com marketing website to the self-service site where the customer wants to manage his account information. If WebCenter Sites has identified and segmented the customer into a category that makes him a potential target for certain promotions, the same promotions should be targeted to the customer when he is on the self-service site, which is managed by WebCenter Portal. The context transfer can be achieved by calling the WebCenter Sites Engage Server APIs, which will identify the segment that the customer has been bucketed into. Then, through REST APIs, WebCenter Portal can request from WebCenter Sites the specific content that needs to be targeted at the customer for the identified segment. While this integration can be achieved through custom integration today, Oracle is looking into productizing this integration in future releases.
How can context be transferred from WebCenter Sites (marketing site) to WebCenter Portal (online services)? The WebCenter Portal Personalization server can call into the WebCenter Sites Engage Server to identify the segment for the user and then, through REST APIs, request the specific content that needs to be surfaced in the Portal. Still have questions? Leave them in the comments section! And you can catch a replay of the webcast here.
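A custom integration along the lines described (the Portal asking Sites for the visitor's segment, then requesting content targeted at that segment over REST) could look roughly like the sketch below. The base URL, resource paths, and JSON fields are hypothetical placeholders, not the actual WebCenter Sites REST resources; consult the product documentation for the real endpoints.

    # Rough sketch of the custom context-transfer integration described above:
    # 1) ask the Sites-side engagement service which segment the visitor falls into,
    # 2) request content targeted at that segment for rendering in the Portal.
    # All URLs, paths, and field names are hypothetical placeholders.
    import json
    import urllib.request

    SITES_BASE = "https://sites.example.com/rest"  # hypothetical base URL

    def get_json(url):
        with urllib.request.urlopen(url) as response:
            return json.loads(response.read().decode("utf-8"))

    def targeted_content_for(visitor_id):
        # Step 1: look up the visitor's segment (hypothetical resource path).
        segment = get_json(SITES_BASE + "/visitors/" + visitor_id + "/segment")["segmentId"]
        # Step 2: fetch content targeted at that segment (hypothetical resource path).
        return get_json(SITES_BASE + "/content?segment=" + segment)

    # content = targeted_content_for("visitor-123")  # requires a live endpoint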

    Read the article

  • Government Mandates and Programming Languages

    A recent SEC proposal (which, at over 600 pages, I haven't read in any detail) includes the following: We are proposing to require the filing of a computer program (the waterfall computer program, as defined in the proposed rule) of the contractual cash flow provisions of the securities in the form of downloadable source code in Python, a commonly used computer programming language that is open source and interpretive. The computer program would be tagged in XML and required to be filed with the Commission as an exhibit. Under our proposal, the filed source code for the computer program, when downloaded and run (by loading it into an open Python session on the investor's computer), would be required to allow the user to programmatically input information from the asset data file that we are proposing to require as described above. We believe that, with the waterfall computer program and the asset data file, investors would be better able to conduct their own evaluations of ABS and may be less likely to be dependent on the opinions of credit rating agencies. With respect to any registration statement on Form SF-1 (Section 239.44) or Form SF-3 (Section 239.45) relating to an offering of an asset-backed security that is required to comply with Item 1113(h) of Regulation AB, the Waterfall Computer Program (as defined in Item 1113(h)(1) of Regulation AB) must be written in the Python programming language and able to be downloaded and run on a local computer properly configured with a Python interpreter. The Waterfall Computer Program should be filed in the manner specified in the EDGAR Filer Manual. I don't see how it can be in investors' best interests that the SEC demand a particular programming language be used for software related to investment data. I have a feeling that investors who use computers at all already have software with which they are familiar, and that the vast majority of them are not running an open source scripting language on their machines to do their financial analysis. In fact, I would wager that most of them are using tools like Excel, and if they really need to script anything, it's being done with VBA in Excel. Now, I'm not proposing that the SEC should require that the data be provided in Excel format with VBA scripts included so everyone can easily access the data (despite the fact that this would actually be pretty useful generally). Rather, I think it is ill-advised for a government agency to make recommendations of this nature, period. If the goal of the recommendation is to ensure that the way things work is codified in a transparent manner, then I can certainly respect that. It seems to me that this could be accomplished without dictating the technology to use. To wit: an Excel document could contain all of the data as well as the formulae necessary, and most likely would not require the end user to install anything on their machine; the SEC could simply create a calculator in the cloud so that any and all investors could use a single canonical web-based (or web-service-based) tool; or millions of Java and .NET developers could write their own implementations. You can read more about this issue, including the favorable position on it, on Jayanth Varma's blog.
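To make the "waterfall computer program" idea concrete, here is a toy example of the kind of cash-flow logic such a filing might encode: available collections paid sequentially to tranches in order of seniority. It is a deliberately simplified sketch of the concept, not a real ABS waterfall and not anything resembling the format the SEC would require.

    # Toy illustration of a sequential-pay waterfall: collections are applied to tranches
    # in order of seniority until the cash runs out. Deliberately simplified; not a real
    # ABS waterfall model.
    def run_waterfall(collections, tranche_balances):
        """Return the payment made to each tranche, senior-most first."""
        payments = {}
        remaining = collections
        for tranche, balance in tranche_balances.items():  # assumed ordered senior -> junior
            paid = min(remaining, balance)
            payments[tranche] = paid
            remaining -= paid
        return payments

    print(run_waterfall(1250000.0, {"Class A": 1000000.0, "Class B": 500000.0, "Class C": 250000.0}))
    # -> {'Class A': 1000000.0, 'Class B': 250000.0, 'Class C': 0.0}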

    Read the article

  • Implementing Service Level Agreements in Enterprise Manager 12c for Oracle Packaged Applications

    - by Anand Akela
    Contributed by Eunjoo Lee, Product Manager, Oracle Enterprise Manager. Service Level Management, or SLM, is a key tool in the proactive management of any Oracle Packaged Application (e.g., E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, Fusion Apps, etc.). The benefits of SLM are that administrators can utilize representative Application transactions, which are constantly and automatically running behind the scenes, to verify that all of the key application and technology components of an Application are available and performing to expectations. A single transaction can verify the availability and performance of the underlying Application Tech Stack in a much more efficient manner than by monitoring the same underlying targets individually. In this article, we'll be demonstrating SLM using Siebel Applications, but the same tools and processes apply to any of the Packaged Applications mentioned above. In this demonstration, we will log into the Siebel Application, navigate to the Contacts View, update a contact phone record, and then log out. This transaction exposes availability and performance metrics of multiple Siebel Servers, multiple Components and Component Groups, and the Siebel Database, in a single unified manner. We can then monitor and manage these transactions like any other target in EM 12c, including placing proactive alerts on them if the transaction is either unavailable or is not performing to required levels. The first step in the SLM process is recording the Siebel transaction. The following screenwatch demonstrates how to record a Siebel transaction using an EM tool called "OpenScript". A completed recording is called a "Synthetic Transaction". The second step in the SLM process is uploading the Synthetic Transaction into EM 12c and creating Generic Service Tests. We can create a Generic Service Test to execute our synthetic transactions at regular intervals to evaluate the performance of various business flows. As these transactions are running periodically, it is possible to monitor the performance of the Siebel Application by evaluating the performance of the synthetic transactions. The process of creating a Generic Service Test is detailed in the next screenwatch. EM 12c provides a guided workflow for all of the key creation steps, including configuring the Service Test, uploading the Synthetic Test, determining the frequency of the Service Test, establishing beacons, and selecting performance and usage metrics, just to name a few. The third and final step in the SLM process is the creation of Service Level Agreements (SLAs). Service Level Agreements allow administrators to utilize the previously created Service Tests to specify expected service levels for Application availability, performance, and usage. SLAs can be created for different time periods and for different Service Tests. This last screenwatch demonstrates the process of creating an SLA, as well as highlights the Dashboards and Reports that administrators can use to monitor Service Test results. Hopefully, this article provides you with a good starting point for creating Service Level Agreements for your E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, or Fusion Applications. Enterprise Manager Cloud Control 12c, with the Application Management Suites, represents a quick and easy way to implement Service Level Management capabilities at customer sites. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Google+ | Newsletter
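OpenScript records the transaction through a guided UI, so there is nothing to hand-write there, but the shape of what a synthetic transaction measures can be sketched in a few lines: drive the login, navigate, update, and logout steps, time the run, and flag a breach when availability or response time misses the agreed level. The sketch below is an illustrative analogue in plain Python, not OpenScript or an EM 12c API, and the URLs and threshold are placeholders.

    # Illustrative sketch of what a synthetic transaction measures: run a scripted business
    # flow, time it, and compare the result against an agreed service level. This stands in
    # for the OpenScript recording; the URLs and threshold are placeholders.
    import time
    import urllib.request

    STEPS = [  # hypothetical endpoints for login -> contacts -> update -> logout
        "https://siebel.example.com/login",
        "https://siebel.example.com/contacts",
        "https://siebel.example.com/contacts/update",
        "https://siebel.example.com/logout",
    ]
    RESPONSE_TIME_SLA_SECONDS = 5.0

    def run_service_test():
        start = time.monotonic()
        try:
            for url in STEPS:
                urllib.request.urlopen(url, timeout=10).read()
        except OSError as error:
            print("UNAVAILABLE:", error)  # availability breach
            return
        elapsed = time.monotonic() - start
        status = "OK" if elapsed <= RESPONSE_TIME_SLA_SECONDS else "SLA BREACH"
        print("%s: transaction completed in %.2fs" % (status, elapsed))

    # run_service_test()  # requires reachable endpoints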

    Read the article

  • Call for Papers for both Devoxx UK and France now open!

    - by Yolande
    The two conferences are taking place the last week of March 2013, with London on March 26th and 27th and Paris on March 28th and 29th. Oracle fully supports "Devoxx UK" and "Devoxx France" as a European Platinum Partner. Submit proposals and participate in both conferences, since they are a two-hour train ride away from one another. The Devoxx conferences are designed "for developers by developers." The conference committees are looking for speakers who are passionate developers unafraid to share their knowledge of Java, mobile, web and beyond. The sessions cover frameworks, tools and development, with in-depth conference sessions, short practical quickies, and birds-of-a-feather discussions. Those different formats allow speakers to choose the best way to present their topics, and the preferred format can be indicated during the submission process. Devoxx has proven its success under Stephan Janssen, organizer of Devoxx in Belgium for the past 11 years, which has been the biggest Java conference in Europe for many years. To organize these local conferences, Stephan has enrolled the top community leaders in the UK and France. Ben Evans and Martijn Verburg are the leaders of the London Java User Group (JUG) and are also known internationally for starting the Adopt-a-JSR program. Antonio Goncalves is the leader of the Paris JUG. He organized last year's Devoxx France, which was a big success at twice the size first expected. The organizers made sure to add local character to the conferences. "The community energy has to feel right," said Ben Evans, and for that he picked an "old Victoria hall" for the venue. Those leaders are part of very dynamic Java communities in France and in the UK. France has 22 JUGs; the Paris JUG alone has 2,000 members. The UK has over 50,000 developers working in London and its surroundings; a lot of them are Java developers working in the financial industry. The conference fee is kept as low as possible to encourage those developers to attend. Devoxx promises to be crowded and sold out in advance. Make sure to submit your talks to both Devoxx UK and France before January 31st, 2013. 

    Read the article

< Previous Page | 476 477 478 479 480 481 482 483 484 485 486 487  | Next Page >