Search Results

Search found 7677 results on 308 pages for 'das team'.


  • Cloud Based Load Testing Using TF Service &amp; VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx
    One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is 'Cloud Based Load Testing'. In this blog post I'll walk you through: What is Cloud Based Load Testing? How have I been using this feature? (A success story!) Where can you find more resources on this feature?
    What is Cloud Based Load Testing? It goes without saying that performance testing your application not only gives you the confidence that it will work under heavy levels of stress, but also lets you test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry I have realized that the biggest barriers to Load Testing & Performance Testing adoption are: the high infrastructure and administration cost that comes with this phase of testing; the time taken to procure and set up the test infrastructure; and finding a use for this infrastructure investment after testing is complete.
    Is cloud the answer? It is 100% Visual Studio compatible, scalable and realistic, intuitive, you can start testing in under 2 minutes, you pay only for what you need, and you can use existing on-premise tests in the cloud. There are a lot of vendors out there offering Cloud Based Load Testing, to name a few: LoadStorm, SOASTA, BlazeMeter, Blitz and others. The question you may want to ask is why you should go with Microsoft's cloud-based load test offering. If you are a Microsoft shop or already have investments in Microsoft technologies, you'll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing web tests authored in Visual Studio 2010 or Visual Studio 2012 will run on the cloud without requiring any modifications whatsoever. Microsoft's cloud test rig also supports API-based testing: for example, if you are building a WPF application which consumes WCF services, you can write unit tests that invoke the WCF service; these tests can be run on the cloud test rig and loaded with 'N' concurrent users for performance testing. If your assets are already hosted in Azure, and possibly in the same data centre as the cloud test rig, your Azure app will not incur a usage cost for the generated traffic, since the traffic comes from the same data centre. The licensing and pricing information for Microsoft's cloud-based load test service is yet to be announced, but I would expect it to be priced attractively to match the market competition. The only additional configuration required for running load tests on the service is to set the test run location to "Run tests using Visual Studio Team Foundation Service".
    How have I been using Microsoft's Cloud based Load Test Service? I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave me the opportunity to see the product shape up from concept to working solution. I was also the first person outside of Microsoft to try this offering out, which gave me the opportunity to test real-world applications at various clients using the Microsoft Load Test Service and provide real-world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test Service has been an insurance quote generation engine.
    This insurance quote generation engine is hosted in Windows Azure, expected to get quote requests from across the globe, and expected to handle 5 million quote requests in a day (it is not clear how this load will be distributed across the day). There was no way I could simulate that kind of load from on premise without standing up additional hardware, but Microsoft's Cloud based Load Test service allowed me to test my key performance testing scenarios: simulating expected load, endurance testing, threshold testing and testing for latency.
    Simulating expected load: approach to devising a load pattern. My approach to devising a load test pattern has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. So, for example, generating 1 quote from the quote engine takes 0.5 seconds. Now if you do the math: 1 quote request by 1 user = 0.5 seconds, so 1 user generates 2 quotes per second; quotes generated by 1 user in 24 hours = 2 * 60 * 60 * 24 = 172,800; quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000. This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, etc., then you can devise your own load pattern - some examples of load test patterns can be found here.
    Endurance Testing. To test for endurance, I loaded the quote generation engine with an expected fixed user load and ran the test for a very long duration, such as over 48 hours, and observed the effect of the long-running test on the Azure infrastructure. Currently the Microsoft Load Test service does not support metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test.
    Threshold Testing. To figure out how much user load the application could cope with before falling over, I opted to step-load the quote generation engine, incrementing user load with different variations of incremental user load per minute until the application crashed and forced an IIS reset.
    Testing for Latency. Currently the Microsoft Load Test service does not support generating geographically distributed load; however, I deployed the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (this feature is provided out of the box right from Visual Studio 2010), I could see the difference in response times.
    More resources on Microsoft Cloud based Load Test Service. A few important links to get you started: Download Visual Studio Ultimate 2013 Preview; Getting started guide for load testing using Team Foundation Service; Troubleshooting guide for FAQs and known issues; Team Foundation Service forum for questions and support; Detailed demo and presentation (link to Tech-Ed session recording); Detailed demo and presentation (link to Build session recording). There are a few limits on the usage of the Microsoft Cloud based Load Test service that you can read about here. If you have any feedback on the service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
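    The load-pattern arithmetic above is simple enough to capture in a few lines of code. The following is only a minimal sketch in Python, with the figures from this example (0.5 seconds per quote, a target of 5 million quotes per day) hard-coded as assumptions:

      import math

      # Minimal sketch of the load-pattern arithmetic described above.
      # Assumed figures from the example: 0.5 s per quote, 5 million quotes/day target.
      SECONDS_PER_DAY = 24 * 60 * 60

      def quotes_per_user_per_day(seconds_per_quote):
          """Quotes a single simulated user can generate in 24 hours."""
          return SECONDS_PER_DAY / seconds_per_quote

      def users_needed(target_per_day, seconds_per_quote):
          """Smallest whole number of concurrent users that reaches the daily target."""
          return math.ceil(target_per_day / quotes_per_user_per_day(seconds_per_quote))

      if __name__ == "__main__":
          print(quotes_per_user_per_day(0.5))   # 172800.0 quotes per user per day
          print(users_needed(5_000_000, 0.5))   # 29 users (the post rounds up to 30)

    In practice the 5 million requests will not arrive evenly across the day, so a step or goal-based load pattern sized for the peak hour is usually a safer target than this flat average.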

    Read the article

  • Building Great-Looking, Usable Apps: A two-day workshop applying Oracle’s best UX practices in ADF

    - by mvaughan
    By Misha Vaughan, Oracle Applications User Experience
    I have been with Oracle for more than 12 years. It is a company that has granted me extraordinary creative freedom to help deliver compelling experiences for customers. I am beyond proud to talk about one of the experiences we just took for a test drive. Recently, we delivered a first-of-its-kind, three-team collaboration, train-the-trainer event in Reading, U.K., on building great-looking, usable apps based on Oracle Fusion Applications -- using the ADF tool kit.
    A new kind of workshop. Kevin Li, Platform Product Director, asked the Oracle Applications User Experience VP, Jeremy Ashley, if the team had anything to help partners and customers build applications that looked like Fusion. He was receiving this request from European partners and customers. Some quick conversations ensued, and the idea for the workshop was born: We would conduct an experiment. We would work with feedback from the key Platform Technology Solutions (PTS) trainers under Andre Pavanello, Director, Platform Technology Solutions, in Europe, Middle East, and Africa. We would partner with the ADF team led by Grant Ronald, Director of Product Management, and leverage the Applications UX expertise in Ashley's team. The goal: create a pilot workshop that in two days would explain to an ADF developer how to leverage the next-generation user experience best practices developed for Fusion Apps. Why? Customers who need integrations with Oracle Fusion Applications, who are looking for custom applications that need to co-exist with Fusion, or who quite simply want a next-generation design for a custom app, need their solutions to reflect the next-generation research and design.
    Building an event for an ADF developer. The biggest hurdle was figuring out where to start. How far into user experience country do you take an ADF developer? How far into ADF do you need to go if you are a UX professional? After some time in the UX kitchen, the workshop recipe looked like this. Mix equal parts:
    • Fusion user experience design principles and functional design patterns
    • The art and science behind UX
    • How to wireframe designs that you can build in Fusion
    • How to translate those designs into an ADF application
    Ultan O’Broin, Director of Global User Experience, explaining the trouble ticket wireframe design exercise
    Lynn Munsinger, Senior Group Product Manager, explaining the follow-on trouble ticket ADF coding exercise
    For spice, add:
    • Debra Lilley, Fujitsu and ACE director, showcasing some of the latest ADF design work in the new face of Fusion Applications
    • Partner show-and-tell of example apps they have built with FMW and ADF that are dynamic, beautiful, and interactive.
    Debra Lilley, Oracle ACE Director and Fujitsu Fusion Champion, on the new face of Fusion built with ADF, and Fusion extensibility with composers as a window into “the possible”
    The taste test. This first go-round of the workshop was aimed squarely at ADF developers and partners. We were privileged to have participation and feedback from:
    • Sten Vesterli, Scott/Tiger S.A., Denmark
    • John Sim, Fishbowl Solutions, UK
    • Josef Huber, Primus Delphi Group, Munich
    • Thaddaus Weindl, Primus Delphi Group, Munich
    • Praveen Pillalamarri, EiS Technologies, Bangalore
    • Balaji Kamepalli, EiS Technologies, Bangalore
    • Plinio Arbizu, Services & Processes Solutions S.A., Mexico
    • Yannick Ongena, infoMENTUM, UK
    • Jakub Ciszek, infoMENTUM, UK
    • Mauro Flores, infoMENTUM, UK
    • Matteo Formica, infoMENTUM, UK
    Richard Bingham, Oracle, Mauro Flores and Matteo Formica, infoMENTUM
    Why is this so exciting? Oracle has invested heavily in the research and development of the Oracle Fusion Applications user experience. This investment has been and continues to be applied across the product lines. Now, we finally get to teach customers and partners how to take advantage of this investment for custom solutions. This event was a pilot to test-drive the content, as well as a train-the-trainer event that our EMEA colleagues will be using with partners who want to build with Fusion Apps design patterns.
    What did attendees think? "I liked most the science stuff, like eye-tracking, design patterns and best practice (color, contrast)," Josef Huber said. "It was a very good introduction to UI design, and most developers and project managers are very bad in that. So this course would be good for all developers and even project managers."
    Team Anonymous: John Sim, Fishbowl Solutions, Flavius Sana, Oracle, Josef Huber, infoMENTUM, Mireille Duroussaud, Oracle. Winners of the wireframing design exercise.
    Sten Vesterli, of Scott/Tiger, said he attended to learn techniques he could use in his own projects. He wants to ensure that his applications better meet the needs of his users, and he said sessions during the workshop on user interface design and wireframing were most useful to him. "Go to this event to learn the art and science of good user interfaces from people who really know how to do it," he said.
    Sten Vesterli, Scott/Tiger, Angelo Santagata, Oracle
    Plinio Arbizu said the workshop fulfilled his goals, thanks to the recommendations given on how to design user interfaces to facilitate the adoption of applications among the final users. "The workshop combined these recommendations with an exercise that improved the technical comprehension, permitting the usage of JDeveloper to set forth our solutions," he said. He added: "The first session that I really enjoyed was the five Fusion design principles. It was incredible to discover how these simple principles were included in an inherent manner in Fusion Applications, and I had been using many of them applying only ADF components. Another topic that I enjoyed a lot was the eight recommendations about the visual design of UIs. The issues that were raised in that lesson are unknown to the developers and of great value to achieve an attractive presentation layer for the end users. Participate in this workshop, and include these usability features in your projects, and in this manner not only facilitate and improve the user productivity, but also distinguish yourself as a professional who takes full advantage of the functionalities offered by Oracle technology."
    Praveen Pillalamarri came to the workshop to learn about the difficulties faced in UI and UX development, and how these can be resolved with the help of ADF. He also appreciated the opportunity to talk with other individuals who came to the workshop. Pillalamarri said, "The way we looked at things in terms of work and projects was sharpened. The UI and UX design knowledge shared by you was quite interesting, especially the minute things which we ignored in the UI or UX design."
    Plinio Arbizu, Services & Processes Solutions S.A., Richard Bingham, Oracle, Balaji Kamepalli, & Praveen Pillalamarri, EiS Technologies
    Ready to spread the word. In EMEA, Oracle customers and partners have access to three world-class trainers via Platform Technology Solutions: Mireille Duroussaud, Flavius Sana, and Angelo Santagata. Contact Andre Pavanello if you would like to experience this workshop firsthand, or if you have customers or partners who would benefit from the training. We are looking to bring the event to the U.S. in spring 2013. If you have interest in this kind of workshop, leave a comment below. For those who want to follow the action, join the ADF Enterprise Methodology Group run by Oracle's Chris Muir. Ask questions and continue the conversation in this forum, or check blogs.oracle.com/usableapps for topics emerging from the workshop.

    Read the article

  • Win a place at a SQL Server Masterclass with Kimberly Tripp and Paul Randal

    - by Testas
    The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers! And you could be there courtesy of the UK SQL Server User Group and SQL Server Magazine! This week the UK SQL Server User Group will provide you with details of how to win a place at this must-see seminar. You can also register for the seminar yourself at: www.regonline.co.uk/kimtrippsql
    More information about the seminar
    Where: Radisson Edwardian Heathrow Hotel, London
    When: Thursday 17th June 2010
    This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:
    • Debunk many of the ingrained misconceptions around SQL Server's behaviour
    • Show you disaster recovery techniques critical to preserving your company's life-blood - the data
    • Explain how a common application design pattern can wreak havoc in the database
    • Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier!
    Please Note: Agenda may be subject to change
    Sessions Abstracts
    KEYNOTE: Bridging the Gap Between Development and Production
    Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA who can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day, discussing some of the issues that can arise, explaining how some can be avoided, and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server and troubleshoot when things go wrong.
    SESSION ONE: SQL Server Mythbusters
    It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices, so these are just a few of the many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained.
    SESSION TWO: Database Recovery Techniques Demo-Fest
    Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike.
    In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted!
    SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward
    Since the addition of the GUID (Microsoft's implementation of the UUID), my life as a consultant and "tuner" has been busy. I've seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - they simplify the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID, but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys!
    SESSION 4: Essential Database Maintenance
    In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005, but we'll explain some of the key differences for 2000 and 2008 as well.
    Speaker Biographies
    Paul S. Randal and Kimberly L. Tripp
    Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for the core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and teach throughout Microsoft. In their spare time, they like to find frogfish in remote corners of the world.

    Read the article

  • FIFA EM 2012: Prediction-game application by Christian Rokitta

    - by carstenczarski
    You are not only an APEX developer, but also a football fan...? Then Christian Rokitta's EURO2012 prediction-game application (built with APEX, of course) is just the thing for you. But everyone else will also find here an APEX application that impressively demonstrates what is possible with an APEX application layout. There is considerably more in it than the supplied templates offer.

    Read the article

  • Monitoring with Oracle Enterprise Manager

    - by fernando.galdino
    The figure below offers an overview of the monitoring capabilities provided by Oracle Enterprise Manager (OEM), a tool that lets you manage the company's IT infrastructure. An important component of the solution is called OEM Grid Control. This component makes it possible to manage, view and monitor many different elements from a single console. And which elements can be monitored? In the concept used by OEM, the elements that can be monitored are called Targets, and these targets cover the monitoring of hosts (Windows, Linux, Solaris), databases, middleware, web applications, services that can be customized by the administrator, systems and groups of targets, as well as the Oracle applications. Each monitored element is enabled through management packs. In other words, there is a series of packs that can be acquired as needed to allow monitoring from OEM Grid Control itself. There are special monitoring packs for Oracle databases, and monitoring packs for Tomcat, JBoss, WebLogic, SOA Suite and Identity Management. The list is quite extensive and I will give more details in a new post, but if you want to have a look, see: http://download.oracle.com/docs/cd/B16240_01/doc/nav/overview.htm Besides the monitoring packs, there are also plugins and connectors. The plugins allow the management of additional elements, such as network devices, servers, third-party databases (DB2, SQL Server), VMware, etc. The connectors, in turn, allow integration with other software, such as helpdesk request managers, so that the alerts generated by the tool can be integrated and tickets created in tools such as CA Service Desk, BMC Remedy and others. The range of functionality is really vast. In an upcoming post I will comment on Ops Center, a new component that appeared after the acquisition of Sun. Besides Grid Control and Ops Center, there are other very interesting components. The figure below illustrates the various layers where the Oracle tooling can be used for monitoring. There is a pack that allows you to manage service levels across all the layers illustrated: given a request, the SLA data can be broken down for each layer. And there is also Real User Monitoring, which is about measuring the user experience. I will talk about it in a new post, but basically the tool lets you follow all the network traffic generated from the end users to the web servers, and with that trace how each user uses the application, how long they browse the site, whether they ran into any kind of problem, and whether any request was not completed due to a problem in the infrastructure. It is a very interesting tool; I will say a bit more about it later. And of course, there are also components for performing functional and load testing. Coming soon, here on the blog :)

    Read the article

  • SQL Server Master class winner

    - by Testas
    The winner of the SQL Server MasterClass competition, courtesy of the UK SQL Server User Group and SQL Server Magazine: Steve Hindmarsh
    There is still time to register for the seminar yourself at: www.regonline.co.uk/kimtrippsql
    More information about the seminar
    Where: Radisson Edwardian Heathrow Hotel, London
    When: Thursday 17th June 2010
    This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:
    • Debunk many of the ingrained misconceptions around SQL Server's behaviour
    • Show you disaster recovery techniques critical to preserving your company's life-blood - the data
    • Explain how a common application design pattern can wreak havoc in the database
    • Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier!
    Please Note: Agenda may be subject to change
    Sessions Abstracts
    KEYNOTE: Bridging the Gap Between Development and Production
    Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA who can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day, discussing some of the issues that can arise, explaining how some can be avoided, and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server and troubleshoot when things go wrong.
    SESSION ONE: SQL Server Mythbusters
    It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices, so these are just a few of the many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained.
    SESSION TWO: Database Recovery Techniques Demo-Fest
    Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted!
    SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward
    Since the addition of the GUID (Microsoft's implementation of the UUID), my life as a consultant and "tuner" has been busy. I've seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And, I know why GUIDs are chosen - they simplify the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID, but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips and clustering keys!
    SESSION 4: Essential Database Maintenance
    In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include: backups, shrinks, fragmentation, statistics, and much more! Focus will be on 2005, but we'll explain some of the key differences for 2000 and 2008 as well.
    Speaker Biographies
    Paul S. Randal and Kimberly L. Tripp
    Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and ultimately with responsibility for the core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and teach throughout Microsoft. In their spare time, they like to find frogfish in remote corners of the world.
    Speaker Testimonials
    "To call them good trainers is an epic understatement. They know how to deliver technical material in ways that illustrate it well. I had to stop Paul at one point and ask him how long it took to build a particular slide because the animations were so good at conveying a hard-to-describe process."
    "These are not beginner presenters, and they put an extreme amount of preparation and attention to detail into everything that they do. Completely, utterly professional."
    "When it comes to the instructors themselves, Kimberly and Paul simply have no equal.
Not only are they both ultimate authorities, but they have endless enthusiasm about the material, and spot on delivery. If either ever got tired they never showed it, even after going all day and all week. We witnessed countless demos over the course of the week, some extremely involved, multi-step processes, and I can’t recall one that didn’t go the way it was supposed to." "You might think that with this extreme level of skill comes extreme levels of egotism and lack of patience. Nothing could be further from the truth. ... They simply know how to teach, and are approachable, humble, and patient." "The experience Paul and Kimberly have had with real live customers yields a lot more information and things to watch out for than you'd ever get from documentation alone." “Kimberly, I just wanted to send you an email to let you know how awesome you are! I have applied some of your indexing strategies to our website’s homegrown CMS and we are experiencing a significant performance increase. WOW....amazing tips delivered in an exciting way!  Thanks again” 

    Read the article

  • Distrilogie changes its name to Altimate

    - by Paulo Folgado
    The Distrilogie Group enters a new dimension. The value-added IT distributor is betting on a radical change: it is changing its name and image to become Altimate - Smart IT Distributor.
    Lisbon, 5 May 2010 - For this group with a recognized track record of success, the main strength lies in change: as of today, Distrilogie Portugal, Spain, Belgium, Luxembourg, the Netherlands and France, as well as all of their acquisitions, leave their name behind and form the new Altimate group. In the Iberian Peninsula, this change affects the Distrilogie Iberia group, made up of Distrilogie Portugal, Distrilogie Spain and Mambo Technology, the group's distributor specialized in security.
    Altimate: a brand with great European ambitions. This change is based on the desire to strengthen a group with a long and fruitful track record, which counts on the best talent and a diversified range of highly complementary solutions. "Continuing to grow at our pace (+27% this year), in times like these, means developing every possible synergy within our group, and not only at the national and regional level, but also pan-European. Internationally, our group enjoys a great diversity of solutions that complement one another. It is a wealth that we want to take advantage of and develop in each country, consolidating our pan-European portfolio. This is a fundamental point for future growth, now that the market of the main manufacturers is tending towards consolidation," explains Alexis Brabant, Managing Director of Altimate Iberia and member of the European Executive Committee of the Altimate Group.
    On the other hand, the creation of Altimate rests on an ambitious strategy of expansion and consolidation across the whole continent. Among other fundamental objectives, Altimate intends to establish itself in 4 new European Union countries within the next 2 years. As Patrice Arzillier, founder of Distrilogie and CEO of the Altimate group, puts it: "Thanks to the unconditional support of our shareholder DCC, our group has experienced remarkable development. Today, the creation of Altimate marks a new stage of growth, combining economic solidity, an ambition of European expansion and the preservation of our founding values."
    Altimate: high proximity. Like Distrilogie, the new Altimate group has as its mission the success of its partners and manufacturers. To fulfil it, the group will continue to build on the closeness of its teams - highly qualified and focused on identifying the smartest, most innovative and most suitable solutions.
    For more information about Altimate, visit the new site: http://www.altimate-group.com

    Read the article

  • OpenCV install problems on Studio 12.04 - broken dependencies

    - by Will
    I'm trying to follow the Ubuntu OpenCV documentation at OpenCV. The provided script has a line which executed for some time, taking away more packages than I expected (such as ubuntu-studio video); sudo apt-get -qq remove ffmpeg x264 libx264-dev When the script gets to the line below, it bombs; sudo apt-get -qq install libopencv-dev build-essential checkinstall cmake pkg-config yasm libtiff4-dev libjpeg-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev libxine-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev python-dev python-numpy libtbb-dev libqt4-dev libgtk2.0-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils ffmpeg The error msg is; E: Unable to correct problems, you have held broken packages. I've since run Update-Manager, run sudo apt-get updates, rebooted, tried the above script line manually, and still no change. I've just run sudo apt-get install -f and nothing seemed to change. It did mention that some packages were no longer needed and could be removed by apt-get autoremove, so I ran that. It removed a number of packages, so I reran the install command above. Still same problem of held broken packages. I just ran sudo apt-get -u dist-upgrade Part of the response was; The following packages have been kept back: gstreamer0.10-ffmpeg I'm not sure what that means. I do know that it shows up in my Update-Manager and cannot be checked I then ran sudo dpkg --configure -a and then reran sudo apt-get -f install and the package was still not upgraded, though there was this very interesting comment; Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: gstreamer0.10-ffmpeg : Depends: libavcodec53 (< 5:0) but it is not going to be installed or libavcodec-extra-53 (< 5:0) but 5:0.7.2-1ubuntu1+codecs1~oneiric2 is to be installed E: Unable to correct problems, you have held broken packages. Then I ran sudo apt-get -u dist-upgrade It showed I had one held package, so I ran; sudo apt-get -o Debug::pkgProblemResolver=yes dist-upgrade It also exited without upgrading the package, so I ran; sudo apt-get remove --dry-run gstreamer0.10-ffmpeg:i386 And it gave me; *The following packages will be REMOVED: arista gstreamer0.10-ffmpeg 0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded. Remv arista [0.9.7-3ubuntu1] Remv gstreamer0.10-ffmpeg [0.10.12-1ubuntu1]* But when I reran sudo apt-get -u dist-upgrade It showed the package was still there. *The following packages have been kept back: gstreamer0.10-ffmpeg 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.* Update: Just went into Synaptic PM and completely removed gstreamer0.10-ffmpeg Reran sudo apt-get -u dist-upgrade And was told; 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. However, when I ran the original apt-get to install opencv (first code at the top of this question), it still gave me the same broken package errors. 
So I tried $ cat /etc/apt/sources.list # # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://us.archive.ubuntu.com/ubuntu/ precise main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ precise main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://us.archive.ubuntu.com/ubuntu/ precise universe deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://us.archive.ubuntu.com/ubuntu/ precise multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ precise multiverse deb http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://security.ubuntu.com/ubuntu precise-security main restricted deb-src http://security.ubuntu.com/ubuntu precise-security main restricted deb http://security.ubuntu.com/ubuntu precise-security universe deb-src http://security.ubuntu.com/ubuntu precise-security universe deb http://security.ubuntu.com/ubuntu precise-security multiverse deb-src http://security.ubuntu.com/ubuntu precise-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http://archive.canonical.com/ubuntu precise partner # deb-src http://archive.canonical.com/ubuntu oneiric partner ## Uncomment the following two lines to add software from Ubuntu's ## 'extras' repository. ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. # deb http://extras.ubuntu.com/ubuntu oneiric main # deb-src http://extras.ubuntu.com/ubuntu oneiric main # deb http://download.opensuse.org/repositories/home:/popinet/xUbuntu_11.04 ./ # disabled on upgrade to precise and then; $ cat /etc/apt/sources.list.d/* But I don't have enough reputation to post the results here (it says I need at least 10 reputation points to post more than 2 links), so I don't know how to provide the requested feedback. 
Then tried; $ sudo apt-get check [sudo] password for <abcd>: Reading package lists... Done Building dependency tree Reading state information... Done However, no resolution of the problem yet. What else do I need to do? Will an upgrade to Ubuntu Studio 13.xx solve this problem (or compound it)?

    Read the article

  • Future of Active FoxPro Pages - secured

    Finally some official news about Active FoxPro Pages, aka AFP. The German company BvL Bürosysteme Vertriebs GmbH bought all rights to Active FoxPro Pages out of the insolvency estate. Being a long-standing customer and intensive user of AFP since version 2.0, BvL has its own interest in the continuation of AFP on current and future web servers. Together with their partners Christof Wollenhaupt (Foxpert Software Development & Consulting) and Jochen Kirstätter (IOS Indian Ocean Software Ltd), BvL will continue with the development, support and marketing of AFP in the upcoming weeks. There will be an updated version of AFP, a relaunch of the website, re-enabling of the activation server, re-establishment of the support channel, and much more... Personally, I am relieved that this superb product made its way out of the dust of the past years. And of course, to be involved (again) in the development and support of Active FoxPro Pages gives me a big smile. Rest assured that there will be more articles on AFP soon! Here is the original announcement of 27th September 2010 from the online forum of the German FoxPro Usergroup (dFPUG), section Active FoxPro Pages, translated from German:
    Dear AFP users, dear FoxPro community, after the insolvencies of ProLib Software GmbH and ProLib Tools GmbH there was some uncertainty about the future of Active FoxPro Pages. We can now tell you that a solution has been found that is positive for everyone involved. We, BvL Bürosysteme Vertriebs GmbH from Berlin, have bought all rights to AFP out of the insolvency estate from the insolvency administrator. BvL Bürosysteme Vertriebs GmbH was founded back in 1987 and has proven itself successfully in the market ever since. We have been part of the FoxPro community since FoxPro 2.0, and with AFP 2.0 we also made our entry into the AFP community. We do not want to put AFP away in some drawer; our goal is to develop AFP further, especially for the upcoming server versions. The homepage www.active-foxpro-pages.de will soon get a new presence. Nothing much is going to change about the prices, the manual will be published properly, and of course support and further development will receive a great deal of attention. With Christof Wollenhaupt and Jochen Kirstätter we have two partners on board who will take care of support and further development. Christof Wollenhaupt will play a key, leading role in the further development. Licenses can also be bought through Christof Wollenhaupt effective immediately; Christof Wollenhaupt is responsible for the online sales channel, which is currently being set up. Should an AFP server need to be activated, all existing license holders can also contact Christof Wollenhaupt directly. In the coming weeks we will get AFP back up to speed. An updated version, a new website, the activation server, an overview of the slightly changed licensing model, and much more is currently in the works. The future and the further development of AFP are now secured! Kind regards, Ralph-Norman von Loesch
    Source: http://forum.dfpug.de/bodyframe.afp?msgid=728069

    Read the article

  • Emit Knowledge - social network for knowledge sharing

    - by hajan
    Emit Knowledge, as the words suggest, is a social network for emitting / sharing knowledge from users, by users. Those who can benefit the most from this network are perhaps all of YOU who have something to share with others and contribute to the knowledge world. I've been closely communicating with the core team of this very, very interesting, brand new social network (with a specific purpose!) about the concept, idea and vision they have for their product, and I can say with a lot of confidence that this network has real potential to become something from which we will all benefit. I won't speak much about that and would prefer to give you the link so you can try it yourself - http://www.emitknowledge.com. Mainly, over the past few months I've been testing this network and it is getting improved all the time. The user experience is great, you can easily find what you need, and it follows some known patterns that are common to all social networks. They have some really good ideas and plans that are already under development for the next updates of their product. You can do micro blogging or you can do regular, normal blogging… it's up to you, and the way it works is seamless. Here is a short Question and Answers (QA) interview I did with the lead of the team, Marijan Nikolovski:
    1. Can you please explain to us briefly, what is Emit Knowledge? Emit Knowledge is a brand new knowledge-based social network, delivering quality content from users to users. We believe that people's knowledge, experience and professional thoughts compose quality content, worth sharing among millions around the world. Therefore, we created the platform that matches people's need to share and gain knowledge in the most suitable and comfortable way. Easy to work with, Emit Knowledge lets you smoothly craft and emit knowledge around the globe.
    2. How 'old' is Emit Knowledge? In hamster years we are an almost five-year-old start-up :). Just kidding. We released our public beta about three months ago. Our official release date is 27 June 2012.
    3. How did you come up with this idea? Everything started from a simple idea to solve a complex problem. We've seen that the social web has become polluted with data and is on the right track to lose its base principles – socialization and common cause. That was our starting point. We gathered the team, drew some sketches and started to mind map the idea. After several idea refactorings Emit Knowledge was born.
    4. Is there any competition out there in the market? Currently we don't have any competitors that share the same cause. What makes our platform different is the ideology that our product promotes and the functionalities that our platform offers for easy socialization based on interests and knowledge sharing.
    5. What are the main technologies used to build Emit Knowledge? Emit Knowledge was built on a heterogeneous palette of technologies. Currently, we have four layers of separation: UI – built on ASP.NET MVC3 and Knockout.js; Messaging infrastructure – built on top of RabbitMQ; Background services – our in-house solution for job distribution, orchestration and processing; Data storage – built on top of MongoDB.
    6. What are the main reasons for choosing ASP.NET MVC? Since all of our team members are .NET engineers, the decision was very natural. ASP.NET MVC is the only Microsoft web stack that sticks to the HTTP behavioral standards. It is easy to work with, has a tiny learning curve, and everyone who is familiar with HTTP will understand its architecture and conventions without any difficulties.
    7. Did you use some of the latest Microsoft technologies? If yes, which ones? Yes, we like to rock the cutting-edge tech house. Currently we are using Microsoft's latest technologies like ASP.NET MVC, Web API (work in progress) and, the best for last, we are utilizing Windows Azure IaaS to the bone.
    8. Can you please tell us shortly, what would be the benefit for regular bloggers on other blogging platforms of joining Emit Knowledge? Well, unless you are one of the smoking ace gurus whose blogs are followed by a large number of users, our platform offers a knowledge-based, segregated community equipped with tools that will enable both current and future users to expand their relations and to self-promote in the community based on their activity and knowledge sharing.
    10. I see you are working very intensively and there is already integration with some third-party services to make the process of sharing and emitting knowledge easier; which services have you integrated so far, and what do you plan to do next? We have "reemit" functionality for internal sharing and we also support external services like Twitter, LinkedIn and Facebook. For the regular bloggers we have an extra cream: Windows Live Writer support for easy blog post emitting.
    11. What should we expect next? Currently, we are working on a new fancy community feature. This means that we are going to support user groups being formed. So for all existing communities and user groups out there, wait for us a little bit, we are coming to the rescue :).
    One of the top features they are developing next is the Community feature. It means that if you have your own User Group, Community Group or any other group in which you and your users are mostly blogging or sharing (emitting) knowledge in various ways, Emit Knowledge as a platform will help you have everything you need to promote your group, gain new followers and host all the necessary stuff that you need. I would invite you to try the network and start sharing knowledge in a way that will help you gather new followers and spread your knowledge faster, easier and in a more efficient way! Let's Emit Knowledge!

    Read the article

  • Newest version available for download: APEX 4.2 is here!

    - by britta wolf
    APEX 4.2 has been available for download since 12 October 2012. Install it quickly... and then try out the new features right away, for example the simple, declarative creation of APEX applications for mobile devices, or HTML5 charts. But there are further innovations beyond that as well: for example, the Excel upload for end users has been improved, and you can now place 200 (instead of 100) items on a page.

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic. My Approach to Data Mining Projects It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer’s data in advance. I don’t know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest to start with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer’s site – either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question as well as the meaning of data at hand, while the IT expert should be familiar with the structure of data, how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert the most time consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project. If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach as it offers the establishment of a common background for all people involved, the understanding of how the algorithms work and the understanding of how the results should be interpreted, a way of becoming familiar with the SQL Server suite, and more. Once the data has been extracted, the customer’s SME (i.e. the analyst), and the IT expert assigned to the project will learn how to prepare the data in an efficient manner. Together with me, knowledge and expertise allow us to focus immediately on the most interesting attributes and identify any additional, calculated, ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of input data. I favor the customer’s decision of assigning additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems. Simultaneously with the data preparation the data overview is performed. The logic behind this task is the same – again I show to the teams involved the expected results, how to achieve them and what they mean. This is also done in multiple cycles as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. A specific objective of the data overview is of principal importance – it is represented by a simple star schema and a simple OLAP cube that will first of all simplify data discovery and interpretation of the results, and will also prove useful in the following tasks. 
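    To make the data preparation step a little more concrete: the kind of derived variables and outlier checks described above can be prototyped very quickly. The following is only an illustrative sketch in Python/pandas; the column names Amount, CustomerID and TransactionDate are hypothetical and not taken from any real customer data set:

      import numpy as np
      import pandas as pd

      # Illustrative sketch only: a hypothetical transaction table with columns
      # CustomerID, TransactionDate and Amount (assumed names, not from a real project).
      df = pd.read_csv("transactions.csv", parse_dates=["TransactionDate"])

      # A few derived variables of the kind discussed above.
      df["Hour"] = df["TransactionDate"].dt.hour
      df["IsWeekend"] = df["TransactionDate"].dt.dayofweek >= 5
      df["AmountLog"] = np.log1p(df["Amount"].clip(lower=0))

      # Simple per-customer outlier flag: amounts more than 3 standard deviations
      # away from that customer's own mean.
      stats = df.groupby("CustomerID")["Amount"].agg(["mean", "std"]).rename(
          columns={"mean": "AmtMean", "std": "AmtStd"})
      df = df.join(stats, on="CustomerID")
      df["IsOutlier"] = (df["Amount"] - df["AmtMean"]).abs() > 3 * df["AmtStd"].fillna(0)

      # Quick overview: pairwise correlations between numeric input variables,
      # and the share of transactions flagged as outliers.
      print(df[["Amount", "AmountLog", "Hour"]].corr())
      print(df["IsOutlier"].mean())

    In the actual POC this work is of course done with Transact-SQL and the SQL Server tooling, with the customer's SME confirming which derived attributes make business sense; the sketch is only meant to show how little code the first pass at derived variables and outliers requires.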
    The presence of the customer's SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of project without the customer's SME.

    After the data preparation, and once the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much better option is available if the initial training was performed, because it allows the customer's personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately, using several different techniques. One of these techniques is evaluation over time, for which we use an OLAP cube. After evaluating the models, we select the most appropriate one to deploy for a production test; this allows the team to understand the deployment process. There are many ways of deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers.

    Even from a POC, the customer receives a lot of benefits, all at the sole risk of spending money and time on a single 5 to 10 day project:
    - The customer learns the basic patterns of fraud and of fraud detection
    - The customer learns how to do the entire cycle with their own people, relying on me only for the most complex problems
    - The customer's analysts learn how to perform much more in-depth analyses than they ever thought possible
    - The customer's IT experts learn how to perform data extraction and preparation much more efficiently than before
    - All of the attendees of the training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production
    - The POC output for a smaller company, or for a subsidiary of a larger company, can actually be considered a finished, production-ready solution
    - It is possible to utilize the results of the POC project at subsidiary level as a finished POC project for the entire enterprise
    - Typically, the project also produces several important "side effects": improved data quality; improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization; and, because more minds across the enterprise eventually get involved, more and better fraud detection patterns over time

    After the POC project is completed as described above, the actual project does not need months of engagement from my side. This is possible because of our preference to transfer the knowledge to the customer's employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly.
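    The post evaluates its models with the SQL Server data mining tooling and an OLAP cube; purely as an illustration of the "evaluation over time" idea, here is a hedged sketch in Python/scikit-learn that trains on an earlier period, scores a later hold-out period, and tracks precision and recall month by month. The file, feature names, label column, cut-off date and choice of classifier are all assumptions for the example, not the method used in the project.

        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import precision_score, recall_score

        # Hypothetical prepared data set with derived variables and a known
        # fraud label; split chronologically so the test period follows training.
        data = pd.read_csv("prepared_cases.csv", parse_dates=["timestamp"])
        train = data[data["timestamp"] < "2012-01-01"]
        test = data[data["timestamp"] >= "2012-01-01"]
        features = ["amount_vs_cust_mean", "hour_of_day", "z_score"]

        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(train[features], train["is_fraud"])
        test = test.assign(predicted=model.predict(test[features]))

        # Evaluation over time: if quality degrades in later months, the model
        # needs refreshing - the same question the OLAP cube answers in the post.
        for month, grp in test.groupby(test["timestamp"].dt.to_period("M")):
            p = precision_score(grp["is_fraud"], grp["predicted"], zero_division=0)
            r = recall_score(grp["is_fraud"], grp["predicted"], zero_division=0)
            print(month, "precision=%.2f" % p, "recall=%.2f" % r)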
    I usually expect to perform the following tasks:
    - Establish the final infrastructure to measure the efficiency of the deployed models
    - Deploy the models in additional scenarios:
      - through reports
      - by including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings
    - Include data mining models as dimensions in OLAP cubes, if this was not already done during the POC project
    - Create smart ETL applications that divert suspicious data for immediate or later inspection
    - Investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary while a central system is also available
    - Repeat the data and model preparation as needed for the actual project

    It is virtually impossible to tell in advance how much time the deployment will take before we decide, together with the customer, what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still take only an additional 5 to 10 days.

    The approximate timeline for the POC project is as follows:
    - 1-2 days of training
    - 2-3 days for data preparation and data overview
    - 2 days for creating and evaluating the models
    - 1 day for initial preparation of the continuous learning infrastructure
    - 1 day for presentation of the results and discussion of further actions

    Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we spent just one more hour on data preparation, or created just one more model, we would get better patterns and predictions. However, we simply must stop somewhere, and the best way to do this, in my experience, is to restrict the time spent on the project in advance, in agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and of improving the models and predictions over time, without the need for constant engagement with me.
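    One of the deployment options listed above is a "smart ETL" application that diverts suspicious data for immediate or later inspection. In the project itself this would be done with DMX queries or SSIS against the deployed mining model; the sketch below only illustrates the routing idea in Python, with an invented score threshold and invented table names.

        import pandas as pd

        SUSPICION_THRESHOLD = 0.8  # assumed cut-off; in practice agreed with the SME

        def route_batch(batch: pd.DataFrame, model, features):
            """Split an incoming batch into rows to load normally and rows to
            divert to a review queue, based on the model's fraud probability."""
            scores = model.predict_proba(batch[features])[:, 1]
            batch = batch.assign(fraud_score=scores)
            suspicious = batch[batch["fraud_score"] >= SUSPICION_THRESHOLD]
            normal = batch[batch["fraud_score"] < SUSPICION_THRESHOLD]
            return normal, suspicious

        # Example usage inside an ETL step (table names and connection are assumptions):
        # normal, suspicious = route_batch(new_rows, model, features)
        # normal.to_sql("fact_transactions", conn, if_exists="append", index=False)
        # suspicious.to_sql("fraud_review_queue", conn, if_exists="append", index=False)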

    Read the article

  • Ati Radeon HD 3200 Graphics driver - Installation Problem

    - by samufi
    I have a fresh installation of Ubuntu 12.04 x86 and I am trying to install the proprietary driver for my "Radeon HD 3200 Graphics" video card. I know that there are already many threads about this topic, but I did not find a solution for my problem. For the installation I followed exactly these instructions: What is the correct way to install ATI Catalyst Video Drivers in 12.04 LTS? During the process I faced these problems:

    I executed
    ~$ debconf libstdc++6 dkms libqtgui4 wget execstack libelfg0 dh-modaliases
    and got:
    debconf: DbDriver "passwords" warning: could not open /var/cache/debconf/passwords.dat: Keine Berechtigung
    Can't exec "libstdc++6": Datei oder Verzeichnis nicht gefunden at /usr/share/perl/5.14/IPC/Open3.pm line 186.
    open2: exec of libstdc++6 dkms libqtgui4 wget execstack libelfg0 dh-modaliases failed at /usr/share/perl5/Debconf/ConfModule.pm line 59
    (translation of the German parts: "Keine Berechtigung" means "no permission"; "Datei oder Verzeichnis nicht gefunden" means "file or folder not found")

    Because I had no idea whether it was a big issue, I just continued:
    ~$ sudo apt-get install ia32-libs
    There I got:
    Paketlisten werden gelesen... Fertig
    Abhängigkeitsbaum wird aufgebaut
    Statusinformationen werden eingelesen... Fertig
    Paket ia32-libs ist nicht verfügbar, wird aber von einem anderen Paket referenziert. Das kann heißen, dass das Paket fehlt, dass es abgelöst wurde oder nur aus einer anderen Quelle verfügbar ist.
    E: Paket »ia32-libs« hat keinen Installationskandidaten
    (Translation: [...] the package ia32-libs is not available but is referenced by another package [...] E: package »ia32-libs« has no installation candidate)

    Once more I went on. The next steps worked quite fine. But when I came to the point
    ~$ sudo dpkg -i *.deb
    I got a popup message, something like there was a problem with a system application, but no errors were reported in the terminal and the packages seemed to be installed. So now the ATI Catalyst Control Center (amdcccle) works, but fglrxinfo gave me:
    X Error of failed request: BadRequest (invalid request code or no such operation)
    Major opcode of failed request: 139 (ATIFGLEXTENSION)
    Minor opcode of failed request: 66 ()
    Serial number of failed request: 13
    Current serial number in output stream: 13

    So there is something wrong. (Also, there is no possibility to enable those nice graphical features, which is the reason why I installed the proprietary driver.) Because I am working with a completely fresh installation, I don't know how to fix the problem. If anybody could help I would be very thankful! =)

    Read the article

  • Going up, please. With the Oracle Communities.

    - by A&C Redaktion
    Imagine meeting a familiar and trusted business partner in the elevator. In theory, you only have the ride from the ground floor to the third floor to explain an important matter. This is what is known as an "elevator pitch". Anwar Behan and Bernhard Adelmann have re-enacted this popular scenario for you and use it to explain, in an entertaining way, the added value the Oracle Communities offer Oracle partners. With the Oracle Communities, partners are on their way up: click here for the video.

    Read the article

  • MySQL in the 1-Click Program

    - by swalker
    Silver-level OPN partners, as well as remarketers that handle transactions through authorized Remarketer VADs, can now resell subscriptions for MySQL Standard Edition and Enterprise Edition through the 1-Click program. Silver OPN members can also resell perpetual licenses for MySQL SE and EE. You can find the latest information under Oracle 1-Click Technology for midsize businesses.

    Read the article

  • Can you help me fix my broken packages?

    - by Andreas Hartmann
    I would like to upgrade from 13.04 to 13.10, but some broken packages are preventing upgrade success: grep Broken /var/log/dist-upgrade/apt.log output: Broken libwayland-client0:amd64 Conflicts on libwayland0 [ amd64 ] < 1.0.5-0ubuntu1 > ( libs ) (< 1.1.0) Broken libunity9:amd64 Breaks on unity-common [ amd64 ] < 7.0.0daily13.06.19~13.04-0ubuntu1 > ( gnome ) (< 7.1.2) Broken cups-filters:amd64 Conflicts on ghostscript-cups [ amd64 ] < 9.07~dfsg2-0ubuntu3.1 > ( text ) Broken libpam-systemd:amd64 Conflicts on libpam-xdg-support [ amd64 ] < 0.2-0ubuntu2 > ( admin ) Broken libharfbuzz0a:amd64 Breaks on libharfbuzz0 [ amd64 ] < 0.9.13-1 > ( libs ) Broken libharfbuzz0a:amd64 Breaks on libharfbuzz0 [ i386 ] < 0.9.13-1 > ( libs ) Broken libunity-scopes-json-def-desktop:amd64 Conflicts on libunity-common [ amd64 ] < 6.90.2daily13.04.05-0ubuntu1 > ( gnome ) (< 7.0.7) Broken libunity-scopes-json-def-desktop:amd64 Conflicts on libunity-common [ i386 ] < none > ( none ) (< 7.0.7) Broken libaccount-plugin-generic-oauth:amd64 Conflicts on account-plugin-generic-oauth [ amd64 ] < 0.10bzr13.03.26-0ubuntu1.1 > ( gnome ) (< 0.10bzr13.04.30) Broken libaccount-plugin-generic-oauth:amd64 Breaks on account-plugin-generic-oauth [ amd64 ] < 0.10bzr13.03.26-0ubuntu1.1 > ( gnome ) (< 0.10bzr13.04.30) Broken libmutter0b:amd64 Breaks on libmutter0a [ amd64 ] < 3.6.3-0ubuntu2 > ( libs ) Broken python3-aptdaemon.pkcompat:amd64 Breaks on libpackagekit-glib2-14 [ amd64 ] < 0.7.6-3ubuntu1 > ( libs ) (<= 0.7.6-4) Broken apache2:amd64 Conflicts on apache2.2-common [ amd64 ] < 2.2.22-6ubuntu5.1 > ( httpd ) Broken chromium-codecs-ffmpeg-extra:amd64 Conflicts on chromium-codecs-ffmpeg [ amd64 ] < 28.0.1500.71-0ubuntu1.13.04.1 -> 29.0.1547.65-0ubuntu2 > ( universe/web ) Broken unity-scope-home:amd64 Conflicts on unity-lens-shopping [ amd64 ] < 6.8.0daily13.03.04-0ubuntu1 > ( gnome ) Broken libsnmp30:amd64 Breaks on libsnmp15 [ amd64 ] < 5.4.3~dfsg-2.7ubuntu1 > ( libs ) Broken apache2.2-bin:amd64 Breaks on gnome-user-share [ amd64 ] < 3.0.4-0ubuntu1 > ( gnome ) (< 3.8.0-2~) Broken libgjs0d:amd64 Conflicts on libgjs0c [ amd64 ] < 1.34.0-0ubuntu1 > ( libs ) Broken unity-gtk2-module:amd64 Conflicts on appmenu-gtk [ amd64 ] < 12.10.3daily13.04.03-0ubuntu1 > ( libs ) Broken lib32asound2:amd64 Depends on libasound2 [ amd64 ] < 1.0.25-4ubuntu3.1 -> 1.0.27.2-1ubuntu6 > ( libs ) (= 1.0.25-4ubuntu3.1) Broken unity-gtk3-module:amd64 Conflicts on appmenu-gtk3 [ amd64 ] < 12.10.3daily13.04.03-0ubuntu1 > ( libs ) Broken activity-log-manager:amd64 Conflicts on activity-log-manager-common [ amd64 ] < 0.9.4-0ubuntu6.2 > ( utils ) Broken libgtksourceview-3.0-0:amd64 Depends on libgtksourceview-3.0-common [ amd64 ] < 3.6.3-0ubuntu1 -> 3.8.2-0ubuntu1 > ( libs ) (< 3.7) Broken icaclient:amd64 Depends on lib32asound2 [ amd64 ] < 1.0.25-4ubuntu3.1 > ( libs ) Broken libunity-core-6.0-5:amd64 Depends on unity-services [ amd64 ] < 7.0.0daily13.06.19~13.04-0ubuntu1 -> 7.1.2+13.10.20131014.1-0ubuntu1 > ( gnome ) (= 7.0.0daily13.06.19~13.04-0ubuntu1) Broken libbamf3-1:amd64 Depends on bamfdaemon [ amd64 ] < 0.4.0daily13.06.19~13.04-0ubuntu1 -> 0.5.1+13.10.20131011-0ubuntu1 > ( libs ) (= 0.4.0daily13.06.19~13.04-0ubuntu1) Broken apache2-bin:amd64 Conflicts on apache2.2-bin [ amd64 ] < 2.2.22-6ubuntu5.1 -> 2.4.6-2ubuntu2 > ( httpd ) (< 2.3~) Output for cat /etc/apt/sources.list /etc/apt/sources.list.d/*.list # deb cdrom:[Ubuntu 13.04 _Raring Ringtail_ - Release amd64 (20130424)]/ raring main restricted # See http://help.ubuntu.com/community/UpgradeNotes 
for how to upgrade to # newer versions of the distribution. deb http://de.archive.ubuntu.com/ubuntu/ raring main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://de.archive.ubuntu.com/ubuntu/ raring-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://de.archive.ubuntu.com/ubuntu/ raring universe deb http://de.archive.ubuntu.com/ubuntu/ raring-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://de.archive.ubuntu.com/ubuntu/ raring multiverse deb http://de.archive.ubuntu.com/ubuntu/ raring-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://security.ubuntu.com/ubuntu raring-security main restricted deb http://security.ubuntu.com/ubuntu raring-security universe deb http://security.ubuntu.com/ubuntu raring-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http://archive.canonical.com/ubuntu raring partner # deb-src http://archive.canonical.com/ubuntu raring partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http://extras.ubuntu.com/ubuntu raring main # deb-src http://extras.ubuntu.com/ubuntu raring main # deb http://linux.dropbox.com/ubuntu precise main output for sudo dpkg -l | grep -e "^iU" -e "^rc": rc ibm-lotus-cae 8.5.2-20100805.0821 i386 IBM Lotus Composite Application Editor rc ibm-lotus-cae-nl1 8.5.2-20100805.0821 i386 IBM Lotus CAE NL1 rc ibm-lotus-feedreader 8.5.2-20100805.0821 i386 Feeds for IBM Lotus Notes 8.5.2 rc ibm-lotus-feedreader-nl1 8.5.2-20100805.0821 i386 IBM Lotus Feed Reader NL1 rc ibm-lotus-notes 8.5.2-20100805.0821 i386 IBM Lotus Notes rc ibm-lotus-notes-core-de 8.5.2-20100805.0821 i386 IBM Lotus Notes Native German (de) rc ibm-lotus-notes-nl1 8.5.2-20100805.0821 i386 IBM Lotus Notes Java NL1 rc ibm-lotus-sametime 8.5.2-20100805.0821 i386 IBM Lotus Sametime rc ibm-lotus-symphony 8.5.2-20100805.0821 i386 IBM Lotus Symphony rc ibm-lotus-symphony-nl1 8.5.2-20100805.0821 i386 IBM Lotus Symphony NL1 rc libapache2-mod-php5filter 5.4.9-4ubuntu2.2 amd64 server-side, HTML-embedded scripting language (apache 2 filter module) rc libavcodec53:amd64 6:0.8.6-1ubuntu2 amd64 Libav codec library rc libavutil51:amd64 6:0.8.6-1ubuntu2 amd64 Libav utility library rc libmotif4:amd64 2.3.3-7ubuntu1 amd64 Open Motif - shared libraries rc linux-image-3.8.0-25-generic 3.8.0-25.37 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-extra-3.8.0-25-generic 3.8.0-25.37 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP

    Read the article

  • Five cool podcasts about development (or almost) (pt-BR)

    - by srecosta
    I ride the bus and subway a lot. If you do too, you know you end up developing techniques so you don't notice how much of your life you are wasting there, stuck in traffic. One of my favorite techniques is listening to podcasts. They are easy to download, most of them take good care of the audio, and before you know it you are home. I put together a list of five podcasts, which you can read at: http://www.srecosta.com/2012/09/13/cinco-podcasts-marotos-sobre-desenvolvimento-ou-quase/ Best regards, Eduardo Costa

    Read the article

  • Creating a Training Lab on Windows Azure

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/06/17/153149.aspx

    This week we are preparing for a training course that Alan Smith will be running for the support teams at one of my customers around Windows Azure. In order to facilitate the training lab we have a few prerequisites we need to handle. One of the biggest ones is that although the support team all have MSDN accounts, the local desktops they work on are not ideal for running most of the labs, as we want to give them some additional developer background training around Azure. Some recent Azure announcements really help us in this area:
    - MSDN software can now be used on Azure VMs
    - You don't pay for Azure VMs when they are no longer used

    Since the support team only have limited experience of Windows Azure and the organisation also has an Enterprise Agreement, we decided it would be best value for money to spin up a training lab in a subscription on the EA and then turn the machines off when we are done. At the same time we would be able to spin them back up when the users need to do some additional lab work once the training course is completed. In order to achieve this I wanted to create a PowerShell script which would set up my training lab. The aim was to create 18 VMs based on a prebuilt template with Visual Studio and the Azure development tools. The script I used is described below.

    The Start & Variables

    The below text will set up the PowerShell environment and some variables which I will use elsewhere in the script. It will also import the Azure PowerShell cmdlets. You can see below that I will need to download my publisher settings file and know some details from my Azure account. At this point I will assume you have a basic understanding of Azure and PowerShell, so already know how to do this.

        Set-ExecutionPolicy Unrestricted
        cls
        $startTime = get-date
        Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"

        # Azure Publisher Settings
        $azurePublisherSettings = '<Your settings file>.publishsettings'

        # Subscription Details
        $subscriptionName = "<Your subscription name>"
        $defaultStorageAccount = "<Your default storage account>"

        # Affinity Group Details
        $affinityGroup = '<Your affinity group>'
        $dataCenter = 'West Europe' # From Get-AzureLocation

        # VM Details
        $baseVMName = 'TRN'
        $adminUserName = '<Your admin username>'
        $password = '<Your admin password>'
        $size = 'Medium'
        $vmTemplate = '<The name of your VM template image>'
        $rdpFilePath = '<File path to save RDP files to>'
        $machineSettingsPath = '<File path to save machine info to>'

    Functions

    In the next section of the script I have some functions which are used to perform certain actions. The first is called CreateVM.
    This will do the following actions:
    - If the VM already exists it will be deleted
    - Create the cloud service
    - Create the VM from the template I have created
    - Add an endpoint so we can RDP to them all over the same port
    - Download the RDP file so there is a short cut the trainees can easily access the machine via
    - Write settings for the machine to a log file

        function CreateVM($machineNo)
        {
            # Specify a name for the new VM
            $machineName = "$baseVMName-$machineNo"
            Write-Host "Creating VM: $machineName"

            # Get the Azure VM Image
            $myImage = Get-AzureVMImage $vmTemplate

            # If the VM already exists delete and re-create it
            $existingVm = Get-AzureVM -Name $machineName -ServiceName $serviceName
            if($existingVm -ne $null)
            {
                Write-Host "VM already exists so deleting it"
                Remove-AzureVM -Name $machineName -ServiceName $serviceName
            }

            "Creating Service"
            $serviceName = "bupa-azure-train-$machineName"
            Remove-AzureService -Force -ServiceName $serviceName
            New-AzureService -Location $dataCenter -ServiceName $serviceName

            Write-Host "Creating VM: $machineName"
            New-AzureQuickVM -Windows -name $machineName -ServiceName $serviceName -ImageName $myImage.ImageName -InstanceSize $size -AdminUsername $adminUserName -Password $password

            Write-Host "Updating the RDP endpoint for $machineName"
            Get-AzureVM -name $machineName -ServiceName $serviceName `
                | Add-AzureEndpoint -Name RDP -Protocol TCP -LocalPort 3389 -PublicPort 550 `
                | Update-AzureVM

            Write-Host "Get the RDP File for machine $machineName"
            $machineRDPFilePath = "$rdpFilePath\$machineName.rdp"
            Get-AzureRemoteDesktopFile -name $machineName -ServiceName $serviceName -LocalPath "$machineRDPFilePath"

            WriteMachineSettings "$machineName" "$serviceName"
        }

    The delete machine settings function is used to delete the log file before we start re-running the process.

        function DeleteMachineSettings()
        {
            Write-Host "Deleting the machine settings output file"
            [System.IO.File]::Delete("$machineSettingsPath");
        }

    The write machine settings function will get the VM and then record its details to the log file. The importance of the log file is that I can easily provide the information for all of the VMs to our infrastructure team to be able to configure access to all of the VMs.

        function WriteMachineSettings([string]$vmName, [string]$vmServiceName)
        {
            Write-Host "Writing to the machine settings output file"

            $vm = Get-AzureVM -name $vmName -ServiceName $vmServiceName
            $vmEndpoint = Get-AzureEndpoint -VM $vm -Name RDP

            $sb = new-object System.Text.StringBuilder
            $sb.Append("Service Name: ");
            $sb.Append($vm.ServiceName);
            $sb.Append(", ");
            $sb.Append("VM: ");
            $sb.Append($vm.Name);
            $sb.Append(", ");
            $sb.Append("RDP Public Port: ");
            $sb.Append($vmEndpoint.Port);
            $sb.Append(", ");
            $sb.Append("Public DNS: ");
            $sb.Append($vmEndpoint.Vip);
            $sb.AppendLine("");
            [System.IO.File]::AppendAllText($machineSettingsPath, $sb.ToString());
        }
        # end functions

    Rest of Script

    In the rest of the script it is really just the bit that orchestrates the actions we want to happen.
    It will load the publisher settings, select the Azure subscription and then loop around the CreateVM function to create 16 VMs.

        Import-AzurePublishSettingsFile $azurePublisherSettings
        Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccount $defaultStorageAccount
        Select-AzureSubscription -SubscriptionName $subscriptionName

        DeleteMachineSettings

        "Starting creating Bupa International Azure Training Lab"
        $numberOfVMs = 16

        for ($index=1; $index -le $numberOfVMs; $index++)
        {
            $vmNo = "$index"
            CreateVM($vmNo);
        }

        "Finished creating Bupa International Azure Training Lab"
        # Give it a Minute
        Start-Sleep -s 60

        $endTime = get-date
        "Script run time " + ($endTime - $startTime)

    Conclusion

    As you can see there is nothing too fancy about this script, but in our case of creating a small isolated training lab which is not connected to our corporate network we can easily use it to provision the lab. I'm sure if this is of use to anyone you can easily modify it to do other things with the lab environment too. A couple of points to note: there are some soft limits in Azure on the number of cores and services your subscription can use, and you may need to contact the Azure support team to increase them. In terms of the real business value of this approach, it was not possible to use the existing desktops to do the training on, and getting some internal virtual machines would have been relatively expensive and time consuming for our ops team to do. With the Azure option we are able to spin these machines up for a temporary period during the training course and then throw them away when we are done. We expect the cost of this test lab to be very small, especially considering we have EA pricing. As a ballpark I think my 18 lab VM training environment will cost in the region of $80 per day on our EA. This is a fraction of the cost of creating a single VM on premise.

    Read the article

  • Pillar Axiom 600 Playbook for Partners

    - by swalker
    Learn more about the product positioning and functionality of the Pillar Axiom 600. The Pillar Axiom 600 Playbook (sales script) supports you in selling, in identifying and qualifying sales opportunities, in developing sales scenarios, and in differentiating yourself from your competition. Prioritize the use of your resources and expand your offering with the OPN Specialized options available to your company.

    Read the article

  • Developer Day @ OOP 2011 with SOA Specialized Partners

    - by Jürgen Kress
    Oracle SOA Specialized Partners like Opitz Consulting participate in our key marketing events. Therefore make sure that you start your journey to SOA Specialization!

    ORACLE Developer Day at OOP: Discover the possibilities and power of Java technology, including live hacking with special guest Java guru Adam Bien! Enterprise applications made easy: accelerate your development with Java. Come to ORACLE's free full-day workshop at OOP and get to know the power of Java. Learn more about the Java strategy and the product roadmap, the use cases that Java SE for Embedded opens up, and how an SOA and BPM solution can be implemented on the basis of Java. The many improvements in Java EE 6 make developers' lives considerably easier. Do you already know the potential of Java EE 6? Adam Bien's live hacking will blow you away. Torsten Winterberg, Oracle Fusion Middleware ACE Director, and Danilo Schmiedel will show how Java developers can integrate the Oracle SOA & BPM solutions. In the afternoon you can try out the Java Persistence API, Java Beans, CDI and other technologies in a hands-on session on your own laptop. In this free Oracle workshop you can exchange ideas with like-minded people, see the latest technology demonstrated by Oracle experts, and take part in practical programming exercises.

    This event is right for you if you want to know more about the current status of the Java roadmap, learn more about Java technologies and solutions (Java SE, ME, etc.), try out the Java EE platform, understand the advantages of Java EE 6 for your work, want to scale up to an enterprise landscape, build front ends with JavaServer Faces, or are planning or just starting new development projects. Register now!

    ICM - Internationales Congress Center München, Am Messesee, Trudering-Riem, 81829 München
    27 January 2011, 9:00 - 16:30

    For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required). Technorati Tags: OOP,Adam Bien,Torsten Winterberg,Opitz Consulting,Oracle,SOA,SOA Specialization,OPN

    Read the article

  • DOAG Conference 2011: Seven Flavors of Database Upgrades

    - by Mike Dietrich
    Thanks to everybody who attended my DOAG Conference session in Nürnberg this year, "Seven Flavors of Database Upgrades" (or in German: "7 Wege zum Datenbank-Upgrade - Geschichten, die das Leben schrieb"), and thanks for your patience in staying with me into overtime as well. In case you'd like to download the slides I presented at the session, please download them via this link or from the download section to your right.

    Read the article

  • Invitation to FraOSUG on 17 April 2012 (20th meeting)

    - by uligraef
    The 20th FraOSUG meeting will take place on 17 April 2012.
    When? 17 April 2012, 18:00 - approx. 21:00
    Where? Commerzbank AG, DLZ5/PHH, Hafenstraße 51, Frankfurt
    Agenda:
    - Darwin Calendar Server on OpenIndiana
    - Illumos and OpenIndiana news
    - SoftWORM with ZFS (part 3)
    - Discussion
    More details and directions: http://www.fraosug.de Registration via: http://www.doodle.com/ugsbaxxrunkbun66

    Read the article

  • Succeeding through Reference Selling

    - by A&C Redaktion
    References are an excellent way to demonstrate the reliability of partner solutions based on Oracle technologies, because they reflect satisfied customers. They serve as best practices and thus have a positive influence on the purchasing decisions of new customers. Iris Musiol, Customer Reference Manager DACH, explains the Oracle reference program for partners, including its benefits, contents and prerequisites.

    Read the article

  • Webcast Q&A: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter

    - by kellsey.ruppel
    Last Thursday we had the second webcast in our WebCenter in Action webcast series, "Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter," where customer Michael Chander from Qualcomm and Vince Casarez & Gourav Goyal from Oracle Partner Keste shared how Oracle WebCenter is powering Qualcomm's externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.

    Mike Chandler, Qualcomm

    Q: Did you run into any issues when integrating all of the different applications together?
    A: Definitely. Our main challenges were in the area of user provisioning and security propagation, all the standard stuff you might expect when hooking up SSO for authentication and authorization. In addition, we spent several iterations getting the UIs in sync. While everyone was given the same digital material to build to, each team interpreted and implemented it in their own way. Initially, as a user navigated, if you were looking for it, you could see slight variations in color or font or width, stuff like that. So we had to pull all the developers responsible for the UI together and get pixel-level agreement on a lot of things so we could ensure seamless transitions across applications.

    Q: What has been the biggest benefit your end users have seen?
    A: Wow, there have been several. An SSO-enabled environment was a huge win for our users. The portal application that this replaced had not really been invested in by the business. With this project, we had full business participation and backing, and it really showed in some key areas like the shopping experience. For example, while ordering in the previous site, the items did not have any pictures or really usable descriptions. A tremendous amount of work was done to try and make the site more intuitive and user friendly. Site performance has also drastically improved thanks to new hardware, improved database design, and of course the fact that ADF has made great strides in runtime performance.

    Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
    A: Within a large company, I'm sure there is always going to be competition for large projects, as there was here. Once we got through the technical analysis and settled on the technology choices, there was actually no resistance to implementing the solution. This project was fully driven by the business with the aim of long-term growth. I can confidently say that the fact that this project was given the utmost importance by both the business and IT really helped put down any resistance that you would typically see while implementing a new solution.

    Q: Given the performance, what do you estimate to be the top end capacity of the system?
    A: I think our top end capacity is really only limited by our hardware. I'm comfortable saying we could grow 10x on our current hardware, both in terms of transactions and users. We can easily spin up new JVM instances if needed. We already use fewer JVMs than we had planned. In addition, ADF is doing a very good job with its connection pooling and application module pooling, so we see a very good ratio of users connected to the systems vs. database connections, without impacting performance.

    Q: What's the overview or summary of feedback from the users interacting with the site?
    A: Feedback has been overwhelmingly positive from both the business and our customers. They're very happy with the new SSO environment, the new LAF, and the performance of the site.
    Of course, it's not all roses. No matter what, there are always going to be people who don't like the layout or the color scheme, etc. By and large, though, customers are happy and the business is happy.

    Q: Can you describe the impressions of the site before and after the project within Qualcomm?
    A: Before the project, the site worked and people were using it, but most people were not happy with it. It was slow and tended to be a bit temperamental; for example, a user would perform a transaction and the system would throw an unexpected error. The user could back up and retry the steps and things would work fine, so why didn't it work the first time? From a UI perspective, we'd hear comments like it looked like it was built by a high school student.

    Vince Casarez & Gourav Goyal, Keste

    Q: Did you run into any obstacles when implementing the solution?
    A: It's interesting, some people call them "obstacles"; on this project we just called them "dependencies". There were both technical and business-related dependencies that we had to work out. Mike points out the SSO dependencies and the coordination and synchronization between the teams to have a seamless login experience and a seamless end user experience. There was also a set of dependencies on the user acceptance testing to make sure that everyone understood the use cases for how the system would be used. With branching into a new market and trying to match the simple user experience many consumer sites have today, there was always a tendency for the team members to provide their suggestions on how things could be simpler. But with all the work up front on the user design and getting the business driving this set of experiences, the downstream suggestions that tend to distract a team were minimized. In this case, all the work up front allowed us to enumerate the "dependencies" and keep the distractions to a minimum.

    Q: Was there a lot of custom work that needed to be done for this particular solution?
    A: The focus for this particular solution was really on the custom processes. The interesting thing is that with the data flows and the integration with applications there are some pre-built integrations, but realistically, for the process flows, we had to build those. The framework and tooling we used made things easier, so we didn't have to implement core functionality like transitioning from screen to screen or from flow to flow. The design feature of Task Flows really helped speed the development and keep the component infrastructure in line with the dynamic processes. Task Flows and other elements like Skins are core to the infrastructure or technology stack of Oracle. This allowed the team to center the project focus around the business flows and use cases, meet the core requirements, and keep the project on time.

    Q: What do you think were the keys to success for rolling out WebCenter?
    A: The 5 main keys to success were: 1) sponsorship from the whole organization around this project, from senior executive agreement, to business owners driving functionality, to IT development alignment; 2) upfront design planning and use case definition to clearly define the project scope and requirements; 3) focused development and project management aligned with the top-level goals and drivers; 4) user acceptance and usability testing along the way to identify potential issues and direct resolution of the issues; and 5) constant prioritization of the issues for development to fix by the business.
    It also helps to have great team chemistry and really smart people working on the project.

    If you missed the webcast, be sure to catch the replay, "Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter," to see a live demonstration of WebCenter in action!

    Read the article

< Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >