Search Results

Search found 5568 results on 223 pages for 'dependency analysis'.


  • Java EE 6 Pocket Guide from O'Reilly - Now Available in Paperback and Kindle Edition

    - by arungupta
    Hot off the press ... Java EE 6 Pocket Guide from O'Reilly Media is now available in Paperback and Kindle Edition. Here are the book details: Release Date: Sep 21, 2012; Language: English; Pages: 208; Print ISBN: 978-1-4493-3668-4 | ISBN 10: 1-4493-3668-X; Ebook ISBN: 978-1-4493-3667-7 | ISBN 10: 1-4493-3667-1.

    The book provides a comprehensive summary of the Java EE 6 platform. The main features of the different technologies in the platform are explained and accompanied by tons of samples. A chapter is dedicated to each of Managed Beans, Servlets, Java Persistence API, Enterprise JavaBeans, Contexts and Dependency Injection, JavaServer Faces, SOAP-Based Web Services, RESTful Web Services, Java Message Service, and Bean Validation, in that order.

    Many thanks to Markus Eisele, John Yeary, and Bert Ertman for reviewing and providing valuable comments. This book would not have been possible without their extensive feedback! The book was mostly written by compiling my blogs, material from 2-day workshops, and several hands-on workshops around the world. The interactions with users of different technologies and whiteboard discussions with different specification leads helped me understand the technology better. Many thanks to them for helping me be a better user! The long international flights during my travel around the world proved extremely useful for authoring the content. No phone, no email, no IM, food served on the table, power outlet = a perfect recipe for authoring ;-)

    Markus wrote a detailed review of the book. He was one of the manuscript reviewers as well and provided valuable guidance. Some excerpts from his blog: It covers the basics you need to know of Java EE 6 and gives good examples of all relevant parts. ... This is a pocket guide which is comprehensively written. I could follow all examples and it was a good read overall. No complicated constructs and clear writing. ... GO GET IT! It is the only book you probably will need about Java EE 6! It is comprehensive, wonderfully written and covers everything you need in your daily work. It is not a complete reference but provides a great shortcut to the things you need to know. To me it is a good beginner's guide and also works as a companion for advanced users.

    Here is the first tweet feedback ... Jeff West was super prompt to place the first pre-order of my book, pretty much the hour it was announced. Thank you Jeff! @mike_neck posted the very first tweet about the book, thanks for that!

    The book is now available in Paperback and Kindle Edition from the following websites: O'Reilly Media (Ebook, Print & Ebook, Print), Amazon.com (Kindle Edition and Paperback), Barnes and Noble, Overstock (1% off Amazon), Buy.com, Booktopia.com, Tower Books, Angus & Robertson, Shopping.com.

    Here is how I can use your help: help spread the word about the book. If you bought a Paperback or downloaded the Kindle Edition, then post your review here. If you have not bought it, then you can buy it at amazon.com or at the multiple other websites mentioned above. If you are coming to JavaOne, you'll have an opportunity to get a free copy at O'Reilly's booth on Monday (October 1) from 2-3pm. And you can always buy it from the JavaOne Bookstore.

    I hope you enjoy reading it and learn something new from it or hone your existing skills. As always, looking forward to your feedback!

    Read the article

  • Why you need to tag your build servers in TFS

    - by Martin Hinshelwood
    At SSW we use gated check-in for all of our projects. The benefits depend on the number of developers you have working on your project. Let's say you have 30 developers and each developer breaks the build once per month. That could mean a broken build every day! Gated check-ins help, but they have a downside that manifests as queued builds and moaning developers. The way to combat this is to have more build servers, but with that comes complexity. Inevitably you will need to install components that you would expect to be installed on target computers, but how do you keep track of which build servers have which bits? What about a geographically diverse team? If you have a centrally controlled infrastructure you might have build servers in multiple regions, and you don't want teams in Sydney copying files from Beijing and vice versa on a regular basis. So, what is the answer? It's tags. You can add a set of tags to your agents and then set which tags to look for in the build definition.

    Figure: Open up your Build Controller Manager

    Select "Build | Manage Build Controllers…" to get a list of all of your controllers and the build agents that are associated with them.

    Figure: The list of build agents and their controllers

    Each of these agents might be subtly different. For example, only one of these agents has FTP software installed. This software is required for only one of the many builds we have set up. My ethos for build servers is to keep them as clean as possible and not to install anything that is not absolutely necessary. For me that means anything that does not add a *.targets file is suspect, and should really be under version control and called via the command line from there. So, some of the things you may install are: Silverlight 4 SDK, Visual Studio 2010, Visual Studio 2008, WIX, etc. You should not install things that will not end up on the target user's computer. For a website that means something different than for a client application or a server, but I am sure you get the idea.

    One thing you can do to make things easier is to create a tag for each of the things that you install. That way developers can find the things they need. We may change to using a more generic tagging structure (like "Web Application" or "WinForms Application") if this gets too unwieldy, but for now the list of tags is limited.

    Figure: Tags associated with one of our build agents

    Once you have your build agents all tagged up, ALL your builds will start to fail. This is because the default setting for a build is to look for an agent that exactly matches the tags for the build, and we have not added any yet. The quick way to fix this is to change the "Tag Comparison Operator" from "ExactMatch" to "MatchAtLeast" to get your builds immediately working.

    Figure: Tag Comparison Operator changed to MatchAtLeast to get builds to run.

    The next thing to do is look for specific tags. You just select from the list of available tags and the controller will make sure you get a build agent that has them.

    Figure: I want Silverlight, VS2010 and WIX, but do not care about location.

    And there you go: you can now have build agents for different purposes and regions within the same environment. You can also use name filtering, so if you have a good agent naming convention you can filter by that for regions. For example, your agents might be "SYDVMAPTFSBP01" and "SYDVMAPTFSBP02", so a name filter of "SYD*" would target all of the Sydney build agents.

    Figure: Agent names can be used for filtering as well

    This flexibility will allow you to build better software by reducing the likelihood of missing a certain dependency on the target machines.

    Figure: Setting the name filter based on server location

    Used in combination, there is a lot of power here to coordinate tens of build servers for multiple projects across multiple regions, so your developers get the most out of your environment.

    Technorati Tags: ALM,TFBS,TFS 2010,TFS Admin

    Read the article

  • Application Performance Episode 2: Announcing the Judges!

    - by Michaela Murray
    The story so far… We're writing a new book for ASP.NET developers, and we want you to be a part of it! If you work with ASP.NET applications and have top tips, hard-won lessons, or sage advice for avoiding, finding, and fixing performance problems, we want to hear from you! And if your app uses SQL Server, even better – interaction with the database is critical to application performance, so we're looking for database top tips too. There's a Microsoft Surface apiece for the person who comes up with the best tip for SQL Server and the best tip for .NET. Of course, if your suggestion is selected for the book, you'll get full credit, by name, Twitter handle, GitHub repository, or whatever you like. To get involved, just email your nuggets of performance wisdom to [email protected] – there are examples of what we're looking for and full competition details at Application Performance: The Best of the Web.

    Enter the judges… As mentioned in my last blogpost, we have a mystery panel of celebrity judges lined up to select the prize-winning performance pointers. We're now ready to reveal their secret identities! Judging your ASP.NET tips will be:

    Jean-Philippe Gouigoux, MCTS/MCPD Enterprise Architect and MVP Connected System Developer. He's a board member at French software company MGDIS, and teaches algorithms, security, software tests, and ALM at the Université de Bretagne Sud. Jean-Philippe also lectures at IT conferences and writes articles for programming magazines. His book Practical Performance Profiling is published by Simple-Talk.

    Nik Molnar, a New Yorker, ASP Insider, and co-founder of Glimpse, an open source ASP.NET diagnostics and debugging tool. Originally from Florida, Nik specializes in web development, building scalable, client-centric solutions. In his spare time, Nik can be found cooking up a storm in the kitchen, hanging with his wife, speaking at conferences, and working on other open source projects.

    Mitchel Sellers, Microsoft C# and DotNetNuke MVP. Mitchel is an experienced software architect, business leader, public speaker, and educator. He works with companies across the globe as CEO of IowaComputerGurus Inc. Mitchel writes technical articles for online and print publications and is the author of Professional DotNetNuke Module Programming. He frequently answers questions on StackOverflow and MSDN and is an active participant in the .NET and DotNetNuke communities.

    Clive Tong, Software Engineer at Red Gate. In previous roles, Clive spent a lot of time working with Common LISP and enthusing about functional languages, and he's worked with managed languages since before his first real job (which was a long time ago). Long convinced of the productivity benefits of managed languages, Clive is very interested in getting good runtime performance to keep managed languages practical for real-world development.

    And our trio of SQL Server specialists, ready to select your top suggestion, are (drumroll):

    Rodney Landrum, a SQL Server MVP who writes regularly about Integration Services, Analysis Services, and Reporting Services. He's authored SQL Server Tacklebox and three Reporting Services books, and contributes regularly to SQLServerCentral, SQL Server Magazine, and Simple-Talk. His day job involves overseeing a large SQL Server infrastructure in Orlando.

    Grant Fritchey, Product Evangelist at Red Gate and SQL Server MVP. In an IT career spanning more than 20 years, Grant has written VB, VB.NET, C#, and Java. He's been working with SQL Server since version 6.0. Grant volunteers with the Editorial Committee at PASS and has written books for Apress and Simple-Talk.

    Jonathan Allen, leader and founder of the PASS SQL South West user group. He's been working with SQL Server since 1999 and enjoys performance tuning, development, and using SQL Server for business solutions. He's spoken at SQLBits and SQL in the City, as well as at local user groups across the UK. He's also a moderator at ask.sqlservercentral.com.

    Read the article

  • Inappropriate Updates?

    - by Tony Davis
    A recent Simple-Talk article by Kathi Kellenberger dissected the fastest SQL solution, submitted by Peter Larsson as part of Phil Factor's SQL Speed Phreak challenge, to the classic "running total" problem. In its analysis of the code, the article re-ignited a heated debate regarding the techniques that should, and should not, be deemed acceptable in your search for fast SQL code.

    Peter's code for the running total calculation uses a variation of a somewhat contentious technique, sometimes referred to as a "quirky update":

    SET @Subscribers = Subscribers = @Subscribers + PeopleJoined - PeopleLeft

    This form of the UPDATE statement, @variable = column = expression, is documented and allows you to set a variable to the value returned by the expression. Microsoft does not guarantee the order in which rows are updated in this technique because, in relational theory, a table doesn't have a natural order to its rows and the UPDATE statement has no means of specifying one. Traditionally, in cases where a specific order is required, such as for running aggregate calculations, programmers who used the technique relied on the fact that the UPDATE statement, without a WHERE clause, is executed in the order imposed by the clustered index, or in heap order if there isn't one. Peter wasn't satisfied with this, and so used the ingenious device of assuring the order of the UPDATE by the use of an "ordered CTE", based on an underlying temporary staging table (a heap). However, in either case, the ordering is still not guaranteed and, in addition, would break under parallelism or partitioning.

    Many argue, with validity, that this reliance on a given order where none can ever be guaranteed is an abuse of basic relational principles, and so is a bad practice; perhaps even irresponsible. More importantly, Microsoft doesn't wish to support the technique and offers no guarantee that it will always work. If you put it into production and it breaks in a later version, you can't file a bug. As such, many believe that the technique should never be tolerated in a production system, under any circumstances.

    Is this attitude justified? After all, both forms of the technique, using a clustered index to guarantee the order or using an ordered CTE, have been tested rigorously and have proven to be robust; although not guaranteed by Microsoft, the ordering is reliable, provided none of the conditions that are known to break it are violated. In Peter's particular case, the technique is applied to a temporary table, where the developer has full control of the data ordering and indexing, and knows that the table will never be subject to parallelism or partitioning (a minimal sketch of this form follows at the end of this post). It might be argued that, in such circumstances, the technique is not really "quirky" at all, and that to ban it from your systems would serve no real purpose other than to deprive yourself of a reliable technique whose uses extend well beyond running total calculations. Of course, it is doubly important that such a technique, including its unsupported status and the assumptions that underpin its success, is fully and clearly documented, preferably even when posting it online in a competition or forum post.

    Ultimately, however, this technique has been available to programmers throughout the time Sybase and SQL Server have existed, and so cannot be lightly cast aside, even if one sympathises with Microsoft for the awkwardness of maintaining an archaic way of doing updates. After all, a table hint could easily be devised that, if specified in the WITH (<Table_Hint_Limited>) clause, could be used to request that the database engine do the update in the conventional order. Then perhaps everyone would be satisfied.

    Cheers, Tony.
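    PS: For anyone who hasn't seen the technique end to end, here is a minimal sketch of the clustered-index form discussed above. It is not Peter's code; the table and column names are illustrative, and it relies on exactly the unsupported ordering assumptions described earlier:

    -- Quirky update running total: a sketch, NOT a supported pattern.
    -- Assumes: no parallelism, no partitioning, and that the UPDATE
    -- walks the clustered index on MonthID (none of this is guaranteed).
    CREATE TABLE #Monthly
    (
        MonthID      int NOT NULL PRIMARY KEY CLUSTERED, -- intended update order
        PeopleJoined int NOT NULL,
        PeopleLeft   int NOT NULL,
        Subscribers  int NULL
    );

    INSERT INTO #Monthly (MonthID, PeopleJoined, PeopleLeft)
    VALUES (1, 100, 10), (2, 50, 20), (3, 70, 5);

    DECLARE @Subscribers int = 0;

    -- The contentious @variable = column = expression form.
    UPDATE #Monthly
    SET @Subscribers = Subscribers = @Subscribers + PeopleJoined - PeopleLeft
    OPTION (MAXDOP 1); -- discourages, but does not rule out, order-breaking plans

    SELECT MonthID, Subscribers FROM #Monthly ORDER BY MonthID;

    DROP TABLE #Monthly;

    On SQL Server 2012 and later, SUM(...) OVER (ORDER BY ...) computes the same running total with fully supported semantics, which rather strengthens the case for retiring the quirky update where that option exists.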

    Read the article

  • Whether to use UNION or OR in SQL Server Queries

    - by Dinesh Asanka
    Recently I came across an article on DB2 about using UNION instead of OR. So I thought of carrying out some research on SQL Server to find out in which scenarios UNION is optimal and in which scenarios OR would be best. I will analyze this with a few scenarios, using samples taken from the AdventureWorks database's Sales.SalesOrderDetail table.

    Scenario 1: Selecting all columns

    So we are going to select all columns, and you have a non-clustered index on the ProductID column.

    --Query 1 : OR
    SELECT * FROM Sales.SalesOrderDetail
    WHERE ProductID = 714 OR ProductID = 709 OR ProductID = 998 OR ProductID = 875 OR ProductID = 976 OR ProductID = 874

    --Query 2 : UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 998
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 875
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 976
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 874

    So Query 1 is using OR and the latter is using UNION. Let us analyze the execution plans for these queries.

    Query 1
    Query 2

    As expected, Query 1 will use a Clustered Index Scan, but Query 2 uses all sorts of things. In this case, since it is using multiple CPUs, you might see CX_PACKET waits as well. Let's look at the profiler results for these two queries:

             CPU   Reads   Duration   Row Count
    OR        78    1252        389        3854
    UNION    250    7495        660        3854

    You can see from the above table that the UNION query is not performing as well as the OR query, though both are returning the same number of rows (3854). These results indicate that, for the above scenario, OR should be used.

    Scenario 2: Non-clustered and clustered index columns only

    --Query 1 : OR
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail
    WHERE ProductID = 714 OR ProductID = 709 OR ProductID = 998 OR ProductID = 875 OR ProductID = 976 OR ProductID = 874
    GO

    --Query 2 : UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714
    UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 709
    UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 998
    UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 875
    UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 976
    UNION
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 874
    GO

    So this time we will be selecting only index columns, which means these queries will avoid a data page lookup. As in the previous case, we will analyze the execution plans:

    Query 1
    Query 2

    Again, Query 2 is more complex than Query 1. Let us look at the profiler analysis:

             CPU   Reads   Duration   Row Count
    OR         0      24        208        3854
    UNION      0      38        193        3854

    In this analysis, there is only a slight difference between OR and UNION.

    Scenario 3: Selecting all columns for different fields

    Up to now, we were using only one column (ProductID) in the WHERE clause. What if we have two columns in the WHERE clause, and let us assume both are covered by non-clustered indexes?

    --Query 1 : OR
    SELECT * FROM Sales.SalesOrderDetail
    WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%'

    --Query 2 : UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'

    Query 1
    Query 2

    As we can see, the query plan for the second query has improved. Let us see the profiler results.

             CPU   Reads   Duration   Row Count
    OR        47    1278        443        1228
    UNION     31    1334        400        1228

    So in this case too, there is little difference between OR and UNION.

    Scenario 4: Selecting clustered index columns for different fields

    Now let us go only with clustered indexes:

    --Query 1 : OR
    SELECT * FROM Sales.SalesOrderDetail
    WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%'

    --Query 2 : UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
    UNION
    SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'

    Query 1
    Query 2

    Now both execution plans are almost identical, except that an additional Stream Aggregate is used in the first query. This means UNION has an advantage over OR in this scenario. Let us see the profiler results for these queries again.

             CPU   Reads   Duration   Row Count
    OR         0     319        366        1228
    UNION      0      50        193        1228

    Now see the differences: in this scenario, UNION has somewhat of an advantage over OR.

    Conclusion

    Whether to use UNION or OR depends on the scenario you are faced with, so you need to do your analysis before selecting the appropriate method. Also, the four scenarios above are not an exhaustive list; I selected them for broad illustration purposes only. (One further variation, UNION ALL, is sketched below.)
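    One related variation not profiled above: when the individual SELECTs are known to return disjoint sets of rows, as the distinct ProductID predicates in Scenario 2 do, UNION ALL returns the same result as UNION while skipping the duplicate-elimination step. A sketch worth profiling alongside the originals (abbreviated to three branches):

    --Query 3 : UNION ALL (assumes the predicates cannot return overlapping rows)
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714
    UNION ALL
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 709
    UNION ALL
    SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 998

    Because each branch filters on a different literal ProductID value, no row can appear in more than one branch, so dropping the implicit DISTINCT behaviour of UNION is safe here and removes a sort or aggregate from the plan.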

    Read the article

  • Oracle Unified Method (OUM) 6.1

    - by user714714
    ORACLE® UNIFIED METHOD RELEASE 6.1
    Oracle's Full Lifecycle Method for Deploying Oracle-Based Business Solutions

    About

    Oracle is evolving the Oracle® Unified Method (OUM) to achieve the vision of supporting the entire Enterprise IT Lifecycle, including support for the successful implementation of every Oracle product. OUM replaces legacy methods such as AIM Advantage, AIM for Business Flows, EMM Advantage, PeopleSoft's Compass, and Siebel's Results Roadmap. OUM provides an implementation approach that is rapid, broadly adaptive, and business-focused. OUM includes a comprehensive project and program management framework and materials to support Oracle's growing focus on enterprise-level IT strategy, architecture, and governance.

    Release

    OUM release 6.1 provides support for Application Implementation, Cloud Application Services Implementation, and Software Upgrade projects as well as the complete range of technology projects, including Business Intelligence (BI), Enterprise Security, WebCenter, Service-Oriented Architecture (SOA), Application Integration Architecture (AIA), Business Process Management (BPM), Enterprise Integration, and Custom Software. Detailed techniques and tool guidance are provided, including a supplemental guide related to Oracle Tutor and UPK. This release features:

    Project Manager and Consultant views providing quick access to material relevant to each role
    OUM Cloud Application Services Implementation Approach Solution Delivery Guide 3.0 and Project Workplan Template
    OUM Microsoft Project Workplan Template and User's Guide updated to facilitate review and removal of out-of-scope Activities and Tasks
    MC.050 Application Setup Template available in Microsoft Excel format in addition to Microsoft Word format
    BT.070 Abbreviated Project Management Framework Presentation Template
    Envision Examples for Enterprise Organization Structures (BA.020), Enterprise Business Context Diagram (BA.045), and High-Level Use Cases (BA.060)
    Implement Examples for System Context Diagram (RD.005), Business Use Case Model (RA.015), Use Case Model (RA.023), MoSCoW List (RD.045), and Analysis Specification (AN.100)
    Home Page drop-down menu allowing access to the method by Role, Supplemental Guidance, Method Repository, or View

    For a comprehensive list of features and enhancements, refer to the "What's New" page of the Method Pack. Upcoming releases will provide expanded support for Oracle's Enterprise Application suites, including product-suite-specific materials and guidance for tailoring OUM to support various engagement types.

    Access

    Oracle Customers: Oracle customers may obtain copies of the method for their internal use – including guidelines, templates, and a tailored work breakdown structure – by contracting with Oracle for a consulting engagement of two weeks or longer and meeting some additional minimum criteria. Customers who have a signed consulting contract with Oracle and meet the engagement qualification criteria are permitted to download the current release of OUM for their perpetual use. They may also obtain subsequent releases published during a renewable, three-year access period. Training courses are also available to these customers. Contact your local Oracle Sales Representative about enrolling in the OUM Customer Program.

    Oracle PartnerNetwork (OPN) Diamond, Platinum, and Gold Partners: OPN Diamond, Platinum, and Gold Partners are able to access the OUM method pack, training courses, and collateral from the OPN Portal at no additional cost:

    1. Go to the OPN Portal at partner.oracle.com.
    2. Select "Sign In / Register for Account".
    3. Sign in.
    4. From the Product Resources section, select "Applications".
    5. From the Applications page, locate and select the "Oracle Unified Method" link.
    6. From the Oracle Unified Method Knowledge Zone, locate the "I want to:" section.
    7. From the "I want to:" section, locate and select "Implement Solutions".
    8. From the Implement Solutions page, locate the "Best Practices" section.
    9. Locate and select the "Download Oracle Unified Method (OUM)" link.

    Previous Announcements

    Oracle Unified Method (OUM) Release 6.1
    Oracle Unified Method (OUM) Release 6.0
    Oracle Unified Method (OUM) Release 5.6
    Oracle Unified Method (OUM) Release 5.5
    Oracle Unified Method (OUM) Release 5.4
    Oracle EMM Advantage Retired
    Retirement of Oracle EMM Advantage Planned for December 01, 2011

    Read the article

  • Praise for Europe's Smart Metering & Conservation Efforts

    - by caroline.yu
    Recently, a writer at the Home Energy Team praised the UK for its efforts towards smart metering and energy conservation, with an article entitled "UK Blazing A Trail With Smart Metering At Home?" The article highlighted that the Department of Energy and Climate Change has announced that smart metering will be introduced in the next decade and that all UK households will have smart meters by the year 2020.

    In fact, the UK is not the only country striving to achieve carbon reduction targets; many of its European counterparts have begun to take positive steps towards tackling the issue of energy conservation by implementing innovative new metering and billing technologies as well as promoting alternative energy solutions, such as wind and solar power.

    Since 1997, the states of the European Union, including France, Germany and Spain, have been working towards achieving a target of 12 percent renewable energy electricity by 2010. Germany in particular has made a significant achievement so far, having surpassed the target early in 2007. This success is largely due to the German Renewable Energy Act (EEG), which promoted the use of renewable energy. Recently, analysis from the European Wind Energy Association (EWEA) found that 21 of the EU Member States are meeting or exceeding their national target to achieve 20 percent renewable energy by 2020. However, six states - Belgium, Italy, Luxembourg, Malta, Bulgaria and Denmark - say they will not manage to reach their target through domestic action alone. Bulgaria and Denmark believe that with fresh national initiatives they could meet or exceed their targets, but others, including Italy, may need to import renewable energy from neighboring non-EU countries. Top achievers, according to the EWEA report, are Spain, which believes its renewable energy will reach 22.7 percent by 2020, as well as Germany, Estonia, Greece, Ireland, Poland, Slovakia and Sweden, who will all exceed their targets.

    "Importantly, the way that this renewable energy is controlled and distributed must be addressed in order to ensure its success," said Bastian Fischer, vice president and general manager EMEA, Oracle Utilities. "A smart grid infrastructure can enable utilities to deal with load distribution in times of increased need and ensure power is always available from these means. A smart grid also underpins the success of metering and billing technologies, such as smart metering, and allows utilities to deal with increased usage data and provide accurate billing."

    Outside of Europe, Australia has made significant steps towards improving water conservation. The Australian Department of Sustainability and Environment took some of the recent advancements made in the energy sector, including new metering and billing solutions, and applied them to the water industry, enhancing customer service and reducing consumption as a result.

    The adoption of smart metering in Europe is mainly driven by regulation, but significant technological improvements are being made the world over to change the way we use all kinds of energy. However, the developing markets are lagging behind. One of the primary reasons for this is the lack of infrastructure to use as a foundation for setting up energy-saving solutions, which is slowing the adoption of technologies such as smart meters. However, these countries do benefit from having less outdated infrastructure and fewer legacy systems, which others often cite as a difficult barrier to deploying new solutions. As a result, some countries should find new technologies easier to implement and adapt to in the immediate future, without this roadblock.

    Read the article

  • What packages do I need to compile .tex documents using XeLaTeX?

    - by maria
    Hi. I'm aware of the existence of similar threads on this forum, but none of the replies match my problem. I'm using Ubuntu 10.04 and I had no problems with fonts until I decided to use XeLaTeX instead of LaTeX (cf. http://tex.stackexchange.com/questions/12347/typesetting-a-document-using-arabic-script/12358#12358). The problem is that I'm not able to compile any .tex document using XeLaTeX, or even to properly display the XeLaTeX documentation. As I've learned thanks to the mentioned thread, XeLaTeX uses the fonts available in the system in general. I was trying to read the fontspec documentation, but it opens as a PDF with a lot of white gaps, and the terminal output (quite long) consists mostly of errors. These are just a few lines of it:

    Error: Missing language pack for 'Adobe-Japan1' mapping
    Error: Unknown font tag 'F5.1'
    Error (24124): No font in show
    Error: Unknown font tag 'F5.1'

    I was trying to compile a simple XeLaTeX file:

    \documentclass{article}
    \usepackage{fontspec}
    \setmainfont{Linux Libertine O}
    \begin{document}
    Hello World!
    \end{document}

    without success. This is the terminal output of the compilation:

    This is XeTeX, Version 3.1415926-2.2-0.9995.2 (TeX Live 2009/Debian) restricted \write18 enabled. entering extended mode (./ex.tex LaTeX2e <2009/09/24> Babel <v3.8l> and hyphenation patterns for english, usenglishmax, dumylang, nohyphenation, polish, loaded. (/usr/share/texmf-texlive/tex/latex/base/article.cls Document Class: article 2007/10/19 v1.4h Standard LaTeX document class (/usr/share/texmf-texlive/tex/latex/base/size10.clo)) (/usr/share/texmf-texlive/tex/xelatex/fontspec/fontspec.sty (/usr/share/texmf-texlive/tex/generic/ifxetex/ifxetex.sty) (/usr/share/texmf-texlive/tex/latex/tools/calc.sty) (/usr/share/texmf-texlive/tex/latex/xkeyval/xkeyval.sty (/usr/share/texmf-texlive/tex/generic/xkeyval/xkeyval.tex (/usr/share/texmf-texlive/tex/generic/xkeyval/keyval.tex))) (/usr/share/texmf-texlive/tex/latex/base/fontenc.sty (/usr/share/texmf-texlive/tex/xelatex/euenc/eu1enc.def) (/usr/share/texmf-texlive/tex/xelatex/euenc/eu1lmr.fd)) fontspec.cfg loaded. (/usr/share/texmf-texlive/tex/xelatex/fontspec/fontspec.cfg))kpathsea: Invalid fontname `Linux Libertine O', contains ' '
    ! Font \zf@basefont="Linux Libertine O" at 10.0pt not loadable: Metric (TFM) file or installed font not found.
    \zf@fontspec ...ntname \zf@suffix " at \f@size pt \unless \ifzf@icu \zf@set@...
    l.3 \setmainfont{Linux Libertine O}
    ?

    I can't find Linux Libertine O. Searching for otf with aptitude gives this result:

    maria@maria-laptop:/etc/fonts$ aptitude search otf
    p emdebian-rootfs - emdebian root filesystem support
    p libotf-bin - A Library for handling OpenType Font - utilities
    p libotf-dev - A Library for handling OpenType Font - development
    i libotf0 - A Library for handling OpenType Font - runtime
    p libotf0-dbg - The libotf libraries and debugging symbols
    p libpam-dotfile - A PAM module which allows users to have more than one password
    p livecd-rootfs - construction script for the livecd rootfs
    p makebootfat - Utility to create a bootable FAT filesystem
    p otf-ipaexfont - Japanese OpenType font, IPAexFont (IPAexGothic/Mincho)
    p otf-ipaexfont-gothic - Japanese OpenType font, IPAexFont (IPAexGothic)
    p otf-ipaexfont-mincho - Japanese OpenType font, IPAexFont (IPAexMincho)
    p otf-ipafont - Japanese OpenType font set, IPAfont
    p otf-ipafont-gothic - Japanese OpenType font set, IPA Gothic font
    p otf-ipafont-mincho - Japanese OpenType font set, IPA Mincho font
    p otf-stix - the Scientific and Technical Information eXchange fonts
    p otf-thai-tlwg - Thai fonts in OpenType format
    p otf-yozvox-yozfont - Japanese proportional Handwriting OpenType font
    p otf2bdf - generate BDF bitmap fonts from OpenType outline fonts
    p robotfindskitten - Zen Simulation of robot finding kitten

    So the font in question is not just uninstalled but unavailable, if I'm not wrong. Does that mean I lack some repositories?

    I was also trying to apply the solution from the thread "How do I reinstall default fonts?", but the result is:

    maria@maria-laptop:~$ sudo apt-get install msttcorefonts
    [sudo] password for maria:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Note, selecting ttf-mscorefonts-installer instead of msttcorefonts
    ttf-mscorefonts-installer is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    maria@maria-laptop:~$

    It seems this is not a usual problem for XeLaTeX users; nobody in the mentioned thread suggested installing anything other than TeX Live. Thanks in advance.

    Read the article

  • Two small issues with Windows Phone 7 ApplicationBar buttons (and workaround)

    - by Laurent Bugnion
    When you work with the ApplicationBar in Windows Phone 7, you notice very quickly that it is not quite a component like the others. For example, the ApplicationBarIconButton element is not a dependency object, which causes issues because it is not possible to add attached properties to it. Here are two other issues I stumbled upon, and the workarounds I used to make things work anyway.

    Finding a button by name returns null

    Since the ApplicationBar is not in the visual tree of the Silverlight page, finding an element by name fails. For example, consider the following code:

    <phoneNavigation:PhoneApplicationPage.ApplicationBar>
      <shell:ApplicationBar>
        <shell:ApplicationBar.Buttons>
          <shell:ApplicationBarIconButton IconUri="/Resources/edit.png"
                                          Click="EditButtonClick"
                                          x:Name="EditButton"/>
          <shell:ApplicationBarIconButton IconUri="/Resources/cancel.png"
                                          Click="CancelButtonClick"
                                          x:Name="CancelButton"/>
        </shell:ApplicationBar.Buttons>
      </shell:ApplicationBar>
    </phoneNavigation:PhoneApplicationPage.ApplicationBar>

    with

    private void EditButtonClick(object sender, EventArgs e)
    {
        CancelButton.IsEnabled = false; // Fails, CancelButton is always null
    }

    The CancelButton, even though it is named through an x:Name attribute, and even though it appears in IntelliSense in the code behind, is null when it is needed. To solve the issue, I use the following code:

    public enum IconButton
    {
        Edit = 0,
        Cancel = 1
    }

    public ApplicationBarIconButton GetButton(IconButton which)
    {
        return ApplicationBar.Buttons[(int) which] as ApplicationBarIconButton;
    }

    private void EditButtonClick(object sender, EventArgs e)
    {
        GetButton(IconButton.Cancel).IsEnabled = false;
    }

    Updating a Binding when the icon button is clicked

    In Silverlight, a Binding on a TextBox's Text property can only be updated in two circumstances:

    1. When the TextBox loses the focus.
    2. Explicitly, by placing a call in code.

    In WPF, there is a third option: updating the Binding every time the Text property changes (i.e. every time the user types a character). Unfortunately this option is not available in Silverlight. To select option 1, 2 (and in WPF, 3), you use the UpdateSourceTrigger property of the Binding class.

    The issue here is that pressing a button on the ApplicationBar does not remove the focus from the TextBox where the user is currently typing. If the button is a Save button, this is super annoying: the Binding does not get updated on the data object, the object is saved anyway with the old state, and no one understands what just happened. In order to solve this, you can make sure that the Binding is updated explicitly when the button is pressed, with the following code:

    private void SaveButtonClick(object sender, EventArgs e)
    {
        // Force an update of the binding first
        var binding = MessageTextBox.GetBindingExpression(TextBox.TextProperty);
        binding.UpdateSource();

        // Property was updated for sure, now we can save
        var vm = DataContext as MainViewModel;
        vm.Save();
    }

    Obviously this is less maintainable than the usual way to do things in Silverlight. So be careful when using the ApplicationBar, and remember that it is not a Silverlight element like the others!

    Happy coding!
    Laurent

    Laurent Bugnion (GalaSoft)
    Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Introducing MySQL for Excel

    - by Javier Treviño
    As part of the new product initiatives of the MySQL on Windows group, we released a tool that makes the task of getting data into and out of a MySQL database friendly and intuitive, and we paired it with one of the preferred applications for data analysis and manipulation on Windows platforms: MS Excel. Welcome to MySQL for Excel, an add-in that is installed in and accessed from MS Excel's Data tab, offering a wizard-like interface arranged in an elegant yet simple way to help users browse MySQL schemas, tables, views and procedures, and perform data operations against them using MS Excel as the vehicle to drive the data into and out of MySQL databases.

    One of the coolest things we had in mind when designing MySQL for Excel is simplicity. MS Excel is simple and easy to work with, and thus liked by many Windows users, because they don't have to be software gurus to use it. We applied the same principle by targeting MySQL for Excel at any kind of user, so if you are already familiar with Excel's interface you will find yourself working with MySQL data in no time.

    MySQL for Excel is shipped within the MySQL Installer as one of the tools in the suite. If the prerequisites are already installed (.NET Framework 4.0, Visual Studio Tools for Office 4.0 and of course MS Office), installing the add-in involves just a few clicks and no further setup. Being an Excel add-in, there is no executable file involved after the installation; running MS Excel and opening the add-in from its Data tab is all that is required.

    MySQL for Excel automatically integrates with MySQL Workbench (if installed) to share the same connections to MySQL Server installations; that way connections are defined just once in either product, saving time. Opening the add-in brings up the Welcome panel at the right side of the Excel main window, in which connections to MySQL Servers are shown grouped by local vs. remote connections. Users can open any of those connections by double-clicking it and entering the password of the used account. Additionally, a user can create a connection by clicking on the New Connection action label, or edit connections through MySQL Workbench (if installed) by clicking on the Manage Connections action label.

    Once a connection is opened, the Schema Selection panel is shown; at the top of it is the selected connection (connection name, hostname/IP and username). Just below, a list of schemas is displayed, in which user schemas are grouped first, followed by system schemas. Users can double-click any selected schema to go to the next panel, or select a schema and click the Next > button. Users can alternatively click on the < Back button to go back to the Welcome panel to close the current connection and open a new one; also, by clicking the Create New Schema action label they can create an empty new schema.

    Once a schema is opened, the DB Object Selection panel is shown; this is actually the place where the fun stuff happens. From here users are able to perform actions against MySQL tables, views and procedures. The actions available here are about importing data from a MySQL table, view or procedure to Excel; exporting Excel data to a new MySQL table; appending Excel data to an existing MySQL table; or editing a MySQL table's data by using an Excel worksheet as a user interface to update data in any row/column, insert new rows or delete existing rows in a very easy and friendly way. More blog posts will follow describing all of these actions, so stay tuned!

    Remember that your feedback is very important for us, so drop us a message:
    · MySQL on Windows (this) Blog - https://blogs.oracle.com/MySqlOnWindows/
    · Forum - http://forums.mysql.com/list.php?172
    · Facebook - http://www.facebook.com/mysql

    Cheers!

    Read the article

  • SQL SERVER – Introduction to Big Data – Guest Post

    - by pinaldave
    BIG Data – such a big word – everybody talks about it nowadays. It is THE word in the database world. In one of our conversations I asked my friend Jasjeet Singh the same question – what is Big Data? He instantly came up with a very effective write-up. Jasjeet is working as a Technical Manager with Koenig Solutions. He leads the SQL domain and holds rich IT industry experience. Talking about Koenig, it is a 19-year-old IT training company that offers several certification choices. Some of its courses include SharePoint Training, Project Management certifications, Microsoft Trainings, Business Intelligence programs, Web Design and Development courses, etc.

    Big Data, as the name suggests, is about data that is BIG in nature. The data is BIG in terms of size, and it is difficult to manage such enormous data with the relational database management systems that are quite popular these days. Big Data is not just about being large in size; it is also about the variety of the data, which differs in form or type. Some examples of Big Data are given below:

    Scientific data related to weather and atmosphere, genetics, etc.
    Data collected by various medical procedures, such as radiology, CT scans, MRI, etc.
    Data related to the Global Positioning System
    Pictures and videos
    Radio frequency data
    Data that may vary very rapidly, like stock exchange information

    Apart from the difficulties in managing and storing such data, it is difficult to query, analyze and visualize it. The characteristics of Big Data can be defined by four Vs:

    Volume: It simply means a large volume of data that may span petabytes, exabytes and so on. However, it also varies from organization to organization what volume of data they consider as Big Data.

    Variety: As discussed above, Big Data is not limited to relational information or structured data. It can also include unstructured data like pictures, videos, text, audio, etc.

    Velocity: Velocity means the speed at which data changes. The higher the velocity, the more efficient the system must be to capture and analyze the data. Missing any important point may lead to wrong analysis or may even result in loss.

    Veracity: It has been recently added as the fourth V, and generally means truthfulness or adherence to the truth. In terms of Big Data, it is more of a challenge than a characteristic. It is difficult to ascertain the truth out of an enormous amount of data, especially data that has high velocity. There are always chances of having imprecise and uncertain data. It is a challenging task to clean such data before it is analyzed.

    Big Data can be considered the next big thing in the IT sector in terms of innovation and development. If appropriate technologies are developed to analyze and use the information, it can be the driving force for almost all industrial segments, including Retail, Manufacturing, Service, Finance, and Healthcare. This will help them to automate business decisions, increase productivity, and innovate and develop new products.

    Thanks, Jasjeet Singh, for an excellent write-up. Jasjeet Singh is working as a Technical Manager with Koenig Solutions.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Database, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Big Data

    Read the article

  • Oredev 2011 Trip Report

    - by arungupta
    Oredev held its seventh annual conference in the city of Malmo, Sweden last week. The name "Oredev" alludes to the Oresund bridge that connects Malmo with Copenhagen. There were about 1000 attendees, with several speakers from all over the world. The first two days were hands-on workshops and the next three days were sessions. There were different tracks such as Java, Windows 8, .NET, Smart Phones, Architecture, Collaboration, and Entrepreneurship. And then there was Xtra(ck), which had interesting sessions not directly related to technology.

    I gave two slide-free talks in the Java track. The first one showed how to build an end-to-end Java EE 6 application using NetBeans and GlassFish. The complete instructions to build the application are explained in detail here. This 3-tier application used Java Persistence API, Enterprise JavaBeans, Servlet, Contexts and Dependency Injection, JavaServer Faces, and Java API for RESTful Web Services. The source code built during the application can be downloaded here (LINK TBD).

    The second session, slide-free again, showed how to take a Java EE 6 application into production using a GlassFish cluster. It explained how to:

    Create a 2-instance GlassFish cluster
    Front-end it with a Web server and a load balancer
    Demonstrate session replication and failover
    Monitor the application using JavaScript

    The complete instructions for this session are available here.

    Oredev has an interesting way of collecting attendee feedback: the attendees drop a green, yellow, or red card in a bucket as they walk out of a session. Not everybody votes, but most do. Beyond the instantaneous feedback provided on Twitter, this mechanism provides a more coarse-grained feedback loop as well. The first talk had about 67 attendees (with 23 green and 7 yellow) and the second one had 22 (11 green and 11 yellow).

    The speakers' dinner is a great highlight of the conference. It is arranged in the historic city hall, and the mayor welcomed all the speakers. As you can see in the pictures, it is a very royal building with lots of history behind it. Fortunately the dinner was a buffet with a much better variety, unlike last year where only black soup and geese were served, which was quite cultural BTW ;-)

    The sauna at 85F, skinny dipping in the 35F ocean, and alternating between the two at Kallbadhus is always very Swedish. I also spent a short evening at a friend's house socializing with other speakers and attendees, drinking Glogg, and eating Pepparkakor. The welcome packet at the hotel also included cinnamon rolls, recommended with cold milk, for a little more taste of Swedish culture.

    Something different at this conference was how artists from Image Think were visually capturing all the keynote speakers using images on whiteboards. Here are the images captured for Alexis Ohanian (Reddit co-founder and now running Hipmunk):

    Unfortunately I could not spend much time engaging with other speakers or attendees because I was busy preparing new hands-on lab material. But I was able to spend some time with Matthew McCullough, Michael Tiberg, Magnus Martensson, Mattias Karlsson, Corey Haines, Patrick Kua, Charles Nutter, Tushara, Pradeep, Shmuel, and several other folks.

    Here are a few pictures captured from the event:

    And the complete album here:

    Thank you Matthias, Emily, and Kathy for putting up a great show and giving me an opportunity to speak at Oredev. I hope to be back next year with a more vibrant representation of Java - the language and the ecosystem!

    Read the article

  • UPK Hands-on Labs at OHUG

    - by Karen Rihs
    Going to OHUG, June 18-22? Be sure to attend one or more UPK hands-on labs! Choose from Basic, Advanced, What's New, and Prebuilt Content!

    Oracle User Productivity Kit 11.1 Workshop – Basic
    Stephen Armbruster, Oracle Corporation
    June 19, 2012, 11:00 a.m. – 12:00 p.m.
    June 20, 2012, 4:30 – 5:30 p.m.

    The User Productivity Kit (UPK) is a comprehensive, cost-effective, customizable solution that helps your organization quickly create the critical documentation, training, and support materials needed to drive project team and user productivity throughout the lifecycle of your software. The User Productivity Kit provides system process documentation, user acceptance test scripts, comprehensive instructor-led training materials, web-based training materials, role-based performance support, and complete documentation. Also provided is the UPK Developer, which serves as a single-source development and customization tool to enable rapid content creation and customization. The User Productivity Kit delivers:

    Business process documentation for fit-gap analysis, providing time and cost savings that jump-start your implementation or upgrade
    User acceptance test scripts to help test applications prior to go-live
    State-of-the-art instructional design tools to rapidly build and tailor documentation, instructor-led training materials, and web-based training to fit organizational needs
    Live-application performance support with transactional and procedural information to maximize user efficiency

    By registering for this hands-on UPK workshop, participants will use UPK to build an application job aid and simulation that can be used as performance support for the application. But hurry, space is limited!

    Oracle User Productivity Kit 11.1 Workshop – Advanced
    Stephen Armbruster, Oracle Corporation
    June 20, 2012, 1:30 – 2:30 p.m.

    This special workshop is for those already familiar with UPK and will cover advanced concepts. In this workshop, you will gain in-depth knowledge of working with the UPK Developer. Following this workshop, you will be able to:

    Create publishing categories
    Add a logo to a publishing project
    Publish using the newly created category
    Configure your own library view
    Manage topic history in a multi-user environment

    Oracle User Productivity Kit 11.1 Workshop – What's NEW!
    Stephen Armbruster, Oracle Corporation
    June 19, 2012, 1:30 – 2:30 p.m.
    June 21, 2012, 1:00 – 2:00 p.m.

    This special workshop is for those already familiar with UPK and will focus on the new features included in the latest version, 11.1. In this workshop, you will review most of the new features included in the UPK Developer.

    Oracle User Productivity Kit 11.1 Workshop – Prebuilt Content
    Stephen Armbruster, Oracle Corporation
    June 19, 2012, 4:30 – 5:30 p.m.
    June 21, 2012, 2:15 – 3:15 p.m.

    This special workshop is for those already familiar with UPK and will focus on the latest version, 11.1. At the end of this workshop, you will be able to demonstrate how to:

    Import prebuilt content
    Modify content frames
    Add a decision frame
    Translate a topic into Spanish

    Stephen Armbruster is a principal sales consultant who has specialized in HCM and UPK applications at Oracle over the past twelve years. In addition to his current role, he serves as an ambassador for the Fusion User Experience (UX) team and is tasked with evangelizing the UX for end users across all Oracle brands (Fusion, PSFT, JDE, and EBS). He is also a trusted advisor to Oracle's Product Management teams related to Learning Management Systems (LMS). Prior to joining Oracle, he was an instructor as well as an instructional technologist working in the medical diagnostics, high tech, and information management industries. As an expert in both LMS and UPK, he regularly speaks at Oracle conferences, including Oracle OpenWorld and OHUG, on topics that span using Oracle solutions to accomplish employee training, certification, and user adoption. His presentations are both entertaining and engaging.

    Read the article

  • Tap Into Tier 1 ERP

    - by Christine Randle
    By: Larry Simcox, Senior Director, Accelerate Corporate Programs     Your customers aren’t satisfied with so-so customer service. Your employees aren’t happy with below average salaries.   So why would you settle for second-rate or tier 2 ERP?   A recent report from Nucleus Research found that usability improvements and rapid implementation tools are simplifying deployments, putting tier 1 enterprise applications well within reach for midsize companies. So how can your business tap into the power of tier 1 ERP? And what are the best ways to manage a deployment?   The Reputation of ERP Implementations Overhauling internal operations and implementing ERP can be a challenging endeavor for organizations of all sizes. Midsize companies often shy away from enterprise-class ERP, fearing complexity, limited resources and perceived challenging deployments. Many forward thinking executives experienced ERP implementations in the late 90s and early 2000s and embrace a strategy to grow their business by investing in a foundation for innovation and growth via ERP modernization projects.   In recent years there has been a strong consumerization of IT with enterprise applications and their delivery methods evolving to become more user-friendly.  Today, usability improvements and modern implementation tools have made top-tier ERP solutions more accessible for growing companies. Nucleus found that because enterprise-class software can now be rapidly deployed, the payback is quicker, the risks are lower, the software is less disruptive and overall, companies can differentiate themselves from their competitors and achieve more success with the advantages these types of systems deliver.   Tapping into the power of tier 1 ERP can be made much easier with Oracle Accelerate solutions. Created by Oracle's expert partners and reviewed by Oracle, Oracle Accelerate solutions are simple to deploy, industry-specific, packaged solutions that provide a fast time to benefit, which means getting the right solution in place quickly, inexpensively with a controlled scope and predictable returns.   How are growing midsize companies successfully deploying tier 1 ERP? According to Nucleus Research, companies can increase success in their tier 1 ERP deployments by limiting customization, planning a rapid go-live, bettering communication across departments, and considering different delivery options. Oracle Accelerate solutions incorporate industry best practices and encourage rapid deployments. And even more, Nucleus found customers deploying tier 1 ERP with Oracle that had used Oracle Business Accelerators, Oracle’s rapid implementation tools, reduced the time to deploy Oracle E-Business Suite by at least 50 percent.   Industrial manufacturer L.H. Dottie is one company that needed ERP with enhanced capabilities to support its growth and streamline business processes. Using out-of-the-box configuration of Oracle E-Business Suite modules (provided by Oracle Business Accelerators and delivered by Oracle Partner C3 Business Solutions), L.H. Dottie was able to speed its implementation and went live in just six and a half months. With tier 1 ERP, the company was able to grow and do its business better, automating a variety of processes, accelerating product delivery and gaining powerful data analysis capabilities that helped drive its business into further regions. See more details about their ERP implementation here.   Tier 1 enterprise-class applications have proven to boost the success of Oracle’s midsize customers. 
As Nucleus Research reiterates, companies poised for growth or seeking to compete against larger rivals absolutely can tap into the power of tier 1 ERP and position themselves as enterprise-class by leveraging Oracle Accelerate solutions. You can learn more here about The Evolving Business Case for Tier 1 ERP in Midsize Companies in our exclusive webcast with Nucleus.

    Read the article

  • Oracle Tutor: Installing Is Not Implementing, or Why CIOs Should Care About End User Adoption

    - by emily.chorba(at)oracle.com
    Eighteen months ago I showed the Tutor and UPK Productive Day One overview to a CIO friend of mine. He works in a manufacturing business that had recently been purchased by a global conglomerate. He had a major implementation coming up, but said that the corporate team would be coming in to handle the project. I asked about their end user training approach, but it was unclear to him at the time. We were in touch over the course of the implementation project. The major activities were data conversion, how-to workshops, General Ledger realignment, and report definition. The message was "Here's how we do it at corporate, and here's how you are going to do it." In short, it was an application software installation. The corporate team had experience and confidence, and the effort through go-live was smooth. Some weeks after cutover, problems with customer orders began to surface. Orders could not be fulfilled in a timely fashion. The problem got worse, and the corporate emergency team was called in. After many days of analysis, the issue was tracked down and resolved, but by then there were weeks of backorders, and their customer base was impacted in a significant way. It took three months of constant handholding of customers by the sales force for goodwill to be reestablished, and this itself diminished a new product sales push. I learned of these results in a recent conversation with the CIO. I asked him what the solution to the problem was, and he replied that it was twofold. The first component was a lack of understanding by customer service reps about how a particular data item in order entry was to be filled in, resulting in discrepant order data. The second component was that product planners were using this data, along with data from other sources, to fill in a spreadsheet based on the abandoned system. This spreadsheet was the primary input for planning data. The result of these two inaccuracies was that key parts were not being ordered to effectively meet demand, and the lead time for finished goods was pushed out by weeks. I reminded him about the Productive Day One approach and its focus on methodology and tools for end user training. A more collaborative solution workshop would have identified proper applications use in the new environment. Using UPK to document correct transaction entry would have provided effective guidelines to the CSRs for data entry. Using Oracle Tutor to document the manual tasks would have eliminated the use of an out-of-date spreadsheet. As we talked this over, he said, "I wish I knew when I started what I know now." Effective end user adoption is the most critical and most overlooked success factor in applications implementations. When the switch is thrown at go-live, employees need to know how to use the new systems to do their jobs. Their jobs are made up of manual steps and systems steps which must be performed in the right order for the implementing organization to operate smoothly. Use Tutor to document the manual policies and procedures, use UPK to document the systems tasks, and develop this documentation in conjunction with a solution workshop. This is the path to effective end user training material and a smooth implementation.

    Learn More: For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum. Chuck Jones, Product Manager, Oracle Tutor and BPM

    Read the article

  • Everything You Need to Know About Monitoring Oracle GoldenGate

    - by Irem Radzik
    By Joe deBuzna

    Having over 16 years of database replication experience, with 6 of those split between complex Oracle GoldenGate installations across three continents and researching monitoring requirements for both GoldenGate core replication and the GoldenGate monitoring GUIs, I've seen GoldenGate used and monitored in every way conceivable. And definite patterns have emerged. Next week at OpenWorld, on Tuesday, Oct 2nd at 5pm, please come by Moscone West-3005 for the "Everything You Need to Know About Monitoring Oracle GoldenGate" session to hear me discuss how GoldenGate customers are monitoring their implementations today, common methods and tricks, what's new in the GUIs, and what's on the roadmap ahead. As you may have seen in previous blog posts and in our launch webcast, we now have a Plug-in for Oracle Enterprise Manager in addition to the new Oracle GoldenGate Monitor product. For those of you who won't be at OpenWorld, please check out our Management Pack for Oracle GoldenGate data sheet and the Oracle GoldenGate 11gR2 New Features white paper to learn more about the new Oracle GoldenGate 11gR2 release.

    In this latest release we have also enhanced conflict detection and resolution, a cornerstone of any active-active database replication solution, and we have taken ours to the next level with built-in, optimized resolution routines (no more dependency on sqlexec!). At OpenWorld we have session CON8557 - Best Practice for Conflict Detection & Resolution, 3:30-4:30 on Wed, Oct 3rd at Moscone West-3005. Oracle Development Manager Bharath Aleti and I will highlight the most commonly used options and best practices gained from our interaction with numerous customers and consultants. Hope you can join us next week.
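    For readers wondering what built-in conflict detection and resolution looks like in practice, here is a minimal sketch of CDR clauses in a Replicat parameter file, assuming a timestamp-based "latest change wins" rule; the schema, table, and resolution column names are illustrative only, not from the session:

        MAP app.orders, TARGET app.orders,
          COMPARECOLS (ON UPDATE ALL, ON DELETE ALL),
          RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMAX (last_upd_ts))),
          RESOLVECONFLICT (INSERTROWEXISTS, (DEFAULT, USEMAX (last_upd_ts))),
          RESOLVECONFLICT (DELETEROWEXISTS, (DEFAULT, OVERWRITE));

    With USEMAX, the row carrying the greater value in the resolution column wins, a common rule in active-active setups where both sites may update the same row.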

    Read the article

  • SQLAuthority News – Who I Am And How I Got Here – True Story as Blog Post

    - by pinaldave
    Here are a few of the sample questions I get every day: "Give me a shortcut to become a superstar." "How do I become like you?" "Which book should I read so I know everything?" "Can you share your secret to being successful? I want to know it, but do not share it with others." The generic answer I always give is to work hard and read good educational material or watch good online videos. One of the emails really caught my attention. It was from a friend and SQL Server expert, John Sansom (Blog | Twitter). He asked if I would like to share my story with the world about "Who I Am and How I Got Here". I was very much intrigued by his suggestion. John is one guy I respect a lot. Every single topic he writes, I read with dedication. I eagerly wait for his Weekly Summary of Best SQL Links. If you have not read them, you are missing out. Writing a guest post for him was like walking down memory lane. I remembered the time when I was beginning my career and was a bit overconfident and a bit naive. I had my share of mistakes when I started my career. As time passed by, I realized the truth. Well, we all make mistakes. Though I am proud that as soon as I knew of my mistakes, I corrected them. I never acted on impulse or when I was angry. I think that alone has helped me analyze situations better and become a better human being. Along the way, I lost my ego and it was replaced by passion. I am much happier and more successful in my work. Quite often people ask me if I am always online and whether I have a family or not. Honestly, I am able to work hard because of my family. They support me and encourage me to enjoy what I do. They support everything I do and, personally, I do not miss a single occasion to join them in the daily fun. If there were a shortcut to success, I would want to know it. I learned SQL Server the hard way and I am still learning. There are so many things I have yet to learn, and there is not enough time to learn everything we want to. I am constantly working on it every day. I welcome you to join my journey as well. Please join me on my journey to learn SQL Server – the more the merrier. I have written the story of my life as a guest post. Read Here: A Journey to SQL Authority. Special thanks to John Sansom (Blog | Twitter) for giving me space to tell my story. Indeed I am honored. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Lost in Translation – Common Mistakes Interpreting Patterns – Mark Simpson, Griffiths-Waite @ SOA, Cloud & Service Technology Symposium 2012

    - by JuergenKress
    ORACLE PROMOTIONAL DISCOUNT: For an exclusive Oracle discount, enter promo code DJMXZ370. For details please visit the registration page.

    The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops - all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF).

    Speaker: Mark Simpson, Griffiths-Waite. Mark has been specialising in Oracle technology for 13 years, the last 10 of these with Griffiths Waite. Mark leads our SOA technology practice (covering SOA, Business Process Management and Enterprise Architecture). He is a much sought-after presenter on the Oracle and SOA conference circuits, and a respected authority on these technologies. Mark has advised a host of leading UK organisations on the deployment of BPM/SOA solutions. Working closely with Oracle US Product Development, Mark has contributed to Oracle's SOA Methodology and Oracle's SOA Maturity Model.

    Lost in Translation – Common Mistakes Interpreting Patterns: Learn how small misinterpretations of high-level design patterns can have large and costly project ramifications. Good SOA design benefits from the use of a reference architecture and standardised design patterns. However, both of these concepts give an abstracted view of the intended solution, which needs to be interpreted to become realised. A reference implementation is important to demonstrate how key design guidelines can be implemented in the toolset of choice, but the main success factor is how these are used through the build and post-live phases of the project. This session will introduce practical design patterns with supporting implementation examples that, if used correctly, will give long-term benefit. We will highlight implementations where misinterpretations or misalignment from pattern aims have led to issues post implementation. The session will add depth to the pattern discussions you are already having, enabling confidence in proceeding to the next level of realisation whilst considering how they may be implemented within your solution and chosen toolset. September 25, 2012 - 13:55.

    KEYNOTES & SPEAKERS: More than 80 international subject matter experts will be speaking at the Symposium. Above are confirmed keynotes and speakers so far; over 50% of the agenda has not yet been finalized, and many more speakers are to come. View the partial program calendars on the Conference Agenda page.

    CONFERENCE THEMES & TRACKS: Cloud Computing Architecture & Patterns; New SOA & Service-Orientation Practices & Models; Emerging Service Technology Innovation; Service Modeling & Analysis Techniques; Service Infrastructure & Virtualization; Cloud-based Enterprise Architecture; Business Planning for Cloud Computing Projects; Real World Case Studies; Semantic Web Technologies (with & without the Cloud); Governance Frameworks for SOA and/or Cloud Computing Projects; Service Engineering & Service Programming Techniques; Interactive Services & the Human Factor; New REST & Web Services Tools & Techniques.

    Oracle Specialized SOA & BPM Partners: Oracle Specialized partners have proven their skills by certifications and customer references.
To find a local Specialized partner, please visit http://solutions.oracle.com. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: Mark Simpson,Griffiths Waite,SOA Patterns,SOA Symposium,Thomas Erl,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress

    Read the article

  • Should I break contract early?

    - by cbang
    About 7 months ago I made the switch from a 5-year permie role (as a support developer in C#) to a contract role. I did this because I was stagnating in my old role. The extra cash contracting is really helping too. Unfortunately my team leader has taken a dislike to me from day 1. He regularly tells me I went out contracting too early, and frequently remarks that people in their 20's have no idea what they are talking about (I am 29). I was recently given the task of configuring our reports via our in-house reporting library. It works off a database-driven criteria base, with controls being loaded as needed. The configs can get fairly complex, with controls having various levels of dependency on each other. I had a short time frame to get 50 reports working, and I was told to just get the basic configuration done, after which they would be handed over to the reporting team for fine tuning, then the test team. Our updated system was deployed 2 weeks ago, and it turned out that about 15 reports had issues causing incorrect data to be returned. Upon investigation I discovered that the reporting team hadn't even looked at them, and the test team hadn't bothered to test the reports. In spite of this, my team leader has told me that it is 100% my fault. As a result, our help desk got hit hard. I worked back until 2am that night to fix the highest priority issues (on my wedding anniversary!). The next day I arrived at work at 7:45am to continue with the fixes. I got no thanks, but kept getting repeatedly told by my manager that "I fucked up" and "this is all my fault". I told my team leader I would spend part of my weekend working to fix the remaining issues. His response was "so you fucking should! you fucked it all up!" in front of the rest of the team. I responded "No worries." and left. I spent a decent chunk of my weekend working on it. Within 2 business days of finding out about the issues, I had all the medium and high priority issues fixed. The only comments my team leader has made to me in the last 2 weeks are to tell me how I have caused a big mess, and that it was all my fault. I get this multiple times a day. If I make any jokes to anyone else in the team, I get told not to be a smartass... even though the rest of the team jokes throughout the day. Apart from that, all I get is angry looks any time I am anywhere near the guy. I don't give any response other than "alright" or silence when he starts giving me a hard time. Today we found out that the pilot release for the next stage has been pushed back. My team leader has said this was caused by me (but the higher-ups said no such thing). He also said I have "no understanding of the ramifications of my actions". My question is: should I break contract (I am contracted until June 30) and find another role? No one else in my team will speak up in my favour, as they are contractors too and have no interest in rocking the boat. I could complain to my team leader's boss, but I can't see that helping, as I will still be stuck in the same team. As this is my first contract, I imagine getting the next one will be hard without a reference. I can't figure out if this guy is trying to wind me up to provoke a confrontation (the guy loves conflict), or if he is just venting anger, or what. Copping this blame day after day is really wearing me down and making me depressed... especially since I have a wife and kid to support.

    Read the article

  • What's up with LDoms: Part 9 - Direct IO

    - by Stefan Hinker
    In the last article of this series, we discussed the most general of all physical IO options available for LDoms: root domains. Now, let's have a short look at the next level of granularity: virtualizing individual PCIe slots. In LDoms terminology, this feature is called "Direct IO" or DIO. It is very similar to root domains, but instead of reassigning ownership of a complete root complex, it only moves a single PCIe slot or endpoint device to a different domain. Let's look again at the hardware available to mars in the original configuration:

        root@sun:~# ldm ls-io
        NAME                     TYPE  BUS    DOMAIN   STATUS
        ----                     ----  ---    ------   ------
        pci_0                    BUS   pci_0  primary
        pci_1                    BUS   pci_1  primary
        pci_2                    BUS   pci_2  primary
        pci_3                    BUS   pci_3  primary
        /SYS/MB/PCIE1            PCIE  pci_0  primary  EMP
        /SYS/MB/SASHBA0          PCIE  pci_0  primary  OCC
        /SYS/MB/NET0             PCIE  pci_0  primary  OCC
        /SYS/MB/PCIE5            PCIE  pci_1  primary  EMP
        /SYS/MB/PCIE6            PCIE  pci_1  primary  EMP
        /SYS/MB/PCIE7            PCIE  pci_1  primary  EMP
        /SYS/MB/PCIE2            PCIE  pci_2  primary  EMP
        /SYS/MB/PCIE3            PCIE  pci_2  primary  OCC
        /SYS/MB/PCIE4            PCIE  pci_2  primary  EMP
        /SYS/MB/PCIE8            PCIE  pci_3  primary  EMP
        /SYS/MB/SASHBA1          PCIE  pci_3  primary  OCC
        /SYS/MB/NET2             PCIE  pci_3  primary  OCC
        /SYS/MB/NET0/IOVNET.PF0  PF    pci_0  primary
        /SYS/MB/NET0/IOVNET.PF1  PF    pci_0  primary
        /SYS/MB/NET2/IOVNET.PF0  PF    pci_3  primary
        /SYS/MB/NET2/IOVNET.PF1  PF    pci_3  primary

    All of the "PCIE" type devices are available for DIO, with a few limitations. If the device is a slot, the card in that slot must support the DIO feature; the documentation lists all such cards. Moving a slot to a different domain works just like moving a PCI root complex. Again, this is not a dynamic process and includes reboots of the affected domains. The resulting configuration is nicely shown in a diagram in the Admin Guide. There are several important things to note and consider here:

    The domain receiving the slot/endpoint device turns into an IO domain in LDoms terminology, because it now owns some physical IO hardware.

    Solaris will create nodes for this hardware under /devices. This includes entries for the virtual PCI root complex (pci_0 in the diagram) and anything between it and the actual endpoint device. It is very important to understand that all of this PCIe infrastructure is virtual only! Only the actual endpoint devices are true physical hardware.

    There is an implicit dependency between the guest owning the endpoint device and the root domain owning the real PCIe infrastructure: only if the root domain is up and running will the guest domain have access to the endpoint device.

    The root domain is still responsible for resetting and configuring the PCIe infrastructure (root complex, PCIe level configurations, error handling etc.) because it owns this part of the physical infrastructure. This also means that if the root domain needs to reset the PCIe root complex for any reason (typically a reboot of the root domain), it will reset and thus disrupt the operation of the endpoint device owned by the guest domain. The result in the guest is not predictable. I recommend configuring the resulting behaviour of the guest using domain dependencies, as described in the Admin Guide in the chapter "Configuring Domain Dependencies".

    Please consult the Admin Guide, section "Creating an I/O Domain by Assigning PCIe Endpoint Devices", for all the details! As you can see, there are several restrictions for this feature. It was introduced in LDoms 2.0, mainly to allow the configuration of guest domains that need access to tape devices.
Today, with the higher number of PCIe root complexes and the availability of SR-IOV, the need for this feature is declining. I personally do not recommend using it, mainly because of the drawbacks of the dependencies on the root domain, and because it can be replaced with SR-IOV (although then with similar limitations). This was a rather short entry, more for completeness. I believe that DIO can usually be replaced by SR-IOV, which is much more flexible. I will cover SR-IOV in the next part of this blog series.
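    For the record, the reassignment itself is only a couple of ldm commands. A rough sketch of the sequence described in the Admin Guide, using the empty slot /SYS/MB/PCIE2 from the listing above as an example (the slot choice is illustrative):

        root@sun:~# ldm start-reconf primary        # delayed reconfiguration on the root domain
        root@sun:~# ldm remove-io /SYS/MB/PCIE2 primary
        root@sun:~# reboot                          # reboot primary to complete the removal
        root@sun:~# ldm stop-domain mars
        root@sun:~# ldm add-io /SYS/MB/PCIE2 mars
        root@sun:~# ldm start-domain mars

    And to make mars react predictably whenever its root domain resets, the domain dependency mentioned above would look roughly like this:

        root@sun:~# ldm set-domain master=primary mars
        root@sun:~# ldm set-domain failure-policy=reset primary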

    Read the article

  • Top 5 Mobile Apps To Keep Track Of Cricket Scores [ICC World Cup]

    - by Gopinath
    The ICC World Cup 2011 started with a bang today, and the first match between India and Bangladesh was a cracker. India thrashed Bangladesh by a huge margin, thanks to Sehwag scoring an entertaining 175 runs off 140 balls. At the moment it's very clear that the whole of India is gripped by cricket fever, as are the rest of the fans across the globe. A couple of days ago we blogged about how to watch live streaming of the ICC cricket world cup online for free, as well as the top 10 websites to keep track of live scores on your computer. What about tracking live cricket scores on mobile phones? Here is our guide to the top mobile apps available for Symbian (Nokia), Android, iOS and Windows mobiles. By the way, we are covering free apps alone in this post. Why waste money when free apps are available?

    SnapTu – Symbian Mobile App: SnapTu is a multi-feature application that lets you track live cricket scores, read the latest news and check stats published on CricInfo. SnapTu has a tie-up with CricInfo, and accessing all of the CricInfo website on your mobile is very easy. Along with live scores, SnapTu also lets you access your Facebook, Twitter and Picasa on your mobile. This is my favourite application for tracking cricket on Symbian mobiles. Download SnapTu for your mobiles here.

    Yahoo! Cricket – Symbian & iOS App: Yahoo! Cricket Scores is another dedicated application to catch up with live scores and news on your Nokia mobiles and iPhones. This application is developed by Yahoo!, the web giant as well as the official partner of the ICC. Features of the app at a glance – Cricket: get a summary page with the latest scores, upcoming matches and details of recent matches. News: view sections devoted to the latest news, interviews and photos. Statistics: find the latest team and player stats. Download Yahoo! Cricket for Symbian Phones. Download Yahoo! Cricket for iOS.

    ESPN CricInfo – Android and iOS App: Is there any site better than CricInfo for catching up with the latest cricket news and live scores? I say no. ESPN CricInfo is the best website available on the web for up-to-the-minute cricket information with in-depth analysis from cricket experts. The live commentary provided by the CricInfo site is as enjoyable as watching live cricket on TV. The CricInfo guys have official applications for Android mobiles and iOS devices, and accessing ball-by-ball updates on these applications is a joy. Download ESPN CricInfo App: Android Version, iPhone Version.

    NDTV Cricket – Android, iOS and Blackberry App: The NDTV Cricket App is developed by NDTV, the most popular English TV news channel in India. This application provides live coverage of international and domestic cricket (Test, ODI & T20) along with the latest news, photos, videos and stats. This application is available for iOS devices (iPhones, iPads, iPod Touch), Android mobiles and Blackberry devices. Download NDTV Cricket for iOS here & here. Download NDTV Apps for the rest of the OSs.

    ECB Cricket – Symbian, iOS & Android App: If you are a UK citizen, then this may be the right application for you to download for live cricket score updates as well as the latest news about the England Cricket Board. ECB Cricket is an official application of the England Cricket Board. Download ECB Cricket: Android Version, iPhone Version, Symbian Version.

    Are there any better apps that we missed in this list? This article titled, Top 5 Mobile Apps To Keep Track Of Cricket Scores [ICC World Cup], was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Abstracting entity caching in XNA

    - by Grofit
    I am in a situation where I am writing a framework in XNA, and there will be quite a lot of static(ish) content which won't render that often. Now, I am trying to take the same sort of approach I would use in non-game development, where I don't even think about caching until I have finished my application and realise there is a performance problem, and then implement a layer of caching over whatever needs it, but wrapped up so nothing is aware it's happening. However, in XNA the way we would usually cache is by drawing our objects to a texture and invalidating it after a change occurs. So assume an interface like so:

        public interface IGameComponent
        {
            void Update(TimeSpan elapsedTime);
            void Render(GraphicsDevice graphicsDevice);
        }

        public class ContainerComponent : IGameComponent
        {
            public IList<IGameComponent> ChildComponents { get; private set; }

            // Assume constructor

            public void Update(TimeSpan elapsedTime)
            {
                // Update anything that needs it
            }

            public void Render(GraphicsDevice graphicsDevice)
            {
                foreach (var component in ChildComponents)
                {
                    // draw every component
                }
            }
        }

    I was under the assumption that we just draw everything directly to the screen, and then, when performance becomes an issue, we add a new implementation of the above like so:

        public class CacheableContainerComponent : IGameComponent
        {
            private Texture2D cachedOutput;
            private bool hasChanged;

            public IList<IGameComponent> ChildComponents { get; private set; }

            // Assume constructor

            public void Update(TimeSpan elapsedTime)
            {
                // Update anything that needs it
                // set hasChanged to true if required
            }

            public void Render(GraphicsDevice graphicsDevice)
            {
                if (hasChanged)
                {
                    CacheComponents(graphicsDevice);
                    hasChanged = false;
                }
                // Draw cached output
            }

            private void CacheComponents(GraphicsDevice graphicsDevice)
            {
                // Clean up existing cache if needed
                var renderTarget = new RenderTarget2D(...);
                graphicsDevice.SetRenderTarget(renderTarget);
                foreach (var component in ChildComponents)
                {
                    // draw every component
                }
                graphicsDevice.SetRenderTarget(null);
                cachedOutput = renderTarget; // RenderTarget2D derives from Texture2D
            }
        }

    Now, in this example you could inherit, but then your Update may become a bit tricky without changing your base class to alert you when something has changed; it is up to each scenario whether it's inheritance/implementation or composition. Also, the above implementation re-caches within the rendering cycle, which may cause performance stutters, but it's just an example of the scenario... Ignoring those facts, as you can see, in this example you could use a cacheable component or a non-cacheable one, and the rest of the framework need not know.

    The problem here is that if, let's say, this component is drawn midway through the game's rendering, other items will already be within the default drawing buffer, so doing this would discard them, unless I set the buffer to be persisted, which I hear is a big no-no on the Xbox. So is there a way to have my cake and eat it here? One simple solution is to make an ICacheable interface which exposes a cache method, but then to make any use of this interface you would need the rest of the framework to be cache-aware, to check whether a component can cache, and to then do so. That means polluting and changing your main implementations to account for and deal with this cache... I am also employing dependency injection for a lot of high-level components, so these new cacheable objects would be spat out from that, meaning nowhere in the actual game would anything know it is caching... if that makes sense.
That last part is just in case anyone asks how I expect to new up a cacheable entity without the rest of the game becoming cache-aware.
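    To make the trade-off concrete, here is a minimal sketch of the ICacheable idea mentioned above; the interface and member names are made up for illustration, and the point is that every render call site now has to probe for caching support, which is exactly the pollution the question wants to avoid:

        // Assumes: using System.Collections.Generic;
        //          using Microsoft.Xna.Framework.Graphics;
        public interface ICacheable
        {
            bool IsDirty { get; }
            void RebuildCache(GraphicsDevice graphicsDevice);
        }

        public class CacheAwareRenderer
        {
            public void RenderAll(GraphicsDevice graphicsDevice,
                                  IEnumerable<IGameComponent> components)
            {
                foreach (var component in components)
                {
                    // Every call site must now know about caching.
                    var cacheable = component as ICacheable;
                    if (cacheable != null && cacheable.IsDirty)
                    {
                        cacheable.RebuildCache(graphicsDevice);
                    }
                    component.Render(graphicsDevice);
                }
            }
        }

    Keeping the DI container as the only place that knows which concrete components cache, as suggested above, sidesteps this: the container hands back a CacheableContainerComponent where profiling shows it pays off, and the game keeps calling plain IGameComponent.Render.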

    Read the article

  • Professional Developers, may I join you?

    - by Ben
    I currently work in technical support for a software/hardware company, and for the most part it's a good job, but it's feeling more and more like I'm getting 'stuck' here. No raises in the 5 years I've been here, and lately there seems to be more hiring from the outside than promotion from within. The work I do is more technical than end-user support, as we deal primarily with our field technicians, who have a little more technical skill than the general user base. As a result I get into much more technical support issues... often tracking down bugs in our software, finding performance bottlenecks in our database schema, etc. What I'm most proud of are the development projects I've come up with on my own and worked on during lunch breaks and slow periods in Support. Over the years I've written a number of useful utilities for the company: diagnostic-type applications that several departments use and appreciate. These include apps that simulate our various hardware devices, log file analysis, time-saving utilities for our work processes, etc. My best projects have been the hardware simulation programs, which are the type of thing we probably wouldn't have put a full-time developer on had anyone thought to do it, but they've ended up being popular and useful enough to be used by Development, QA, R&D, and Support. They allow us to interface our software with simulated hardware, rather than clutter up our work areas with bulky, hard-to-acquire equipment. Since starting here my life has moved forward (married, kid, one more on the way), but it feels like my career has not. I still earn what I earned walking in the door my first day. The company budget is tight, bonuses have gone down, and there have been no raises or cost-of-living/inflation adjustments either. As the sole source of income for my family I feel I need to do more, and I'd like to have a more active role in creating something at work, not just cleaning up other people's mistakes. I enjoy technical work, and I think development is the next logical step in my career. I'd like to bring some "legitimacy" to my part-time development work, and make myself a more skilled and valuable employee. Ultimately, if this can help me better support my family, that would be ideal. Can I make the jump to professional developer? I have an engineering degree, but no formal education in computer science. I write WinForms apps using the .NET framework, do some freelance web development, have volunteered to write software for a nonprofit, and have started experimenting with programming microcontrollers. I enjoy learning new things in the limited free time I have available. I think I have the aptitude to take on a development role, even in an 'apprentice' capacity if such an option is possible. Have any of you moved into development like this? Do any of you developers have any advice or cautionary tales? Are there better career options I haven't thought of? I welcome any and all related comments and thank you in advance for posting them.

    Read the article

  • ffmpeg unmet dependencies

    - by Nikki
    I faced an issue recently when I tried to install ffmpeg on my Ubuntu computer. I am running Ubuntu 11.10 64-bit, all the latest updates are installed and the system runs perfectly; however, I need to record my desktop, and I have read in many articles that ffmpeg is one of the best recording tools for it (besides providing packages for video). So I tried to run:

        sudo apt-get install ffmpeg

    However, I wasn't able to, because the packages have unmet dependencies. Here is the full text I receive after trying to install the package above:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         ffmpeg : Depends: libavcodec53 (< 4:0.7.3-99) but it is not going to be installed or
                           libavcodec-extra-53 (< 4:0.7.3.99) but 4:0.8.0.1~ppa2 is to be installed
                  Depends: libavdevice53 (>= 4:0.7.3-0ubuntu0.11.10.1) but it is not going to be installed or
                           libavdevice-extra-53 (>= 4:0.7.3) but it is not going to be installed
                  Depends: libavdevice53 (< 4:0.7.3-99) but it is not going to be installed or
                           libavdevice-extra-53 (< 4:0.7.3.99) but it is not going to be installed
                  Depends: libavfilter2 (>= 4:0.7.3-0ubuntu0.11.10.1) but it is not going to be installed or
                           libavfilter-extra-2 (>= 4:0.7.3) but it is not going to be installed
                  Depends: libavfilter2 (< 4:0.7.3-99) but it is not going to be installed or
                           libavfilter-extra-2 (< 4:0.7.3.99) but it is not going to be installed
                  Depends: libavformat53 (< 4:0.7.3-99) but 4:0.8-1u1~ppa2 is to be installed or
                           libavformat-extra-53 (< 4:0.7.3.99) but it is not going to be installed
                  Depends: libavutil51 (< 4:0.7.3-99) but it is not going to be installed or
                           libavutil-extra-51 (< 4:0.7.3.99) but 4:0.8.0.1~ppa2 is to be installed
                  Depends: libpostproc52 (< 4:0.7.3-99) but 4:0.8-1u1~ppa2 is to be installed or
                           libpostproc-extra-52 (< 4:0.7.3.99) but it is not going to be installed
                  Depends: libswscale2 (< 4:0.7.3-99) but 4:0.8-1u1~ppa2 is to be installed or
                           libswscale-extra-2 (< 4:0.7.3.99) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    This problem didn't exist on my previous laptop, which ran the same Ubuntu 11.10 64-bit as my new one. Can anyone please help me find a solution without "messing up and breaking" the whole system? Thank you in advance for helping.
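    One possible explanation, judging by the ~ppa2 suffixes in the version strings above: newer libav* packages from a third-party PPA are conflicting with the official Ubuntu archive. If so, ppa-purge can roll them back; the PPA name below is a placeholder for whichever entry actually appears under /etc/apt/sources.list.d/:

        sudo apt-get update
        sudo apt-get install ppa-purge
        # Revert all packages from the offending PPA to the official archive versions
        sudo ppa-purge ppa:example-user/ffmpeg    # placeholder PPA name
        sudo apt-get install ffmpeg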

    Read the article

< Previous Page | 177 178 179 180 181 182 183 184 185 186 187 188  | Next Page >