Search Results

Search found 6941 results on 278 pages for 'drupal views'.


  • SSAS: Utility to export SQL code from your cube's Data Source View (DSV)

    - by DrJohn
    When you are working on a cube, particularly in a multi-person team, it is sometimes necessary to review what changes have been made to the SQL queries in the cube's data source view (DSV). This can be a problem as the SQL editor in the DSV is not the best interface for reviewing code. Now of course you can cut and paste the SQL into SSMS, but you have to do each query one by one. What is worse, your DBA is unlikely to have BIDS installed, so you will have to manually export all the SQL yourself and send him the files. To make it easy to get hold of the SQL in a Data Source View, I developed a C# utility which connects to an OLAP database and uses Analysis Services Management Objects (AMO) to obtain and export all the SQL to a series of files. The added benefit of this approach is that these SQL files can be placed under source code control, which means the DBA can easily compare one version with another.

    The Trick
    When I came to implement this utility, I quickly found that the AMO API does not give direct access to anything useful about the tables in the data source view. Iterating through the DSVs and tables is easy, but getting to the SQL proved to be much harder. My Google searches returned little of value, so I took a look at the idea of using the XmlDom to open the DSV's XML and obtaining the SQL from that. This is when the breakthrough happened. Inspecting the DSV's XML, I saw that the things I was interested in were called TableType, DbTableName, FriendlyName and QueryDefinition. Searching Google for FriendlyName returned this page: Programming AMO Fundamental Objects, which hinted at the fact that I could use something called ExtendedProperties to obtain these XML attributes. This simplified my code tremendously and made the implementation almost trivial. So here is my code with appropriate comments. The full solution can be downloaded from here: ExportCubeDsvSQL.zip

    using System;
    using System.Data;
    using System.IO;
    using Microsoft.AnalysisServices;

    ... class code removed for clarity

    // connect to the OLAP server
    Server olapServer = new Server();
    olapServer.Connect(config.olapServerName);
    if (olapServer != null)
    {
        // connected to server ok, so obtain reference to the OLAP database
        Database olapDatabase = olapServer.Databases.FindByName(config.olapDatabaseName);
        if (olapDatabase != null)
        {
            Console.WriteLine(string.Format("Successfully connected to '{0}' on '{1}'",
                config.olapDatabaseName, config.olapServerName));

            // export SQL from each data source view (usually only one, but can be many!)
            foreach (DataSourceView dsv in olapDatabase.DataSourceViews)
            {
                Console.WriteLine(string.Format("Exporting SQL from DSV '{0}'", dsv.Name));

                // for each table in the DSV, export the SQL in a file
                foreach (DataTable dt in dsv.Schema.Tables)
                {
                    Console.WriteLine(string.Format("Exporting SQL from table '{0}'", dt.TableName));

                    // get name of the table in the DSV
                    // use the FriendlyName as the user inputs this and therefore has control of it
                    string queryName = dt.ExtendedProperties["FriendlyName"].ToString().Replace(" ", "_");
                    string sqlFilePath = Path.Combine(targetDir.FullName, queryName + ".sql");

                    // delete the sql file if it exists... file deletion code removed for clarity

                    // write out the SQL to a file
                    if (dt.ExtendedProperties["TableType"].ToString() == "View")
                    {
                        File.WriteAllText(sqlFilePath, dt.ExtendedProperties["QueryDefinition"].ToString());
                    }
                    if (dt.ExtendedProperties["TableType"].ToString() == "Table")
                    {
                        File.WriteAllText(sqlFilePath, dt.ExtendedProperties["DbTableName"].ToString());
                    }
                }
            }
            Console.WriteLine(string.Format("Successfully written out SQL scripts to '{0}'", targetDir.FullName));
        }
    }

    Of course, if you are following industry best practice, you should be basing your cube on a series of views. That will mean this utility is of limited practical value, unless of course you are inheriting a project and want to check whether the implementation was done correctly.

    Read the article

  • Apprentice Boot Camp in South Africa (Part 2)

    - by Tim Koekkoek
    By Maximilian Michel (DE), Jorge Garnacho (ES), Daniel Maull (UK), Adam Griffiths (UK), Guillermo De Las Nieves (ES), Catriona McGill (UK), Ed Dunlop (UK) Today we have the second part of the adventures of seven apprentices from all over Europe in South-Africa!  Kruger National Park & other experiences Going to the Kruger National Park was definitely an experience we will all remember for the rest of our lives. This trip,organised by Patrick Fitzgerald, owner of the Travellers Nest (where we all stayed), took us from the hustle and bustle of Joburg to experience what Africa is all about, the wild! Although the first week’s training we had prior to this trip to the Kruger was going very well, we all knew this was to be a very nice break before we started the second week of training. And we were right, the animals, scenery and sights we saw were just simply incredible and like I said something we will remember for the rest of our lives. To see lions, elephants, cheetahs and rhinos and many more in a zoo is one thing, but to see them in the wild, in their natural habitat is very special and I personally only realised this from the early 5 am start on the first morning in the Kruger, which was definitely worth it. Not only was it all about the safari, we ate some wonderful food, in particular on the Saturday night, Patrick made us a traditional South African Braai which was one of my favourite meals of the whole two weeks. After the Kruger National Park we had a whole day of traveling back to Johannesburg but even this was made to be a good day by our hosts. Despite the early start on the road it was all worth it by the time we reached God's Window. The walk to the top was made a lot harder by all the steaks we had eaten in the first week but the hard walk was worth it at the top, with views that stretched for miles. The Food The food in South Africa is typically meat and in big amounts, while there we ate a lot of big beef steaks, ribs and kudu sausage. All of the meat we ate was usually cooked with a sauce such as a Barbeque glaze. The restaurants we visited were: Upperdeck Restaurant, with live music and a great terrace to eat, the atmosphere was good for enjoying the music and eating our food. Most of us ate  Spare ribs that weighed 600 kg, with barbecue sauce that was delicious. Die Bosvelder Pub & Restaurant is a restaurant with a very surprising decor, this is because the walls had many of south Africa’s famous animals on them. The food was maybe the best we ate in South Africa. Our orders were: Springbokvlakte Lambs' Neck Stew, beef in gravy and steaks topped with cheese and then more meat on top! All meals were accompanied by a selection of white sauce cauliflower, spinach and zanhorias. Pepper Chair Restaurant, where the specialty is T-Bone steaks of 1.4 kg, but most of us were happy to attempt the 1 kg. Cooked with barbecue sauce over the meat, it was very good!  The only problem was their size causing the  the meat to get cold if you did not eat it very fast! We’re all waiting for our 1.0 kg t-bone steak including our Senior Director EMEA Systems Support Germany & Switzerland: Werner Hoellrigl The Godfather Restaurant, the food here was more meat in abundance. We ate: great ribs, hamburgers, steaks and all accompanied with a small plates of carrot and sauteed spinach, very good. We had two great weeks in South-Africa! If you want to join Oracle, then check http://campus.oracle.com 

    Read the article

  • Free Document/Content Management System Using SharePoint 2010

    - by KunaalKapoor
    That’s right, it’s true. You can use the free version of SharePoint 2010 to meet your document and content management needs and even run your public-facing website or an internal knowledge bank. SharePoint Foundation 2010 is free. It may not have all the features that you get in the enterprise license, but it still has enough to cater to your needs to build a document management system and replace age-old file shares or folders. I’ve built a dozen content management sites for internal and public use exploiting SharePoint.

    There are hundreds of web content management systems out there (see CMS Matrix). On one hand we have commercial platforms like SharePoint, SiteCore, and Ektron, which are the most frequently used, and on the other hand there are free options like WordPress, Drupal, Joomla, and Plone, which are pretty popular as well. But I would be very surprised if anyone was able to find a single CMS platform that is all things to all people. In fact, not a lot of people count SharePoint’s free version on the free CMS side, but it is high time organizations benefited from it. Through this blog post I wanted to present SharePoint Foundation as an option for running a FREE CMS platform. Even if you knew that there is a free version of SharePoint, what most people don’t realize is that SharePoint Foundation is a great option for running web sites of all kinds – not just team sites. It is a great option for many reasons: it is supported by Microsoft, above all it is FREE (yay!), and it is extremely easy to get started. From a functionality perspective it’s hard to beat SharePoint. Even the free version, SharePoint Foundation, offers simple data connectivity (through BCS), cross-browser support, accessibility, support for Office Web Apps, blogs, wikis, templates, document support, health analyzer, support for presence, and MUCH more. I often get asked: “Can I use SharePoint 2010 as a document management system?” The answer really depends on:
    - What are your specific requirements?
    - What systems do you currently have in place for managing documents?
    - And of course, how much money you have :)

    Benefits? Not many large organizations have benefited from SharePoint yet. For some it has been an IT project to see what they can achieve with it; for others it has been used as a collaborative platform or, in many cases, an extended intranet. SharePoint 2010 has changed the game slightly, as the improvements that Microsoft has made have been noted by organizations, and we are seeing a lot of companies starting to build specific business applications using SharePoint as the basis – and nearly every business process will require documents at some stage. If you require a document management system and have SharePoint in place, then it can be a relatively straightforward decision to use SharePoint, as long as you have reviewed the considerations just discussed. The collaborative nature of SharePoint 2010 is also a massive advantage, as specific departmental or project sites can be created quickly and easily that allow workers to interact in a variety of different ways using one source of information. This also benefits an organization with regard to how it manages its knowledge: if all of its information is in one source, it is naturally easier to search and manage.

    Is SharePoint right for your organization?
    As just discussed, this can only be determined after defining your requirements and also planning a longer-term strategy for how you will manage your documents and information. A key factor to look at is how the users would interact with the system and how much value it would bring to your organization. The amount of data and documents that organizations are creating is increasing rapidly each year. Therefore the ability to archive this information, whilst keeping the ability to know what you have and where it is, is vital to any organization's management of its information life cycle. SharePoint is best used for the initial life of business documents, where they need to be referenced and accessed over time. It is often beneficial to archive them later to avoid storage and performance issues.

    FREE CMS – SharePoint, Really?
    In order to show some of the completeness of what comes with this free version of SharePoint 2010, I thought it would make sense to use Wikipedia (since everyone trusts it as a credible source). Wikipedia shows that a web content management system typically has the following components:

    Document Management: CMS software may provide a means of managing the life cycle of a document from initial creation time, through revisions, publication, archive, and document destruction. SharePoint is king when it comes to document management: version history, exclusive check-out, security, publication, workflow, and so much more.

    Content Virtualization: CMS software may provide a means of allowing each user to work within a virtual copy of the entire web site, document set, and/or code base. This enables changes to multiple interdependent resources to be viewed and/or executed in-context prior to submission. Through the use of versioning, each content manager can preview, publish, and roll back content of pages, wiki entries, blog posts, documents, or any other type of content stored in SharePoint. The idea of each user having an entire copy of the website virtualized is a bit odd to me – not sure why anyone would need that for anything but the simplest of websites.

    Automated Templates: Create standard output templates that can be automatically applied to new and existing content, allowing the appearance of all content to be changed from one central place. Through the use of Master Pages and Themes, SharePoint provides the ability to change the entire look and feel of a site. Of course, the older brother version of SharePoint – SharePoint Server 2010 – also introduces the concept of Page Layouts, which allows page-template-level customization and even switching the layout of an individual page using different page templates. I think many organizations really think they want this but rarely end up using this bit of functionality.

    Easy Edits: Once content is separated from the visual presentation of a site, it usually becomes much easier and quicker to edit and manipulate. Most WCMS software includes WYSIWYG editing tools allowing non-technical individuals to create and edit content. This is probably easier described with a screen cap of a vanilla SharePoint Foundation page in edit mode. Notice the page editing toolbar and the multiple layout options. It’s actually easier to use than Microsoft Word.

    Workflow Management: Workflow is the process of creating cycles of sequential and parallel tasks that must be accomplished in the CMS. For example, a content creator can submit a story, but it is not published until the copy editor cleans it up and the editor-in-chief approves it. Workflow, it’s in there. In fact, the same workflow engine is running under SharePoint Foundation that is running under the other versions of SharePoint. The primary difference is that with SharePoint Foundation you need to configure the workflows yourself.

    Web Standards: Active WCMS software usually receives regular updates that include new feature sets and keep the system up to current web standards. SharePoint is in its fourth major iteration under Microsoft with the 2010 release. In addition to the innovation that Microsoft continuously adds, you have the entire global ecosystem available.

    Scalable Expansion: Available in most modern WCMSs is the ability to expand a single implementation (one installation on one server) across multiple domains. SharePoint Foundation can run multiple sites using multiple URLs on a single server install. Even more powerful, SharePoint Foundation is scalable and can be part of a multi-server farm to ensure that it will handle any amount of traffic that can be thrown at it.

    Delegation & Security: Some CMS software allows for various user groups to have limited privileges over specific content on the website, spreading out the responsibility of content management. SharePoint Foundation provides very granular security capabilities. Read more at http://msdn.microsoft.com/en-us/library/ee537811.aspx

    Content Syndication: CMS software often assists in content distribution by generating RSS and Atom data feeds to other systems. They may also e-mail users when updates are available as part of the workflow process. SharePoint Foundation nails it. With RSS syndication and email alerts available out of the box, content syndication is already in the platform.

    Multilingual Support: Ability to display content in multiple languages. SharePoint Foundation 2010 supports more than 40 languages.

    Read More
    Read more at http://msdn.microsoft.com/en-us/library/dd776256(v=office.12).aspx
    You can download the free version from http://www.microsoft.com/en-us/download/details.aspx?id=5970
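    To make the document-management point above a bit more concrete, here is a minimal, hypothetical sketch using the SharePoint Foundation 2010 server object model. The site URL and the library name "Documents" are assumptions for illustration only, not something from the original post.

    using System;
    using Microsoft.SharePoint;

    class DocumentLibrarySetupSketch
    {
        static void Main()
        {
            // Assumed site URL and library name -- adjust for your own farm.
            using (SPSite site = new SPSite("http://intranet"))
            using (SPWeb web = site.OpenWeb())
            {
                SPList library = web.Lists["Documents"];

                // The document-management basics mentioned above:
                library.EnableVersioning = true;      // keep major version history
                library.EnableMinorVersions = true;   // keep draft (minor) versions too
                library.ForceCheckout = true;         // require exclusive check-out before editing
                library.Update();
            }
        }
    }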

    Read the article

  • Don't Forget To Enjoy Life

    - by Justin
    I have a pretty clear stance on posting personal information in my blogs. I tend to avoid it almost instinctively. Part of that is because I am a somewhat private person. And the other is because I know how easy it is for personal information to be gathered and collected from sources such as blogs. So, this has remained a tech only blog for me. I've only posted topics mostly related to issues I have encountered at work. In a way this blog is a 'bookmark' for me. If I post something here and run into the issue again it allows me to refer back to a convenient place where the 'fix' is documented in a way that I understand. But today, I am posting something that speaks to everyone. Something PERSONAL. Honestly, I expect this entry to receive zero views. But if nothing else, I can come back to this blog one day when I'm having a bad day or something and run across this post. And I will be reminded... DON'T FORGET TO ENJOY LIFE. Say this to yourself out loud, right now. People, we can get caught up in some rather mundane details as we trek through life. It's so easy to lose track of what really matters that it should be no surprise to find yourself reading something like this and thinking to yourself 'Yeah. You are right, man. Some of this crap I'm clinging on to right now is so small in the grand scheme of things'. I have no reservation, no shame, in saying that I am more often than not caught up in the ever evolving world of 'shit that does not matter'. When you work in technology, you are surrounded by deadlines, upgrades, new versions, support 'end of life', etc. And by time you get done with your 8 hours you go home and put in a few more because you are STILL CAUGHT UP in the things you dealt with at work all day. DO YOURSELF A FAVOR. DO YOUR FAMILY AND FRIEND A FAVOR. When you are done for the day, and you drive home, get those work-related things out of your head before you pull into the driveway. If you are still thinking on them when you park the car, leave the engine running, close your eyes and take a deep breath. If you believe in God, pray. If you don't then meditate for a second with the INTENTION of letting go of the day and becoming the 'real you'. You may have forgotten who the real you is so I'll remind you.... THE REAL YOU IS THAT GUY OR GAL THAT LAUGHS, LOVES, AND LIVES. Be the real you as often as possible. If you can't do it during your 9 - 5, do it at home. YOUR RELATIONSHIPS AND YOUR PERSONAL HAPPINESS DEPEND ON IT. I am going to make you a promise right now. If you do what I've just said, your days will be longer and your joy will be exponential. I can't explain why I know this to be true. But I do know it. And if you are there reading this right now, you know it is true too. We both know it is true because it COMES FROM WITHIN EVERY MAN, WOMAN and CHILD. We are born into love and happiness. Lets not fade away into the darkness so easily found in this world. Lets keep the flame burning. The flame of passion. Passion for LIFE. Peace be with you.

    Read the article

  • SQL SERVER – Finding Different ColumnName From Almost Identical Tables

    - by pinaldave
    I have mentioned earlier on this blog that I love social media – Facebook and Twitter. I receive so many interesting questions that sometimes I wonder how I have never faced them in a real-life scenario myself. Well, let us look at one such situation. Here is one of the questions which I received on my social media handle.

    “Pinal, I have a large database. I did not develop this database, but I have inherited it. In our database we have many tables, but all the tables are in pairs: we have one archive table and one current table. Now here is the interesting situation. For a while, due to some reason, our organization stopped paying attention to archive data. We did not archive anything for a while. If that was not enough, we even changed the schema of the current table but did not change the corresponding archive table. This is now becoming a huge, huge problem. We know for sure that we have added a few columns to the current table, but we do not know which ones. Is there any way we can figure out which new columns were added to the current table and do not exist in the archive table? We cannot use any third party tool. Would you please guide us?”

    Well, here is an interesting example of how we can use the sys.columns catalogue view to get the details of the newly added columns. I have previously written about EXCEPT over here, which is very similar to MINUS in Oracle. In the following example we are going to create two tables, one of which has an extra column. In our resultset we will get the name of the extra column, as we are comparing the column names from the catalogue view.

    USE AdventureWorks2012
    GO
    CREATE TABLE ArchiveTable (ID INT, Col1 VARCHAR(10), Col2 VARCHAR(100), Col3 VARCHAR(100));
    CREATE TABLE CurrentTable (ID INT, Col1 VARCHAR(10), Col2 VARCHAR(100), Col3 VARCHAR(100), ExtraCol INT);
    GO
    -- Columns in ArchiveTable but not in CurrentTable
    SELECT name ColumnName
    FROM sys.columns
    WHERE OBJECT_NAME(OBJECT_ID) = 'ArchiveTable'
    EXCEPT
    SELECT name ColumnName
    FROM sys.columns
    WHERE OBJECT_NAME(OBJECT_ID) = 'CurrentTable'
    GO
    -- Columns in CurrentTable but not in ArchiveTable
    SELECT name ColumnName
    FROM sys.columns
    WHERE OBJECT_NAME(OBJECT_ID) = 'CurrentTable'
    EXCEPT
    SELECT name ColumnName
    FROM sys.columns
    WHERE OBJECT_NAME(OBJECT_ID) = 'ArchiveTable'
    GO
    DROP TABLE ArchiveTable;
    DROP TABLE CurrentTable;
    GO

    The above query will return the following result. I hope this solves the problem. It is not the most elegant solution ever possible, but it works. Here is the puzzle back to you – what native T-SQL solution would you have provided in this situation?

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Last Two Days to Get FREE Book – Joes 2 Pros Certification 70-433

    - by pinaldave
    Earlier this week we announced that we will be giving away a FREE SQL Wait Stats book to everybody who gets the SQL Server Joes 2 Pros Combo Kit, and the response to the offer has been overwhelming. We knew there would be a great response, but we want to honestly say thank you to all of you for making it happen. Rick and I want to make sure that we express our special thanks to all of you who are reading our books. The offer is still on, and there are two more days to take advantage of it. We want to make sure that everybody who buys our best-selling combo kit also receives our other most popular book, SQL Wait Stats. Please read all the details of the offer here. The books are great resources for anyone who wants to learn SQL Server from the fundamentals and eventually go down the certification path of exam 70-433. Exam 70-433 covers the following important subjects, and the book covers them from the fundamentals. Whether or not you are taking the exam, this book helps every SQL developer learn the subject from the ground up.
    - Create and alter tables.
    - Create and alter views.
    - Create and alter indexes.
    - Create and modify constraints.
    - Implement data types.
    - Implement partitioning solutions.
    - Create and alter stored procedures.
    - Create and alter user-defined functions (UDFs).
    - Create and alter DML triggers.
    - Create and alter DDL triggers.
    - Create and deploy CLR-based objects.
    - Implement error handling.
    - Manage transactions.
    - Query data by using SELECT statements.
    - Modify data by using INSERT, UPDATE, and DELETE statements.
    - Return data by using the OUTPUT clause.
    - Modify data by using MERGE statements.
    - Implement aggregate queries.
    - Combine datasets (INTERSECT, EXCEPT).
    - Implement subqueries.
    - Implement CTE (common table expression) queries.
    - Apply ranking functions.
    - Control execution plans.
    - Manage international considerations.
    - Integrate Database Mail.
    - Implement full-text search.
    - Implement scripts by using Windows PowerShell and SQL Server Management Objects (SMOs).
    - Implement Service Broker solutions.
    - Track data changes (data capture).
    - Retrieve relational data as XML.
    - Transform XML data into relational data.
    - Manage XML data.
    - Capture execution plans.
    - Collect output from the Database Engine Tuning Advisor.
    - Collect information from system metadata.
    Availability of Book: USA - Amazon | India - Flipkart | Indiaplaza
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL Authority News – Secret Tool Box of Successful Bloggers: 52 Tips to Build a High Traffic Top Ranking Blog

    - by Pinal Dave
    When I started this blog, it was meant as a bookmark for myself for helpful tips and tricks.  Gradually, it grew into a blog that others were reading and commenting on.  While SQL and databases are my first love and the reason I started this blog, the side effect was that I discovered I loved writing.  I discovered a secret goal I didn’t even know I wanted – I wanted to become an author.  For a long time, writing this blog satisfied that urge.  Gradually, though, I wanted to see my name in print. 12th Book Over the past few years I have authored and co-authored a number of books – they are all based on my knowledge of SQL Server, and were meant to spread my years of experience into the world, to share what I have learned with my community.  I currently have elevan of these “manuals” available for sale.  As exciting as it was to see my name in print, I still felt that there was more I could do as an author. That is when I realized that I am more than just a SQL expert.  I have been writing this blog now for more than 10 years, and it grew from a personal bookmark to a thriving website with over 2 million views per month.  I thought to myself “I could write a book about how to create a successful blog!”  And that is exactly what I did.  I am extremely excited to share with all of you my new book – “Secret Toolbox of Successful Bloggers.” A Labor of Love This project has been a labor of love for me.  It started out as a series for this blog – I would post one article a week until I felt the topic had been covered.  I found that as I wrote, new topics kept popping up in my mind, and eventually this small blog series grew into a full book.  The blog series was large enough to last a whole year, so I definitely thought that it could be a full book.  Ideas on how to become a successful blogger were so frequent that, I will admit, I feel like there is so much I left out of this book.  I had a lot more to say than I originally thought! I am so excited to be sharing this book with all of you.  I am so passionate about this topic, and I feel like there are so many people who can benefit from this book.  I know that when I started this blog, I did not know what I was doing, and I would have loved a “helping hand” to tell what to do and what not to do.  If this book can act that way to any of my readers, I feel it is a success. Rules of Thumb If you are interested in the topic of becoming a blogger, as you read this book, keep in mind that it is suggestions only.  Blogging is so new to the world that while there are “rules of thumb” about what to do and what not to do, a map of steps (“first, do x, then do y”) is not going to work for every single blogger.  This book is meant to encourage new bloggers to put their content out there in the world, to be brave and create a community like the one I have here at SQL Authority.  I have gained so much from this community, I wanted to give something back, and this book is just one small part. I hope that everyone who reads this books finds at least one helpful tip, and that everyone can experience the joy of blogging.  That is the whole reason I wrote this book, and what I hope everyone takes away from it. Where Can You Get It? You can get the book from following URL: Kindle eBook | Print Book Reference: Pinal Dave (http://blog.SQLAuthority.com)Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL

    Read the article

  • Updated SOA Documents now available in ITSO Reference Library

    - by Bob Rhubart
    Nine documents within the IT Strategies from Oracle (ITSO) reference library have recently been updated. (Access to the ITSO collection is free to registered Oracle.com members -- and that membership is free.) All nine documents fall within the Service Oriented Architecture section of the ITSO collection, and cover the following topics: SOA Practitioner Guides Creating an SOA Roadmap (PDF, 54 pages, published: February 2012) The secret to successful SOA is to build a roadmap that can be successfully executed. SOA offers an opportunity to adopt an iterative technique to deliver solutions incrementally. This document offers a structured, iterative methodology to help you stay focused on business results, mitigate technology and organizational risk, and deliver successful SOA projects. A Framework for SOA Governance (PDF, 58 pages, published: February 2012) Successful SOA requires a strong governance strategy that designs-in measurement, management, and enforcement procedures. Enterprise SOA adoption introduces new assets, processes, technologies, standards, roles, etc. which require application of appropriate governance policies and procedures. This document offers a framework for defining and building a proper SOA governance model. Determining ROI of SOA through Reuse (PDF, 28 pages, published: February 2012) SOA offers the opportunity to save millions of dollars annually through reuse. Sharing common services intuitively reduces workload, increases developer productivity, and decreases maintenance costs. This document provides an approach for estimating the reuse value of the various software assets contained in a typical portfolio. Identifying and Discovering Services (PDF, 64 pages, published: March 2012) What services should we build? How can we promote the reuse of existing services? A sound approach to answer these questions is a primary measure for the success of a SOA initiative. This document describes a pragmatic approach for collecting the necessary information for identifying proper services and facilitating service reuse. Software Engineering in an SOA Environment (PDF, 66 pages, published: March 2012) Traditional software delivery methods are too narrowly focused and need to be adjusted to enable SOA. This document describes an engineering approach for delivering projects within an SOA environment. It identifies the unique software engineering challenges faced by enterprises adopting SOA and provides a framework to remove the hurdles and improve the efficiency of the SOA initiative. SOA Reference Architectures SOA Foundation (PDF, 70 pages, published: February 2012) This document describes they key tenets for SOA design, development, and execution environments. Topics include: service definition, service layering, service types, the service model, composite applications, invocation patterns, and standards. SOA Infrastructure (PDF, 86 pages, published: February 2012) Properly architected, SOA provides a robust and manageable infrastructure that enables faster solution delivery. This document describes the role of infrastructure and its capabilities. Topics include: logical architecture, deployment views, and Oracle product mapping. SOA White Papers and Data Sheets Oracle's Approach to SOA (white paper) (PDF, 14 pages, published: February 2012) Oracle has developed a pragmatic, holistic approach, based on years of experience with numerous companies to help customers successfully adopt SOA and realize measureable business benefits. 
This executive datasheet and whitepaper describe Oracle's proven approach to SOA. Oracle's Approach to SOA (data sheet) (PDF, 3 pages, published: March 2012) SOA adoption is complex and success is far from assured. This is why Oracle has developed a pragmatic, holistic approach, based on years of experience with numerous companies, to help customers successfully adopt SOA and realize measurable business benefits. This data sheet provides an executive overview of Oracle's proven approach to SOA.

    Read the article

  • How to use DI and DI containers

    - by Pinetree
    I am building a small PHP mvc framework (yes, yet another one), mostly for learning purposes, and I am trying to do it the right way, so I'd like to use a DI container, but I am not asking which one to use but rather how to use one. Without going into too much detail, the mvc is divided into modules which have controllers which render views for actions. This is how a request is processed: a Main object instantiates a Request object, and a Router, and injects the Request into the Router to figure out which module was called. then it instantiates the Module object and sends the Request to that the Module creates a ModuleRouter and sends the Request to figure out the controller and action it then creates the Controller and the ViewRenderer, and injects the ViewRenderer into the Controller (so that the controller can send data to the view) the ViewRenderer needs to know which module, controller and action were called to figure out the path to the view scripts, so the Module has to figure out this and inject it to the ViewRenderer the Module then calls the action method on the controller and calls the render method on the ViewRenderer For now, I do not have any DI container set up, but what I do have are a bunch of initX() methods that create the required component if it is not already there. For instance, the Module has the initViewRenderer() method. These init methods get called right before that component is needed, not before, and if the component was already set it will not initialize it. This allows for the components to be switched, but it does not require manually setting them if they are not there. Now, I'd like to do this by implementing a DI container, but still keep the manual configuration to a bare minimum, so if the directory structure and naming convention is followed, everything should work, without even touching the config. If I use the DI container, do I then inject it into everything (the container would inject itself when creating a component), so that other components can use it? When do I register components with the DI? Can a component register other components with the DI during run-time? Do I create a 'common' config and use that? How do I then figure out on the fly which components I need and how they need to be set up? If Main uses Router which uses Request, Main then needs to use the container to get Module (or does the module need to be found and set beforehand? How?) Module uses Router but needs to figure out the settings for the ViewRenderer and the Controller on the fly, not in advance, so my DI container can't be setting those on the Module before the module figures out the controller and action... What if the controller needs some other service? Do I inject the container into every controller? If I start doing that, I might just inject it into everything... Basically I am looking for the best practices when dealing with stuff like this. I know what DI is and what DI containers do, but I am looking for guidance to using them in real life, and not some isolated examples on the net. Sorry for the lengthy post and many thanks in advance.
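    One common answer to the "do I inject the container into everything?" question is to keep all container access in a single composition root and rely on constructor injection everywhere else. Below is a minimal sketch of that idea; it is shown in C# purely for brevity (the same shape applies to a PHP container), and the Request/Router/Module names simply mirror the ones used in the question, not any specific library.

    using System;
    using System.Collections.Generic;

    // A toy container: just enough to illustrate the composition-root idea.
    class Container
    {
        private readonly Dictionary<Type, Func<Container, object>> factories =
            new Dictionary<Type, Func<Container, object>>();

        public void Register<T>(Func<Container, T> factory)
        {
            factories[typeof(T)] = c => factory(c);
        }

        public T Resolve<T>()
        {
            return (T)factories[typeof(T)](this);
        }
    }

    class Request { }
    class Router  { public Router(Request request) { } }
    class Module  { public Module(Router router) { } }   // Module never sees the container

    class Program
    {
        static void Main()
        {
            var container = new Container();

            // All wiring happens here, once, at start-up (the composition root).
            container.Register(c => new Request());
            container.Register(c => new Router(c.Resolve<Request>()));
            container.Register(c => new Module(c.Resolve<Router>()));

            Module module = container.Resolve<Module>();   // fully wired object graph
        }
    }

    The point of the sketch is that only the entry point knows about the container; Router and Module just declare what they need in their constructors, so the container does not have to be injected into every component.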

    Read the article

  • How to correctly use comments in C/C++

    - by Lucio
    I'm learning to program in C and, at my stage, the best way to use comments correctly is to write good comments from the beginning. Since comments are not just there so that I understand the code better, but for others too, I want to hear the views of all of you so we can reach a consensus. So what I would like is for the most experienced users to edit the following comments as they please (if a comment is unnecessary, delete it; if it is wrong, correct it; if more are needed, add them). That way there will be multiple answers with different styles, and the responses with the most votes will be taken as the reference for commenting. Here is the code to copy, paste and edit as you please (and, I stress again, only the comments matter, not the code):

    /* This program finds 1 number in 1 file.
       The file is binary and holds integers in series.
       The number is an integer and is entered from the keyboard.
       When the program finishes, a message shows the result:
       whether the number is in the file or not. */
    #include <stdio.h>
    #include <stdlib.h>                              /* added: needed for exit() */

    int funct(char path[], FILE *fvar, int num);     /* added: forward declaration */

    //FUNCTION 1
    //Opens file 'path' and closes it.
    void openf(char path[])
    {
        int num;
        //Read a number from the keyboard and save it into the 'num' var
        printf("Ready for read number.\n\nNumber --> ");
        fflush(stdin);
        scanf("%d", &num);
        //Open file 'path' in READ mode
        FILE *fvar;
        fvar = fopen(path, "rb");
        //IF an error happens when opening the file, exit the function
        if (fvar == NULL)
        {
            printf("ERROR while open file %s in read mode.", path);
            exit(1);
        }
        /*Verify the result of the 'funct' function
          IF TRUE, 'num' is in the file*/
        if (funct(path, fvar, num))
            printf("The number %d it is in the file %s.", num, path);
        else
            printf("The number %d it is not in the file %s.", num, path);
        fclose(fvar);
    }

    /*FUNCTION 2
      It is a recursive function.
      Reads number by number until the file is empty or the number is found.
      Parameters received:
      'path' -> Directory file
      'fvar' -> Pointer file
      'num'  -> Number to compare */
    int funct(char path[], FILE *fvar, int num)
    {
        int compare;
        //FALSE condition when the pointer reaches the end
        if (fread(&compare, sizeof(int), 1, fvar) > 0)
        {
            /*TRUE condition when the number read is equal to 'num'
              ELSE the function calls itself again*/
            if (compare != num)
                return funct(path, fvar, num);       /* added 'return' so the result propagates */
            else
                return 1;
        }
        else
            return 0;
    }

    int main(int argc, char **argv)
    {
        char path[30] = "file.bin"; //Location of the file to process
        openf(path); //Function with the algorithm
        return 0;
    }

    Read the article

  • Lead, Follow, or Get out of the way

    - by Daniel Moth
    This is one of the sayings (attributed to Thomas Paine) that totally resonated with me from the first time I heard it, which was only 3 years ago during some training course at work: "Lead, Follow, or Get out of the way" You'll find many books with this title and you'll find it quoted by politicians and other leaders in various countries at various times... the quote is open to interpretation and works on many levels. To set the tone of what this means to me, I'll use a simple micro example: In any given conversation, you are either leading it or following it, at different times/snapshots of the conversation. If you are not willing or able to lead it, and you are not willing or able to follow it, then you should depart. The bad alternative which this guidance encourages you NOT to do is to stick around and obstruct progress by not following, not leading, and simply complaining or trying to derail the discussion in no particular direction. The same pattern applies at your position/role at work. Either follow your management/leadership team, or try to lead them to what you think is a better place, or change jobs. Don't stick around complaining about the direction things are going, while not actively trying to either change things or make peace with it. In the previous paragraph you can replace the word "your management" with "the people reporting to you" and the guidance still holds. Either lead your direct reports to where you think they should go, or follow their lead, or change jobs. Complaining about folks not taking direction while doing nothing is not a maintainable state. To me this quote is not about a permanent state, it is not about some people always leading and some always following: It is about a role/hat that anybody can play/wear at any given moment. One minute I am leading you, the next I am following you, and the next we are both following someone else and so on... When there is disagreement, debate the different directions for as long as it takes for you to be comfortable that you can either follow or lead. If you don't become comfortable with either of those, get out of the way. Something to remember is that it is impossible to learn how to lead well, without learning how to follow well (probably deserves its own blog entry)... Things go wrong when someone thinks that they must always be leading, or when everybody wants to follow and nobody steps up to lead... Things go wrong when more than one person wants to lead and they don't try to reach agreement on a shared direction, stubbornly sticking to their guns pulling the rest of the team in multiple directions... Things go wrong when more than one person wants to lead and after numerous and lengthy discussions, none of them decides to follow or get out of the way... Things go wrong when people don't want to lead, don't want to follow, and insist on sticking around... While there are a few ways things that can go wrong as enumerated in the previous paragraph, the most common one in my experience is the last one I mentioned. You'll recognize these folks as the ones that always complain about everything that is wrong with their company/product but do nothing about it. Every time you hear someone giving feedback on how something is wrong or suboptimal, ask them "So now that you identified the problem, what do you think the solution is and what are you doing to drive us to that solution?" The next time things start going wrong, step up and remind everyone: Lead, Follow, or Get out of the way. 
For more perspectives, and for input to help you form your own interpretation, search the web for this phrase to see in what contexts it is being used (bing, google). Finally, regardless of your political views, I hope you can appreciate if only as an example this perspective of someone leading by actually getting out of the way. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Advanced Record-Level Business Intelligence with Inner Queries

    - by gt0084e1
    While business intelligence is generally applied at an aggregate level to large data sets, it's often useful to provide a more streamlined insight into individual records, or to be able to sort and rank them. For instance, a salesperson looking at a specific customer could benefit from basic stats on that account. A marketer trying to define an ideal customer could pull the top entries and look for insights or patterns. Inner queries let you do sophisticated analysis without the overhead of traditional BI or OLAP technologies like Analysis Services.

    Example - Order History Constancy
    Let's assume that management has realized that the best thing for our business is to have customers ordering every month. We'll need to identify and rank customers based on how consistently they buy and when their last purchase was, so sales & marketing can respond accordingly. Our current application may not be able to provide this, and adding an OLAP server like SSAS may be overkill for our needs. Luckily, SQL Server provides the ability to do relatively sophisticated analytics via inner queries. Here's the kind of output we'd like to see.

    Creating the Queries
    Before you create a view, you need to create the SQL query that does the calculations. Here we are calculating the total number of orders as well as the number of months since the last order. These fields might be very useful to sort by but may not be available in the app. This approach provides a very streamlined and high-performance method of delivering actionable information without radically changing the application. It also works very well with self-service reporting tools like Izenda.

    SELECT
        CustomerID,
        CompanyName,
        (SELECT COUNT(OrderID)
         FROM Orders
         WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
        DATEDIFF(mm,
            (SELECT MAX(OrderDate)
             FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID),
            GETDATE()) AS MonthsSinceLastOrder
    FROM Customers

    Creating Views
    To turn this or any query into a view, just put CREATE VIEW <name> AS before it. If you want to change it later, use ALTER VIEW <name> AS.

    Creating Computed Columns
    If you'd prefer not to create a view, inner queries can also be applied by using computed columns. Place your SQL in the (Formula) field of the Computed Column Specification, or check out this article here.

    Advanced Scoring and Ranking
    One of the best uses for this approach is to score leads based on multiple fields. For instance, you may be in a business where customers that don't order every month require more persistent follow-up. You could devise a simple formula that shows the continuity of an account. If they ordered every month since their first order, they would be at 100, indicating that they have been ordering 100% of the time. Here's the query that would calculate that. It uses a few SQL tricks to make this happen. We are extracting the count of unique months and then dividing by the months since the initial order. This query will give you the following information, which can be used to help sales and marketing know where to focus. You could sort by this percentage to know where to start calling, or to find patterns describing your best customers.
    - Number of orders
    - First order date
    - Last order date
    - Percentage of months an order was placed since the first order
    SELECT
        CustomerID,
        (SELECT COUNT(OrderID)
         FROM Orders
         WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
        (SELECT MAX(OrderDate)
         FROM Orders
         WHERE Orders.CustomerID = Customers.CustomerID) AS LastOrder,
        (SELECT MIN(OrderDate)
         FROM Orders
         WHERE Orders.CustomerID = Customers.CustomerID) AS FirstOrder,
        DATEDIFF(mm,
            (SELECT MIN(OrderDate)
             FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID),
            GETDATE()) AS MonthsSinceFirstOrder,
        100 * (SELECT COUNT(DISTINCT 100 * DATEPART(yy, OrderDate) + DATEPART(mm, OrderDate))
               FROM Orders
               WHERE Orders.CustomerID = Customers.CustomerID)
            / DATEDIFF(mm,
                (SELECT MIN(OrderDate)
                 FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS OrderPercent
    FROM Customers

    Read the article

  • Perm SSIS Developer Urgently Required

    - by blakmk
    Job Role
    - To provide dedicated data services support to the company by designing, creating, maintaining and enhancing database objects, ensuring data quality, consistency and integrity. Migrating data from various sources to a central SQL 2008 data warehouse will be the primary function.
    - Migration of data from bespoke legacy databases to the SQL 2008 data warehouse.
    - Understand key business requirements, liaising with various parts of the company.
    - Create advanced transformations of data, with a focus on data cleansing, redundant data and duplication.
    - Create complex business rules regarding data services, migration, integrity and support (best practices).

    Experience
    - Minimum 3 years' SSIS experience, in a project or BI development role, and involvement in at least 3 full ETL project life cycles, using the following methodologies and tools:
      - Excellent knowledge of ETL concepts including data migration & integrity, focusing on SSIS.
      - Extensive experience with SQL 2005 products; SQL 2008 desirable.
      - Working knowledge of SSRS and its integration with other BI products.
      - Extensive knowledge of T-SQL, stored procedures, triggers (table/database), views and functions, in particular coding and querying.
      - Data cleansing and harmonisation.
      - Understanding and knowledge of indexes, statistics and table structure.
      - SQL Agent – scheduling jobs, optimisation, multiple jobs, DTS.
      - Troubleshoot, diagnose and tune database and physical server performance.
      - Knowledge and understanding of locking, blocks, table and index design and SQL configuration.
    - Demonstrable ability to understand and analyse business processes.
    - Experience in creating business rules on best practices for data services.
    - Experience in working with, supporting and troubleshooting MS SQL servers running enterprise applications.
    - Proven ability to work well within a team and liaise with other technical support staff such as networking administrators, system administrators and support engineers.
    - Ability to create formal documentation, work procedures, and service level agreements.
    - Ability to communicate technical issues at all levels, including to a non-technical audience.
    - Good working knowledge of MS Word, Excel, PowerPoint, Visio and Project.

    Location
    Based in Crawley with the possibility of some remote working.

    Contact me for more info: http://sqlblogcasts.com/blogs/blakmk/contact.aspx

    Read the article

  • Utility to Script SQL Server Configuration

    - by Bill Graziano
    I wrote a small utility to script some key SQL Server configuration information. I had two goals for this utility: Assist with disaster recovery preparation Identify configuration changes I’ve released the application as open source through CodePlex. You can download it from CodePlex at the Script SQL Server Configuration project page. The application is a .NET 2.0 console application that uses SMO. It writes its output to a directory that you specify.  Disaster Planning ScriptSqlConfig generates scripts for logins, jobs and linked servers.  It writes the properties and configuration from the instance to text files. The scripts are designed so they can be run against a DR server in the case of a disaster. The properties and configuration will need to be manually compared. Each job is scripted to its own file. Each linked server is scripted to its own file. The linked servers don’t include the password if you use a SQL Server account to connect to the linked server. You’ll need to store those somewhere secure. All the logins are scripted to a single file. This file includes windows logins, SQL Server logins and any server role membership.  The SQL Server logins are scripted with the correct SID and hashed passwords. This means that when you create the login it will automatically match up to the users in the database and have the correct password. This is the only script that I programmatically generate rather than using SMO. The SQL Server configuration and properties are scripted to text files. These will need to be manually reviewed in the event of a disaster. Or you could DIFF them with the configuration on the new server. Configuration Changes These scripts and files are all designed to be checked into a version control system.  The scripts themselves don’t include any date specific information. In my environments I run this every night and check in the changes. I call the application once for each server and script each server to its own directory.  The process will delete any existing files before writing new ones. This solved the problem I had where the scripts for deleted jobs and linked servers would continue to show up.  To see any changes I just need to query the version control system to show many any changes to the files. Database Scripting Utilities that script database objects are plentiful.  CodePlex has at least a dozen of them including one I wrote years ago. The code is so easy to write it’s hard not to include that functionality. This functionality wasn’t high on my list because it’s included in a database backup.  Unless you specify the /nodb option, the utility will script out many user database objects. It will script one object per file. It will script tables, stored procedures, user-defined data types, views, triggers, table types and user-defined functions. I know there are more I need to add but haven’t gotten around it yet. If there’s something you need, please log an issue and get it added. Since it scripts one object per file these really aren’t appropriate to recreate an empty database. They are really good for checking into source control every night and then seeing what changed. I know everyone tells me all their database objects are in source control but a little extra insurance never hurts. Conclusion I hope this utility will help a few of you out there. My goal is to have it script all server objects that aren’t contained in user databases. This should help with configuration changes and especially disaster recovery.
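    For anyone curious what the SMO calls behind such a utility look like, here is a minimal, hypothetical sketch. The server name is an assumption, output goes to the console instead of one file per object, and it does not reproduce the hand-built login script with hashed passwords that the real tool generates.

    using System;
    using Microsoft.SqlServer.Management.Smo;
    using Microsoft.SqlServer.Management.Smo.Agent;

    class ScriptConfigSketch
    {
        static void Main()
        {
            // Assumed server name; the real utility takes this as a parameter.
            Server server = new Server("MYSERVER");

            // Script every SQL Agent job (the utility writes each to its own .sql file).
            foreach (Job job in server.JobServer.Jobs)
            {
                Console.WriteLine("-- Job: " + job.Name);
                foreach (string line in job.Script())
                    Console.WriteLine(line);
            }

            // Script every linked server (passwords are not included, as noted above).
            foreach (LinkedServer linkedServer in server.LinkedServers)
            {
                Console.WriteLine("-- Linked server: " + linkedServer.Name);
                foreach (string line in linkedServer.Script())
                    Console.WriteLine(line);
            }
        }
    }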

    Read the article

  • Part 6: Extensions vs. Modifications

    - by volker.eckardt(at)oracle.com
    Customizations = Extensions + Modifications
    In EBS terminology, a customization can be an extension or a modification. Extension means that you mainly create your own code from scratch. You may utilize existing views, packages and Java classes, but your code is unique. Modifications are quite different, because here you take existing code and change or enhance certain areas to achieve a slightly different behavior. What is important is that it doesn't matter whether you place your code in the same place or in another place – it is a modification. It is also not relevant whether you leave the original code enabled or not! Why? Here is the answer: if the original code piece you have taken as your base gets patched, you need to copy the source again and apply all your changes once more. If you don't do that, you may get different results or write different data compared to the standard – this causes a high risk! Here are some guidelines on how to reduce that risk:
    - Invest a bit longer when searching for objects to select data from. Choose a view rather than a table. In case Oracle development changes the underlying tables, the view will be more stable and is therefore a better choice.
    - Choose public APIs over internal APIs. Same background as before: although the internal structure might change, the public API is more stable.
    - Use personalization and substitution rather than modification. Spend more time checking whether the requirement can be covered with such techniques.
    - Build a project code library and avoid colleagues creating similar functionality multiple times. Otherwise you have to review lots of similar code to determine the need for correction.
    - Use the technique of "flagged files". Flagged files are a way to mark standard deployment files. If you run the patch analysis (within Application Manager), the analysis result will list flagged standard files in case they will be patched. If you maintain a cross-reference to your own CEMLIs, you can easily determine which CEMLIs have to be reviewed.
    - Implement a code review process. This can be done by utilizing team-internal or external persons. If you implement such a team-internal process, your team members will come up with suggestions on how to improve code quality by themselves.
    - Review heavy customizations regularly to identify options to reduce complexity; say, every six months. You do not have to spend days on such a review, but a high-level cross-check of whether the customization can be reduced is suggested.
    - De-install customizations which are no longer required. Define a process for this. Add a section to the technical documentation on how to uninstall them and what the possible implications are.
    - Maintain a cross-reference between CEMLIs, and between CEMLIs, EBS modules and business processes. Keep this list up to date! Share this list!
    By following these guidelines, you are able to improve product stability. Although we might not be able to avoid modifications completely, we can give much better advice to developers and to our test team.
    Summary: Extensions and modifications have to be handled differently during their lifecycle. Modifications imply a much higher risk and should therefore be reviewed more frequently. Good cross-references allow you to give clear advice for the testing activities.

    Read the article

  • Nokia Lumia 920 Windows Phone 8 Announcement

    - by Tim Murphy
    Today Nokia and Microsoft had an event to officially introduce the Lumia 920.  Below is a rundown of some of the things I found interesting. As a person who likes photography there was a lot to drool over.  The main feature that caught my attention was PureView with its optical stabilization.  This alone should improve the majority of you pictures.  Add to that the SmartShoot Object remover that uses multiple images to remove unwanted people or objects that move through your picture and you never have to accept reality again. For the most part the lenses concept introduced in Windows Phone 8 just makes the usability of leveraging camera better.  Of course that is Microsoft’s selling point.  One lens that caught my attention was the Bing lens.  I have to say it is about time that we can take pictures and use them to search for answers using Bing. There were a couple of features shown that involved augmented reality.  One was similar to the yapf application that is already in the market which overlays restaurants and other destination over live camera views.  The other was using the navigation directions with a live view. Then you get down to some of the physical features of the Lumia 920.  The one that got the most stage time is that it has a great 2000mah battery which can be charged wirelessly.  They also pointed out the improved glare reduction of the 4.5 in. curved glass screen.  This hardware improvement is improved further with software that detects glare conditions and adjusts the display attributes to enhance viewing ease. Adding to the wireless cool factor of the Lumia 920 is the general NFC capabilities.  This was demonstrated with NFC docking stations as well as JBL speakers and headphones. There was one more hardware feature that I applauded.  The super sensitive touch screen did away with one of my pet peeves with capacitive touch screens.  You will never have to remove you gloves to operate your phone again.  The mittens that they did the demo with looked more like boxing gloves. I was disappointed with Joe Belfiore said that they were only going to show a couple of new features of the Windows Phone 8 and would hear more at future events.  One of the things he did show is the ability to customize which buttons you preferred as defaults in IE10.  For example you could have the folders button where the refresh button normally is.  He also showed that at long last you can natively take screenshots on your phone.  Hopefully he will be back quickly to give us the rest of the features. The most disappointing part of the event was that we never found out when they would be released or how much they would cost.  Let’s hope this comes soon.  Even with these couple of items still left on my wish list I can’t wait to get my hands on a Lumia 920.  del.icio.us Tags: Windows Phone,Windows Phone 8,Nokia,Lumia,Lumia 920,Microsoft

    Read the article

  • Development Quirk From ASP.NET Dynamic Compilation

    - by jkauffman
    The Problem I got a compilation error in my ASP.NET MVC3 project that tested my sanity today. (As always, names are changed to protect the innocent) The type or namespace name 'FishViewModel' does not exist in the namespace 'Company.Product.Application.Models' (are you missing an assembly reference?) Sure looks easy! There must be something in the project referring to a FishViewModel. The Confusing Part The first thing I noticed was that the error was occurring in a folder clearly not in my project and in files that I definitely had not created: %SystemRoot%\Microsoft.NET\Framework\(versionNumber)\Temporary ASP.NET Files\ App_Web_mezpfjae.1.cs I also ascertained these facts, each of which made me more confused than the last: Rebuild and Clean had no effect. No controllers in the project ever returned a ViewResult using FishViewModel. No views in the project declared that they use FishViewModel. Searching across all files included in the project for “FishViewModel” provided no results. The build server did not report a problem. The Solution The problem stemmed from a file that was not included in the project but still present on the file system. (By the way, if you don’t know this trick already, there is a toolbar button in the Solution Explorer window, “Show All Files”, which allows you to see all files in the file system.) In my situation, I was working on the mission-critical Fish view before abandoning the feature. Instead of deleting the file, I excluded it from the project. However, this was a bad move. It caused the build failure, and in order to fix the error, this file must be deleted. By the way, this file was not in source control, so the build server did not have it. This explains why my build server did not report a problem for me. The Explanation So, what’s going on? This file isn’t even a part of the project, so why is it failing the build? This is a behavior of ASP.NET Dynamic Compilation. It is the same process that occurs when deploying a webpage; ASP.NET compiles the web application’s code. When this occurs on a production server, it has to do so without the .csproj file (which isn’t usually deployed, if you’ve taken your time to do a deployment cleanly). This process has merely the file system available to identify what to compile. So, back in the world of developing the webpage in Visual Studio on my developer box, I run into the situation because the same process is occurring there. This is true even though I have more files on my machine than will actually get deployed. I can’t help but think that this error could be attributed back to the real culprit file (Fish.cshtml, rather than the temporary files) with some work, but at least the error had enough information in it to narrow it down. The Conclusion I had previously been accustomed to the idea that for C# projects, the .csproj file always “defines” the build behavior. This investigation has taught me that I’ll need to shift my thinking a bit to remember that the file system has the final say when it comes to web applications, even on the developer’s machine!
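    To make the failure concrete, here is a hypothetical reconstruction of the orphaned view (the file name, namespace and model type come from the anonymized error above; the markup body and the Name property are assumptions). Because dynamic compilation walks the file system rather than the .csproj, a file like this is compiled even though it is excluded from the project, and the missing FishViewModel type breaks the build:

    @* Fish.cshtml: excluded from the .csproj but still present on disk, so ASP.NET dynamic compilation picks it up *@
    @model Company.Product.Application.Models.FishViewModel

    <h2>@Model.Name</h2>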

    Read the article

  • My .NET Technology picks for 2011

    - by shiju
    My technology predictions for 2011: cloud computing and mobile application development will be the hottest trends of 2011. I expect Windows Azure to be very hot in 2011 and a lot of cloud computing adoption to happen with Windows Azure. Web application scalability will be the big challenge for architects next year, and architecture approaches like CQRS will get some attention. Architects will look at different options for web application scalability, and adoption of NoSQL and document databases will grow in 2011. The following are my technology picks for the .NET stack. Windows Azure Windows Azure will be one of the hottest technologies of 2011, and adoption of the cloud and Windows Azure will get big attention next year. The Windows Azure platform is a flexible cloud-computing platform that lets you focus on solving business problems and addressing customer needs: there is no need to invest upfront in expensive infrastructure, you pay only for what you use, scale up when you need capacity and pull it back when you don’t, and Microsoft handles the patches and maintenance – all in a secure environment with over 99.9% uptime. Silverlight 5 Silverlight is becoming a common technology for a variety of development platforms. You can develop Silverlight applications for the web, the desktop and Windows Phone. The new Silverlight 5 beta will be available during the first quarter of next year with many new capabilities and features. Silverlight 5 will be a powerful development platform for both web-based business apps and rich media solutions. We can expect the final version of Silverlight 5 at the end of 2011. Windows Phone 7 Development Tools Mobile application development will be very hot in 2011, and Windows Phone 7 will be one of the hottest technologies of next year. You can get an introduction to the Windows Phone 7 Development Tools from somasegar’s blog post and the MSDN documentation available here. EF Code First I am a big fan of Entity Framework’s Code First approach and hope that it will attract more people to Entity Framework 4. EF Code First lets you focus on the domain model, which enables Domain-Driven Development for applications. I hope that DDD fans will love the EF Code First approach. Entity Framework 4 now supports three approaches, and these will attract different types of developers. ASP.NET MVC 3 ASP.NET MVC 3 will be the hottest technology in the Microsoft web stack next year. ASP.NET developers will move widely to the ASP.NET MVC Framework from WebForms development. The new Razor view engine is great, and it will increase the adoption of ASP.NET MVC 3. Razor will improve productivity when working with ASP.NET MVC 3 views. You can build great web applications using ASP.NET MVC 3 and jQuery with better maintainability, cleaner generated HTML and even better performance. In my opinion, the best technology stack for web development is ASP.NET MVC 3 with Entity Framework 4 Code First as the ORM. Next year, you can expect more articles on my blog about ASP.NET MVC 3 and Entity Framework 4 Code First. RavenDB NoSQL and document databases will get more attention in the coming year, and RavenDB will be the most notable document database on the .NET stack. RavenDB is an open source (with a commercial option) document database for the .NET/Windows platform developed by Ayende Rahien. It is a .NET-focused document database which comes with a fully functional .NET client API and supports LINQ. 
I have written a few articles on RavenDB, and you can read them here. Managed Extensibility Framework (MEF) Many people haven't realized the power of MEF. MEF lets you create extensible applications and provides a great solution for the runtime extensibility problem. I hope that .NET developers will adopt MEF more in the coming year for their applications. You can get an excellent introduction to MEF from Anoop Madhusudanan’s blog post MEF or Managed Extensibility Framework – Creating a Zoo and Animals
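    As a minimal sketch of the EF Code First approach mentioned above (the Blog/Post domain and all class names here are illustrative assumptions, not from the post), the whole model can live in plain C# classes plus a DbContext, with no EDMX or mapping files:

    using System.Collections.Generic;
    using System.Data.Entity;   // DbContext and DbSet come from the EntityFramework library

    public class Blog
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public virtual ICollection<Post> Posts { get; set; }   // navigation property, handled by convention
    }

    public class Post
    {
        public int Id { get; set; }
        public string Title { get; set; }
        public virtual Blog Blog { get; set; }
    }

    // The context maps the POCOs to tables by convention, keeping the focus on the domain model.
    public class BlogContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }
        public DbSet<Post> Posts { get; set; }
    }

    This is what makes the Domain-Driven Development angle attractive: persistence concerns stay out of the domain classes themselves.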

    Read the article

  • Best approach to depth streaming via existing codec

    - by Kevin
    I'm working on a development system (and game) intended for games set mostly in static third-person views. We produce our scenery by CG and photographic techniques. Our background art is rendered off-line by a production-grade renderer. To allow the runtime imagery to properly interact with the background art, I wrote a program to convert the depth output by Mental Ray into a texture, and a pixel shader to draw a quad such that the Z data comes from the texture. This technique is working out very well, but now we've decided that some of the camera angle changes between scenes should be animated. The animation itself is straightforward to produce from our CG models. We intend to encode it to some HD video codec such as H.264. The problem is that in order to maintain our runtime imagery on the screen, the depth buffer will need to be loaded for each video frame. Due to the bandwidth, the video's depth data will need to be compressed efficiently. I've looked into methods for performing temporal compression of depth info and found an interesting research paper here: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf The method establishes a mapping between 16-bit depth values and YCbCr values. The mapping is tuned to the properties of existing video codecs in order to maximize precision of the decoded depths after the YCbCr has undergone video compression. It allows an existing, unmodified video codec to be used on the backend. I'm looking at how to pull this off with the least possible work. (This design change was unplanned.) Our game engine itself is native C++, presently for Win32 and DirectX, although we've worked hard to keep platform dependence segregated because we intend other ports. We don't have motion video facilities in the engine yet but will ultimately need that anyway for cinematics. I was planning on using some off-the-shelf motion video solution we can plug into our engine, and haven't chosen one yet. This newly added requirement makes selecting one harder since, among other things, we'll now need to bypass colourspace conversion on one of the streams, and will also need to be playing two streams simultaneously in lockstep, on top of, in some cases, audio on one of them (for the cinematics). I'm also wondering if it's possible (or even useful) to do the conversion from YCbCr to depth in a pixel shader, or if it's better to just do it on the CPU and separately load the resulting depth values into a locked texture. The conversion unfortunately does involve branching logic per pixel. (There are more naive mappings that don't need branching, but they produce inferior results.) It could be reduced to a table lookup but the table would be 32MB. Programming is second nature to me but I'm not that experienced with pixel shaders and have zero knowledge of off-the-shelf video solutions. I'd therefore be interested in advice from others who may have dealt more with depth streaming, pixel shaders, and/or off-the-shelf codecs, regarding how feasible the proposed application is and which off-the-shelf video systems out there would best get along with this use case.
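    For context, here is a minimal sketch (written in C#, but language-agnostic in spirit) of the kind of naive, branch-free depth packing the question alludes to. It is an assumption for illustration only, and it is exactly the sort of mapping the linked paper improves on, because the fine bits placed in chroma are damaged by chroma subsampling and quantization:

    // Naive 16-bit depth <-> YCbCr packing (illustrative; inferior to the paper's tuned triangle-wave mapping).
    static void EncodeDepth(ushort depth, out byte y, out byte cb, out byte cr)
    {
        y  = (byte)(depth >> 8);    // coarse depth goes into luma, which codecs preserve best
        cb = (byte)(depth & 0xFF);  // fine depth goes into one chroma channel
        cr = 128;                   // unused chroma channel held at its neutral value
    }

    static ushort DecodeDepth(byte y, byte cb, byte cr)
    {
        return (ushort)((y << 8) | cb);   // no per-pixel branching, so it would also be trivial in a pixel shader
    }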

    Read the article

  • Advice on refactoring PHP Project

    - by b0x
    I have a small SaaS ERP that was written some years ago using PHP. At that time, it didn't use any framework, but the code isn't a mess. Nowadays, the project is growing and I’m now working with 3 more programmers. Often, they ask me why we don’t migrate to a framework such as Laravel. Although I'd love to try Laravel, I run a small business and I don't have the time or money to stop and spend a whole year rebuilding everything from scratch. I need to live and pay the bills. So, I've read a lot about this matter, and I decided that refactoring is the best way to go. Also, I'm not so sure that a framework will make things easy. Business goals are: Make the code easier for newly hired programmers Separate the "view", in order to: release different versions of this product (using the same code), but under different brands and websites at minimum cost (just changing the view) release different versions to fit mobile/tablet. Make different types of this product, selling packages as if they were plugins. Develop custom packages for some customers (like plugins/add-ons that they can buy to put on the main application). Code goals: Introduce best practices and standards for everyone Try to build my own MVC structure Improve validation of data/forms (today they are mixed in both ajax and classes) Create automated testing routines for quality assurance. My current project structure: class\ extra\ hd\ logs\ public_html\ public_html\includes\ public_html\css|js|images\ class\ There are three types of classes. They are all “autoloaded” with something similar to PSR-0, but I don’t use namespaces. 1. class.Something.php Connects to the database using specific methods, e.g. Costumer->list(); It uses “class.Db.php”, which is an abstraction over mysql in every method. 2. class.SomethingProc.php Does things that “join” data coming from “class.Something.php”, like IF/ELSE logic and math operations. 3. class.SomethingHTML.php The classes with the “HTML” suffix implement only static methods and HTML code. A real-life example: All the programmers need to use $cSomething ($c for class) and $arrSomething ($arr for array). Costumer.php (view) <?php $cCostumer = new Costumer(); $arrCostumer = $cCostumer->list(); echo CostumerHTML::table($arrCostumer); ?> Extra\ Stores 3rd-party projects/classes from others, such as MPDF, PHPMailer, etc. Hd\ Stores users’ files outside the wwwroot dir. Logs\ Stores PHP logs and the system's own logs (We have a static Log::error() method that we put in every method of every class) Public_html\ Stores the files that people use. Public_html\includes\ Stores the main “config.php” file and all files that do “ajax things” – ajax.Costumer.php, for example. Help is needed ;) So, as you can see, we have some standards, and also for database things. But I want to write a manual of our rules. Something that I can give to any new programmer at my company so he can get going. This is not totally a mess, but it could be better in light of newer practices. What could I do to separate this as MVC, to have multiple views? Could you give me some tips considering my goals? Keep in mind the different products/custom things for specific customers without breaking the main application. URLs for tutorials, books, etc., would be nice.

    Read the article

  • Documentation and Test Assertions in Databases

    - by Phil Factor
    When I first worked with Sybase/SQL Server, we thought our databases were impressively large but they were, by today’s standards, pathetically small. We had one script to build the whole database. Every script I ever read was richly annotated; it was more like reading a document. Every table had a comment block, and every line would be commented too. At the end of each routine (e.g. procedure) was a quick integration test, or series of test assertions, to check that nothing in the build was broken. We simply ran the build script, stored in the Version Control System, and it pulled everything together in a logical sequence that not only created the database objects but pulled in the static data. This worked fine at the scale we had. The advantage was that one could, by reading the source code, reach a rapid understanding of how the database worked and how one could interface with it. The problem was that this system meant only one developer at a time could work on the database. It was very easy for a developer to accidentally execute the entire build script rather than the selected section on which he or she was working, thereby cleansing the database of everyone else’s work-in-progress and data. It soon became the fashion to work at the object level, so that programmers could check out individual views, tables, functions, constraints and rules and work on them independently. It was then that I noticed the trend to generate the source for the VCS retrospectively from the development server. Tables were worst affected. You can, of course, add or delete a table’s columns and constraints retrospectively, which means that the existing source no longer represents the current object. If, after your development work, you generate the source from the live table, then you get no block or line comments, and the source script is sprinkled with silly square-brackets and other confetti, thereby rendering it visually indigestible. Routines, too, were affected. In our system, every routine had a directly attached string of unit tests. A retro-generated routine has no unit tests or test assertions. Yes, one can still commit our test code to the VCS but it’s a separate module and teams end up running the whole suite of tests for every individual change, rather than just the tests for that routine, which doesn’t scale for database testing. With extended properties, one can get the best of both worlds, and even use them to put blame, praise or annotations into your VCS. It requires a lot of work, though, particularly the script to generate the table. The problem is that there are no conventional names beyond ‘MS_Description’ for the special use of extended properties. This makes it difficult to do splendid things such as ensuring the integrity of the build by running a suite of tests that are actually stored in extended properties within the database and therefore the VCS. We have lost the readability of database source code over the years, and largely jettisoned the use of test assertions as part of the database build. This is not unexpected in view of the increasing complexity of the structure of databases and the number of programmers working on them. There must, surely, be a way of getting them back, but I sometimes wonder if I’m one of very few who miss them.

    Read the article

  • Html.ValidationSummary and Multiple Forms

    - by MightyZot
    Originally posted on: http://geekswithblogs.net/MightyZot/archive/2013/11/11/html.validationsummary-and-multiple-forms.aspxThe Html.ValidationSummary helper writes a div with a list of general errors added to the model state while a request is being serviced. There is generally one form per view or partial view, I think, so often there is only one call to Html.ValidationSummary in the page resulting from the assembly of your views. And, consequently, there is no problem with the markup that Html.ValidationSummary spits out as a result. What if you want to put multiple forms in one view? Even if you create a view model that’s an aggregate of the view models for each form, the error validation summary is going to contain errors from both forms. Check out this screen shot, which shows a page with multiple forms. Notice how the error validation summary shows up twice. Grrr! Errors for the login form also show up in the registration form. Luckily, there is an easy way around this. Pull the errors out of the model state and separate them for each form. You’ll need to identify the appropriate form by setting the key when you make calls to ModelState.AddModelError. Assume in my example that errors for the login form are added to model state using the “LoginForm” key. And, likewise, assume that errors for the registration form are added to model state using the “RegistrationForm” key. An example of that might look like this… // If we got this far, something failed, redisplay form ModelState.AddModelError("LoginForm", "User name or password is not right..."); return View(model); Over in the code for your View, you can pull each form’s errors from the model state using lambda expressions that look like these… var LoginFormErrors = ViewData.ModelState.Where(ms => ms.Key == "LoginForm"); var RegistrationFormErrors = ViewData.ModelState.Where(ms => ms.Key == "RegistrationForm"); Now that you have two collections containing errors, you can display only the errors specific to each form. I’m doing that in my code by removing the calls to Html.ValidationSummary and replacing them with enumerators that look like this… @if(LoginFormErrors.Count() > 0) { <div class="cdt-error-list">     <ul>     @foreach (var entry in LoginFormErrors)     {         foreach (var error in entry.Value.Errors)         {             <li>@error.ErrorMessage</li>         }     }     </ul> </div> } …and for the registration form, the code looks like this… @if(RegistrationFormErrors.Count() > 0) { <div class="cdt-error-list">     <ul>     @foreach (var entry in RegistrationFormErrors)     {         foreach (var error in entry.Value.Errors)         {             <li>@error.ErrorMessage</li>         }     }     </ul> </div> } The result is a nice clean separation of the list of errors that are specific to each form. And, this is important because each form is submitted separately in my case, so both forms don’t generate errors in the same context. As you’ll see in the screen shot below, errors added to the model state when the login form is submitted do not show up in the registration form’s validation summary.

    Read the article

  • CFOs: Do You Have a Playbook for Growth?

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein, Oracle Midsize Programs In most global markets, CFOs are optimistic about their company's growth opportunities. Deloitte's CFO Signals Report, "Time to Accelerate", found that: In the U.K., business optimism is at its highest level in three-and-a-half years. Optimism in North America rose from a strong +42% last quarter (Q2 to Q3 2013) to an even stronger +54%. In the inaugural Southeast Asia survey, 44% of CFOs reported a positive outlook despite worries over the Chinese economy and political uncertainty. Sustainable and profitable business growth doesn't usually happen by accident. Companies need a playbook for growth that's owned by the CFO. And today, that playbook must leverage the six enabling technologies – Social, Big Data, Mobile, Cloud, Analytics, and the Internet of Things (or, as Oracle president Mark Hurd explains, "the Internet of the People"). On Monday June 9 at 2:00 pm Eastern, CFO.com is hosting a webcast, "The CFO Playbook on Growth: How CFOs Can Boost Efficiency and Performance with Automation". “Investing in technology begins with a business-metric-driven business case with clear, tangible business results expected," says John Lieblang, Affiliate Partner with Waterstone Management Group. "The progressive CFO has learned how to forge a partnership with the CIO to align everyone in the 'result value chain' to be accountable for the business results, not just for functional technology.” Click HERE to register. Looking for more news and information about Oracle Solutions for Midsize Companies? Read the latest Oracle for Midsize Companies Newsletter Sign up to receive the latest communications from Oracle’s industry leaders and experts Jim Lein I evangelize Oracle's enterprise solutions for growing midsize companies. I recently celebrated 15 years with Oracle, having joined JD Edwards in 1999. I'm based in Evergreen, Colorado and love relating stories about creativity and innovation whether they be about software, live music, or the mountains. The views expressed here are my own, and not necessarily those of Oracle.

    Read the article

  • Customer Loyalty vs. Customer Engagement: Who Cares?

    - by Jeb Dasteel-Oracle
    Have you read the recent Forbes OracleVoice blog titled Customer Loyalty is Dead. Long Live Engagement!? If you haven’t, take a look. This article prompted lots of conversation in the social realm. Many who read the article voiced their reactions to the headline and now I’m jumping in to add my view. Customer loyalty is still key. It’s the effect, and engagement is the cause. We at least know that to be true for our customers. We are in an age where customers are demanding to be heard. We need them to be actively involved – or engaged – as well. Greater levels of customer engagement, properly targeted, positively correlate with satisfaction. Our data has shown us this over and over. Satisfied customers are more loyal and more willing to vocalize their satisfaction through referencing, and are more likely to purchase again, all of which in turn drives incremental revenue – from the customer doing the referencing AND the customer on the receiving end of that reference. Turning this around completely, if we begin to see the level of a customer’s engagement start to wane, this is an indicator that their satisfaction, loyalty, and future revenue are likely at risk. At Oracle, we’ve put in place many programs to target, encourage, and then track engagement, allowing us to measure engagement as a determinant of loyalty. Some of these programs include our Key Accounts, solution design and architecture, Executive Sponsorship, and executive advisory board programs. Specific programs allow us to engage specific contacts within specific customer organizations (based on role) and then systematically track their engagement activities over time, alongside tracking customer satisfaction, loyalty, referenceability, and incremental revenue contribution. Continuous measurement of engagement allows us to better understand customer views of what it means to partner with a provider and adjust program participation to better meet the needs of the partnership. We can also track across customer segments, and design new programs that are even more effective than the ones we have in place today. In case you missed any of my previous Forbes articles, I’ve included links below for easy access. Award-Winning Companies Put Customers First The Power of Peer Networks: 5 Reasons to Get (and Stay) Involved Technology At Work: Traveling In Style Customer Central: 8 Strategies for Putting Customers at the Core of Your Business Technology at Work: Five Companies Doing IT Right

    Read the article

  • Access Control Lists for Roles

    - by Kyle Hatlestad
    Back in an earlier post, I wrote about how to enable entity security (access control lists, aka ACLs) for UCM 11g PS3.  Well, there was actually an additional security option that was included in that release but not fully supported yet (only for Fusion Applications).  It's the ability to define Roles as ACLs on entities (documents and folders).  Now in PS5, this security option is fully supported.   The benefit of defining Roles for ACLs is that those user roles come from the enterprise security directory (e.g. OID, Active Directory, etc) and thus the WebCenter Content administrator does not need to define them like they do with ACL Groups (Aliases).  So it's a bit of the best of both worlds.  Users are managed through the LDAP repository and are automatically granted or denied access through their group memberships, which are mapped to Roles in WCC.  A different way to think about it is being able to add multiple Accounts to content items...which I often get asked about.  Because LDAP groups can map to Accounts, there has always been this association between the LDAP groups and access to the entity in WCC.  But that mapping had to define the specific level of access (RWDA) and you could only apply one Account per content item or folder.  With Roles for ACLs, it basically takes away both of those restrictions by allowing users to define more than one Role and define the level of access on the fly. To turn on ACLs for Roles, there is a component to enable.  On the Component Manager page, click the 'advanced component manager' link in the description paragraph at the top.   In the list of Disabled Components, enable the RoleEntityACL component. Then restart.  This is assuming the other configuration settings have been made for the other ACLs in the earlier post.   Once enabled, a new metadata field called xClbraRoleList will be created.  If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection. For Users and Groups, these values are automatically picked up from the corresponding database tables.  In the case of Roles, there is an explicitly defined list of choices that are made available.  These values must match the roles that are coming from the enterprise security repository. To add these values, go to Administration -> Admin Applets -> Configuration Manager.  On the Views tab, edit the values for the ExternalRolesView.  By default, 'guest' and 'authenticated' are added.  Once added, you can assign the roles to your content or folder. If you can access the Security Group for that item and you belong to that particular Role, you now have access to that item.  If you don't belong to that Role, you won't! [Extra] Because the selection mechanism for the list is using a type-ahead field, users may not even know the possible choices to start typing.  To help them, one thing you can add to the form is a placeholder field which offers the entire list of roles as an option list they can scroll through (assuming it's a manageable size) and view to know what to type.  By being a placeholder field, it won't need to be added to the custom metadata database table or search engine.  

    Read the article

< Previous Page | 228 229 230 231 232 233 234 235 236 237 238 239  | Next Page >