Search Results

Search found 2945 results on 118 pages for 'scala designer'.


  • SharePoint 2010 release date - is it that important?

    - by CharlesLee
    There has been lots of excitement in the SharePoint community over the last few days as Microsoft have announced the official release date of SharePoint 2010. May 12th is the date for your diaries (RTM in April). The twittersphere has been telling everyone about this news and there is much excitement. The major conferences this year all seem to have a SharePoint 2010 focus, and some are entirely focused on the new product (e.g. the SharePoint Evolution Conference). Now by all accounts Microsoft have plugged some significant functionality gaps that exist in WSS 3.0 and MOSS 2007 and provided some exciting new functionality. You don't need me to tell you about these as the MVPs (and other community members) are doing a sterling job; after all, that is why Microsoft has MVPs in the first place.

    Let's get real for a second though, as there is a significant investment involved in moving to SharePoint 2010. Firstly, you need 64-bit architecture across the board, and for some environments that is a pretty significant roadblock. The development farm, test farm and UAT farm are all going to require the same infrastructure upgrades. To take advantage of the tooling for SP2010 you will need to upgrade to Visual Studio 2010, and your development team is going to require 64-bit hardware/OS too. I would not recommend installing SP2010 in client installation mode (i.e. for Windows 7) on your developer machines; I would use this for demo machines only.

    Something that lots of people seem to forget in all the whooping and hollering about the new release is that a large amount of end-user training is going to be required, as the browser UI has now adopted the ubiquitous ribbon interface and there are other new and more complicated features. SharePoint Designer has also entirely changed in both look and feel, and some significant feature changes have taken place. Lest we forget, some companies have not long upgraded to MOSS 2007 and are yet to see a significant ROI for that project. And then there is the reluctance that most companies feel about implementing v1 Microsoft products.

    This is only the surface of the deeper issues which would be involved in any upgrade process, so I guess I share a small part of the concern voiced by Mark Miller of EndUserSharePoint.com: is SharePoint 2010 relevant? I don't share this sentiment in its entirety, as I firmly believe that all companies should be looking at SharePoint 2010 from day one; however, most large-scale existing implementations of MOSS 2007 are going to be several years away from a serious upgrade project. So should the conference organisers and the SharePoint community as a whole be a little more understanding of the real-world issues? It's easy to get carried away in the excitement of a new product and new tools to play with, but there needs to be a focus on the real-world issues that most people are facing day to day, and for the short-term future (at the very least the next 12 months) that is squarely in the WSS 2.0/3.0 and SPS 2003/MOSS 2007 camps. Don't get me wrong, I am very excited about getting to grips with SharePoint 2010 in the real world and I cannot wait for my first real project to come along, but for now I am just being realistic about the reality for most people who work with SharePoint.
    I have been spending a lot of time on www.sharepointoverflow.com recently, as there is a community of people building up who are committed to answering the real-world questions that folks are dealing with every day. I urge you to take a look and either ask or answer some questions directly from the front line of the SharePoint world.

    Read the article

  • Getting Started with Chart control in ASP.Net 4.0

    - by sreejukg
    In this article I am going to demonstrate the Chart control available in ASP.Net 4 and Visual Studio 2010. Most web applications need to generate reports for business users, and business users are happier viewing results in a graphical format than as raw numbers. For the purpose of this demonstration, I have created a sales table, and I am going to create charts from this sales data.

    I have created an ASP.Net web application project in Visual Studio 2010, with a default.aspx page that I am going to use for the demonstration. First I am going to add a Chart control to the page. The Chart control comes under the Data tab in the toolbox. Drag and drop the Chart control onto the default.aspx page, and Visual Studio adds the markup below to the page.

    <asp:Chart ID="Chart1" runat="server"></asp:Chart>

    As you can see, this is similar to other server controls in ASP.Net, and like the other controls under the Data tab, the Chart control is a data-bound control. So I am going to bind it to my sales data. From the design view, right-click the Chart control and select "Show Smart Tag". Here you need to choose the data source property and the chart type. From the Choose Data Source drop-down, select New data source. In the data source configuration wizard, select SQL database and write the query to retrieve the data.

    First I am going to show the chart for the amount of sales achieved by each salesperson, using the following query inside the SqlDataSource select command:

    SELECT SUM(SaleAmount) AS amount, SalesPerson FROM SalesData GROUP BY SalesPerson

    This query gives me the amount of sales achieved by each salesperson. The markup of the SqlDataSource is as follows.

    <asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:SampleConnectionString %>" SelectCommand="SELECT SUM(SaleAmount) as amount, SalesPerson FROM SalesData GROUP BY SalesPerson"></asp:SqlDataSource>

    Once you have selected the data source for the Chart control, you need to select the X and Y values for the columns. I have entered SalesPerson as the X value member and amount as the Y value member. Press F5 to run the application.

    Using ASP.Net it is much easier to represent your data in graphical format. To show this chart, I didn't write a single line of code. The Chart control is a great tool that helps developers show business intelligence in their applications without using third-party products. I will write another blog that explores further possibilities and shows more reports using the same sales data. If you want to get the project in zipped format, post your email below.
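    If you would rather wire the chart up in code-behind than through the designer, a minimal sketch is shown below. This is not from the original walkthrough: it assumes the Chart1 and SqlDataSource1 controls from the markup above, and the amount and SalesPerson column aliases from the query.

        using System;
        using System.Web.UI.DataVisualization.Charting;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Give the chart an area to draw in and a single series bound
                // to the aggregated sales query defined in SqlDataSource1.
                Chart1.ChartAreas.Add(new ChartArea("MainArea"));

                var series = new Series("SalesByPerson");
                series.ChartType = SeriesChartType.Column; // column chart; swap for Bar, Pie, etc.
                series.XValueMember = "SalesPerson";       // X axis: who made the sale
                series.YValueMembers = "amount";           // Y axis: summed sale amount
                Chart1.Series.Add(series);

                Chart1.DataSourceID = "SqlDataSource1";    // binding happens automatically before render
            }
        }

    The designer approach and the code-behind approach produce the same output; the code route is handy when the series or chart type must be chosen at runtime.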

    Read the article

  • Pie Charts Just Don't Work When Comparing Data - Number 10 of Top 10 Reasons to Never Ever Use a Pie

    - by Tony Wolfram
    When comparing data, which is what a pie chart is for, people have a hard time judging the angles and areas of the multiple pie slices in order to calculate how much bigger one slice is than the others.

    Pie Charts Don't Work

    A slice of pie is good for serving up a portion of dessert. It's not good for making a judgement about how big the slice is, what percentage of 100 it is, or how it compares to other slices. People have trouble comparing angles and areas to each other. Controlled studies show that people will overestimate the percentage that a pie slice area represents. This is because we have trouble calculating the area based on the space between the two angles that define the slice. You can't compare angles and slice areas to each other: human perception and cognition are poor when viewing angles and areas and trying to make a mental comparison. Pie charts overload working memory, forcing the person to make complicated calculations and, at the same time, make a decision based on those comparisons. What's the point of showing a pie chart when you want to compare data, except to say, "Well, the slices are almost the same, but I'm not really sure which one is bigger, or by how much, or what order they are from largest to smallest. But the colors sure are pretty. Plus, I like round things. Oh, was I supposed to make some important business decision? Sorry."

    Bad Choices and Bad Decisions

    Interaction Designers, Graphic Artists, Report Builders, Software Developers, and Executives have all made the decision to use pie charts in their reports, software applications, and dashboards. It was a bad decision. It was a poor choice. There are always better options and choices, yet the designer still made the decision to use a pie chart. I'll explore why people make such poor choices in my upcoming blog entries. (Hint: It has more to do with emotions than with analytical thinking.)

    I've outlined my opinions and arguments about the evils of using pie charts in "Countdown of Top 10 Reasons to Never Ever Use a Pie Chart." Each of my next 10 blog entries will support these arguments with illustrations, examples, and references to studies. But my goal is not to continuously and endlessly rage against the evils of using pie charts. This blog is not about pie charts. This blog is about understanding why designers choose to use a pie chart. Why, when given better alternatives, and acknowledging the shortcomings of pie charts, do designers over and over again still freely choose to place a pie chart in a report?

    As an extra treat and parting shot, check out the pie chart that Wikipedia uses to illustrate the United States population by state. Remember, somebody chose to use this pie chart, with all its glorious colors, and posted it on Wikipedia for all the world to see. My next blog will give you a better alternative for displaying comparable data - the sorted bar chart.
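    The promised alternative, the sorted bar chart, starts with nothing more than ordering the data before plotting it. As a hedged aside (this sketch is not from the original post, and the state names and values are made up for illustration), the preparation step might look like this:

        using System;
        using System.Linq;

        class SortedBarPrep
        {
            static void Main()
            {
                // Hypothetical slice data a designer might otherwise feed to a pie chart.
                var shares = new[]
                {
                    new { State = "California", Population = 39.0 },
                    new { State = "Texas",      Population = 29.0 },
                    new { State = "Florida",    Population = 21.0 },
                    new { State = "New York",   Population = 20.0 },
                };

                // Sorting descending makes rank and relative size readable at a glance,
                // which is exactly the comparison task that pie slices obscure.
                foreach (var s in shares.OrderByDescending(x => x.Population))
                    Console.WriteLine($"{s.State,-12} {new string('#', (int)s.Population)}");
            }
        }

    Even this crude text output makes "which is biggest, and by how much" obvious in a way the equivalent pie slices never could.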

    Read the article

  • SharePoint 2010 Hosting :: SharePoint 2010 Custom Web Template

    - by mbridge
    SharePoint 2010 offers some changes and additions to the SharePoint 2007 approach. Site definitions and publishing providers remain largely the same, but site templates created from the SharePoint UI or SharePoint Designer are now saved to a .WSP file, the same solution deployment packaging file format used for deploying custom SharePoint solutions. Site templates saved to a .WSP solution file can be imported into Visual Studio for additional customization.

    Introducing the WebTemplate Feature Element

    The WebTemplate element, introduced in SharePoint 2010, allows site templates to be defined and deployed as a Feature as part of a solution package. A WebTemplate element feature can be used to deploy site templates in either a Farm or Sandbox solution - without modification. If deployed as a Farm feature and solution, site templates will appear in the site collection provisioning page in Central Administration and can be used to provision new site collections, or within a site collection to create sub-sites. If deployed as a Site feature and Sandbox solution, site templates will appear within the site collection to support creating a root site or sub-sites.

    Creating a new WebTemplate Feature in Visual Studio 2010

    In addition to supporting the ability to save and import site templates created from the SharePoint UI into Visual Studio for customization, Visual Studio 2010 can also be used to create new site templates from scratch. In the following sample we will walk through how to create a new WebTemplate solution based on a customized version of the out-of-box Blank Site.

    1. Create a new Empty SharePoint Project in Visual Studio 2010.

    2. Add a new Empty Element to the project. We like to create folders for each type of element in our solution, so in our sample we have created a Web Templates folder and then added the BLANKENT element. NOTE: The Elements folder MUST share the same name as the WebTemplate Name property.

    3. Open the empty Elements.xml and add the <WebTemplate /> element block.

    4. Copy the default.aspx and ONET.XML files from the STS site definition location at 14\TEMPLATE\SiteTemplates\STS. We will customize the ONET.XML in the next section. Open the properties for each file and set the Deployment Type to ElementFile. This ensures the files are deployed with the Element when included in a Feature.

    5. By default a new feature is added to the solution automatically when a new element is added. Rename and edit the feature as appropriate. Select Farm for the scope to deploy the WebTemplate to the entire farm, or Site for a sandboxed solution.

    Customize the ONET.XML

    At this point you have a working WebTemplate solution that will deploy a site identical to the out-of-box Blank Site. However, the ONET.XML supporting the STS site definition contains three configurations - essentially three separate site templates - and can be simplified before customizing. In the following sample, we have trimmed the ONET.XML to the essentials for a single site template, and added references to the <SiteFeatures /> and <WebFeatures /> elements to include the SharePoint Standard and Enterprise features. We have left the top-level navigation bar and the default page module intact, but removed all other extraneous markup.

    Read the article

  • They may block off Howard Street—but Oracle OpenWorld is a two-way street.

    - by Oracle Accelerate for Midsize Companies
    by Jim Lein, Sr. Director, Oracle Accelerate for Midsize Companies

    "Engineered to Inform and Inspire" - that's the theme of Oracle OpenWorld 2012. In early October, tens of thousands of attendees will descend on the streets of San Francisco because they share one thing in common: the desire to learn more about Oracle. You might think that's the way we, Oracle employees, look at this event - as just another opportunity for attendees to learn about what we do. But it's really a two-way street. Every year I'm amazed by how informed and inspired I am by our customers and their companies.

    Midsize companies buy Oracle to grow. As part of the Oracle Accelerate for Midsize Companies team I get to talk with our partners and business leaders at growing companies almost every day, usually via phone. Oracle OpenWorld presents the perfect opportunity to meet some of them in person, in an informal setting, and in one of the most beautiful cities in the world. The stories our customers tell me about their businesses provide vivid examples of how they have overcome the challenges of managing increasingly complex global operations and growing during uncertain economic conditions.

    It's no secret that my favorite session at Oracle OpenWorld (besides Larry Ellison's keynotes and the Customer Appreciation Event, of course) is the Oracle Accelerate Customer Panel. This year we're featuring executives from three companies who deployed Oracle ERP rapidly to support their company's growth: Chris Powell, VP and Corporate Controller of Beats by Dr. Dre, a California-based designer and manufacturer of premium headphones (sorry, no free samples); Iñaki Zuazo, CIO of Industrias Juno, a building materials provider based in Spain; and Kamran Moosa, Project Coordinator for Spartan Engineering, a provider of engineering and construction support services for an LPG storage project in Texas. That's a pretty diverse lineup, and it will be interesting to hear the perspectives of both IT and financial project stakeholders. The session, "Oracle Accelerate Customer Case Studies: Rapid Deployment of Oracle Applications", is at 3:30 pm on Wednesday, October 3, in the Concert room at the Palace Hotel.

    Oracle loves our hometown of San Francisco and it's a great place to host Oracle OpenWorld. It's now San Francisco's largest conference and the city closes off Howard Street to better accommodate the attendees. Some Bay Area commuters may be inconvenienced for a few days by this closure, but the conference brings about $100 million into the local economy. Now that's a two-way street.
    More Oracle Accelerate at Oracle OpenWorld

    "Faster, Better, Cheaper Application Deployment with Oracle Business Accelerators", Monday, October 1, 10:45 a.m., Moscone West Room 3016

    "Oracle Accelerate and Oracle Business Accelerators for Midsize Companies" (partners only), Wednesday, October 3, 10:15 a.m., Marriott - Golden Gate B

    Visit the Oracle Accelerate and Oracle Business Accelerator Kiosk in the Moscone West Exhibit Grounds

    Download the Focus On Oracle Accelerate for Midsize Companies document

    Read the article

  • Silverlight Cream for March 30, 2010 -- #825

    - by Dave Campbell
    In this Issue: Jeremy Likness, Tim Greenfield, Tim Heuer, ondrejsv, XAML Ninja, Nikhil Kothari, Sergey Barskiy, Shawn Oster, smartyP, Christian Schormann(-2-), and John Papa and Glenn Block.

    Shoutouts:

    Victor Gaudioso produced a RefCard for DZone: Getting Started with Silverlight and Expression Blend. Way to go Victor... it looks great!

    Gavin Wignall announced Metia launch FourSquare and Bing Maps mash-up - called Near.me

    Cheryl Simmons talks about VS2010 and the design surface: Changing Templates with the Silverlight Designer (and seeing the changes immediately)

    Michael S. Scherotter posted that New York Times Silverlight Kit Updated for Windows Phone 7 Series

    Jaime Rodriguez posted about 2 free chapters in his new book (with Yochay Kiriaty): A Journey Into Silverlight On Windows Phone - via Learning Windows Phone Programming

    Did you know there was "MSDN Radio"? Tim Heuer posted follow-up answers to this morning's show: MSDN Radio follow-up answers: Prism for Silverlight, DomainServices and relationships

    Michael Klucher posted a great set of links for WP7 game development this morning: Great Game Development Tutorials for Windows Phone

    Zhiming Xue has 3 pages of synopsis and links for everything Windows Phone at MIX. This is the 1st, but at the top of the pages are links to the other two: Windows Phone 7 Content From MIX10 - Part I

    From SilverlightCream.com:

    Using WriteableBitmap to Simplify Animations with Clones - Jeremy Likness takes a break from his LOB posts to demonstrate a page-flip animation, using WriteableBitmap to simplify the animation with clones.

    SAX-like Xml parsing - Want some experience or fun with Rx? Tim Greenfield has a post up on building an observable XmlReader.

    Installing Silverlight applications without the browser involved - Last night I blogged Mike Taulty's take on the "silent install" for an OOB app; tonight I'm posting Tim Heuer's insight on the topic.

    How to: Create computed/custom properties for sample data in Blend/Sketchflow - ondrejsv posted an example of digging into the files that control the sample data for Blend to get what you really want.

    PathListBox Adventures - radial layout - Check out the radial layout XAML Ninja did using the PathListBox... and all code available.

    RIA Services and Validation - Nikhil Kothari has a great (duh!) post up that follows his Silverlight TV on the same subject: RIA Services and validation... lots of good external links also.

    Windows Phone 7 Application with OData - Sergey Barskiy built an OData WP7 app using the feed from MIX10. You can see a list of sessions, and click on one to see details.

    Getting Blur And DropShadow to work in the Windows Phone Emulator - Shawn Oster responds to some forum questions about Blur and DropShadow effects not showing up in the WP7 emulator, and gives the code trick we have to do for now.

    Metro Icons for Windows Phone 7 - We all got the other icon set for WP7 from MSDN, but smartyP pulled the Metro icons from the PPT deck of the MIX10 presentations... good job!

    Fonts in SketchFlow - Christian Schormann talks about fonts in SketchFlow, where they live on your machine, and how you can use them.

    Blend 4: About Path Layout, Part III - Christian Schormann also has Part III of his epic tutorial up on Path Layout and Blend. This one is on dynamic resizing layouts, and he has links back to the other two if you missed them... or you can find them with a search at SilverlightCream... :)

    Simple ViewModel Locator for MVVM: The Patients Have Left the Asylum - John Papa and Glenn Block teamed up to solve the View First model without the maintenance involved with the ViewModel locator, by using MEF. It only took these guys an hour... sigh... :)

    Stay in the 'Light!

    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream

    Join me @ SilverlightCream | Phoenix Silverlight User Group

    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Adobe Photoshop CS5 vs Photoshop CS5 extended

    - by Edward
    Adobe Photoshop has been an industry standard for most web designers and photographers worldwide. Photoshop CS5 has made photo editing much more refined, and the composition process has become much easier than ever before. To weigh the advantages of Photoshop CS5 Extended over Photoshop CS5 we have written this comparison article, from both a designer's and a photographer's perspective. Hopefully it will help you in your buying/upgrade decision.

    Photoshop CS5

    Photoshop CS5 refines the toolset with powerful photography features. It makes the editing process easier, with fewer steps involved to remove noise, add grain, create vignettes, correct lens distortions, sharpen, and create HDR images. It offers quick image correction and color and tone control for professional purposes. Intelligent image editing and enhancement and extraordinarily advanced compositing make it a better tool for photographers than earlier versions. It allows users to accelerate their workflow with fast performance on 64-bit Windows® and Mac hardware systems, and smoother interactions thanks to more GPU-accelerated features. It also boasts state-of-the-art raw processing with Adobe Photoshop Camera Raw 6, helping you maximize creative impact. It provides tremendous precision and freedom: you can easily select intricate image elements, such as hair, create realistic painting effects, or remove any image element and see the space fill in almost magically. It offers easy access to core editing, a streamlined workflow, and a flexible working environment, along with new creative tools and content.

    Photoshop CS5 Extended

    Photoshop CS5 Extended is quite innovative: it incorporates 3D elements into 2D artwork directly within the digital imaging application, giving users an easy on-ramp to 3D image creation, along with 3D editing. It has intelligent image editing and enhancement, advanced compositing, and an extraordinary painting and drawing toolset. It provides for video and animation design, and helps you work with specialized images for architecture, manufacturing, engineering, science, and medicine.

    Where CS5 Extended scores over CS5

    CS5 Extended has many features that are not included in CS5:

    Technology for creating 3D extrusions
    3D material library and picker
    Depth of field for 3D
    3D merging and scene composition improvements
    3D workflow improvements
    Customization of 3D features
    Image-based light sources
    Shadow catcher for shadow creation
    Enhanced ray tracer
    Context-sensitive widgets, which allow easy control of objects, lights and cameras
    Overlays for materials and mesh boundaries

    Photoshop CS5 Extended incorporates all the features of CS5 plus these more advanced features, including 3D creation and editing.

    Redefining the Image-Editing Experience: A Photographer's Point of View

    Photoshop CS5 delivers amazing features and creative options so even new users can perform advanced image manipulations and compositions. The breathtaking image intelligence behind Content-Aware Fill magically removes any image detail or object, examines the surroundings and seamlessly fills in the space left behind, matching the lighting, tone and noise of the surrounding area. The new Refine Edge makes nearly-impossible image selections possible. Masking was never easier: even the toughest types of edges, such as hair and foliage, are easier to fix.

    To sum up, following are a few advantages of CS5 and CS5 Extended over previous versions:

    64-bit processing
    Content-Aware Fill
    Refine Edge, which makes nearly-impossible image selections possible
    HDR Pro, including ghost artifact removal and HDR toning, which gives the look of HDR with a single exposure
    New brush options
    Improved image management with enhanced Adobe Bridge
    Lens corrections
    Improved black-and-white conversions
    Puppet Warp: precisely reposition or warp any image element
    Adobe Camera Raw 6

    Pricing and Availability

    Adobe Photoshop CS5 and CS5 Extended are available through Adobe Authorized Resellers and the Adobe Store. Estimated street price for Adobe Photoshop CS5 is US$699, and US$999 for Photoshop CS5 Extended. Upgrade pricing and volume licensing are also available.

    Read the article

  • Closer look at the SOA 12c Feature: Oracle Managed File Transfer

    - by Tshepo Madigage-Oracle
    The rapid growth of cloud-based applications in the enterprise, combined with organizations' desire to integrate applications with mobile technologies, is dramatically increasing application integration complexity. To meet this challenge, Oracle introduced Oracle SOA Suite 12c, the latest version of the industry's most complete and unified application integration and SOA solution. With simplified cloud, mobile, on-premises, and Internet of Things (IoT) integration capabilities, all within a single platform, Oracle SOA Suite 12c helps organizations speed time to integration, improve productivity, and lower TCO.

    To extend its B2B solution capabilities with Oracle SOA Suite 12c, Oracle unveiled Oracle Managed File Transfer, an integrated solution that enables organizations to virtually eliminate file transfer complexities. This allows customers to load data securely into Oracle Cloud applications as well as third-party cloud or partner applications.

    Oracle Managed File Transfer (Oracle MFT) enables secure file exchange and management with internal departments and external partners. It protects against inadvertent access to unsecured files at every step in the end-to-end transfer of files. It is easy to use, especially for non-technical staff, so you can free up more resources to manage the transfer of files. The extensive reporting capabilities allow you to get the status of a file transfer quickly and resubmit it as required. You can protect data in your DMZ by using the SSH/FTP reverse proxy.

    Oracle Managed File Transfer can help integrate applications by transferring files between them in complex use-case patterns:

    Standalone: transferring files on its own, using embedded FTP and sFTP servers and the file systems to which it has access.

    SOA integration: a SOA application can be the source or target of a transfer. A SOA application can also be the common endpoint for the target of one transfer and the source of another.

    B2B integration: a B2B application can be the source or target of a transfer. A B2B application can also be the common endpoint for the target of one transfer and the source of another.

    Healthcare integration: a Healthcare application can be the source or target of a transfer. A Healthcare application can also be the common endpoint for the target of one transfer and the source of another.

    Oracle Service Bus (OSB) integration: Oracle MFT can integrate with Oracle Service Bus web service interfaces. An OSB interface can be the source or target of a transfer. An Oracle Service Bus interface can also be the common endpoint for the target of one transfer and the source of another.

    Hybrid integration: Oracle MFT can be one participant in a web of data transfers that includes multiple application types.

    Oracle Managed File Transfer has four user roles: file handlers, designers, monitors, and administrators.

    File handlers:
    - Copy files to file transfer staging areas, which are called sources.
    - Retrieve files from file transfer destinations, which are called targets.

    Designers:
    - Create, read, update and delete file transfer sources.
    - Create, read, update and delete file transfer targets.
    - Create, read, update and delete transfers, which link sources and targets in complete file delivery flows.
    - Deploy and test transfers.

    Monitors:
    - Use the Dashboard and reports to ensure that transfer instances are successful.
    - Pause and resume lengthy transfers.
    - Troubleshoot errors and resubmit transfers.
    - View artifact deployment details and history.
    - View artifact dependence relationships.
    - Enable and disable sources, targets, and transfers.
    - Undeploy sources, targets, and transfers.
    - Start and stop embedded FTP and sFTP servers.

    Administrators:
    - All file handler tasks
    - All designer tasks
    - All monitor tasks
    - Add other users and determine their roles
    - Configure user directory permissions
    - Configure the Oracle Managed File Transfer server
    - Configure embedded FTP and sFTP servers, including security
    - Configure B2B and Healthcare domains
    - Back up and restore the Oracle Managed File Transfer configuration
    - Purge transferred files and instance data
    - Archive and restore instance data and payloads
    - Import and export metadata

    You will find all the related information about Oracle Managed File Transfer in the SOA 12.1.3 documentation: Using Oracle Managed File Transfer.

    Resources and links:
    Oracle Unveils Oracle SOA Suite 12c
    Oracle Managed File Transfer
    Oracle Managed File Transfer SOA 12c White Paper

    For further enquiries don't hesitate to contact us at [email protected] and join our Partner Webcast on Oracle SOA Suite 12c.

    Read the article

  • Unreal Tournament 3 vs UDK: What Should I Choose?

    - by Matt Christian
    Many people in the mod community were very excited to see the release of the Unreal Developer Kit (UDK) a few months ago. Along with generating excitement in a very dedicated community, it also introduced many new modders to a flourishing area of indie development. However, since UDK is free, most beginners jump right into UDK, which is OK - though you might just benefit more from purchasing a shelf copy of Unreal Tournament 3.

    UDK

    UDK is a free full version of UnrealEd (the editor environment used to create games like Gears of War 1/2, Bioshock 1/2, and of course Unreal Tournament 3). It gives you all the features of the editor from the shelf copy of the game, plus some refinements in many of the tools. (One of the first things you'll find about UnrealEd is that it's a collection of tools grouped into the same editor, so it really isn't a single 'tool'.)

    Interestingly enough, Epic is allowing you to sell any game made in UDK, with a few catches. First off, you must purchase a license for your game (which, I think, starts at approximately $99). Secondly, you must pay 25% of all profits for the first $5,000 of your game revenue to them (about $1,250). Finally, you cannot use any of the 'media' provided in UDK for your game. UDK provides sample meshes, textures, materials, sounds, and other sample pieces of media pulled (mostly) from Unreal Tournament 3.

    The final point here will really determine whether you should use UDK. There is only a very small amount of media provided in UDK for someone to go in and begin creating levels without first developing their own meshes, textures, and other media. Sure, you can slap together a few unique levels, but you will end up finding yourself restricted to the same items over and over and over. This is absolutely how professional game development works: you are 'given' (typically licensed or built in-house) an engine/editor and you begin creating all the content for the game and placing it. UDK is aimed toward those who really want to build their game content from scratch with a currently existing engine. It is not suited for someone who would like to simply build levels and quick mods without learning external 3D programs and image-editing software.

    Unreal Tournament 3

    Unless you have a serious grudge against FPSs, Epic, or your computer sucks, there really is no reason not to own this game for PC. You can pick it up on Steam or Amazon for around $20 brand new. Not only are you provided with a full single-player and multiplayer game, but you are given the entire UnrealEd 3.0, including all of the content used to build UT3. If you want to start building levels and mods quickly for UT3, you should absolutely pick up a shelf copy.

    However, as the off-the-shelf UT3 is a few years old now, the tools have not been updated for quite a while. Compared to UDK, the menus are more difficult to navigate through and take more time to get used to. Since UDK is updated almost every month, there are additions to the editor that may not be in UT3 (including the future addition of 3D!). I haven't worked enough with the shelf copy of UT3 to see if there are more features in UDK or if they both offer the same features in different forms, but you should remember that the Unreal Engine 3.0 has undergone numerous upgrades between its launch and Gears of War 2 (in fact, Epic held a conference to show off what changed just between the Gears of War games).

    Since UT3 has much more core content, someone who wants to focus on level editing or modding the core UT3 game may find their needs better suited by an off-the-shelf copy of UT3. If that level designer has a team that is generating custom assets, they may be better off with UDK. The choice is now yours...

    Read the article

  • SQL Server source control from Visual Studio

    - by David Atkinson
    Developers have long had to context-switch between two IDEs: Visual Studio for application code development, and SQL Server Management Studio for database development. While this is accepted, especially given the richness of the database development feature set in SSMS, loading a separate tool can seem a little like overkill. This is where SQL Connect comes in. This is an add-in to Visual Studio that provides a connected development experience for the SQL Server developer.

    Connected database development involves modifying a development sandbox database, as opposed to offline development, where SQL text files are modified independently of the database. One of the main complaints about Data Dude (VS DBPro) is that it enforces the offline approach. This gripe is what SQL Connect addresses.

    If you don't already use SQL Source Control, you can get up and running with SQL Connect by adding a new project to your Visual Studio solution, then choosing your existing development database, and you're ready to go.

    If you already use SQL Source Control, you will need to link SQL Connect to your existing database scripts folder repository, so SQL Connect and SQL Source Control can be used collaboratively (note that SQL Source Control v3.0.9.18 or later is required). Locate the repository (this can be found in the Setup tab in SQL Source Control) and create a working folder for it (here I'm using TortoiseSVN). Back in Visual Studio, locate the SQL Connect panel (in the View menu if it hasn't auto-loaded), select Import SQL Source Control project, locate your working folder and click Import. This creates a Red Gate database project under your solution.

    From here you can modify your development database and manage your changes in source control. To associate your development database with the project, right-click on the project node, select Properties, set the database and Save. Now you're ready to make some changes. Locate the object you'd like to modify in the Solution Explorer, and double-click it to invoke a query window or table designer. You also have the option to edit the creation SQL directly using Edit SQL File in Project.

    Keeping the development database and the Visual Studio project in sync is as easy as clicking on a button. Once you've made your change, you can use whichever mechanism you choose to commit to source control. Here I'm using the free open-source AnkhSVN to integrate Subversion with Visual Studio.

    Maintaining your database in a Visual Studio solution means that you can commit database changes and application code changes in the same changeset. This is desirable if you have continuous integration set up, as you want to ensure that all files related to a change are committed atomically, so you avoid an interim "broken build".

    More discussion of SQL Connect and its benefits can be found in the following article on Simple Talk: No More Disconnected SQL Development in Visual Studio. The SQL Connect project team is currently assessing the backlog for the next development effort, and they'd appreciate your feature suggestions, as well as your votes on their suggestions site: http://redgate.uservoice.com/forums/140800-sql-connect-for-visual-studio-

    A 28-day free trial of SQL Connect is available from the Red Gate website.

    Technorati Tags: SQL Server

    Read the article

  • Customize Entity Framework SSDL & SQL Generation

    - by Dane Morgridge
    In almost every talk I have done on Entity Framework I get questions on how to produce custom SSDL or SQL when using model-first development. Quite a few of these questions have required custom changes to the SSDL, which of course can be a problem if it is getting auto-generated. Luckily, there is a tool that can help. In the Visual Studio Gallery on MSDN, there is the Entity Designer Database Generation Power Pack. It gives you the ability to select different generation strategies, and it also allows you to inject custom T4 templates into the generation workflow so that you can customize the SSDL and SQL generation. When you select to generate a database from a model, the dialog is replaced by one with more options.

    You can clone the individual workflow for either the current project or the current machine. The templates are installed at "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\Entity Framework Tools\DBGen" on my local machine, and you can make a copy of any template there. If you clone the strategy and open it up, you will see the workflow: each item in the sequence defines the execution of a T4 template. The XAML for the workflow is listed below so you can see where the T4 files are defined. You can simply make a copy of an existing template and make whatever changes you need.

    <Activity x:Class="GenerateDatabaseScriptWorkflow" ... >
      <x:Members>
        <x:Property Name="Csdl" Type="InArgument(sde:EdmItemCollection)" />
        <x:Property Name="ExistingSsdl" Type="InArgument(s:String)" />
        <x:Property Name="ExistingMsl" Type="InArgument(s:String)" />
        <x:Property Name="Ssdl" Type="OutArgument(s:String)" />
        <x:Property Name="Msl" Type="OutArgument(s:String)" />
        <x:Property Name="Ddl" Type="OutArgument(s:String)" />
        <x:Property Name="SmoSsdl" Type="OutArgument(ss:SsdlServer)" />
      </x:Members>
      <Sequence>
        <dbtk:ProgressBarStartActivity />
        <dbtk:CsdlToSsdlTemplateActivity SsdlOutput="[Ssdl]" TemplatePath="$(VSEFTools)\DBGen\CSDLToSSDL_TPT.tt" />
        <dbtk:CsdlToMslTemplateActivity MslOutput="[Msl]" TemplatePath="$(VSEFTools)\DBGen\CSDLToMSL_TPT.tt" />
        <ded:SsdlToDdlActivity ExistingSsdlInput="[ExistingSsdl]" SsdlInput="[Ssdl]" DdlOutput="[Ddl]" />
        <dbtk:GenerateAlterSqlActivity DdlInputOutput="[Ddl]" DeployToScript="True" DeployToDatabase="False" />
        <dbtk:ProgressBarEndActivity ClosePopup="true" />
      </Sequence>
    </Activity>

    So as you can see, this tool enables you to make some pretty heavy customizations to how the SSDL and SQL get generated. You can get more info, and download the tool, from: http://visualstudiogallery.msdn.microsoft.com/en-us/df3541c3-d833-4b65-b942-989e7ec74c87. There is a comments section on the site, so make sure you let the team know what you like and what you don't like. Enjoy!

    Read the article

  • How the "migrations" approach makes database continuous integration possible

    - by David Atkinson
    Testing a database upgrade script as part of a continuous integration process will only work if there is an easy way to automate the generation of the upgrade scripts.

    There are two common approaches to managing upgrade scripts. The first is to maintain a set of scripts as you go along. Many SQL developers I've encountered will store these in a folder, prefixed numerically to ensure they are ordered as they are intended to be run. Occasionally there is an accompanying document or a batch file that ensures that the scripts are run in the defined order. Writing these scripts during the course of development requires discipline. It's all too easy to load up the table designer and make a change directly to the development database, rather than save off the ALTER statement that is required when the same change is made to production. This discipline can add considerable overhead to the development process. However, come the end of the project, everything is ready for final testing and deployment.

    The second development paradigm is to not do the above. Changes are made to the development database without considering the incremental update scripts required to effect the changes. At the end of the project, the SQL developer or DBA is tasked with working out what changes have been made, and hand-crafting the upgrade scripts retrospectively. The end of the project is the wrong time to be doing this, as the pressure is mounting to ship the product. And where data deployment is involved, it is prudent not to feel rushed.

    Schema comparison tools such as SQL Compare have made this latter technique more bearable. These tools work by analyzing the before and after states of a database schema, and calculating the SQL required to transition the database. Problem solved? Not entirely. Schema comparison tools are huge time savers, but they have their limitations. There are certain changes that can be made to a database that can't be determined purely from observing the static schema states. If a column is split, how do we determine the algorithm required to copy the data into the new columns? If a NOT NULL column is added without a default, how do we populate the new field for existing records in the target? If we rename a table, how do we know we've done a rename, as we could equally have dropped a table and created a new one?

    All the above are examples of situations where developer intent is required to supplement the script generation engine. SQL Source Control 3 and SQL Compare 10 introduced a new feature, migration scripts, allowing developers to add custom scripts to replace the default script generation behavior. These scripts are committed to source control alongside the schema changes, and are associated with one or more changesets. Before this capability was introduced, any schema change that required additional developer intent would break any attempt at auto-generation of the upgrade script, rendering deployment testing as part of continuous integration useless. SQL Compare will now generate upgrade scripts not only using its diffing engine, but also using the knowledge supplied by developers in the guise of migration scripts.

    In future posts I will describe the necessary command-line syntax to leverage this feature as part of an automated build process such as continuous integration.
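    To make the "testing as part of CI" idea concrete, a CI job ultimately just needs to apply the generated upgrade script to a scratch database and fail the build on any error. The sketch below is not from the original article, and the file path, database name, and connection string are all assumptions; it also assumes the script contains no GO batch separators, which plain ADO.NET cannot execute.

        using System;
        using System.Data.SqlClient;
        using System.IO;

        class UpgradeScriptSmokeTest
        {
            static void Main()
            {
                // Hypothetical artifact path produced earlier in the build.
                string script = File.ReadAllText(@"C:\build\artifacts\upgrade.sql");

                using (var conn = new SqlConnection(
                    @"Server=(local);Database=CiScratchDb;Integrated Security=true"))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand(script, conn))
                    {
                        cmd.CommandTimeout = 300; // generous timeout for large upgrades
                        cmd.ExecuteNonQuery();    // any SQL error throws and fails the build
                    }
                }
                Console.WriteLine("Upgrade script applied cleanly.");
            }
        }

    In a real pipeline this would run against a database restored from a production backup, so the migration scripts are exercised against representative data rather than an empty schema.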

    Read the article

  • Customize SharePoint list using InfoPath2010 form Part4

    - by ybbest
    Customize SharePoint list using InfoPath2010 form Part1
    Customize SharePoint list using InfoPath2010 form Part2
    Customize SharePoint list using InfoPath2010 form Part3

    In this post, I'd like to show you how to create print functionality in an InfoPath form for a SharePoint list. The print functionality is provided out of the box in an InfoPath form library; however, it is not available in a SharePoint list. Here are the steps to create the print functionality. You can download the new form here.

    1. Create a print page in the list by copying and pasting displayifs.aspx, renaming the copy to Printifs.aspx.

    2. Open the page in SharePoint Designer and copy the following JavaScript into the PlaceHolderTitleAreaClass ContentPlaceHolder:

    <script type="text/javascript">
    $(document).ready(function(){
      // Hide the ribbon, title row and left panel so only the form prints.
      $("[id^='Ribbon']").hide();
      $(".s4-title").hide();
      $("[id='s4-leftpanel']").hide();
      $("[id='s4-ribbonrow']").hide();
      $("[id='s4-titlerow']").hide();
      $("[id='s4-titlerow']").css("height", "0px");
      // Restyle the body and form area for a clean printed layout.
      $("body").css("background-color", "white");
      $("body").css("zoom", "135%");
      $("[id='MSO_ContentTable']").css("margin-left", "0px");
      $("[id='MT-BodyContent']").css("width", "900px");
      $(".MT-BodyArea").css("width", "900px");
      $("[id='MT-Layout']").css("width", "900px");
      $(".ms-bodyareacell").css("width", "900px");
      $(".s4-wpTopTable").css("border", "none");
      $("[id$='XmlFormView']").css("margin-left", "-80px");
      $("body").css("margin-top", "-30px");
      $(":contains('CAPEX')").css("border", "5px solid #FFCC00");
      // Open the browser's print dialog once the page is restyled.
      window.print();
    });
    </script>

    3. Open the InfoPath form for the list and create a field called PrintLink.

    4. Set the default value of PrintLink so that it points to the print page created above, passing the item's ID in the query string. You can download the formula for the default value here.

    5. Add a new image that looks like a Print button on the display view, so that its URL can be set to the PrintLink field. (The reason I did not use a button is that you cannot set the navigate URL for a button.)

    6. Set the URL of the image to the PrintLink field.

    7. Next, create the print view.

    8. Copy the contents from the display view to the print view.

    9. Finally, go to Printifs.aspx and edit the InfoPath web part to set the view to PrintView.

    10. Republish your form.

    11. If you click the Print button, you will see the print page and print dialog; you can also add a company logo to the print page using CSS.

    12. To deploy the customization, you can use the backup-and-restore content database approach; you can get more details from my previous blog post here.

    Read the article

  • BizTalk 2009 - Pipeline Component Wizard

    - by Stuart Brierley
    Recently I decided to try out the BizTalk Server Pipeline Component Wizard when creating a new pipeline component for BizTalk 2009. There are different versions of the wizard available, so be sure to download the appropriate version for the BizTalk environment that you are working with. Following the download and expansion of the zip file, you should be left with a Visual Studio solution. Open this solution and build the project.

    Following this, installation is straightforward - locate and run the built setup.exe file in the PipelineComponentWizard Setup project and click through the small number of installation screens. Once you have completed installation you will be ready to use the wizard in Visual Studio to create your BizTalk pipeline component.

    Start by creating a new project, selecting BizTalk Projects, then BizTalk Server Pipeline Component. You will then be presented with the splash screen. The next step is General Setup, where you will detail the class name, namespace, pipeline and component types, and the implementation language for your pipeline component.

    The options for pipeline type are Receive, Send or Any. Depending on the pipeline type chosen, there are different options presented for the component type, matching those available within the BizTalk pipelines themselves:

    Receive - Decoder, Disassembling Parser, Validate, Party Resolver, Any.
    Send - Encoder, Assembling Serializer, Any.
    Any - Any.

    The options for implementation language are C# or VB.NET.

    Next you must complete the UI settings - these are the settings that affect the appearance of the pipeline component within Visual Studio. You must detail the component name, version, description and icon.

    Next is the definition of the variables that the pipeline component will use. The values for these variables will be defined in Visual Studio when creating a pipeline. The options for each variable you require are:

    Designer Property - the name of the variable.
    Data Type - String, Boolean, Integer, Long, Short, Schema List, Schema With None.

    Clicking Finish now will complete the wizard stage of the creation of your pipeline component. Once the wizard has completed you will be left with a BizTalk Server Pipeline Component project containing a skeleton code file for you to complete. Within this code file you will mainly be interested in the Execute method, which is left mostly empty, ready for you to implement your custom pipeline code:

    #region IComponent members

    /// <summary>
    /// Implements IComponent.Execute method.
    /// </summary>
    /// <param name="pc">Pipeline context</param>
    /// <param name="inmsg">Input message</param>
    /// <returns>Original input message</returns>
    /// <remarks>
    /// IComponent.Execute method is used to initiate
    /// the processing of the message in this pipeline component.
    /// </remarks>
    public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(Microsoft.BizTalk.Component.Interop.IPipelineContext pc, Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
    {
        //
        // TODO: implement component logic
        //
        // this way, it's a passthrough pipeline component
        return inmsg;
    }

    #endregion

    Once you have implemented your custom code, build and compile your custom pipeline component, then add the compiled .dll to C:\Program Files\Microsoft BizTalk Server 2009\Pipeline Components.
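    As an illustration of where your logic goes, here is a minimal sketch of a filled-in Execute method. It is not from the wizard's output: it simply writes a custom context property to the message before passing it through, and the property name and namespace are invented for the example.

        public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(Microsoft.BizTalk.Component.Interop.IPipelineContext pc, Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
        {
            // Stamp the message with a custom context property so later pipeline
            // stages or orchestrations can inspect it. The name and namespace
            // below are hypothetical - use your own property schema in practice.
            inmsg.Context.Write("ProcessedBy", "http://example.com/pipeline/properties", "MyCustomComponent");

            // Still a pass-through: the message body itself is untouched.
            return inmsg;
        }

    Anything heavier, such as transforming the body stream, follows the same pattern: read from inmsg.BodyPart, do your work, and return the (possibly modified) message.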
When creating a new pipeline in Visual Studio, reset the toolbox and the custom pipeline component should appear, ready for you to use in your BizTalk pipeline. Drop the pipeline component into the relevant pipeline stage and configure the component properties (the variables defined in the wizard). You can now deploy and use the pipeline as you would any other custom pipeline.

    Read the article

  • JDeveloper 11g R1 (11.1.1.4.0) - New Features on ADF Desktop Integration Explained

    - by juan.ruiz
    One of the areas that introduces many new features in the latest release (11.1.1.4.0) of JDeveloper 11g R1 is ADF Desktop Integration - in this article I'll provide an overview of these new features.

    New ADF Desktop Integration Ribbon in Excel - After installing the ADF Desktop Integration add-in, and depending on the mode in which you open the desktop integration workbook, the ADF Desktop Integration ribbons for design time and runtime are displayed as a separate tab within Excel. In previous versions the ADF Desktop Integration environment was placed inside the Add-ins tab. On the design-time ribbon you can manage the workbook and worksheet properties, worksheet component properties, diagnostics, and execution and publication of the workbook. The runtime version of the ribbon is fully customizable and represents what used to be the runtime menu on the spreadsheet; in this ribbon you can include all the operations and actions that can be executed by the end user while working with the spreadsheet data.

    Diagnostics - A very important aspect for developers is how to debug or verify the interactions of the client with the server, and for that ADF Desktop Integration has provided a series of diagnostic tools since day one. In this release the diagnostic tools are more visible and really easy to configure. You can access the client console while testing the workbook, or you can simply dump all the messages to a log file, with the ability to set the output level for both.

    Security - There are a number of enhancements around security, but the one with the most impact for developers is that security is now optional when using ADF Desktop Integration. Until this version, every time you wanted to work with ADFdi it was a must that the application was previously secured. In this release security is optional, which means that if you have previously defined security on your application, then you must secure the ADFdi servlet as explained in one of my previous posts. On the other hand, if by the time you start working with ADFdi you have not defined security, you can test and publish your workbooks without adding security.

    Support for Continuous Integration - In this release we have added tooling for continuous integration builds. In the ADF Desktop Integration space, the concept translates to functionality that developers can use to publish ADFdi workbooks as part of their entire application build. For that purpose, there is a publish tool that can easily be invoked from an ANT task, such that all the design-time workbooks are re-published into the latest version as part of the application build process.

    Key Column - At runtime, on any worksheet containing editable tables, you will notice a new additional column called the key column. The purpose of this column is to make the end user aware that all rows on the table need to be selected at the time of sorting. Users cannot alter the value of this column, and from the developer's point of view there are no steps required to have the key column included in the worksheets.

    Installation and Creation of New Workbooks - Both use cases can now be executed directly from JDeveloper. From the Tools menu the developer can install the ADF Desktop Integration designer. Also, creating new workbooks, which previously was done through the convert tool shipped with JDeveloper, is now done automatically from the New Gallery.
    Creating a new ADFdi workbook adds metadata information to the Excel workbook so you can work in design time.

    Other Enhancements - Support for Excel 2010; also, read-only enabled ADF components no longer allow their values to be changed - the cell in Excel is automatically protected. (The previous behavior could cause confusion among customers of earlier releases.)

    Read the article

  • Data Connection Library in SharePoint Server 2010

    - by Wayne
    What is a Data Connection Library in SharePoint Server 2010?

    A Data Connection Library in Microsoft SharePoint Server 2010 is a library that can contain two kinds of data connections: Office Data Connection (ODC) files and Universal Data Connection (UDC) files. Microsoft InfoPath 2010 uses data connections that comply with the Universal Data Connection (UDC) file schema and typically have either a *.udcx or *.xml file name extension. Data sources described by these data connections are stored on the server and can be used in standard form templates and browser-enabled form templates.

    How to create a SharePoint Server Data Connection Library?

    1. Browse to a SharePoint Server 2010 site on which you have at least Design permissions. If you are on the root site, create a new site before you continue with the next step.
    2. On the Site Actions menu, click More Options.
    3. On the Create page, click Library under Filter By, and then click Data Connection Library.
    4. On the right side of the Create page, type a name for the library, and then click the Create button.
    5. Copy the URL of the new data connection library.

    How to create a new data connection file in InfoPath?

    1. Open InfoPath Designer 2010, click Blank Form, and then click Design Form.
    2. On the Data tab, click Data Connections, and then click Add.
    3. In the Data Connection Wizard, click Create a new connection to, click Receive data, and then click Next.
    4. Click the kind of data source that you are connecting to, such as Database, Web service, or SharePoint library or list, and then click Next.
    5. Complete the remaining steps in the Data Connection Wizard to configure your data connection, and then click Finish to return to the Data Connections dialog box.
    6. In the Data Connections dialog box, click Convert to Connection File.
    7. In the Convert Data Connection dialog box, enter the URL of the data connection library that you previously copied (delete "Forms/AllItems.aspx" and anything following it from the URL), enter a name for the data connection file at the end of the URL, and then click OK. It will take a few moments to convert and save the data connection file to the library.
    8. Confirm that the data connection was converted successfully by examining the Details section of the Data Connections dialog box while the name of the converted data connection is selected.
    9. Browse to the SharePoint data connection library, click the drop-down next to the name of the data connection, click Approve/Reject, click Approved, and then click OK.

    Take advantage of the Data Connection Library and other useful features of the SharePoint family of products, which includes SharePoint Foundation 2010, MOSS 2007, and free SharePoint templates and web parts.

    Read the article

  • Silverlight Cream for February 06, 2011 -- #1042

    - by Dave Campbell
    In this Issue: Mike Taulty, Timmy Kokke, Laurent Bugnion, Arik Poznanski, Deyan Ginev, Deborah Kurata(-2-), Johnny Tordgeman, Roy Dallal, Jaime Rodriguez, Samuel Jack(-2-), James Ashley. Above the Fold: Silverlight: "Customizing Silverlight properties for Visual Designers" Timmy Kokke WP7: "Back button press when using webbrowser control in WP7" Jaime Rodriguez Expression Blend: "Blend Bits 21–Importing from Photoshop & Illustrator…" Mike Taulty From SilverlightCream.com: Blend Bits 21–Importing from Photoshop & Illustrator… Mike Taulty is up to 21 episodes in his Blend Bits series now, and this one is about using Blend's import capability, such as importing a .psd file with all the layers intact. Customizing Silverlight properties for Visual Designers Timmy Kokke has part 1 of a 2-part series on customizing your Silverlight control properties for design surfaces such as the Visual Studio designer or Expression Blend. An error when installing MVVM Light templates for VS10 Express Laurent Bugnion has released a new version of MVVM Light that resolves a problem with the VS2010 Express version of the templates... no problem with anything else. Reading RSS items on Windows Phone 7 Arik Poznanski has a post up about reading RSS on a WP7, but better yet, he also has code for a helper class that you can grab, plus an explanation of wiring it up. Integrating your Windows Phone unit tests with MSBuild #4: The WP7 Unit Test Application Deyan Ginev has a post up about Telerik's WP7 test app that outputs test results in XML from the emulator so they can be integrated with the MSBuild log. Accessing Data in a Silverlight Application: EF I apparently missed this post by Deborah Kurata last week on bringing data into your Silverlight app via Entity Framework... a good detailed tutorial in VB and C#. Updating Data in a Silverlight Application: EF In Deborah Kurata's latest post, she continues with Entity Framework by demonstrating updates to the database... full source code will be provided in a later post. Fun with Silverlight and SharePoint 2010 Ribbon Control - Part 2 - An In Depth Look At The Ribbon Control Johnny Tordgeman has Part 2 of his Silverlight and SharePoint 2010 Ribbon series up... taking a deep dive into the ribbon... great explanation of the attributes, code included. Geographic Coordinates Systems Roy Dallal has some geo code up that's not necessarily Silverlight, but very cool if you're doing any GIS programming... ya gotta know the coordinate systems! Back button press when using webbrowser control in WP7 Jaime Rodriguez has a post up discussing the much-lamented back-button action in the certification requirements and how to deal with that in a web browser app. Multiplayer-enabling my Windows Phone 7 game: Day 1 Samuel Jack challenged himself to build a WP7 game in 3 days... now he's challenging himself to make it multiplayer in 3 days... this first hour-by-hour post covers research into networking and an Azure server-side solution. Multiplayer-enabling my Windows Phone 7 game: Day 2–Building a UI with XPF Day 2 of Samuel Jack getting the multiplayer portion of his game working in 3 days... this day involves getting up to speed with XPF. How to Hotwire your WP7 Phone Battery Did you realize that if you run your WP7 battery completely down you can't charge it? James Ashley reports that circumstance, and how he resolved it. Stay in the 'Light!

    Read the article

  • Nest reinvents smoke detectors. Introduces smart and talking smoke detector that keeps quiet when you wave

    - by Gopinath
    Nest, the leading smart thermostat maker, has introduced a new smart home device today - Nest Protect, a smart, talking smoke & carbon monoxide detector that goes quiet when you wave your hand. Less annoyance and more intelligence Smoke detectors have been around for decades and play a major role in protecting homes from fire accidents. But the technology of these devices is stale and there has been no major innovation for the past several years. With the introduction of Nest Protect, the landscape of smoke detectors is all set to change, just like how the Nest thermostat redefined the industry two years ago. Nest Protect is internet enabled and equipped with motion- and smoke-detection sensors, so that when it starts beeping you can silence it by waving a hand instead of doing circus feats to turn off the alarm. Everyone who cooks in a home equipped with a smoke detector knows how annoying it is to turn off a sensitive smoke detector that goes off quite often. Apart from addressing the annoyances of a regular smoke detector, Nest Protect has talking capabilities. It can alert users with clear & actionable instructions when it detects a danger. Instead of harsh beeps it actually speaks to you, so you know what is happening. It will tell you what smoke it has detected and in which room it was detected. Multiple Nest Protects installed in a home can communicate with each other. Let's say there is smoke in the bedroom: the Nest Protect installed in the bedroom shares this information with all the Nest Protects installed in the home, and your kitchen device can alert you that there is smoke in the bedroom. There is an App for that The internet enabled Nest Protect has an app to view its status and various alerts. When the Protect is running on low battery it alerts you to replace the batteries soon. If there is smoke at home and you are away, you will get message alerts. The app works on all major smartphones as well as tablets. Auto shuts down gas furnaces/heaters on smoke Apart from forming a network with other Nest Protect devices installed at home, they can also communicate with the Nest Thermostat if it is installed. When carbon monoxide is detected it can shut off your gas furnace automatically. Also, with the help of motion detectors it improves Nest Thermostat's auto-away functionality. It looks elegant and costs a lot more than a regular smoke detector Just like the Nest Thermostat, Nest Protect is elegant and adorable. You just fall in love with it the moment you see it. It's another masterpiece from the designer of Apple's iPod. All is good with the Nest Protect, except the price!! It costs a whopping $129, which is almost 4 times more expensive than the best selling conventional detectors available at $30. A single bedroom apartment would require at least 3 detectors, and it costs around $390 to install Nest Protects compared to the $90 required for conventional smoke detectors. Though the Nest Thermostat is expensive compared to conventional thermostats, it offered great savings through its intelligent auto-away feature, and users were able to see returns on their investments. If Nest Protect can also provide a good return on investment then it will be very successful.

    Read the article

  • RC of Entity Framework 4.1 (which includes EF Code First)

    - by ScottGu
    Last week the data team shipped the Release Candidate of Entity Framework 4.1.  You can learn more about it and download it here. EF 4.1 includes the new “EF Code First” option that I’ve blogged about several times in the past.  EF Code First provides a really elegant and clean way to work with data, and enables you to do so without requiring a designer or XML mapping file.  Below are links to some tutorials I’ve written in the past about it: Code First Development with Entity Framework 4.x EF Code First: Custom Database Schema Mapping Using EF Code First with an Existing Database The above tutorials were written against the CTP4 release of EF Code First (and so some APIs might be a little different) – but the concepts and scenarios outlined in them are the same as with the RC. Go Live License Last week’s EF 4.1 RC ships with a “go live” license that enables you to use it in production environments.  The final release of EF 4.1 will ship within the next 4 weeks and will be 100% API compatible with the RC release. Improvements with the RC The RC includes several improvements and enhancements.  The EF team has a good blog post summarizing the RC changes.  Scott Hanselman also has a nice video interview with the data team that talks more about the release. One of my favorite improvements introduced with last week’s RC is its support for medium trust security.  This enables you to use EF 4.1 (and code-first) within low-cost ASP.NET shared hosting web environments – without requiring a hoster to install anything to use it. EF 4.1 also now supports validation not only with code-first scenarios, but also with model-first and database-first workflows.  Upgrading from previous releases The RC does include a few API tweaks and changes from the prior CTP builds.  Read the release notes that come with the release to get a more detailed listing of the changes. John Papa also has an excellent Upgrading to EF 4.1 RC blog post that describes the steps he took when upgrading a large project he wrote with the previous CTP5 release.  The work to upgrade is pretty straightforward and easy – use his write-up as a guide on how to quickly update projects of your own. NuGet Package Rename One of the changes that the data team made between the CTP5 and RC releases was to rename the NuGet package name from “EFCodeFirst” to “EntityFramework”. They decided to make this change since the EF 4.1 release now includes several additions above and beyond just code first. If you already have installed the “EFCodeFirst” NuGet package, you’ll want to uninstall it and then install the new “EntityFramework” NuGet package.  John Papa’s blog post details the exact steps on how to do this (it only takes ~20 seconds). More EF Tutorials Julie Lerman has created some nice whitepapers and tutorials for MSDN that show how to use the new EF4 and EF 4.1 feature set. Click here to find links to read and watch them. Summary I’m really excited about the EF 4.1 release that will be shipping next month.  It significantly improves the Entity Framework, and makes it even easier and cleaner to work with data inside of .NET.  You can take advantage of it within all ASP.NET projects (including both Web Forms and MVC), within client projects using Windows Forms and WPF, and within other project types like WCF, Console and Services.  You can use NuGet to easily install it within all of them. Hope this helps, Scott P.S. I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
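    If you haven't seen EF Code First in action, here is a minimal sketch of the pattern the tutorials above walk through (the Post and BlogContext names are hypothetical, purely for illustration): a plain class plus a DbContext derivative is all that's needed - no designer or XML mapping file.

        using System.Data.Entity;

        // Hypothetical model class - the schema is inferred from it by convention.
        public class Post
        {
            public int Id { get; set; }
            public string Title { get; set; }
        }

        // Deriving from DbContext is all EF 4.1 needs to map the model.
        public class BlogContext : DbContext
        {
            public DbSet<Post> Posts { get; set; }
        }

        class Program
        {
            static void Main()
            {
                using (var db = new BlogContext())
                {
                    db.Posts.Add(new Post { Title = "Hello, EF 4.1" });
                    db.SaveChanges(); // creates the database on first use by convention
                }
            }
        }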

    Read the article

  • SSIS Technique to Remove/Skip Trailer and/or Bad Data Row in a Flat File

    - by Compudicted
    I noticed that the question of how to skip or bypass a trailer record or a badly formatted/empty row in an SSIS package keeps coming back on the MSDN SSIS Forum. I tried to figure out why, and after an extensive search inside the forum and outside it on the entire Web (using several search engines) I found that even though there are a number of posts and articles on the topic, none of them employ the simplest and most efficient technique. When I say efficient I mean the shortest time to solution for fellow developers. OK, enough talk. Let's face the problem: typically a flat file (e.g. a comma delimited/CSV file) needs to be processed (loaded into a database in most cases). Oftentimes, such an input file is produced by some sort of out-of-control, 3rd-party solution and comes in with garbage characters and/or even malformed rows. In one such imaginary file, several rows have no data and there is an occasional garbage character (1, in this example, on row #7). Our task is to produce a clean file that captures only the meaningful data rows. As an aside, our output/target may be a database table, but for the purpose of this exercise we will simply re-format the source. Let's outline our course of action to start off: we will use SSIS 2005 to create a DFT; the DFT will use a Flat File Source pointing to our [bad] input flat file; we will use a Conditional Split to process the bad input file; and finally we will dump the resulting data to a new [clean] file. Well, only four steps - let's see if it is too much work. 1: Start BIDS and add a DFT to the Control Flow designer (I named it Process Dirty File DFT). 2 and 3: I had added a data viewer just to see what I am getting; alas, surprisingly, the data issues were not visible in it. What is really key to the approach is to properly set up the Conditional Split Transformation, specifically its SSIS expression: LEN([After CS Column 0]) > 1 The point is to employ the right Boolean expression (yes, the Conditional Split accepts only Boolean conditions). For the sake of this post I re-named the Output Name “No Empty Rows”, but by default it will be named Case 1 (remember to drag your first column into the expression area)! You can close your Conditional Split now. The next part is crucial – consuming the output of our Conditional Split. Last step - #4: Add a Flat File Destination or any other destination you need. Click on the Conditional Split and drag the green arrow onto the target. When you do so, make sure you choose the No Empty Rows output and NOT the Conditional Split Default Output. Make the necessary mappings. The last step is to run our package and examine the produced output file. F5: and… it looks great!
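    For readers who just want to see the filter itself, here is a hedged C# sketch of the same length test expressed outside SSIS (the file names are hypothetical; this only illustrates the logic behind the Conditional Split expression, it is not a replacement for the package):

        using System.IO;
        using System.Linq;

        class CleanFlatFile
        {
            static void Main()
            {
                // Keep only rows whose first column carries real data,
                // mirroring the Conditional Split expression LEN([After CS Column 0]) > 1.
                var cleanRows = File.ReadLines("dirty.csv")
                                    .Where(row => row.Split(',')[0].Trim().Length > 1);

                File.WriteAllLines("clean.csv", cleanRows);
            }
        }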

    Read the article

  • DataContractSerializer: type is not serializable because it is not public?

    - by Michael B. McLaughlin
    I recently ran into an odd and annoying error when working with the DataContractSerializer class for a WP7 project. I thought I’d share it to save others who might encounter it the same annoyance I had. I had an instance of ObservableCollection<T> that I was trying to serialize (with T being a class I wrote for the project) and whenever it would hit the code to save it, it would give me: The data contract type 'ProjectName.MyMagicItemsClass' is not serializable because it is not public. Making the type public will fix this error. Alternatively, you can make it internal, and use the InternalsVisibleToAttribute attribute on your assembly in order to enable serialization of internal members - see documentation for more details. Be aware that doing so has certain security implications. This, of course, was malarkey. I was trying to write an instance of MyAwesomeClass that looked like this: [DataContract] public class MyAwesomeClass { [DataMember] public ObservableCollection<MyMagicItemsClass> GreatItems { get; set; }   [DataMember] public ObservableCollection<MyMagicItemsClass> SuperbItems { get; set; }     public MyAwesomeClass() { GreatItems = new ObservableCollection<MyMagicItemsClass>(); SuperbItems = new ObservableCollection<MyMagicItemsClass>(); } }   That’s all well and fine. And MyMagicItemsClass was also public with a parameterless public constructor. It too had DataContractAttribute applied to it, and DataMemberAttribute applied to all the properties and fields I wanted to serialize. Everything should have been cool, but it wasn’t, because I kept getting that “not public” exception. I could tell you about all the things I tried (generating a List<T> on the fly to make sure it wasn’t ObservableCollection<T>, trying to serialize the Collections directly, moving it all to a separate library project, etc.), but I want to keep this short. In the end, I remembered the “Debug->Exceptions…” VS menu option that brings up the list of exception-related circumstances under which the Visual Studio debugger will break. I checked the “Thrown” checkbox for “Common Language Runtime Exceptions”, started the project under the debugger, and voilà: the true problem revealed itself. Some of my properties had fairly elaborate setters whose logic I wanted to ignore. So for some of them, I applied an IgnoreDataMember attribute and applied the DataMember attribute to the underlying fields instead. All of which, in line with good programming practices, were private. Well, it just so happens that WP7 apps run in a “partial trust” environment, and outside of “full trust”-land, DataContractSerializer refuses to serialize or deserialize non-public members. Of course that exception was swallowed up internally by .NET, so all I ever saw was that bizarre message about things that I knew for certain were public being “not public”. I changed all the private fields I was serializing to public and everything worked just fine. In hindsight it all makes perfect sense. The serializer uses reflection to build up its graph of the object in order to write it out. In partial trust, you don’t want people using reflection to get at non-public members of an object, since there are potential security problems with allowing that (you could break out of the sandbox pretty quickly by reflecting on and calling the appropriate methods, and cause some havoc by reflecting on and setting the appropriate fields in certain circumstances).
    The fact that you cannot reflect your own assembly seems a bit heavy-handed, but then again I’m not a compiler writer or a framework designer, and I have no idea what sorts of difficulties would go into allowing that from a compilation standpoint or what sorts of security problems allowing it could present (if any). So, lesson learned. If you get an incomprehensible exception message, turn on break-on-all-thrown-exceptions and try running it again (it might take a couple of tries, depending) and see what pops out. Chances are you’ll find the buried exception that actually explains what was going on. And if you’re getting a weird exception when trying to use DataContractSerializer complaining about public types not being public, chances are you’re trying to serialize a private or protected field/property.
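    To make the pitfall concrete, here is a minimal sketch (type and member names are hypothetical): a public field marked with [DataMember] serializes fine in partial trust, while the commented-out private field is exactly the kind of member that triggers the misleading “not public” error.

        using System.IO;
        using System.Runtime.Serialization;

        [DataContract]
        public class MyMagicItemsClass
        {
            // Fine in partial trust: the member being serialized is public.
            [DataMember]
            public string Name;

            // The trap: in a partial trust environment such as WP7,
            // DataContractSerializer cannot reflect over non-public members,
            // and the real exception is swallowed and resurfaces as the
            // misleading "not public" message.
            // [DataMember]
            // private int secretLevel;
        }

        class Demo
        {
            static void Main()
            {
                var serializer = new DataContractSerializer(typeof(MyMagicItemsClass));
                using (var stream = new MemoryStream())
                {
                    serializer.WriteObject(stream, new MyMagicItemsClass { Name = "wand" });
                }
            }
        }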

    Read the article

  • Flow-Design Cheat Sheet – Part II, Translation

    - by Ralf Westphal
    In my previous post I summarized the notation for Flow-Design (FD) diagrams. Now is the time to show you how to translate those diagrams into code. Hopefully you'll feel how different this is from UML. UML leaves you alone with your sequence diagram or component diagram or activity diagram. It leaves it to you how to translate your elaborate design into code. Or maybe UML thinks it's so easy no further explanations are needed? I don't know. I just know that, as soon as people stop designing with UML and start coding, things end up very different from the design. And that's bad. It degrades graphical designs to just a waste of time on paper (or in some designer). I even believe that's the reason why most programmers view textual source code as the one and only source of truth. Design and code usually do not match. FD is trying to change that. It wants to make true design a first-class method in every developer's toolchest. For that, the first prerequisite is to be able to easily translate any design into code. Mechanically, without thinking. Even a compiler could do it :-) (More on that in some other article.) Translating to Methods The first translation I want to show you is for small designs. When you start using FD you should translate your diagrams like this: functional units become methods. That's it. An input-pin becomes a method parameter, an output-pin becomes a return value. A board can be translated likewise; it calls the nested FUs in order. In any case, be sure to keep the board method clear of any and all business logic. It should not contain any control structures like if, switch, or a loop. Boards do just one thing: calling nested functional units in proper sequence. What about multiple input-pins? Try to avoid them. Replace them with a join returning a tuple. What about multiple output-pins? Try to avoid them. Or return a tuple. Or use out-parameters. But as I said, this simple translation is for simple designs only. Splits and joins are easily done with method translation. All pretty straightforward, isn't it? But what about wires, named pins, entry points, explicit dependencies? I suggest you don't use this kind of translation when your designs need these features. Translating to methods is for small-scale designs like you might do once you're working on the implementation of a part of a larger design. Or maybe for a code kata you're doing in your local coding dojo. Instead of doing TDD try doing FD and translate your design into methods (see the sketch below). You'll see that this way it's much easier to work collaboratively on designs, remember them more easily, keep them clean, and lessen the need for refactoring. Translating to Events [coming soon]
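    To make the mapping concrete, here is a sketch of the method translation for a tiny hypothetical flow (all names invented for illustration): each part becomes a method, the board just calls them in sequence, and a join returns a tuple.

        using System;

        // Hypothetical flow: LoadOrder -> CalculateTotal.
        class OrderFlow
        {
            // Board: only calls nested functional units in sequence -
            // no if, switch, or loop in here.
            public decimal ProcessOrder(string orderId)
            {
                Order order = LoadOrder(orderId);   // input-pin -> parameter
                return CalculateTotal(order);       // output-pin -> return value
            }

            // Parts: each functional unit becomes an ordinary method.
            Order LoadOrder(string orderId)
            {
                return new Order { Id = orderId };
            }

            decimal CalculateTotal(Order order)
            {
                return 42m; // business logic lives in parts, not on the board
            }

            // A join for multiple input-pins returns a tuple for the downstream part.
            Tuple<Order, decimal> JoinOrderAndRate(Order order, decimal rate)
            {
                return Tuple.Create(order, rate);
            }
        }

        class Order
        {
            public string Id { get; set; }
        }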

    Read the article

  • EPM 11.1.2.2 Architecture: Financial Performance Management Applications

    - by Marc Schumacher
     Financial Management can be accessed either by a browser based client or by SmartView. Starting from release 11.1.2.2, the Financial Management Windows client no longer accesses the Financial Management Consolidation server. All tasks that require an online connection (e.g. load and extract tasks) can only be done using the web interface. Any client connection initiated by a browser or SmartView is sent to the Oracle HTTP Server (OHS) first. Based on the path given (e.g. hfmadf, hfmofficeprovider) in the URL, OHS makes a decision to forward this request either to the new Financial Management web application based on the Oracle Application Development Framework (ADF) or to the .NET based application serving SmartView retrievals running on Internet Information Server (IIS). Any requests sent to the ADF web interface that need to be processed by the Financial Management application server are sent to IIS using HTTP protocol and will be forwarded further using DCOM to the Financial Management application server. SmartView requests, which are processed by IIS in the first place, are forwarded to the Financial Management application server using DCOM as well. The Financial Management application server uses OLE DB database connections via native database clients to talk to the Financial Management database schema. Communication between the Financial Management DME Listener, which handles requests from EPMA, and the Financial Management application server is based on DCOM.  Unlike most other components, Essbase Analytics Link (EAL) does not have an end user interface. The only user interface is a plug-in for the Essbase Administration Services console, which is used for administration purposes only. End users interact with a Transparent or Replicated Partition that is created in Essbase and populated with data by EAL. The Analytics Link Server deployed on WebLogic communicates through HTTP protocol with the Analytics Link Financial Management Connector that is deployed in IIS on the Financial Management web server. The Analytics Link Server interacts with the Data Synchronization Server using the EAL API. The Data Synchronization Server acts as a target of a Transparent or Replicated Partition in Essbase and uses a native database client to connect to the Financial Management database. The Analytics Link Server uses JDBC to connect to relational repository databases and the Essbase JAPI to connect to Essbase.  As with most Oracle EPM System products, browser based clients and SmartView can be used to access Planning. The Java based Planning web application is deployed on WebLogic, which is configured behind an Oracle HTTP Server (OHS). Communication between Planning and the Planning RMI Registry Service is done using Java Remote Method Invocation (RMI). Planning uses JDBC to access relational repository databases and talks to Essbase using the CAPI. Be aware of the fact that besides the Planning System database, a dedicated database schema is needed for each application that is set up within Planning.  Like Planning, Profitability and Cost Management (HPCM) has a pretty simple architecture. Besides the browser based clients and SmartView, a web service consumer can be used as a client too. All clients access the Java based web application deployed on WebLogic through Oracle HTTP Server (OHS). Communication between Profitability and Cost Management and the EPMA Web Server is done using HTTP protocol. JDBC is used to access the relational repository databases as well as data sources.
    The Essbase JAPI is utilized to talk to Essbase.  For Strategic Finance, two clients exist: SmartView and a Windows client. While SmartView communicates through the web layer to the Strategic Finance Server, the Strategic Finance Windows client makes a direct connection to the Strategic Finance Server using RPC calls. Connections from Strategic Finance Web as well as from Strategic Finance Web Services to the Strategic Finance Server are made using RPC calls too. The Strategic Finance Server uses its own file based data store. JDBC is used to connect to the EPM System Registry from the web and application layers.  Disclosure Management has three kinds of clients. While the browser based client and SmartView interact with the Disclosure Management web application directly through Oracle HTTP Server (OHS), the Taxonomy Designer does not connect to the Disclosure Management server. Communication to relational repository databases is done via JDBC; to connect to Essbase the Essbase JAPI is utilized.

    Read the article

  • JFall 2012

    - by Geertjan
    JFall 2012 was over far too soon! Seven tracks going on simultaneously in a great location, with many artifacts reminding me of JavaOne, and nice snacks and drinks afterwards. The day started, as such things always do, with a keynote. Thanks to @royvanrijn for the photo below; I didn't take any myself, and without a picture this report might have been too dry: What you see above is Steve Chin riding into the keynote hall on his NightHacking bike. The keynote was interesting - I can't be too complimentary about it, since I was part of it myself. Bert Ertman introduced the day and then Steve Chin took over, together with Sharat Chander, Tom Eugelink, Timon Veenstra, and myself. We had a strict choreography for the keynote, one that would ensure a lot of variation and some unexpected surprises, such as Steve being thrown off the stage a few times by Bert for mentioning JavaOne too many times, rather than the clearly much cooler JFall. Steve talked about JavaOne and the direction Java is headed in, Sharat talked about JavaME and embedded devices, Steve and Tom did a demo involving JavaFX, I did a Project Easel demo, and Timon from Ordina talked about his Duke's Choice Award winning AgroSense project. I think the Project Easel demo (which I repeated later in a screencast for Parleys arranged by Eugene Boogaart) came across well, and several people I spoke to especially liked the roundtrip/bi-directional work that can be done, from browser to IDE and back again, very simply and intuitively. (In a long conversation on the drive back home afterwards, we discussed the scenario of a designer laying out the UI in HTML and then handing the HTML to a developer for back-end work - a developer who would then find it convenient to open the HTML in a browser and quickly navigate from the browser to the resources within the IDE. That scenario was considered extremely interesting and worth adopting NetBeans for, for no other reason than that.) Later I attended a session by David Delabassee on Java EE 7, Hans Dockter on Gradle, and Sander Mak on cross-build injection attacks. I was sorry to have missed Martijn Verburg's session, which sounded really fantastic, among others such as Gerrit Grunwald's. I did a session too, entitled "Unlocking the Java EE 6 Platform", which was very well attended, pretty much a full room, and the demo went very smoothly. I talked to many people, e.g., a long time with Hans Dockter about how cool Gradle is and how great the Gradle/NetBeans plugin is turning out to be. I also had a long conversation (and did a demo) with Chris Chedgey, from Structure101, after his session, which was incredibly well attended; very interesting how popular modularity is. I met several people for the first time, as well as some colleagues from past places I've worked at. All in all, it was a great conference, unfortunately too short, which was very well attended (clearly over 1000 people), with several international speakers, as well as international attendees such as Mattias Karlsson, Sweden JUG leader. And, unsurprisingly, I came across NetBeans Platform applications again, none of which I had ever heard of before. In each case, "our fat client application" was mentioned in passing, never as a main application, and never in a context where there are plans for the application to be migrated to the web or mobile, simply because doing so makes no business sense at all. Great times at JFall; I am looking forward to meeting some of the people I met there again soon.

    Read the article

  • RPi and Java Embedded GPIO: Big Data and Java Technology

    - by hinkmond
    Java Embedded and Big Data go hand-in-hand, especially as demonstrated by prototyping on a Raspberry Pi to show how well the Java Embedded platform can perform on a small embedded device, which then becomes the proof-of-concept for industrial controllers, medical equipment, networking gear, or any type of sensor-connected device generating large amounts of data. The key is a fast and reliable way to access that data using Java technology. In the previous blog posts you've seen the integration of a static electricity sensor and the Raspberry Pi through the GPIO port, and then access to that data through Java Embedded code. It's important to point out how this works and why it works well with Java code. First, the version of Linux (Debian Wheezy/Raspbian) that is found on the RPi has a very convenient way to access the GPIO ports through the use of Linux OS managed file handles. This is key in avoiding terrible and complex coding using register manipulation in C code, or having to program in a less elegant and clumsy procedural scripting language such as Python. Instead, using Java Embedded allows a fast way to access those GPIO ports through those same Linux file handles. Java already has a very easy-to-program way to access file handles, with a high degree of performance that matches direct access of those file handles within the Linux OS. Using the Java API java.io.FileWriter lets us open the same file handles that the Linux OS has for accessing the GPIO ports. Then, by first resetting the ports using the unexport and export file handles, we can initialize them for easy use in a Java app. // Open file handles to GPIO port unexport and export controls FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport"); FileWriter exportFile = new FileWriter("/sys/class/gpio/export"); ... // Reset the port unexportFile.write(gpioChannel); unexportFile.flush(); // Set the port for use exportFile.write(gpioChannel); exportFile.flush(); Then, another set of file handles can be used by the Java app to control the direction of the GPIO port by writing either "in" or "out" to the direction file handle. // Open file handle to input/output direction control of port FileWriter directionFile = new FileWriter("/sys/class/gpio/gpio" + gpioChannel + "/direction"); // Set port for input directionFile.write("in"); // Or, use "out" for output directionFile.flush(); And, finally, a RandomAccessFile handle can be used with a high degree of performance on par with native C code (only milliseconds to read in data and write out data) and low overhead (unlike Python) to manipulate the data going in and out on the GPIO port, while the object-oriented nature of Java programming allows for an easy way to construct complex analytic software around that data access functionality to the external world. RandomAccessFile[] raf = new RandomAccessFile[GpioChannels.length]; ... // Reset file seek pointer to read latest value of GPIO port raf[channum].seek(0); raf[channum].read(inBytes); inLine = new String(inBytes); It's Big Data from sensors and industrial/medical/networking equipment meeting complex analytical software on a small constrained device (like a Linux/ARM RPi), where Java Embedded allows you to shine as an Embedded Device Software Designer. Hinkmond

    Read the article
