Search Results

Search found 71854 results on 2875 pages for 'build time'.

  • September 2011 Release of the Ajax Control Toolkit

    - by Stephen Walther
    I’m happy to announce the release of the September 2011 Ajax Control Toolkit. This release has several important new features including: Date ranges – When using the Calendar extender, you can specify a start and end date and a user can pick only those dates which fall within the specified range. This was the fourth top-voted feature request for the Ajax Control Toolkit at CodePlex. Twitter Control – You can use the new Twitter control to display recent tweets associated with a particular Twitter user or tweets which match a search query. Gravatar Control – You can use the new Gravatar control to display a unique image for each user of your website. Users can upload custom images to the Gravatar.com website or the Gravatar control can display a unique, auto-generated, image for a user. You can download this release this very minute by visiting CodePlex: http://AjaxControlToolkit.CodePlex.com Alternatively, you can execute the following command from the Visual Studio NuGet console: Improvements to the Ajax Control Toolkit Calendar Control The Ajax Control Toolkit Calendar extender control is one of the most heavily used controls from the Ajax Control Toolkit. The developers on the Superexpert team spent the last sprint focusing on improving this control. There are three important changes that we made to the Calendar control: we added support for date ranges, we added support for highlighting today’s date, and we made fixes to several bugs related to time zones and daylight savings. Using Calendar Date Ranges One of the top-voted feature requests for the Ajax Control Toolkit was a request to add support for date ranges to the Calendar control (this was the fourth most voted feature request at CodePlex). With the latest release of the Ajax Control Toolkit, the Calendar extender now supports date ranges. For example, the following page illustrates how you can create a popup calendar which allows a user only to pick dates between March 2, 2009 and May 16, 2009. <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="CalendarDateRange.aspx.cs" Inherits="WebApplication1.CalendarDateRange" %> <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %> <html> <head runat="server"> <title>Calendar Date Range</title> </head> <body> <form id="form1" runat="server"> <asp:ToolkitScriptManager ID="tsm" runat="server" /> <asp:TextBox ID="txtHotelReservationDate" runat="server" /> <asp:CalendarExtender ID="Calendar1" TargetControlID="txtHotelReservationDate" StartDate="3/2/2009" EndDate="5/16/2009" SelectedDate="3/2/2009" runat="server" /> </form> </body> </html> This page contains three controls: an Ajax Control Toolkit ToolkitScriptManager control, a standard ASP.NET TextBox control, and an Ajax Control Toolkit CalendarExtender control. Notice that the Calendar control includes StartDate and EndDate properties which restrict the range of valid dates. The Calendar control shows days, months, and years outside of the valid range as struck out. You cannot select days, months, or years which fall outside of the range. The following video illustrates interacting with the new date range feature: If you want to experiment with a live version of the Ajax Control Toolkit Calendar extender control then you can visit the Calendar Sample Page at the Ajax Control Toolkit Sample Site. Highlighted Today’s Date Another highly requested feature for the Calendar control was support for highlighting today’s date. 
The Calendar control now highlights the user’s current date regardless of the user’s time zone. Fixes to Time Zone and Daylight Savings Time Bugs We fixed several significant Calendar extender bugs related to time zones and daylight savings time. For example, previously, when you set the Calendar control’s SelectedDate property to the value 1/1/2007 then the selected date would appear as 12/31/2006 or 1/1/2007 or 1/2/2007 depending on the server time zone. For example, if your server time zone was set to Samoa (UTC-11:00), then setting SelectedDate=”1/1/2007” would result in “12/31/2006” being selected in the Calendar. Users of the Calendar extender control found this behavior confusing. After careful consideration, we decided to change the Calendar extender so that it interprets all dates as UTC dates. In other words, if you set StartDate=”1/1/2007” then the Calendar extender parses the date as 1/1/2007 UTC instead of parsing the date according to the server time zone. By interpreting all dates as UTC dates, we avoid all of the reported issues with the SelectedDate property showing the wrong date. Furthermore, when you set the StartDate and EndDate properties, you know that the same StartDate and EndDate will be selected regardless of the time zone associated with the server or associated with the browser. The date 1/1/2007 will always be the date 1/1/2007. The New Twitter Control This release of the Ajax Control Toolkit introduces a new Twitter control. You can use the Twitter control to display recent tweets associated with a particular Twitter user. You also can use this control to show the results of a Twitter search. The following page illustrates how you can use the Twitter control to display recent tweets made by Scott Hanselman: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="TwitterProfile.aspx.cs" Inherits="WebApplication1.TwitterProfile" %> <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %> <html > <head runat="server"> <title>Twitter Profile</title> </head> <body> <form id="form1" runat="server"> <asp:ToolkitScriptManager ID="tsm" runat="server" /> <asp:Twitter ID="Twitter1" ScreenName="shanselman" runat="server" /> </form> </body> </html> This page includes two Ajax Control Toolkit controls: the ToolkitScriptManager control and the Twitter control. The Twitter control is set to display tweets from Scott Hanselman (shanselman): You also can use the Twitter control to display the results of a search query. For example, the following page displays all recent tweets related to the Ajax Control Toolkit: Twitter limits the number of times that you can interact with their API in an hour. Twitter recommends that you cache results on the server (https://dev.twitter.com/docs/rate-limiting). By default, the Twitter control caches results on the server for a duration of 5 minutes. You can modify the cache duration by assigning a value (in seconds) to the Twitter control's CacheDuration property. The Twitter control wraps a standard ASP.NET ListView control. You can customize the appearance of the Twitter control by modifying its LayoutTemplate, StatusTemplate, AlternatingStatusTemplate, and EmptyDataTemplate. To learn more about the new Twitter control, visit the live Twitter Sample Page. The New Gravatar Control The September 2011 release of the Ajax Control Toolkit also includes a new Gravatar control. This control makes it easy to display a unique image for each user of your website. A Gravatar is associated with an email address.
You can visit Gravatar.com and upload an image and associate the image with your email address. That way, every website which uses Gravatars (such as the www.ASP.NET website) will display your image next to your name. For example, I visited the Gravatar.com website and associated an image of a Koala Bear with the email address [email protected]. The following page illustrates how you can use the Gravatar control to display the Gravatar image associated with the [email protected] email address: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="GravatarDemo.aspx.cs" Inherits="WebApplication1.GravatarDemo" %> <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %> <html xmlns="http://www.w3.org/1999/xhtml"> <head id="Head1" runat="server"> <title>Gravatar Demo</title> </head> <body> <form id="form1" runat="server"> <asp:ToolkitScriptManager ID="tsm" runat="server" /> <asp:Gravatar ID="Gravatar1" Email="[email protected]" runat="server" /> </form> </body> </html> The page above simply displays the Gravatar image associated with the [email protected] email address: If a user has not uploaded an image to Gravatar.com then you can auto-generate a unique image for the user from the user email address. The Gravatar control supports four types of auto-generated images: Identicon -- A different geometric pattern is generated for each unrecognized email. MonsterId -- A different image of a monster is generated for each unrecognized email. Wavatar -- A different image of a face is generated for each unrecognized email. Retro -- A different 8-bit arcade-style face is generated for each unrecognized email. For example, there is no Gravatar image associated with the email address [email protected]. The following page displays an auto-generated MonsterId for this email address: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="GravatarMonster.aspx.cs" Inherits="WebApplication1.GravatarMonster" %> <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %> <html xmlns="http://www.w3.org/1999/xhtml"> <head id="Head1" runat="server"> <title>Gravatar Monster</title> </head> <body> <form id="form1" runat="server"> <asp:ToolkitScriptManager ID="tsm" runat="server" /> <asp:Gravatar ID="Gravatar1" Email="[email protected]" DefaultImageBehavior="MonsterId" runat="server" /> </form> </body> </html> The page above generates the following image automatically from the supplied email address: To learn more about the properties of the new Gravatar control, visit the live Gravatar Sample Page. ASP.NET Connections Talk on the Ajax Control Toolkit If you are interested in learning more about the changes that we are making to the Ajax Control Toolkit then please come to my talk on the Ajax Control Toolkit at the upcoming ASP.NET Connections conference. In the talk, I will present a summary of the changes that we have made to the Ajax Control Toolkit over the last several months and discuss our future plans. Do you have ideas for new Ajax Control Toolkit controls? Ideas for improving the toolkit? Come to my talk – I would love to hear from you. You can register for the ASP.NET Connections conference by visiting the following website: Register for ASP.NET Connections   Summary The previous release of the Ajax Control Toolkit – the July 2011 Release – has had over 100,000 downloads. That is a huge number of developers who are working with the Ajax Control Toolkit. 
We are really excited about the new features which we added to the Ajax Control Toolkit in the latest September sprint. We hope that you find the updated Calendar control, the new Twitter control, and the new Gravatar control valuable when building your ASP.NET Web Forms applications.
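
    As a quick markup reference (this snippet is not from the article; it simply combines properties the article describes), the Twitter control's server-side caching is configured declaratively. Here the default five-minute cache is raised to ten minutes:

        <asp:ToolkitScriptManager ID="tsm" runat="server" />
        <%-- CacheDuration is specified in seconds: 600 = 10 minutes --%>
        <asp:Twitter ID="Twitter1" ScreenName="shanselman" CacheDuration="600" runat="server" />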

    Read the article

  • Microsoft .NET Web Programming: Web Sites versus Web Applications

    - by SAMIR BHOGAYTA
    In .NET 2.0, Microsoft introduced the Web Site. This was the default way to create a web Project in Visual Studio 2005. In Visual Studio 2008, the Web Application has been restored as the default web Project in Visual Studio/.NET 3.x The Web Site is a file/folder based Project structure. It is designed such that pages are not compiled until they are requested ("on demand"). The advantages to the Web Site are: 1) It is designed to accommodate non-.NET Applications 2) Deployment is as simple as copying files to the target server 3) Any portion of the Web Site can be updated without requiring recompilation of the entire Site. The Web Application is a .dll-based Project structure. ASP.NET pages and supporting files are compiled into assemblies that are then deployed to the target server. Advantages of the Web Application are: 1) Precompiled files do not expose code to an attacker 2) Precompiled files run faster because they are binary data (the Microsoft Intermediate Language, or MSIL) executed by the CLR (Common Language Runtime) 3) References, assemblies, and other project dependencies are built in to the compiled site and automatically managed. They do not need to be manually deployed and/or registered in the Global Assembly Cache: deployment does this for you If you are planning on using automated build and deployment, such as the Team Foundation Server Team Build engine, you will need to have your code in the form of a Web Application. If you have a Web Site, it will not properly compile as a Web Application would. However, all is not lost: it is possible to work around the issue by adding a Web Deployment Project to your Solution and then: a) configuring the Web Deployment Project to precompile your code; and b) configuring your Team Build definition to use the Web Deployment Project as its source for compilation. https://msevents.microsoft.com/cui/WebCastEventDetails.aspx?culture=en-US&EventID=1032380764&CountryCode=US
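
    For context, "precompile your code" in the workaround above means running the ASP.NET precompiler over the file/folder-based Web Site before deployment; a Web Deployment Project essentially drives the aspnet_compiler tool that ships with the .NET Framework. A rough sketch from a Visual Studio command prompt (the site name and paths are illustrative, not taken from the article):

        REM Precompile the file-based Web Site into output ready to copy to the target server
        aspnet_compiler -v /MyWebSite -p C:\Source\MyWebSite C:\Build\MyWebSite.Precompiled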

    Read the article

  • Moving from Winforms to WPF

    - by Elmex
    I am a long-time, experienced Windows Forms developer, but now it's time to move to WPF because a new WPF project is coming to me soon and I have only a short lead time to prepare myself to learn WPF. What is the best way for an experienced Winforms developer? Can you give me some hints and recommendations for learning WPF in a very short time? Are there simple sample WPF solutions and short (video) tutorials? Which books do you recommend? Is www.windowsclient.net a good starting point? Are there alternatives to the official Microsoft site? Thanks in advance for your help!

    Read the article

  • How can I install a 32-bit Python on 64-bit Ubuntu

    - by moose
    I am using Ubuntu 10.10 (Linux pc07 2.6.35-27-generic #48-Ubuntu SMP Tue Feb 22 20:25:46 UTC 2011 x86_64 GNU/Linux) and the default python package (Python 2.6.6). I would like to install python-psyco to improve the performance of one of my scripts, but only python-psyco-doc is available for 64 bit. I tried a virtual machine, but the performance boost is much less on the virtual machine than on a "real" installed 32-bit Ubuntu. So my question is: How can I install a 32-bit Python with psyco on my 64-bit Ubuntu machine? edit: I've found this article and made this: Download "Python 2.7.1 bzipped source tarball" from http://python.org/download/ Go into the directory where you decompressed "Python 2.7.1" $ OPT=-m32 LDFLAGS=-m32 ./configure --prefix=/opt/pym32 $ make But I got this error: gcc -pthread -m32 -Xlinker -export-dynamic -o python \ Modules/python.o \ libpython2.7.a -lpthread -ldl -lutil -lm libpython2.7.a(posixmodule.o): In function `posix_tmpnam': /home/moose/Downloads/Python-2.7.1/./Modules/posixmodule.c:7346: warning: the use of `tmpnam_r' is dangerous, better use `mkstemp' libpython2.7.a(posixmodule.o): In function `posix_tempnam': /home/moose/Downloads/Python-2.7.1/./Modules/posixmodule.c:7301: warning: the use of `tempnam' is dangerous, better use `mkstemp' Segmentation fault make: *** [sharedmods] Fehler 139 edit2: Now I've found http://indefinitestudies.org/2010/02/08/how-to-build-32-bit-python-on-ubuntu-9-10-x86_64/ and it seems like this worked: $ cd Python-2.7.1 $ CC="gcc -m32" LDFLAGS="-L/lib32 -L/usr/lib32 \ -L`pwd`/lib32 -Wl,-rpath,/lib32 -Wl,-rpath,/usr/lib32" \ ./configure --prefix=/opt/pym32 $ make $ sudo make install But installing psyco didn't work: Download the latest snapshot: http://psyco.sourceforge.net/download.html Extract it and go into the folder $ python setup.py install This error appeared: PROCESSOR = 'ivm' running install running build running build_py running build_ext building 'psyco._psyco' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DALL_STATIC=1 -Ic/ivm -I/usr/include/python2.6 -c c/psyco.c -o build/temp.linux-x86_64-2.6/c/psyco.o In file included from c/psyco.c:1: c/psyco.h:9: fatal error: Python.h: Datei oder Verzeichnis nicht gefunden compilation terminated. error: command 'gcc' failed with exit status 1
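
    One note on the final error (an editor's aside, not part of the question): the gcc command line shows the psyco build using -I/usr/include/python2.6, i.e. the system's 64-bit Python headers, which is why Python.h cannot be found. Assuming the 32-bit interpreter built above really was installed under the /opt/pym32 prefix, invoking setup.py with that interpreter should at least point the compiler at the matching headers (the extracted folder name below is hypothetical):

        cd psyco-snapshot                               # hypothetical name of the extracted psyco folder
        /opt/pym32/bin/python setup.py build            # build against the 32-bit interpreter's own headers
        sudo /opt/pym32/bin/python setup.py install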

    Read the article

  • CodePlex Daily Summary for Friday, February 26, 2010

    CodePlex Daily Summary for Friday, February 26, 2010New Projectsaion-gamecp: Aion Gamecp for aion Private server based on Aion UniqueAzure Email Queuer: Azure Email Queuer makes it easier for Developers Programming in the Cloud to Queue Emails to keep the UI Thread Clear for Requests. Developed w...BIG1: Bob and Ian's Game. Written using XNA Game Studio Express. Basically an update of David Braben and Ian Bell's classic game "Elite." This is a nonco...CMS7: CMS7 The CMS7 is composed of three module. (1)Main CMS Business (2)Process Customization (3)Role/Department CustomizationCoreSharp Networking Core: A simple to use framework to develop efficient client/server application. The framework is part of my project at school and I hope it will benefit ...Fullscreen Countdown: Small and basic countdown application. The countdown window can be resized to fit any size to display the minutes elapsed. Developped in C#, .NET F...IRC4N00bz: Learning sockets, events, delegates, SQL, and IRC commands all in one big project! It's written in C# (Csharp) and hope you find it helpfull, or ev...LjSystem: This project is a collection of my extensions to the BCLMP3 Tags Management: A software to manage the tags of MP3 filesnetone: All net in oneNext Dart (Dublin Area Rapid Transport): The shows the times of the next darts from a given station. It is a windows application that updates automatically and so is easier to use than th...PChat - An OCDotNet.Org Presentation: PChat is a multithreaded pinnable chat server and client. It is designed to be a demonstration of Visual Studio 2010 MVC 2, for ocdotnet.org Use...Pittsburgh Code Camp iPhone App: The Pittsburgh Code Camp iPhone Application is meant as a demonstration of the creation of an iPhone application while at the same time providing t...Radical: Radical is an infrastructure frameworkRadioAutomation: Windows application for radio automation.SilverSynth - Digital Audio Synthesis for Silverlight: SilverSynth is a digial audio synthesis library for Silverlight developers to create synthesized wave forms from code. It supports synthesis of sin...SkeinLibManaged: This implementation of the Skein Cryptographic Hash function is written entirely in Managed CSharp. It is posted here to share with the world at l...SpecExplorerEval: We are checking out spec explorer and presenting on its useSPOJemu: This is a SPOJ emulator. It allows you to define tests in xml and then check your application if it's working as you expected.The C# Skype Chat bot: A Skype bot in C# for managing Skype chats.VS 2010 Architecture Layers Patterns: Architecture layers patterns toolbox items for layers diagrams.Yakiimo3D: Mostly DirectX 11 programming tutorials.代码生成器: Project DetailsNew ReleasesArkSwitch: ArkSwitch v1.1.1: This release fixes a crash that occurs when certain processes with multiple primary windows are encountered.BTP Tools: CSB, CUV and HCSB e-Sword files 2010-02-26: include csb.bbl csb+.bbl csb.cmt csbc.dct cuv.bbl cuv+.bbl cuv.cmt cuvc.dct hcsb+.bbl hcsbc.dct files for e-Sword 8.0BubbleBurst: BubbleBurst v1.1: This is the second release of BubbleBurst, the subject of the book Advanced MVVM. This release contains a minor fix that was added after the book ...DevTreks -social budgeting that improves lives and livelihoods: Social Budgeting Web Software, alpha 3b: Alpha 3b simplifies and strengthens state management. 
With the exception of linked lists, the internal mechanics of addins have not been improved...Dragonrealms PvpStance plugin for Genie: 1.0.0.4: This updated is needed now that the DR server move broke the "profile soandso pvp" syntax. This version will capture the pvp stance out of the full...FastCode: FastCode 1.0: Definitions <integerType> : byte, sbyte, short, ushort, int, uint, long, ulond <floatType> : float, double, decimal Base types extensions Intege...Fullscreen Countdown: Fullscreen Countdown 1.0: First versionIRC4N00bz: IRC4N00bz_02252010.zip: I'm calling it a night. Here's the dll for where I'm at so far. It works, just lakcs some abilities. Anything not included can be pulled from th...Labrado: Labrado MiniTimer: Labrado MiniTimer is a convenient timer tool designed and implemented for GMAT test preparation.LINQ to VFP: LinqToVfp (v1.0.17.1): Cleaned up WCF Data Service Expression Tree. (details...) This build requires IQToolkit v0.17b.Microsoft Health Common User Interface: Release 8.0.200.000: This is version 8.0 of the Microsoft® Health Common User Interface Control Toolkit. The scope and requirements of this release are based on materia...Mini SQL Query: Mini SQL Query Funky Dev Build (RC1+): The "Funk Dev Build" bit is that I added a couple of features I think are pretty cool. It is a "dev" build but I class it as stable. Find Object...Neovolve: Neovolve.BlogEngine.Extensions 1.2: Updated extensions to work with BE 1.6. Updated Snippets extension to better handle excluded tags and fixed regex bug. Added SyntaxHighlighter exte...Neovolve: Neovolve.BlogEngine.Web 1.1: Update to support BE version 1.6 Neovolve.BlogEngine.Web 1.1 contains a redirector module that translates Community Server url formats into BlogEn...Next Dart (Dublin Area Rapid Transport): 1.0: There are 2 files NextDart 1.0.zip This contains just the files. Extract it to a folder and run NextDart.exe. NextDart 1.0 Intaller.zip This c...Powershell4SQL: Version 1.2: Changes from version 1.1 Added additional attributes to simplify syntax. Server and Database become optional. Defaulted to (local) and 'master' ...Radical: Radical (Desktop) 1.0: First stable dropRaidTracker: Raid Tracker: a few tweaksRaiser's Edge API Developer Toolkit: Alpha Release 1: This is an untested, alpha release. Contains RE API Toolkit built using 7.85 Dlls and 7.91 Dlls.SharePoint Enhanced Calendar by ArtfulBits: ArtfulBits.EnhancedCalendar v1.3: New Features: Simple to activate mechanism added (add Enhanced Calendar Web Part on the same page as standard calendar) Support for any type of S...Silverlight 4.0 Com Library for SQL Server Access: Version 1.0: This is the intial alpha release. It includes ExecuteQuery, ExecuteNonQuery and ExecuteScalar routines. See roadmap section of home page for detai...Silverlight HTML 5 Canvas: SLCanvas 1.1: This release enables <canvas renderMethod="auto" onload="runme(this)"></canvas> or <canvas renderMethod="Silverlight" onload="runme(this)"></ca...SilverSynth - Digital Audio Synthesis for Silverlight: SilverSynth 1.0: Source code including demo application.StringDefs: StringDefs Alpha Release 1.01: In this release of the Library few namespaces are added.STSDev 2008: STSDev 2008 2.1: Update to the StsDev 2008 project to correct Manifest Building issues.Text to HTML: 0.4.0.2: Cambios de la versión:Correcciones menores en el sistema de traducción. Controlada la excepción aparecida al suprimir los archivos de idioma. 
A...The Silverlight Hyper Video Player [http://slhvp.com]: Release 4 - Friendly User Release (Pre-Beta): Release 4 - Friendly User Release (Pre-Beta) This version of the code has much of the design that we plan to go forward with for Mix and utilizes a...TreeSizeNet: TreeSizeNet 0.10.2: - Assemblies merged in one executableVCC: Latest build, v2.1.30225.0: Automatic drop of latest buildVCC: Latest build, v2.1.30225.1: Automatic drop of latest buildVS 2010 Architecture Layers Patterns: VS 2010 RC Architecture Layers Patterns v1.0: Architecture layers patterns toolbox items based on the Microsoft Application Architecture Guide, 2nd Edition for the layer diagram designer of Vi...Yakiimo3D: DirectX11 BitonicSortCPU Source and Binary: DirectX11 BitonicSortCPU sample source and binary.Yakiimo3D: DirectX11 MandelbrotGPU Source and Binary: DirectX11 MandelbrotGPU source and binary.Most Popular ProjectsVSLabOSIS Interop TestsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesMost Active ProjectsDinnerNow.netRawrBlogEngine.NETSLARToolkit - Silverlight Augmented Reality ToolkitInfoServiceSharpMap - Geospatial Application Framework for the CLRCommon Context AdaptersNB_Store - Free DotNetNuke Ecommerce Catalog ModulejQuery Library for SharePoint Web Servicespatterns & practices – Enterprise Library

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); The two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done.
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows.
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collocated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi
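
    As a closing aside, the post mentions in passing that dynamic SQL could generate the hard-coded VALUES list from sys.partitions. A minimal sketch of that idea (not part of the original article, but reusing the same object names and hints as the queries above):

        -- Build the "(1),(2),...,(41)" list from the partition metadata for T1
        DECLARE @vals nvarchar(max) =
            STUFF((
                SELECT N',(' + CONVERT(nvarchar(10), p.partition_number) + N')'
                FROM sys.partitions AS p
                WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
                AND p.index_id = 1
                ORDER BY p.partition_number
                FOR XML PATH(''), TYPE
            ).value('.', 'nvarchar(max)'), 1, 1, N'');

        -- Embed the generated list in the collocated merge join query and run it
        DECLARE @sql nvarchar(max) = N'
        SELECT row_count = SUM(Subtotals.cnt)
        FROM (VALUES ' + @vals + N') AS P (partition_number)
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2 ON T2.TID = T1.TID
            WHERE $PARTITION.PFT(T1.TID) = P.partition_number
            AND $PARTITION.PFT(T2.TID) = P.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);';

        EXECUTE sys.sp_executesql @sql;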

    Read the article

  • Create a Persistent Bootable Ubuntu USB Flash Drive

    - by Trevor Bekolay
    Don’t feel like reinstalling an antivirus program every time you boot up your Ubuntu flash drive? We’ll show you how to create a bootable Ubuntu flash drive that will remember your settings, installed programs, and more! Previously, we showed you how to create a bootable Ubuntu flash drive that would reset to its initial state every time you booted it up. This is great if you’re worried about messing something up, and want to start fresh every time you start tinkering with Ubuntu. However, if you’re using the Ubuntu flash drive to diagnose and solve problems with your PC, you might find that a lot of problems require guess-and-test cycles. It would be great if the settings you change in Ubuntu and the programs you install stay installed the next time you boot it up. Fortunately, Universal USB Installer, a great little program from Pen Drive Linux, can do just that! Note: You will need a USB drive at least 2 GB large. Make sure you back up any files on the flash drive because this process will format the drive, removing any files currently on it. Once Ubuntu has been installed on the flash drive, you can move those files back if there is enough space. Put Ubuntu on your flash drive Universal-USB-Installer.exe does not need to be installed, so just double click on it to run it wherever you downloaded it. Click Yes if you get a UAC prompt, and you will be greeted with this window. Click I Agree. In the drop-down box on the next screen, select Ubuntu 9.10 Desktop i386. Don’t worry if you normally use 64-bit operating systems – the 32-bit version of Ubuntu 9.10 will still work fine. Some useful tools do not have 64-bit versions, so unless you’re planning on switching to Ubuntu permanently, the 32-bit version will work best. If you don’t have a copy of the Ubuntu 9.10 CD downloaded, then click on the checkbox to Download the ISO. You’ll be prompted to launch a web browser; click Yes. The download should start immediately. When it’s finished, return the the Universal USB Installer and click on Browse to navigate to the ISO file you just downloaded. Click OK and the text field will be populated with the path to the ISO file. Select the drive letter that corresponds to the flash drive that you would like to use from the dropdown box. If you’ve backed up the files on this drive, we recommend checking the box to format the drive. Finally, you have to choose how much space you would like to set aside for the settings and programs that will be stored on the flash drive. Considering that Ubuntu itself only takes up around 700 MB, 1 GB should be plenty, but we’re choosing 2 GB in this example because we have lots of space on this USB drive. Click on the Create button and then make yourself a sandwich – it will take some time to install no matter how fast your PC is. Eventually it will finish. Click Close. Now you have a flash drive that will boot into a fully capable Ubuntu installation, and any changes you make will persist the next time you boot it up! Boot into Ubuntu If you’re not sure how to set your computer to boot using the USB drive, then check out the How to Boot Into Ubuntu section of our previous article on creating bootable USB drives, or refer to your motherboard’s manual. Once your computer is set to boot using the USB drive, you’ll be greeted with splash screen with some options. Press Enter to boot into Ubuntu. The first time you do this, it may take some time to boot up. Fortunately, we’ve found that the process speeds up on subsequent boots. 
You’ll be greeted with the Ubuntu desktop. Now, if you change settings like the desktop resolution, or install a program, those changes will be permanently stored on the USB drive! We installed avast! Antivirus, and on the next boot, found that it was still in the Accessories menu where we left it. Conclusion We think that a bootable Ubuntu USB flash drive is a great tool to have around in case your PC has problems booting otherwise. By having the changes you make persist, you can customize your Ubuntu installation to be the ultimate computer repair toolkit! Download Universal USB Installer from Pen Drive Linux

    Read the article

  • Back from Teched US

    - by gsusx
    It's been a few weeks since I last blogged and, trust me, I am not happy about it :( I have been crazily busy with some of our projects at Tellago which you are going to hear more about in the upcoming weeks :) I was so busy that I didn't even have time to blog about my sessions at Teched US last week. This year I ended up presenting three sessions on three different tracks: BIE403 | Real-Time Business Intelligence with Microsoft SQL Server 2008 R2 Session Type: Breakout Session Real-time business...(read more)

    Read the article

  • External Monitors shut off when Laptop Lid closes

    - by John Lanz
    I have researched the solution... gconftool-2 --type string --set /apps/gnome-power-manager/buttons/lid_ac "nothing" does not fix it. I have two external monitors and when I close my lid the settings are reset and the laptop's monitor is set to the default. Thanks! gsettings list-recursively org.gnome.settings-daemon.plugins.power org.gnome.settings-daemon.plugins.power active true org.gnome.settings-daemon.plugins.power button-hibernate 'nothing' org.gnome.settings-daemon.plugins.power button-power 'nothing' org.gnome.settings-daemon.plugins.power button-sleep 'nothing' org.gnome.settings-daemon.plugins.power button-suspend 'nothing' org.gnome.settings-daemon.plugins.power critical-battery-action 'suspend' org.gnome.settings-daemon.plugins.power idle-brightness 30 org.gnome.settings-daemon.plugins.power idle-dim-ac false org.gnome.settings-daemon.plugins.power idle-dim-battery true org.gnome.settings-daemon.plugins.power idle-dim-time 10 org.gnome.settings-daemon.plugins.power lid-close-ac-action 'nothing' org.gnome.settings-daemon.plugins.power lid-close-battery-action 'nothing' org.gnome.settings-daemon.plugins.power notify-perhaps-recall true org.gnome.settings-daemon.plugins.power percentage-action 2 org.gnome.settings-daemon.plugins.power percentage-critical 3 org.gnome.settings-daemon.plugins.power percentage-low 10 org.gnome.settings-daemon.plugins.power priority 1 org.gnome.settings-daemon.plugins.power sleep-display-ac 600 org.gnome.settings-daemon.plugins.power sleep-display-battery 600 org.gnome.settings-daemon.plugins.power sleep-inactive-ac false org.gnome.settings-daemon.plugins.power sleep-inactive-ac-timeout 0 org.gnome.settings-daemon.plugins.power sleep-inactive-ac-type 'suspend' org.gnome.settings-daemon.plugins.power sleep-inactive-battery true org.gnome.settings-daemon.plugins.power sleep-inactive-battery-timeout 0 org.gnome.settings-daemon.plugins.power sleep-inactive-battery-type 'suspend' org.gnome.settings-daemon.plugins.power time-action 120 org.gnome.settings-daemon.plugins.power time-critical 300 org.gnome.settings-daemon.plugins.power time-low 1200 org.gnome.settings-daemon.plugins.power use-time-for-policy true
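
    As a point of reference (an editor's note, not from the question): the gconftool-2 key quoted at the top belongs to the old GNOME 2 power manager, whereas the listing above is the GNOME 3 gsettings schema. There, the lid behaviour is governed by the lid-close-ac-action and lid-close-battery-action keys, which the output already shows set to 'nothing'. For completeness, they are written with gsettings rather than gconftool-2:

        # set the lid-close action for AC and battery (keys taken from the listing above)
        gsettings set org.gnome.settings-daemon.plugins.power lid-close-ac-action 'nothing'
        gsettings set org.gnome.settings-daemon.plugins.power lid-close-battery-action 'nothing'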

    Read the article

  • Error : Member 'D-T-D' in the Period dimension has no value for the Period Type property

    - by RahulS
    Workaround for LCM EPMA deploy errors:
    Error : Member 'D-T-D' in the Period dimension has no value for the Period Type property.
    Error : Member name 'D-T-D' in the Period dimension is only valid when Period Type is set to 'DTS Time Period'.
    Error : Member 'W-T-D' in the Period dimension has no value for the Period Type property.
    Error : Member name 'W-T-D' in the Period dimension is only valid when Period Type is set to 'DTS Time Period'.
    Error : Member 'M-T-D' in the Period dimension has no value for the Period Type property.
    Error : Member name 'M-T-D' in the Period dimension is only valid when Period Type is set to 'DTS Time Period'.
    Error : Member 'Q-T-D' in the Period dimension has no value for the Period Type property.
    Error : Member name 'Q-T-D' in the Period dimension is only valid when Period Type is set to 'DTS Time Period'.
    Error : Member 'P-T-D' in the Period dimension has no value for the Period Type property.
    Error : Member name 'P-T-D' in the Period dimension is only valid when Period Type is set to 'DTS Time Period'.
    Error : Member 'S-T-D' in the Period dimension has no value for the Period Type property.
    Error : Member name 'S-T-D' in the Period dimension is only valid when Period Type is set to 'DTS Time Period'.
    Error : Member 'Y-T-D' in the Period dimension has no value for the Period Type property.
    Error : Member name 'Y-T-D' in the Period dimension is only valid when Period Type is set to 'DTS Time Period'.
    Error : Member 'H-T-D' in the Period dimension has no value for the Period Type property.
    Error : Member name 'H-T-D' in the Period dimension is only valid when Period Type is set to 'DTS Time Period'.
    Fix
    1. Edit the Period dimension LCM artifact (keep a backup of the file before editing).
    2. Delete the DTS members (for example, as listed below) in the Period dimension hierarchy section:
       #root|D-T-D|True||||||||||||||||
       #root|W-T-D|True||||||||||||||||
       #root|M-T-D|True||||||||||||||||
       #root|Q-T-D|True||||||||||||||||
       #root|P-T-D|True||||||||||||||||
       #root|S-T-D|True||||||||||||||||
       #root|Y-T-D|True||||||||||||||||
       #root|H-T-D|True||||||||||||||||
    3. Delete the DTS members (for example, as listed below) in the Period member hierarchy section (see the scripted sketch after this entry for one way to automate steps 2 and 3):
       D-T-D|True||||||||||||||||
       W-T-D|True||||||||||||||||
       M-T-D|True||||||||||||||||
       Q-T-D|True||||||||||||||||
       P-T-D|True||||||||||||||||
       S-T-D|True||||||||||||||||
       Y-T-D|True||||||||||||||||
       H-T-D|True||||||||||||||||
    4. Then save the edited Period dimension LCM artifact.
    5. Then try to import the Period dimension using LCM.
    6. Then Validate/Deploy the Planning application. If you still hit the same issue, see the PS below.
    PS: This issue is fixed in 11.1.2.2.
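    If the artifact is large, steps 2 and 3 can be scripted instead of edited by hand. The snippet below is only an illustrative sketch, not part of the documented workaround: it assumes the Period artifact is a plain pipe-delimited text file (as in the examples above) and simply drops every line that belongs to one of the DTS members, keeping a backup first. The file name on the command line is hypothetical.

    using System;
    using System.IO;
    using System.Linq;

    class StripDtsMembers
    {
        static void Main(string[] args)
        {
            // Hypothetical usage: StripDtsMembers.exe Period.csv
            string path = args[0];
            string[] dtsMembers = { "D-T-D", "W-T-D", "M-T-D", "Q-T-D", "P-T-D", "S-T-D", "Y-T-D", "H-T-D" };

            // Keep a backup, as the workaround advises.
            File.Copy(path, path + ".bak", true);

            // Drop lines for DTS members in both the dimension hierarchy section
            // (lines starting with "#root|<member>|") and the member hierarchy
            // section (lines starting with "<member>|").
            var kept = File.ReadAllLines(path)
                .Where(line => !dtsMembers.Any(m =>
                    line.StartsWith(m + "|") ||
                    line.StartsWith("#root|" + m + "|")))
                .ToArray();

            File.WriteAllLines(path, kept);
            Console.WriteLine("Removed DTS member rows; original saved as " + path + ".bak");
        }
    }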

    Read the article

  • Can't seem to install imagick

    - by PolishHurricane
    I'm trying to install the PHP PEAR PECL extension "imagick" (image magick), but failing horribly. It seems that I keed installing packages to progress, but this one has me stumped. It seems to fail all the way at the bottom. Please Note: I'm using ArchLinux, apt-get doesn't save me. [root@Crux tmp]# pecl install imagick downloading imagick-3.0.1.tgz ... Starting to download imagick-3.0.1.tgz (93,920 bytes) .....................done: 93,920 bytes 13 source files, building running: phpize Configuring for: PHP Api Version: 20100412 Zend Module Api No: 20100525 Zend Extension Api No: 220100525 Please provide the prefix of Imagemagick installation [autodetect] : building in /tmp/pear/temp/pear-build-rootLbSUWT/imagick-3.0.1 running: /tmp/pear/temp/imagick/configure --with-imagick checking for grep that handles long lines and -e... /usr/bin/grep checking for egrep... /usr/bin/grep -E checking for a sed that does not truncate output... /bin/sed checking for cc... cc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ISO C89... none needed checking how to run the C preprocessor... cc -E checking for icc... no checking for suncc... no checking whether cc understands -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... i686-pc-linux-gnu checking host system type... i686-pc-linux-gnu checking target system type... i686-pc-linux-gnu checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib checking for PHP extension directory... /usr/lib/php/modules checking for PHP installed headers prefix... /usr/include/php checking if debug is enabled... no checking if zts is enabled... no checking for re2c... re2c checking for re2c version... 0.13.5 (ok) checking for gawk... gawk checking whether to enable the imagick extension... yes, shared checking whether to enable the imagick GraphicsMagick backend... no checking ImageMagick MagickWand API configuration program... found in /usr/bin/MagickWand-config checking if ImageMagick version is at least 6.2.4... found version 6.7.8 Q16 checking for MagickWand.h header file... found in /usr/include/ImageMagick/wand/MagickWand.h checking PHP version is at least 5.1.3... yes. found 5.4.6 checking for ld used by cc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for /usr/bin/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm -B checking whether ln -s works... yes checking how to recognize dependent libraries... pass_all checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking the maximum length of command line arguments... 
1572864 checking command to parse /usr/bin/nm -B output from cc object... ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip checking if cc supports -fno-rtti -fno-exceptions... no checking for cc option to produce PIC... -fPIC checking if cc PIC flag -fPIC works... yes checking if cc static flag -static works... yes checking if cc supports -c -o file.o... yes checking whether the cc linker (/usr/bin/ld) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no creating libtool appending configuration tag "CXX" to libtool configure: creating ./config.status config.status: creating config.h running: make /bin/sh /tmp/pear/temp/pear-build-rootLbSUWT/imagick-3.0.1/libtool --mode=compile cc -I. -I/tmp/pear/temp/imagick -DPHP_ATOM_INC -I/tmp/pear/temp/pear-build-rootLbSUWT/imagick-3.0.1/include -I/tmp/pear/temp/pear-build-rootLbSUWT/imagick-3.0.1/main -I/tmp/pear/temp/imagick -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -I/usr/include/ImageMagick -DHAVE_CONFIG_H -g -O2 -c /tmp/pear/temp/imagick/imagick_class.c -o imagick_class.lo mkdir .libs cc -I. -I/tmp/pear/temp/imagick -DPHP_ATOM_INC -I/tmp/pear/temp/pear-build-rootLbSUWT/imagick-3.0.1/include -I/tmp/pear/temp/pear-build-rootLbSUWT/imagick-3.0.1/main -I/tmp/pear/temp/imagick -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -I/usr/include/ImageMagick -DHAVE_CONFIG_H -g -O2 -c /tmp/pear/temp/imagick/imagick_class.c -fPIC -DPIC -o .libs/imagick_class.o /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getimagematteâ: /tmp/pear/temp/imagick/imagick_class.c:268:2: warning: âMagickGetImageMatteâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:82) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getsizeoffsetâ: /tmp/pear/temp/imagick/imagick_class.c:406:2: warning: passing argument 2 of âMagickGetSizeOffsetâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:87:3: note: expected âssize_t *â but argument is of type âlong int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_paintfloodfillimageâ: /tmp/pear/temp/imagick/imagick_class.c:1016:3: warning: âMagickPaintFloodfillImageâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:99) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c:1019:3: warning: âMagickPaintFloodfillImageâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:99) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getimagepropertiesâ: /tmp/pear/temp/imagick/imagick_class.c:1083:2: warning: passing argument 3 of âMagickGetImagePropertiesâ from incompatible pointer type [enabled by default] In file included from 
/usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:35:5: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getimageprofilesâ: /tmp/pear/temp/imagick/imagick_class.c:1131:2: warning: passing argument 3 of âMagickGetImageProfilesâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:33:5: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_recolorimageâ: /tmp/pear/temp/imagick/imagick_class.c:1402:2: warning: âMagickRecolorImageâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:109) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_setfontâ: /tmp/pear/temp/imagick/imagick_class.c:1442:3: error: âstruct _php_core_globalsâ has no member named âsafe_modeâ /tmp/pear/temp/imagick/imagick_class.c:1442:3: error: âCHECKUID_CHECK_FILE_AND_DIRâ undeclared (first use in this function) /tmp/pear/temp/imagick/imagick_class.c:1442:3: note: each undeclared identifier is reported only once for each function it appears in /tmp/pear/temp/imagick/imagick_class.c:1442:3: error: âCHECKUID_NO_ERRORSâ undeclared (first use in this function) /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_queryformatsâ: /tmp/pear/temp/imagick/imagick_class.c:2580:2: warning: passing argument 2 of âMagickQueryFormatsâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:41:5: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_queryfontsâ: /tmp/pear/temp/imagick/imagick_class.c:2607:2: warning: passing argument 2 of âMagickQueryFontsâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:40:5: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_colorfloodfillimageâ: /tmp/pear/temp/imagick/imagick_class.c:3396:2: warning: âMagickColorFloodfillImageâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:75) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_mapimageâ: /tmp/pear/temp/imagick/imagick_class.c:3730:2: warning: âMagickMapImageâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:86) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_mattefloodfillimageâ: /tmp/pear/temp/imagick/imagick_class.c:3763:2: warning: âMagickMatteFloodfillImageâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:88) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_medianfilterimageâ: 
/tmp/pear/temp/imagick/imagick_class.c:3790:2: warning: âMagickMedianFilterImageâ is deprecated (declared at /usr/include/ImageMagick/wand/magick-image.h:217) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_paintopaqueimageâ: /tmp/pear/temp/imagick/imagick_class.c:3853:2: warning: âMagickPaintOpaqueImageChannelâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:104) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_painttransparentimageâ: /tmp/pear/temp/imagick/imagick_class.c:3916:2: warning: âMagickPaintTransparentImageâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:107) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_reducenoiseimageâ: /tmp/pear/temp/imagick/imagick_class.c:4059:2: warning: âMagickReduceNoiseImageâ is deprecated (declared at /usr/include/ImageMagick/wand/magick-image.h:266) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getimageattributeâ: /tmp/pear/temp/imagick/imagick_class.c:5068:2: warning: âMagickGetImageAttributeâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:59) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getimagechannelextremaâ: /tmp/pear/temp/imagick/imagick_class.c:5253:2: warning: âMagickGetImageChannelExtremaâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:78) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c:5253:2: warning: passing argument 3 of âMagickGetImageChannelExtremaâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:68:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/deprecate.h:78:3: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c:5253:2: warning: passing argument 4 of âMagickGetImageChannelExtremaâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:68:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/deprecate.h:78:3: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getimageextremaâ: /tmp/pear/temp/imagick/imagick_class.c:5506:2: warning: âMagickGetImageExtremaâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:80) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c:5506:2: warning: passing argument 2 of âMagickGetImageExtremaâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:68:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/deprecate.h:80:3: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c:5506:2: warning: passing argument 3 of âMagickGetImageExtremaâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:68:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/deprecate.h:80:3: note: expected âsize_t *â but argument 
is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getimagehistogramâ: /tmp/pear/temp/imagick/imagick_class.c:5629:2: warning: passing argument 2 of âMagickGetImageHistogramâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:74:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-image.h:415:5: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getimagepageâ: /tmp/pear/temp/imagick/imagick_class.c:5740:2: warning: passing argument 2 of âMagickGetImagePageâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:74:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-image.h:192:3: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c:5740:2: warning: passing argument 3 of âMagickGetImagePageâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:74:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-image.h:192:3: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c:5740:2: warning: passing argument 4 of âMagickGetImagePageâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:74:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-image.h:192:3: note: expected âssize_t *â but argument is of type âlong int *â /tmp/pear/temp/imagick/imagick_class.c:5740:2: warning: passing argument 5 of âMagickGetImagePageâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:74:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-image.h:192:3: note: expected âssize_t *â but argument is of type âlong int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getimageindexâ: /tmp/pear/temp/imagick/imagick_class.c:6344:2: warning: âMagickGetImageIndexâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:65) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_setimageindexâ: /tmp/pear/temp/imagick/imagick_class.c:6369:2: warning: âMagickSetImageIndexâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:113) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getimagesizeâ: /tmp/pear/temp/imagick/imagick_class.c:6447:2: warning: âMagickGetImageSizeâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:140) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_setimageattributeâ: /tmp/pear/temp/imagick/imagick_class.c:6796:2: warning: âMagickSetImageAttributeâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:111) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_flattenimagesâ: 
/tmp/pear/temp/imagick/imagick_class.c:7043:2: warning: âMagickFlattenImagesâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:132) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_averageimagesâ: /tmp/pear/temp/imagick/imagick_class.c:8079:2: warning: âMagickAverageImagesâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:131) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_mosaicimagesâ: /tmp/pear/temp/imagick/imagick_class.c:8516:2: warning: âMagickMosaicImagesâ is deprecated (declared at /usr/include/ImageMagick/wand/deprecate.h:135) [-Wdeprecated-declarations] /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getpageâ: /tmp/pear/temp/imagick/imagick_class.c:9126:2: warning: passing argument 2 of âMagickGetPageâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:84:3: note: expected âsize_t *â but argument is of type âlong int *â /tmp/pear/temp/imagick/imagick_class.c:9126:2: warning: passing argument 3 of âMagickGetPageâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:84:3: note: expected âsize_t *â but argument is of type âlong int *â /tmp/pear/temp/imagick/imagick_class.c:9126:2: warning: passing argument 4 of âMagickGetPageâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:84:3: note: expected âssize_t *â but argument is of type âlong int *â /tmp/pear/temp/imagick/imagick_class.c:9126:2: warning: passing argument 5 of âMagickGetPageâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:84:3: note: expected âssize_t *â but argument is of type âlong int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getquantumdepthâ: /tmp/pear/temp/imagick/imagick_class.c:9154:2: warning: passing argument 1 of âMagickGetQuantumDepthâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:52:4: note: expected âsize_t *â but argument is of type âlong int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getquantumrangeâ: /tmp/pear/temp/imagick/imagick_class.c:9176:2: warning: passing argument 1 of âMagickGetQuantumRangeâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:53:4: note: expected âsize_t *â but argument is of type âlong int *â 
/tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getsamplingfactorsâ: /tmp/pear/temp/imagick/imagick_class.c:9247:2: warning: passing argument 2 of âMagickGetSamplingFactorsâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:59:4: note: expected âsize_t *â but argument is of type âlong int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getsizeâ: /tmp/pear/temp/imagick/imagick_class.c:9273:2: warning: passing argument 2 of âMagickGetSizeâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:86:3: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c:9273:2: warning: passing argument 3 of âMagickGetSizeâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:86:3: note: expected âsize_t *â but argument is of type âlong unsigned int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_getversionâ: /tmp/pear/temp/imagick/imagick_class.c:9299:2: warning: passing argument 1 of âMagickGetVersionâ from incompatible pointer type [enabled by default] In file included from /usr/include/ImageMagick/wand/MagickWand.h:73:0, from /tmp/pear/temp/imagick/php_imagick.h:49, from /tmp/pear/temp/imagick/imagick_class.c:21: /usr/include/ImageMagick/wand/magick-property.h:55:4: note: expected âsize_t *â but argument is of type âlong int *â /tmp/pear/temp/imagick/imagick_class.c: In function âzim_imagick_setimageprogressmonitorâ: /tmp/pear/temp/imagick/imagick_class.c:9534:2: error: âstruct _php_core_globalsâ has no member named âsafe_modeâ /tmp/pear/temp/imagick/imagick_class.c:9534:2: error: âCHECKUID_CHECK_FILE_AND_DIRâ undeclared (first use in this function) /tmp/pear/temp/imagick/imagick_class.c:9534:2: error: âCHECKUID_NO_ERRORSâ undeclared (first use in this function) make: *** [imagick_class.lo] Error 1 ERROR: `make' failed

    Read the article

  • Multiple IP addresses on one NIC register twice in DNS server

    - by Brad B.
    Hi, We've got a build server (Windows Server 2008 SP2, 64-bit) which has one NIC and two IP addresses registered to that NIC (192.168.1.30 and 192.168.1.31). The build server is registering two identical Host (A) records for itself in our DNS server:
    buildserver.example.com = 192.168.1.30
    buildserver.example.com = 192.168.1.31
    I know that in the "Advanced TCP/IP Settings" window for the build server's NIC, under the "DNS" tab, there is a check box labeled "Register this connection's addresses in DNS". I only want ONE of the IP addresses (the one ending in .30) to be registered in DNS, not both of them. Can that be done? My best guess is to disable the "Register this connection's addresses in DNS" option and manually add the Host (A) record to our DNS server. Thanks for any help!

    Read the article

  • The Product Owner

    - by Robert May
    In a previous post, I outlined the rules of Scrum.  This post details one of those rules. Picking a most important part of Scrum is difficult.  All of the rules are required, but if there were one rule that is “more” required that every other rule, its having a good Product Owner.  Simply put, the Product Owner can make or break the project. Duties of the Product Owner A Product Owner has many duties and responsibilities.  I’ll talk about each of these duties in detail below. A Product Owner: Discovers and records stories for the backlog. Prioritizes stories in the Product Backlog, Release Backlog and Iteration Backlog. Determines Release dates and Iteration Dates. Develops story details and helps the team understand those details. Helps QA to develop acceptance tests. Interact with the Customer to make sure that the product is meeting the customer’s needs. Discovers and Records Stories for the Backlog When I do Scrum, I always use User Stories as the means for capturing functionality that’s required in the system.  Some people will use Use Cases, but the same rule applies.  The Product Owner has the ultimate responsibility for figuring out what functionality will be in the system.  Many different mechanisms for capturing this input can be used.  User interviews are great, but all sources should be considered, including talking with Customer Support types.  Often, they hear what users are struggling with the most and are a great source for stories that can make the application easier to use. Care should be taken when soliciting user stories from technical types such as programmers and the people that manage them.  They will almost always give stories that are very technical in nature and may not have a direct benefit for the end user.  Stories are about adding value to the company.  If the stories don’t have direct benefit to the end user, the Product Owner should question whether or not the story should be implemented.  In general, technical stories should be included as tasks in User Stories.  Technical stories are often needed, but the ultimate value to the user is in user based functionality, so technical stories should be considered nothing more than overhead in providing that user functionality. Until the iteration prior to development, stories should be nothing more than short, one line placeholders. An exercise called Story Planning can be used to brainstorm and come up with stories.  I’ll save the description of this activity for another blog post. For more information on User Stories, please read the book User Stories Applied by Mike Cohn. Prioritizes Stories in the Product Backlog, Release Backlog and Iteration Backlog Prioritization of stories is one of the most difficult tasks that a Product Owner must do.  A key concept of Scrum done right is the need to have the team working from a single set of prioritized stories.  If the team does not have a single set of prioritized stories, Scrum will likely fail at your organization.  The Product Owner is the ONLY person who has the responsibility to prioritize that list.  The Product Owner must be very diplomatic and sincerely listen to the people around him so that he can get the priorities correct. Just listening will still not yield the proper priorities.  Care must also be taken to ensure that Return on Investment is also considered.  Ultimately, determining which stories give the most value to the company for the least cost is the most important factor in determining priorities.  
Product Owners should be willing to look at cold, hard numbers to determine the order for stories.  Even when many people want a feature, if that features is costly to develop, it may not have as high of a return on investment as features that are cheaper, but not as popular. The act of prioritization often causes conflict in an environment.  Customer Service thinks that feature X is the most important, because it will stop people from calling.  Operations thinks that feature Y is the most important, because it will stop servers from crashing.  Developers think that feature Z is most important because it will make writing software much easier for them.  All of these are useful goals, but the team can have only one list of items, and each item must have a priority that is different from all other stories.  The Product Owner will determine which feature gives the best return on investment and the other features will have to wait their turn, which means that someone will not have their top priority feature implemented first. A weak Product Owner will refuse to do prioritization.  I’ve heard from multiple Product Owners the following phrase, “Well, it’s all got to be done, so what does it matter what order we do it in?”  If your product owner is using this phrase, you need a new Product Owner.  Order is VERY important.  In Scrum, every release is potentially shippable.  If the wrong priority items are developed, then the value added in each release isn’t what it should be.  Additionally, the Product Owner with this mindset doesn’t understand Agile.  A product is NEVER finished, until the company has decided that it is no longer a going concern and they are no longer going to sell the product.  Therefore, prioritization isn’t an event, its something that continues every day.  The logical extension of the phrase “It’s all got to be done” is that you will never ship your product, since a product is never “done.”  Once stories have been prioritized, assigning them to the Release Backlog and the Iteration Backlog becomes relatively simple.  The top priority items are copied into the respective backlogs in order and the task is complete.  The team does have the right to shuffle things around a little in the iteration backlog.  For example, they may determine that working on story C with story A is appropriate because they’re related, even though story B is technically a higher priority than story C.  Or they may decide that story B is too big to complete in the time available after Story A has tasks created, so they’ll work on Story C since it’s smaller.  They can’t, however, go deep into the backlog to pick stories to implement.  The team and the Product Owner should work together to determine what’s best for the company. Prioritization is time consuming, but its one of the most important things a Product Owner does. Determines Release Dates and Iteration Dates Product owners are responsible for determining release dates for a product.  A common misconception that Product Owners have is that every “release” needs to correspond with an actual release to customers.  This is not the case.  In general, releases should be no more than 3 months long.  You  may decide to release the product to the customers, and many companies do release the product to customers, but it may also be an internal release. If a release date is too far away, developers will fall into the trap of not feeling a sense of urgency.  The date is far enough away that they don’t need to give the release their full attention.  
Additionally, important tasks, such as performance tuning, regression testing, user documentation, and release preparation, will not happen regularly, making them much more difficult and time consuming to do.  The more frequently you do these tasks, the easier they are to accomplish. The Product Owner will be a key participant in determining whether or not a release should be sent out to the customers.  The determination should be made on whether or not the features contained in the release are valuable enough  and complete enough that the customers will see real value in the release.  Often, some features will take more than three months to get them to a state where they qualify for a release or need additional supporting features to be released.  The product owner has the right to make this determination. In addition to release dates, the Product Owner also will help determine iteration dates.  In general, an iteration length should be chosen and the team should follow that iteration length for an extended period of time.  If the iteration length is changed every iteration, you’re not doing Scrum.  Iteration lengths help the team and company get into a rhythm of developing quality software.  Iterations should be somewhere between 2 and 4 weeks in length.  Any shorter, and significant software will likely not be developed.  Any longer, and the team won’t feel urgency and planning will become very difficult. Iterations may not be extended during the iteration.  Companies where Scrum isn’t really followed will often use this as a strategy to complete all stories.  They don’t want to face the harsh reality of what their true performance is, and looking good is more important than seeking visibility and improving the process and team.  Companies like this typically don’t allow failure.  This is unhealthy.  Failure is part of life and unless we learn from it, we can’t improve.  I would much rather see a team push out stories to the next iteration and then have healthy discussions about why they failed rather than extend the iteration and not deal with the core problems. If iteration length varies, retrospectives become more difficult.  For example, evaluating the performance of the team’s estimation efforts becomes much more difficult if the iteration length varies.  Also, the team must have a velocity measurement.  If the iteration length varies, measuring velocity becomes impossible and upper management no longer will have the ability to evaluate the teams performance.  People external to the team will no longer have the ability to determine when key features are likely to be developed.  Variable iterations cause the entire company to fail and likely cause Scrum to fail at an organization. Develops Story Details and Helps the Team Understand Those Details A key concept in Scrum is that the stories are nothing more than a placeholder for a conversation.  Stories should be nothing more than short, one line statements about the functionality.  The team will then converse with the Product Owner about the details about that story.  The product owner needs to have a very good idea about what the details of the story are and needs to be able to help the team understand those details. Too often, we see this requirement as being translated into the need for comprehensive documentation about the story, including old fashioned requirements documentation.  
The team should only develop the documentation that is required and should not develop documentation that is only created because their is a process to do so. In general, what we see that works best is the iteration before a team starts development work on a story, the Product Owner, with other appropriate business analysts, will develop the details of that story.  They’ll figure out what business rules are required, potentially make paper prototypes or other light weight mock-ups, and they seek to understand the story and what is implied.  Note that the time allowed for this task is deliberately short.  The Product Owner only has a single iteration to develop all of the stories for the next iteration. If more than one iteration is used, I’ve found that teams will end up with Big Design Up Front and traditional requirements documents.  This is a waste of time, since the team will need to then have discussions with the Product Owner to figure out what the requirements document says.  Instead of this, skip making the pretty pictures and detailing the nuances of the requirements and build only what is minimally needed by the team to do development.  If something comes up during development, you can address it at that time and figure out what you want to do.  The goal is to keep things as light weight as possible so that everyone can move as quickly as possible. Helps QA to Develop Acceptance Tests In Scrum, no story can be counted until it is accepted by QA.  Because of this, acceptance tests are very important to the team.  In general, acceptance tests need to be developed prior to the iteration or at the very beginning of the iteration so that the team can make sure that the tasks that they develop will fulfill the acceptance criteria. The Product Owner will help the team, including QA, understand what will make the story acceptable.  Note that the Product Owner needs to be careful about specifying that the feature will work “Perfectly” at the end of the iteration.  In general, features are developed a little bit at a time, so only the bit that is being developed should be considered as necessary for acceptance. A weak Product Owner will make statements like “Do it right the first time.”  Not only are these statements damaging to the team (like they would try to do it WRONG the first time . . .), they’re also ignoring the iterative nature of Scrum.  Additionally, a weak product owner will seek to add scope in the acceptance testing.  For example, they will refuse to determine acceptance at the beginning of the iteration, and then, after the team has planned and committed to the iteration, they will expand scope by defining acceptance.  This often causes the team to miss the iteration because scope that wasn’t planned on is included.  There are ways that the team can mitigate this problem.  For example, include extra “Product Owner” time to deal with the uncertainty that you know will be introduced by the Product Owner.  This will slow the perceived velocity of the team and is not ideal, since they’ll be doing more work than they get credit for. Interact with the Customer to Make Sure that the Product is Meeting the Customer’s Needs Once development is complete, what the team has worked on should be put in front of real live people to see if it meets the needs of the customer.  One of the great things about Agile is that if something doesn’t work, we can revisit it in a future iteration!  
This frees up the team to make the best decision now and know that if that decision proves to be incorrect, the team can revisit it and change that decision. Features are about adding value to the customer, so if the customer doesn’t find them useful, then having the team make tweaks is valuable.  In general, most software will be 80 to 90 percent “right” after the initial round and only minor tweaks are required.  If proper coding standards are followed, these tweaks are usually minor and easy to accomplish.  Product Owners that are doing a good job will encourage real users to see and use the software, since they know that they are trying to add value to the customer. Poor product owners will think that they know the answers already, that their customers are silly and do stupid things and that they don’t need customer input.  If you have a product owner that is afraid to show the team’s work to real customers, you probably need a different product owner. Up Next, “Who Makes a Good Product Owner.” Followed by, “Messing with the Team.” Technorati Tags: Scrum,Product Owner
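    As an aside, the post's point about ordering the backlog by return on investment can be made concrete with a little code. The sketch below is not from the article; it is a hypothetical illustration that ranks stories by a simple value-to-cost ratio, one common way to approximate ROI when a Product Owner prioritizes.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Illustrative only: rank backlog stories by a rough ROI proxy (business value / cost).
    // The story names and numbers are made up for the example.
    class Story
    {
        public string Title;
        public double BusinessValue;   // relative value to the company
        public double Cost;            // relative cost to implement
    }

    class BacklogPrioritization
    {
        static void Main()
        {
            var backlog = new List<Story>
            {
                new Story { Title = "Stop support calls about password resets", BusinessValue = 8, Cost = 3 },
                new Story { Title = "Keep servers from crashing under load",    BusinessValue = 9, Cost = 9 },
                new Story { Title = "Refactor build scripts for developers",    BusinessValue = 3, Cost = 2 },
            };

            // Highest value per unit of cost first; ties broken by raw value.
            var prioritized = backlog
                .OrderByDescending(s => s.BusinessValue / s.Cost)
                .ThenByDescending(s => s.BusinessValue);

            foreach (var story in prioritized)
                Console.WriteLine("{0}  (ROI proxy: {1:F2})", story.Title, story.BusinessValue / story.Cost);
        }
    }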

    Read the article

  • CodePlex Daily Summary for Wednesday, April 21, 2010

    CodePlex Daily Summary for Wednesday, April 21, 2010New ProjectsA WPF ImageButton that uses pixel shader for dynamically change the image status: A WPF ImageButton that uses pixel shader for dynamically change the image status.CCLI SongSelect Library: A managed code library for reading and working with CCLI SongSelect files in C#, VB.NET or any other language that can use DLLs built with Managed ...code paste: A code paste with idea, insight and funny from web. All the paster rights belong the original author.CodeBlock: CodeBlock is CLI implementation which uses LLVM as its native code generator. CodeBlock provides LLVM binding for .NET which is written in C#, a...CSS 360 Planetary Calendar: This is the Planetary Calendar for UW Bothell -- CSS 360DNN 4 Friendly Url Modification: DNN 4.6.2.0 source code modification for Friendly URL support.Event Scavenger: "Event Scavenger" was created out of a need to manage and monitor multiple servers and have a global view of what's happening in the logs of multip...FastBinaryFileSearch: General Purpose Binary Search Implementation Of BoyerMooreHorspool Algorithmgotstudentathelaticprofile: Test project Grate: Grate process photos by color-dispersion-tree It's developed in C#GZK2010: GZK Project is used for sdo.gzk 2010JpAccountingBeta: JpAccountingBetaLog4Net Viewer: Log4Net Viewer is a log4net files parser and exposes them to webservices. You can then watch them on a rich interface. This code is .NET4 with an ...MarkLogic Toolkit for Excel: The MarkLogic Toolkit for Excel allows you to quickly build content applications with MarkLogic Server that extend the functionality of Microsoft E...MarkLogic Toolkit for PowerPoint: The MarkLogic Toolkit for PowerPoint allows you to quickly build content applications with MarkLogic Server that extend the functionality of Micros...MarkLogic Toolkit for Word: The MarkLogic Toolkit for Word allows you to quickly build content applications with MarkLogic Server that extend the functionality of Microsoft Wo...MvcContrib Portable Areas: This project hosts mvccontrib portable areas.OgmoXNA: OgmoXNA is an XNA 3.1 Game Library and set of Content Pipeline Extensions for Matt Thorson's multi-platform Ogmo Editor. Its goal is to provide ne...Pdf ebook seaerch engine and viewer: PDF Search Engine is a book search engine search on sites, forums, message boards for pdf files. You can find and download a tons of e-books but p...ResizeDragBehavior: This C# Silverlight 3 ResizeDragBehavior is a simpel implementation of resizing columns left, right, above or under a workingspace. It allows you ...Roguelike school project: A simple Rogue-like game made in c# for my school project. Uses Windows forms for GUI and ADO.NET + SQL Server CE for persistency.SharePoint Service Account Password Recovery Tool: A utility to recover SharePoint service account and application pool passwords.Smart Include - a powerful & easy way to handle your CSS and Javascript includes: Smart Include provides web developers with a very easy way to have all their css and javascript files compressed and/or minified, placed in a singl...sNPCedit: Perfect World NPC Editorstatusupdates: Generic status updatesTRXtoHTML: This is a command line utility to generate html report files from VSTS 2008 result files (trx). Usage: VSTSTestReport /q <input trx file> <outpu...WawaWuzi: 网页版五子棋,欢迎大家参与哦,呵呵。WPF Alphabet: WPF Alphabet is a application that I created to help my child learn the alphabet. 
It displays each letter and pronounces it using speech synthesis....WPF AutoCompleteBox for Chinese Spell: CSAutoBox is a type of WPF AutoCompleteBox for Chinese Spell in Input,Like Google,Bing.WpfCollections: WpfCollections help in WPF MVVM scenarios, resolving some of common problems and tasks: - delayed CurrentChange in ListCollectionView, - generate...XML Log Viewer in the Cloud: Silvelright 4 application hosted on Windows Azure. Upload any log file based on xml format. View log, search log, diff log, catalog etc.New ReleasesA Guide to Parallel Programming: Drop 3 - Guide Preface, Chapters 1, 2, 5, and code: This is Drop 3 with Guide Preface, Chapters 1, 2, 5, and References, and the accompanying code samples. This drop requires Visual Studio 2010 Beta ...Artefact Animator: Artefact Animator - Silverlight 4 and WPF .Net 4: Artefact Animator Version 4.0.4.6 Silverlight 4 ArtefactSL.dll for both Debug and Release. WPF 4 Artefact.dll for both Debug and Release.ASP.NET Wiki Control: Release 1.2: Includes SyntaxHighlighter integration. The control will display the functionality if it detects that http://alexgorbatchev.com/wiki/SyntaxHighli...C# to VB.NET VB.NET To C# Code Convertor: CSharp To VB.Net Convertor VS2010 Support: !VS2010 Support Added To The Addon Visual Studio buildin VB.Net To C# , C# To VB.Net Convertor using NRefactor from icsharpcode's SharpDevelop. The...CodeBlock: LLVM - 2010-04-20: These are precompiled LLVM dynamic link libraries. One for AMD64 architecture and one for IA32. To use these DLL's you should copy them to corresp...crudwork library: crudwork 2.2.0.4: What's new: qapp (shorts for Query Analyzer Plus Plus) is a SQL query analyzer previously for SQLite only now works for all databases (SQL, Oracle...DiffPlex - a .NET Diff Generator: DiffPlex 1.1: Change listFixed performance bug caused by logging for debug build Added small performance fix in the core diff algorithm Package the release b...Event Scavenger: First release (version 3.0): This release does not include any installers yet as the system is a bit more complex than a simple MSI installer can handle. Follow the instruction...Extend SmallBasic: Teaching Extensions v.012: added archiving for screen shots (Tortoise.approve) ColorWheel exits if emptyFree Silverlight & WPF Chart Control - Visifire: Charts for Silverlight 4!: Hi, Visifire now works with Silverlight 4. Microsoft released Silverlight 4 last week. Some of the new features include more fluid animations, Web...IST435: Lab 5 - DotNetNuke Module Development: Lab 5 - DotNetNuke Module DevelopmentThis is the instructions for Lab 5 on. This lab must be completed in-class. It is based on your Lab 4.KEMET_API: Kemet API v0.2d: new platform with determiners and ideograms ... please consult the "release_note.txt" for more informations.MDT Scripts, Front Ends, Web Services, and Utilities for use with ConfigMgr/SCCM: PrettyGoodFrontEndClone (v1.0): This is a clone of the great PrettyGoodFrontEnd written by Johan Arwidmark that uses the Deployment Webservice as a backend so you don't need to ho...NMigrations: 1.0.0.3: CHG: upgraded solution/projects to Visual Studio 2010 FIX: removed precision/scale from MONEY data type (issue #8081) FIX: added support for binary...Object/Relational Mapper & Code Generator in Net 2.0 for Relational & XML Schema: 2.6: Minor release.OgmoXNA: OgmoXNA Alpha Binaries: Binaries Release build binaries for the Windows and Xbox 360 platforms. 
Includes the Content Pipeline Extensions needed to build your projects in ...Ox Game Engine for XNA: Release 70 - Fixes: Update in 2.2.3.2 Removed use of 'reflected' render state. May fix some render errors. Original Hi all! I fixed all of the major known problems...patterns & practices – Enterprise Library: Enterprise Library 5.0 - April 2010: Microsoft Enterprise Library 5.0 official binaries can be downloaded from MSDN. Please be patient as the bits get propagated through the download s...Pdf ebook seaerch engine and viewer: Codes PDf ebook: CodesPlay-kanaler (Windows Media Center Plug-in): Playkanaler 1.0.4: Playkanaler version 1.0.4 Viasatkanalerna kanske fungerar igen, tack vare AleksandarF. Pausa och spola fungerar inte ännu.PokeIn Comet Ajax Library: PokeIn v06 x64: Bug fix release of PokeIn x64 Security Bugs Fixed Encoding Bugs Fixed Performance Improvements Made New Method in BrowserHelper classPokeIn Comet Ajax Library: PokeIn v06 x86: Bug fix release of PokeIn x86 Security Bugs Fixed Encoding Bugs Fixed Performance Improvements Made New Method in BrowserHelper classPowerSlim - Acceptance Testing for Enterprise Applications: PowerSlim 0.2: We’re pleased to announce the PowerSlim 0.2. The main feature of this release is Windows Setup which installs all you need to start doing Acceptan...Rapidshare Episode Downloader: RED v0.8.5: This release fixes some bugs that mainly have to do with Next and Add Show functionality.Rawr: Rawr 2.3.15: - Improvements to Wowhead/Armory parsing. - Rawr.Mage: Fix for calculations being broken on 32bit OSes. - Rawr.Warlock: Lots more work on fleshin...ResizeDragBehavior: ResizeDragBehavior 1.0: First release of the ResizeDragBehavior. Also includes a sampleproject to see how this behavior can be implemented.RoTwee: RoTwee (11.0.0.0): 17316 Follow Visual Studio 2010/.NET Framework 4.0SharePoint Service Account Password Recovery Tool: 1.0: This is the first release of the password recovery toolSilverlightFTP: SilverlightFTP Beta RUS: SilverlightFTP with drag-n-drop support. Russian.SqlCe Viewer (SeasonStar Database Management): SqlCe Viewer(SSDM) 0.0.8.3: 1:Downgrade to .net framework 3.5 sp1 2:Fix some bugs 3:Refactor Mysql EntranceThe Ghost: DEL3SWE: DEL3SWETMap for VS2010: TMap for Visual Studio 2010: TMap for Visual Studio 2010Sogeti has developed a testing process template that integrates the TMap test approach with Visual Studio 2010 (VS2010)....TRXtoHTML: TRXtoHTML v1.1: Minor updateVisual Studio Find Results Window Tweak: Find Results Window Tweak: First stable release of the tool, which enables you to tweak the find results window.Web Service Software Factory: 15 Minute Walkthrough for WSSF2010: This walkthrough provides a very brief introduction for those who either do not have a lot of time for a full introduction, or those who are lookin...Web Service Software Factory: Hands On Lab - Building a Web Service (VS2010): This hands-on lab provides eight exercies to briefly introduce most of the experiences of building a Web service using the Service Factory 2010. Th...Web Service Software Factory: Web Service Software Factory 2010 Source Code: System Requirements • Microsoft Visual Studio 2010 (Premium, Professional or Ultimate Edition) • Guidance Automation Extensions 2010 • Visu...WPF Alphabet: Source Code plus Binaries: Compete C# and WPF source code available below. 
I have also included the binary for those that just want to run it.WPF AutoCompleteBox for Chinese Spell: CSAutoBox V1.0: This is CSAutoBox V1.0 Beta,if you have any questions,please email me.Most Popular ProjectsRawrWBFS ManagerSilverlight ToolkitAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryASP.NETMicrosoft SQL Server Community & SamplesPHPExcelMost Active ProjectsRawrpatterns & practices – Enterprise LibraryBlogEngine.NETIonics Isapi Rewrite FilterFarseer Physics EnginePHPExcelTweetSharpCaliburn: An Application Framework for WPF and SilverlightNB_Store - Free DotNetNuke Ecommerce Catalog ModulePokeIn Comet Ajax Library

    Read the article

  • How Oracle Data Integration Customers Differentiate Their Business in Competitive Markets

    - by Irem Radzik
    With data being a central force in driving innovation and competing effectively, data integration has become a key IT approach to remove silos and ensure working with consistent and trusted data. Especially with the release of 12c version, Oracle Data Integrator and Oracle GoldenGate offer easy-to-use and high-performance solutions that help companies with their critical data initiatives, including big data analytics, moving to cloud architectures, modernizing and connecting transactional systems and more. In a recent press release we announced the great momentum and analyst recognition Oracle Data Integration products have achieved in the data integration and replication market. In this press release we described some of the key new features of Oracle Data Integrator 12c and Oracle GoldenGate 12c. In addition, a few from our 4500+ customers explained how Oracle’s data integration platform helped them achieve their business goals. In this blog post I would like to go over what these customers shared about their experience. Land O’Lakes is one of America’s premier member-owned cooperatives, and offers an extensive line of agricultural supplies, as well as production and business services. Rich Bellefeuille, manager, ETL & data warehouse for Land O’Lakes told us how GoldenGate helped them modernize their critical ERP system without impacting service and how they are moving to new projects with Oracle Data Integrator 12c: “With Oracle GoldenGate 11g, we've been able to migrate our enterprise-wide implementation of Oracle’s JD Edwards EnterpriseOne, ERP system, to a new database and application server platform with minimal downtime to our business. Using Oracle GoldenGate 11g we reduced database migration time from nearly 30 hours to less than 30 minutes. Given our quick success, we are considering expansion of our Oracle GoldenGate 12c footprint. We are also in the midst of deploying a solution leveraging Oracle Data Integrator 12c to manage our pricing data to handle orders more effectively and provide a better relationship with our clients. We feel we are gaining higher productivity and flexibility with Oracle's data integration products." ICON, a global provider of outsourced development services to the pharmaceutical, biotechnology and medical device industries, highlighted the competitive advantage that a solid data integration foundation brings. Diarmaid O’Reilly, enterprise data warehouse manager, ICON plc said “Oracle Data Integrator enables us to align clinical trials intelligence with the information needs of our sponsors. It helps differentiate ICON’s services in an increasingly competitive drug-development industry." You can find more info on ICON's implementation here. A popular use case for Oracle GoldenGate’s real-time data integration is offloading operational reporting from critical transaction processing systems. SolarWorld, one of the world’s largest solar-technology producers and the largest U.S. solar panel manufacturer, implemented Oracle GoldenGate for real-time data integration of manufacturing data for fast analysis. Russ Toyama, U.S. 
senior database administrator for SolarWorld, told us how real-time data helps their operations and how GoldenGate supports the high performance of their manufacturing systems: “We use Oracle GoldenGate for real-time data integration into our decision support system, which performs real-time analysis for manufacturing operations to continuously improve product quality, yield and efficiency. With reliable and low-impact data movement capabilities, Oracle GoldenGate also helps ensure that our critical manufacturing systems are stable and operate with high performance." You can watch the full interview with SolarWorld's Russ Toyama here. Starwood Hotels and Resorts is one of the many customers that found out how well Oracle Data Integration products work with Oracle Exadata. Gordon Light, senior director of information technology for Starwood Hotels, says they saw a notable performance gain in loading their Oracle Exadata reporting environment: “We leverage Oracle GoldenGate to replicate data from our central reservations systems and other OLTP databases – significantly decreasing the overall ETL duration. Moving forward, we plan to use Oracle GoldenGate to help the company achieve near-real-time reporting.” You can hear more about Starwood Hotels' implementation here. Many companies combine the power of Oracle GoldenGate with Oracle Data Integrator to have a single, integrated data integration platform for a variety of use cases across the enterprise. Ufone is another good example of that. The leading mobile communications service provider of Pakistan has improved customer service using timely customer data in its data warehouse. Atif Aslam, head of management information systems for Ufone, says: “Oracle Data Integrator and Oracle GoldenGate help us integrate information from various systems and provide up-to-date and real-time CRM data updates hourly, rather than daily. The applications have simplified data warehouse operations and allowed business users to make faster and better informed decisions to protect revenue in the fast-moving Pakistani telecommunications market.” You can read more about Ufone's use case here. In our Oracle Data Integration 12c launch webcast back in November we also heard from BT’s CTO Surren Parthab about their use of GoldenGate for moving to a private cloud architecture. Surren also shared his perspectives on the Oracle Data Integrator 12c and Oracle GoldenGate 12c releases. You can watch the video here. These are only a few examples of leading companies that have made data integration and real-time data access a key part of their data governance and IT modernization initiatives. They have seen real improvements in how their businesses operate and differentiate in today’s competitive markets.
You can read about other customer examples in our Ebook: The Path to the Future and access resources including white papers, data sheets, podcasts and more via our Oracle Data Integration resource kit.

    Read the article

  • Little PM side post...

    - by edgaralgernon
When adding new team members, offset the ramp-up time by 1) having pre-built machines ready and an easy method of getting the latest tools, code base, etc. I'm fortunate enough to be at a client that has a machine ready built and loaded when the dev arrives; all they have to do is grab the code. 2) Have tasks broken down so that dependencies are as minimal as possible. In other words, to overcome the mythical man-month issue (as recently mentioned on Slashdot), make sure the tasks you hand out have few dependencies on each other. That way the new dev is able to be productive fairly quickly. Here's our historical lead time... the bump in January is due to added work; by 2/18 we had added 4 new people over the previous two weeks. And amazingly, the time starts coming down. Here's our average work time: again, time ramps up as we are adding more tasks, but then starts inching back down throughout February and March. It's not that we beat the Mythical Man-Month; in fact I still believe the book and its idea are highly relevant. But if you can break the tasks down and reduce the dependencies between the tasks, then you can mitigate the effect. The tool used in this case is from AgileZen.com, and some of the wild swings are due to inexperience with the system initially... but our average times as measured by the tool are matching real life. Also, the tool appears to measure in 24-hour days and 7-day weeks, so it isn't as bad as it looks. :-)

    Read the article

  • C#: Adding Functionality to 3rd Party Libraries With Extension Methods

    - by James Michael Hare
    Ever have one of those third party libraries that you love but it's missing that one feature or one piece of syntactical candy that would make it so much more useful?  This, I truly think, is one of the best uses of extension methods.  I began discussing extension methods in my last post (which you find here) where I expounded upon what I thought were some rules of thumb for using extension methods correctly.  As long as you keep in line with those (or similar) rules, they can often be useful for adding that little extra functionality or syntactical simplification for a library that you have little or no control over. Oh sure, you could take an open source project, download the source and add the methods you want, but then every time the library is updated you have to re-add your changes, which can be cumbersome and error prone.  And yes, you could possibly extend a class in a third party library and override features, but that's only if the class is not sealed, static, or constructed via factories. This is the perfect place to use an extension method!  And the best part is, you and your development team don't need to change anything!  Simply add the using for the namespace the extensions are in! So let's consider this example.  I love log4net!  Of all the logging libraries I've played with, it, to me, is one of the most flexible and configurable logging libraries and it performs great.  But this isn't about log4net, well, not directly.  So why would I want to add functionality?  Well, it's missing one thing I really want in the ILog interface: ability to specify logging level at runtime. For example, let's say I declare my ILog instance like so:     using log4net;     public class LoggingTest     {         private static readonly ILog _log = LogManager.GetLogger(typeof(LoggingTest));         ...     }     If you don't know log4net, the details aren't important, just to show that the field _log is the logger I have gotten from log4net. So now that I have that, I can log to it like so:     _log.Debug("This is the lowest level of logging and just for debugging output.");     _log.Info("This is an informational message.  Usual normal operation events.");     _log.Warn("This is a warning, something suspect but not necessarily wrong.");     _log.Error("This is an error, some sort of processing problem has happened.");     _log.Fatal("Fatals usually indicate the program is dying hideously."); And there's many flavors of each of these to log using string formatting, to log exceptions, etc.  But one thing there isn't: the ability to easily choose the logging level at runtime.  Notice, the logging levels above are chosen at compile time.  Of course, you could do some fun stuff with lambdas and wrap it, but that would obscure the simplicity of the interface.  And yes there is a Logger property you can dive down into where you can specify a Level, but the Level properties don't really match the ILog interface exactly and then you have to manually build a LogEvent and... well, it gets messy.  I want something simple and sexy so I can say:     _log.Log(someLevel, "This will be logged at whatever level I choose at runtime!");     Now, some purists out there might say you should always know what level you want to log at, and for the most part I agree with them.  For the most party the ILog interface satisfies 99% of my needs.  
In fact, for most application logging yes you do always know the level you will be logging at, but when writing a utility class, you may not always know what level your user wants. I'll tell you, one of my favorite things is to write reusable components.  If I had my druthers I'd write framework libraries and shared components all day!  And being able to easily log at a runtime-chosen level is a big need for me.  After all, if I want my code to really be re-usable, I shouldn't force a user to deal with the logging level I choose. One of my favorite uses for this is in Interceptors -- I'll describe Interceptors in my next post and some of my favorites -- for now just know that an Interceptor wraps a class and allows you to add functionality to an existing method without changing it's signature.  At the risk of over-simplifying, it's a very generic implementation of the Decorator design pattern. So, say for example that you were writing an Interceptor that would time method calls and emit a log message if the method call execution time took beyond a certain threshold of time.  For instance, maybe if your database calls take more than 5,000 ms, you want to log a warning.  Or if a web method call takes over 1,000 ms, you want to log an informational message.  This would be an excellent use of logging at a generic level. So here was my personal wish-list of requirements for my task: Be able to determine if a runtime-specified logging level is enabled. Be able to log generically at a runtime-specified logging level. Have the same look-and-feel of the existing Debug, Info, Warn, Error, and Fatal calls.    Having the ability to also determine if logging for a level is on at runtime is also important so you don't spend time building a potentially expensive logging message if that level is off.  Consider an Interceptor that may log parameters on entrance to the method.  If you choose to log those parameter at DEBUG level and if DEBUG is not on, you don't want to spend the time serializing those parameters. Now, mine may not be the most elegant solution, but it performs really well since the enum I provide all uses contiguous values -- while it's never guaranteed, contiguous switch values usually get compiled into a jump table in IL which is VERY performant - O(1) - but even if it doesn't, it's still so fast you'd never need to worry about it. So first, I need a way to let users pass in logging levels.  Sure, log4net has a Level class, but it's a class with static members and plus it provides way too many options compared to ILog interface itself -- and wouldn't perform as well in my level-check -- so I define an enum like below.     namespace Shared.Logging.Extensions     {         // enum to specify available logging levels.         public enum LoggingLevel         {             Debug,             Informational,             Warning,             Error,             Fatal         }     } Now, once I have this, writing the extension methods I need is trivial.  Once again, I would typically /// comment fully, but I'm eliminating for blogging brevity:     namespace Shared.Logging.Extensions     {         // the extension methods to add functionality to the ILog interface         public static class LogExtensions         {             // Determines if logging is enabled at a given level.             
            public static bool IsLogEnabled(this ILog logger, LoggingLevel level)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        return logger.IsDebugEnabled;
                    case LoggingLevel.Informational:
                        return logger.IsInfoEnabled;
                    case LoggingLevel.Warning:
                        return logger.IsWarnEnabled;
                    case LoggingLevel.Error:
                        return logger.IsErrorEnabled;
                    case LoggingLevel.Fatal:
                        return logger.IsFatalEnabled;
                }

                return false;
            }

            // Logs a simple message - uses same signature except adds LoggingLevel
            public static void Log(this ILog logger, LoggingLevel level, object message)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        logger.Debug(message);
                        break;
                    case LoggingLevel.Informational:
                        logger.Info(message);
                        break;
                    case LoggingLevel.Warning:
                        logger.Warn(message);
                        break;
                    case LoggingLevel.Error:
                        logger.Error(message);
                        break;
                    case LoggingLevel.Fatal:
                        logger.Fatal(message);
                        break;
                }
            }

            // Logs a message and exception to the log at specified level.
            public static void Log(this ILog logger, LoggingLevel level, object message, Exception exception)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        logger.Debug(message, exception);
                        break;
                    case LoggingLevel.Informational:
                        logger.Info(message, exception);
                        break;
                    case LoggingLevel.Warning:
                        logger.Warn(message, exception);
                        break;
                    case LoggingLevel.Error:
                        logger.Error(message, exception);
                        break;
                    case LoggingLevel.Fatal:
                        logger.Fatal(message, exception);
                        break;
                }
            }

            // Logs a formatted message to the log at the specified level.
            public static void LogFormat(this ILog logger, LoggingLevel level, string format,
                                         params object[] args)
            {
                switch (level)
                {
                    case LoggingLevel.Debug:
                        logger.DebugFormat(format, args);
                        break;
                    case LoggingLevel.Informational:
                        logger.InfoFormat(format, args);
                        break;
                    case LoggingLevel.Warning:
                        logger.WarnFormat(format, args);
                        break;
                    case LoggingLevel.Error:
                        logger.ErrorFormat(format, args);
                        break;
                    case LoggingLevel.Fatal:
                        logger.FatalFormat(format, args);
                        break;
                }
            }
        }
    }

So there it is!  I didn't have to modify the log4net source code, so if a new version comes out, I can just add the new assembly with no changes.  I didn't have to subclass and worry about developers not calling my sub-class instead of the original.  I simply provide the extension methods and it's as if the long-lost extension methods were always a part of the ILog interface! Consider a very contrived example using the original interface:

    // using the original ILog interface
    public class DatabaseUtility
    {
        private static readonly ILog _log = LogManager.GetLogger(typeof(DatabaseUtility));

        // some theoretical method to time
        IDataReader Execute(string statement)
        {
            var timer = System.Diagnostics.Stopwatch.StartNew();

            // do DB magic

            // this is hard-coded to warn; if you want to change it at runtime, tough luck!
            if (timer.ElapsedMilliseconds > 5000 && _log.IsWarnEnabled)
            {
                _log.WarnFormat("Statement {0} took too long to execute.", statement);
            }
            ...
        }
    }

Now consider this alternate version, where the logging level could perhaps be a property of the class:

    // using the new extension methods
    public class DatabaseUtility
    {
        private static readonly ILog _log = LogManager.GetLogger(typeof(DatabaseUtility));

        // allow logging level to be specified by user of class instead
        public LoggingLevel ThresholdLogLevel { get; set; }

        // some theoretical method to time
        IDataReader Execute(string statement)
        {
            var timer = System.Diagnostics.Stopwatch.StartNew();

            // do DB magic

            // the level is now chosen at runtime through the property
            if (timer.ElapsedMilliseconds > 5000 && _log.IsLogEnabled(ThresholdLogLevel))
            {
                _log.LogFormat(ThresholdLogLevel, "Statement {0} took too long to execute.",
                    statement);
            }
            ...
        }
    }

Next time, I'll show one of my favorite uses for these extension methods in an Interceptor.
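To round out the threshold-logging scenario described earlier, here is a minimal sketch of a small helper that uses the IsLogEnabled and LogFormat extensions to log at a caller-chosen level. It assumes the LogExtensions class above is in scope; the TimedCall helper, its Run method, and the threshold parameter are hypothetical names introduced only for illustration, not the author's forthcoming Interceptor code.

    // Minimal sketch only - assumes the LogExtensions class above is available.
    // TimedCall, Run, and thresholdMs are hypothetical names for illustration.
    using System;
    using System.Diagnostics;
    using log4net;
    using Shared.Logging.Extensions;

    public static class TimedCall
    {
        // Runs an action and, if it exceeds the threshold, logs at the caller-chosen level.
        public static void Run(ILog log, LoggingLevel level, long thresholdMs,
                               string description, Action action)
        {
            var timer = Stopwatch.StartNew();
            action();
            timer.Stop();

            // Check the runtime-specified level first so the formatting cost is skipped when it's off.
            if (timer.ElapsedMilliseconds > thresholdMs && log.IsLogEnabled(level))
            {
                log.LogFormat(level, "{0} took {1} ms (threshold {2} ms).",
                    description, timer.ElapsedMilliseconds, thresholdMs);
            }
        }
    }

A caller could then wrap a database call with something like TimedCall.Run(_log, LoggingLevel.Warning, 5000, "Execute statement", () => RunQuery(statement)), where RunQuery is simply a placeholder for whatever does the real work.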

    Read the article

  • SQL SERVER – What is SSRS and Why SSRS is asked for in many Job Opening?

    - by Pinal Dave
    This example is from the Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. This will be a 5 day blog post in getting started with SSRS. Today will show the importance of SSRS in the business. Why is SSRS asked for in so many job openings? If you talk to an SSRS expert it’s very clear to them exactly why companies really need this invention and how it saves time and adds business value. You don’t have to be an SSRS expert to know its value or to start using it. For example you don’t have to be an airline pilot to know the usefulness of modern transportation. Even the people who don’t know how to run SSRS but need the reports can tell you why that is needed. This blog post will go into why SSRS is an important invention by showing how it improves the usage of information in your company. Before SSRS there has always been a need for a company to benefit from the use of its own information. Excel spreadsheets have been a popular way to do this for a long time. With SSRS you can still use this solution and gain many other options too. A friend of mine told me a story about doing database work in the 90s for a major company and how he wished SSRS was available back then. The Vice President of the marketing channel would often come to him just before an important meeting with the board of directors. He often needed to show how certain product sales were performing over time. All this information was in the database so it was my friend’s job to get the information out and organized into a medium the VP could use. This medium was usually Excel. The VP often had meetings all over the world where he showcased this Excel report. The solution to get the VP to him anywhere he was in the world was an Excel file attached to an e-mail. This worked pretty well but with some drawbacks. One time my friend sent the wrong file in the e-mail. A few minutes later my friend realized his mistake and sent another frantic e-mail to VP. This one was saying to ignore the last e-mail and use this newer one. Would the VP see the correct e-mail in time? If SSRS had been available, my friend could have created a solution that let the VP run the report any time he wished. The report could have been published to the company intranet where the VP could run it from any of the offices he happened to be traveling to that month. There is a fair amount of work up front to develop and publish the report, but once that work is completed, the report can be reused as many times as needed. My friend could even be on vacation for the first day of the monthly and the VP can get his real-time report. Not only could the report show the most recent data, the VP could choose to view reports of previous months with just a few clicks. The deployed SSRS is user friendly, and can also be configured to protect reports from being run by the wrong people. Tomorrow’s Post Tomorrow’s blog post will show how to know if you already have SSRS installed. If you want to learn SSRS in easy to simple words – I strongly recommend you to get Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

    Read the article

  • CodePlex Daily Summary for Sunday, June 13, 2010

    CodePlex Daily Summary for Sunday, June 13, 2010New ProjectsCurve Drawer: A Java project to explore the possibilities of drawing curves and knots.File Manager Redux: .NET version of the original File Manager.Hierachical Gantt Chart In SharePoint 2010: This solution makes it easier for shedule management. We will provide a wsp including a list definition and a custom gantt control. The list defi...Light Box Control for Asp.Net: Lightbox control for asp.net is used to display the thumbnail images. on clicking the thumbnail images the original images is displayed in the ligh...Linquify: Linquify is a Visual Studio 2008/2010 Addin and C# .NET business class / DTO generator for LINQ to SQL and the Entity Framework. It supports rapid ...Microsoft Dynamics CRM Query - T4 Template: A T4 Template that generates code that leverages LINQ to SQL and the Microsoft Dynamics CRM API to give a CRM data access solution. There is also ...Open Sound Control Library: A .NET Library for the Open Sound Control Protocol. This library makes it easy to use devices which communicate via OSC.Questionable Content Screensaver: A screensaver for the questionable content comic. It is written in C#, and uses the windows presentation foundation. See the comic at http://ww...Reflect: Reflect is an open source .NET reflection tool used for viewing metadata of .NET assemblies.runescape 602 client tools and server: runescape 602 client tools and serverSharpCrack: Hash cracker written in managed code.SilverCAT project: This is my Windows Azure study project. So far I did not find any value to share it to the public. If I find it out one day, I will add hereSilverStackAPI: My entry for the Stack Exchange API contest. A silverlight library and demo app.social bookmark control for asp.net: social bookmark control for asp.net, This control is used to bookmark the current asp.net page into popular social networking sites like facebook, ...SSIS Event Log Source: An SSIS 2005 Data Source component for loading Windows 2003/XP event logs (*.evt) into SQL Server 2005 for analysisUnOfficial ActiveWorlds Wrapper.Net: UnOfficial ActiveWorlds Wrapper .Net makes it easier for programmers to make active worlds bots. You'll no longer have to make it by yourself. It'...Using Named Pipe and self-elevation feature of Vista in a console application.: NPipeWithElevatedProc, make it easier for console application users, running programs with administrator privileges. The processing messages are al...Virtual Keyboard control for asp.net: Virtual Keyboard control for asp.net, This control is used to get the secured inputs through virtual keyboards.Visual Studio 2010 PowerShell Code Generator: Brings rich PowerShell functionalities into VS Templating. You can access the file system, the registry, and many other PowerShell features. You ca...WatchersNET.UrlShorty: This Module allows users to shorten a long URL and share it, this is a similiar service to web services like bit.ly, tinyurl.com and others. It als...New ReleasesBD File Hash: BD File Hash 1.0.5: The first public release of BD File Hash.Cassandraemon: Cassandraemon 0.6.0: Cassandraemon is LINQ Provider for Apache Cassandra. This is first release of Cassandraemon. Features You can Query by LINQ Support Regist, Del...Community Forums NNTP bridge: Community Forums NNTP Bridge V36: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. 
This release has ad...Curve Drawer: Alpha 1: Basic functionality is available to draw curves and clear them.CycleMania Starter Kit EAP - ASP.NET 4 Problem - Design - Solution: Cyclemania 09.32: see Source Code tab for recent change historyDEWD: DEWD for Umbraco v1.0 (beta-2): Beta release of the package. Functional feature set and fairly stable. Since the last release: Default values (support for dynamic values such as t...Fiddler TreeView Panel Extension: FiddlerTreeViewPanel 0.71: Added support for double-click to expand/collapse all child nodes. Keep selected node when losing focus from the TreeView. Please refer to http://...HKGolden Express: HKGoldenExpress (Build 201006130200): New features: User can reply to message with quoting others' message. Bug fix: Incorrect format of dynamically generated Sitemap XML. Improveme...Liekhus ADO.NET Entity Data Model XAF Extensions: Version 1.1.2: Latest patches and changes.Light Box Control for Asp.Net: Light Box Control for asp.net: Lightbox control for asp.net is used to display the thumbnail images. on clicking the thumbnail images the original images is displayed in the ligh...Lightweight Fluent Workflow: Objectflow 1.1.0.0: This release has support for multi-threaded operations. As this required significant changes to the fluent interface I have introduced breaking ch...Linquify: Linquify 1.6: Linquify 1.6 Includes: - Support for Entity Framework foreign keys - TransactionsLiteFx: LiteFx Alpha: Versão alpha do LiteFx.Microsoft Dynamics CRM Query - T4 Template: MS CRM Query T4 Template Version 0.5 Beta: Initial ReleaseNHibernate Membership Provider: NHibernate Membership Provider 0.9c: This is an updated source package with updated unit tests and some minor refactoring.NLog - Advanced .NET Logging: Nightly Build 2010.06.12.001: Changes since the last build:2010-06-12 10:42:41 Jarek Kowalski Added Width, Height, AutoScroll and MaxLines parameters to RichTextBoxTarget. 2010...Radical: Radical 1.0.1 (Vacuum): First drop with support for Windows Phone 7SharpCrack: SharpCrack v0.8: First release of SharpCrack. It does not support brute-force mode.social bookmark control for asp.net: social bookmark control for asp.net: social bookmark control for asp.net, This control is used to bookmark the current asp.net page into popular social networking sites like facebook, ...StardustExtensions: Simple hello: This is a very simple hello world script. Is just a basic script, is not packaged and works on IronPythonTiledLib: TiledLib 1.5: This release introduces breaking changes from 1.2. If you upgrade to this version from 1.2, you may have compiler errors and/or runtime differences...UDC indexes parser: UDC Parser RC1: Обновлена библиотека токенов, добавлены xml-doc комментарии, обновлен и исправлен код, обновлена логика лексера, обновлена грамматика парсера. Доба...UnOfficial ActiveWorlds Wrapper.Net: UnOfficial ActiveWorlds Wrapper.Net V0.5.85.1: NewLogin Structure. LaserBeam. 
ChangedOld Functions Changes Function Names Old New WorldInstanceSet SetWorldInstance WorldInstanceGet GetWo...UrzaGatherer: UrzaGatherer v2.0.2a: Inegration of VS Installer.VCC: Latest build, v2.1.30612.0: Automatic drop of latest buildVirtual Keyboard control for asp.net: virtual keyboard control: Virtual Keyboard control for asp.net, This control is used to get the secured inputs through virtual keyboards.Visual Studio 2010 PowerShell Code Generator: PSCodeGenerator: How to install PowerShell Code GeneratorDownload the zip Unzip Run .\Install-PSCodeGenerator.ps1 at the PowerShell console prompt Copies the te...VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 25 Beta: Build 25 (beta) New: Added support for Filter items (virtual folders) in Solution Explorer. New: Added "Get Lock..." to Solution Explorer context...WatchersNET.UrlShorty: WatchersNET.UrlShorty 01.00.00: First BETA Release Please Read the Readme or the Online Documentation for Install Instructions.Yet Another GPS: Release Beta 2.1: Release Beta 2.1: - Fix KML Template with Google Map Mobile Version - Add Signal Strength indecator - Add Time indecator - Fix Sound Language Prob...Most Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryPHPExcelMicrosoft SQL Server Community & SamplesASP.NETMost Active Projectspatterns & practices – Enterprise LibraryjQuery Library for SharePoint Web ServicesNB_Store - Free DotNetNuke Ecommerce Catalog ModuleRhyduino - Arduino and Managed CodeBlogEngine.NETCommunity Forums NNTP bridgeCassandraemonMediaCoder.NETAndrew's XNA HelpersMicrosoft Silverlight Media Framework

    Read the article

  • Free SQL Server training? Now you’re talking.

    - by Fatherjack
    SQL Server user groups are everywhere, literally all over the globe there are SQL Server professionals meeting on a regular basis, sharing ideas, solving problems, learning about how to do new stuff and new ways to do old stuff and it’s all for free. I don’t have detailed figures but of all the SQL Server professionals there are only a small number of them attend these user groups. Those people are the people that are taking the time and making then effort to make themselves better at their chosen trade, more employable and having a good time. For free. I don’t know why but there are many people that don’t seem to want to be the best they can be. Some of you enlightened people that do already attend could be doing more though. Have you ever spoken at  your group? Not just in the break while you have a mouthful of pizza and a drink in your hand but had the attention of the whole group listen to you speak. It doesn’t need to be a full hour, it doesn’t need to be some obscure deeply technical demonstration of SQL Server internals, just a few minutes on something that you do that might help other people with their daily work. A neat process that helps you get from Problem A to Solution B. There is no need to get concerned that becoming a speaker means that you suddenly have to know more than anyone else in the room. This is you talking about something that you experienced. What you did, what you would repeat, what you might do differently next time. No one in the audience can pick you up on a technicality. If someone comes out with a great idea that you hadn’t thought of, say “That’s a great idea, I didn’t think of that while we had the problem on our hands. I’ll try to remember that for next time”. If someone is looking to show you up for picking the wrong decision (and this, in my experience, is very uncommon indeed) then you simply give a reply like “Well, at the time we chose that option. Perhaps another time then we would tackle things differently but we were happy with how our solution worked”. It’s sharing things like this that makes user groups have a real value, talking about how you coped with or averted a disaster, a handy little section of code or using a tool in a particular way that you take for granted that might, just might, be something that other people haven’t thought of that solves a problem or saves some time for them. At the next meeting you might get the same benefit from a different person and so it goes on. As individuals benefits so the community benefits. For free. Things I encourage you to do; If you are a chapter or user group leader; encourage someone from your group who has never spoken before to start speaking. If you are a chapter or user group attendee that hasn’t spoken before; speak for at least 5 minutes on something related to SQL Server at any group meeting. If you don’t currently attend a user group; please go along to you nearest one when they are meeting next and invest in yourself and your future. UK user group details are here: http://sqlsouthwest.co.uk/national_ug.htm , PASS chapters outside the UK are found via http://www.sqlpass.org/PASSChapters/LocalChapters.aspx. If you are unsure of how you might achieve any of these things then get in touch with me*, I’ll give you specific advice on getting started on any of the above points and help you prove to yourself what you are capable of. SQL Community – be part of it and make it better. Let me know how you get on in the comments.

    Read the article

  • CodePlex Daily Summary for Monday, April 19, 2010

    CodePlex Daily Summary for Monday, April 19, 2010New Projects8085 Microprocessor simulator: This program allows you to write 8085 programs in assembly and run those programs on your PC. It comes with lots of help, plus you can put breakpo...Additional.NET framework: The Additional.Net framework extends the functionality of the .NET framework for easier application development. It is developed in C#.Astoria Contrib: A contrib project for filling the gaps in WCF Data Services, providing missing functionality or augmenting with T4 templates, helpers, etc.ClipoWeb: ClipoWeb is a web clipboard that allows you to copy text and files between computers. Users access a web page on the source and destination compute...elearning Center: Đây là một ứng dụng web viết hoàn toàn bằng Sliverlight. Ứng dụng này là một dạng elearning với đầy đủ chức năng và có khả năng tương tác tối đa v...Excel VSTO SQL Server Browser: Get Data from SQL Server and put it in Excel directly. The objective is to get more control about what do you need to pull and create automatic pro...Generic Tree Structure: Generic Base Classes that helps you to create complex tree structures without writing it again and again. Simply to use Like "var Node<Folder> fold...LAN Lordz LAN Party System: The LAN Lordz LAN Party System makes it easier for medium and large size events to track their attendance, sponsors, door prizes, tournaments, and...LiteFx: O LiteFx é um framework que ajuda na implementação de DDD (anêmico ou rico) ele foi desenvolvido por Douglas Aguiar (http://twitter.com/DouglasAgui...Managed UI Flow for ASP.NET MVC Framework: If your web application getting more complex, understanding and managing of complex UI flows(pageflow of application) getting harder and harder, If...Meus Exemplos: Meus ExemplosOrchard Blueprint Theme: Orchard BluePrint is a project that provides a WYSIWYG reference implementation of a Orchard theme to help designers get started with theme design....Outlook Social Network Connector - Avatar: Avatar 是一个开源的MS Outlook的插件,豆瓣用户可以在Outlook 2010中使用豆瓣。查看一封邮件中相关的收件人、发件人的用户广播、同城活动以及豆邮。不用上豆瓣也能方便了解好友动态。这个插件使用C#, .NET 4.0 开发。API 请求认证使用OAuth 认证。 (Avat...Quadro Tree: This is Quadri tree library.Sharepoint 2010 Alert Controller: In MOSS 2007 or Sharepoint Server 2010 if you want to see your alerts by list name you should use this tool.SharePoint Web Parts: The goal of this project is to develop a set of web parts for SharePoint.Silverlight Image Cropper: This is a silverlight 4 util that makes it easy to crop out a number images of a specific resolution screen or screens. ie. an easy way to crop ...SilverlightFTP: Silverlight ftp clientsplibex: libraries for sharepoint lists manipulationStardustExtensions: Official Extensions for StardustSwim Team Manager: Swim Team Manager is designed for managing and tracking administrative and performance information for your club, school, or swim team. Swim Tea...ToDoListWpf: A To Do List, I used it to manage my work items. I am sorry for my poor English.Trance Layer: TranceLayer is a fast and flexible logging or diagnostics framework for .Net. It allows you to plug it into an existing or new application with m...Unoficcial NeoFM.hu NowPlaying: A little windows tray program. 
Shows what's on neofm.hu right now.WabbitStudio Z80 Software Tools: The software suite provides all of the tools you need to create high quality Z80 software in Z80 assembly language, with a focus on TI calculators....WinToolbar: Windows.Toolbar is Silverlight library that implements common widgets that allows us to build a rich toolbar control in our applications, it incor...XP-More: XP-More is a tool that helps manage Windows 7 Virtual Machines (XP Mode and any other). Specifically, it makes duplication of VMs a no brainer - no...Yodelay .NET Framework Extensions: The Yodelay .NET Framework Extensions project provides a library of components that make many kinds of programming tasks simpler. These include bas...New ReleasesClipoWeb: ClipoWeb 1.0: First Beta release of the ClipoWeb web applicationDDDSample.Net: 0.8: This release contains all four versions of DDDSample.Net available in previous, 0.7 and a brand new one: Layered Model version. Layered Model demon...DotNetNuke Blueprint: 00.00.02: Added to this version CSS Reset Skin version including Grids This version will soon be updated with corresponding HTML version and DNN templateEsferatec.Text.RegularExpressions: 3.5.1003.1001: first stable release of the class; the assembly file is ready to use, the documentation is complete;Excel VSTO SQL Server Browser: Sample Only: Sample without Ribbon UI, if you close the TaskPane you will no longer able to open it without restart ExcelFolder Bookmarks: Folder Bookmarks 1.5.5: This is the latest version of Folder Bookmarks (1.5.5), with the new Archive Manager and Archive Viewer. It has an installer - it will create a dir...Gardens Point LEX: Gardens Point LEX, Version 1.1.3: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. ChangesVersion 1.1.3 corre...Gardens Point Parser Generator: Gardens Point Parser Generator V1.4.0: The distribution is a zip archive which contains the binary executables, documentation, source code and examples. ChangesVersion 1.4.0 of GPPG has...HKGolden Express: HKGoldenExpress (Build 201004181455): New features: Added rating of each topic. (Note: This feature is availabe since Build 201004172120) Bug fix: Handle invalid XML character in XML...Home Access Plus+: v4.0.0.0 Beta: v4.0.0.0 Beta Change Log: Moved to using .net 4.0 New Silverlight Uploader Various .net 4 fixes and tweaks File Changes: All fixes have changedHTML Ruby: 6.21.6: Reduced performance hit on pages with heavy DOM manipulations Fixed issue where empty tags caused it to apply invalid spacing values Stop spaci...LINQ to VFP: LinqToVfp (v1.0.17.2): Modified to allow using RecNo as a primary key. This build requires IQToolkit v0.17b.Managed UI Flow for ASP.NET MVC Framework: Preview 1: The source available on this site, does not reflect the final state of the project, it is a preview of what will be shipping in the framework in th...MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (2): Super minor update to accommodate the new Blend 4 RC. Only changes: The path to the Blend 4 templates changed to be "My Documents\Expression\Blend...N2 CMS: 2.0 rc: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major Changes (1.5 -> 2.0 release candid...OpenGL ES 2.0 Compact Framework Wrapper: Sample application CAB with texturing: This took some time as it was pretty hard to get the texture loaded and setup so that it would bind to the sampler2D in the fragment shader. 
Featu...Orchard Blueprint Theme: 00.00.01: This is the first release of this project, still in a very alpha version. Very soon this release is to be updated with the HTML version of the them...RoughJs: RoughJsSL: This is Silverlight library's CompilerSharepoint 2010 Alert Controller: Sharepoint 2010 Alert Controller: After you download WSP file you can get help from Home PageSharePoint LogViewer: SharePoint LogViewer 2.5: Minimize log viewer to tray Get popup notification of SharePoint log events from tray Redirect log entries to event log Send email notifications on...Site Directory for SharePoint 2010 (from Microsoft Consulting Services, UK): v1.1: This is a minor update which includes the following changes: Code consolidation across the whole project Additional site data captured. See solut...Stardust: Stardust 1.0: First stable version of Stardust (Build 172)StardustExtensions: Facebook Extension: Extension for stardust to upload and post images on Facebook.StardustExtensions: Facebook Extension (Source): The source code of an extension for Stardust used to post images on facebook.StardustExtensions: WPF Example: This is an example extension. Uses WPF to create a Window and say "Hello World!" Is a perfect download if want to start writing Stardust ExtensionsStardustExtensions: WPF Example Source: This is the source code of an extension that creates a Window using WPF & displays a simple text. Is great as an example of creating Stardust Exten...TFTP Server: TFTP Server 1.1 Beta installer: New MSI based installer Installs a TFTP service Supports multiple servers on different endpoints, with every server pointing to its own root di...TiledLib: TiledLib 1.1: This download is for prebuilt DLLs and a demo project. For the full source code, use the Source Code tab. Changes: Bug fixes in a few methods Ad...Trance Layer: TranceLayer Digger: Digger version is a beta. It is intended to be used as a demonstration of muscles while lacking a set of features that are in the docs. The set of ...uManage - Active Directory Self-Service Portal: uManage v1.2 (.NET 4.0 RTM): New Releasev1.2 Adds the Administrative Portal as well as the requirement of a MSSQL database (2005+). The Setup Wizard has also been updated to i...Unoficcial NeoFM.hu NowPlaying: NeoNotifier: First release. Aplha, but usable.VidCoder: 0.2.1: Changes: Added 2-pass encoding Fixed x264 options getting mangled during p-invoke Fixed intermittent crash with logging window open due to thre...WCF RIA Services Contrib: WCF RIA Services Contrib RC2 Release: This version is for the WCF RIA Services RC2 (SL4 RTM) release. The ApplyState has been modifed in this version to disable validation during proces...WiiCIS.NET: WiiCIS.NET v0.2: Changes... 
- Removal of WiimoteManager, connection must be done manually - Accelerometer orientation was originally in degrees, is now in radians -...WinToolbar: WinToolbar Source code plus sample: This zip file contains the current version source code and libraries plus a testrunner (sample app).XP-More: 0.9 (Beta): Most of the functionality is in place, final polishing will be done soon.Most Popular ProjectsFacebook Developer ToolkitWSPBuilder (SharePoint WSP tool)QuickGraph, Graph Data Structures And Algorithms for .NetPerformance Analysis of Logs (PAL) Toolpatterns & practices: Team Development with Visual Studio Team Foundation ServerTFS Integration Platformpatterns & practices: Performance Testing Guidance for Web Applicationspatterns & practices: Enterprise Library ContribJSON ViewerManaged Wifi APIMost Active ProjectsRawrpatterns & practices – Enterprise LibraryIndustrial DashboardIonics Isapi Rewrite FilterFarseer Physics EngineMVVM Light ToolkitjQuery Library for SharePoint Web ServicesN2 CMSCaliburn: An Application Framework for WPF and SilverlightBlogEngine.NET

    Read the article

  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today’s market has moved from network centric to consumer centric and focuses primarily on the customer experience. It has resulted in several product management challenges such as an increased complexity and volume of offerings, creating product variants, accelerating time-to-market, ability to provide multiple product views for varied stakeholders, leveraging OSS intelligence to the BSS layer, product co-creation and increasing audit and security concerns for service providers. The document discusses how enterprise product management enabled by PLM-based product catalogue solutions helps to launch next generation products rapidly in the context of the Telecommunication Industry.

1.0. Introduction

Figure 1: Business Scenario

Modern business demands the launch of complex products in a very short timeframe and effecting changes in the price plan faster without IT intervention. One of the key transformation initiatives companies are focusing on is in the area of product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based Product Management solutions developed on industry-wide standards.

The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems.

2.0. An Overview of Product Management in Telecom

Product data in telecom is multi-dimensional and difficult to manage. It has increased significantly due to the complexity of the product, product offerings on the converged network, increased volume of offerings, bundled offering structures and ever increasing regulatory requirements.

In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions coupled with organic growth pose major challenges in product portfolio management. It is a roadblock in the journey towards becoming an agile organization.
Figure 2: Complexity in Product Management   Network Technology’ is the new dimension in telecom product management where the same products are realized through different networks i.e., Soiled network to Converged network. Consequently, the product solution is different.     Figure 3: Current Scenario - Pain Points in Product Management   The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that affects innovation.   3.0. Transformation of Next Generation Product Management   Companies must focus on their Product Management Transformation Journey in the areas of:   ·       Management of single truth of product information across the organization/geographies which is currently managed in heterogeneous systems   ·       Management of the Intellectual Property (IP) on the product concept and partnership in the design of discrete components to integrate into the system   ·       Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation   ·       Management of effective operational separation to comply with regulatory bodies   ·       Reuse of existing designs and add relevant features such as value-added services to enable effective product bundling     Figure 4: Next generation needs   PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler towards product management transformation and rapid product launch.   4.0. PLM-based Enterprise Product Management     Figure 5: PLM-based Enterprise Product Mastering   Enterprise Product Management (EPM) enables the business to manage complex product attributes of data in complex environments. Product Mastering helps create a 'single view' of the product by creating a business-driven, IT-supported environment where a global 'single truth record' is created, managed and reused.   4.1 The Business Case for Telco PLM-based solutions for Enterprise Product Management   ·       Telco PLM-based Product Mastering solutions provide a centralized authoring environment for product definition and control of all product data and rules   ·       PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective)   ·       Maintains relationships/links between different elements of the entire product definition   ·       Telco PLM packages are specialized in next generation lifecycle management requirements of products such as revision and state management, test and release management, role management and impact analysis)   ·       Takes into consideration all aspects of OSS product requirements compared to CRM product catalogue solutions where the product data managed is mostly order oriented and transactional     ·       New breed of Telco PLM packages are designed with 'open' standards such as SID and eTOM. They are interoperable, support integration frameworks such as subscription and notification.   ·       Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain   4.2 Various Architectures/Approaches for Product Mastering using Telco PLM systems   4. 2.a Single Central Product Management (Mastering) Approach   Figure 6: Single Central Product Management (Master) Approach       This approach is implemented across verticals such as aerospace and automotive. 
It focuses on a physically centralized product master to which other sources are dependent on. The product definition data (Product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment. In addition, the product definition/authoring environment is centralized. The existing legacy product definition data available in CRM product catalogue, billing catalogue and the legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution.   Architectural changes must be made in the existing business landscape of applications to create and revise data because the applications have to refer to the central repository for approvals and validation of product configurations. It is achieved by modifying how the applications write data or how the applications can be adapted to use the rules to be managed and published.   Complete product configuration validation will be done in enterprise / central product catalogue and final configuration will be sent to the B/OSS system through the SOA compliant product distribution architecture. The approach/architecture enables greater control in terms of product data management and product data governance.   4.2.b Federated Product Management (Mastering) Architecture     Figure 7: Federated Product Management (Mastering) Architecture   In the federated product mastering approach, the basic unique product definition data (product id, description product hierarchy, basic price plans and simple product design rules) will be centrally created and will be maintained. And, the advanced product definition (Product bundling, promotions, offers & discount plans) will be created in respective down stream OSS systems. The advanced product definition (Product bundling, promotions, offers and discount plans) will be created in respective downstream OSS systems.   For example, basic product definitions such as attributes, product hierarchy and basic price plans will be created and maintained in Enterprise/Central product reference catalogue and distributed to downstream OSS systems. Respective downstream OSS systems build product bundles, promotions, advanced price plans over the basic product definition and master the advanced product definition. Central reference database accesses the respective other source product master data and assembles a point-in-time consolidated view of the product. The approach is typically adapted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal and there are no big architectural changes to the organization application landscape. However, this approach will not result in better product data management and data governance.   5.0 Customer Scenario – Before EPC deployment   A leading global telecommunications service provider wanted to launch a quad play and triple play service offering in the shortest possible lead time. The service provider was offering Broadband and VoIP services to customers. The company wanted to reuse a majority of the Broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play. 
The challenges in launching the new service offerings were:       Figure 8: Triple Play Plan   ·       Broadband product data was stored in multiple product catalogues (CRM catalogue, Billing catalogue, spread sheets)   ·       Product managers spent a lot of time performing tasks involving duplication or re-keying of data. Manual effort caused errors, cost and time over-runs.   ·       No effective product and price data governance mechanism. Price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue.   ·       Product data had re-usability issues and was not in a structured format. It resulted in uncontrolled product portfolio creation and product management issues.   ·       Lack of enterprise product model resulted into product distribution challenges and thus delays in product launch.   ·       Designers are constrained by existing legacy product management solutions to model product/service requirements and product configuration rules such as upgrading, downgrading and cross selling.    5.1 Customer Scenario - After EPC deployment     Figure 9: SOA-based end-to-end EPC Solution   The company deployed PLM-based Enterprise Product Catalogue solutions to launch quad play service after evaluating various product catalogues. The broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchy for new offerings were created using the entities defined in the Enterprise Product Model. Supplier product catalogue data such as routers and set up boxes were loaded onto the new solution through SOA-based web service. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in a SID format and were distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS system were handled using the transformation layer as part of the solution.   6.0 How PLM enabled Product Management Transformation         Figure 10: Product Management Transformation     PLM-based Product Catalogue Solution helped the customer reduce the product launch cycle time by 30% and enable transformation of Product Management for next generation services.   7.0 Conclusion   On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and increased complexity of products. On the other hand, the ARPU is decreasing year-on-year. Communications Service Providers are embarking on convergence, bundled service offerings, flexibility to cross-sell and up-sell, introduce new value-added services, leverage Web 2.0 concepts and network capabilities. Consequently, large scale IT transformation initiatives to improve their ARPU supporting network and business transformations are a business imperative. Product Management has become a focus area. Companies are investing in best-in- class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero touch automation and rapid product launch.   References:   1.     Preston G.Smith, Donald G.Reineristsem, Van Nostrand Reinhold “Developing Products in Half the time”.   2.     John G. Innes, "Achieving Successful Product Change", Pitman Publishing.   3.     


  • Setting write permissions for folders while creating a package with MSDeploy

    - by bala_88
I'm using MSDeploy to create an artefact as a build step in NAnt. This particular build step is called on successful compilation, and the artefact is then used for deployment. Here is the step specified in my build file:

<target name="BuildMsDeployPackage" depends="StageForMsDeployPackaging">
  <exec program="${msdeploy.exe}"
        workingdir="${buildDirectory}"
        verbose="true"
        commandline="-verb:sync -source:iisapp=${packagingDirectory} -dest:package=${publishDirectory}\${webapp.artifact.zip}"/>
</target>

The source here is my web project. I want to be able to specify write access to a couple of folders in the package that is created. Is this possible? I know that there is a setAcl provider for this specific purpose, but can it be used while creating a package?
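One approach that may work is to package from a source manifest that declares setAcl entries alongside the application, so that the ACL settings travel inside the package and are applied when the package is later synced to a server (a sketch only; the manifest file name, folder paths and package name below are hypothetical and not taken from the question):

<!-- pkg.sourceManifest.xml (hypothetical file) -->
<sitemanifest>
  <!-- the staged web application to package -->
  <iisApp path="C:\build\staging\MyWebApp" />
  <!-- folders that should be writable after deployment -->
  <setAcl path="C:\build\staging\MyWebApp\App_Data" setAclAccess="Write" />
  <setAcl path="C:\build\staging\MyWebApp\Logs" setAclAccess="Write" />
</sitemanifest>

rem create the package from the manifest instead of -source:iisapp
msdeploy.exe -verb:sync -source:manifest=pkg.sourceManifest.xml -dest:package=C:\build\artifacts\MyWebApp.zip

rem the setAcl entries are applied when the package is deployed, for example:
msdeploy.exe -verb:sync -source:package=C:\build\artifacts\MyWebApp.zip -dest:auto,computerName=TARGETSERVER

In the NAnt build above this would amount to pointing the exec task's commandline at the manifest rather than at the iisapp provider.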


  • Announcing ASP.NET MVC 3 (Release Candidate 2)

    - by ScottGu
Earlier today the ASP.NET team shipped the final release candidate (RC2) for ASP.NET MVC 3.  You can download and install it here.

Almost there…

Today's RC2 release is the near-final release of ASP.NET MVC 3, and is a true "release candidate" in that we are hoping to not make any more code changes with it.  We are publishing it today so that people can do final testing with it, let us know if they find any last minute "showstoppers", and start updating their apps to use it.  We will officially ship the final ASP.NET MVC 3 "RTM" build in January.

Works with both VS 2010 and VS 2010 SP1 Beta

Today's ASP.NET MVC 3 RC2 release works with both the shipping version of Visual Studio 2010 / Visual Web Developer 2010 Express and the newly released VS 2010 SP1 Beta.  This means that you do not need to install VS 2010 SP1 (or the SP1 beta) in order to use ASP.NET MVC 3.  It works just fine with the shipping Visual Studio 2010.  I'll do a blog post next week, though, about some of the nice additional feature goodies that come with VS 2010 SP1 (including IIS Express and SQL CE support within VS) which make the dev experience for both ASP.NET Web Forms and ASP.NET MVC even better.

Bugs and Perf Fixes

Today's ASP.NET MVC 3 RC2 build contains many bug fixes and performance optimizations.  Our latest performance tests indicate that ASP.NET MVC 3 is now faster than ASP.NET MVC 2, and that existing ASP.NET MVC applications will experience a slight performance increase when updated to run using ASP.NET MVC 3.

Final Tweaks and Fit-N-Finish

In addition to bug fixes and performance optimizations, today's RC2 build contains a number of last-minute feature tweaks and "fit-n-finish" changes for the new ASP.NET MVC 3 features.  The feedback and suggestions we've received during the public previews have been invaluable in guiding these final tweaks, and we really appreciate people's support in sending this feedback our way.  Below is a short list of some of the feature changes/tweaks made between last month's ASP.NET MVC 3 RC release and today's ASP.NET MVC 3 RC2 release:

jQuery updates and addition of jQuery UI

The default ASP.NET MVC 3 project templates have been updated to include jQuery 1.4.4 and jQuery Validation 1.7.  We are also excited to announce today that we are including jQuery UI within our default ASP.NET project templates going forward.  jQuery UI provides a powerful set of additional UI widgets and capabilities.  It will be added by default to your project's \scripts folder when you create new ASP.NET MVC 3 projects.

Improved View Scaffolding

The T4 templates used for scaffolding views with the Add View dialog now generate views that use Html.EditorFor instead of helpers such as Html.TextBoxFor. This change enables you to optionally annotate models with metadata (using data annotation attributes) to better customize the output of your UI at runtime.

The Add View scaffolding also supports improved detection and usage of primary key information on models (including support for naming conventions like ID, ProductID, etc).  For example, the Add View dialog box uses this information to ensure that the primary key value is not scaffolded as an editable form field, and that links between views are auto-generated correctly with primary key information.

The default Edit and Create templates also now include references to the jQuery scripts needed for client validation.  Scaffolded form views now support client-side validation by default (no extra steps required).
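As a rough illustration of what this scaffolding produces (a hedged sketch; the Product model, its annotations and the generated markup below are hypothetical, not taken from the post), a model annotated with data annotation attributes now drives both the editor markup and the validation wired up by Html.EditorFor:

using System.ComponentModel.DataAnnotations;

// Hypothetical model: the annotations customize the scaffolded UI and validation
public class Product
{
    public int ProductID { get; set; }        // recognized as the primary key by naming convention

    [Required]
    [StringLength(100)]
    public string Name { get; set; }

    [DataType(DataType.Currency)]
    public decimal UnitPrice { get; set; }
}

And a fragment of the kind of Edit view the Add View dialog generates over it (Razor, sketch):

@model Product
@* ProductID is not scaffolded as an editable field; EditorFor honors the metadata above *@
<div class="editor-label">@Html.LabelFor(model => model.Name)</div>
<div class="editor-field">
    @Html.EditorFor(model => model.Name)
    @Html.ValidationMessageFor(model => model.Name)
</div>
<div class="editor-label">@Html.LabelFor(model => model.UnitPrice)</div>
<div class="editor-field">
    @Html.EditorFor(model => model.UnitPrice)
    @Html.ValidationMessageFor(model => model.UnitPrice)
</div>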
Client-side validation with ASP.NET MVC 3 is also done using an unobtrusive JavaScript approach – making pages fast and clean.

[ControllerSessionState] –> [SessionState]

ASP.NET MVC 3 adds support for session-less controllers.  With the initial RC you used a [ControllerSessionState] attribute to specify this.  We shortened this in RC2 to just be [SessionState].  Note that in addition to turning off session state, you can also set it to be read-only (which is useful for webfarm scenarios where you are reading but not updating session state on a particular request).

[SkipRequestValidation] –> [AllowHtml]

ASP.NET MVC includes built-in support to protect against HTML and Cross-Site Script Injection Attacks, and will throw an error by default if someone tries to post HTML content as input.  Developers need to explicitly indicate that this is allowed (and that they've hopefully built their app to securely support it) in order to enable it.

With ASP.NET MVC 3, we are also now supporting a new attribute that you can apply to properties of models/viewmodels to indicate that HTML input is enabled, which enables much more granular protection in a DRY way.  In last month's RC release this attribute was named [SkipRequestValidation].  With RC2 we renamed it to [AllowHtml] to make it more intuitive.  Setting the [AllowHtml] attribute on a model/viewmodel property will cause ASP.NET MVC 3 to turn off HTML injection protection when model binding just that property.

Html.Raw() helper method

The new Razor view engine introduced with ASP.NET MVC 3 automatically HTML encodes output by default.  This helps provide an additional level of protection against HTML and Script injection attacks.  With RC2 we are adding an Html.Raw() helper method that you can use to explicitly indicate that you do not want to HTML encode your output, and instead want to render the content "as-is".

ViewModel/View –> ViewBag

ASP.NET MVC has (since V1) supported a ViewData[] dictionary within Controllers and Views that enables developers to pass information from a Controller to a View in a late-bound way.  This approach can be used instead of, or in combination with, a strongly-typed model class.  A common use case is a strongly typed Product model passed to the view in addition to two late-bound variables via the ViewData[] dictionary.

With ASP.NET MVC 3 we are introducing a new API that takes advantage of the dynamic type support within .NET 4 to set/retrieve these values.  It allows you to use standard "dot" notation to specify any number of additional variables to be passed, and does not require that you create a strongly-typed class to do so.  With earlier previews of ASP.NET MVC 3 we exposed this API using a dynamic property called "ViewModel" on the Controller base class, and with a dynamic property called "View" within view templates.  A lot of people found the fact that there were two different names confusing, and several also said that using the name ViewModel was confusing in this context – since often you create strongly-typed ViewModel classes in ASP.NET MVC, and they do not use this API.

With RC2 we are exposing a dynamic property that has the same name – ViewBag – within both Controllers and Views.  It is a dynamic collection that allows you to pass additional bits of data from your controller to your view template to help generate a response.
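A minimal sketch of the ViewBag pattern follows (the Product and Category types, the sample values and the view markup are hypothetical stand-ins, not taken from the post):

using System;
using System.Collections.Generic;
using System.Web.Mvc;

// Hypothetical types used only for this sketch
public class Category { public int CategoryID { get; set; } public string Name { get; set; } }
public class Product  { public int ProductID { get; set; } public string Name { get; set; } public int CategoryID { get; set; } }

public class ProductsController : Controller
{
    public ActionResult Edit(int id)
    {
        // stand-in for a repository/database call
        var product = new Product { ProductID = id, Name = "Chai", CategoryID = 1 };

        // late-bound extras passed alongside the strongly-typed model
        ViewBag.Message = "Product list refreshed at " + DateTime.Now.ToShortTimeString();
        ViewBag.Categories = new List<Category>
        {
            new Category { CategoryID = 1, Name = "Beverages" },
            new Category { CategoryID = 2, Name = "Condiments" }
        };

        return View(product);
    }
}

And the corresponding Razor view (Edit.cshtml), strongly typed to Product but also reading the ViewBag entries:

@model Product
<h2>@ViewBag.Message</h2>
@* the ViewBag entry is dynamic, so cast it to a known type before building the SelectList *@
@Html.DropDownListFor(model => model.CategoryID,
    new SelectList((IEnumerable<Category>)ViewBag.Categories, "CategoryID", "Name", Model.CategoryID))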
In the sketch above, the controller passes a time-stamp message as well as a list of all categories to the view template, and the view template (which is strongly-typed to expect a Product class as its model) uses those two extra bits of information to generate its response.  In particular, notice how the list of categories passed in the dynamic ViewBag collection is used to generate a drop-down list of friendly category names to help set the CategoryID property of the Product object.  The Controller/View combination then generates an HTML response containing both the message and the category drop-down list.

Output Caching Improvements

ASP.NET MVC 3's output caching system no longer requires you to specify a VaryByParam property when declaring an [OutputCache] attribute on a Controller action method.  MVC 3 now automatically varies the output cached entries when you have explicit parameters on your action method – allowing you to cleanly enable output caching on actions simply by applying the [OutputCache] attribute.

In addition to supporting full page output caching, ASP.NET MVC 3 also supports partial-page caching – which allows you to cache a region of output and re-use it across multiple requests or controllers.  The [OutputCache] behavior for partial-page caching was updated with RC2 so that sub-content cached entries are varied based on input parameters as opposed to the URL structure of the top-level request – which makes caching scenarios both easier and more powerful than the behavior in the previous RC.

@model declaration does not add whitespace

In earlier previews, the strongly-typed @model declaration at the top of a Razor view added a blank line to the rendered HTML output.  This has been fixed so that the declaration does not introduce whitespace.

Changed "Html.ValidationMessage" Method to Display the First Useful Error Message

The behavior of the Html.ValidationMessage() helper was updated to show the first useful error message instead of simply displaying the first error.  During model binding, the ModelState dictionary can be populated from multiple sources with error messages about a property, including from the model itself (if it implements IValidatableObject), from validation attributes applied to the property, and from exceptions thrown while the property is being accessed.

When the Html.ValidationMessage() method displays a validation message, it now skips model-state entries that include an exception, because these are generally not intended for the end user.  Instead, the method looks for the first validation message that is not associated with an exception and displays that message.  If no such message is found, it defaults to a generic error message that is associated with the first exception.

RemoteAttribute "Fields" -> "AdditionalFields"

ASP.NET MVC 3 includes built-in remote validation support with its validation infrastructure.  This means that the client-side validation script library used by ASP.NET MVC 3 can automatically call back to controllers you expose on the server to determine whether an input element is indeed valid as the user is editing the form (allowing you to provide real-time validation updates).  You can accomplish this by decorating a model/viewmodel property with a [Remote] attribute that specifies the controller/action that should be invoked to remotely validate it.
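For illustration (a hedged sketch; the RegisterModel type, the Account controller and the IsUserNameAvailable action are hypothetical, not taken from the post), a remotely validated property and the action it calls back to might look roughly like this:

using System;
using System.Web.Mvc;

public class RegisterModel
{
    // the client-side validation script calls /Account/IsUserNameAvailable as the user edits the field
    [Remote("IsUserNameAvailable", "Account", ErrorMessage = "That user name is already taken.")]
    public string UserName { get; set; }
}

public class AccountController : Controller
{
    // returns true (valid) or false (invalid) as JSON to the validation script
    public JsonResult IsUserNameAvailable(string userName)
    {
        bool available = !string.Equals(userName, "admin", StringComparison.OrdinalIgnoreCase); // stand-in for a real lookup
        return Json(available, JsonRequestBehavior.AllowGet);
    }
}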
With the RC this attribute had a "Fields" property that could be used to specify additional input elements that should be sent from the client to the server to help with the validation logic.  To improve the clarity of what this property does we have renamed it to "AdditionalFields" with today's RC2 release.

ViewResult.Model and ViewResult.ViewBag Properties

The ViewResult class now exposes both a "Model" and "ViewBag" property off of it.  This makes it easier to unit test Controllers that return views, and avoids you having to access the Model via the ViewResult.ViewData.Model property.

Installation Notes

You can download and install the ASP.NET MVC 3 RC2 build here.  It can be installed on top of the previous ASP.NET MVC 3 RC release (it should just replace the bits as part of its setup).

The one component that will not be updated by the above setup (if you already have it installed) is the NuGet Package Manager.  If you already have NuGet installed, please go to the Visual Studio Extensions Manager (via the Tools –> Extensions menu option) and click on the "Updates" tab.  You should see NuGet listed there – please click the "Update" button next to it to have VS update the extension to today's release.

If you do not have NuGet installed (and did not install the ASP.NET MVC RC build), then NuGet will be installed as part of your ASP.NET MVC 3 setup, and you do not need to take any additional steps to make it work.

Summary

We are really close to the final ASP.NET MVC 3 release, and will deliver the final "RTM" build of it next month.  It has been only a little over 7 months since ASP.NET MVC 2 shipped, and I'm pretty amazed by the huge number of new features, improvements, and refinements that the team has been able to add with this release (Razor, Unobtrusive JavaScript, NuGet, Dependency Injection, Output Caching, and a lot, lot more).  I'll be doing a number of blog posts over the next few weeks talking about many of them in more depth.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

