Search Results

Search found 12898 results on 516 pages for 'expression engine'.


  • Running a piece of code for a certain amount of time in C#?

    - by Hazem
    Hi, I am working on a robot that is capable of motion detection using a webcam. I am doing this in C#. The robot moves too fast, so I want to turn it on and off at short intervals in order to reduce its speed. For example, it will start the engine, wait 0.5 seconds, and then turn it off, and this cycle repeats every 2 seconds. This way, its speed won't be too high. I would like to include this in a single function called Move(). I just don't know how to do this, especially because my motion detection code runs about 20 times a second. Depending on the position of the obstacle, I may need to disable the Move() function and activate other functions that let the robot move in other directions. Any ideas/suggestions on where I am supposed to start? Thanks a lot!
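    One way to get that on/off duty cycle is a pair of timers: a 2-second cycle timer that switches the motor on and a 0.5-second one-shot timer that switches it off again. Below is a minimal C# sketch; StartEngine(), StopEngine() and the Enabled flag are hypothetical placeholders for whatever the robot's API actually exposes, not names from the question.

        // Minimal sketch: motor on for 0.5 s at the start of every 2 s cycle.
        using System;
        using System.Timers;

        class PulsedMover
        {
            private readonly Timer cycleTimer = new Timer(2000); // full cycle: 2 s
            private readonly Timer offTimer = new Timer(500);    // on-time: 0.5 s
            public volatile bool Enabled = true;                 // detection code can clear this to pause movement

            public PulsedMover()
            {
                offTimer.AutoReset = false;                      // fire once per cycle
                cycleTimer.Elapsed += (s, e) => { if (Enabled) { StartEngine(); offTimer.Start(); } };
                offTimer.Elapsed += (s, e) => StopEngine();
            }

            public void Move() { cycleTimer.Start(); }           // begin pulsed forward motion
            public void Halt() { cycleTimer.Stop(); StopEngine(); }

            // Placeholders (assumptions) for the actual robot I/O calls.
            private void StartEngine() { /* send "motor on" command */ }
            private void StopEngine()  { /* send "motor off" command */ }
        }

    Since the motion detection loop runs about 20 times a second, it only needs to flip Enabled or call Halt(); the timers handle the pulsing on their own.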


  • SEO on a login based website

    - by Paul
    OK, so say I have a website where the majority of the content is behind a closed door: a login screen for paying customers only. How can I let Google know what is there so people can find the site through a search engine? What programming techniques could I use to expose more content to the Google bot? I will have some overview and demonstration screens to tempt users, but will these five to ten pages of content be enough for the search engines? The keyword is a highly sought-after one and there is quite a lot of competition. And before anyone asks, the site is not a subscription porn site (sadly).


  • MySql query and table optimisation

    - by Cheeky
    Hi everyone. I am trying to run the following simple query on a table with 500K records. SELECT COUNT(*) AS impressionCount FROM impression WHERE 0 = 0 AND impressionObjectId1 = 'C69A54B8-B828-E2E4-2319A93011DF4120' AND impressionObjectId2 = '1'; This query is taking 10 seconds to run. I have tried creating individual indexes for the impressionObjectId1 and impressionObjectId2 columns, as well as a composite index using both. The composite worked well for a while, but now it is also slow. Here is my table structure: DROP TABLE IF EXISTS `impression`; CREATE TABLE `impression` ( `impressionId` varchar(50) NOT NULL, `impressionObjectId1` varchar(50) NOT NULL, `impressionObjectId2` varchar(50) default NULL, `impressionStampDate` datetime NOT NULL, PRIMARY KEY (`impressionId`), KEY `IX_object` (`impressionObjectId1`,`impressionObjectId2`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 ROW_FORMAT=DYNAMIC COMMENT='InnoDB free: 191488 kB'; Any advice would be greatly appreciated. Thanks EDIT: When adding an EXPLAIN, this is the output: 1, 'SIMPLE', 'impression', 'ref', 'IX_object', 'IX_object', '105', 'const,const', 304499, 'Using where; Using index'
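    The EXPLAIN output shows the composite index is already being used, but roughly 300K matching index entries still have to be counted on every call, which is where the 10 seconds go. One common workaround is a summary table maintained by a trigger; the sketch below is only illustrative (the table, trigger and column names are made up, and it assumes impressionObjectId2 is not NULL for the rows being counted).

        -- Hypothetical counter table keyed by the two object ids.
        CREATE TABLE impression_count (
          impressionObjectId1 varchar(50) NOT NULL,
          impressionObjectId2 varchar(50) NOT NULL,
          impressions int NOT NULL DEFAULT 0,
          PRIMARY KEY (impressionObjectId1, impressionObjectId2)
        ) ENGINE=InnoDB;

        -- Keep the counter in step with new impressions.
        CREATE TRIGGER impression_after_insert AFTER INSERT ON impression
        FOR EACH ROW
          INSERT INTO impression_count (impressionObjectId1, impressionObjectId2, impressions)
          VALUES (NEW.impressionObjectId1, NEW.impressionObjectId2, 1)
          ON DUPLICATE KEY UPDATE impressions = impressions + 1;

        -- The per-object count then becomes a single-row primary-key lookup.
        SELECT impressions FROM impression_count
        WHERE impressionObjectId1 = 'C69A54B8-B828-E2E4-2319A93011DF4120'
          AND impressionObjectId2 = '1';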


  • Selecting distinct values from mysql with largest timestamp

    - by user987048
    I am building a mail system. The inbox is only supposed to grab the last message (one with the highest time value) of a concatenation of user and sender, where the user or sender is the user ID. Here is the table structure: CREATE TABLE IF NOT EXISTS `mail` ( `user` int(11) NOT NULL, `sender` int(11) NOT NULL, `body` text NOT NULL, `new` enum('0','1') NOT NULL default '1', `time` int(11) NOT NULL, KEY `user` (`user`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; So, with a table with the following data: user sender new time ***************************************** 1 0 0 5 1 0 0 6 2 1 0 7 1 0 1 8 1 2 0 9 1 0 1 11 1 2 1 12 I want to select the following: WHERE USER OR SENDER = X (in this case, 1) user sender new time ***************************************** 2 1 0 7 1 2 0 9 1 0 1 11 How would I go about doing something like this?
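    A common way to express "latest row per group" in MySQL versions without window functions is to join the table against a grouped subquery. The sketch below groups by the (user, sender) pair, which matches the sample output; if both directions of a conversation should collapse into one row, group by LEAST(user, sender), GREATEST(user, sender) instead.

        SELECT m.*
        FROM mail AS m
        JOIN (
            SELECT user, sender, MAX(time) AS max_time
            FROM mail
            WHERE user = 1 OR sender = 1
            GROUP BY user, sender
        ) AS latest
          ON  m.user   = latest.user
          AND m.sender = latest.sender
          AND m.time   = latest.max_time;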


  • Mysql with innodb and serializable transaction does not (always) lock rows

    - by Tobias G.
    Hello, I have a transaction with a SELECT and possible INSERT. For concurrency reasons, I added FOR UPDATE to the SELECT. To prevent phantom rows, I'm using the SERIALIZABLE transaction isolation level. This all works fine when there are any rows in the table, but not if the table is empty. When the table is empty, the SELECT FOR UPDATE does not do any (exclusive) locking and a concurrent thread/process can issue the same SELECT FOR UPDATE without being locked. CREATE TABLE t ( id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, display_order INT ) ENGINE = InnoDB; SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; START TRANSACTION; SELECT COALESCE(MAX(display_order), 0) + 1 from t FOR UPDATE; .. This concept works as expected with SQL Server, but not with MySQL. Any ideas on what I'm doing wrong? EDIT Adding an index on display_order does not change the behavior.
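    If the gap/next-key locking on the empty index range turns out not to block concurrent transactions, one common workaround is to serialize on a dedicated anchor row instead of on the empty range. This is only a sketch; the t_lock table and its row are hypothetical additions, not part of the original schema.

        CREATE TABLE t_lock (name VARCHAR(30) PRIMARY KEY) ENGINE = InnoDB;
        INSERT INTO t_lock (name) VALUES ('t_display_order');

        START TRANSACTION;
        -- Concurrent transactions block here until the first one commits.
        SELECT name FROM t_lock WHERE name = 't_display_order' FOR UPDATE;
        SELECT COALESCE(MAX(display_order), 0) + 1 FROM t;
        -- ... INSERT INTO t ..., then COMMIT;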


  • Good JDBC pattern

    - by Java Developer
    What is good practice for database operations in a Java application? Do you construct the DML statements in the Java code and send them to the DB engine for execution, or do you just collect the parameters and then call a stored procedure with those parameters from Java code? Or neither, because that's just not how to do it? Can anyone give an example of a full set of database utility classes for doing database operations in a Java app? Also, what about the transaction manager? My assignment is to make database operations modular in Java. Thanks
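    For plain JDBC (no ORM), the usual pattern is a small DAO class that takes its connections from a DataSource, uses PreparedStatement for parameters, and relies on try-with-resources for cleanup. The sketch below is illustrative only: the customer table, the column names and the DataSource wiring are assumptions, not part of the assignment.

        import javax.sql.DataSource;
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class CustomerDao {
            private final DataSource dataSource;

            public CustomerDao(DataSource dataSource) {
                this.dataSource = dataSource;
            }

            // Query with a bound parameter; resources are closed even on error.
            public String findNameById(long id) throws SQLException {
                String sql = "SELECT name FROM customer WHERE id = ?";
                try (Connection con = dataSource.getConnection();
                     PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setLong(1, id);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next() ? rs.getString("name") : null;
                    }
                }
            }

            // Simple local transaction; a container or framework transaction manager
            // would normally take over this commit/rollback handling.
            public void rename(long id, String newName) throws SQLException {
                try (Connection con = dataSource.getConnection()) {
                    con.setAutoCommit(false);
                    try (PreparedStatement ps =
                             con.prepareStatement("UPDATE customer SET name = ? WHERE id = ?")) {
                        ps.setString(1, newName);
                        ps.setLong(2, id);
                        ps.executeUpdate();
                        con.commit();
                    } catch (SQLException e) {
                        con.rollback();
                        throw e;
                    }
                }
            }
        }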


  • Accessing local variable doesn't improve performance

    - by NicMagnier
    The short version Why is this code: var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4; More performant than this one? var index = Math.floor(ref_index) * 4; The long version This week, the author of Impact js published an article about some rendering issue: http://www.phoboslab.org/log/2012/09/drawing-pixels-is-hard In the article there was the source of a function to scale an image by accessing pixels in the canvas. I wanted to suggest some traditional ways to optimize this kind of code so that the scaling would be shorter at loading time. But after testing it my result was most of the time worst that the original function. Guessing this was the JavaScript engine that was doing some smart optimization I tried to understand a bit more what was going on so I did a bunch of test. But my results are quite confusing and I would need some help to understand what's going on. I have a test page here: http://www.mx981.com/stuff/resize_bench/test.html jsPerf: http://jsperf.com/local-variable-due-to-the-scope-lookup To start the test, click the picture and the results will appear in the console. There are three different versions: The original code: for( var y = 0; y < heightScaled; y++ ) { for( var x = 0; x < widthScaled; x++ ) { var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4; var indexScaled = (y * widthScaled + x) * 4; scaledPixels.data[ indexScaled ] = origPixels.data[ index ]; scaledPixels.data[ indexScaled+1 ] = origPixels.data[ index+1 ]; scaledPixels.data[ indexScaled+2 ] = origPixels.data[ index+2 ]; scaledPixels.data[ indexScaled+3 ] = origPixels.data[ index+3 ]; } } jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance One of my attempt to optimize it: var ref_index = 0; var ref_indexScaled = 0 var ref_step = 1 / scale; for( var y = 0; y < heightScaled; y++ ) { for( var x = 0; x < widthScaled; x++ ) { var index = Math.floor(ref_index) * 4; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+1 ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+2 ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+3 ]; ref_index+= ref_step; } } jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance The same optimized code but with recalculating the index variable each time (Hybrid) var ref_index = 0; var ref_indexScaled = 0 var ref_step = 1 / scale; for( var y = 0; y < heightScaled; y++ ) { for( var x = 0; x < widthScaled; x++ ) { var index = (Math.floor(y / scale) * img.width + Math.floor(x / scale)) * 4; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+1 ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+2 ]; scaledPixels.data[ ref_indexScaled++ ] = origPixels.data[ index+3 ]; ref_index+= ref_step; } } jsPerf: http://jsperf.com/so-accessing-local-variable-doesn-t-improve-performance The only difference in the two last one is the calculation of the 'index' variable. And to my surprise the optimized version is slower in most browsers (except opera). 
    Results of personal testing (not the jsPerf tests): Opera Original: 8668ms Optimized: 932ms Hybrid: 8696ms Chrome Original: 139ms Optimized: 145ms Hybrid: 136ms Safari Original: 433ms Optimized: 853ms Hybrid: 451ms Firefox Original: 343ms Optimized: 422ms Hybrid: 350ms After digging around, it seems the usual good practice is to mainly access local variables because of scope lookup. Because the optimized version only reads one local variable, it should be faster than the hybrid code, which reads multiple variables and objects in addition to the various operations involved. So why is the "optimized" version slower? I thought it might be because some JavaScript engines don't optimize the optimized version because it is not hot enough, but after using --trace-opt in Chrome, it seems all versions are properly compiled by V8. At this point I am a bit clueless and wonder if somebody knows what is going on. I also did some more test cases on this page: http://www.mx981.com/stuff/resize_bench/index.html


  • How to speed up a query?

    - by Soroush Khosravi
    I have a table in which every request to the server is stored. For each request I check whether it is banned or not. For example, this is the query: select * from requests where request_sessID = '4bc0331d983000902b4718c80f12e9b3' AND request_time > (UNIX_TIMESTAMP() - 3600) AND request_isEnable = 1 I also changed the engine from InnoDB to MyISAM and row_format to Dynamic, but nothing changed. My hardware is very strong, but the query takes about a minute to execute! I am a programmer and a newbie to MySQL. How can I speed up this query? Thanks in advance
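    Assuming no suitable index exists yet, a composite index with the equality columns first and the range column last usually lets MySQL resolve this WHERE clause from the index instead of scanning the table. A sketch (the index name is illustrative):

        ALTER TABLE requests
          ADD INDEX ix_requests_sess_enable_time
              (request_sessID, request_isEnable, request_time);

        -- Verify with EXPLAIN that the new index is actually chosen:
        EXPLAIN SELECT * FROM requests
        WHERE request_sessID = '4bc0331d983000902b4718c80f12e9b3'
          AND request_time > (UNIX_TIMESTAMP() - 3600)
          AND request_isEnable = 1;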


  • What language should I write my 2D game in? [closed]

    - by jmgrosen
    I'm thinking of writing a game. It's inspired by Minecraft/Terraria (but don't worry it will be different). My main question is what language I should write it in -- it'll be relatively simple graphics, more like Terraria than Minecraft. I know Java relatively well and Minecraft is written in it, but C++ seems like the industry standard for game development. However, I know next to no C++. I'm willing to learn but am worried how it will turn out for my first real project in the language. In addition to that, I'd also like suggestions on a good game engine for the language that you suggest. I'd like it to run on: Windows for sure Linux for sure Mac for sure Android would be really nice iOS is optional Thank you in advance!


  • Finding a Relative Path in .NET

    - by Rick Strahl
    Here’s a nice and simple path utility that I’ve needed in a number of applications: I need to find a relative path based on a base path. So if I’m working in a folder called c:\temp\templates\ and I want to find a relative path for c:\temp\templates\subdir\test.txt I want to receive back subdir\test.txt. Or if I pass c:\ I want to get back ..\..\ – in other words always return a non-hardcoded path based on some other known directory. I’ve had a routine in my library that does this via some lengthy string parsing routines, but ran into some Uri processing today that made me realize that this code could be greatly simplified by using the System.Uri class instead. Here’s the simple static method: /// <summary> /// Returns a relative path string from a full path based on a base path /// provided. /// </summary> /// <param name="fullPath">The path to convert. Can be either a file or a directory</param> /// <param name="basePath">The base path on which relative processing is based. Should be a directory.</param> /// <returns> /// String of the relative path. /// /// Examples of returned values: /// test.txt, ..\test.txt, ..\..\..\test.txt, ., .., subdir\test.txt /// </returns> public static string GetRelativePath(string fullPath, string basePath ) { // ForceBasePath to a path if (!basePath.EndsWith("\\")) basePath += "\\"; Uri baseUri = new Uri(basePath); Uri fullUri = new Uri(fullPath); Uri relativeUri = baseUri.MakeRelativeUri(fullUri); // Uri's use forward slashes so convert back to backward slashes return relativeUri.ToString().Replace("/", "\\"); } You can then call it like this: string relPath = FileUtils.GetRelativePath("c:\temp\templates","c:\temp\templates\subdir\test.txt") It’s not exactly rocket science but it’s useful in many scenarios where you’re working with files based on an application base directory. Right now I’m working on a templating solution (using the Razor Engine) where templates live in a base directory and are supplied as relative paths to that base directory. Resolving these relative paths both ways is important in order to properly check for existance of files and their change status in this case. Not the kind of thing you use every day, but useful to remember.© Rick Strahl, West Wind Technologies, 2005-2010Posted in .NET  CSharp  
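    One small usage note: in actual C# source the backslashes in those path literals have to be escaped or written as verbatim strings, for example:

        // Verbatim strings (@"...") keep the backslashes intact.
        string relPath = FileUtils.GetRelativePath(
            @"c:\temp\templates",
            @"c:\temp\templates\subdir\test.txt");   // -> "subdir\test.txt"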


  • The Oracle Enterprise Linux Software and Hardware Ecosystem

    - by sergio.leunissen
    It's been nearly four years since we launched the Unbreakable Linux support program and with it the free Oracle Enterprise Linux software. Since then, we've built up an extensive ecosystem of hardware and software partners. Oracle works directly with these vendors to ensure joint customers can run Oracle Enterprise Linux. As Oracle Enterprise Linux is fully--both source and binary--compatible with Red Hat Enterprise Linux (RHEL), there is minimal work involved for software and hardware vendors to test their products with it. We develop our software on Oracle Enterprise Linux and perform full certification testing on Oracle Enterprise Linux as well. Due to the compatibility between Oracle Enterprise Linux and RHEL, Oracle also certifies its software for use on RHEL, without any additional testing. Oracle Enterprise Linux tracks RHEL by publishing freely downloadable installation media on edelivery.oracle.com/linux and updates, bug fixes and security errata on Unbreakable Linux Network (ULN). At the same time, Oracle's Linux kernel team is shaping the future of enterprise Linux distributions by developing technologies and features that matter to customers who deploy Linux in the data center, including file systems, memory management, high performance computing, data integrity and virtualization. All this work is contributed to the Linux and Xen communities. The list below is a sample of the partners who have certified their products with Oracle Enterprise Linux. If you're interested in certifying your software or hardware with Oracle Enterprise Linux, please contact us via [email protected] Chip Manufacturers Intel, Intel Enabled Server Acceleration Alliance AMD Server vendors Cisco Unified Computing System Dawning Dell Egenera Fujitsu HP Huawei IBM NEC Sun/Oracle Storage Systems, Volume Management and File Systems 3Par Compellent EMC VPLEX FalconStor Fusion-io Hitachi Data Systems HP Storage Array Systems Lustre Network Appliance OCFS2 PillarData Symantec Veritas Storage Foundation Networking: Switches, Host Bus Adapters (HBAs), Converged Network Adapters (CNAs), InfiniBand Brocade Emulex Mellanox QLogic Voltaire SOA and Middleware ActiveState ActivePerl, ActivePython Tibco Zend Backup, Recovery & Replication Arkeia Network Backup Suite BakBone NetVault CommVault Simpana 8 EMC Networker, Replication Manager FalconStor Continuous Data Protector HP Data Protector NetApp Snapmanager Quest LiteSpeed Engine Steeleye Data Replication, Disaster Recovery Symantec NetBackup, Veritas Volume Replicator, Symantec Backup Exec Zmanda Amanda Enterprise Data Center Automation BMC CA Unicenter HP Server Automation (formerly Opsware), System Management Homepage Oracle Enterprise Manager Ops Center Quest Vizioncore vFoglight Pro TeamQuest Manager Clustering & High Availability FUJITSU x10sure NEC Express Cluster X Steeleye Lifekeeper Symantec Cluster Server Univa UniCluster Virtualization Platforms and Cloud Providers Amazon EC2 Citrix XenServer Rackspace Cloud VirtualBox VMWare ESX Security Management ArcSight: Enterprise Security Manager, Logger CA Access Control Centrify Suite Ecora Auditor FoxT Manager Likewise: Unix Account Management Lumension Endpoint Management and Security Suite QualysGuard Suite Quest Privilege Manager McAfee Application Control, Change ControlIntegrity Monitor, Integrity Control, PCI Pro Solidcore S3 Symantec Enterprise Security Manager (ESM) Tripwire Trusted Computer Solutions


  • How to implement multi-source XSLT mapping in 11g BPEL

    - by [email protected]
    In SOA 11g, you can create a XSLT mapper that uses multiple sources as the input. To implement a multi-source mapper, just follow the instructions below, Drag and drop a Transform Activity to a BPEL process Double-click on the Transform Activity, the Transform dialog window appears. Add source variables by clicking the Add icon and selecting the variable and part of the variable as needed. You can select multiple input variables. The first variable represents the main XML input to the XSL mapping, while additional variables that are added here are defined in the XSL mapping as input parameters. Select the target variable and its part if available. Specify the mapper file name, the default file name is xsl/Transformation_%SEQ%.xsl, where %SEQ% represents the sequence number of the mapper. Click OK, the xls file will be opened in the graphical mode. You can map the sources to the target as usual. Open the mapper source code, you will notice the variable representing the additional source payload, is defined as the input parameter in the map source spec and body<mapSources>    <source type="XSD">      <schema location="../xsd/po.xsd"/>      <rootElement name="PurchaseOrder" namespace="http://www.oracle.com/pcbpel/po"/>    </source>    <source type="XSD">      <schema location="../xsd/customer.xsd"/>      <rootElement name="Customer" namespace="http://www.oracle.com/pcbpel/Customer"/>      <param name="v_customer" />    </source>  </mapSources>...<xsl:param name="v_customer"/> Let's take a look at the BPEL source code used to execute xslt mapper. <assign name="Transform_1">            <bpelx:annotation>                <bpelx:pattern>transformation</bpelx:pattern>            </bpelx:annotation>            <copy>                <from expression="ora:doXSLTransformForDoc('xsl/Transformation_1.xsl',bpws:getVariableData('v_po'),'v_customer',bpws:getVariableData('v_customer'))"/>                <to variable="v_invoice"/>            </copy>        </assign> You will see BPEL uses ora:doXSLTransformForDoc XPath function to execute the XSLT mapper.This function returns the result of  XSLT transformation when the xslt template matching the document. The signature of this function is  ora:doXSLTransformForDoc(template,input, [paramQName, paramValue]*).Wheretemplate is the XSLT mapper nameinput is the string representation of xml input, paramQName is the parameter defined in the xslt mapper as the additional sourceparameterValue is the additional source payload. You can add more sources to the mapper at the later stage, but you have to modify the ora:doXSLTransformForDoc in the BPEL source code and make sure it passes correct parameter and its value pair that reflects the changes in the XSLT mapper.So the best practices are : create the variables before creating the mapping file, therefore you can add multiple sources when you define the transformation in the first place, which is more straightforward than adding them later on. Review ora:doXSLTransformForDoc code in the BPEL source and make sure it passes the correct parameters to the mapper.


  • SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison

    - by pinaldave
    Earlier, I have written about SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative. I got many emails asking for performance analysis of paging. Here is the quick analysis of it. The real challenge of paging is all the unnecessary IO reads from the database. Network traffic was one of the reasons why paging has become a very expensive operation. I have seen many legacy applications where a complete resultset is brought back to the application and paging has been done. As what you have read earlier, SQL Server 2011 offers a better alternative to an age-old solution. This article has been divided into two parts: Test 1: Performance Comparison of the Two Different Pages on SQL Server 2011 Method In this test, we will analyze the performance of the two different pages where one is at the beginning of the table and the other one is at its end. Test 2: Performance Comparison of the Two Different Pages Using CTE (Earlier Solution from SQL Server 2005/2008) and the New Method of SQL Server 2011 We will explore this in the next article. This article will tackle test 1 first. Test 1: Retrieving Page from two different locations of the table. Run the following T-SQL Script and compare the performance. SET STATISTICS IO ON; USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO You will notice that when we are reading the page from the beginning of the table, the database pages read are much lower than when the page is read from the end of the table. This is very interesting as when the the OFFSET changes, PAGE IO is increased or decreased. In the normal case of the search engine, people usually read it from the first few pages, which means that IO will be increased as we go further in the higher parts of navigation. I am really impressed because using the new method of SQL Server 2011,  PAGE IO will be much lower when the first few pages are searched in the navigation. Test 2: Retrieving Page from two different locations of the table and comparing to earlier versions. In this test, we will compare the queries of the Test 1 with the earlier solution via Common Table Expression (CTE) which we utilized in SQL Server 2005 and SQL Server 2008. 
Test 2 A : Page early in the table -- Test with pages early in table USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 ;WITH CTE_SalesOrderDetail AS ( SELECT *, ROW_NUMBER() OVER( ORDER BY SalesOrderDetailID) AS RowNumber FROM Sales.SalesOrderDetail PC) SELECT * FROM CTE_SalesOrderDetail WHERE RowNumber >= @PageNumber*@RowsPerPage+1 AND RowNumber <= (@PageNumber+1)*@RowsPerPage ORDER BY SalesOrderDetailID GO SET STATISTICS IO ON; USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO Test 2 B : Page later in the table -- Test with pages later in table USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100 ;WITH CTE_SalesOrderDetail AS ( SELECT *, ROW_NUMBER() OVER( ORDER BY SalesOrderDetailID) AS RowNumber FROM Sales.SalesOrderDetail PC) SELECT * FROM CTE_SalesOrderDetail WHERE RowNumber >= @PageNumber*@RowsPerPage+1 AND RowNumber <= (@PageNumber+1)*@RowsPerPage ORDER BY SalesOrderDetailID GO SET STATISTICS IO ON; USE AdventureWorks2008R2 GO DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100 SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET @PageNumber*@RowsPerPage ROWS FETCH NEXT 10 ROWS ONLY GO From the resultset, it is very clear that in the earlier case, the pages read in the solution are always much higher than the new technique introduced in SQL Server 2011 even if we don’t retrieve all the data to the screen. If you carefully look at both the comparisons, the PAGE IO is much lesser in the case of the new technique introduced in SQL Server 2011 when we read the page from the beginning of the table and when we read it from the end. I consider this as a big improvement as paging is one of the most used features for the most part of the application. The solution introduced in SQL Server 2011 is very elegant because it also improves the performance of the query and, at large, the database. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • SQL Server Editions and Integration Services

    The SQL Server 2005 and SQL Server 2008 product family has quite a few editions now, so what does this mean for SQL Server Integration Services? Starting from the bottom we have the free edition known as Express, and the entry level Workgroup edition, as well as the new Web edition. None of these three include the full SSIS product, but they do all include the SQL Server Import and Export Wizard, with access to basic data sources but nothing more, so for simple loading and extraction of data this should suffice. You will not be able to build packages though, this is just a one shot deal aimed at using the wizard on an ad-hoc basis. To get the full power of Integration Services you need to start with Standard edition. This includes the BI Development Studio, for building your own packages, and fully functional IDE integrated into Visual Studio. (You get the full VS 2005/2008 IDE with the product). All core functions will be available but with a restricted set of transformations and tasks. The SQL Server 2005 Features Comparison or Features Supported by the Editions of SQL Server 2008 describes standard edition as having basic transforms, compared to Enterprise which includes the advanced transforms. I think basic is a little harsh considering the power you get with Standard, but the advanced covers the truly ground-breaking capabilities of data mining, text mining and cleansing or fuzzy transforms. The power of performing these operations within your ETL pipeline should not be underestimated, but not all processes will require these capabilities, so it seems like a reasonable delineation. Thankfully there are no feature limitations or artificial governors within Standard compared to Enterprise. The same control flow and data flow engines underpin both editions, with the same configuration and deployment options allowing you to work seamlessly between environments and editions if using the common components. In fact there are no govenors at all in SSIS, so whilst the SQL Database engine is limited to 4 CPUs in Standard edition, SSIS is only limited by the base operating system. The advanced transforms only available with Enterprise edition: Data Mining Training Destination Data Mining Query Component Fuzzy Grouping Fuzzy Lookup Term Extraction Term Lookup Dimension Processing Destination Partition Processing Destination The advanced tasks only available with Enterprise edition: Data Mining Query Task So in summary, if you want SQL Server Integration Services, you need SQL Server Standard edition, and for the more advanced tasks and transforms you need SQL Server Enterprise edition. To recap, the answer to the often asked question is no, SQL Server Integration Services is not available in SQL Server Express or Workgroup editions.


  • SQLAuthority News – Tips for Traveling to Nepal

    - by pinaldave
    If you are a regular reader of this blog, you might know that I travel nearly 20+ days out of 30 days in a month. There are cases when I don’t have a chance to go home for an entire month and my family has to travel to different cities just to meet me. During my recent visit, one of my acquaintances suggested that I should blog about my travel experiences as well. This can be helpful to others who are traveling to the country or city. I have previously written about my experience about all the airlines in India. I would be writing about a few tips about traveling to the beautiful country Nepal today. Kathmandu, the capital of Nepal is very scenic. There are lots of historical places to see and visit. I was fortunate enough to stopover the Pashupatinath Temple, Bhaktapur, Vasantpur and the temple of Kumari Goddess. I also visited casinos there, but even if  I have stayed in Las Vegas for 3 and a half years before, I was not keen on them so I left the casinos just like what I did in Las Vegas . I also traveled to the famous Thamel area by car. Here are my quick tips for anyone who is planning to visit Nepal. They are not categorized but just written in the order that came to my mind. Please note that if you are an Indian, you will get a special privilege everywhere in Nepal, beginning right from the Indian airports. Use the expression “Nameste!” If you want to greet any Indian or Nepali. Indian Nationals do not need visa/passport to enter Nepal. In fact, Indian Nationals can just walk in to Nepal without any passport; but should have any valid Indian ID. There is no use of a passport since it will not be stamped at any immigration ports, whether in India or Nepal. Indian currency is widely accepted everywhere. However, please bring only Rs. 100 bills/notes as Rs. 500 or Rs. 1000 are not accepted. However, casinos there will accept larger bills. Indian National Language – Hindi is widely spoken and understood everywhere. I did not find a single person who had trouble speaking it. Nepali language uses the scripting language as Devnagari, which is similar to Hindi. Here, you will find food of almost every country.  The taste of Nepali food is authentic and very delicious. It is very safe to travel and move around in Kathmandu (despite what media suggests). However, it will really help if you have a friend who speaks Nepali. You can negotiate a few deals and cut off to almost 1/5 of the original quoted price of products sold here. If you are from Gujarat, India – you will find Nepali language sharing many common words. Temples are everywhere, so do not miss to visit a few of them. Pashupatinath is a must. Only followers of Hindu religion (from Nepal and India only) are allowed in most of the holy places. Camera is allowed everywhere except on the holy places. Now it is your turn to share your opinions or any suggestions. I think Nepal is a great country as there are lots of places to visit. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology


  • What's new in RadChart for 2010 Q1 (Silverlight / WPF)

    Greetings, RadChart fans! It is with great pleasure that I present this short highlight of our accomplishments for the Q1 release :). We've worked very hard to make the best Silverlight and WPF charting product even better. Here is some of what we did during the past few months. 1) Zooming & Scrolling and the new sampling engine: Without a doubt one of the most important things we did. This new feature allows you to bind your chart to a very large set of data with blazing performance. Don't take my word for it, give it a try! 2) New Smart Label Positioning and Spider-like labels feature: This new feature really helps with very busy graphs. You can play with the different settings we offer in this example. 3) Sorting and Filtering: Much like our RadGridView control, the chart now allows you to sort and filter your data out of the box with a single line of code! 4) Legend improvements: We've also been paying attention to those of you who wanted a much improved legend. It is now possible to customize the look and feel of legend items and the legend position with a single click. 5) Custom palette brushes: You have told us that you want to easily customize all palette colors using a single clean API from both XAML and code-behind. The new custom palette brushes API does exactly that. There are numerous other improvements as well, such as much improved themes, performance optimizations and other features. If you want to dig in further, check the release notes and the changes and backwards compatibility topics. Feel free to share the pains and gains of working with RadChart. Our team is always open to receiving constructive feedback and beer :-)



  • Microsoft silverlight 5.0 features for developers

    - by Jalpesh P. Vadgama
    Recently on Silverlight 5.0 firestarter event ScottGu has announced road map for Silverlight 5.0. There will be lots of features that will be there in silverlight 5.0 but here are few glimpses of Silverlight 5.0 Features. Improved Data binding support and Better support for MVVM: One of the greatest strength of Silverlight is its data binding. Microsoft is going to enhanced data binding by providing more ability to debug it. Developer will able to debug the binding expression and other stuff in Siverlight 5.0. Its also going to provide Ancestor Relative source binding which will allow property to bind with container control. MVVM pattern support will also be enhanced. Performance and Speed Enhancement: Now silverlight 5.0 will have support for 64bit browser support. So now you can use that silverlight application on 64 bit platform also. There is no need to take extra care for it.It will also have faster startup time and greater support for hardware acceleration. It will also provide end to end support for hard acceleration features of IE 9. More support for Out Of Browser Application: With Siverlight 4.0 Microsoft has announced new features called out of browser application and it has amazed lots of developer because now possibilities are unlimited with it. Now in silverlight 5.0 Out Of Browser application will have ability to Create Manage child windows just like windows forms or WPF Application. So you can fill power of desktop application with your out of browser application. Testing Support with Visual Studio 2010: Microsoft is going to add automated UI Testing support with Visual Studio 2010 with silverlight 5.0. So now we can test UI of Silverlight much faster. Better Support for RIA Services: RIA Services allows us to create N-tier application with silverlight via creating proxy classes on client and server both side. Now it will more features like complex type support, Custom type support for MVVM(Model View View Model) pattern. WCF Enhancements: There are lots of enhancement with WCF but key enhancement will WSTrust support. Text and Printing Support: Silverlight 5.0 will support vector base graphics. It will also support multicolumn text flow and linked text containers. It will full open type support,Postscript vector enhancement. Improved Power Enhancement: This will prevent screensaver from activating while you are watching videos on silverlight. Silverlight 5.0 is going add that smartness so it can determine while you are going to watch video and while you are not going watch videos. Better support for graphics: Silverlight 5.0 will provide in-depth support for 3D API. Now 3D rendering support is more enhancement in silverlight and 3D graphics can be rendered easily. You can find more details on following links and also don’t forgot to view silverlight firestarter keynot video of scottgu. http://www.silverlight.net/news/events/firestarter-labs/ http://blogs.msdn.com/b/katriend/archive/2010/12/06/silverlight-5-features-firestarter-keynote-and-sessions-resources.aspx http://weblogs.asp.net/scottgu/archive/2010/12/02/announcing-silverlight-5.aspx http://www.silverlight.net/news/events/firestarter/ http://www.microsoft.com/silverlight/future/ Hope this will help you. Stay tuned!!!.


  • ARM TechCon 2013: Oracle, ARM expand collaboration on servers, Internet of Things

    - by Henrik Stahl
    If you have been following Java news, you are already aware of the fact that there has been a lot of investment in Java for ARM-based devices and servers over the last couple of years (news, more news, even more, and lots more). We have released Java ME Embedded binaries for ARM Cortex-M micro controllers, Java SE Embedded for ARM application processors, and a port of the Oracle JDK for ARM-based servers. We have been making Java available to the Beagleboard, Raspberry Pi and Lego Mindstorms/LeJOS communities and worked with them and the Java User Groups to evangelize Java as a great development environment for IoT devices. We have announced commercial relationships with Freescale, Qualcomm, Gemalto M2M, SIMCom to name a few. ARM and Freescale on their side have joined the JCP, recently been voted in as members of the Executive Committee, and have worked with Oracle to evangelize Java in their ecosystem. It is with this background, Nandini Ramani, Vice President, Java Platform at Oracle, announced a expanded collaboration with ARM in a TechCon 2013 keynote titled "Enabling Compelling Services for IoT". To summarize the announcement: ARM and Oracle will work together on interoperability between the ARM Sensinode communications stack (based on CoAP, DTLS and 6LoWPAN) and Oracle's Java ME, Java SE and middleware products. ARM will donate the Sensinode CoAP protocol engine to OpenJDK to stimulate broad adoption of the CoAP protocol, and work with Oracle to extend the relevant Java specifications with CoAP support. CoAP (Constrained Application Protocol) is an IETF specification that provides a low-bandwidth request/response protocol suitable for IoT applications. ARM will work with Oracle and Freescale to enable the mbed Hardware Abstraction Layer (HAL) to act as a portability layer for Java ME Embedded. Oracle will enable mbed as a tier one platform for Java ME Embedded. Over time, this effort will allow any mbed-enabled platforms (mostly based on Cortex-M microcontrollers) to work with off the shelf Java ME Embedded binaries, extending the reach of Java ME into IoT edge nodes. In Nandini's keynote, Oracle showed a roadmap to port the Oracle JDK for Linux on 64-bit ARMv8 servers in the 2015 time frame, preceded by an extended early access program. We expect this binary to have full feature parity with Oracle JDK on other platforms, and be available under the same royalty-free license. This effort has been going on for some time, but is now accelerated due to availability of hardware from Applied Micro. Oracle will be working with Applied Micro on the ARMv8 port, and on optimizing Java for their X-Gene products. Oracle and ARM will work closely on IoT architecture, and on evangelizing Java on ARM for both servers and IoT devices. These announcements reinforce Java's position as a first-class citizen in the ARM ecosystem, and signal a commitment from us to collaborate on driving standards and open ecosystem for the Internet of Things. If you are active in this area and not already in touch with us, or interested in learning more - please reach out to us!


  • The Incremental Architect's Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx The functionality of programs is entered via Entry Points. So what we´re talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases: Analyzing the requirements to find the Entry Points and their signatures Designing the functionality to be executed when those Entry Points get triggered Implementing the functionality according to the design aka coding I presume, you´re familiar with phase 1 in some way. And I guess you´re proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. “Designing functionality? What´s that supposed to mean?” you might already have thought. Here´s my definition: To design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what´s supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution that is, because those functions only exist in your head (or on paper) during this phase. But you may have guess that, because it´s “design” not “coding”. And here is, what functional design is not: It´s not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It´s equally basic. It´s what code needs to do in order to deliver some functionality or quality. Logic is what´s doing that needs to be done by software. Transformations are either done through expressions or API-calls. And then there is alternative control flow depending on the result of some expression. Basically it´s just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do). But calling your own function is not logic. It´s not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn´t it. What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher unterstandability, easier to keep code consistent). That´s no small feat, however. Evolvable code can hardly be overestimated. That´s why to me functional design is so important. It´s at the core of software development. To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design an logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need back out through refactoring. Functional design on the other hand is bloodless without actual code. It´s just a theory with no experiments to prove it. But how to do functional design? An example of functional design Let´s assume a program to de-duplicate strings. 
The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line C:\>deduplicate "a, b, a, c, d, b, e, c, a" a, b, c, d, e …or by clicking on a GUI button. This leads to the Entry Point function to get called. It´s the program´s main function in case of the batch version or a button click event handler in the GUI version. That´s the physical Entry Point so to speak. It´s inevitable. What then happens is a three step process: Transform the input data from the user into a request. Call the request handler. Transform the output of the request handler into a tangible result for the user. Or to phrase it a bit more generally: Accept input. Transform input into output. Present output. This does not mean any of these steps requires a lot of effort. Maybe it´s just one line of code to accomplish it. Nevertheless it´s a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP). Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it´s still on a meta-level. The application to the domain at hand is easy, though: Accept string list from command line De-duplicate Present de-duplicated strings on standard output And this concrete list of processing steps can easily be transformed into code:static void Main(string[] args) { var input = Accept_string_list(args); var output = Deduplicate(input); Present_deduplicated_string_list(output); } Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you´re not so sure how to go about with the de-duplication for example. Then just implement what´s easy right now, e.g.private static string Accept_string_list(string[] args) { return args[0]; } private static void Present_deduplicated_string_list( string[] output) { var line = string.Join(", ", output); Console.WriteLine(line); } Accept_string_list() contains logic in the form of an API-call. Present_deduplicated_string_list() contains logic in the form of an expression and an API-call. And then repeat the functional design for the remaining processing step. What´s left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design you´re left with just functions. So which functions could make up the de-duplication? Here´s a suggestion: De-duplicate Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Processing step 2 obviously was the core of the solution. That´s where real creativity was needed. That´s the core of the domain. 
But now after this refinement the implementation of each step is easy again:private static string[] Parse_string_list(string input) { return input.Split(',') .Select(s => s.Trim()) .ToArray(); } private static Dictionary<string,object> Compile_unique_strings(string[] strings) { return strings.Aggregate( new Dictionary<string, object>(), (agg, s) => { agg[s] = null; return agg; }); } private static string[] Serialize_unique_strings( Dictionary<string,object> dict) { return dict.Keys.ToArray(); } With these three additional functions Main() now looks like this:static void Main(string[] args) { var input = Accept_string_list(args); var strings = Parse_string_list(input); var dict = Compile_unique_strings(strings); var output = Serialize_unique_strings(dict); Present_deduplicated_string_list(output); } I think that´s very understandable code: just read it from top to bottom and you know how the solution to the problem works. It´s a mirror image of the initial design: Accept string list from command line Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Present de-duplicated strings on standard output You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too. So much for an initial concrete example. Now it´s time for some theory. Because there is method to this madness ;-) The above has only scratched the surface. Introducing Flow Design Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic, a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1] In its simplest form a process can be written as a bullet point list of steps, e.g. Get data from user Output result to user Transform data Parse data Map result for output Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.? Get data from user Parse data Transform data Map result for output Output result to user That´s great for a start into functional design. It´s better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a functionn is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion? My recommendation: For any given function you´re supposed to implement first do a functional design. Then, once you´re confident you know the processing steps - which are pretty small - refine and code them using TDD. You´ll see that´s much, much easier - and leads to cleaner code right away. 
For more information on this approach I call “Informed TDD” read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I´d say. It´s more according to the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with. So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always that simple to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their “dimensionality”: text just has one dimension, it´s sequential. Alternatives and parallelism are hard to encode in text. In addition the functional design using numbered lists lacks data. It´s not visible what´s the input, output, and state of the processing steps. That´s why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient. Visualizing processes The building block of the functional design notation is a functional unit. I mostly draw it like this: Something is done, it´s clear what goes in, it´s clear what comes out, and it´s clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That´s why I call this approach to functional design Flow Design. It´s about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That´s what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: “Programming is abot detail”. Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling them. To me that´s also an example of the SRP. Function design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API-call). TDD unfortunately mixes both responsibilities. It´s just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that´s one reason why TDD has failed to deliver on its promise for many developers. Using functional units as building blocks of functional design processes can be depicted very easily. Here´s the initial process for the example problem: For each processing step draw a functional unit and label it. Choose a verb or an “action phrase” as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step. 
Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean “no data is flowing”, but nevertheless a signal is sent. A name like “list” or “strings” in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like “String” or “Customer” on the other hand signifies a data type. If you like, you also can combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent. Flows wired-up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It´s like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does make much sense. When should it start to work? What´s the trigger? That´s why in the above process even the first processing step has an input. If you like, view such 1D-flows as pipelines. Data is flowing through them from left to right. But as you can see, it´s not always the same data. It get´s transformed along its passage: (args) becomes a (list) which is turned into (strings). The Principle of Mutual Oblivion A very characteristic trait of flows put together from function units is: no functional units knows another one. They are all completely independent of each other. Functional units don´t know where their input is coming from (or even when it´s gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don´t know where their output is going. They just produce it in their own time independent of other functional units. That means at least conceptually all functional units work in parallel. Functional units don´t know their “deployment context”. They now nothing about the overall flow they are place in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don´t depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units. By building software in such a manner, functional design interestingly follows nature. Nature´s building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell “controlling” a muscle cell for example:[2] The nerve cell does not know anything about muscle cells, let alone the specific muscel cell it is “attached to”. Likewise the muscle cell does not know anything about nerve cells, let a lone a specific nerve cell “attached to” it. Saying “the nerve cell is controlling the muscle cell” thus only makes sense when viewing both from the outside. “Control” is a concept of the whole, not of its parts. Control is created by wiring-up parts in a certain way. Both cells are mutually oblivious. 
Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it´s coming from, neither cell cares about.

Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism.

How and why this happened in nature is a mystery. For our software, though, it´s clear: functional and quality requirements need to be fulfilled. So we as developers have to become “intelligent designers” of “software cells” which we put together to form a “software organism” which responds in satisfying ways to triggers from its environment.

My bet is: If nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler “machines”?

So my rule is: Wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That´s what Flow Design is about. In that way it´s even universal, I´d say. Its notation can also be applied to biology:

Never mind labeling the functional units with nouns. That´s ok in Flow Design. You´ll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware.

Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 “software cells” (functional units). Both, though, follow the same basic principle.

Translating functional units into code

Moving from functional design to code is no rocket science. In fact it´s straightforward. There are two simple rules:

Translate an input port to a function.
Translate an output port either to a return statement in that function or to a function pointer visible to that function.

The simplest translation of a functional unit is a function. That´s what you saw in the above example. Functions are mutually oblivious. That´s why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let´s be clear about one thing: There is no dependency injection in nature. For all of an organism´s complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks.

Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called “stop words”.

In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: Any number of (word) data items can flow from the functional unit for each input data item. It might be none or one or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value. 
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

void filter_stop_words(
    string word,
    Action<string> onNoStopWord)
{
    if (...check if not a stop word...)
        onNoStopWord(word);
}

If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you´re right. Conceptually, though, it´s not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only.

Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it´s called an event. In C# that´s even an explicit feature.

class Filter
{
    public void filter_stop_words(string word)
    {
        if (...check if not a stop word...)
            onNoStopWord(word);
    }

    public event Action<string> onNoStopWord;
}

When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it´s packed together with others into classes. You´ll see examples further down the Flow Design road.

Another example of 1D functional design

Let´s see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:

string WordWrap(string text, int maxLineLength) {...}

That´s not an Entry Point since we don´t see an application with an environment and users. Nevertheless it´s a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length it can be split into multiple parts, each fitting in a line.

Flow Design

Let´s start by brainstorming the process to accomplish the feat of reformatting the text. What´s needed?

Words need to be assembled into lines
Words need to be extracted from the input text
The resulting lines need to be assembled into the output text
Words too long to fit in a line need to be split

Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let´s tackle “average words” (words not longer than a line). Here´s the Flow Design for this increment:

The first three bullet points have been turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That´s good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It´s not outside the brackets but inside. 
That means it´s not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It´s an implementation detail.

Does any processing step require further refinement? I don´t think so. They all look pretty “atomic” to me. And if not… I can always backtrack and refine a process step using functional design later once I´ve gained more insight into a sub-problem.

Implementation

The implementation is straightforward, as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls:

Easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it´s easy to evolve as you´ll see.

Flow Design - Increment 2

So far only texts consisting of “average words” are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it´s there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it´s time to add it to deliver the full scope.

Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It´s easy to extend it - instead of changing well tested code. How´s that possible?

Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead where extensions might need to be made in the future. I just “phase in” the remaining processing step:

Since neither Extract words nor Reformat know of their environment, neither needs to be touched due to the “detour”. The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

Implementation - Increment 2

A trivial implementation checking the assumption that this works does not do anything to split long words. The input is just passed on:

Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with.

Compare this to Robert C. Martin´s solution:[4]

How does this solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach.

Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin´s. At least it seems so. Because his solution does not cover all the “word wrap situations” the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then… Is a difference in LOC that important as long as it´s in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it´s also clarity in design.

But don´t take my word for it. Try Flow Design on larger problems and compare for yourself. 
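As an illustration only, here is a minimal C# sketch of what the composed flow for this kata might look like once all functional units - including the long-word “detour” - are translated into functions. The method names and the exact splitting logic are my own assumptions for this sketch, not the code from the original implementation:

using System;
using System.Collections.Generic;

class WordWrapFlow
{
    // The flow: (text) -> Extract words -> (words*) -> Split long words -> (words*) -> Assemble lines -> (lines*) -> Assemble text -> (text)
    public static string WordWrap(string text, int maxLineLength)
    {
        if (string.IsNullOrEmpty(text)) return "";
        var words = Extract_words(text);
        var fittingWords = Split_long_words(words, maxLineLength);
        var lines = Assemble_lines(fittingWords, maxLineLength);
        return Assemble_text(lines);
    }

    static IEnumerable<string> Extract_words(string text) =>
        text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

    // The processing step phased in for increment 2: long words are chopped into line-sized parts.
    static IEnumerable<string> Split_long_words(IEnumerable<string> words, int maxLineLength)
    {
        foreach (var word in words)
            for (var i = 0; i < word.Length; i += maxLineLength)
                yield return word.Substring(i, Math.Min(maxLineLength, word.Length - i));
    }

    // Greedy line assembly: add words to the current line as long as they fit.
    static IEnumerable<string> Assemble_lines(IEnumerable<string> words, int maxLineLength)
    {
        var line = "";
        foreach (var word in words)
        {
            if (line.Length == 0) line = word;
            else if (line.Length + 1 + word.Length <= maxLineLength) line += " " + word;
            else { yield return line; line = word; }
        }
        if (line.Length > 0) yield return line;
    }

    static string Assemble_text(IEnumerable<string> lines) =>
        string.Join("\n", lines);
}

Each function only knows its own input and output, in keeping with the PoMO; the composition in WordWrap() is the only place where the flow as a whole becomes visible.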
What´s the easier, more straightforward way to clean code? And keep in mind: You ain´t seen all yet ;-) There´s more to Flow Design than described in this chapter.

In closing

I hope I was able to give you an impression of functional design that makes you hungry for more. To me it´s an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that´s necessary read my blog article here.

For now let me point out to you - if you haven´t already noticed - that Flow Design is a general purpose declarative language. It´s “programming by intention” (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members.

Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We´re talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

PS: If you like what you read, consider getting my ebook “The Incremental Architect´s Napkin”. It´s where I compile all the articles in this series for easier reading.

[1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also to me it seems the real world is full of state and side effects. So why give them such a bad image? That´s why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don´t put too much of it into a single processing step.

[2] Image taken from www.physioweb.org

[3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you´re using a Functional Programming language it´s of course a no brainer.

[4] Taken from his blog post “The Craftsman 62, The Dark Path”.

    Read the article

  • Slides of my HOL on MySQL Cluster

    - by user13819847
    Hi!Thanks everyone who attended my hands-on lab on MySQL Cluster at MySQL Connect last Saturday.The following are the links for the slides, the HOL instructions, and the code examples.I'll try to summarize my HOL below.Aim of the HOL was to help attendees to familiarize with MySQL Cluster. In particular, by learning: the basics of MySQL Cluster Architecture the basics of MySQL Cluster Configuration and Administration how to start a new Cluster for evaluation purposes and how to connect to it We started by introducing MySQL Cluster. MySQL Cluster is a proven technology that today is successfully servicing the most performance-intensive workloads. MySQL Cluster is deployed across telecom networks and is powering mission-critical web applications. Without trading off use of commodity hardware, transactional consistency and use of complex queries, MySQL Cluster provides: Web Scalability (web-scale performance on both reads and writes) Carrier Grade Availability (99.999%) Developer Agility (freedom to use SQL or NoSQL access methods) MySQL Cluster implements: an Auto-Sharding, Multi-Master, Shared-nothing Architecture, where independent nodes can scale horizontally on commodity hardware with no shared disks, no shared memory, no single point of failure In the architecture of MySQL Cluster it is possible to find three types of nodes: management nodes: responsible for reading the configuration files, maintaining logs, and providing an interface to the administration of the entire cluster data nodes: where data and indexes are stored api nodes: provide the external connectivity (e.g. the NDB engine of the MySQL Server, APIs, Connectors) MySQL Cluster is recommended in the situations where: it is crucial to reduce service downtime, because this produces a heavy impact on business sharding the database to scale write performance higly impacts development of application (in MySQL Cluster the sharding is automatic and transparent to the application) there are real time needs there are unpredictable scalability demands it is important to have data-access flexibility (SQL & NoSQL) MySQL Cluster is available in two Editions: Community Edition (Open Source, freely downloadable from mysql.com) Carrier Grade Edition (Commercial Edition, can be downloaded from eDelivery for evaluation purposes) MySQL Carrier Grade Edition adds on the top of the Community Edition: Commercial Extensions (MySQL Cluster Manager, MySQL Enterprise Monitor, MySQL Cluster Installer) Oracle's Premium Support Services (largest team of MySQL experts backed by MySQL developers, forward compatible hot fixes, multi-language support, and more) We concluded talking about the MySQL Cluster vision: MySQL Cluster is the default database for anyone deploying rapidly evolving, realtime transactional services at web-scale, where downtime is simply not an option. From a practical point of view the HOL's steps were: MySQL Cluster installation start & monitoring of the MySQL Cluster processes client connection to the Management Server and to an SQL Node connection using the NoSQL NDB API and the Connector J In the hope that this blog post can help you get started with MySQL Cluster, I take the opportunity to thank you for the questions you made both during the HOL and at the MySQL Cluster booth. Slides are also on SlideShares: Santo Leto - MySQL Connect 2012 - Getting Started with Mysql Cluster Happy Clustering!
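If you want to poke at a freshly started evaluation cluster from .NET rather than from the mysql client, the NDB API, or the Connector/J used in the HOL, a hypothetical minimal sketch using Connector/NET (MySql.Data) against one of the SQL nodes could look like the following. Host, port, credentials, and table name are placeholders, not values from the HOL instructions:

using System;
using MySql.Data.MySqlClient;

class ClusterSmokeTest
{
    static void Main()
    {
        // Point this at one of the cluster's SQL nodes (a mysqld connected to the data nodes).
        var connectionString = "Server=127.0.0.1;Port=3306;Database=test;Uid=root;Pwd=;";

        using (var connection = new MySqlConnection(connectionString))
        {
            connection.Open();

            // ENGINE=NDBCLUSTER stores the table in the data nodes rather than locally in the SQL node.
            new MySqlCommand(
                "CREATE TABLE IF NOT EXISTS hol_demo (id INT PRIMARY KEY, note VARCHAR(50)) ENGINE=NDBCLUSTER",
                connection).ExecuteNonQuery();

            new MySqlCommand(
                "REPLACE INTO hol_demo (id, note) VALUES (1, 'hello cluster')",
                connection).ExecuteNonQuery();

            // Any SQL node attached to the same cluster would see the same row.
            var count = new MySqlCommand("SELECT COUNT(*) FROM hol_demo", connection).ExecuteScalar();
            Console.WriteLine("Rows visible via this SQL node: " + count);
        }
    }
}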

    Read the article

  • T-SQL Improvements And Data Types in ms sql 2008

    - by Aamir Hasan
 Microsoft SQL Server 2008 is a new version released in the first half of 2008, introducing new properties and capabilities to the SQL Server product family. All these new and enhanced capabilities can be summed up with the classic words: secure, reliable, scalable and manageable. SQL Server 2008 is secure. It is reliable. SQL Server 2008 is scalable and is more manageable when compared to previous releases. Now we will have a look, in detail, at the features that make MS SQL Server 2008 more secure, more reliable, more scalable, etc.

Microsoft SQL Server 2008 provides T-SQL enhancements that improve performance and reliability. Itzik discusses composable DML, the ability to declare and initialize variables in the same statement, compound assignment operators, and more reliable object dependency information.

Table-Valued Parameters
Inserts into structures with 1-N cardinality are problematic
One order -> N order line items
"N" is variable and can be large
Don't want to force a new order for every 20 line items
One database round-trip / line item slows things down
No ARRAY data type in SQL Server
XML composition/decomposition used as an alternative
Table-valued parameters solve this problem

Table-Valued Parameters
SQL Server has table variables: DECLARE @t TABLE (id int);
SQL Server 2008 adds strongly typed table variables: CREATE TYPE mytab AS TABLE (id int); DECLARE @t mytab;
Parameters must use strongly typed table variables

Table Variables are Input Only
Declare and initialize TABLE variable:
  DECLARE @t mytab;
  INSERT @t VALUES (1), (2), (3);
  EXEC myproc @t;
Procedure must declare variable READONLY:
  CREATE PROCEDURE usetable (
    @t mytab READONLY ...)
  AS
    INSERT INTO lineitems SELECT * FROM @t;
    UPDATE @t SET... -- no!

T-SQL Syntax Enhancements
Single statement declare and initialize: DECLARE @i int = 4;
Compound assignment operators: SET @i += 1;
Row constructors:
  DECLARE @t TABLE (id int, name varchar(20));
  INSERT INTO @t VALUES
    (1, 'Fred'), (2, 'Jim'), (3, 'Sue');

Grouping Sets
Grouping Sets allow multiple GROUP BY clauses in a single SQL statement
Multiple, arbitrary, sets of subtotals
Single read pass for performance
Nested subtotals provide even better performance
Grouping Sets are an ANSI standard
COMPUTE BY is deprecated

GROUPING SETS, ROLLUP, CUBE
SQL Server 2008 - ANSI-syntax ROLLUP and CUBE
Pre-2008 non-ANSI syntax is deprecated
WITH ROLLUP produces n+1 different groupings of data, where n is the number of columns in GROUP BY
WITH CUBE produces 2^n different groupings, where n is the number of columns in GROUP BY
GROUPING SETS provide a "halfway measure": just the number of different groupings you need
Grouping Sets are visible in the query plan

GROUPING_ID and GROUPING
Grouping Sets can produce non-homogeneous sets
A grouping set includes NULL values for group members
Need to distinguish by grouping and NULL values
GROUPING (column expression) returns 0 or 1: is this a group based on the column expression or a NULL value?
GROUPING_ID (a,b,c) is a bitmask; GROUPING_ID bits are set based on column expressions a, b, and c

MERGE Statement
Multiple set operations in a single SQL statement
Uses multiple sets as input: MERGE target USING source ON ...
Operations can be INSERT, UPDATE, DELETE
Operations based on WHEN MATCHED, WHEN NOT MATCHED [BY TARGET], WHEN NOT MATCHED [BY SOURCE]

More on MERGE
The MERGE statement can reference a $action column, used when MERGE is combined with the OUTPUT clause
Multiple WHEN clauses possible for MATCHED and NOT MATCHED BY SOURCE; only one WHEN clause for NOT MATCHED BY TARGET
MERGE can be used with any table source
A MERGE statement causes triggers to be fired once
Rows affected includes total rows affected by all clauses

MERGE Performance
The MERGE statement is transactional; no explicit transaction required
One pass through tables: at most a full outer join
Matching rows = when matched
Left-outer join rows = when not matched by target
Right-outer join rows = when not matched by source

MERGE and Determinism
UPDATE using a JOIN is non-deterministic: if more than one row in the source matches the ON clause, either/any row can be used for the UPDATE
MERGE is deterministic: if more than one row in the source matches the ON clause, it's an error

Keeping Track of Dependencies
New dependency views replace sp_depends
Views are kept in sync as changes occur
sys.dm_sql_referenced_entities lists all named entities that an object references (example: which objects does this stored procedure use?)
sys.dm_sql_referencing_entities
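The table type (mytab) and procedure (usetable) sketched above can also be exercised from client code: ADO.NET passes a table-valued parameter as SqlDbType.Structured. The connection string and the database name below are assumptions for illustration; only the type and procedure names come from the slides above:

using System;
using System.Data;
using System.Data.SqlClient;

class TvpExample
{
    static void Main()
    {
        // Client-side shape matching CREATE TYPE mytab AS TABLE (id int)
        var rows = new DataTable();
        rows.Columns.Add("id", typeof(int));
        rows.Rows.Add(1);
        rows.Rows.Add(2);
        rows.Rows.Add(3);

        using (var connection = new SqlConnection("Server=.;Database=Sandbox;Integrated Security=true"))
        using (var command = new SqlCommand("dbo.usetable", connection))
        {
            command.CommandType = CommandType.StoredProcedure;

            // Table-valued parameters are passed as Structured and are READONLY on the server side.
            var parameter = command.Parameters.Add("@t", SqlDbType.Structured);
            parameter.TypeName = "dbo.mytab";
            parameter.Value = rows;

            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}

The whole set of rows travels to the server in a single round trip, which is exactly the 1-N cardinality problem the slides describe.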

    Read the article

  • Silverlight Cream for March 24, 2010 -- #819

    - by Dave Campbell
    In this Issue: Nokola, Tim Heuer, Christian Schormann, Brad Abrams, David Kelley, Phil Middlemiss, Michael Klucher, Brandon Watson, Kunal Chowdhury, Jacek Ciereszko, and Unni. Shoutouts: Michael Klucher has a short post up For Love of the Game (Development)…, where he's looking for some input from the developer community. Shawn Hargreaves has a link post up of all the Windows Phone MIX10 presentations Chris Cavanagh has a Soft-Body Physics for Windows Phone 7 post up that goes along with one he did 1-1/2 years ago! Jeff Weber posted An Open Letter To Microsoft Regarding The Silverlight Game Development Community Pete Brown posted his MIX10 Recap ... lots of information, and discussion of what he was up to ... I liked the Trivia app Pete... glad to hear that was yours :) I've changed my mind and added a WP7 tag to SilverlightCream. I'll straighten out all the Mobile plus Silverlight links to point at the WP7 tab hopefully tonight. From SilverlightCream.com: EasyPainter Source Pack 3: Adorners, Mouse Cursors and Frames Nokola has been busy with EasyPainter adding in Custom, Extensible Mouse Cursors and Customizable Adorners with extensible adorner frames, and best of all... all with source code! Simulate Geo Location in Silverlight Windows Phone 7 emulator Among the things we don't have in our WP7 emulators is Geo Location... Tim Heuer comes to the rescue with a simulator for it... too cool, Tim! Blend 4: About Path Layout, Part II Christian Schormann is back with Part 2 of his tutorial sequence on the new Path Layout. Really good info and definitely cool presentations of the control. Silverlight 4 + RIA Services - Ready for Business: Exposing OData Services Brad Abrams continues his series with a post on exposing OData services. This looks like a great tutorial on the topic... will probably resolve some questions I've been having :) No Silverlight and Preloader Experience(ish) - in 10 seconds... David Kelley exposes the code he uses on his site, designed to be friendly to Silverlight and non-Silverlight users alike. Merged Dictionaries of Style Resources and Blend Phil Middlemiss has a nice article up on Merged Dictionaries and using multiple resource dictionaries that the app chooses, but also be compatible with Prism and Blend while not eating your system resources out of house and home. XNA Game Studio and Windows Phone Emulator Compatibility Michael Klucher has a definitive post up about getting your XNA and system up-to-speed for WP7... a must-read if you've been running any of the other XNA drops. Windows Phone 7 301 Redirect Bug Brandon Watson reports a 301 Redirect bug on WP7 ... see the code and how he got it, then follow along as he explains all the debug paths he took and what the resolution (?) really is :) Silverlight 4: How to use the new Printing API? Kunal Chowdhury has a tutorial up on printing with Silverlight 4 RC... from the project layout to printing and then printing a smaller section... all good Printing problem in Silverlight 4.0 RC - loading images in code behind Jacek Ciereszko also is writing about printing, and in his case he had problems with loading an image dynamically and printing it... plus he provides a solution to the 'blank page' problem. ToolboxExampleAttribute - a new extension point in Blend 4 (and a few other extensibility related changes) Unni has an article up about Expression Blend 4's new ToolboxExampleAttribute which allow you to have multiple examples of the same type resulting in different XAML produced. Stay in the 'Light! 
Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone    MIX10

    Read the article

  • Silverlight Cream for January 30, 2011 -- #1037

    - by Dave Campbell
    In this Issue: Ollie Riches, Colin Eberhardt, Andrej Tozon, Arik Poznanski, Deborah Kurata(-2-), Jay Kimble, Yochay Kiriaty, Peter Kuhn, Mike Ormond, WindowsPhoneGeek(-2-), and Matthias Shapiro. Above the Fold: Silverlight: "Missing Chart Legend" Deborah Kurata WP7: "XNA for Silverlight developers: Part 2 - Text rendering" Peter Kuhn Shoutouts: Timmy Kokke has a post up discussing What’s new in the Expression Design January 2011 preview? From SilverlightCream.com: WP7Contrib: Thread safe ObservableCollection<T> Ollie Riches, one of the two originators of WP7Contrib, has a post up on the WP7C ObservableCollection... what and why. Windows Phone 7 DeferredLoadContentControl Colin Eberhardt's latest is one we should all take notice of... a content control that defers rendering to provide a better user experience... source code is available as are some good external links Andrej Tozon on Hey weigh! WP7 application SilverlightShow interviews WP7 Dev Andrej Tozon and gets his take on his app, challenges, tips, and the future of WP7. A ProgressBar With Text For Windows Phone 7 Arik Poznanski demonstrates putting text up on the progress bar to let your users know what you're up to... and it looks great in the screenshots. Charting in a Silverlight Application using MVVM Deborah Kurata is checking out the Charting control this time around... using the charting control from the toolbox in the MVVM app she built in the last post... C# and VB code as always. Missing Chart Legend Deborah Kurata's latest in the world of Charting and MVVM involves using a custom theme and having your chart legend disappear... never fear, she's gonna tell you how to fix that! Silverlight/WP7 tip: Detecting when in VS Design Mode Jay Kimble has a post up that not only resolves a question you may need answered during development (are you in VS design Mode), but it also helps resolve a class of problem that Jay explains. Windows Phone GPS Emulator Yochay Kiriaty points out that while part of the issues of building a GPS-driven app for WP7 is getting your head around the tools, the next hurdle is testing... and that's what he's really discussing... "Windows Phone GPS Emulator" ... if you're playing with the GPS, you'll want this. XNA for Silverlight developers: Part 2 - Text rendering Peter Kuhn's latest tutorial in his XNA series for Silverlight developers is up at SilverlightShow... in this tutorial, Peter discusses text... it's a vastly different game displaying text in XNA as compared to Silverlight ... check it out and see. OData and Windows Phone 7 Mike Ormond starts you off using OData on your WP7 by showing where to download the libraries, and not stopping until he has an app running that reads an OData feed, plus he plans on continuing the quest in future posts. WP7 ProgressOverlay control in depth: features and customization WindowsPhoneGeek has a couple new posts up. The first one is an in-depth look at the ProgressOverlay control in the Codeing4fun Toolkit... pretty cool to be able to put your logo or app logo up. On Testing Windows Phone 7 Applications – Part II: Dealing with the WP7 Application Model WindowsPhoneGeek also has 5 more WP7 testing tips... and these are a little more technical than the first set, and includes some good external links. Topics include: Tombstoning, Usability, Navigation, Capabilities, and Memory consumption. Fun Theme-Friendly Windows Phone Icon Matthias Shapiro explains how to have your WP7 icon change based on the theme your user has chosen... 
great examples, and XAML included Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Logging connection strings

If you use some of the dynamic features of SSIS, such as package configurations or property expressions, then sometimes trying to work out where your connections are pointing can be a bit confusing. You will work it out in the end, but it can be useful to explicitly log this information so that when things go wrong you can just review the logs. You may wish to develop this idea further and encapsulate such logging into a custom task, but for now let's keep it simple and use the Script Task. The Script Task code below will raise an Information event showing the name and connection string for a connection.

Imports System
Imports Microsoft.SqlServer.Dts.Runtime

Public Class ScriptMain

    Public Sub Main()
        Dim fireAgain As Boolean

        ' Get the connection string, we need to know the name of the connection
        Dim connectionName As String = "My OLE-DB Connection"
        Dim connectionString As String = Dts.Connections(connectionName).ConnectionString

        ' Format the message and log it via an information event
        Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
            connectionName, connectionString)
        Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)

        Dts.TaskResult = Dts.Results.Success
    End Sub

End Class

Building on that example it is probably more flexible to log all connections in a package, as shown in the next example.

Imports System
Imports Microsoft.SqlServer.Dts.Runtime

Public Class ScriptMain

    Public Sub Main()
        Dim fireAgain As Boolean

        ' Loop through all connections in the package
        For Each connection As ConnectionManager In Dts.Connections
            ' Get the connection string and log it via an information event
            Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                connection.Name, connection.ConnectionString)
            Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)
        Next

        Dts.TaskResult = Dts.Results.Success
    End Sub

End Class

By using the Information event the output is readily available in the designer, for example the Visual Studio Output window (Ctrl+Alt+O) or the package designer Execution Results tab, and it also allows you to readily control the logging by choosing which events to log in the normal way.

Now before somebody starts commenting that this is a security risk, I would like to highlight good practice for building connection managers. Firstly the Password property, or any other similarly sensitive property, is always defined as write-only, and secondly the connection string property only uses the public properties to assemble the connection string value when requested. In other words the connection string will never contain the password. I have seen a couple of cases where this is not true, but that was just bad development by third parties; you won’t find anything like that in the box from Microsoft.

Whilst writing this code it made me wish that there was a custom log entry that you could just turn on that did this for you, but alas connection managers do not even seem to support custom events. It did however remind me of a very useful event that is often overlooked and fits rather well alongside connection string logging, the Execute SQL Task’s custom ExecuteSQLExecutingQuery event. To quote the help reference:

Custom Messages for Logging - Provides information about the execution phases of the SQL statement. Log entries are written when the task acquires connection to the database, when the task starts to prepare the SQL statement, and after the execution of the SQL statement is completed. 
The log entry for the prepare phase includes the SQL statement that the task uses. It is the last part that is so useful: how often have you used an expression to derive a SQL statement and wanted to log it to make sure the correct SQL is being produced? You need to turn it on though; by default no custom log events are captured, but I’ll refer you to a walkthrough on setting up the logging for ExecuteSQLExecutingQuery by Jamie.
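For completeness, here is a hedged sketch of the same loop-over-all-connections idea as a C# Script Task, which is available from SSIS 2008 onwards. The class declaration is abbreviated compared to the designer-generated template, and the ScriptResults enum is the one that template provides:

using System;
using Microsoft.SqlServer.Dts.Runtime;

public partial class ScriptMain
{
    public void Main()
    {
        bool fireAgain = false;

        // Loop through all connection managers in the package
        foreach (ConnectionManager connection in Dts.Connections)
        {
            // Build the message and raise it as an Information event so it appears in the configured logs
            string message = string.Format(
                "Connection \"{0}\" has a connection string of \"{1}\".",
                connection.Name, connection.ConnectionString);

            Dts.Events.FireInformation(0, "Information", message, string.Empty, 0, ref fireAgain);
        }

        Dts.TaskResult = (int)ScriptResults.Success;
    }
}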

    Read the article

< Previous Page | 429 430 431 432 433 434 435 436 437 438 439 440  | Next Page >