Search Results

Search found 18756 results on 751 pages for 'generate images'.


  • Employee Info Starter Kit: Project Mission

    - by Mohammad Ashraful Alam
    Employee Info Starter Kit is an open source ASP.NET project template that is intended to address the different types of real-world challenges faced by web application developers when performing common CRUD operations. Using a single database table ‘Employee’, it illustrates how to utilize Microsoft ASP.NET 4.0, Entity Framework 4.0 and Visual Studio 2010 effectively in that context.

    User Stories: The user-end functionality of this starter kit is simple and straightforward, focused on performing CRUD operations on employee records: creating a new employee record, reading existing employee records, updating an existing employee record and deleting existing employee records.

    Key Technology Areas: ASP.NET 4.0, Entity Framework 4.0, T-4 templates and Visual Studio 2010.

    Architectural Objective: There is no universal architecture that can be considered the best for all sorts of applications; based on requirements, constraints and environment, application architecture can differ from one project to another. Trade-off factors are an important consideration when deciding on a particular architectural solution. Employee Info Starter Kit is highly influenced by the ‘Pareto Principle’, or 80-20 rule, where the target is to enable a web developer to gain 80% productivity with 20% of the effort with respect to learning curve and production. “Productivity” as the architectural objective typically also involves other trade-off factors, such as testability, flexibility and performance. Fortunately, Microsoft .NET Framework 4.0 and Visual Studio 2010 include many great features that have been used cleverly in this project to keep these trade-offs to a minimum.

    Why Employee Info Starter Kit is Not a Framework: Application frameworks are really great for productivity, and some of them are unavoidable in this modern age. However, relying on too many frameworks can be overkill for a project, as frameworks are typically designed to serve a wide range of usages and are less customizable or editable. On the other hand, having implementation patterns can be useful for developers, as it enables them to adjust the application on demand. Employee Info Starter Kit provides hundreds of “connected” snippets and implementation patterns to demonstrate problem solutions in an actual production environment. It also includes Visual Studio T-4 templates that generate thousands of lines of repetitive data access and business logic layer code in literally a few seconds on the fly, code that is fully mock-testable thanks to language support for partial methods and the latest support for mock testing in Entity Framework.

    Why Employee Info Starter Kit is Different than Other Open-source Web Applications: Software development is one of the fastest growing industries around the globe, where technology is updated very frequently to meet greater challenges over time. There are literally thousands of community web sites, blogs and forums dedicated to providing support for adopting new technologies. While some are really great for learning new technologies quickly, in most cases they are either too “simple and brief” to be used in real-world scenarios or too “complex and detailed”, typically focused on achieving a product goal (such as a CMS or e-commerce site) from an "end user" perspective, with a long learning curve for the corresponding technology. Employee Info Starter Kit, as a web project, is basically "developer" oriented and takes a hybrid approach, “simple and detailed”: a simple domain has been chosen to intentionally illustrate most of the architectural and implementation challenges faced by web application developers, so that anyone can dive deep into the corresponding new technology or concept quickly.

    Roadmap: Since its first release in 2008 on MSDN Code Gallery, Employee Info Starter Kit has gained huge popularity in the ASP.NET community, with 150,000+ downloads since then. Encouraged by this great response, we have a strong commitment to the community to continuously support it with respect to the latest technologies. Currently hosted on CodePlex, this community-driven project is planned to have a wide range of individual editions, each of which will be focused on a selected application architecture, framework or platform, such as ASP.NET WebForms, ASP.NET Dynamic Data, ASP.NET MVC, jQuery Ajax (RIA), Silverlight (RIA), Azure Service Platform (Cloud) and Visual Studio Automated Test. See here for the full list of current and future editions.

    Read the article

  • Copy TFS Build Definitions between Projects and Collections

    - by Jakob Ehn
    Originally posted on: http://geekswithblogs.net/jakob/archive/2014/06/05/copy-tfs-build-definitions-between-projects-and-collections.aspx

    Over the last couple of years it has become apparent that using multiple team projects in TFS is generally a bad idea. There are of course exceptions to this, but there are a lot of things that become much easier when you put all of your projects and teams in the same team project. Fellow ALM MVP Martin Hinshelwood has blogged about this several times, as have other people in the community. In particular, using the backlog and portfolio management tools makes much more sense when everything is located in the same team project. Unfortunately, consolidating multiple team projects into one is not that easy; it involves migrating source code, work items, reports etc. Another thing that also needs to be migrated is build definitions. It is possible to clone build definitions within the same team project using the TFS Power Tools. The Community TFS Build Manager also lets you clone build definitions to other team projects. But there is no tool that allows you to clone/copy a build definition to another collection. So, I whipped up a simple console application that lets you do this. The tool can be downloaded from https://onedrive.live.com/redir?resid=EE034C9F620CD58D!8162&authkey=!ACTr56v1QVowzuE&ithint=file%2c.zip

    Using CopyTFSBuildDefinitions
    You use the tool like this: CopyTFSBuildDefinitions SourceCollectionUrl SourceTeamProject BuildDefinitionName DestinationCollectionUrl DestinationTeamProject [NewDefinitionName]

    Arguments:
    SourceCollectionUrl - The URL of the TFS collection that contains the team project with the build definition that you want to copy
    SourceTeamProject - The name of the team project that contains the build definition
    BuildDefinitionName - Name of the build definition
    DestinationCollectionUrl - The URL of the TFS collection that contains the team project that you want to copy your build definition to
    DestinationTeamProject - The name of the team project in the destination collection
    NewDefinitionName - (Optional) Use this to override the name of the new build definition. If you don't specify this, the name will be the same as the original one

    Example: CopyTFSBuildDefinitions https://jakob.visualstudio.com DemoProject WebApplication.CI https://anotheraccount.visualstudio.com

    Notes
    Since we are (potentially) creating a build definition in a new collection, there is no guarantee that the various paths that are defined in the build definition exist in the new collection. For example, a build definition refers to server paths in TFVC or repos + branches in TFGit. It also refers to build controllers that definitely don't exist in the new collection. So there will be some cleanup to do after you copy your build definitions. You can fix some of these using the Community TFS Build Manager; for example, it is very easy to apply the correct build controller to a set of build definitions. The problem stated above also applies to build process templates. However, the tool tries to find a build process template in the new team project with the same file name as the one that existed in the old team project. If it finds one, it will be used for the new build definition. Otherwise it will use the default build template. If you want to run the tool for many build definitions, you can use this SQL script, compliments of Mr. Scrum/ALM MVP Richard Hundhausen, to generate the necessary commands:

    USE Tfs_Collection
    GO
    SELECT 'CopyTFSBuildDefinitions.exe http://SERVER:8080/tfs/collection "' + P.ProjectName + '" "' + REPLACE(BD.DefinitionName,'\','') + '" http://NEWSERVER:8080/tfs/COLLECTION TEAMPROJECT'
    FROM tbl_Project P
         INNER JOIN tbl_BuildGroup BG on BG.TeamProject = P.ProjectUri
         INNER JOIN tbl_BuildDefinition BD on BD.GroupId = BG.GroupId
    ORDER BY P.ProjectName, BD.DefinitionName

    Hope that helps, let me know if you have any problems with the tool or if you find it useful

    Read the article

  • A Forming Repository of Script Samples for Automating Windows Server 2012 and Windows 8

    - by Jialiang
    Compared with Windows Server 2008/R2, which provides about 230 cmdlets, Windows Server 2012 beats that by a factor of over 10, shipping roughly 2,430 cmdlets. You can automate almost every aspect of the server. The new PowerShell 3.0, like Windows Server 2012, has a ton of new features. In this automation script-centric move, Microsoft All-In-One Script Framework (AIOSF) is ready to support IT Pros with many new services and offerings coming this year. We sincerely hope that the IT community will benefit from the effort. Here is the first one among our new services and offerings: the team is preparing a large set of Windows 8 / Windows Server 2012 script samples based on frequently asked IT tasks that we collect in TechNet forums and support calls to Microsoft. Because the script topics come from frequently asked IT tasks, we hope that these script samples can be helpful to many IT Pros worldwide. With the General Availability of Windows Server 2012, we release the first three Windows Server 2012 / Windows 8 script samples today.

    Get Network Adapter Properties in Windows Server 2012 and Windows 8 (PowerShell) http://gallery.technet.microsoft.com/scriptcenter/Get-Network-Adapter-37c5a913
    Description: This script can be used to get network adapter properties and advanced properties in Windows Server 2012 and Windows 8. It combines the outputs of Get-NetAdapter and Get-NetAdapterAdvancedProperty, and it can generate a report of network adapter configuration settings.
    Use Scenarios: In the real world, IT administrators are required to check the configuration of network adapters after the deployment of new servers; one typical example is the duplex setting of network adapters. IT administrators also need to maintain, on a regular basis, a server list that contains network adapter configuration settings. Before Windows Server 2012, IT administrators often found these tasks difficult to handle.
    Acknowledgement: Thanks Greg Gu from AIOSF for collecting this script topic and writing the script sample. Thanks James Adams (Microsoft Premier Field Engineer) for reviewing the script sample and ensuring its quality.

    How to batch create virtual machines in Windows Server 2012 (PowerShell) http://gallery.technet.microsoft.com/scriptcenter/How-to-batch-create-9efd1811
    Description: This PowerShell script illustrates how to batch create multiple virtual machines based on a comma delimited file by using PowerShell 3.0 in Windows Server 2012.
    Use Scenarios: IT admins need to batch create virtual machines in Windows Server 2012, but due to a lack of programming knowledge they can use only a few commands. Although Windows PowerShell includes a set of Hyper-V cmdlets, IT admins are reluctant to use them beyond the simple commands that are widely used.
    Acknowledgement: Thanks Anders Wang from AIOSF for collecting this script topic and writing the script sample. Thanks Christopher Norris for reviewing the script sample and ensuring its quality before publishing.

    Remove Windows Store Apps in Windows 8 (PowerShell) http://gallery.technet.microsoft.com/scriptcenter/Remove-Windows-Store-Apps-a00ef4a4
    Description: This script can be used to remove multiple Windows Store Apps from a user account in Windows 8. It provides a list of installed Windows Store applications; you can specify the application IDs and remove them all at once.
    Use Scenarios: 1. In Windows 8, you can remove a single Windows Store App by right-clicking the tile in the Start menu and choosing the uninstall command. However, no command is provided for removing multiple Windows Store Apps all at once. If you want to do so, you can use this script sample. 2. Sometimes Windows Store Apps may crash in Windows 8. Even though you can successfully uninstall and reinstall the App, the application may still crash after the reinstallation. In this situation, you can use this example script to remove these Windows Store Apps cleanly.
    Acknowledgement: Thanks Edward Qi from AIOSF for collecting the script idea and composing the script sample. Thanks James Adams (Microsoft Premier Field Engineer) for reviewing the script sample and ensuring its quality.

    This is just the beginning, and more and more script samples are coming. You can follow our blog (http://blogs.technet.com/b/onescript) to get the latest customer-driven script samples for Windows Server 2012 and Windows 8.

    Read the article

  • Weblogic 10.3.4 (PS3) nodemanager won't start?

    - by angelo.santagata
    Hi all, well I'm back from Australia, and one of the things that happened while I was away was that Oracle announced the PS3 release of its SOA & WebCenter products. Now I normally use pre-installed images, but I always like to install the products at least once; that way I get to see the installation caveats. Here's one. Installation on Windows 7 64-bit, 64-bit JVM, generic WebLogic Server installer. All worked fine, EXCEPT I can't start the node manager; I get the following error:

    <08-Feb-2011 17:16:48> <INFO> <Loading domains file: D:\products\wls1034\WLSERV~1.3\common\NODEMA~1\nodemanager.domains>
    <08-Feb-2011 17:16:48> <SEVERE> <Fatal error in node manager server>
    weblogic.nodemanager.common.ConfigException: Native version is enabled but nodemanager native library could not be loaded
        at weblogic.nodemanager.server.NMServerConfig.initProcessControl(NMServerConfig.java:249)
        at weblogic.nodemanager.server.NMServerConfig.<init>(NMServerConfig.java:190)
        at weblogic.nodemanager.server.NMServer.init(NMServer.java:182)
        at weblogic.nodemanager.server.NMServer.<init>(NMServer.java:148)
        at weblogic.nodemanager.server.NMServer.main(NMServer.java:390)
        at weblogic.NodeManager.main(NodeManager.java:31)
    Caused by: java.lang.UnsatisfiedLinkError: D:\products\wls1034\wlserver_10.3\server\native\win\32\nodemanager.dll: Can't load IA 32-bit .dll on a AMD 64-bit platform
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1803)
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1728)
        at java.lang.Runtime.loadLibrary0(Runtime.java:823)
        at java.lang.System.loadLibrary(System.java:1028)
        at weblogic.nodemanager.util.WindowsProcessControl.<init>(WindowsProcessControl.java:17)
        at weblogic.nodemanager.util.ProcessControlFactory.getProcessControl(ProcessControlFactory.java:24)
        at weblogic.nodemanager.server.NMServerConfig.initProcessControl(NMServerConfig.java:247)
        ... 5 more

    OK, it appears that the node manager has gotten confused and thinks this is a 32-bit install of WebLogic Server, whereas it is the 64-bit install. It might have been something I did, or didn't do, on installation (e.g. -d64 on the JVM command line); however, the workaround is pretty easy.
    1. Create a file called nodemanager.properties in %WL_HOME%\common\nodemanager (on my machine that was D:\products\wls1034\wlserver_10.3\common\nodemanager).
    2. Add the following line to it: NativeVersionEnabled=false
    3. Start it up! This will force it not to use .DLL files and to use emulation/non-native methods instead.

    Read the article

  • Silverlight Cream for February 02, 2011 -- #1039

    - by Dave Campbell
    In this Issue: Tony Champion, Gill Cleeren, Alex van Beek, Michael James, Ollie Riches, Peter Kuhn, Mike Ormond, WindowsPhoneGeek(-2-), Daniel N. Egan, Loek Van Den Ouweland, and Paul Thurott. Above the Fold: Silverlight: "Using the AutoCompleteBox" Peter Kuhn WP7: "Windows Phone Image Button" Loek Van Den Ouweland Training: "New WP7 Virtual Labs" Daniel N. Egan Shoutouts: SilverlightShow has their top 5 most popular news articles up: SilverlightShow for Jan 24-30, 2011 Rudi Grobler posted answers he gives to questions about Silverlight - Where do I start? Brian Noyes starts a series of Webinars at SilverlightShow this morning at 10am PDT: Free Silverlight Show Webinar: Querying and Updating Data From Silverlight Clients with WCF RIA Services Join your fellow geeks at Gangplank in Chandler Arizona this Saturday as Scott Cate and AZGroups brings you Azure Boot Camp – Feb 5th 2011 From SilverlightCream.com: Deploying Silverlight with WCF Services Tony Champion takes a step out of his norm (Pivot) and has a post up about deploying WCF Services with your SL app, and how to take the pain out of that without pulling out your hair. Getting ready for Microsoft Silverlight Exam 70-506 (Part 3) Gill Cleeren's part 3 of getting ready for the Silverlight Exam is up at SilverlightShow... with links to the first two parts. There's so much good information linked off these... thanks Gill and 'The Show'! A guide through WCF RIA Services attributes Alex van Beek has a post up you will probably want to bookmark unless you're not using WCF RIA... do you know all the attributes by heart? ... how about an excellent explanation of 10 of them? Using DeferredLoadListBox in a Pivot Control Michael James discusses using the DeferredLoadListBox, and then also using it with the Pivot control... but not without some pain points which he defines and gives the workaround for. WP7: Know your data Ollie Riches' latest is about Data and WP7 ... specifically 'knowing' what data you're needing/using to avoid the 90MB memory limit... He gives a set of steps to follow to measure your data model to avoid getting in trouble. Using the AutoCompleteBox Peter Kuhn takes a great look at the AutoCompleteBox... the basics, and then well beyond with custom data, item templates, custom filters, asynchronous filtering, and a behavior for MVVM async filtering. OData and Windows Phone 7 Part 2 Mike Ormond has part 2 of his OData/WP7 post up... lashing up the images to go along with the code this time out... nice looking app. WP7 RoundToggleButton and RoundButton in depth WindowsPhoneGeek is checking out the RoundToggleButton and RoundButton controls from the Coding4fun Toolkit in detail... of course where to get them, and then the setup, demo project included. All about Dependency Properties in Silverlight for WP7 WindowsPhoneGeek's latest post is a good dependency-property discussion related to WP7 development, but if you're just learning, it's a good place to learn about the subject. New WP7 Virtual Labs Daniel N. Egan posted links to 6 new WP7 Virtual Labs released on 1/25. Windows Phone Image Button Loek Van Den Ouweland has a style up on his blog that gives you an imageButton for your WP7 apps, and a sweet little video showing how it's done in Expression Blend too. Yet another free Windows Phone book for developers Paul Thurott found a link to another Free eBook for WP7 development. This one is by Puja Pramudya and is an English translation of the original, and is an introductory text, but hey... it's free... give it a look! 
Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • MEB Support to NetBackup MMS

    - by Hema Sridharan
    In MySQL Enterprise Backup 3.6, a new option was introduced to support backup to tapes via the SBT interface. SBT stands for System Backup to Tape, an Oracle API that helps perform backup and restore jobs via media management software such as Oracle's Secure Backup (OSB). Other storage managers, like IBM's Tivoli Storage Manager (TSM) and Symantec's NetBackup (NB), are also supported by MEB, but we don't guarantee that they will function as expected for every release. MEB supports SBT API version 2.0. In this blog, I am primarily going to focus on the interface between MEB and Symantec's NB. If you are using tapes for backup, ensure that the tape library and tape drives are compatible.

    Test Setup
    1. Install NB 7.5 master and media servers on a Linux OS (NB 7.1 can also be used, but for testing purposes I used NB 7.5).
    2. Install MEB 3.8, also on a Linux OS.
    3. Install the NB admin console on your Windows desktop and configure the NB master server from there.
    Note: Ensure that you have root user permission to install NetBackup.

    Configuration Steps for MEB and NB
    Once MEB and NB are installed, ensure that NB is linked to MEB by specifying the library /usr/openv/netbackup/bin/libobk.so64 on the mysqlbackup command line using --sbt-lib-path. Configure the NB master server from the Windows console; that is, configure the storage units by specifying the storage unit name, disk type, media server name etc. Create NetBackup policies that are user selectable, but please make sure that the policy type is "Oracle". Define the clients where MEB will be executed; sometimes this will be a different host from the one where MEB is run, and sometimes it will be the same media server where NB and the tapes are attached. Once the installation and configuration steps are performed for MEB and NB, the next part is the actual execution. MEB should be run as a single-file backup using the --backup-image option with the prefix sbt: (a tag which tells MEB that it should stream the backup image through the SBT interface), which is sent to the NB client via the SBT interface. The resulting backup image is stored where NB stores the images that it backs up. The following diagram shows how MEB interacts with the MMS through the SBT interface.

    Backup
    The following parameters should also be ready for the execution:
    --sbt-lib-path: Path to the SBT library specific to the NetBackup MMS. The SBT lib for NetBackup is /usr/openv/netbackup/bin/libobk.so64
    --sbt-environment: Environment variables that must be defined specific to NetBackup. In our example below, we use NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB and ORACLE_HOME=/export/home2/tmp/hema/mysql-server/

    ./mysqlbackup --port=13000 --protocol=tcp --user=root --backup-image=sbt:bkpsbtNB --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --sbt-environment="NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/" --backup-dir=/export/home2/tmp/hema/MEB_bkdir/ backup-to-image

    Once the backup is completed successfully, it should appear in the Activity Monitor in the NetBackup console. For restore, the image contents have to be extracted using the image-to-backup-dir command, and then the apply-log and copy-back steps are applied.

    ./mysqlbackup --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --backup-dir=/export/home2/tmp/hema/NBMEB/ --backup-image=sbt:bkpsbtNB image-to-backup-dir

    Now apply logs as usual, shut down the server and perform the restore, then restart the server and check the data contents.

    ./mysqlbackup --backup-dir=/export/home2/tmp/hema/NBMEB/ apply-log
    ./mysqlbackup --datadir=/export/home2/tmp/hema/mysql-server/mysql-5.5-meb-repo/mysql-test/var/mysqld.1/data/ --backup-dir=/export/home2/tmp/hema/MEB_bkpdir/ --innodb_log_files_in_group=2 --innodb_log_file_size=5M --user=root --port=13000 --protocol=tcp copy-back

    The NB console should show the 'Restore' job as done. If you don't see that, there is something wrong with MEB or NetBackup. You can also refer to more detailed steps of the MEB and NB integration in the whitepaper here.

    Read the article

  • SQL SERVER – Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic functions LEAD() and LAG(). These functions access data from a subsequent row (for LEAD) and a previous row (for LAG) in the same result set without the use of a self-join. It is difficult to explain this in words alone, so I will use a small example to explain these functions. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use that for experiments. Let us run the following query.

    USE AdventureWorks
    GO
    SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
        LEAD(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LeadValue,
        LAG(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID) LagValue
    FROM Sales.SalesOrderDetail s
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
    GO

    The above query gives us the following result. When we look at the result set it is very clear that the LEAD function gives us the value that comes in the next row and the LAG function gives us the value that was encountered in the previous row. If we had to generate the same result without using these functions, we would have to use a self-join; in a future blog post we will see the same. Let us explore these functions a bit more. They not only provide the previous or next row but can also access any row before or after the current one using an offset. Let us run the following query, where the LEAD and LAG functions access the row with an offset of 2.

    USE AdventureWorks
    GO
    SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
        LEAD(SalesOrderDetailID, 2) OVER (ORDER BY SalesOrderDetailID) LeadValue,
        LAG(SalesOrderDetailID, 2) OVER (ORDER BY SalesOrderDetailID) LagValue
    FROM Sales.SalesOrderDetail s
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
    GO

    The above query gives us the following result. You can see that the LEAD and LAG functions now use an interval of two rows when they return results. Because of that interval, the last two rows in the LEAD column and the first two rows in the LAG column will return a NULL value. You can easily replace this NULL value with any other default value by passing a third parameter to the LEAD and LAG functions. Let us run the following query.

    USE AdventureWorks
    GO
    SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
        LEAD(SalesOrderDetailID, 2, 0) OVER (ORDER BY SalesOrderDetailID) LeadValue,
        LAG(SalesOrderDetailID, 2, 0) OVER (ORDER BY SalesOrderDetailID) LagValue
    FROM Sales.SalesOrderDetail s
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
    GO

    The above query gives us the following result, where the NULLs are now replaced with the value 0. Just like any other analytic function, we can easily partition these functions as well. Let us see the use of PARTITION BY in this clause.

    USE AdventureWorks
    GO
    SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
        LEAD(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) LeadValue,
        LAG(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID) LagValue
    FROM Sales.SalesOrderDetail s
    WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
    ORDER BY s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty
    GO

    The above query gives us the following result, where the data is now partitioned by SalesOrderID and the LEAD and LAG functions return the appropriate result within that window. As there are now smaller partitions in the query, you will see a higher presence of NULLs.
    In a future blog post we will see how these functions compare to a SELF JOIN. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
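    As an aside that is not part of the original post, the offset semantics of LEAD and LAG can also be illustrated outside SQL. Below is a minimal Python sketch (the list of detail IDs is made up for illustration) that mirrors the three-argument form LEAD(column, offset, default) / LAG(column, offset, default) as plain offset indexing over an ordered list.

        # Sketch: lead/lag as offset indexing over an ordered list,
        # mirroring LEAD(col, offset, default) and LAG(col, offset, default).

        def lead(values, offset=1, default=None):
            # Value "offset" rows ahead of the current row, or default past the end.
            return [values[i + offset] if i + offset < len(values) else default
                    for i in range(len(values))]

        def lag(values, offset=1, default=None):
            # Value "offset" rows behind the current row, or default before the start.
            return [values[i - offset] if i - offset >= 0 else default
                    for i in range(len(values))]

        detail_ids = [1, 2, 3, 4, 5]      # stand-in for SalesOrderDetailID, already ordered
        print(lead(detail_ids))           # [2, 3, 4, 5, None]
        print(lag(detail_ids, 2, 0))      # [0, 0, 1, 2, 3]  (offset 2, default 0)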

    Read the article

  • MIXing it Up a Bit

    - by andrewbrust
    Another March, another MIX. For the fifth year running now, Microsoft has chosen to put on a conference aimed less at software development, per se, and more at the products, experiences and designs that software development can generate. In all four prior MIX events, the focus of the show, its keynotes and breakout sessions has been on Web products. On day 1 of MIX 2010 that focus shifted to Windows Phone 7 Series (WP7). What little we had seen of WP7 had been shown to us in a keynote presentation, given by Microsoft's Joe Belfiore, at the Mobile World Congress in Barcelona, Spain last month. And today, Mr. Belfiore reprised his showmanship for the MIX 2010 audience. Joe showed us the ins and outs of WP7 and, in a breakout session, even gave us a sneak peek of Office (specifically, Excel) on WP7. We didn't get to see that one month ago in Barcelona, nor did we get to see email messages opened for reading, which we saw today. But beyond a tour of the phone itself, impressive though that is, we got to see apps running on it. Those apps included Associated Press news, Seesmic (a major Twitter client) and Foursquare (a social media darling). All three ran, ran well, and looked markedly different from, and better than, their corresponding versions on iPhone and Android. And the games we saw looked even better. To me though, the best demos involved the creation of WP7 apps, using Silverlight in Visual Studio and Expression Blend. These demos were so effective because they showed important apps being built in very few steps, and by Microsoft executives to boot. Scott Guthrie showed us how to build a Twitter API app in Visual Studio. Jon Harris showed us how to build a photo management and viewer application in Expression Blend, using virtually no code. Demos of apps built from scratch to F5, without the benefit of a teacher, could be challenging. But they went off fine, without a hitch and without a ton of opaque, generated code. Everything written, be it C# or XAML, was easily understood, and the results were impressive. That means lots of developers can do this, and I think it means a lot will. What I've seen, thus far, of iPhone and Android development looks very tedious by comparison. Development for those platforms involves a collection of tools that integrate only to a point. Dev work for WP7 involves use of Visual Studio, Silverlight and the same debugging experience .NET developers already know. This was very exciting for me. All the demos harkened back to the days of building apps with Visual Basic: design the front end, put in the code-behind and then hit F5. And that makes sense, because the phone platform and the PC of the early 90s are both, essentially, client OS machines. The Web was minimal and the "device" was everything. The same is true of this phone. It's a client app contraption that fits in your pocket. And if the platforms are comparable, hopefully so too will be the draw of ease of development. WP7 has the potential to make mobile developers want to switch over, and to convince enterprise developers to get into the phone scene. Will this propel the new phone platform to new heights, and restore Microsoft's competitiveness in the mobile arena? I hope so. I think so. And if Microsoft uses developers to build themselves a victory, that would be beneficial and would show that Microsoft has learned from its failures, as well as its successes. Today I saw a few beautiful apps.
Tomorrow I hope I see a slew of others; maybe not as polished, but plentiful, attractive and stable.  That would be a victory for Microsoft, and for developers.  And it would show everyone else that developers are the kingmakers.  They need cheap, efficient dev tools and lots of respect.  Microsoft has always been the company to provide that.  Hopefully, with WP7, they will return to that persona and see how very timeless it is.

    Read the article

  • Stuck due to "knowing too much"

    - by Ran Biron
    Note more discussion at http://news.ycombinator.com/item?id=4037794 Welcome Hacker News Visitors! While HN is a fine forum for discussion and debate, Programmers - Stack Exchange is not. From the FAQ: If your motivation for asking the question is "I would like to participate in a discussion about ____", then you should not be asking here. However, if your motivation is "I would like others to explain ____ to me", then you are probably OK. (Discussions are of course welcome in our real time web chat.) Currently, this question is viewed by the membership of Programmers.SE as more likely to provoke unproductive discussion than constructive answers; while debates on its form and future are conducted, it will be locked to prevent arguments and vandalism. -- Shog9

    I have a relatively simple development task, but every time I try to attack it, I end up spiraling into deep thoughts: how it could be extended in the future, what the second-generation clients are going to need, how it affects "non-functional" aspects (e.g. performance, authorization...), how it would best be architected to allow change... I remember myself a while ago, younger and, perhaps, more eager. The "me" I was then wouldn't have given a thought to all that - he would've gone ahead and written something, then rewritten it, then rewritten it again (and again...). The "me" today is more hesitant, more careful. I find it much easier today to sit and plan and instruct other people on how to do things than to actually go ahead and do them myself - not because I don't like to code - the opposite, I love to! - but because every time I sit at the keyboard, I end up in that same annoying place. Is this wrong? Is this a natural evolution, or did I drive myself into a rut? Fair disclosure - in the past I was a developer, today my job title is a "system architect". Good luck figuring out what that means - but that's the title.

    Wow. I honestly didn't expect this question to generate that many responses. I'll try to sum it up.
    Reasons: Analysis paralysis / over-engineering / gold plating / (any other "too much thinking up-front can hurt you"). Too much experience for the given task. Not focusing on what's important. Not enough experience (and realizing that).
    Solutions (not matched to reasons): Testing first. Start coding (+ for fun). One to throw away (+ one API to throw away). Set time constraints. Strip away the fluff, stay with the stuff. Make flexible code (kinda the opposite of "one to throw away", no?).
    Thanks to everyone - I think the major benefit here was realizing that I'm not alone in this experience. I have, actually, already started coding, and some of the too-big things have fallen off naturally. Since this question is closed, I'll accept the answer with the most votes as of today. When/if it changes, I'll try to follow.

    Read the article

  • 9 Ways Facebook Monetization Could Change Your Marketing

    - by Mike Stiles
    Think Facebook monetization isn’t a head game? Imagine creating something so functional, fun and addictive you literally amass about 1/7th of the planet’s population as an audience. You have 1 billion users that use it at least once a month. But analysts and marketers look at what you’ve done and say, “eh…not good enough.” What if you had a TV show that garnered 1/7 of Earth’s population as an audience? How much would a spot cost? And how fast would marketers write that check, even without the targeting and engagement analytics Facebook offers? Having already changed the marketing landscape forever, if you’re Facebook’s creator, you’d have to be scratching your head and asking, “Wow, what more does a product need to do?” Facebook’s been busy answering that very question with products and betas that will likely directly affect your brand’s strategy. Item 1: Users can send physical gifts to friends through Facebook based on suggestions from user data. A giant step toward the potential power of social commerce. Item 2: Users can pay $7 to promote posts for higher visibility. Individual users, not just marketers, are being leveraged as a revenue stream. Not impressive enough? There’s also the potential Craigslist killer Facebook Marketplace. Item 3: Mobile ads. 600 million+ access Facebook on smartphones. According to the company, half of the $1 million a day generated by Sponsored Stories as of late June was coming from mobile. Ads in News Feeds seen on mobile had click-through rates 23x higher than on desktop News Feeds or the right side panel. Item 4: App developers can buy install ads that show up in mobile News Feeds so reliance on discovery in app stores is reduced. Item 5: Want your posts seen by people who never liked your Page? A test began in August where you could appear in non-fans’ News Feeds on both web and mobile. Item 6: How about an ability to use Facebook data to buy ads outside of Facebook? A mobile ad network is being tested to get your targeted messages on non-Facebook apps and sites surfaced on devices. Item 7: Facebook Collections, Facebook’s answer to Pinterest. Users can gather images of desired products and click through to the retailer to buy. Keep focusing on your imagery. Item 8: Facebook Offers, Facebook’s answer to the Groupons and Living Socials of the world. You can send deals to your fans’ News Feeds. Item 9: Facebook Exchange lets you track what fans do on Facebook and across the entire Web. Could lead to a Facebook ad network leveraging Facebook users and data but not limiting exposure to the Facebook platform. Marketers are seeing increasing value in Facebook (and Twitter for that matter).  But as social grows and adjusts, will marketing budgets aimed in that direction grow and adjust accordingly, and within a reasonable time frame? @mikestilesPhoto Christie Merrill/stock.xchng

    Read the article

  • Using the SOA-BPM VirtualBox Appliance

    - by antony.reynolds
    Quickstart Guide to Using the Oracle Appliance for SOA/BPM
    Recently I have been setting up some machines for fellow engineers. My base setup consists of Oracle Enterprise Linux with Oracle VirtualBox. Note that after installing VirtualBox I needed to add the VirtualBox Extension Pack to enable RDP access amongst other features. In order to get them started quickly with some images I downloaded the pre-built appliance for SOA/BPM from OTN. Out of the box this provides a VirtualBox image that is pre-installed with everything you will need to develop SOA/BPM applications. Specifically, by using the virtual appliance I got the following pre-installed and configured:
    Oracle Enterprise Linux 5: user oracle, password oracle; user root, password oracle.
    Oracle Database XE: pre-configured with the SOA/BPM repository, set to auto-start on OS startup.
    Oracle SOA Suite 11g PS2: configured with a "collapsed domain", all services (SOA/BAM/EM) running in the AdminServer, listening on port 7001.
    Oracle BPM Suite 11g: configured in the same domain as SOA Suite.
    Oracle JDeveloper 11g: with the SOA/BPM extensions.

    Networking
    The VM by default uses NAT (Network Address Translation) for network access. Make sure that the advanced settings for port forwarding allow access through the host to the guest ports. It should be pre-configured to forward requests on the following ports:
    SSH: host port 2222 to guest port 22
    HTTP: host port 7001 to guest port 7001
    Database: host port 1521 to guest port 1521
    Note that only one VirtualBox image can use a given host port, so make sure you are not clashing if it seems not to work.

    What's Left to Do?
    There is still some customization of the environment that may be required. If you need to configure a proxy server, as I did, then for the oracle and root users set up an HTTP proxy:
    Added "export http_proxy=http://proxy-host:proxy-port" to ~oracle/.bash_profile and ~root/.bash_profile
    Added "export http_proxy=http://proxy-host:proxy-port" to /etc/.bashrc
    Edited System->Preferences to set Network Proxy
    In Firefox set Preferences->Network->Connection Settings to "Use system proxy settings"
    In JDeveloper set Edit->Preferences->Web Browser and Proxy to the required proxy settings
    You may need to configure yum to point to a public OEL yum repository, such as http://public-yum.oracle.com. If you are going to be accessing the SOA server from outside the VirtualBox image then you may want to set the soa-infra Server URLs to the hostname of the host OS.

    Snap!
    Once I had the machine configured how I wanted to use it, I took a snapshot so that I can always get back to the pristine install I have now. Snapshots are one of the big benefits of putting a development environment into a virtualized environment: I can make changes to my installation and, if I mess it up, I can restore the image to the last known good snapshot.

    Hey Presto! Ready to Go
    This is the quickest way to get up and running with SOA/BPM Suite. Out of the box the download will work; I only did extra customization so I could use services outside the firewall and browse outside the firewall from within my SOA VirtualBox image. I also use yum to update the OS to the latest binaries. So have fun.

    Read the article

  • What packages do I need to compile .tex documents using XeLaTeX?

    - by maria
    Hi, I'm aware of the existence of similar threads on this forum, but none of the replies match my problem. I'm using Ubuntu 10.04 and I hadn't had problems with fonts till I decided to use XeLaTeX instead of LaTeX (cf http://tex.stackexchange.com/questions/12347/typesetting-a-document-using-arabic-script/12358#12358). The problem is that I'm not able to compile any .tex document using XeLaTeX, or to properly display the XeLaTeX documentation. As I've learned thanks to the mentioned thread, XeLaTeX uses the fonts available in the system in general. I was trying to read the fontspec documentation, but it opens as a PDF with a lot of white gaps, and the terminal output (quite long) consists mostly of errors. These are just a few lines of it:

    Error: Missing language pack for 'Adobe-Japan1' mapping Error: Unknown font tag 'F5.1' Error (24124): No font in show Error: Unknown font tag 'F5.1'

    I was trying to compile a simple XeLaTeX file:

    \documentclass{article}
    \usepackage{fontspec}
    \setmainfont{Linux Libertine O}
    \begin{document}
    Hello World!
    \end{document}

    without success. This is the terminal output of the compilation:

    This is XeTeX, Version 3.1415926-2.2-0.9995.2 (TeX Live 2009/Debian) restricted \write18 enabled. entering extended mode (./ex.tex LaTeX2e <2009/09/24> Babel <v3.8l> and hyphenation patterns for english, usenglishmax, dumylang, noh yphenation, polish, loaded. (/usr/share/texmf-texlive/tex/latex/base/article.cls Document Class: article 2007/10/19 v1.4h Standard LaTeX document class (/usr/share/texmf-texlive/tex/latex/base/size10.clo)) (/usr/share/texmf-texlive/tex/xelatex/fontspec/fontspec.sty (/usr/share/texmf-texlive/tex/generic/ifxetex/ifxetex.sty) (/usr/share/texmf-texlive/tex/latex/tools/calc.sty) (/usr/share/texmf-texlive/tex/latex/xkeyval/xkeyval.sty (/usr/share/texmf-texlive/tex/generic/xkeyval/xkeyval.tex (/usr/share/texmf-texlive/tex/generic/xkeyval/keyval.tex))) (/usr/share/texmf-texlive/tex/latex/base/fontenc.sty (/usr/share/texmf-texlive/tex/xelatex/euenc/eu1enc.def) (/usr/share/texmf-texlive/tex/xelatex/euenc/eu1lmr.fd)) fontspec.cfg loaded. (/usr/share/texmf-texlive/tex/xelatex/fontspec/fontspec.cfg))kpathsea: Invalid fontname `Linux Libertine O', contains ' ' ! Font \zf@basefont="Linux Libertine O" at 10.0pt not loadable: Metric (TFM) fi le or installed font not found. \zf@fontspec ...ntname \zf@suffix " at \f@size pt \unless \ifzf@icu \zf@set@... l.3 \setmainfont{Linux Libertine O} ?

    I can't find Linux Libertine O. Searching for otf- with aptitude gives this result:

    maria@maria-laptop:/etc/fonts$ aptitude search otf
    p emdebian-rootfs - emdebian root filesystem support
    p libotf-bin - A Library for handling OpenType Font - utilities
    p libotf-dev - A Library for handling OpenType Font - development
    i libotf0 - A Library for handling OpenType Font - runtime
    p libotf0-dbg - The libotf libraries and debugging symbols
    p libpam-dotfile - A PAM module which allows users to have more than one password
    p livecd-rootfs - construction script for the livecd rootfs
    p makebootfat - Utility to create a bootable FAT filesystem
    p otf-ipaexfont - Japanese OpenType font, IPAexFont (IPAexGothic/Mincho)
    p otf-ipaexfont-gothic - Japanese OpenType font, IPAexFont (IPAexGothic)
    p otf-ipaexfont-mincho - Japanese OpenType font, IPAexFont (IPAexMincho)
    p otf-ipafont - Japanese OpenType font set, IPAfont
    p otf-ipafont-gothic - Japanese OpenType font set, IPA Gothic font
    p otf-ipafont-mincho - Japanese OpenType font set, IPA Mincho font
    p otf-stix - the Scientific and Technical Information eXchange fonts
    p otf-thai-tlwg - Thai fonts in OpenType format
    p otf-yozvox-yozfont - Japanese proportional Handwriting OpenType font
    p otf2bdf - generate BDF bitmap fonts from OpenType outline fonts
    p robotfindskitten - Zen Simulation of robot finding kitten

    So the font in question is not just uninstalled but not available, if I'm not wrong. Does that mean that I lack some repositories? I was also trying to apply the solution from the thread How do I reinstall default fonts?, but the result is:

    maria@maria-laptop:~$ sudo apt-get install msttcorefonts
    [sudo] password for maria:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Note, selecting ttf-mscorefonts-installer instead of msttcorefonts
    ttf-mscorefonts-installer is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    maria@maria-laptop:~$

    It seems that this is not a usual problem for users of XeLaTeX; nobody in the mentioned thread suggested installation of anything other than TeX Live. Thanks in advance
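    One quick way to confirm whether a font is visible to the system at all (XeTeX looks fonts up by family name through fontconfig) is to ask fontconfig directly, e.g. with fc-list. The following is a small illustrative Python sketch of that check, not part of the original question; it assumes the fontconfig command-line utilities are installed.

        import subprocess

        def font_family_visible(family):
            # Ask fontconfig for all installed family names and look for a match.
            out = subprocess.check_output(["fc-list", ":", "family"]).decode("utf-8", "replace")
            return any(family.lower() in line.lower() for line in out.splitlines())

        # If this prints False, XeLaTeX cannot load the font by that name either,
        # and the font (or the package providing it) still needs to be installed.
        print(font_family_visible("Linux Libertine O"))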

    Read the article

  • initrd.lz is corrupted error occurred while installing 11.10

    - by zubendra
    C:\ubuntu\install\boot\initrd.lz is corrupted. Error pop-up comes up every time i am trying to install ubuntu-11.10-desktop-i386 using wubi. error comes when the installation process is almost completed. can anyone suggest a solution for this problem. Its occurring regularly. 03-19 18:01 DEBUG TaskList: ## Running copy_installation_files... 03-19 18:01 DEBUG WindowsBackend: Copying C:\DOCUME~1\HP_OWN~1.YOU\LOCALS~1\Temp\pyl59.tmp\data\custom-installation -> C:\ubuntu\install\custom-installation 03-19 18:01 DEBUG WindowsBackend: Copying C:\DOCUME~1\HP_OWN~1.YOU\LOCALS~1\Temp\pyl59.tmp\winboot -> C:\ubuntu\winboot 03-19 18:01 DEBUG WindowsBackend: Copying C:\DOCUME~1\HP_OWN~1.YOU\LOCALS~1\Temp\pyl59.tmp\data\images\Ubuntu.ico -> C:\ubuntu\Ubuntu.ico 03-19 18:01 DEBUG TaskList: ## Finished copy_installation_files 03-19 18:01 DEBUG TaskList: ## Running get_iso... 03-19 18:01 DEBUG CommonBackend: Trying to use pre-specified ISO X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 DEBUG TaskList: New task is_valid_iso 03-19 18:01 DEBUG TaskList: ### Running is_valid_iso... 03-19 18:01 DEBUG Distro: checking Ubuntu ISO X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 INFO Distro: Found a valid iso for Ubuntu: X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 DEBUG TaskList: ### Finished is_valid_iso 03-19 18:01 DEBUG TaskList: New task check_iso 03-19 18:01 DEBUG TaskList: ### Running check_iso... 03-19 18:01 DEBUG CommonBackend: Checking X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 DEBUG Distro: checking Ubuntu ISO X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 INFO Distro: Found a valid iso for Ubuntu: X:\ubuntu-11.10-desktop-i386.iso 03-19 18:01 DEBUG CommonBackend: Using distro Ubuntu i386 instead of Ubuntu amd64 03-19 18:01 DEBUG TaskList: New task get_metalink 03-19 18:01 DEBUG TaskList: #### Running get_metalink... 03-19 18:01 DEBUG downloader: downloading http://releases.ubuntu.com/11.10/ubuntu-11.10-desktop-i386.metalink > C:\ubuntu\install 03-19 18:01 ERROR CommonBackend: Cannot download metalink file http://releases.ubuntu.com/11.10/ubuntu-11.10-desktop-i386.metalink err=[Errno 4] IOError: <urlopen error (7, 'getaddrinfo failed')> 03-19 18:01 DEBUG downloader: downloading http://cdimage.ubuntu.com/daily-live/current/oneiric-desktop-i386.metalink > C:\ubuntu\install 03-19 18:01 ERROR CommonBackend: Cannot download metalink file2 http://cdimage.ubuntu.com/daily-live/current/oneiric-desktop-i386.metalink err=[Errno 4] IOError: <urlopen error (7, 'getaddrinfo failed')> 03-19 18:01 DEBUG TaskList: #### Finished get_metalink 03-19 18:01 ERROR CommonBackend: ERROR: the metalink file is not available, cannot check the md5 for X:\ubuntu-11.10-desktop-i386.iso, ignoring 03-19 18:01 DEBUG TaskList: ### Finished check_iso 03-19 18:01 DEBUG TaskList: New task copy_file 03-19 18:01 DEBUG CommonBackend: Copying X:\ubuntu-11.10-desktop-i386.iso > C:\ubuntu\install\installation.iso 03-19 18:01 DEBUG TaskList: ### Running copy_file... 03-19 18:01 DEBUG TaskList: ### Finished copy_file 03-19 18:01 DEBUG TaskList: ## Finished get_iso 03-19 18:01 DEBUG TaskList: ## Running extract_kernel... 
03-19 18:01 DEBUG CommonBackend: Extracting files from ISO C:\ubuntu\install\installation.iso 03-19 18:01 DEBUG WindowsBackend: extracting md5sum.txt from C:\ubuntu\install\installation.iso 03-19 18:01 DEBUG WindowsBackend: extracting casper\vmlinuz from C:\ubuntu\install\installation.iso 03-19 18:01 DEBUG WindowsBackend: extracting casper\initrd.lz from C:\ubuntu\install\installation.iso 03-19 18:01 DEBUG CommonBackend: Checking kernel, initrd and md5sums 03-19 18:01 DEBUG CommonBackend: checking C:\ubuntu\install\boot\vmlinuz 03-19 18:01 DEBUG CommonBackend: C:\ubuntu\install\boot\vmlinuz md5 = fde150f5c6fd2de66ed7876efbfcc4c7 == fde150f5c6fd2de66ed7876efbfcc4c7 03-19 18:01 DEBUG CommonBackend: checking C:\ubuntu\install\boot\initrd.lz 03-19 18:01 DEBUG CommonBackend: C:\ubuntu\install\boot\initrd.lz md5 = 8900200c764438c1b124dff5ae92c763 != d6baee1e11f1d6de6eba6bd43dbde352 03-19 18:01 ERROR TaskList: File C:\ubuntu\install\boot\initrd.lz is corrupted Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\backend.py", line 623, in extract_kernel Exception: File C:\ubuntu\install\boot\initrd.lz is corrupted 03-19 18:01 DEBUG TaskList: # Cancelling tasklist 03-19 18:01 ERROR root: File C:\ubuntu\install\boot\initrd.lz is corrupted Traceback (most recent call last): File "\lib\wubi\application.py", line 58, in run File "\lib\wubi\application.py", line 132, in select_task File "\lib\wubi\application.py", line 158, in run_installer File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\common\backend.py", line 623, in extract_kernel Exception: File C:\ubuntu\install\boot\initrd.lz is corrupted 03-19 18:01 DEBUG TaskList: # Finished tasklist
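    The log above shows Wubi computing the MD5 of the extracted casper\initrd.lz and finding that it does not match the value it expected, which usually means the ISO copy itself is damaged. As an illustration (not part of the original report), here is a minimal Python sketch of the same kind of check, useful for verifying a downloaded ISO or an extracted file against its published MD5 before retrying the installation; the path and expected hash below are taken from the log and are only examples.

        import hashlib

        def md5_of(path, chunk_size=1024 * 1024):
            # Hash the file in chunks so large ISOs do not need to fit in memory.
            digest = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        expected = "d6baee1e11f1d6de6eba6bd43dbde352"         # value Wubi expected, per the log
        actual = md5_of(r"C:\ubuntu\install\boot\initrd.lz")  # file the log flagged as corrupted
        print("OK" if actual == expected else "MISMATCH: %s != %s" % (actual, expected))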

    Read the article

  • Is this simple XOR encrypted communication absolutely secure?

    - by user3123061
    Say Alice has a 4GB USB flash drive and Peter also has a 4GB USB flash drive. They meet once and save on both drives two files named alice_to_peter.key (2GB) and peter_to_alice.key (2GB), which are randomly generated bits. Then they never meet again and communicate electronically. Alice also maintains a variable called alice_pointer and Peter maintains a variable called peter_pointer, both initially set to zero. When Alice needs to send a message to Peter, they do: encrypted_message_to_peter[n] = message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n] where n is the n-th byte of the message. Then alice_pointer is attached at the beginning of the encrypted message and (alice_pointer + encrypted message) is sent to Peter, and then alice_pointer is incremented by the length of the message (and, for maximum security, the used part of the key can be erased). Peter receives the encrypted message, reads alice_pointer stored at the beginning of the message and does this: message_to_peter[n] = encrypted_message_to_peter[n] XOR alice_to_peter.key[alice_pointer + n] And for maximum security, after reading the message he also erases the used part of the key. EDIT: In fact this step with this simple algorithm (without integrity check and authentication) decreases security; see Paulo Ebermann's post below. When Peter needs to send a message to Alice, they do the analogous steps with peter_to_alice.key and peter_pointer. With this trivial scheme they can send, every day for the next 50 years, 2GB / (50 * 365) = circa 115 kB of encrypted data in both directions. If they need to send more data, they simply use larger storage for the keys; for example, with today's 2TB hard discs (1TB keys) it is possible to exchange 60MB/day for the next 50 years! (That is practically a lot of data; for example, with compression it is more than an hour of high-quality voice communication.) It seems to me there is no way for an attacker to read the encrypted message without the keys, even with an infinitely fast computer, because even with an infinitely fast computer a brute-force search yields every possible message that fits the length of the message, and that is an astronomical number of messages; the attacker doesn't know which of them is the actual message. Am I right? Is this communication scheme really absolutely secure? And if it is secure, does this communication method have its own name? (I mean, XOR encryption is well known, but what is the name of this concrete practical application that uses large memories on both communicating sides for keys? I am humbly expecting that this application was invented by someone before me :-) ) Note: If it is absolutely secure then that is amazing, because with today's low-cost large memories it is a practically much cheaper way of secure communication than expensive quantum cryptography, with equivalent security! EDIT: I think it will become more and more practical in the future with ever lower cost of memory. It can solve secure communication forever. Today you have no certainty whether someone will successfully attack existing ciphers a year later and make their often expensive implementations insecure. In many cases, before communication there is a step where the communicating sides meet personally; that is the time to generate the large keys. I think it is perfect for military communication, for example communication with submarines, which can have a hard drive with large keys installed, while the military command center can have one hard drive for each submarine it owns. It can also be practical in everyday life, for example for controlling your bank account, because when you create your account you meet with the bank, etc.
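    To make the bookkeeping concrete, here is a minimal Python sketch of the scheme as described above (one instance per key file and direction, and deliberately without the integrity check and authentication that the EDIT above notes are missing). It is an illustration of the idea, not a vetted implementation.

        class OneTimePadChannel:
            def __init__(self, key_bytes):
                self.key = bytearray(key_bytes)   # e.g. the contents of alice_to_peter.key
                self.pointer = 0                  # alice_pointer / peter_pointer for this direction

            def encrypt(self, message):
                pad = self.key[self.pointer:self.pointer + len(message)]
                assert len(pad) == len(message), "key material exhausted"
                start = self.pointer
                self.pointer += len(message)      # never reuse key bytes
                return start, bytes(m ^ k for m, k in zip(message, pad))

            def decrypt(self, start, cipher):
                pad = self.key[start:start + len(cipher)]
                return bytes(c ^ k for c, k in zip(cipher, pad))

        shared_key = bytes(bytearray(range(256)) * 8)   # stand-in for the 2GB random key file
        alice = OneTimePadChannel(shared_key)           # Alice encrypts with alice_to_peter.key
        peter = OneTimePadChannel(shared_key)           # Peter holds his copy of the same file
        pointer, ciphertext = alice.encrypt(b"hello peter")
        print(peter.decrypt(pointer, ciphertext))       # b'hello peter'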

    Read the article

  • Tech Ed/BI Conference 2010: A Recovering Industry in a Recovering City

    - by andrewbrust
    I tried writing a post for this blog last night, while at the this year’s Microsoft Tech Ed and Business Intelligence conferences, in New Orleans. But I literally fell asleep while writing it.  That’s probably a sign that my readers might have done the same while reading it. Why the writer’s block? This was a very good show for me, but I think I was having trouble figuring out exactly why.  Now that I’m on the flight home, I’m starting to piece it together. One reason, for sure, was that I’ve spent years in both the developer and the BI worlds, and a show that combined the two was really enjoyable for me.  Typically, the subject matter, the attendees, the Microsoft execs and managers, and even the social circles have been separate.  This year’s Tech Ed facilitated a fusion of each of these previously segregated groups.  That was good for me as a speaker; for example, I facilitated a Birds of a Feather session on PowerPivot (Microsoft’s new self-service BI offering) which was well-attended, and by a large number of non-BI pros.  The fusion was good for me as an attendee too, as Microsoft BI, in the form of a new Pivot Viewer control, made it into the Day 1 keynote, demoed by Microsoft’s key BI champion, Amir Netz.  And it was good for me socially, as I was able to meet with peers in both camps, and at one location. Speaking of meeting with industry colleagues, I did a lot of that at this show.  Probably for the first time ever, I carefully scheduled and conducted a series of meetings with friends and business acquaintances in the developer tools, data visualization, utilities, publishing and training areas of the Microsoft ecosystem.  Beside the time efficiencies in conducting so many meetings, I discovered another benefit. I got a real handle on the tech industry’s economic health. The news here is good.  First of all, 2010 has been a great year for just about everyone I spoke to.  The mood is positive, energy is high, and people are working really hard.  This is, of course, refreshing to see, and it’s a huge relief.  Add to that the fact that this year’s Tech Ed was about 2.5 times larger in headcount than last year’s (based on numbers from unofficial, but reliable, sources), and the economic prognosis seems excellent.  But there’s more to it than that. Here’s the thing: everyone I talked to seems to be working, and succeeding, at changing their business models to adapt to changes in the industry.  Whether it’s the Internet’s impact on publishing and training, the increased importance of the developer audience in South Asia, the shift of affordable developer and business talent to unfamiliar locales abroad, or even lapses in Microsoft’s performance in the market, partner companies aren’t just rolling with the punches; they’re welcoming the changes and working them to their advantage.  No one seemed downtrodden, or even fatigued.  Even for businesses who have seen core revenue streams become commoditized, everyone seems to be changing their market strategy and winning.  Even Microsoft, of whom I have been critical recently, showed signs of successful hard work and playbook change, in the maturing of their cloud strategy, their commitment to it and their excitement around it.  And the embedded, managed, self-service BI strategy that Microsoft has been touting looks like it’s already being embraced by customers, even though PowerPivot, and other new Microsoft BI products, were released only recently. 
The collective optimism I have witnessed, and that I have felt, tells me good things about this industry and the economy.  The stock market had huge mood swings during my stay, and that may yet subdue the industry recovery I have seen this week.  Nonetheless, I am convinced that a strong foundation of hard work, innovative thinking and, if I may,  true renaissance is underlying this industry’s success. That kind of strength will generate a strong recovery, I am certain, whether now or once we’re past another round of choppy weather in the broader economy.  The fundamentals are good.

    Read the article

  • Change the Way Google Search Results Display in Firefox

    - by Asian Angel
    Are you tired of the default look for search results at Google? If you want a different, customized and pleasing look for them, then join us as we look at the GoogleMonkeyR User Script. Note: User Style Scripts & User Scripts can be added to most browsers, but we are using Firefox & the Greasemonkey extension for our example here.

    Before
    Here is the standard look for search results at Google…not bad, but it really does not stand out that well either.

    Installing the User Script
    You may be asking yourself what makes this particular user script different from others. Take a look at the list of goodies that you get access to and you will understand:

    - Multiple columns of results
    - Removes “Sponsored Links”
    - Adds numbers to the results
    - Auto-loads more results
    - Removes web search dialogues
    - Opens links in a new tab
    - Favicons
    - GooglePreview
    - Self updating
    - Can be configured from a simple user dialogue

    To get started, click on the Webpage Install Button. Once you click on the Webpage Install Button you will see the following window asking for confirmation to add the user script to Firefox. Click Install to complete the process.

    GoogleMonkeyR in Action
    Refreshing the same search page shown above shows a noticeable difference already. The light blue background makes the search results stand out a bit better. This is an improvement from before, but you will definitely want to have a look to see just how far you can go… Right click on the Greasemonkey Status Bar Icon, go to User Script Commands, and select GoogleMonkeyR Preferences. Once you have clicked on GoogleMonkeyR Preferences, the search page will be shaded out and you will have access to the user script’s preferences. This is where you can really make your search results unique looking! Here are the changes that we started out with… After refreshing our search results, things looked even better. A look at the entire page of results with our browser maximized and set for two columns. If you have the Auto-load more results option enabled, new results will be added very quickly as you scroll down. Our set of search results after adding Favicons & GooglePreview Images.

    Conclusion
    If you have been wanting a more dramatic and pleasing look for the search results at Google, then you cannot go wrong with the GoogleMonkeyR User Script. Change as little or as much as you want to get that perfect look in your browser.

    Links
    Install the GoogleMonkeyR User Script
    Download the Greasemonkey extension for Firefox (Mozilla Add-ons)

    Read the article

  • The Value of SOA Specialization - Fujitsu

    - by Jürgen Kress
    Thanks for the nice link!

    The Value of Specialization
    In my last post I talked about Fujitsu's achievement in obtaining SOA and other specializations, but I have heard murmurings from other partners about what exactly the value is. I think Oracle have to do more to advertise the benefits to customers; we need to see customers asking for specialization for it to really work, but Oracle have made great promises about only recommending those partners who are specialized. For us there was another benefit. Oracle was sponsoring the 3rd Annual SOA Symposium in Berlin and invited us as their first specialized partner to take part. There is a great blog about the symposium on the SOA Community blog site. This is real commitment from Oracle and we have other marketing opportunities being worked on with Jürgen. This does generate leads, so my message to other Oracle Partners is: you need to do this, it is worthwhile.

    Fujitsu - First SOA Specialized Partner Globally
    Just before Oracle Open World I found out that Fujitsu had achieved the first SOA Specialization globally. I think most partners know what the requirements are for Specialization, and that in itself is challenging, but the bureaucracy around the actual submission is an exercise in tenacity. I won't go into that now; I have had my dig at Oracle this month, but enough to say the process could be improved. As a Platinum partner we needed 5 specializations and we decided to go for SOA first. The reasoning behind this is that our Oracle Practice is known for being applications centric. We have always had an excellent technical capability but no one ever talked about that; it was just part and parcel of an implementation. However, today we have just as many bids that are technology led as are applications led, so it seemed a good plan to work on the areas we were not known for.

    We appointed a capability lead to be responsible for putting the team through the training and testing, and Rosemary (Kell) was excellent; she ensured that everyone was on track and that it wasn't just getting put into the 'to do list'. In Fujitsu everyone in the Oracle Practice has an objective to achieve the competency tests in their area, so achieving the 2 pre sales, 2 sales and 1 support was no problem at all. We actually had 22 with the support capability proficiency. The implementation specialist exams are much harder, more like OCP in the database area. We had help from the Oracle SOA Community; Jürgen Kress, who runs this in EMEA, is really motivational. At the time we started, SOA was a beta exam, which means you do not get the results immediately, but again we put forward more than we needed. Manjit Chopra, Sukhraj Sahota, Emely Patra, Ian Scorrer and Sunny Sidhu all took the exam and eventually got the results they wanted: they had passed. Congratulations. Here is Jürgen explaining why specialization is important.

    After the tests came the submissions, where you need to include deals and experience; this was my bit, along with persuading Oracle we really deserved the specialization. Finally we got the news we had been awarded the specialization, and a few days later that we were first globally. I am very proud. However there is no rest for the wicked and we plodded on to make the 5 specializations needed for Platinum, and now we are working on the new Diamond status; I think SOA will be one of our 5 'super specializations'. This is a global Fujitsu initiative and I work closely with my colleague in Germany, Jessika Weiss. 
    It was nice to be able to have a press release about this and a comment from Judson Althoff, head of Oracle Alliances. For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required). Technorati Tags: SOA,SOA Community,OPN,Oracle,Fujitsu,Debra Lilley,Jürgen Kress,Specialization,SOA Specialization

    Read the article

  • Creating a branch for every Sprint

    - by Martin Hinshelwood
    There are a lot of developers using version control these days, but a feature of version control called branching is very poorly understood and remains unused by most developers in favour of labels. Most developers think that branching is hard and complicated. It’s not! What is hard and complicated is a bad branching strategy. Just like a bad software architecture, a bad branch architecture, or one that is not adhered to, can prove fatal to a project. When I was at Aggreko we had a fairly successful Feature branching strategy (although the developers hated it) that meant that we could have multiple feature teams working at the same time without impacting each other. Now, this had to be carefully orchestrated, as it was a Business Intelligence team and many of the BI artefacts do not lend themselves to merging.

    Today at SSW I am working on a Scrum team delivering a product that will be used by many hundreds of developers. SSW SQL Deploy takes much of the pain out of upgrading production databases when you are not using the Database projects in Visual Studio. With Scrum, each Scrum Team works for a fixed period of time on a single Sprint. You can have one or more Scrum Teams involved in delivering a product, but all the work must be merged and tested, ready to be shown to the Product Owner at the Sprint Review meeting at the end of the current Sprint. So, what does this mean for a branching strategy? We have been using a “Main” (sometimes called “Trunk”) line and doing a branch for each Sprint. It’s like Feature Branching, but with only ONE feature in operation at any one time, so there are no conflicts.

    Figure: DEV folder containing the Development branches.

    I know that some folks advocate applying a Label at the start of each Sprint and then rolling back if you need to, but I have always preferred the security of a branch.

    Like:
    - Being able to create a release from Main that has Sprint3 code even while Sprint4 is being worked on.
    - Being sure I can always create a stable build on request.
    - Being able to guarantee a version (labels are not auditable).
    - Being able to abandon the Sprint without having to delete the code (rare I know, but it would be a mess if it happened).
    - Being able to see the flow of change sets through to a safe release.
    - It helps you find invalid dependencies when merging to Main, as there may be some file that is in everyone’s Sprint branch but never got checked in. (We had this at the merge of Sprint2.)
    - If you are always operating in this way as a standard, it makes it easier to add more Scrum teams in the future.
    - Muscle memory of this way of working.

    Don’t Like:
    - Additional DB space for the branches.
    - Baseless merging between Sprint branches when changes are directly ported. Note: I do not think we will ever attempt this!
    - Maybe a bit tougher to see the history between Sprint branches, since the changes go up through Main and down to another Sprint branch. Note: What you would have to do is see which Sprint the changes were made in and then check the history of the same file in that Sprint. A little bit of added complexity that you would have to do anyway with multiple teams.
    - Over time, you can end up with a lot of old unused Sprint branches. Perhaps destroy with /keephistory can help in this case. Note: We ALWAYS delete the Sprint branch after it has been merged into Main. That is the theory anyway, and as you can see from the images Sprint2 has already been deleted. 
Why take the chance of having a problem rolling back or wanting to keep some of the code, when you can just abandon a branch and start a new one? It just seems easier and less painful to use a branch to me! What do you think?   Technorati Tags: TFS,TFS2010,Software Development,ALM,Branching

    Read the article

  • Combine the Address & Search Bars in Firefox

    - by Asian Angel
    The Search Bar in Firefox is very useful for finding additional information or images while browsing, but the UI space it takes up can be frustrating at times. Now you can reclaim that UI space and still have access to all that searching goodness with the Foobar extension. Note: This is about the Foobar Firefox extension, not to be confused with Foobar2000, the audio player.

    Before
    If you have the “Search Bar” displayed, there is no doubt that it is taking up valuable space in your browser’s UI. What you need is the ability to reclaim that UI space and still have the same access to your search capability as before…no more sacrificing one to gain the other.

    After
    As soon as you have installed the extension you can see that the top part of your browser will look much sleeker without the “Search Bar” to clutter it up. The “Search Engine Icon” will now be visible inside of your “Address Bar” as seen here. You will be able to access the same “Search Engine Menu” as before by clicking on the “Search Engine Icon”. There are two display modes for search results (the setting is available in the “Options”). The first one shown here is “Simple Mode”, where all results are in a condensed format. Notice that not only are there search suggestions but also “Bookmarks & History” listings as well. You can literally get the best of both when conducting a search. Note: The number of entries for search suggestions and bookmark/history listings can be adjusted higher or lower in the “Options”. The second one is “Rich Mode”, where the results are shown with more details. Choose the mode that best suits your personal style. For our first example you can see the results when we conducted a quick search on “Windows 7” (using the first of the three offerings shown from Bing). Our second example was a search for “Flowers” using our Photobucket search engine. Once again nice results opened in a new tab for us.

    Options
    The options are easy to go through. It is really nice to be able to choose the number of results that you want displayed and the format that you want them shown in. Note: Changing the “Suggestion popup style” will require a browser restart to take effect.

    Conclusion
    If you love using the “Search Bar” in Firefox but want to reclaim the UI space, then you will definitely want to add this extension to your browser. The ability to customize the number of results and choose the formatting makes this extension even better.

    Links
    Download the Foobar extension (Mozilla Add-ons)

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Key-Value Pair Databases and Document Databases – Day 13 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the Relational Database and NoSQL database in the Big Data Story. In this article we will understand the role of Key-Value Pair Databases and Document Databases in supporting the Big Data Story. Now we will see a few examples of the operational databases:

    - Relational Databases (Yesterday’s post)
    - NoSQL Databases (Yesterday’s post)
    - Key-Value Pair Databases (This post)
    - Document Databases (This post)
    - Columnar Databases (Tomorrow’s post)
    - Graph Databases (Tomorrow’s post)
    - Spatial Databases (Tomorrow’s post)

    Key Value Pair Databases
    Key Value Pair Databases are also known as KVP databases. A key is a field name, an attribute, an identifier. The content of that field is its value, the data that is being identified and stored. They are a very simple implementation of NoSQL database concepts. They do not have a schema, hence they are very flexible as well as scalable. The disadvantage of Key Value Pair (KVP) databases is that they do not follow ACID (Atomicity, Consistency, Isolation, Durability) properties. Additionally, they require data architects to plan for data placement, replication as well as high availability. In KVP databases the data is stored as strings. Here is a simple example of what a Key Value database looks like:

    Key      Value
    Name     Pinal Dave
    Color    Blue
    Twitter  @pinaldave
    Name     Nupur Dave
    Movie    The Hero

    As the number of users grows in Key Value Pair databases, it starts getting difficult to manage the entire database. As there is no specific schema or rules associated with the database, there are chances that the database grows exponentially as well. It is very crucial to select the right Key Value Pair database which offers an additional set of tools to manage the data and provides finer control over various business aspects of the same.

    Riak
    Riak is one of the most popular Key Value databases. It is known for its scalability and performance in high volume and velocity databases. Additionally, it implements a mechanism for collecting keys and values which further helps to build a manageable system. We will further discuss Riak in future blog posts. Key Value databases are a good choice for social media, communities, and caching layers for connecting other databases. In simpler words, whenever we require flexibility of the data storage keeping scalability in mind – KVP databases are good options to consider.

    Document Database
    There are two different kinds of document databases: 1) full document content (web pages, Word docs etc.) and 2) storing document components for storage. The second type is the one we are talking about here. They use JavaScript Object Notation (JSON) and Binary JSON for the structure of the documents. JSON is a very easy language to understand and it is very easy to write for applications. There are two major structures of JSON used for document databases – 1) name value pairs and 2) ordered lists. MongoDB and CouchDB are two of the most popular open source non-relational document databases.

    MongoDB
    MongoDB databases are made up of collections. Each collection is built of documents and each document is composed of fields. MongoDB collections can be indexed for optimal performance. The MongoDB ecosystem is highly available, supports query services as well as MapReduce. It is often used in high volume content management systems.

    CouchDB
    CouchDB databases are composed of documents which consist of fields and attachments (known as descriptions). It supports ACID properties. 
    The main attraction of CouchDB is that it will continue to operate even when network connectivity is sketchy. Due to this nature, CouchDB prefers local data storage. A document database is a good choice when users have to generate dynamic reports from elements which are changing very frequently. A good example of document database usage is real-time analytics in social networking or content management systems.

    Tomorrow
    In tomorrow’s blog post we will discuss various other Operational Databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
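    For readers coming from the relational world, one way to picture the key-value example earlier in this post is as nothing more than a two-column table of strings. This is only an analogy (a real KVP store is schemaless and distributed, not a SQL table), and the sketch below is hypothetical, reusing the sample keys and values from the post:

        -- Analogy only: a key-value store seen through relational eyes.
        -- Everything is a string, and nothing beyond "key" and "value" is enforced.
        create table KeyValueStore (
            [Key]   varchar(200) not null,
            [Value] varchar(max) not null
        );

        insert into KeyValueStore ([Key], [Value]) values
            ('Name',    'Pinal Dave'),
            ('Color',   'Blue'),
            ('Twitter', '@pinaldave'),
            ('Name',    'Nupur Dave'),
            ('Movie',   'The Hero');

    Note that the table allows duplicate keys and imposes no wider schema, which is roughly the flexibility (and the management challenge) the post describes.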

    Read the article

  • Multidimensional Thinking–24 Hours of Pass: Celebrating Women in Technology

    - by smisner
    It’s Day 1 of #24HOP and it’s been great to participate in this event with so many women from all over the world in one long training-fest. The SQL community has been abuzz on Twitter with running commentary, which is fun to watch while listening to the current speaker. If you missed the fun today because you’re busy with all that work you’ve got to do – don’t despair. All sessions are recorded and will be available soon. Keep an eye on the 24 Hours of Pass page for details. And the fun’s not over today. Rather than run 24 hours consecutively, #24HOP is now broken down into 12 hours over two days, so check out the schedule to see if there’s a session that interests you and fits your schedule. I’m pleased to announce that my business colleague Erika Bakse (Blog | Twitter) will be presenting on Day 2 – her debut presentation for a PASS event. (And I’m also pleased to say she’s my daughter!)

    Multidimensional Thinking: The Presentation
    My contribution to this lineup of terrific speakers was Multidimensional Thinking. Here’s the abstract: “Whether you’re developing Analysis Services cubes or creating PowerPivot workbooks, you need to get into a multidimensional frame of mind to produce a model that best enables users to answer their business questions on their own. Many database professionals struggle initially with multidimensional models because the data modeling process is much different than the one they use to produce traditional, third normal form databases. In this session, I’ll introduce you to the terminology of multidimensional modeling and step through the process of translating business requirements into a viable model.” If you watched the presentation and want a copy of the slides, you can download a copy here. And you’re welcome to download the slides even if you didn’t watch the presentation, but they’ll make more sense if you did!

    Kimball All the Way
    There’s only so much I can cover in the time allotted, but I hope that I succeeded in my attempt to build a foundation that prepares you for starting out in business intelligence. One of my favorite resources that will get into much more detail about all kinds of scenarios (well beyond the basics!) is The Data Warehouse Toolkit (Second Edition) by Ralph Kimball. Anything from Kimball or the Kimball Group is worth reading. Kimball material might take reading and re-reading a few times before it makes sense. From my own experience, I found that I actually had to just build my first data warehouse using dimensional modeling on faith that I was going the right direction, because it just didn’t click with me initially. I’ve had years of practice since then and I can say it does get easier with practice. The most important thing, in my opinion, is that you simply must prototype a lot and solicit user feedback, because ultimately the model needs to make sense to them. They will definitely make sure you get it right!

    Schema Generation
    One question came up after the presentation about whether we use SQL Server Management Studio or Business Intelligence Development Studio (BIDS) to build the tables for the dimensional model. My answer? It really doesn’t matter how you create the tables. Use whatever method you’re comfortable with. But it just so happens that it IS possible to set up your design in BIDS as part of an Analysis Services project and to have BIDS generate the relational schema for you. I did a Webcast last year called Building a Data Mart with Integration Services that demonstrated how to do this. 
Yes, the subject was Integration Services, but as part of that presentation, I showed how to leverage Analysis Services to build the tables, and then I showed how to use Integration Services to load those tables. I blogged about this presentation in September 2010 and included downloads of the project that I used. In the blog post, I explained that I missed a step in the demonstration. Oops. Just as an FYI, there were two more Webcasts to finish the story begun with the data – Accelerating Answers with Analysis Services and Delivering Information with Reporting Services. If you want to just cut to the chase and learn how to use Analysis Services to build the tables, you can see the Using the Schema Generation Wizard topic in Books Online.
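    To put the dimensional-modeling terminology above into concrete terms, here is a minimal, hedged sketch of the kind of star schema a simple model might translate into. The table and column names are hypothetical and are not taken from the presentation or from the Schema Generation Wizard’s output:

        -- Hypothetical star schema: one fact table joined to surrounding dimension tables
        create table DimDate (
            DateKey       int  not null primary key,   -- surrogate key
            CalendarDate  date not null,
            CalendarYear  int  not null
        );

        create table DimProduct (
            ProductKey    int           not null primary key,   -- surrogate key
            ProductName   nvarchar(100) not null,
            Category      nvarchar(50)  not null
        );

        create table FactSales (
            DateKey       int           not null references DimDate (DateKey),
            ProductKey    int           not null references DimProduct (ProductKey),
            SalesAmount   decimal(18,2) not null,
            Quantity      int           not null
        );

    Users then answer questions by aggregating the fact table and grouping by attributes of the dimensions (by year, by category, and so on), which is the “multidimensional frame of mind” the abstract describes.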

    Read the article

  • Big Data – Basics of Big Data Analytics – Day 18 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the various components in the Big Data Story. In this article we will understand the various analytics tasks we try to achieve with Big Data and the list of the important tools in the Big Data Story. When you have plenty of data around you, what is the first thing which comes to your mind? “What does all this data mean?” Exactly – the same thought comes to my mind as well. I always wanted to know what all the data means and what meaningful information I can receive out of it. Most Big Data projects are built to retrieve the various intelligence all this data contains within it.

    Let us take the example of Facebook. When I look at my friends list on Facebook, I always want to ask many questions such as:

    - On which date do most of my friends have a birthday?
    - What is the favorite film of most of my friends, so I can talk about it and engage them?
    - What is the most liked place to travel among my friends?
    - Which is the most disliked cuisine for my friends in India and the USA, so when they travel, I do not take them there?

    There are many more questions I can think of. This illustrates how important it is to have analysis of Big Data. Here are a few of the kinds of analysis you can use with Big Data.

    Slicing and Dicing: This means breaking down your data into smaller sets and understanding them one set at a time. This also helps to present various information in a variety of user-digestible ways. For example, if you have data related to movies, you can slice and dice the data in various ways, such as by actor or movie length.

    Real Time Monitoring: This is very crucial in social media when there are events happening and you want to measure the impact while the event is happening. For example, if you are using Twitter when there is a football match, you can watch what fans are saying about the match on Twitter as the event is happening.

    Anomaly Prediction and Modeling: If the business is running normally all is well, but if there are signs of trouble, everyone wants to know about them early on. Big Data analysis of various patterns can be very helpful to predict the future. Though it may not always be accurate, certain hints and signals can be very helpful. For example, lots of data can help conclude that lots of rain can increase the sale of umbrellas.

    Text and Unstructured Data Analysis: Unstructured data is now becoming the norm in the new world and it is a big part of the Big Data revolution. It is very important that we Extract, Transform and Load the unstructured data and make meaningful data out of it. For example, from the analysis of lots of images, one can predict that people like to wear certain colors in certain months.

    Big Data Analytics Solutions
    There are many different Big Data Analytics solutions out in the market. It is impossible to list all of them, so I will list a few of them over here:

    - Tableau – This has to be one of the most popular visualization tools out in the big data market.
    - SAS – A high performance analytics and infrastructure company.
    - IBM and Oracle – They have a range of tools for Big Data analysis.

    Tomorrow
    In tomorrow’s blog post we will discuss a very important component of the Big Data Ecosystem – the Data Scientist.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
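    As a small, concrete illustration of the slicing-and-dicing idea described above, here is a hedged SQL sketch. The Movies table and its columns are hypothetical and are only meant to show the same data being broken down along two different dimensions:

        -- Slice the same hypothetical Movies data two different ways.

        -- By lead actor:
        select   LeadActor, count(*) as MovieCount
        from     Movies
        group by LeadActor;

        -- By movie length, bucketed into half-hour bands:
        select   (RunningMinutes / 30) * 30 as LengthBandStart, count(*) as MovieCount
        from     Movies
        group by (RunningMinutes / 30) * 30
        order by LengthBandStart;

    Each query is one slice of the same underlying data; Big Data tooling applies the same idea at a much larger scale and across many more dimensions at once.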

    Read the article

  • How do I install Revenge of the Titans?

    - by Akash
    I've downloaded the .deb file of Revenge of the Titans, and installed it using Ubuntu Software Center. Now, when I try to launch it using the software launcher nothing happens. Any ideas? The .deb file was downloaded from the Humble Indie Bundle. I am unable to launch it from the terminal (the command revenge-of-the-titans says command not found). I also tried the .tar.gz. When I extract it and run ./revenge.sh, nothing happens. No output on the terminal or anything at all. I have set chmod 777 revenge.sh as well. The command /opt/revengeofthetitans/revenge.sh does not give any output. If I run gedit /opt/revengeofthetitans/revenge.sh in the terminal:

    #!/bin/bash
    #
    # revenge.sh
    #
    ###############################################################################

    SCRIPT="`basename $0`"
    GAMEDIR="${HOME}/.revenge_of_the_titans_1.80"
    LOGFILE="${GAMEDIR}/${SCRIPT}.log"
    INSTDIR="`dirname $0`" ; cd "${INSTDIR}" ; INSTDIR="`pwd`"

    [[ ! -d "${GAMEDIR}" ]] && mkdir -m 0755 "${GAMEDIR}"

    JARPATH="patch.jar:RevengeOfTheTitans.jar:data-hib.jar:gfx.jar:fonts.jar:images.jar:music.jar:fx-mono.jar:fx-stereo.jar:gamecommerce.jar:common.jar:spgl-lite.jar:lwjgl.jar:lwjgl_util.jar:jorbis.jar:jinput.jar"

    # XMODIFIERS is cleared here to prevent SCIM screwing up keyboard input
    XMODIFIERS= java \
        -noverify \
        -Djava.library.path="${INSTDIR}" \
        -Dorg.lwjgl.util.NoChecks=true \
        -Dorg.lwjgl.librarypath="${INSTDIR}" \
        -Dnet.puppygames.applet.Launcher.resources=/resources-hib.dat \
        -Dnet.puppygames.applet.Game.gameResource=game.hib \
        -XX:MaxGCPauseMillis=3 \
        -Xms64m \
        -Xmx375m \
        -Xincgc \
        -cp "${JARPATH}" \
        net.puppygames.applet.Launcher \
        "$@" \
        >"${LOGFILE}" 2>&1

    exit 0

    #
    # EOF
    #
    ###############################################################################

    Read the article

  • Extract Audio from a Video File with Pazera Free Audio Extractor

    - by DigitalGeekery
    Have you ever wanted to extract some or all of the audio from a video file? Today we’ll take a look at Pazera Free Audio Extractor, a simple audio converter that specializes in that very task.

    Download the Pazera Free Audio Extractor (see download link below). You’ll need to unzip the download folder, but there is no need to install the application. Simply double-click the AudioExtractor.exe file to run the application. To add your video files to the queue to be converted, click on the Add files button at the top left. You can add multiple files to the queue and convert them all at one time. Browse for your video file, and click Open. Your video will be added to the Queue for processing. Under Output directory you can choose to output to a folder of your choice; outputting to the same folder as the input folder is the default.

    Pazera Free Audio Extractor includes pre-configured profiles that will simplify the process of choosing conversion settings. To load a profile, choose one from the Profile drop down list and then click the Load button. You can choose to output to MP3, AAC, AC3, WMA, FLAC, OGG or WAV file format. You will see the profile update the Audio settings in the panels at the lower left of the application. If you wish, you may also select your own custom settings.

    Advanced Settings
    The Advanced settings can be used if you want to extract only a portion of the audio, such as a clip of dialog or a song from a movie. To extract only a portion of the audio, set the start time by selecting the Start time offset check box, then entering the time in the video clip where the audio begins. To set the end time, begin by selecting the Duration check box. Now, you can either select the Duration radio button and enter the amount of time for which you would like to extract the audio, or you can select the End time offset radio button and enter the time in the video clip where the audio ends.

    When you are ready to convert, click the CONVERT button on the menu at the top of the screen. An output box will open and display the conversion progress. When finished, click Close. Now you are ready to enjoy your audio clip.

    Pazera Free Audio Extractor is a basic audio tool that is easy enough for everyone to use. It runs on Windows only and supports most common video formats including AVI, FLV, MP4, MPG, MOV, 3GP, and WMV.

    Download Free Audio Extractor 1.3

    Read the article

  • I love it when a plan comes together

    - by DavidWimbush
    I'm currently working on an application so that our Marketing department can produce most of their own mailing lists without my having to get involved. It was all going well until I got stuck on the bit where the actual SQL query is generated, but a rummage in Books Online revealed a very clean solution using some constructs that I had previously dismissed as pointless.

    Typically we want to email every customer who is in any of the following n groups. Experience shows that a group has the following definition:

    <people who have done A>
    [(AND <people who have done B>) | (OR <people who have done C>)]
    [APART FROM <people who have done D>]

    When doing these by hand I've been using INNER JOIN for the AND, UNION for the OR, and LEFT JOIN + WHERE D IS NULL for the APART FROM. This would produce two quite different queries:

    -- Old OR
    select  A.PersonID
    from  (
          -- A
          select  PersonID
          from    ...
          union   -- OR
          -- C
          select  PersonID
          from    ...
          ) AorC
          left join  -- APART FROM
          (
          select  PersonID
          from    ...
          ) D on D.PersonID = AorC.PersonID
    where  D.PersonID is null

    -- Old AND
    select  distinct A.PersonID
    from  (
          -- A
          select  PersonID
          from    ...
          ) A
          inner join  -- AND
          (
          -- B
          select  PersonID
          from    ...
          ) B on B.PersonID = A.PersonID
          left join  -- APART FROM
          (
          select  PersonID
          from    ...
          ) D on D.PersonID = A.PersonID
    where  D.PersonID is null

    But when I tried to write the code that can generate the SQL for any combination of those (along with all the variables controlling what each SELECT did and what was in all the optional bits of each WHERE clause) my brain started to hurt. Then I remembered reading about the (then new to me) keywords INTERSECT and EXCEPT. At the time I couldn't see what they added, but I thought I would have a play and see if they might help. They were perfect for this. Here's the new query structure:

    -- The way forward
    select  PersonID
    from  (
            (
              (
              -- A
              select  PersonID
              from    ...
              )
              union      -- OR
              intersect  -- AND
              (
              -- B/C
              select  PersonID
              from    ...
              )
            )
            except
            (
            -- D
            select  PersonID
            from    ...
            )
          ) x

    I can easily swap between UNION and INTERSECT, and omit B, C, or D as necessary. Elegant, clean and readable - pick any 3! Sometimes it really pays to read the manual.
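    To make the template concrete, here is a hedged sketch of what the generated SQL might look like for a list defined as "everyone who did A AND B, APART FROM everyone who did D". The table names and the CampaignID filter are hypothetical, invented purely for illustration; only the INTERSECT/EXCEPT structure comes from the post:

    -- Hypothetical instantiation: A AND B, APART FROM D
    select  PersonID
    from  (
            (
              (
              -- A: opened the spring newsletter
              select  PersonID
              from    EmailOpen
              where   CampaignID = 42
              )
              intersect  -- AND
              (
              -- B: clicked through to the site
              select  PersonID
              from    EmailClick
              where   CampaignID = 42
              )
            )
            except  -- APART FROM
            (
            -- D: has unsubscribed
            select  PersonID
            from    Unsubscribe
            )
          ) x

    For an OR group, the INTERSECT would simply be swapped for UNION, and the EXCEPT block dropped if there is no APART FROM clause, which is exactly the kind of mechanical substitution that makes this SQL easy to generate from code.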

    Read the article

< Previous Page | 668 669 670 671 672 673 674 675 676 677 678 679  | Next Page >