Search Results

Search found 621 results on 25 pages for 'optimizer transformations'.

Page 15/25 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • How to programmatically retarget animations from one skeleton to another?

    - by Fraser
    I'm trying to write code to transfer animations that were designed for one skeleton to look correct on another skeleton. The source animations consist only of rotations except for translations on the root (they're the mocap animations from the CMU motion capture database). Many 3D applications (eg Maya) have this facility built-in, but I'm trying to write a (very simple) version of it for my game. I've done some work on bone mapping, and because the skeletons are hierarchically similar (bipeds), I can do 1:1 bone mapping for everything but the spine (can work on that later). The problem, however, is that the base skeleton/bind poses are different, and the bones are different scales (shorter/longer), so if I just copy the rotation straight over it looks very strange: I've tried multiplying by the original bone's absolute rotation, then by the inverse of the target, and vice-versa... kind of a shot in the dark, and indeed it didn't work. (Tried relative transformations too)... I'm not sure where to go from here, so if anyone has any resources on stuff like this (papers, source code, etc), that would be really helpful. Thanks!

    Read the article

  • How to remove HDD Low virus

    - by samsudeen
    “HDD Low virus” is a new fake system-optimizer application that started affecting Windows (XP, Vista, Windows 7) computers worldwide on Monday. It gets installed without notice, bypassing antivirus software. Infected computers suddenly pop up a system error similar to the screenshot below and try to shut down the machine. Although the major antivirus vendors have not yet released an update for this virus, you can easily remove it using the steps below.
    Steps to remove the HDD Low virus:
    1. Press Ctrl+Alt+Delete, go to Task Manager -> Processes, and kill the process named [random number].exe (e.g. 123410.exe).
    2. Go to Run -> type msconfig to launch the System Configuration utility. On the Startup tab, uncheck all entries with random names (e.g. jygkgs.exe) and note the folder path of each entry in the Command column.
    3. Go to that folder path and delete all the randomly named .exe files manually (it is recommended to use the command prompt to delete them).
    4. Delete all the HDD Low files in the following paths: %Desktop%\HDD Low.lnk, %Programs%\HDD Low\Uninstall HDD Low.lnk, %Programs%\HDD Low\HDD Low.lnk
    5. Open the registry using Run -> regedit.exe, search for the following key and delete it: software\Microsoft\Windows\CurrentVersion\Run "[random number].exe"
    6. Restart the computer.
    Also update your antivirus definitions and run a full scan of your computer to remove any affected files. This article, titled "How to remove HDD Low virus", was originally published at Tech Dreams.

    Read the article

  • How does an optimizing compiler react to a program with nested loops?

    - by D.Singh
    Say you have a bunch of nested loops:
    public void testMethod() {
        for (int i = 0; i < 1203; i++) {
            // some computation
            for (int k = 2; k < 123; k++) {
                // some computation
                for (int j = 2; j < 12312; j++) {
                    // some computation
                    for (int l = 2; l < 123123; l++) {
                        // some computation
                        for (int p = 2; p < 12312; p++) {
                            // some computation
                        }
                    }
                }
            }
        }
    }
    When the above code reaches the stage where the compiler will try to optimize it (I believe it's when the intermediate language needs to be converted to machine code?), what will the compiler try to do? Is there any significant optimization that will take place? I understand that the optimizer will break up the loops by means of loop fission. But this is only per loop, isn't it? What I mean is: will it take any action based exclusively on seeing the nested loops, or will it just optimize the loops one by one? If the Java VM complicates the explanation then please just assume that it's C or C++ code.

    Read the article

  • Oracle is a Leader again in Gartner’s Magic Quadrant for E-commerce

    - by David Dorf
    Although e-commerce represents only 10% of the typical brick-and-mortar retailer’s sales, that percentage continues to climb.  So it’s no wonder that many retailers are considering the purchase of new e-commerce platforms to provide a commerce experience that keeps customers coming back.  And once again, Oracle and IBM lead the pack, identified as leaders in Gartner’s 2013 Magic Quadrant for E-Commerce along with hybris.  Many retailers are realizing the need to support Commerce Anywhere, allowing customers to interact with brands on their own terms.  Gartner reinforces this trend saying, “E-commerce is moving beyond just an online selling channel to integrated platforms delivering a unified customer experience. Traditionally, most organizations have been investing in the online channels with the objective of driving additional sales. However, customers increasingly are expecting a seamless buying experience across all channels, and e-commerce is a critical part of this evolution since it is a point where other channels are integrating to synchronize the customer experience across channels." Oracle saw this trend coming and acquired ATG, FatWire, and Endeca, all leaders in their respective markets, starting back in 2010.  The assets have been combined as Oracle Commerce and represent a comprehensive solution for retailers to sell via the Web while offering the best customer experience possible.  Retailers like JCPenney, American Apparel, and Kohl’s have recently licensed Oracle Commerce as part of their transformations. In the next two years we’ll begin to see more separation between the retailers that have a Commerce Anywhere strategy, and those that continue to flail with separate channels.  Integrating online and offline commerce, along with mobile and social aspects are becoming crucial to success in the industry.

    Read the article

  • Mark Hurd Believes HR is the Next Major Revenue Driver: Read His Latest LinkedIn Influencer Blog

    - by kristin.jellison
    “Most CEOs realize they need to make some dramatic changes in how they recruit people, align and manage performance, make compensation decisions, and optimize talent,” Oracle President Mark Hurd writes. The key issue, he explains, is that many CEOs aren’t equipping their HR teams with the tools and resources they need to unlock employees’ full value. This oversight is keeping HR organizations walled off from revenue generation and customer engagements—two chief sources of value for a company. So what is a CEO to do, given tightening budgets, a sluggish economy and a rapidly changing workforce? Hurd’s answer: invest in a modern Human Capital Management (HCM) system—one equipped with built-in intelligence and predictive analytics capabilities. To find out more about how to deliver effective HCM transformations, read Mark Hurd’s full article, “How CEOs Can Transform HR into a Revenue Driver” and visit the Oracle HCM Cloud Service site. We also encourage you to log into your LinkedIn account and “Follow” Mark to receive future posts. Share the link to his blog with your networks via Twitter, Facebook and other social media channels. You can also “Like” the post on Oracle’s LinkedIn and Facebook pages, and/or retweet via @Oracle.

    Read the article

  • Writing a Book, and Moving my Blog

    - by Ben Nevarez
    I started blogging about SQL Server here at SQLblog back in July 2009, and I enjoyed it a lot. Later, after a series of blog posts about the Query Optimizer, I was invited to write an entire book about that same topic. After a few months I realized that it was going to be hard to keep both blogging and writing chapters for a book, on top of my regular day job, so I decided to stop blogging for a little while. Now that I have finished the last chapter of the book and am working on the final chapter reviews, I have decided to start blogging again. This time I am moving my blog to http://www.benjaminnevarez.com. As with my previous posts, I plan to write about my topics of interest, like the relational engine, and basically anything related to SQL Server. Hopefully you find my new blog interesting and useful. Finally, I would like to thank Adam for allowing me to blog here.

    Read the article

  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
    Introduction
    The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising; it's an incredibly complicated beast and we're abstracted away from that complexity via some boxes that go yellow, red or green and that have some lines drawn between them. (Screenshot: example dataflow.) In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input, all produce the same output, but which all operate slightly differently by way of having different transformation components. I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum: that people assume adding components to a dataflow will be detrimental to overall performance. It's not surprising that people think this – it is intuitive to think that more components means more work – however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that, I have never proven that assertion and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I'll happily call that one out as a myth even without any investigation!
    The Setup
    I have a 2GB datafile which is a list of 4731904 (~4.7 million) customer records with various attributes against them, and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] and [BirthDate]. The data file is an SSIS raw format file, which I chose to use because it is the quickest way of getting data into a dataflow and, given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories, and in order to do it I shall be using different combinations of SSIS' Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource-intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad core (8 logical procs) Intel Core i7 at 1.73GHz and a Samsung SSD hard drive. It's running SQL Server 2008 R2 on Windows 7.
    The Variables
    Here are the three combinations of components that I am going to test:
    1. One Conditional Split - A single Conditional Split component, CSPL Split by Month of Birth and Income Category, that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. This next screenshot displays the expression logic in use.
    2. Derived Column & Conditional Split - A Derived Column component, DER Income Category, that adds a new column [IncomeCategory] which will contain one of two possible text values {"LessThan50000","GreaterThan50000"} and uses [YearlyIncome] to determine which value each row should get. A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. The next screenshots display the expression logic in use: DER Income Category, CSPL Split by Month of Birth and Income Category.
    3. Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each Income Category. Each of those outputs will go to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case, then, I am separating the single Conditional Split of #1 into three Conditional Split components. The next screenshots display the expression logic in use: CSPL Split by Income Category, CSPL Split by Month of Birth 1 & 2.
    Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. For illustration, here is a screenshot of the dataflow containing three Conditional Split components. As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes: (1) the dataflow containing just one Conditional Split (i.e. #1) will be quicker; (2) there is no significant difference between any of them; (3) one of the two dataflows containing multiple transformation components will be quicker. Regardless of which of those outcomes comes to pass we will have learnt something, and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS.
    The Results and Analysis
    The table below shows all of the executions, 10 for each dataflow. It also shows the average for each along with a standard deviation. All durations are in seconds. I'm pasting a screenshot because I frankly can't be bothered with the faffing about needed to make a presentable HTML table. It is plain to see from the average that the dataflow containing three Conditional Splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it, and I'll explain that below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller – indicating that performance for this dataflow can be predicted with much greater confidence too.
    The Explanation
    I refer you to the screenshot above that shows how CSPL Split by Month of Birth and Income Category in the first dataflow is set up. Observe that there is a case for each combination of Month of Birth and Income Category – 24 in total. These expressions get evaluated in the order that they appear, and hence if we assume that Month of Birth and Income Category are uniformly distributed in the dataset we can deduce that the expected number of expression evaluations for each row is 12.5, i.e. (1 (the minimum) + 24 (the maximum)) divided by 2 = 12.5. Now take a look at the screenshots for the second dataflow. We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before, only the expression differs slightly. In this case, then, we have 1 + 12.5 = 13.5 expected evaluations for each row – that would account for the slightly longer average execution time for this dataflow. Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5. 14.5 is still more than 12.5 & 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate – one for Income Category and one for Month of Birth; the expressions in the Conditional Split in the third dataflow, however, only have one predicate, thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between: MONTH(BirthDate) == 1 && YearlyIncome <= 50000 and MONTH(BirthDate) == 1. In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant amount of extra CPU cycles – that's where our duration difference comes from.
    The Wrap-up
    The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean using fewer components; indeed, sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results – let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember, execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out: Inequality joins, Asynchronous transformations and Lookups; Destination Adapter Comparison; Don't turn the dataflow into a cursor; SSIS Dataflow – Designing for performance (webinar). Any comments? Let me know! @Jamiet
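    As a footnote (not part of the original post): a rough way to sanity-check the 24-way categorisation outside SSIS is to run the equivalent split as a T-SQL aggregate against the source data. The sketch below assumes the customer records came from DimCustomer in [AdventureWorksDW2008]; adjust the table and column names to wherever your 2GB raw file was sourced from.
        -- Hypothetical cross-check: count rows per IncomeCategory / birth-month bucket
        SELECT CASE WHEN YearlyIncome <= 50000
                    THEN 'LessThan50000' ELSE 'GreaterThan50000' END AS IncomeCategory,
               MONTH(BirthDate) AS BirthMonth,
               COUNT(*)         AS RowsInBucket
        FROM   dbo.DimCustomer
        GROUP BY CASE WHEN YearlyIncome <= 50000
                      THEN 'LessThan50000' ELSE 'GreaterThan50000' END,
                 MONTH(BirthDate)
        ORDER BY IncomeCategory, BirthMonth;
    If the 24 bucket counts come out roughly even, the uniform-distribution assumption used in the expected-evaluations arithmetic above is reasonable.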

    Read the article

  • Random Monday Thoughts

    - by Terry Goldman
    On this Monday morning my thoughts center on why it is so hard to embrace governance - any form of governance for that matter, be it software development governance, SOA governance, data governance, IT governance, and so on and so forth. Most customers that I meet tend to think that they don't need governance, as all is good within the enterprise. The point I generally make to colleagues and customers is that you have to think of governance as an insurance policy. For instance, if you just bought a new car, perhaps your "dream" car, would you drive it on the open streets without having the car insured? Probably not. Governance is to SOA, IT transformations and software development what insurance is to a new car: an insurance policy against the risk of failure. Now once I put it in this context, ask yourself: does governance have value to your organization? Most people now get it. Once the seed of governance is planted at the executive level of an organization, it becomes an exercise in planting that idea in key personnel within the organization. Then the justification for governance grows and grows across the enterprise. That's my food for thought this Monday morning. FYI, stay tuned for an upcoming multi-part article on using Oracle Enterprise Repository to build an Enterprise Continuum as described in TOGAF v9.0.

    Read the article

  • CodePlex Daily Summary for Saturday, September 01, 2012

    CodePlex Daily Summary for Saturday, September 01, 2012Popular ReleasesDotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04 Don't forget to backup your installation before upgrade. Changes in 06.00.04 Fix: Sql Scripts for 6.003 missed object qualifiers within stored procedures Fix: added missing resource "cmdCancel.Text" in form.ascx.resx Changes in 06.00.03 Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 Change: Data is now stored in nvarchar(max) instead of ntext Changes in 06.00.02 The scripts are now compatible with SQL Azure, tested in a ne...EntLib.com????????: EntLib.com???????? v3.0: EntLib eCommerce Solution ???Microsoft .Net Framework?????????????????????。Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.NicAudio: NicAudio 2.0.6: ac3,dts Solved some initialization issues with no-linear decode.ExpressProfiler: Initial release of ExpressProfiler v1.2: This is initial release of ExpressProfilerMath.NET Numerics: Math.NET Numerics v2.2.1: Major linear algebra rework since v2.1, now available on Codeplex as well (previous versions were only available via NuGet). Since v2.2.0: Student-T density more robust for very large degrees of freedom Sparse Kronecker product much more efficient (now leverages sparsity) Direct access to raw matrix storage implementations for advanced extensibility Now also separate package for signed core library with a strong name (we dropped strong names in v2.2.0) Also available as NuGet packages...Microsoft SQL Server Product Samples: Database: AdventureWorks Databases – 2012, 2008R2 and 2008: About this release This release consolidates AdventureWorks databases for SQL Server 2012, 2008R2 and 2008 versions to one page. Each zip file contains an mdf database file and ldf log file. This should make it easier to find and download AdventureWorks databases since all OLTP versions are on one page. There are no database schema changes. For each release of the product, there is a light-weight and full version of the AdventureWorks sample database. The light-weight version is denoted by ...Christoc's DotNetNuke Module Development Template: DotNetNuke Project Templates V1.1 for VS2012: This release is specifically for Visual Studio 2012 Support, distributed through the Visual Studio Extensions gallery at http://visualstudiogallery.msdn.microsoft.com/ After you build in Release mode the installable packages (source/install) can be found in the INSTALL folder now, within your module's folder, not the packages folder anymore Check out the blog post for all of the details about this release. http://www.dotnetnuke.com/Resources/Blogs/EntryId/3471/New-Visual-Studio-2012-Projec...Home Access Plus+: v8.0: v8.0.0901.1830 RELEASE CHANGED TO BETA Any issues, please log them on http://www.edugeek.net/forums/home-access-plus/ This is full release, NO upgrade ZIP will be provided as most files require replacing. 
To upgrade from a previous version, delete everything but your AppData folder, extract all but the AppData folder and run your HAP+ install Documentation is supplied in the Web Zip The Quota Services require executing a script to register the service, this can be found in there install ...Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3406 (September 2012): New features: Extended ReflectionClass libxml error handling, constants DateTime::modify(), DateTime::getOffset() TreatWarningsAsErrors MSBuild option OnlyPrecompiledCode configuration option; allows to use only compiled code Fixes: ArgsAware exception fix accessing .NET properties bug fix ASP.NET session handler fix for OutOfProc mode DateTime methods (WordPress posting fix) Phalanger Tools for Visual Studio: Visual Studio 2010 & 2012 New debugger engine, PHP-like debugging ...NougakuDoCompanion: v1.1.0: Add temp folder of local resource, Resize local resource. Change launch ruby commnadline args from rack to bundle. 1.NougakuDoCompanion v1.1.0 cspkg.zip - cspkg and ServiceConfiguration.xml (small , medium, large, extra large vm) - include NougakudoSetupTool.exe and readme.txt 2.NougakuDoCompanion v1.1.0.zip - Source code. include NougakudoSetupTool.exe - include activerecord-sqlserver-adapter patch in paches folder. 3.Depends tools. - Windows Azure SDK for .NET June 2012(1.7SP1) - Windows ...WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.06: Whats New Added CKEditor 3.6.4 oEmbed Plugin can now handle short urls changes The Template File can now parsed from an xml file instead of js (More Info...) Style Sets can now parsed from an xml file instead of js (More Info...) Fixed Showing wrong Pages in Child Portal in the Link Dialog Fixed Urls in dnnpages Plugin Fixed Issue #6969 WordCount Plugin Fixed Issue #6973 File-Browser: Fixed Deleting of Files File-Browser: Improved loading time File-Browser: Improved the loa...MabiCommerce: MabiCommerce 1.0.1: What's NewSetup now creates shortcuts Fix spelling errors Minor enhancement to the Map window.ScintillaNET: ScintillaNET 2.5.2: This release has been built from the 2.5 branch. Version 2.5.2 is functionally identical to the 2.5.1 release but also includes the XML documentation comments file generated by Visual Studio. It is not 100% comprehensive but it will give you Visual Studio IntelliSense for a large part of the API. Just make sure the ScintillaNET.xml file is in the same folder as the ScintillaNET.dll reference you're using in your projects. (The XML file does not need to be distributed with your application)....Facebook Web Parts for SharePoint 2010: Version 1.0.1 - WSP: SharePoint 2010 solution (WSP) Resolved a bug from Version 1.0 - WSP where user profile names would not properly update.CUDAfy.NET: CUDAfy V1.10 BETA: This beta version of CUDAfy V1.10 requires CUDA 5.0 RC when using the Maths libraries. Add: Support for CUDA 5 RC (required if using Maths libraries). Fix: Lock method when multi-threading enabled could dead-lock. Add: Architecture sm_35. Add: Support for context switching. Fix: Translation of PI and E must be done using InvariantCulture. Add: tcc driver property (HighPerformanceDriver). Add: GetDevice always sets the current context to the device context that was got. 
Add: D...Contactor: GSMContactorProgram V1.0 - Source Code: This is the source code for the program, For Visual Studio 2012 RCTouchInjector: TouchInjector 1.1: Version 1.1: fixed a bug with the autorun optionWinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.0: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...BlackJumboDog: Ver5.7.1: 2012.08.25 Ver5.7.1 (1)?????·?????LING?????????????? (2)SMTP???(????)????、?????\?????????????????????New Projectsberry: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxBore Holes: A C program to read data of 11 bore holes which were drilled around a site in Sarawak, Malaysia. The data is displayed graphically as well as on a console.codeSHOW: This is a Windows 8 HTML/JS project with the express goal of showing simple how-to concepts for developing in Windows 8 using JavaScript (codefoster.com)crt-upgrade: ??????C???????????????。????????,????,?????、??、??、?、???、??????。 ???????C????????????,?????C????????????。?????????????????。DnevnikEnvironment?hange: bla bla blaDotNetNuke Translator: This is a Windows Application that can be used locally to translate resources (resx files) for a DotNetNuke installation.Get Connected with twitter & Pull data from user's twitter account: This source code will be useful for ASP.Net MVC C# developers. This contains Get Connected with Twitter & Pull user's information with his permission.GuideCP: ?????????? ??????? ??????????? ?? ??????? ?????? ??????, ?????? ??????, ????????? ? ???? ?? ???????????.Heart of Iron Smart Editor: This is A Heart Of Iron 3 SaveGame editorJSMerge: JSCombine is a command-line utility that is designed to help authors of Javascript libraries combine numerous *.js files into one, comprehensive file.Open Source Compiler, Optimizer and VM for a C-Like Language: Here, you can download an open-source compiler, optimizer and multi-core code generator for a C-like language and modify it in order to meet your requirements.OpenShip .NET - multi-carrier shipping system for Fedex, UPS and USPS: This is a proposed project for a multi-carrier shipping system to create shipments, get rates and track packages for Fedex, UPS and USPS.PogoPlug.NET: Low- and high-level class libraries encapsulating the PogoPlug API.Qi: Qi breathes life into your .NET projects by providing a collection of common helper methods and extensions so that you can get on with building your applicationSalesforce SSIS Transfer: This a project that leverages Salesforce.com's API (both Bulk and standard) to incrementally download a copy of your org's SF.com database.ServiceMon - Extensible Service Monitoring Utility: Standalone service monitoring tool which uses an extensible, scriptable plugin model to define monitoring actions with built-in support for HTTP GET Seven Up Seven Down: Seven Up Seven Down Game by Aditya Gupta Readme - How to play 1. Choose a bet amount 2. Select either 7up or 7down or 7 3. For 7up and 7down if you win yoSharePoint Import Data Timer: Custom timer job for Sharepoint 2010 which imports the results from SQL queries into Sharepoint lists.Smart Rabbit: M-Rabbit is Mihmojsos platform! 
Whit Smart Rabbit you can boot your Mihmojsos OS without restarting your computer!Sofire Suite: Sofire Suite ?????? 2009 ? 08 ??????????。????????????,???? V ??? Sofire2011(???????????????),???? Sofire.v1.5 ???。To be decided: summary testzwparking: zwparking

    Read the article

  • Tips on combining the right Art Assets with a 2D Skeleton and making it flexible

    - by DevilWithin
    I am on my first attempt to build a skeletal animation system for 2D side-scrollers, so I don't really have experience of what may appear in later stages. So I ask advice from anyone that's been there and done that! My approach: I built a tree structure; the root node is like the center of mass of the skeleton, allowing me to apply global transformations to the skeleton. Then I make the hierarchy of the bones, so when moving a leg, the foot also moves. (I also make a Joint, which connects two bones, for utility.) I load animations into it from a simple keyframe lerp, so it does smooth movement. I map the animation hierarchy to the skeleton, or a part of it, to see if the structure is alike; otherwise the animation doesn't start. I think this is pretty much a standard implementation for such a thing, even if I want to convert it to a ragdoll on the fly. Now to my question: imagine a game like Prototype - there is a skeleton animation of the main character, which can animate all meshes in the game that are rigged the same way. That way the character can transform into anything without any extra code. That is pretty much what I want to do for a side-scroller; in theory it sounds easy, but I want to know if it will work well - whether different characters will be decently animated when using the same skeleton-animation pair. I can see this working well with a stickman, but what about actual humans? Will the perspective look right, or will I need to dynamically change the sprites attached to bones? What do you recommend for such a system?

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 28 (sys.dm_db_stats_properties)

    - by Tamarick Hill
    The sys.dm_db_stats_properties Dynamic Management Function returns information about the statistics that are currently on your database objects. This function takes two parameters, an object_id and a stats_id. Let's have a look at the result set from this function against the AdventureWorks2012.Sales.SalesOrderHeader table. To obtain the object_id and stats_id I will use a CROSS APPLY with the sys.stats system table:
    SELECT sp.*
    FROM sys.stats s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
    WHERE sp.object_id = OBJECT_ID('Sales.SalesOrderHeader')
    The first two columns returned by this function are the object_id and the stats_id columns. The next column, 'last_updated', gives you the date and the time that a particular statistic was last updated. The next column, 'rows', gives you the total number of rows in the table as of the last statistics update date. The 'rows_sampled' column gives you the number of rows that were sampled to create the statistic. The 'steps' column represents the number of specific value ranges in the statistic histogram. The 'unfiltered_rows' column represents the number of rows before any filters are applied; if a particular statistic is not filtered, the 'unfiltered_rows' column will always equal the 'rows' column. Lastly we have the 'modification_counter' column, which represents the number of modifications to the leading column in a given statistic since the last time the statistic was updated. Probably the most important column from this Dynamic Management Function is the 'last_updated' column. You want to always ensure that you have accurate and updated statistics on your database objects. Accurate statistics are vital for the query optimizer to generate efficient and reliable query execution plans. Without accurate and updated statistics, the performance of your SQL Server would likely suffer. For more information about this Dynamic Management Function, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/jj553546.aspx Follow me on Twitter @PrimeTimeDBA
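    As an aside (not from the original post), here is a hedged sketch of how the 'last_updated' and 'modification_counter' columns might be combined to flag statistics that look stale. The 30-day and 1000-modification thresholds are arbitrary illustrations, not recommendations.
        -- Statistics that have not been updated in 30 days, or whose leading column
        -- has seen more than 1000 modifications since the last update
        SELECT OBJECT_NAME(s.object_id) AS table_name,
               s.name                   AS stats_name,
               sp.last_updated,
               sp.rows,
               sp.modification_counter
        FROM   sys.stats s
        CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
        WHERE  sp.last_updated < DATEADD(DAY, -30, SYSDATETIME())
           OR  sp.modification_counter > 1000
        ORDER BY sp.last_updated;

        -- If a table's statistics look stale, refresh them, e.g.:
        UPDATE STATISTICS Sales.SalesOrderHeader;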

    Read the article

  • Thank You MySQL Community! MySQL 5.6.9 Release Candidate Available Now!

    - by Rob Young
    The MySQL Community continues its good work in testing and refining MySQL 5.6, and as such the next iteration of the 5.6 Release Candidate is now available for download.  You can get MySQL 5.6.9 here (look under the "Development Releases" tab).  This version is the result of feedback we have gotten since MySQL 5.6.7 was announced at MySQL Connect in late September. As iron sharpens iron, Community feedback sharpens the quality and performance of MySQL so please download 5.6.9 and let us know how we can improve it as we move toward the production-ready product release in early 2013. MySQL 5.6 is designed to meet the agility demands of the next generation of web apps and services and includes across the board improvements to the Optimizer, InnoDB performance/scale and online DDL operations, self-healing Replication, Performance Schema Instrumentation, Security and developer enabling NoSQL functionality.  You can learn all the details and follow MySQL Engineering blogs on all of the key features in this MySQL DevZone article. On a related note, plan to join this week's live webinars to learn more about MySQL 5.6 Self-Healing Replication Clusters and Building the Next Generation of Web, Cloud, SaaS, Embedded Application and Services with MySQL 5.6.  Hurry!  Seating is limited!  As always, thanks for your continued support of MySQL!
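    As a side note (mine, not part of the announcement): one of the 5.6 feature areas called out above, online DDL, means schema changes on InnoDB tables no longer have to block reads and writes. A minimal, hypothetical example (table and column names are made up):
        -- MySQL 5.6: add an index while the table stays readable and writable
        ALTER TABLE orders
          ADD INDEX idx_customer_id (customer_id),
          ALGORITHM=INPLACE, LOCK=NONE;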

    Read the article

  • Why do you hate Java? Is it the language or the framework? [closed]

    - by zneak
    According to you all, Java is the third most-hated language here. The two other most-hated languages are PHP and VBScript. (It's quite funny how they stand together on the podium.) I'd like to make it known that the question mostly addresses people who don't like Java. I assume here a number of subjective opinions as facts because they're usually considered true among people who don't like Java, and I don't want to be convinced otherwise here. If you're a Java enthusiast, you might find this question frustrating. It's never been made clear if people hate Java itself, or if they hate it because of the framework, or if it's a mixture of the two. On one side you have the language, where you have: the "everything should be an object" philosophy, even in instances where it should obviously be something else (event handlers, I'm pointing at you); checked exceptions; the idea that all logic should be presented as methods and properties is a big no-no; the fact that "closures" created by anonymous types only include final variables and arguments, but will allow write access to any member of the parent class; a few more. On the other side, you have the JDK, with... its load of inconsistencies and overengineering; monolithic class hierarchies; meaningless base exceptions like IOException (though other frameworks have similar exception hierarchies); sluggish responsiveness even with Swing; a few more. My question is, do you think that, if either one (Java or the JDK) was taken alone, and the other was dropped in favor of something else, the new combination would be better? For instance, if you could use the C# syntax with the JDK (adapting get*/set* methods into properties, and interfaces with only one method into delegates), or the Java syntax with the .NET Framework (doing the inverse transformations), would things get better in your opinion?

    Read the article

  • JRockit R28 "Ropsten" released

    - by tomas.nilsson
    R28 is a major release (as indicated by the careless omission of the "minor" and "revision" numbers; the formal name would be R28.0.0). Our customers expect grand new features and innovation from major releases, and "Ropsten" will not disappoint. One of the biggest challenges for IT systems is after-the-fact diagnostics. That is, once something has gone wrong, the act of trying to figure out why it went wrong. Monitoring a system and keeping track of system health once it is running is considered a hard problem (one that we to some extent help our customers solve already with JRockit Mission Control), but doing it after something has occurred is close to impossible. The most common solution is to set up heavy logging (sacrificing system performance to do the logging) and hope that the problem occurs again. No one really thinks that this is a good solution, but it's the best there is. Until now. Inspired by the "black box" in airplanes, JRockit R28 introduces the Flight Recorder. Flight Recorder can be seen as an extremely detailed log, but one that is always on and that comes without a cost to system performance. With JRockit Flight Recorder the customer will be able to get diagnostics information about what happened before a problem occurred, instead of trying to guess by looking at the fallout. Keywords that are important to the customer are:
    • Extremely detailed, always-on diagnostics information
    • No performance overhead
    • Powerful tooling to visualize the data recorded
    • Enables diagnostics of bugs and SLA breaches after the fact
    For followers of JRockit, other additions are:
    • New JMX agent that allows JRMC to be used through firewalls more easily
    • Option to generate HPROF dumps, compatible with tools like Eclipse MAT
    • Up to 64 GB compressed references (previously 4)
    • View memory allocation on a thread level (as an MBean and in Mission Control)
    • Native memory tracking (command line and MBean)
    • More robust optimizer
    • Dropping support for Java 1.4.2 and Itanium
    If you have any further questions, please email [email protected]. The release can be downloaded from http://www.oracle.com/technology/software/products/jrockit/index.html

    Read the article

  • SQL Contest – Result of Cartoon Contest

    - by pinaldave
    Earlier we ran an excellent contest with the help of Embarcadero Technologies. We had two different contests on the same day, sponsored by the kind folks at Embarcadero. Here are the details of the winners. 1) Win USD 25 Amazon Gift Cards (10 Units) We had announced that we would award USD 25 Amazon Gift Cards to 10 lucky winners who downloaded DB Optimizer between Nov 29 and Dec 8. Here are the names of the winners. Winners will receive their USD 25 Amazon Gift Cards at their registered email address within the next 5 days of this blog post. If you do not receive the card, do send me an email (Pinal at sqlauthority.com) and I will follow up on the details. Names of the winners: Ramdas Narayanan, Krishna Uppuluri, Donna Kray, Santosh Gupta, Robert Small, Samit Bhatt, Bernd Baumanns, Rodrigo Oriola, Jim Woodin, Alfred Sandou. 2) Win Star Wars R2-D2 Inflatable R/C We had a cartoon contest. If you have not read the cartoon, I suggest you go over the cartoon story one more time. The task was to give the correct answer along with some interesting note. We selected a few good quotes and put them together, and later picked the winner using a random algorithm. The winner gets a fantastic Star Wars R2-D2 Inflatable R/C. Name of the winner: Aadhar Joshi. He wins the R2-D2. You can read his comment over here. Thank you all for participating in the contest - this was fun. If you liked it, do let me know and we will come up with something new for you next time. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Why Oracle Data Integrator for Big Data?

    - by Mala Narasimharajan
    Big Data is everywhere these days - but what exactly is it? It’s data that comes from a multitude of sources – not only structured data, but unstructured data as well.  The sheer volume of data is mindboggling – here are a few examples of big data: climate information collected from sensors, social media information, digital pictures, log files, online video files, medical records or online transaction records.  These are just a few examples of what constitutes big data.   Embedded in big data is tremendous value and being able to manipulate, load, transform and analyze big data is key to enhancing productivity and competitiveness.  The value of big data lies in its propensity for greater in-depth analysis and data segmentation -- in turn giving companies detailed information on product performance, customer preferences and inventory.  Furthermore, by being able to store and create more data in digital form, “big data can unlock significant value by making information transparent and usable at much higher frequency." (McKinsey Global Institute, May 2011) Oracle's flagship product for bulk data movement and transformation, Oracle Data Integrator, is a critical component of Oracle’s Big Data strategy. ODI provides automation, bulk loading, and validation and transformation capabilities for Big Data while minimizing the complexities of using Hadoop.  Specifically, the advantages of ODI in a Big Data scenario are due to pre-built Knowledge Modules that drive processing in Hadoop. This leverages the graphical UI to load and unload data from Hadoop, perform data validations and create mapping expressions for transformations.  The Knowledge Modules provide a key jump-start and eliminate a significant amount of Hadoop development.  Using Oracle Data Integrator together with Oracle Big Data Connectors, you can simplify the complexities of mapping, accessing, and loading big data (via NoSQL or HDFS) but also correlating your enterprise data – this correlation may require integrating across heterogeneous and standards-based environments, connecting to Oracle Exadata, or sourcing via a big data platform such as Oracle Big Data Appliance. To learn more about Oracle Data Integration and Big Data, download our resource kit to see the latest in whitepapers, webinars, downloads, and more… or go to our website on www.oracle.com/bigdata

    Read the article

  • XmlException - inserting attribute gives "unexpected token" exception

    - by Anders Svensson
    Hi, I have an XmlDocument object in C# that I transform, using XslTransform, to html. In the stylesheet I insert an id attribute in a span tag, taking the id from an element in the XmlDocument. Here is the template for the element: <xsl:template match="word"> <span> <xsl:attribute name="id"><xsl:value-of select="@id"></xsl:value-of></xsl:attribute> <xsl:apply-templates/> </span> </xsl:template> But then I want to process the result document as an xhtml document (using the XmlDocument dom). So I'm taking a selected element in the html, creating a range out of it, and try to load the element using XmlLoad(): wordElem.LoadXml(range.htmlText); But this gives me the following exception: "'598' is an unexpected token. The expected token is '"' or '''. Line 1, position 10." And if I move the cursor over the range.htmlText, I see the tags for the element, and the "id" shows without quotes, which confuses me (i.e.SPAN id=598 instead of SPAN id="598"). To confuse the matter further, if I insert a blank space or something like that in the value of the id in the stylesheet, it works fine, i.e.: <span> <xsl:attribute name="id"><xsl:text> </xsl:text> <xsl:value-of select="@id"></xsl:value-of></xsl:attribute> <xsl:apply-templates/> </span> (Notice the whitespace in the xsl:text element). Now if I move the cursor over the range.htmlText, I see an id with quotes as usual in attributes (and as it shows if I open the html file in notepad or something). What is going on here? Why can't I insert an attribute this way and have a result that is acceptable as xhtml for XmlDocument to read? I feel I am missing something fundamental, but all this surprises me, since I do this sort of transformations using xsl:attribute to insert attributes all the time for other types of xsl transformations. Why doesn't XmlDocument accept this value? By the way, it doesn't matter if it is an id attribute. i have tried with the "class" attribute, "style" etc, and also using literal values such as "style" and setting the value to "color:red" and so on. The compiler always complains it is an unvalid token, and does not include quotes for the value unless there is a whitespace or something else in there (linebreaks etc.). I hope I have provided enough information. Any help will be greatly appreciated. Basically, what I want to accomplish is set an id in a span element in html, select a word in a webbrowser control with this document loaded, and get the id attribute out of the selected element. I've accomplished everything, and can actually do what I want, but only if I use regex e.g. to get the attribute value, and I want to be able to use XmlDocument instead to simply get the value out of the attribute directly. I'm sure I'm missing something simple, but if so please tell me. Regards, Anders

    Read the article

  • xsl:include template with no default namespace causes xmlns=""

    - by CraftyFella
    Hi, I've got a problem with xsl:include and default namespaces which is causing the final xml document contain nodes with the xmlns="" In this synario I have 1 source document which is Plain Old XML and doesn't have a namespace: <?xml version="1.0" encoding="UTF-8"?> <SourceDoc> <Description>Hello I'm the source description</Description> <Description>Hello I'm the source description 2</Description> <Description/> <Title>Hello I'm the title</Title> </SourceDoc> This document is transformed into 2 different xml documents each with their own default namespace. First Document: <?xml version="1.0" encoding="utf-8"?> <OutputDocType1 xmlns="http://MadeupNS1"> <Description >Hello I'm the source description</Description> <Description>Hello I'm the source description 2</Description> <Title>Hello I'm the title</Title> </OutputDocType1> Second Document: <?xml version="1.0" encoding="utf-8"?> <OutputDocType2 xmlns="http://MadeupNS2"> <Description>Hello I'm the source description</Description> <Description>Hello I'm the source description 2</Description> <DocTitle>Hello I'm the title</DocTitle> </OutputDocType2> I want to be able to re-use the template for descriptions in both of the transforms. As it's the same logic for both types of document. To do this I created a template file which was *xsl:include*d in the other 2 transformations: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output indent="yes" method="xml"/> <xsl:template match="Description[. != '']"> <Description> <xsl:value-of select="."/> </Description> </xsl:template> </xsl:stylesheet> Now the problem here is that this shared transformation can't have a default Namespace as it will be different depending on which of the calling transformations calls it. E.g. for First Document Transformation: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output indent="yes" method="xml"/> <xsl:template match="SourceDoc"> <OutputDocType1 xmlns="http://MadeupNS1"> <xsl:apply-templates select="Description"/> <xsl:if test="Title"> <Title> <xsl:value-of select="Title"/> </Title> </xsl:if> </OutputDocType1> </xsl:template> <xsl:include href="Template.xsl"/> </xsl:stylesheet> This actually outputs it as follows: <?xml version="1.0" encoding="utf-8"?> <OutputDocType1 xmlns="http://MadeupNS1"> <Description xmlns="">Hello I'm the source description</Description> <Description xmlns="">Hello I'm the source description 2</Description> <Title>Hello I'm the title</Title> </OutputDocType1> Here is the problem. On the description Lines I get an xmlns="" Does anyone know how to solve this issue? Thanks Dave

    Read the article

  • Oracle Warehouse Builder and Enterprise ETL

    - by Fekete Zoltán
    The datasheet is hot off the press! Enjoy: the ODI Enterprise Edition: Warehouse Builder Enterprise ETL white paper. The good news: the basic (core) functionality of Oracle Warehouse Builder can be used free of charge with every purchased Oracle Database. So what exactly is the OWB core functionality, and what can we use in the options? The Enterprise ETL functionality is available for OWB as part of the Oracle Data Integrator Enterprise Edition license. The features that are only available with the ODI EE license (the former OWB Enterprise ETL option is also part of it) can also be seen here at the bottom of the text. They are:
    - Transportable ETL modules, multiple configurations, and pluggable mappings
    - Operators for pluggable mapping, pluggable mapping input signature, pluggable mapping output signature
    - Design Environment Support for RAC
    - Metadata change propagation
    - Schedulable Mappings and Process Flows
    - Slowly Changing Dimensions (SCD) Type 2 and 3
    - XML Files as a target
    - Target load ordering
    - Seeded spatial and streams transformations
    - Process Flow Activity templates
    - Process Flow variables support
    - Process Flow looping activities such as For Loop and While Loop
    - Process Flow Route and Notification activities
    - Metadata lineage and impact analysis
    - Metadata Extensibility
    - Deployment to Discoverer EUL
    - Deployment to Oracle BI Beans catalog
    So if you want to use OWB in a more serious environment, deploy to multiple environments, and so on, you also need the ODI EE license. ODI Enterprise Edition: Warehouse Builder Enterprise ETL white paper.

    Read the article

  • Minimize useless tweaking of a numeric app

    - by Potatoswatter
    I'm developing a numeric application (nonlinear optimizer), with a zillion knobs to tweak and rising. It's not my first foray into this domain, but this time there are even more variables in the code and I'm on a tight schedule. Don't want to waste time fiddling. Days or even months can potentially be wasted adjusting variables, recompiling, and reprocessing benchmark datasets. The resulting data is viewed and trouble spots are checked. The overall quality of the solution is reported by the program but the meaning of the report could change over time. (Numeric units for the report are one thing I'm trying to nail down.) One main problem is organizing result files to identify each with specific code changes. Note taking can be a pain, is there software to help with this? Are there agreed best practices to making this kind of development cycle reliably move forward? The solver package converges to its optimal solution with mechanical determination, but I'm all too familiar with the way an excess of design decisions can mire development.

    Read the article

  • Can't use the hardware scissor any more, should I use the stencil buffer or manually clip sprites?

    - by Alex Ames
    I wrote a simple UI system for my game. There is a clip flag on my widgets that you can use to tell a widget to clip any children that try to draw outside their parent's box (for scrollboxes for example). The clip flag uses glScissor, which is fed an axis aligned rectangle. I just added arbitrary rotation and transformations to my widgets, so I can rotate or scale them however I want. Unfortunately, this breaks the scissor that I was using as now my clip rectangle might not be axis aligned. There are two ways I can think of to fix this: either by using the stencil buffer to define the drawable area, or by having a wrapper function around my sprite drawing function that will adjust the vertices and texture coords of the sprites being drawn based on the clipper on the top of a clipper stack. Of course, there may also be other options I can't think of (something fancy with shaders possibly?). I'm not sure which way to go at the moment. Changing the implementation of my scissor functions to use the stencil buffer probably requires the smallest change, but I'm not sure how much overhead that has compared to the coordinate adjusting or if the performance difference is even worth considering.

    Read the article

  • Oracle Enterprise Manager Extensibility News - June 2014

    - by Joe Diemer
    Introducing Extensibility Exchange Version 2
    On the heels of Enterprise Manager 12c Release 4 this week comes version 2.0 of the Extensibility Exchange. A new theme allows optimal viewing on a number of different computing devices, from large monitor displays to tablets to smartphones. One of the first things you'll notice is a scrollable banner with the latest news related to Enterprise Manager and extensibility. Along with the "slider" and the latest entries from Oracle and the Partner community, new features like a tag cloud and an auto-complete search box provide a better way to find the plug-in, connector or other Enterprise Manager entity you are looking for. Once you find it, a content details page with specific info related to that particular entity will enable you to access it at the provider's site and also rate and comment on that particular item. You can also send an email from the content details page, which is routed to the developer. And if you want to use version 1 of the Extensibility Exchange instead, you will be able to do so via the "Classic" option. Check it out today at http://www.oracle.com/goto/emextensibility.
    Recent Additions from Oracle's Partner Community
    A number of important 3rd-party plug-ins have been contributed by Oracle's partner community, which can be accessed via the Extensibility Exchange or by clicking the links in this blog: Dell Open Manage, Fusion I-O ION Accelerator, NetApp SANtricity E-Series, and PostgreSQL by Blue Medora. You can also check out the following best practices and labs available via the Exchange: Riverbed Stingray Traffic Manager Reference Architecture, Datavail Alert Optimizer Custom Templates, Apps Associates' Oracle Enterprise Manager "Test Drives" for Oracle Database 12c Management, Oracle Enterprise Manager Monitoring Essentials, and Oracle Application Management Suite for Oracle E-Business Suite.

    Read the article

  • How do I capture a 10053 trace for a SQL statement called in a PL/SQL package?

    - by Maria Colgan
    Traditionally, if you wanted to capture an Optimizer trace (10053) for a SQL statement you would issue an alter session command to switch on a 10053 trace for that entire session, and then issue the SQL statement you wanted to capture the trace for. Once the statement completed you would exit the session to disable the trace. You would then look in the USER_DUMP_DEST directory for the trace file. But what if the SQL statement you were interested in was actually called as part of a PL/SQL package? Oracle Database 11g introduced a new diagnostic events infrastructure, which greatly simplifies the task of generating a 10053 trace for a specific SQL statement in a PL/SQL package. All you will need to know is the SQL_ID for the statement you are interested in. Instead of turning on the trace event for the entire session you can now switch it on for a specific SQL ID. Oracle will then capture a 10053 trace for the corresponding SQL statement when it is issued in that session. Remember, the SQL statement still has to be hard parsed for the 10053 trace to be generated. Let's begin our example by creating a PL/SQL package called 'cal_total_sales'. The SQL statement we are interested in is the same as the one in our original example, SELECT SUM(AMOUNT_SOLD) FROM SALES WHERE CUST_ID = :B1. We need to know the SQL_ID of this SQL statement to set up the trace, and we can find it in V$SQL. We now have everything we need to generate the trace. Finally, you would look in the USER_DUMP_DEST directory for the trace file with the name you specified. Maria Colgan+
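    For reference (the screenshots from the original post are not reproduced here, and the exact event syntax should be verified against your release), the sequence looks roughly like this in SQL*Plus; the SQL_ID shown is a made-up placeholder:
        -- Find the SQL_ID of the statement issued by the package
        SELECT sql_id, sql_text
        FROM   v$sql
        WHERE  sql_text LIKE 'SELECT SUM(AMOUNT_SOLD)%';

        -- Switch on a 10053 (Optimizer) trace for just that SQL_ID
        ALTER SESSION SET tracefile_identifier = 'my_10053_trace';
        ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*][sql:4xrmypb15dbyh]';

        -- Run the PL/SQL package so the statement gets hard parsed in this session,
        -- e.g. EXEC cal_total_sales(...);

        -- Switch the trace event off again
        ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*] off';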

    Read the article

  • links for 2011-02-14

    - by Bob Rhubart
    Glenn Fawcett: Solaris Eye for the Linux Guy, or how I learned to stop worrying about Linux and Love Solaris (Part 1) Glenn says: "This entry goes out to my Oracle techie friends that have been in the Linux camp for sometime now and are suddenly finding themselves needing to know more about Solaris… hmmmm… I wonder if this has anything to do with Solaris now being an available option with Exadata?"  (tags: linux solaris oracle) Enterprise Software Development with Java: High Performance JPA with GlassFish and Coherence - Part 2 Oracle ACE Director Markus Eisele describes "the steps you have to take to configure a JPA backed Cache with Coherence and how you could use it from within GlassFish as a high performance data store." (tags: oracle otn oracleace java glassfish coherence) TOGAF a Registered Trademark and Surpasses 15k Certifications EA Blogs Mike Walker relays news on the TOGAF standard. (tags: entarch togaf) Weblogic or wait? | Capping IT Off | Capgemini "So when would you move over to the new Oracle Technology?" asks Arjan Kramer. " Well, as always there can be several reasons..." (tags: oracle capgemini weblogic) Random Monday Thoughs (Art of SOA Governance) "Governance is what insurance is to new cars, be it to SOA, IT transformations and software development. Governance is a insurance policy against risk of failure." - Terry Goldman (tags: oracle otn soa soagovernance)

    Read the article

  • How do I cut and paste commands from your blog?

    - by Maria Colgan
    At the recent ODTUG  Kscope 12 conference several people told me that they really enjoyed our blog on the Optimizer but were frustrated because they couldn’t cut and paste the commands used in the blog posts straight into their environment. Typically I use screen shots in the blog posts to make the commands clear but it does mean that it is impossible to cut and paste the commands into your environment. In order to get around this I have created a downloadable .sql script for each of our blog posts. You should now see the sentence “You can get a copy of the script I used to generate this post here”, appearing at the bottom of each blog post. Clicking on the link will open the .sql script that contains all of the commands used in the post. You can either save the entire script or just cut and paste the particular command you are interested in! I have added scripts for all of this year’s blog posts and am slowly making my way through our old posts until we have a script for everything we have posted to date. Hopefully this will help! +Maria Colgan

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >