Search Results

Search found 24220 results on 969 pages for 'performance tools'.

Page 114/969 | < Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >

  • Clean file separators in Ruby without File.join

    - by kerry
    I love anything that can be done to clean up source code and make it more readable. So, when I came upon this post, I was pretty excited. This is precisely the kind of thing I love. I have never felt good about ‘file separator’ strings because of their ugliness and verbosity. In Java we have:

        String path = "lib" + File.separator + "etc";

    And in Ruby a popular method is:

        path = File.join("lib", "etc")

    Now, by overloading the ‘/’ operator on a String in Ruby:

        class String
          def /(str_to_join)
            File.join(self, str_to_join)
          end
        end

    We can now write:

        path = 'lib'/'src'/'main'

    Brilliant!

    Read the article

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that has data in it – prior to SSDT you would need to define a DEFAULT constraint, and it does feel rather cumbersome to create an object purely for the purpose of pushing through a deployment. That’s the situation that Smart Defaults is meant to alleviate. The Smart Defaults option exists in the advanced section of a Publish Profile file. The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT constraint at the same time as the column is created and then immediately removes that constraint:

        ALTER TABLE [dbo].[T1]
            ADD [C1] INT NOT NULL,
            CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1];
        ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b];

    You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column. I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, smart defaults could be seen to be “papering over the cracks”. If you think that should be improved go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet
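
    For illustration only, a minimal Post-Deployment script along those lines might look like this (the table, column and replacement value are just the ones from the example above – substitute whatever your deployment actually needs):

        -- Post-Deployment script (sketch): replace the arbitrary smart default
        -- with the value the data really ought to have.
        UPDATE [dbo].[T1]
        SET    [C1] = 42   -- hypothetical real value
        WHERE  [C1] = 0;   -- 0 is the arbitrary default that Smart Defaults pushed in
        GO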

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I’m yet to find someone who doesn’t have a bit of an epiphany when I describe this. When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don’t consider is the Source, getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It’s wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do. I’m not suggesting that people don’t tune their queries – there’s plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it’s not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs. If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination. Let’s look at a very simple query as an example, using the AdventureWorks database: We’re picking the different Weight values out of the Product table, and it’s doing this by scanning the table and doing a Sort. It’s a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later. Before I explain the problem here, let me jump back into the SSIS world... If you’ve investigated how to tune an SSIS flow, then you’ll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I’m using a larger data set for this, because my small list of Weights won’t demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation. Back in the land of T-SQL, we consider our Distinct Sort operator. It’s also blocking. It won’t let data through until it’s seen all of it. If you weren’t okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it’s being pulled. Picture it like this... the data flowing from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS Buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I’m taking some liberties here, because the two queries aren’t the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow. Luckily, T-SQL gives us a brilliant query hint to help avoid this. OPTION (FAST 10000) This hint means that it will choose a query which will optimise for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant. 
First let’s consider a simple example, then we’ll look at a larger one. Consider our weights. We don’t have 10,000, so I’m going to use OPTION (FAST 1) instead. You’ll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row could be returned quicker – a Flow Distinct operator is non-blocking. The data here isn’t sorted, of course. It’s in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn’t come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it’s seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it’s a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that’s exactly what we want. The Query Optimizer seems to do this by optimising the query as if there were only one row coming through: This 1 row estimation is caused by the Query Optimizer imagining the SELECT operation saying “Give me one row” first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator. Now I’m going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be Sorted, to support further SSIS operations that needed that. First, without the hint. ...and now with OPTION (FAST 10000): A very different plan, I’m sure you’ll agree. In case you’re curious, those arrows in the top one are 780,000 rows in size. In the second, they’re estimated to be 10,000, although the Actual figures end up being 780,000. The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it’s doing Nested Loops, across 780,000 rows! That’s not generally recommended at all. That’s “Go and make yourself a coffee” time. In this case, it was about six or seven minutes. The faster one finished in about a minute. But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared. The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn’t care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds. When debugging SSIS, you run the package, and sit there waiting to see the Debug information start appearing. 
    You look for the numbers on the data flow and watch for operators to go Yellow and Green. Without the hint, I’d sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query to make it run several times faster. That wasn’t the case at all – but it felt like it to SSIS.
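
    To make the hint concrete, here is a sketch of the kind of source query the post describes (assuming the AdventureWorks Production.Product table; the exact query text isn’t shown in the extract):

        -- Query used in the OLE DB Source component (sketch).
        -- OPTION (FAST 10000) asks the optimizer for a plan that returns the
        -- first 10,000 rows (one SSIS buffer) as quickly as possible.
        SELECT DISTINCT Weight
        FROM Production.Product
        OPTION (FAST 10000);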

    Read the article

  • Submitting a sitemap to take care of inherited Google crawler errors

    - by leeand00
    I have an awful lot of Google Crawler errors (1000 or so) after I inherited a site that the previous owner migrated without moving much of their content. Would generating a map of the current site and submitting it to Google help fix this? Is there any quicker, automated way to eliminate errors other than clicking each and every site error? Note: I have already tried automating this on my own.
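
    For reference, a sitemap is just an XML file in the sitemaps.org format that you submit through Google Webmaster Tools; a minimal sketch (example.com and the date are placeholders) looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://www.example.com/</loc>
            <lastmod>2012-01-01</lastmod>
          </url>
        </urlset>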

    Read the article

  • Does sitewide html refactoring affect Google traffic?

    - by Name
    Good morning. I have recently made a big structural change on my site and the very next day the number of Google impressions went from 75,000 to 3,000, with a proportional drop in traffic from searches. No URLs were changed, and neither were the page titles or descriptions. Everything is exactly the same, just different looking, except that it now barely appears on Google anymore. Does anybody have a clue why?

    Read the article

  • Graffiti is a Sinatra-inspired Groovy Framework

    - by kerry
    I was playing around with Sinatra the other day and realized I could really use something like this for Groovy. Thus, Graffiti was born. It’s basically a thin wrapper around Jetty. At first, I thought I might write my own server for it (everybody needs to do that once, don’t they?), but decided to invoke the ‘simplest thing that could possibly work’ principle. Here is the requisite ‘Hello World’ example:

        import graffiti.*

        @Grab('com.goodercode:graffiti:1.0-SNAPSHOT')
        @Get('/helloworld')
        def hello() {
            'Hello World'
        }

        Graffiti.serve this

    The code, plus more documentation, is hosted under my github account.

    Read the article

  • Firefox is pounding my system what should I do

    - by nikhil
    I'm running the latest version of Firefox, 17.0.1, on Ubuntu 12.10 on an Acer Aspire One 722 netbook. It has an AMD dual-core C60 processor and 2GB of RAM. Firefox is absolutely killing my system: it responds really slowly and opening tabs is a royal pain. I have on average 4-5 open tabs at a given time. Is there something that I can do to make my browsing experience more zippy? Additionally I run the following addons: Firebug, HTTPS Everywhere, Adblock Plus.

    Read the article

  • Strict Pomodoro and other time management Chrome extensions

    - by kerry
    I have recently begun using the Pomodoro Technique to increase my productivity. However, I still find myself getting sucked into the vortex of useless information that is the internet. With that in mind I began searching for a useful Chrome extension to replace the Android Pomodoro app I have been using to manage my ‘doros. I even considered writing it myself. Luckily, I stumbled on one that had a similar feature set to what I was looking for. Strict Pomodoro is an excellent Chrome extension for practicing Pomodoro. Though lacking a few key features, such as the ability to set the duration of your pomodoros and breaks, it still has a key feature that helps me stay on task: it blocks time-sucking websites. You can set filter lists and it will keep you from accessing them during a Pomodoro, effectively reminding you to stay on task. Also, the author readily admits that it was quickly put together and new features may be added down the road. For now, it is still an excellent option. For those of you who do not practice Pomodoro but are trying to stay on task, the StayFocusd extension will effectively manage the amount of time you spend on useless (non-productive) sites. It also has a rich feature set that may be better for your work habits. OK, break’s over. Time to get back to work. 25 minutes at a time.

    Read the article

  • Bin packing part 6: Further improvements

    - by Hugo Kornelis
    In part 5 of my series on the bin packing problem, I presented a method that sits somewhere in between the true row-by-row iterative characteristics of the first three parts and the truly set-based approach of the fourth part. I did use iteration, but each pass through the loop would use a set-based statement to process a lot of rows at once. Since that statement is fairly complex, I am sure that a single execution of it is far from cheap – but the algorithm used is efficient enough that the entire...(read more)

    Read the article

  • Spring Roo Database Reverse Engineer with Oracle

    - by kerry
    So you are trying to reverse engineer an Oracle database with Roo? Unfortunately, due to licensing restrictions on the Oracle JDBC drivers, this is a little difficult. There are a few blog posts and forum threads that address the problem, but I figured I would post what worked for me here. First, you need to download the appropriate Oracle drivers from Oracle. The required login, stringent password requirements, nosy registration form, and general system instability made this a pretty painful step for me. I’d also like to say that companies that have password requirements that don’t allow symbols (or any other non-standard requirement) have a special place in my heart. Having to recover my password every time I go to your site virtually guarantees I will only go there when I absolutely have to (not often). Anyway, once you have it downloaded you need to install it with Maven:

        mvn install:install-file -Dfile=~/Downloads/ojdbc6.jar -DgroupId=com.oracle -DartifactId=ojdbc6 -Dversion=11.2.0.3 -Dpackaging=jar -DgeneratePom=true

    Here comes the fun part. You need to create an OSGi wrapper for the driver to install it in Roo; otherwise, Roo cannot see the driver. Create a new folder and put in it the contents of the oracle roo addon pom gist I created, then build it with Maven. You may want to change some of the artifact ids and dependencies for your particular situation.

        mvn package

    Now open a Roo shell and execute the following command:

        osgi install --url file:///Users/me/my-osgi-project/target/the-jar-it-built.jar

    Now run (in Roo):

        jpa setup --provider HIBERNATE --database ORACLE
        dependency remove --groupId com.oracle --artifactId ojdbc14 --version 10.2.0.2
        dependency add --groupId com.oracle --artifactId ojdbc6 --version 11.2.0.3
        database properties set --key database.driverClassName --value oracle.jdbc.OracleDriver
        database properties set --key database.url --value jdbc:oracle:thin:@%YOUR_CONNECTION_INFO%
        database properties set --key database.username --value %YOUR_USERNAME%
        database properties set --key database.password --value %YOUR_PASSWORD%
        database reverse engineer --schema %YOUR_SCHEMA% --package ~.domain

    If you get any package loading exceptions when running the reverse engineer command, you can uninstall the OSGi bundle, set the package to optional in the OSGi pom in the IncludedPackages tag (javax.some.package.*;resolution:=optional), rebuild, then reinstall in Roo.

    Read the article

  • SCOM, 90 Days In, II. Noise.

    - by merrillaldrich
    Once you get past the basic architecture of a SCOM implementation, and build the servers, and so on, the first real problem is … well, noise. Suddenly (depending on how you deploy) the system will reach out, like marching army ants or some very clever cybernetic spider, and find, and then proceed to yell at you about, every single problem on every server you didn’t know you had. That, of course, is the point. Still, a tool like this is not useful if it doesn’t surface the real problems from the...(read more)

    Read the article

  • Error java.lang.OutOfMemoryError: getNewTla using Oracle EPM products

    - by Marc Schumacher
    When running into a Java out of memory error, it is very common behaviour in the field to increase the Java heap size. While this might help to solve a heap space out of memory error, it might not help to fix an out of memory error for the Thread Local Area (TLA). Increasing the available heap space from 1 GB to 16 GB might not even help in this situation. The Thread Local Area (TLA) is part of the Java heap, but as the name already indicates, this memory area is local to a specific thread, so there is no need to synchronize with other threads when using this memory area. For optimization purposes the TLA size is configurable using the Java command line option “-XXtlasize”. Depending on the JRockit version and the available Java heap, the default values vary. Using Oracle EPM System (mainly 11.1.2.x) the following setting was tested successfully:

        -XXtlasize:min=8k,preferred=128k

    More information about the “-XXtlasize” parameter can be found in the JRockit documentation: http://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/jrdocs/refman/optionXX.html
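
    Purely as an illustrative sketch (where exactly the JVM arguments are set varies by EPM component and is not covered here; the heap sizes and jar name below are placeholders), the option is simply appended to the JRockit startup arguments:

        java -Xms1g -Xmx1g -XXtlasize:min=8k,preferred=128k -jar myapp.jar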

    Read the article

  • Migrating from a wordpress.com to wordpress.org blog without harming SEO

    - by kikio
    I've had a Wordpress.com weblog for 3 years, and its pages have a good PageRank and are shown in the first pages of search results. Because of the limitations, I need to migrate to my own WordPress install. How do I migrate safely with the minimum of SEO problems? (I know how to export content from wordpress.com and import it into a new wordpress.org blog.) Note 1: the link structure and site design are different on the new WordPress blog. (I don't like wordpress.com's link structure :| ) Note 2: as you know, it's not possible to edit the .htaccess file on wordpress.com, so I can't use 301 redirects.
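
    For reference only (this applies to a server where you do control .htaccess, not to wordpress.com; the paths and domain are placeholders), a single permanent redirect looks like this:

        # mod_alias: permanently redirect one old path to its new home
        Redirect 301 /2012/05/03/old-post-slug/ http://www.example.org/new-post-slug/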

    Read the article

  • Confirm that a dns zone is served by a nameserver

    - by adam
    We currently have a domain which has custom nameservers. Our host has their own nameservers. I'd like to switch our domain to use our host's nameservers for a while. Our host tells me that their nameservers hold a replica of our DNS zone, but I'd like to confirm this before I switch. Is there a command line tool I can use to answer the question "does this nameserver know the DNS zone of this domain?" Hope that makes sense! Thanks, Adam
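
    One way to check this from the command line (a sketch; ns1.examplehost.com and example.com stand in for the host's nameserver and your domain) is to query the candidate nameserver directly and see whether it answers authoritatively:

        # Ask the host's nameserver for the zone's SOA record
        dig @ns1.examplehost.com example.com SOA

        # Compare with what your current nameservers return
        dig example.com SOA +short

    An authoritative answer (the "aa" flag in dig's output) from the host's server is a good sign that it is actually serving the zone.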

    Read the article

  • Tracking unique views for a site showing my advertisements [on hold]

    - by user580950
    I am in trouble. I placed an advertisement on a website in 2012. The website said they got 950,000 unique visits each month. Early in 2012 I advertised with them. The advertisement didn't work out. I checked 2-3 months later and saw that the unique visitors on the site were about 8,000 at that time. I immediately closed the account. I don't remember which site I used to check the unique visitors. The advertising company has filed a dispute against me. So is there any tool that can show me the 2012 stats for any website? I tried Google Trends but it doesn't show statistics.

    Read the article

  • What are the common maintenance tasks on ubuntu?

    - by DaNieL
    When I was using Windows, I used to run defrag, CCleaner and Revo Uninstaller once a month to keep the system and the registry clean. I know Ubuntu (and every Linux distro) has a different system structure and doesn't need defragmenting, but I've heard there are some maintenance tasks that help to keep the system clean (for example, sudo apt-get clean or sudo apt-get autoremove). Which of those commands/tools do you know and use regularly? Please explain what they do and whether they can compromise system stability.
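
    For what it's worth, a minimal sketch of the two commands mentioned in the question (review what autoremove proposes before confirming):

        # Delete downloaded .deb files from the local package cache
        sudo apt-get clean

        # Remove packages installed as dependencies that nothing depends on any more
        sudo apt-get autoremove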

    Read the article

  • SQLAuthority News – Great Time Spent at Great Indian Developers Summit 2014

    - by Pinal Dave
    The Great Indian Developer Summit (GIDS) is one of the most popular annual events held in Bangalore. This year GIDS was scheduled for April 22 to 25. I presented a total of four sessions at this event, and each session was very different from the others. Here are the details of the four sessions which I presented there.

    Pluralsight Shades

    This was a great event and I had fantastic fun presenting technology here. I was very excited that, along with me, many of my friends were presenting at the event as well. I want to thank all of you for attending my sessions and giving me a standing-room-only audience every single time. I have already sent resources in my newsletter. You can sign up for the newsletter over here.

    Indexing is an Art

    I was amazed by the crowd present in the sessions at GIDS. There was great interest in the subject of SQL Server and Performance Tuning.

    Audience at GIDS

    I believe events like this provide a great platform to meet and share knowledge.

    Pinal at Pluralsight Booth

    Here are the abstracts of the sessions which I presented. They were recorded, so at some point in time they will be available, but if you want the content of all the courses immediately, I suggest you check out my video courses on the same subjects on Pluralsight.

    Indexes, the Unsung Hero (Relevant Pluralsight Course)

    Slow-running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on the ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Indexes are the most crucial objects of the database. They are the first stop for any DBA and developer when it is about performance tuning. There is a good side as well as an evil side to indexes. To master the art of performance tuning one has to understand the fundamentals of indexes and the best practices associated with them. We will cover various aspects of indexing such as duplicate indexes, redundant indexes and missing indexes, as well as best practices around indexes.

    SQL Server Performance Troubleshooting: Ancient Problems and Modern Solutions (Relevant Pluralsight Course)

    Many believe performance tuning and troubleshooting is an art which has been lost in time. However, the truth is that the art has evolved with time and there are more tools and techniques to overcome ancient troublesome scenarios. There are three major resources that, when bottlenecked, create performance problems: CPU, IO, and memory. In this session we will focus on detecting high-CPU scenarios and their resolutions. If time permits we will cover other performance-related tips and tricks. At the end of this session, attendees will have a clear idea as well as action items regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning one has to understand the fundamentals of performance, tuning and the best practices associated with the same. We will discuss performance tuning in this session with the help of demos.
    Pinal Dave at GIDS

    MySQL Performance Tuning – Unexplored Territory (Relevant Pluralsight Course)

    Performance is one of the most essential aspects of any application. Everyone wants their server to perform optimally and at the best efficiency. However, not many people talk about MySQL and performance tuning, as it is an extremely unexplored territory. In this session, we will talk about how we can tune MySQL performance. We will also try to cover other performance-related tips and tricks. At the end of this session, attendees will not only have a clear idea, but also carry home action items regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning one has to understand the fundamentals of performance, tuning and the best practices associated with the same. You will also witness some impressive performance tuning demos in this session.

    Hidden Secrets and Gems of SQL Server We Bet You Never Knew (Relevant Pluralsight Course)

    SQL Trio Session! It really amazes us every time someone says SQL Server is an easy tool to handle and work with. Microsoft has done amazing work in making working with a complex relational database a breeze for developers and administrators alike. Though it looks like child’s play for some, the realities are far away from this notion. The basics and fundamentals are simple and uniform across databases, but the behavior and the nuts and bolts of SQL Server are something we need to master over a period of time. With a collective experience of more than 30 years amongst the speakers on databases, we will try to take a unique tour of various aspects of SQL Server and bring to you life lessons learnt from working with SQL Server. We will share some of the trade secrets of performance, configuration, new features, tuning, behaviors, T-SQL practices, common pitfalls, and productivity tips on tools and more. This is a highly demo-filled session for practical use if you are a SQL Server developer or an administrator. The speakers will be able to stump you and give you answers on almost everything inside the relational database called SQL Server.

    I personally attended the sessions of Vinod Kumar, Balmukund Lakhani, Abhishek Kumar and my favorite, Govind Kanshi.

    Summary

    If you missed this event, here are two action items: 1) Sign up for the Resource Newsletter; 2) Watch my video courses on Pluralsight.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL Tagged: GIDS

    Read the article

  • Problem with Ogmo Editor (is Tiled Editor a solution?)

    - by Mentoliptus
    I made a level editor for a puzzle game with Ogmo Editor and gave it to our designer/level designer. When he downloaded and started Ogmo, his CPU usage went to 100%. I looked at my CPU usage while Ogmo is running, and it goes from 20% to 30% (which is also high for an application like Ogmo). He has a Windows 7 VM running on his Mac and I have a normal Windows PC; could this be the problem? I found a thread on the FlashPunk forum that confirms that Ogmo has CPU usage issues. Has anybody solved this issue? The solution seems to be to use Tiled Editor, but I have never used it before. Is it difficult to change a level editor from Ogmo to Tiled? Can they export in the same format (XML with CSV elements for my puzzle game)?

    Read the article

  • How is this site so fast?

    - by user8628
    How is the website http://dftba.com/ so fast? When I click a link it loads right away. What makes it work like this, and how do I make my site work like this? Some of the objects on the site are hosted by a site called ecogeek-cdn.net; who is this company and why do they host the images for this site? I have been looking into this site for some time because I want my site to be like it. What I have found so far:

    - the site uses Apache
    - the site uses Python (when asked, the developer told me this)
    - the site uses jQuery and jQuery UI
    - the site is custom built, not using WordPress
    - the site is owned/hosted by LiquidWeb
    - the site gets a million users a month
    - the site launched in January
    - the site uses cPanel
    - the site does not allow outside SSH or FTP access (I tried to connect but was denied); they do have SSH and FTP, but only from their own addresses

    Please excuse my English; it is not as good as yours.

    Read the article

  • Drawing a textured triangle with CPU instead of GPU

    - by Jenko
    I understand the benefits of GPU rendering and such, but for a certain limited application I need to render textured triangles purely on the CPU. I've built a 3D engine capable of object handling, transforms, projection, culling and the like... now all I need is a little code snippet that draws a single textured triangle onto a bitmap... any language accepted! Inputs: texture bitmap, triangle U/V/W coords, triangle X/Y screen coords. Output: the textured triangle drawn at the given screen coords. I've currently been using a platform function to draw triangles to screen, but I'm looking to handle it myself to speed up the process.
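
    Purely as an illustrative sketch (not from the question), here is one common way to do this in C: rasterise the triangle's bounding box and use barycentric (edge-function) weights to interpolate the UVs. It does affine mapping only – no perspective correction, no sub-pixel precision – and all of the type names are hypothetical placeholders:

        /* Sketch: fill a textured triangle on the CPU with barycentric weights. */
        #include <math.h>
        #include <stdint.h>

        typedef struct { float x, y, u, v; } Vertex;                  /* screen position + 0..1 UV */
        typedef struct { int w, h; const uint32_t *pixels; } Texture; /* row-major ARGB texels     */
        typedef struct { int w, h; uint32_t *pixels; } Bitmap;        /* row-major ARGB target     */

        /* Signed "edge function": which side of edge ab does point p lie on? */
        static float edge(Vertex a, Vertex b, float px, float py)
        {
            return (px - a.x) * (b.y - a.y) - (py - a.y) * (b.x - a.x);
        }

        void draw_textured_triangle(Bitmap *dst, const Texture *tex, Vertex a, Vertex b, Vertex c)
        {
            float area = edge(a, b, c.x, c.y);
            if (area == 0.0f) return;                                 /* degenerate triangle */

            /* Bounding box, clipped to the destination bitmap */
            int minx = (int)floorf(fminf(fminf(a.x, b.x), c.x)); if (minx < 0) minx = 0;
            int miny = (int)floorf(fminf(fminf(a.y, b.y), c.y)); if (miny < 0) miny = 0;
            int maxx = (int)ceilf(fmaxf(fmaxf(a.x, b.x), c.x));  if (maxx > dst->w - 1) maxx = dst->w - 1;
            int maxy = (int)ceilf(fmaxf(fmaxf(a.y, b.y), c.y));  if (maxy > dst->h - 1) maxy = dst->h - 1;

            for (int y = miny; y <= maxy; y++) {
                for (int x = minx; x <= maxx; x++) {
                    float px = x + 0.5f, py = y + 0.5f;               /* sample at pixel centre */
                    float w0 = edge(b, c, px, py);                    /* weight of vertex a     */
                    float w1 = edge(c, a, px, py);                    /* weight of vertex b     */
                    float w2 = edge(a, b, px, py);                    /* weight of vertex c     */

                    /* Inside test that tolerates either winding order */
                    if ((area > 0 && w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                        (area < 0 && w0 <= 0 && w1 <= 0 && w2 <= 0)) {
                        /* Affine interpolation of UVs */
                        float u = (w0 * a.u + w1 * b.u + w2 * c.u) / area;
                        float v = (w0 * a.v + w1 * b.v + w2 * c.v) / area;

                        /* Nearest-neighbour texel fetch, clamped to the texture */
                        int tx = (int)(u * (tex->w - 1) + 0.5f);
                        int ty = (int)(v * (tex->h - 1) + 0.5f);
                        if (tx < 0) tx = 0; else if (tx > tex->w - 1) tx = tex->w - 1;
                        if (ty < 0) ty = 0; else if (ty > tex->h - 1) ty = tex->h - 1;

                        dst->pixels[y * dst->w + x] = tex->pixels[ty * tex->w + tx];
                    }
                }
            }
        }

    A real engine would add perspective-correct interpolation (interpolate u/z, v/z and 1/z instead of u and v) and proper clipping, but the loop above is the core of the technique.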

    Read the article

  • SQL Server Training in the UK–SSIS, MDX, Admin, MDS, Internals

    - by simonsabin
    If you are looking for SQL Server training then there is no better place to start than a new company, Technitrain. It's been set up by a fellow MVP and SQLBits organiser, Chris Webb. Why this company rather than any other? Training based on real-world experience, delivered by the best in the business. The key to Technitrain’s model is not to cram the shelves high with courses and get some average-Joe trainers to deliver them. Technitrain bring in world-renowned experts in their fields to deliver courses written...(read more)

    Read the article

  • CFO Central

    - by antonella.buonagurio(at)oracle.com
    CFO Central is the Oracle web portal entirely dedicated to topics of interest to the Chief Financial Officer. It is a true virtual information centre for Administration, Finance and Control executives, comprising: news selected by CFO Market Watch, the most recent success stories from our customers, the management processes owned by the CFO and the most innovative solutions to run and improve them, all the Oracle events specifically focused on the finance function, and an exclusive specialist Resource Centre for the CFO. www.oraclecfo.com

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently”, I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line. SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy. This is where SQL Compare’s command line comes into its own. I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database; these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.

        -- Create a DeploymentTargets database and a Servers table
        CREATE DATABASE DeploymentTargets
        GO
        USE DeploymentTargets
        GO
        CREATE TABLE [dbo].[Servers](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [serverName] [nvarchar](50) NULL,
            [environment] [nvarchar](50) NULL,
            [databaseName] [nvarchar](50) NULL,
            CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC)
        )
        GO
        -- Now insert your target server and database details
        INSERT INTO dbo.Servers (serverName, environment, databaseName)
        VALUES (N'myserverinstance', N'myenvironment1', N'mydb1')
        INSERT INTO dbo.Servers (serverName, environment, databaseName)
        VALUES (N'myserverinstance', N'myenvironment2', N'mydb2')

    Here’s the PowerShell script, which you can adapt for yourself as well.

        # We're holding the server names and database names that we want to deploy to in a database table.
        # We need to connect to that server to read these details.
        $serverName = ""
        $databaseName = "DeploymentTargets"
        $authentication = "Integrated Security=SSPI"
        #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication.

        # Path to the scripts folder we want to deploy to the databases
        $scriptsPath = "SimpleTalk"

        # Path to SQLCompare.exe
        $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe"

        # Create SQL connection string, and connection
        $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication"
        $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString);

        # Create a Dataset to hold the DataTable
        $dataSet = new-object "System.Data.DataSet" "ServerList"

        # Create a query
        $query = "SET NOCOUNT ON;"
        $query += "SELECT serverName, environment, databaseName "
        $query += "FROM dbo.Servers; "

        # Create a DataAdapter to populate the DataSet with the results
        $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection)
        $dataAdapter.Fill($dataSet) | Out-Null

        # Close the connection
        $ServerConnection.Close()

        # Populate the DataTable
        $dataTable = new-object "System.Data.DataTable" "Servers"
        $dataTable = $dataSet.Tables[0]

        # For every row in the DataTable
        $dataTable | FOREACH-OBJECT {
            "Server Name: $($_.serverName)"
            "Database Name: $($_.databaseName)"
            "Environment: $($_.environment)"

            # Compare the scripts folder to the database and synchronize the database to match.
            # NB. Have set SQL Compare to abort on medium level warnings.
            $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium")
            # + @("/sync" ) # Commented out the 'sync' parameter for safety
            write-host $arguments
            & $SQLComparePath $arguments
            "Exit Code: $LASTEXITCODE"

            # Some interesting variations:

            # Check that every database matches a folder.
            # For example this might be a pre-deployment step to validate everything is at the same baseline state,
            # or a post-deployment step to validate the deployment worked.
            # An exit code of 0 means the databases are identical.
            #
            # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

            # Generate a report of the difference between the folder and each database, and a SQL update script for each database.
            # For example use this after the above to generate upgrade scripts for each database.
            # Examine the warnings and the HTML diff report to understand how the script will change objects.
            #
            # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html", "/reportType:Interactive", "/showWarnings", "/include:Identical")
        }

    It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse against multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script. You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings. See the commented-out example in the PowerShell:

        #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html", "/reportType:Interactive", "/showWarnings", "/include:Identical")

    There is a drawback to running a pre-generated deployment script: it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe exit code 79), a drift report is output instead of executing the deployment script. See the commented-out example:

        # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

    Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment. You might think about triggering backups prior to deployment – even better, automate the verification of the backup too. You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free. (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions.)

    Read the article

  • A Warning to Those Using sys.dm_exec_query_stats

    - by Adam Machanic
    The sys.dm_exec_query_stats view is one of my favorite DMVs. It has replaced a large chunk of what I used to use SQL Trace for--pulling metrics about what queries are running and how often--and it makes this kind of data collection painless and automatic. What's not to love? But use cases for the view are a topic for another post. Today I want to quickly point out an inconsistency. If you're using this view heavily, as I am, you should know that in some cases your queries will not get a row. One...(read more)

    Read the article

  • Attunity Oracle CDC Solution for SSIS - Beta

    We in no way work for Attunity, but we were asked to test drive a beta version of their Oracle CDC solution for SSIS. Everybody should know that moving more data than you need to takes too much time and uses resources that may be better employed doing something else. Change Data Capture is a technology that is designed to help you identify only the data that has had something done to it, so you can move only what is needed. Microsoft have implemented this exact functionality in SQL Server 2008 and I really like it there. Attunity, though, are doing it on Oracle. DISCLAIMER: This is a BETA release and some of the parts are a bit ugly/difficult to work with. The idea though is definitely right and the product, once working, does exactly what it says on the tin. They have always been helpful to me when I have had a problem with the product, and if that continues then beta-testing pain should be eased somewhat. In due course I am going to be doing some videos around me using the product. If you use Oracle and SSIS then give it a go. Here is their product description.

    Attunity is a Microsoft SQL Server technology partner and the creator of the Microsoft Connectors for Oracle and Teradata, currently available in SQL Server 2008 Enterprise Edition. Attunity released a beta version of the Attunity Oracle-CDC for SSIS, a product that integrates continually changing Oracle data into SSIS, efficiently and in real time. Attunity designed the product and integrated it into SSIS to allow the simple creation of change data capture (CDC) solutions, accelerate implementation time, and reduce resources and costs. They also utilize log-based CDC so the solution has minimal impact on the Oracle source system. You can use the product to implement enterprise-class data replication, synchronization, and real-time business intelligence (BI) and data warehousing projects, quickly and efficiently, leveraging your existing SQL Server investments and resource skills. Attunity architected the product specifically for the Microsoft SSIS developer community and the product is available for both SQL Server 2005 and SQL Server 2008. It offers the following key capabilities:

        · Log-based, non-intrusive Oracle CDC
        · Full integration into SSIS and the Business Intelligence Developer Studio
        · Automatic generation of SSIS packages for CDC as well as full loads of Oracle data
        · Filtering of Oracle tables and columns at the source
        · Monitoring and control of CDC processing

    Click to learn more and download the beta.

    Read the article
