Search Results

Search found 1440 results on 58 pages for 'adam terlson'.

Page 4/58 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Ubuntu + RAID on HP ProLiant DL160 G6

    - by Adam Matan
    Hi, I'm trying to install Ubuntu 8.04 Server on an HP ProLiant DL160 G6. The hardware is certified by Ubuntu for version 9.04, which I can't install due to company policy. The problem is that Ubuntu does not recognize the RAID 1+0 disk configured by the BIOS. The RAID presents one ~470GB disk built from two 500GB physical disks. Any ideas? Adam

    Read the article

  • Webcam streaming on mobile phone

    - by adam
    Hey guys, I'm stuck on a project: I need to find a way to stream my home webcam to my mobile phone, in this case a BlackBerry Bold 9700 (I also have an iPod Touch 2.0). The problem I keep coming across is that every program I find asks for money, and I am looking for a way to do this for free. I've been trying VLC but have so far been unable to get the stream to work. Any help would be great. Thanks a lot, Adam
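    One illustrative sketch of the kind of VLC invocation being attempted here (device path, codec settings and port are assumptions, not a tested recipe for this question; on Windows the capture input would be dshow:// rather than v4l2://):

        # serve the webcam as an MPEG-TS stream over HTTP on port 8080
        cvlc v4l2:///dev/video0 \
          --sout '#transcode{vcodec=h264,vb=512,acodec=none}:http{mux=ts,dst=:8080/webcam}'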

    Read the article

  • MySQL: Load database to memory

    - by Adam Matan
    Hi, Is there a way to load an entire MySQL database into RAM, especially on an EC2 server? The database is quite small (~500 MB) and I have enough memory. Speed is crucial - the resulting queries are used to serve a dynamic web page. Thanks, Adam
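    As an illustration, one common approach is an in-memory copy of a hot table via the MEMORY storage engine (table name is an assumption; note that MEMORY tables don't support TEXT/BLOB columns and their contents vanish on restart):

        -- build an in-memory copy of a frequently-read table
        CREATE TABLE users_mem ENGINE=MEMORY SELECT * FROM users;

        -- point the hot read path at the copy
        SELECT * FROM users_mem WHERE id = 42;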

    Read the article

  • Monitor utilities

    - by Adam Davis
    I'm a big fan of good monitor usage, but currently use only a few utilities to help me attain display nirvana across several systems and monitors. Part of this is due to not knowing what's available. Please list one monitor utility that you use and what it does for you per answer, and avoid duplicates - comment on and vote up existing answers rather than adding a duplicate. Also, if there are existing questions that delve more specifically into one area of monitor utilities, link to them in a separate answer. -Adam

    Read the article

  • Internal Server Error issue with CodeIgniter. Can't open linked page

    - by Adam Perlis
    Hi, I am VERY new to backend development. I am trying to set up user registration using CodeIgniter. I have gotten pretty far, but for some reason I can't get the "Create Account" button working. I have my site up and running at t-mrkt.com/mycisite - would you guys mind taking a look and seeing where I might be going wrong? Here is a link to my files: https://www.dropbox.com/sh/z54j4qd50yf3t0p/MnnghGBB_D Thanks for your help! Adam

    Read the article

  • Random password generator: many, in columns, on command line, in Linux

    - by Adam Backstrom
    A while back, I came across a random password generator for the command line that displayed a grid of "memorable" passwords. Output was something like this:

        adam@host:~$ CantRememberThisCommand
        lkajsdf aksjdfl kqwrupo
        qwerpoi qwerklw zxlkelq

    The idea was that you could run this utility while someone was looking over your shoulder, and still pick a password with some level of secrecy due to the large number of choices. I cannot remember what this utility was called. Oh interwebs, can you help?
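    For reference, one utility whose default behaviour matches this description is pwgen - it deliberately prints a whole screenful of pronounceable passwords so an onlooker cannot tell which one was picked. Offered as a likely candidate rather than a confirmed identification:

        $ pwgen 8 24    # prints 24 "memorable" 8-character passwords, in columns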

    Read the article

  • New version of SQL Server Data Tools is now available

    - by jamiet
    If you don’t follow the SQL Server Data Tools (SSDT) blog then you may not know that two days ago an updated version of SSDT was released (and by SSDT I mean the database projects, not the SSIS/SSRS/SSAS stuff) along with a new version of the SSDT Power Tools. This release incorporates an updated version of the SQL Server Data Tier Application Framework (aka DAC Framework, aka DacFX) which you can read about on Adam Mahood’s blog post SQL Server Data-Tier Application Framework (September 2012) Available. DacFX is essentially all the gubbins that you need to extract and publish .dacpacs and according to Adam’s post it incorporates a new feature that I think is very interesting indeed:

        Extract DACPAC with data – Creates a database snapshot file (.dacpac) from a live SQL Server or Windows Azure SQL Database that contains data from user tables in addition to the database schema. These packages can be published to a new or existing SQL Server or Windows Azure SQL Database using the SqlPackage.exe Publish action. Data contained in the package replaces the existing data in the target database.

    In short, .dacpacs can now include data as well as schema. I’m very excited about this because one of my long-standing complaints about SSDT (and its many forebears) is that whilst it has great support for declarative development of schema it does not provide anything similar for data – if you want to deploy data from your SSDT projects then you have to write Post-Deployment MERGE scripts. This new feature for .dacpacs does not change that situation yet, however it is a very important pre-requisite, so I am hoping that a feature to provide declaration of data (in addition to the declaration of schema we have today) is going to light up in SSDT in the not too distant future. Read more about the latest SSDT, Power Tools & DacFX releases at:

        Now available: SQL Server Data Tools - September 2012 update! by Janet Yeilding
        New SSDT Power Tools! Now for both Visual Studio 2010 and Visual Studio 2012 by Sarah McDevitt
        SQL Server Data-Tier Application Framework (September 2012) Available by Adam Mahood

    @Jamiet
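    As a sketch of what an extract-with-data invocation looks like (server and database names are placeholders, and the property name is taken from later DacFX documentation, so treat it as an assumption for this particular release):

        SqlPackage.exe /Action:Extract ^
          /SourceServerName:myserver /SourceDatabaseName:AdventureWorks2012 ^
          /TargetFile:AdventureWorks2012.dacpac ^
          /p:ExtractAllTableData=true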

    Read the article

  • Oracle Developer Day OOP 2013 – become a Java expert & get a free ticket

    - by JuergenKress
    Want to become a Java expert? Want to learn more about the Java roadmap, Java EE, JavaFX, Java Cloud, ADF Mobile, REST and big data, and try it hands-on? Make sure you attend the Oracle Developer Day 2013 with Adam Bien, Markus Eisele, Torsten Winterberg, Guido Schmutz, Wolfgang Weigend and Peter Doschkinow! Thursday, January 24th 2013, Munich Conference Center.

        Agenda:
        9.00-9.30:   Java Overview and Roadmap – Wolfgang Weigend
        9.30-10.00:  JavaFX – Peter Doschkinow
        10.00-10.30: ADF Mobile – Torsten Winterberg
        10.30-11.00: Break
        11.00-11.45: Java EE – Adam Bien
        11.45-12.15: Java Cloud – Markus Eisele
        12.15-12.45: Java, big data & service bus & Twitter – Guido Schmutz
        12.45-14.30: Lunch
        14.30-16.30: Hands-on workshops (parallel): JavaFX, ADF Mobile, GlassFish

    See the website for details and the agenda. Free registration for the exhibition and Oracle Developer Day. For more information about Java please visit www.oracle.com/java. WebLogic Partner Community: for regular information become a member in the WebLogic Partner Community - please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: OOP 2013,Oracle Developer Day,OOP Oracle,Adam Bien,Markus Eisele,Guido Schmutz,Torsten Winterberg,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Big AdventureWorks2012

    - by jamiet
    Last week I launched AdventureWorks on Azure, an initiative to make SQL Azure accessible to anyone, in my blog post AdventureWorks2012 now available for all on SQL Azure. Since then I think it's fair to say that the reaction has been lukewarm, with 31 insertions into the [dbo].[SqlFamily] table and only 8 donations via PayPal to support it; on the other hand, those 8 donors have been incredibly generous and we nearly have enough in the bank to cover a full year's worth of availability. It was always my intention to try and make this offering more appealing, and to that end I have used an adapted version of Adam Machanic's make_big_adventure.sql script to massively increase the amount of data in the database and give the community more scope to really push SQL Azure and see what it is capable of. There are now two new tables in the database:

        [dbo].[bigProduct] with 25200 rows
        [dbo].[bigTransactionHistory] with 7827579 rows

    The credentials to log in and use AdventureWorks on Azure are as they were before:

        Server    mhknbn2kdz.database.windows.net
        Database  AdventureWorks2012
        User      sqlfamily
        Password  sqlf@m1ly

    Remember, if you want to support AdventureWorks on Azure simply click here to launch a pre-populated PayPal Send Money form - all you have to do is log in, fill in an amount, and click Send. We need more donations to keep this up and running, so if you think this is useful and worth supporting, please, please donate. I mentioned that I had to adapt Adam's script, the main reasons being:

        Cross-database queries are not yet supported in SQL Azure, so I had to create a local copy of [dbo].[spt_values] rather than reference the one in [master]
        SELECT…INTO is not supported in SQL Azure (see the sketch below)
        The 1GB limit of the SQL Azure Web edition meant that there would not be enough space to store all the data generated by Adam's script, so I had to decrease the total number of rows

    The amended script is available on my SkyDrive at https://skydrive.live.com/redir.aspx?cid=550f681dad532637&resid=550F681DAD532637!16756&parid=550F681DAD532637!16755 @Jamiet
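    To illustrate the second point, a sketch of the SELECT…INTO workaround - creating the table explicitly and then populating it (table and column list abbreviated for illustration; this is not the actual script):

        -- not supported on SQL Azure:
        --   SELECT ProductID, Name INTO dbo.bigProduct FROM Production.Product;

        -- supported equivalent: create the table first, then populate it
        CREATE TABLE dbo.bigProduct
        (
            ProductID int NOT NULL,
            Name      nvarchar(50) NOT NULL
        );

        INSERT INTO dbo.bigProduct (ProductID, Name)
        SELECT ProductID, Name
        FROM Production.Product;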

    Read the article

  • APress Deal of the Day - 1/June/2012 - Introducing Visual C# 2010

    - by TATWORTH
    Today's $10 Deal of the Day from APress at http://www.apress.com/9781430231714 is Introducing Visual C# 2010. "If you're new to C# programming, this book is the ideal way to get started. Respected author Adam Freeman guides you through the C# language by carefully building up your knowledge from fundamental concepts to advanced features." Adam Freeman is an excellent author. This is an excellent introduction to C# programming and a manual for those with experience. Having read through the book, I am very impressed by its practical approach to C#. I cannot improve on the by-line "Get started on your C# journey with an expert by your side leading by example". Adam Freeman teaches C# by precept and example. I suspect he drives a Volvo C30, as it comes up in many of the code examples! Throughout the book there are numerous links back and forth so as to avoid overcomplicating the current topic. I have no hesitation in recommending this book, both to programmers starting out with C# and to the seasoned professional. It is a book that should be on every C# development team's bookshelf.

    Read the article

  • Machine only responds to network requests from machines it is pinging

    - by ILikeFood
    I have two machines. WOPR: Ubuntu server edition 10.10 LTS 32 bit Adam Selene: Windows 7 home premium 64 bit / Ubuntu Desktop 10.10 LTS 64 bit I want to be able to SSH from Adam Selene to WOPR, so I connect them to the same network. Here's where things get weird. I cannot connect to WOPR in any way under normal circumstances. But, if WOPR is pinging Adam, then it starts responding to ping requests, HTTP gets, and SSH tunnels. I'm an amateur, and brand new to Ubuntu server, so I suspect there's a misconfiguration somewhere, but there's an off chance it's a bug in the OS. Does anyone know what might cause this behavior? Thanks a lot!

    Read the article

  • Writing tests for Rails plugins

    - by Adam
    I'm working on a plugin for Rails that would add limited in-memory caching to ActiveRecord's finders. The functionality itself is mature enough, but I can't for the life of me get unit tests to work with the plugin. I now have, under vendor/plugins/my_plugin/test/my_plugin_test.rb, a standard subclass of ActiveSupport::TestCase with a couple of basic tests. I try running 'rake test' from the plugin directory, and I have confirmed that this task loads the Ruby file with the test case, but it doesn't actually run any of the tests. I followed the Rails plugin guide (http://guides.rubyonrails.org/plugins.html) where applicable, but it seems to be horribly outdated (it suggests things that Rails now does automatically, etc.). The only output I get is this:

        Kakadu:ingenious_record adam$ rake test
        (in /Users/adam/Sites/1_PRK/vendor/plugins/ingenious_record)
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -Ilib:lib:test "/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake/rake_test_loader.rb" "test/ingenious_record_test.rb"

    The simplest test case looks like this:

        require 'test_helper'
        require 'active_record'

        class IngeniousRecordTest < ActiveSupport::TestCase
          test "example" do
            assert false
          end
        end

    This should definitely produce at least some output, and the only test in that file should produce a failed assertion. Any ideas what I could do to get Rails to run my tests?
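    For comparison, a minimal sketch of a typical plugin Rakefile test task of that era (an assumption about what the plugin's Rakefile should contain, not the asker's actual file):

        require 'rake'
        require 'rake/testtask'

        # run every *_test.rb under test/ with lib/ and test/ on the load path
        Rake::TestTask.new(:test) do |t|
          t.libs << 'lib' << 'test'
          t.pattern = 'test/**/*_test.rb'
          t.verbose = true
        end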

    Read the article

  • How can I draw an arrow at the edge of the screen pointing to an object that is off screen?

    - by Adam Henderson
    I want to do what is described in this topic: http://www.allegro.cc/forums/print-thread/283220 I have attempted a variety of the methods mentioned here. First I tried to use the method described by Carrus85:

        Just take the ratio of the two triangle hypotenuses (doesn't matter which triangle you use for the other, I suggest point 1 and point 2 as the distance you calculate). This will give you the aspect ratio percentage of the triangle in the corner from the larger triangle. Then you simply multiply deltax by that value to get the x-coordinate offset, and deltay by that value to get the y-coordinate offset.

    But I could not find a way to calculate how far the object is away from the edge of the screen. I then tried using ray casting (which I have never done before) suggested by 23yrold3yrold:

        Fire a ray from the center of the screen to the offscreen object. Calculate where on the rectangle the ray intersects. There's your coordinates.

    I first calculated the hypotenuse of the triangle formed by the difference in x and y positions of the two points. I used this to create a unit vector along that line. I looped through that vector until either the x coordinate or the y coordinate was off the screen. The two current x and y values then form the x and y of the arrow. Here is the code for my ray casting method (written in C++ and Allegro 5):

        void renderArrows(Object* i)
        {
            float x1 = i->getX() + (i->getWidth() / 2);
            float y1 = i->getY() + (i->getHeight() / 2);
            float x2 = screenCentreX;
            float y2 = ScreenCentreY;
            float dx = x2 - x1;
            float dy = y2 - y1;
            float hypotSquared = (dx * dx) + (dy * dy);
            float hypot = sqrt(hypotSquared);
            float unitX = dx / hypot;
            float unitY = dy / hypot;
            float rayX = x2 - view->getViewportX();
            float rayY = y2 - view->getViewportY();
            float arrowX = 0;
            float arrowY = 0;
            bool posFound = false;
            while (posFound == false)
            {
                rayX += unitX;
                rayY += unitY;
                if (rayX <= 0 || rayX >= screenWidth || rayY <= 0 || rayY >= screenHeight)
                {
                    arrowX = rayX;
                    arrowY = rayY;
                    posFound = true;
                }
            }
            al_draw_bitmap(sprite, arrowX - spriteWidth, arrowY - spriteHeight, 0);
        }

    This was relatively successful. Arrows are displayed in the bottom-right section of the screen when objects are located above and left of the screen, as if the locations where the arrows are drawn have been rotated 180 degrees around the center of the screen. I assumed this was because when I calculated the hypotenuse of the triangle, it would always be positive regardless of whether or not the difference in x or the difference in y is negative. Thinking about it, ray casting does not seem like a good way of solving the problem (due to the fact that it involves using sqrt() and a large loop). Any help finding a suitable solution would be greatly appreciated. Thanks, Adam
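    For illustration, a closed-form alternative sketch that avoids both the loop and the 180-degree flip, by pointing the direction vector from the screen centre toward the object (the question's code points it the other way) and scaling it to the nearest edge. Variable names are assumptions chosen to match the code above:

        #include <algorithm>
        #include <cmath>
        #include <cfloat>

        // Computes the on-screen arrow position for an off-screen object,
        // assuming the "camera" is centred at (centreX, centreY).
        void edgeArrowPosition(float objX, float objY,
                               float centreX, float centreY,
                               float screenWidth, float screenHeight,
                               float& arrowX, float& arrowY)
        {
            float dirX = objX - centreX;   // point *toward* the object, not away from it
            float dirY = objY - centreY;

            // How far the vector can be scaled before hitting a vertical/horizontal edge.
            float tx = (dirX != 0.0f) ? (screenWidth  / 2.0f) / std::fabs(dirX) : FLT_MAX;
            float ty = (dirY != 0.0f) ? (screenHeight / 2.0f) / std::fabs(dirY) : FLT_MAX;
            float t  = std::min(tx, ty);   // first edge the ray crosses

            arrowX = centreX + dirX * t;
            arrowY = centreY + dirY * t;
        }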

    Read the article

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today's hardware. In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background. Adam, however, went further, challenging any assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular at offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster so that reads from disk come into buffer cache faster than ever; where the improving RAM-to-data ratio means that we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need to have the right indexes, but also to be able to split the data up into subsets and spread its processing evenly across all these available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction, but, unfortunately, deficiencies in the current optimizer mean that SQL Server is yet to be able to exploit properly all those extra CPUs. Adam's contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans based on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized IO because the disk system simply wouldn't be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, often in cases where a parallel plan would improve performance dramatically. It was challenging and thought-provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it. I'm interested to hear, though, when and how often people feel the force of the optimizer's shortcomings. Barring mistakes, such as stale statistics, how often do you feel the optimizer fails to find the plan you think it should, and what are the most common causes? Is it a fight to induce it toward parallelism? Combating unexpected plans arising from table partitioning? Something altogether more prosaic? Cheers, Tony.

    Read the article

  • Google Talk Chat/Conference Solutions

    - by Adam Davis
    I started using the old confbot Python conference script in 2005 for my family. It essentially implements an IRC-like conference room over Google Talk (or any Jabber/XMPP server). It has significantly increased family communication, and has become rather indispensable because of this. Recently it has begun to have severe problems (people can't see each other in the conference room), which has nearly killed its usefulness. Before I develop my own software or debug confbot (probably not - it uses an older Jabber library that hasn't been updated since 2003), I wanted to see what other solutions exist that meet our needs:

        Supports Google Talk (sorry, I'm not going to try to convince everyone involved to move to a new IM or other client)
        Free and open source (ideal, but not required)
        Runs on Windows (not a web service run by someone else)
        Implements basic functionality such as kick/ban and emotes
        Remembers who joined the conference room across restarts
        Obeys Do Not Disturb and Busy status
        Archives all activity

    -Adam

    Read the article

  • Long file path returning 404 for "hello.htm"

    - by Adam Kane
    Hello, I have a long file path that works on my server, but a similar path returns a 404 error on my client's (IIS6) server (http://ddmat.com/). Here's the functioning file path on my server: http://www.forgefx.com/projects/ddmat/install/Application Files/McCurdys_1_0_0_0/Content/FBX/CCAE1B33/Roof-sectionB-02.fbm/hello.htm My guesses: maybe the file path is too long? Maybe the ".fbm" in the directory path is invalid? Sorry for the vague problem description. Please let me know what additional info I can provide that'd be helpful. Update: The problem happens even in short paths, with no spaces: http://www.myserver/test.folder/hell.htm Thanks, Adam

    Read the article

  • Redirecting X output

    - by Adam Matan
    Hi, I have a small program that checks some elements of a web service. The program shows graphical output and displays command-line results as well. I have been trying to automate this program to run periodically on a server in my office. The problem is, it only works when I have X enabled - either directly on the server, or via ssh -X. Following Google, I have tried Xvfb, which gave me a rather cryptic error message:

        Xvfb :1 -screen 0 1600x1200x32

        Fatal server error:
        Server is already active for display 1
        If this server is no longer running, remove /tmp/.X1-lock and start again.

    Any ideas how to run it? I'm actually looking for the X equivalent of &>/dev/null... Thanks in advance, Adam
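    As an illustrative sketch, sidestepping the already-active display by picking an unused display number (the display number and program name here are assumptions; 24-bit depth tends to be better supported by Xvfb than 32):

        # start a virtual framebuffer on an unused display, then point the program at it
        Xvfb :99 -screen 0 1600x1200x24 &
        DISPLAY=:99 ./check_webservice    # hypothetical program name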

    Read the article

  • SQL: How to select rows from a table while ignoring duplicate field values?

    - by Maxxon
    How do I select rows from a table while ignoring duplicate field values? Here is an example:

        id   user_id   message
        1    Adam      "Adam is here."
        2    Peter     "Hi there this is Peter."
        3    Peter     "I am getting sick."
        4    Josh      "Oh, snap. I'm on a boat!"
        5    Tom       "This show is great."
        6    Laura     "Textmate rocks."

    What I want to achieve is to select the recently active users from my db. Let's say I want to select the 5 most recently active users. The problem is that the following script selects Peter twice:

        mysql_query("SELECT * FROM messages ORDER BY id DESC LIMIT 5");

    What I want is to skip the row when it gets to Peter again and select the next result, in our case Adam. So I don't want to show my visitors that the recently active users were Laura, Tom, Josh, Peter, and Peter again. That does not make any sense; instead I want to show them this way: Laura, Tom, Josh, Peter, (skipping Peter) and Adam. Is there an SQL command I can use for this problem?
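    For illustration, a sketch of one way to collapse the duplicates with GROUP BY (table and column names taken from the question):

        -- latest message id per user, then the five most recently active users
        SELECT user_id, MAX(id) AS last_id
        FROM messages
        GROUP BY user_id
        ORDER BY last_id DESC
        LIMIT 5;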

    Read the article

  • Assigning resources to MS Project 2007

    - by adam
    Hi, I'm planning a redesign of a site in Project 2007. I have three developers to hand, all with the same skills. There are about 80 templates to be rendered as part of the redesign, and each template has been added as a project task. Each of these tasks can be done by any of the 3 devs, and each will take a day (with a few exceptions). There is no order in which the tasks must be completed, so there are no predecessor rules. I'd like to be able to assign tasks to a 'Developer' resource group, and for Project to see that three tasks can be done at once (as the group has three members) and queue the tasks accordingly. Googling leads me to Team Assignment, but that appears to be part of Project Server. Surely I can do this in standalone Project? Thanks, Adam

    Read the article

  • SQL: Speed Improvement - Cluttered union query

    - by vol7ron
    SELECT *
        FROM (
            SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
            FROM current_tbl a
            INNER JOIN import_tbl b ON (a.user_id = b.user_id)
            UNION
            SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
            FROM current_tbl a
            INNER JOIN import_tbl b ON (lower(a.f_name) = lower(b.f_name)
                                    AND lower(a.l_name) = lower(b.l_name))
        ) foo

        UNION

        SELECT a.user_id, a.f_name, a.l_name, '', '', ''
        FROM current_tbl a
        WHERE a.user_id NOT IN (
            SELECT user_id
            FROM (
                SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
                FROM current_tbl a
                INNER JOIN import_tbl b ON (a.user_id = b.user_id)
                UNION
                SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
                FROM current_tbl a
                INNER JOIN import_tbl b ON (lower(a.f_name) = lower(b.f_name)
                                        AND lower(a.l_name) = lower(b.l_name))
            ) bar
        )
        ORDER BY user_id

    Example of table population:

        current_tbl:
        user_id | f_name | l_name
        --------+--------+--------
        A1      | Adam   | Acorn
        A2      | Beth   | Berry
        A3      | Calv   | Chard

        import_tbl:
        user_id | f_name | l_name
        --------+--------+---------
        A1      | Adam   | Acorn
        A2      | Beth   | Butcher   <- last name different

    Expected output:

        user_id1 | f_name1 | l_name1 | user_id2 | f_name2 | l_name2
        ---------+---------+---------+----------+---------+---------
        A1       | Adam    | Acorn   | A1       | Adam    | Acorn
        A2       | Beth    | Berry   | A2       | Beth    | Butcher
        A3       | Calv    | Chard   |          |         |

    Doing it this way gets rid of conditions where the row would be:

        A2 | Beth | Berry | A2 | Beth | Butcher

    but it keeps the A3 row. I hope this makes sense and I haven't overly simplified it. This is a continuation question from my other question. The succession of these improvements has dropped the query down from ~32000ms to its current ~1200ms - quite an improvement. I suspect I can optimize by using UNION ALL in the subquery, and of course the usual index optimizations, but I'm looking for the best SQL optimization; one possible restructuring is sketched below. FYI this particular case is for PostgreSQL.
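    For illustration, a sketch that computes the match set only once via a CTE and replaces NOT IN with NOT EXISTS (assuming a PostgreSQL version with CTE support; the column aliases are assumptions added to disambiguate the two user_id columns):

        WITH matches AS (
            SELECT a.user_id AS cur_id, a.f_name AS cur_f, a.l_name AS cur_l,
                   b.user_id AS imp_id, b.f_name AS imp_f, b.l_name AS imp_l
            FROM current_tbl a
            INNER JOIN import_tbl b ON (a.user_id = b.user_id)
            UNION
            SELECT a.user_id, a.f_name, a.l_name, b.user_id, b.f_name, b.l_name
            FROM current_tbl a
            INNER JOIN import_tbl b ON (lower(a.f_name) = lower(b.f_name)
                                    AND lower(a.l_name) = lower(b.l_name))
        )
        SELECT * FROM matches
        UNION ALL
        SELECT a.user_id, a.f_name, a.l_name, '', '', ''
        FROM current_tbl a
        WHERE NOT EXISTS (SELECT 1 FROM matches m WHERE m.cur_id = a.user_id)
        ORDER BY 1;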

    Read the article

  • Prevent EC2 machine from halt, poweroff, shutdown

    - by Adam Matan
    Hi, EC2 Ubuntu servers erase all disk contents when being shut down. Following an unfortunate accident, I have decided to prevent the command-line halt, poweroff and shutdown. What's the best way to do it? I thought about renaming these commands (in /sbin) to something like HALT_RENAMED___ERASES_ALL_DISK_CONTENTS. Are there any files, other than the three listed above, that need to be handled? I've noticed that halt and poweroff are merely links to reboot. Should reboot be renamed, too? Adam
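    As a sketch of one approach, using dpkg-divert instead of a bare rename so that package upgrades don't quietly restore the originals (paths assume the commands live in /sbin, as on this release):

        # divert each binary; a stray 'halt' then fails with "command not found"
        sudo dpkg-divert --rename --divert /sbin/halt.DISABLED /sbin/halt
        sudo dpkg-divert --rename --divert /sbin/poweroff.DISABLED /sbin/poweroff
        sudo dpkg-divert --rename --divert /sbin/shutdown.DISABLED /sbin/shutdown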

    Read the article

  • How can I make my monitor run at its native resolution under Kubuntu 9.10?

    - by Adam Matan
    Hi, I have installed Kubuntu 9.10 afresh on an HP desktop computer with a Samsung SyncMaster 2243 and an Intel integrated graphics card. The screen resolution is fixed at 1280x1024 instead of the native 1680x1050, which makes my eyes bleed.

        $ lspci -k | grep "VGA" -A2
        00:02.0 VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 10)
                Kernel driver in use: i915
                Kernel modules: i915

    and my xorg.conf:

        /etc/X11$ cat xorg.conf
        Section "Device"
            Identifier "Configured Video Device"
            Driver "vesa"
        EndSection

        Section "Monitor"
            Identifier "Configured Monitor"
        EndSection

        Section "Screen"
            Identifier "Default Screen"
            Monitor "Configured Monitor"
            Device "Configured Video Device"
        EndSection

    Any ideas how to make this driver work? I found no working solutions in Google searches. Thanks, Adam
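    For illustration, the one-line change usually tried first in this situation: the config above loads the generic vesa driver, which caps the resolution, while the kernel is already using i915. Treat this as an assumption to test, not a confirmed fix:

        Section "Device"
            Identifier "Configured Video Device"
            Driver "intel"
        EndSection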

    Read the article

  • Free, Linux-based rescue CD for Windows machines

    - by Adam Matan
    Hi, Too often, I'm called to help a friend who has screwed up a Windows machine by some creative method. The usual remedy is backing up the hard drive contents and reinstalling. Right now, this is done by moving the defective hard drive to my machine. I figured that using a rescue disk running some version of Linux might ease the process. I'm looking for:

        NTFS access
        Partition tools
        A large variety of drivers (network, hard drives, etc.)
        GUI and some rescue wizards a great plus

    Any ideas? Adam

    Read the article

  • T-SQL Tuesday #31 - Logging Tricks with CONTEXT_INFO

    - by Most Valuable Yak (Rob Volk)
    This month's T-SQL Tuesday is being hosted by Aaron Nelson [b | t], fellow Atlantan (the city in Georgia, not the famous sunken city, or the resort in the Bahamas) and covers the topic of logging (the recording of information, not the harvesting of trees) and maintains the fine T-SQL Tuesday tradition begun by Adam Machanic [b | t] (the SQL Server guru, not the guy who fixes cars, check the spelling again, there will be a quiz later).

    This is a trick I learned from Fernando Guerrero [b | t] waaaaaay back during the PASS Summit 2004 in sunny, hurricane-infested Orlando, during his session on Secret SQL Server (not sure if that's the correct title, and I haven't used parentheses in this paragraph yet). CONTEXT_INFO is a neat little feature that's existed since SQL Server 2000 and perhaps even earlier. It lets you assign data to the current session/connection, and maintains that data until you disconnect or change it. In addition to the CONTEXT_INFO() function, you can also query the context_info column in sys.dm_exec_sessions, or even sysprocesses if you're still running SQL Server 2000, if you need to see it for another session.

    While you're limited to 128 bytes, one big advantage that CONTEXT_INFO has is that it's independent of any transactions. If you've ever logged to a table in a transaction and then lost messages when it rolled back, you can understand how aggravating it can be. CONTEXT_INFO also survives across multiple SQL batches (GO separators) in the same connection, so for those of you who were going to suggest "just log to a table variable, they don't get rolled back": HA-HA, I GOT YOU! Since GO starts a new batch all variable declarations are lost.

    Here's a simple example I recently used at work. I had to test database mirroring configurations for disaster recovery scenarios and measure the network throughput. I also needed to log how long it took for the script to run and include the mirror settings for the database in question. I decided to use AdventureWorks as my database model, and Adam Machanic's Big Adventure script to provide a fairly large workload that's repeatable and easily scalable. My test would consist of several copies of AdventureWorks running the Big Adventure script while I mirrored the databases (or not). Since Adam's script contains several batches, I decided CONTEXT_INFO would have to be used. As it turns out, I only needed to grab the start time at the beginning; I could get the rest of the data at the end of the process. The code is pretty small:

        declare @time binary(128) = cast(getdate() as binary(8))
        set context_info @time

        ... rest of Big Adventure code ...

        go
        use master;
        insert mirror_test(server, role, partner, db, state, safety, start, duration)
        select @@servername, mirroring_role_desc, mirroring_partner_instance,
               db_name(database_id), mirroring_state_desc, mirroring_safety_level_desc,
               cast(cast(context_info() as binary(8)) as datetime),
               datediff(s, cast(cast(context_info() as binary(8)) as datetime), getdate())
        from sys.database_mirroring
        where db_name(database_id) like 'Adv%';

    I declared @time as a binary(128) since CONTEXT_INFO is defined that way. I couldn't convert GETDATE() to binary(128) as it would pad the first 120 bytes as 0x00. To keep the CAST functions simple and avoid using SUBSTRING, I decided to CAST GETDATE() as binary(8) and let SQL Server do the implicit conversion. It's not the safest way perhaps, but it works on my machine. :)

    As I mentioned earlier, you can query system views for sessions and get their CONTEXT_INFO. With a little boilerplate code this can be used to monitor long-running procedures, in case you need to kill a process, or are just curious how long certain parts take. In this example, I added code to Adam's Big Adventure script to set CONTEXT_INFO messages at strategic places I want to monitor. (His code is in UPPERCASE as it was in the original, mine is all lowercase):

        declare @msg binary(128)
        set @msg = cast('Altering bigProduct.ProductID' as binary(128))
        set context_info @msg
        go
        ALTER TABLE bigProduct ALTER COLUMN ProductID INT NOT NULL
        GO
        set context_info 0x0
        go
        declare @msg1 binary(128)
        set @msg1 = cast('Adding pk_bigProduct Constraint' as binary(128))
        set context_info @msg1
        go
        ALTER TABLE bigProduct ADD CONSTRAINT pk_bigProduct PRIMARY KEY (ProductID)
        GO
        set context_info 0x0
        go
        declare @msg2 binary(128)
        set @msg2 = cast('Altering bigTransactionHistory.TransactionID' as binary(128))
        set context_info @msg2
        go
        ALTER TABLE bigTransactionHistory ALTER COLUMN TransactionID INT NOT NULL
        GO
        set context_info 0x0
        go
        declare @msg3 binary(128)
        set @msg3 = cast('Adding pk_bigTransactionHistory Constraint' as binary(128))
        set context_info @msg3
        go
        ALTER TABLE bigTransactionHistory ADD CONSTRAINT pk_bigTransactionHistory PRIMARY KEY NONCLUSTERED(TransactionID)
        GO
        set context_info 0x0
        go
        declare @msg4 binary(128)
        set @msg4 = cast('Creating IX_ProductId_TransactionDate Index' as binary(128))
        set context_info @msg4
        go
        CREATE NONCLUSTERED INDEX IX_ProductId_TransactionDate ON bigTransactionHistory(ProductId, TransactionDate) INCLUDE(Quantity, ActualCost)
        GO
        set context_info 0x0

    This doesn't include the entire script, only those portions that altered a table or created an index. One annoyance is that SET CONTEXT_INFO requires a literal or variable; you can't use an expression. And since GO starts a new batch I need to declare a variable in each one. And of course I have to use CAST because it won't implicitly convert varchar to binary. And even though context_info is a nullable column, you can't SET CONTEXT_INFO NULL, so I have to use SET CONTEXT_INFO 0x0 to clear the message after the statement completes. And if you're thinking of turning this into a UDF, you can't, although a stored procedure would work.

    So what does all this aggravation get you? As the code runs, if I want to see which stage the session is at, I can run the following (assuming SPID 51 is the one I want):

        select cast(context_info as varchar(128))
        from sys.dm_exec_sessions
        where session_id = 51

    Since SQL Server 2005 introduced the new system and dynamic management views (DMVs) there's not as much need for tagging a session with these kinds of messages. You can get the session start time and currently executing statement from them, neatly presented if you use Adam's sp_whoisactive utility (and you absolutely should be using it). Of course you can always use xp_cmdshell, a CLR function, or some other tricks to log information outside of a SQL transaction. All the same, I've used this trick to monitor long-running reports at a previous job, and I still think CONTEXT_INFO is a great feature, especially if you're still using SQL Server 2000 or want to supplement your instrumentation. If you'd like an exercise, consider adding the system time to the messages in the last example (see the sketch below), and an automated job to query and parse it from the system tables. That would let you track how long each statement ran without having to run Profiler. #TSQL2sDay
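    For illustration, a sketch of that exercise - prepending an 8-byte GETDATE() to each message and parsing it back out. This is one assumed way to pack the bytes, not code from the original post:

        declare @msg binary(128)
        set @msg = cast(getdate() as binary(8)) + cast('Altering bigProduct.ProductID' as binary(120))
        set context_info @msg

        -- later, from another session: split the timestamp and the text back apart
        select cast(substring(context_info, 1, 8) as datetime) as stage_start,
               cast(substring(context_info, 9, 120) as varchar(120)) as stage_msg
        from sys.dm_exec_sessions
        where session_id = 51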

    Read the article
