Search Results

Search found 1354 results on 55 pages for 'rob forrest'.


  • Windows Azure WebRole stuck in a deployment loop

    - by Rob G
    I've been struggling with this one for a couple of days now. My current Windows Azure WebRole is stuck in a loop where the status keeps changing between Initializing, Busy, Stopping and Stopped. It never goes live, and I can never see the website as a result. The WebRole is an "out of the box" MVC 2 application with Copy Local set to true on the Mvc dll. I haven't even tried hooking up storage or a WorkerRole yet, and there is nothing really happening inside the Start method that I can see would crash. I've really tried going back to basics to ensure nothing can complicate the process, and the website launches without a problem on the Dev Fabric - and yes, it looks just like the standard "Home"/"About" MVC app - I just can't get it running in the cloud! Funny thing is, a few days ago this exact package worked on the staging area in the cloud, and I could even see it in the browser - but I could never get it swapped over to production, so I deleted everything and started from scratch, and now I can't even get it running on staging... Does anyone have any ideas on what I could do to diagnose this problem myself? Since logging this problem on the forums 2 days ago, there has been no improvement or feedback. Any help appreciated, Regards, Rob G

    Read the article

  • Error Log states that I have MySQL connect error, yet script runs fine

    - by rob - not a robber
    Hello All, First, thanks for all the help I've received so far from StackOverflow. I've learned much. Once again, I'm posing a rudimentary question that I've searched on, but cannot find the exact answer to, here or on PHP.net. It's sort of like what this guy asked, but not exactly: http://stackoverflow.com/questions/288603/mysql-throwing-query-error-yet-finishing-query-just-fine-why So, I saw my error log ballooning up when I checked my site directory, and opened it to find that a bunch of errors have been recorded since I wrote this new Admin area. I know something is obviously awry with my scripting for the error to be thrown, but the weird thing is, the script actually runs through and pulls all the data I need without breaking. The log contains:

        PHP Warning: mysql_query() [function.mysql-query]: Access denied for user 'someuser'@'localhost' (using password: NO) in /home/mysite/adminconsole.php on line 15

    I don't get that, because that very line is where I set up my connection... the exact same way I do it everywhere else on the site with no problem. After that error, I have these thrown at the same time:

        [09-Apr-2010 08:44:18] PHP Warning: mysql_query() [function.mysql-query]: A link to the server could not be established in /home/mysite/adminconsole.php on line 15
        [09-Apr-2010 08:44:18] PHP Warning: mysql_fetch_array(): supplied argument is not a valid MySQL result resource in /home/mysite/adminconsole.php on line 16

    From what I read in the other guy's thread, the problem may be the contents of the query? Maybe my query is malformed? Thanks so much for any guidance you can provide. -Rob
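
    For what it's worth, the "using password: NO" wording usually means mysql_query() ran without a valid link, so PHP attempted an implicit connection with default (empty) credentials. A minimal sketch of making the link explicit so any connect failure surfaces before the query (credentials, database and query here are placeholders, not the site's real ones):

        <?php
        // Sketch: create the link explicitly and check it before querying,
        // then pass it to mysql_query() so no implicit connection can happen.
        $link = mysql_connect('localhost', 'someuser', 'somepassword');
        if (!$link) {
            die('Connect failed: ' . mysql_error());
        }
        mysql_select_db('mydb', $link) or die('DB select failed: ' . mysql_error($link));

        $result = mysql_query('SELECT * FROM sometable', $link);  // pass $link explicitly
        if ($result === false) {
            die('Query failed: ' . mysql_error($link));           // surfaces the real error
        }
        while ($row = mysql_fetch_array($result)) {
            // ... use $row ...
        }
        ?>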

    Read the article

  • Why isn't UIScrollView always calling viewDidLoad on subviews?

    - by Rob S.
    I'm a bit confused on the behaviour of UIScrollView as it pertains to subview loading. In my app, I lazily load subviews into my scroller. Most of the time, -viewDidLoad is called on the subview immediately after adding it to the UIScrollView. There is one scenario where it isn't being called. At the 'end' of my scroll view I have a "please wait" view. When it is fully scrolled onto the page, it fades out, I add the subview, and -viewDidLoad is not called. In this case, when I remove the last subview and add another subview, I get nothing. I've tried [scrollView setNeedsDisplay] and [scrollView setNeedsLayout] to no avail. I've also sent the same messages to the view I just added - no dice. Does anyone have any insight here? Many people have many questions about -viewDidLoad and I haven't been able to find one related to direct subviews inside of a scrollview. Or I have and I haven't realized it :) Thanks in advance! Rob
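
    One possible explanation, offered as a guess: -viewDidLoad belongs to UIViewController and fires when the controller first loads its view (typically on the first access to controller.view), not as a side effect of -addSubview:, which may be why it only appears to fire "immediately after adding". If a hook on the view itself is what's needed, UIView offers -didMoveToSuperview. A minimal sketch:

        // In a UIView subclass: called whenever the view is added to
        // (or removed from) a superview, including a UIScrollView.
        - (void)didMoveToSuperview
        {
            [super didMoveToSuperview];
            if (self.superview != nil) {
                // just been added - do per-page setup here
            }
        }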

    Read the article

  • Passing custom Python objects to nosetests

    - by Rob
    I am attempting to re-organize our test libraries for automation and nose seems really promising. My question is, what is the best strategy for passing Python objects into nose tests? Our tests are organized in a testlib with a bunch of modules that exercise different types of request operations. Something like this:

        testlib
         \- testmoda
         \- testmodb
         \- testmodc

    In some cases a test module (e.g. testmoda) is nothing but test_something1(), test_something2() functions, while in other cases we have a TestModB class in testmodb with test_anotherthing1(), test_anotherthing2() functions. The cool thing is that nose easily finds both. Most of those test functions are request factory stuff that can easily share a single connection to our server farm. Thus we do a lot of test_something1(cnn), TestModB.test_anotherthing2(cnn), etc. Currently we don't use nose; instead we have a hodge-podge of homegrown driver scripts with hard-coded lists of tests to execute. Each of those driver scripts creates its own connection object. Maintaining those scripts and the connection minutiae is painful. I'd like to take full advantage of nose's beautiful discovery functionality, passing in a connection object of my choosing. Thanks in advance! Rob P.S. The connection objects are not pickle-able. :(
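
    One approach that fits the layout above, sketched under the assumption that a package-level fixture is acceptable: nose runs setup_package()/teardown_package() from a package's __init__.py once per run, so the unpicklable connection can be built there and shared by every module (make_connection() is a hypothetical factory, not a real API):

        # testlib/__init__.py
        connection = None

        def setup_package():
            # nose calls this once, before any test module in the package runs
            global connection
            connection = make_connection()  # hypothetical factory for the farm connection

        def teardown_package():
            # nose calls this once, after the whole package finishes
            if connection is not None:
                connection.close()

        # testlib/testmoda.py
        import testlib

        def test_something1():
            cnn = testlib.connection  # resolved at call time, after setup_package ran
            assert cnn is not None
            # ... exercise a request operation against cnn ...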

    Read the article

  • sending HTML email with a variable in the URL

    - by Rob Crouch
    I am using the following script to send a dynamic page (php) as an HTML email... the page being emailed uses a variable in the URL to determine what record is shown from a database:

        <?
        $evid = $_GET['evid'];
        $to = '[email protected]';
        $subject = 'A test email!';

        // To send HTML mail, the Content-type header must be set
        $headers = 'MIME-Version: 1.0' . "\r\n";
        $headers .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";

        // Put your HTML here
        $message = file_get_contents('http://www.url.co.uk/diary/i.php?evid=2');

        // Mail it
        mail($to, $subject, $message, $headers);
        ?>

    As you can see, the HTML file being emailed has an evid variable... if I set this to $evid and try to send the variable when running the current script, I get an error... does anyone know of a way round this? Hope I explained that clearly enough. Rob
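
    A sketch of one way to splice the incoming evid into the fetched URL - concatenating outside the quotes and encoding it so the query string stays well-formed (the validation shown is illustrative):

        <?php
        $evid = isset($_GET['evid']) ? $_GET['evid'] : '';

        // Build the URL from the variable instead of hard-coding evid=2.
        // urlencode() keeps the query string valid; cast with (int) instead
        // if evid is always numeric.
        $url = 'http://www.url.co.uk/diary/i.php?evid=' . urlencode($evid);

        $message = file_get_contents($url);
        if ($message === false) {
            die('Could not fetch the page to email');  // requires allow_url_fopen
        }
        ?>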

    Read the article

  • How do I run a command as a different user from a root cronjob?

    - by rob
    I seem to be stuck between an NFS limitation and a cron limitation. I've got root's cron (on RHEL5) running a shell script that, among other things, needs to rsync some files over an NFS mount. The files on the NFS mount are owned by the apache user with mode 700, so only the apache user can run the rsync command -- running as root yields a permission error (NFS being a rare case, apparently, where the root user is not all-powerful -- presumably root squashing). When I just want to run the rsync by hand, I can use "sudo -u apache rsync ...". But sudo doesn't work in cron -- it says "sudo: sorry, you must have a tty to run sudo". I don't want to run the whole script as apache (i.e. from apache's crontab) because other parts of the script do require root -- it's just that one command that needs to run as apache. And I would really prefer not to change the mode on the files, as that would involve significant changes to other applications. There's gotta be a way to accomplish "sudo -u apache" from cron?? Thanks! rob
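
    A sketch of two common workarounds, assuming a stock RHEL5 setup (paths and the rsync arguments are illustrative): su doesn't carry sudo's default requiretty restriction, or requiretty can be relaxed for root in sudoers:

        # Inside the root-owned script: drop to apache just for the rsync.
        # -s overrides apache's /sbin/nologin shell for this one command.
        su -s /bin/bash apache -c "rsync -a /local/src/ /mnt/nfs/dest/"

        # Alternative: in /etc/sudoers (edited via visudo), let root skip the
        # tty requirement, then keep the original "sudo -u apache rsync ..." line:
        #   Defaults:root !requiretty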

    Read the article

  • Windows Mobile : How to bind dropdown's selectedvalue to a column in table A and the list data to a

    - by Rob
    Hi, I am trying to learn the basics of Windows Mobile development against SQL CE and have come across a basic problem. I have two tables. One called Customers that stores customer info and has an identity column called ID as the primary key. The other table is called Orders which has a column called CustomerID (the FK constraint is present). I have added a DataSet to the project that contains both tables and have autogenerated the edit/view forms. This has produced a text control for the CustomerID column in the Order table for the new/edit form and I deleted it and replaced it with a dropdown list. Then, using the 'Advanced' databinding options (in Properties) I set the datasource of the list to the Customers table setting the value to the ID field and the text to the CustomerName field. I then set the SelectedValue of the list box to the CustomerID field of the Orders dataset. So far so good. When I run the app in the emulator and view the 'New' form for Orders the Customer dropdown is indeed populated with a list of customer names and I can select one and happily create a new order successfully. This is confirmed when I see the order appear in the Orders Grid form. However, when I then click on the order in the grid and then select 'Edit' the order loads but the dropdown always shows the first customer in the list and doesn't seem to bind the SelectedValue to the Orders dataset CustomerID field. Now I am an ASP.NET guy and normally hand craft the DAL and it's binding to the UI so I'm not entirely sure where to look to investigate what is going wrong here as this is all generated code. I am sure it is something very trivial but any pointers would be appreciated. My gut feeling is that the SelectedValue and the Customers.CustomerID values do not match for some reason? Many thanks, Rob.

    Read the article

  • Multiple SessionFactories in Windows Service with NHibernate

    - by Rob Taylor
    Hi all, I have a webapp which connects to 2 DBs (one core, the other is a logging DB). I must now create a Windows service which will use the same business logic/data access DLLs. However, when I try to reference 2 session factories in the service app and call the factory.GetCurrentSession() method, I get the error message "No session bound to current context". Does anyone have a suggestion about how this can be done?

        public class StaticSessionManager
        {
            public static readonly ISessionFactory SessionFactory;
            public static readonly ISessionFactory LoggingSessionFactory;

            static StaticSessionManager()
            {
                Configuration cfg = new Configuration();
                string fileName = System.Configuration.ConfigurationSettings.AppSettings["DefaultNHihbernateConfigFile"];
                string executingPath = System.IO.Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase);
                fileName = executingPath + "\\" + fileName;
                SessionFactory = cfg.Configure(fileName).BuildSessionFactory();

                cfg = new Configuration();
                fileName = System.Configuration.ConfigurationSettings.AppSettings["LoggingNHihbernateConfigFile"];
                fileName = executingPath + "\\" + fileName;
                LoggingSessionFactory = cfg.Configure(fileName).BuildSessionFactory();
            }
        }

    The configuration file has the setting:

        <property name="current_session_context_class">call</property>

    The service sets up the factories:

        private ISession _session = null;
        private ISession _loggingSession = null;
        private ISessionFactory _sessionFactory = StaticSessionManager.SessionFactory;
        private ISessionFactory _loggingSessionFactory = StaticSessionManager.LoggingSessionFactory;
        ...
        _sessionFactory = StaticSessionManager.SessionFactory;
        _loggingSessionFactory = StaticSessionManager.LoggingSessionFactory;
        _session = _sessionFactory.OpenSession();
        NHibernate.Context.CurrentSessionContext.Bind(_session);
        _loggingSession = _loggingSessionFactory.OpenSession();
        NHibernate.Context.CurrentSessionContext.Bind(_loggingSession);

    So finally, I try to call the correct factory by:

        ISession session = StaticSessionManager.SessionFactory.GetCurrentSession();

    Can anyone suggest a better way to handle this? Thanks in advance! Rob
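
    One thing worth checking, sketched here as an assumption about the service's threading rather than a confirmed fix: with current_session_context_class set to "call", the binding lives in the remoting CallContext, which a Windows service's worker threads won't necessarily share. With "thread_static" instead, each worker thread binds and unbinds its own pair of sessions:

        // assumes <property name="current_session_context_class">thread_static</property>
        // in both NHibernate config files
        using (ISession session = StaticSessionManager.SessionFactory.OpenSession())
        using (ISession logging = StaticSessionManager.LoggingSessionFactory.OpenSession())
        {
            NHibernate.Context.CurrentSessionContext.Bind(session);
            NHibernate.Context.CurrentSessionContext.Bind(logging);
            try
            {
                // each factory resolves its own bound session on this thread
                ISession s  = StaticSessionManager.SessionFactory.GetCurrentSession();
                ISession ls = StaticSessionManager.LoggingSessionFactory.GetCurrentSession();
                // ... do work ...
            }
            finally
            {
                NHibernate.Context.CurrentSessionContext.Unbind(StaticSessionManager.SessionFactory);
                NHibernate.Context.CurrentSessionContext.Unbind(StaticSessionManager.LoggingSessionFactory);
            }
        }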

    Read the article

  • Is A Web App Feasible For A Heavy Use Data Entry System?

    - by Rob
    Looking for opinions on this: we're working on a project that is essentially a data entry system for a production line. Heavy data input by users who normally work in Excel or other thick-client data systems. We've been told (as a consequence) that we have to develop this as a thick client using .NET. Our argument was to develop it as a web app, as that resolves a lot of issues and would be easier to write and maintain. Their argument against the web is that (supposedly) the web is not ready yet for a heavy-duty data entry system, and that the web in a browser does not offer the speed, responsiveness, and fluid experience for the end-user that a thick client can (citing things such as drag and drop, rapid auto-entry and data navigation, etc.). Personally, I think that with good form design and jQuery/AJAX, a web app could do everything a thick client does just as well, and they just don't know what they're talking about. The irony is that a thick client has to go to a lot more effort to manage the deployment and connectivity back to the central data server than a web app would need to, so in terms of speed I would expect a web app to be faster. What are the thoughts of those out there? Are there any modern, heavy-use data entry systems in production that have been developed as web apps, and with which technologies? Appreciate any feedback. Regards, Rob.

    Read the article

  • Java client server sending bytes receiver listens indefinitely

    - by Rob
    Hello, I'm trying to write a Java program for Windows that involves communication with a server program located on a foreign machine. My program successfully connects to the server, successfully writes a byte array to it, and waits for a response. I know that the server is printing bytes (the response) back to me one byte at a time. I've tried using a DataInputStream object with various methods (read, readByte etc.), and I've tried using a BufferedReader object with its methods (read, readLine etc.), but all the reader objects and various methods that I've used come up against the same problem. The bytes are being successfully read (each time a byte or bytes are read, I can print them to the console, and they are what I'd expect them to be). The problem is that my reader doesn't know when to stop reading. Even if the server has sent all its bytes, the read function on my end waits for more data, indefinitely, and so the program hangs at the read call. This problem seems to affect all the techniques that I have tried. I've been running tests with a simple client program and server program, each about 40 or 50 lines long, where the client connects to the server and sends some bytes to it. All the techniques I've tried for the server's reader result in the same problem mentioned above (the server hangs waiting for more input from the client, even though the client has sent all its data). I'm really desperate for some help on this. It's important that I get this program finished soon, and it's basically complete except for this communication issue. Any help is much appreciated! -Rob
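
    For what it's worth, a blocking read can only report "no more data" when the peer closes its side or the protocol itself marks message boundaries; otherwise the reader cannot tell "done" from "slow". A sketch of the two usual fixes, assuming one side of the protocol can be chosen (java.io/java.net imports assumed; socket is an already-connected java.net.Socket):

        // Option 1: agreed length prefix - the sender writes a 4-byte length first.
        DataInputStream in = new DataInputStream(socket.getInputStream());
        int len = in.readInt();          // blocks until the 4-byte prefix arrives
        byte[] payload = new byte[len];
        in.readFully(payload);           // blocks until exactly len bytes are read

        // Option 2: read to end-of-stream - works if the sender closes (or
        // shuts down) its output when finished; read() then returns -1.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) {
            buf.write(b);
        }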

    Read the article

  • Using pointers to adjust global objects in objective-c

    - by Rob
    Ok, so I am working with two sets of data that are extremely similar, and at the same time, these data sets are both global NSMutableArrays within the object:

        data_set_one = [[NSMutableArray alloc] init];
        data_set_two = [[NSMutableArray alloc] init];

    Two new NSMutableArrays are loaded, which need to be added to the old, existing data. These arrays are also global:

        xml_dataset_one = [[NSMutableArray alloc] init];
        xml_dataset_two = [[NSMutableArray alloc] init];

    To reduce code duplication (and because these data sets are so similar) I wrote a void method within the class to handle the data combination process for both arrays:

        -(void)constructData:(NSMutableArray *)data fromDownloadArray:(NSMutableArray *)down withMatchSelector:(NSString *)sel_str

    Now, I have a decent understanding of object oriented programming, so I was thinking that if I were to invoke the method with the global arrays like so...

        [self constructData:data_set_one fromDownloadArray:xml_dataset_one withMatchSelector:@"id"];

    ...then the global NSMutableArray (data_set_one) would reflect the changes that happen to the array within the method. Sadly, this is not the case: data_set_one doesn't reflect the changes (ex: new objects within the array) outside of the method. Here is a code snippet of the problem:

        // data_set_one is empty
        // xml_dataset_one has a few objects
        [self constructData:data_set_one fromDownloadArray:xml_dataset_one withMatchSelector:@"id"];
        // data_set_one should now match xml_dataset_one, but when echoed to screen, it appears to remain empty

    And here is the gist of the code for the method; any help is appreciated:

        -(void)constructData:(NSMutableArray *)data fromDownloadArray:(NSMutableArray *)down withMatchSelector:(NSString *)sel_str
        {
            if ([data count] == 0) {
                data = down; // set data equal to downloaded data
            } else if ([down count] == 0) {
                // download yields no results, do nothing
            } else {
                // combine the two arrays here
            }
        }

    This project is not ARC enabled. Thanks for the help guys! Rob
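
    A sketch of the likely fix, assuming the intent is to fill the caller's array: "data = down;" only repoints the method's local copy of the pointer, so the caller's data_set_one never changes; mutating the object the pointer refers to does propagate:

        -(void)constructData:(NSMutableArray *)data fromDownloadArray:(NSMutableArray *)down withMatchSelector:(NSString *)sel_str
        {
            if ([data count] == 0) {
                // mutate the array the caller's pointer refers to,
                // instead of reassigning the local parameter
                [data addObjectsFromArray:down];
            } else if ([down count] == 0) {
                // download yields no results, do nothing
            } else {
                // combine the two arrays here
            }
        }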

    Read the article

  • DBIx::Class base result class

    - by Rob
    Hi there, I am trying to create a model for Catalyst by using DBIx::Class::Schema::Loader. I want the result classes to have a base class I can add methods to. So MyTable.pm inherits from Base.pm, which inherits from DBIx::Class::Core (the default). Somehow I cannot figure out how to do this. My create script is below; can anyone tell me what I am doing wrong? The script creates my model OK, but all result classes just directly inherit from DBIx::Class::Core without my Base class in between.

        #!/usr/bin/perl
        use DBIx::Class::Schema::Loader qw/ make_schema_at /;

        # specifically for the entities many-2-many relation
        $ENV{DBIC_OVERWRITE_HELPER_METHODS_OK} = 1;

        make_schema_at(
            'MyApp::Schema',
            {
                dump_directory => '/tmp',
                debug => 1,
                overwrite_modifications => 1,
                components => ['EncodedColumn'], # encoded password column
                use_namespaces => 1,
                default_resultset_class => 'Base'
            },
            [ 'DBI:mysql:database=mydb;host=localhost;port=3306', 'rob', '******' ],
        );
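
    A sketch of what may be the mix-up, hedged since option support depends on the Schema::Loader version: default_resultset_class sets the base for ResultSet classes, while result_base_class is the option that puts a base class under the generated Result (row) classes:

        make_schema_at(
            'MyApp::Schema',
            {
                dump_directory    => '/tmp',
                use_namespaces    => 1,
                components        => ['EncodedColumn'],
                # each generated Result class will now inherit from this base,
                # which can in turn inherit from DBIx::Class::Core
                result_base_class => 'MyApp::Schema::Base',
            },
            [ 'DBI:mysql:database=mydb;host=localhost;port=3306', 'rob', '******' ],
        );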

    Read the article

  • Resize iframe to show first element works except in IE 7

    - by Rob Fenwick
    I have two iframes on my home page. The script below is in the head of the page that is being displayed in the iframe; there are several divisions on the page in a container div with an id of 'content'. I want to size the iframe on the home page so that just the first div is initially seen, and to scroll to see the rest. It is working in all browsers that I have tried except IE 7; I don't care too much about earlier browsers. IE 7 is acting like the page being shown is blank and sizing the iframe to 0 height. Can someone tell me why IE 7 is having a problem with it, and failing that, how can I get IE 7 to ignore the script?

        function resizeIframe() {
            // get the firstChild of a container div with the id 'content'
            var div01 = document.getElementById("content").firstChild;

            // find the first element, ignoring white spaces and returns
            while (div01.nodeType != 1) {
                div01 = div01.nextSibling;
            }

            // get the height of the first element
            var boxHeight = div01.clientHeight;

            // set the height of the iframe (the id of the iframe is 'news')
            parent.document.getElementById('news').height = boxHeight;
        }

    I have the function called in the body tag. If someone could help me I'd very much appreciate it. The page that it is on is at wsuu.org. The version with the script is not up, but you can get an idea of what I'm trying to do. Rob
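
    One guess at the IE 7 behaviour: if the script runs before the iframe's content has laid out, clientHeight can come back as 0 in old IE. A sketch that defers the measurement to onload and falls back to offsetHeight, which old IE tends to report more reliably (setting style.height with a px unit is also the more portable form):

        window.onload = function () {
            var div01 = document.getElementById("content").firstChild;
            while (div01.nodeType != 1) {
                div01 = div01.nextSibling;
            }
            // fall back to offsetHeight when clientHeight is 0
            var boxHeight = div01.clientHeight || div01.offsetHeight;
            parent.document.getElementById('news').style.height = boxHeight + 'px';
        };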

    Read the article

  • AGENT: The World's Smartest Watch

    - by Rob Chartier
    AGENT: The World's Smartest Watch by Secret Labs + House of Horology

    Disclaimer: Most if not all of this content has been gleaned from the comments on the Kickstarter project page and comments section. Where there are discrepancies between this post and any documentation on agentwatches.com, kickstarter.com, etc., those official sites take precedence.

    Overview
    The next generation smartwatch with brand-new technology. World-class developer tools, unparalleled battery life, Qi wireless charging. Kickstarter Page, Comments. Funding period: May 21, 2013 - Jun 20, 2013. MSRP: $249.

    Other URLs
    - http://www.agentwatches.com/
    - https://www.facebook.com/agentwatches
    - http://twitter.com/agentwatches
    - http://pinterest.com/agentwatches/
    - http://paper.li/robchartier/1371234640

    Developer Story
    The first official launch of the preview SDK and emulator will happen on 20-Jun-2013. All development will be done in Visual Studio 2012, using the .NET Micro Framework SDK 2.3. The SDK will ship with the first round of the expected API for developers, along with an emulator. With that said, there is no need to wait for the SDK. You can download the tooling now and get started with Apps and Faces immediately. The only thing that you will not be able to work with is the API; but for watch faces, for example, you can start building the basic face rendering with the Bitmap graphics drawing in the .NET Micro Framework.

    Does it look good?
    Before we dig into any more of the gory details, a few notes on the photos of the currently available prototype models: the watch on the tiny Qi charger; the phone-lost alert (if you wander too far away from your phone, your watch will let you know with a vibration and a message - all but one button will dismiss the message); an app showing the premium weather data; and the nice stitching on the straps (leather and silicone will be available, along with a few lengths to choose from: short, regular and long).

    On to those gory details...

    Hardware Specs
    - Processor: 120MHz ARM Cortex-M4 processor (ATSAM4SD32) with secondary AVR co-processor.
    - Flash & RAM: 2MB of onboard flash and 160KB of RAM. 1/4 of the onboard flash will be used by the OS. The flash is permanent (non-volatile) storage.
    - Bluetooth: Bluetooth 4.0 BD/EDR + LE. Bluetooth 4.0 is backwards compatible with Bluetooth 2.1, so classic Bluetooth functions (BD/EDR, SPP/AVRCP/PBAP/etc.) will work fine.
    - Sensors: 3D accelerometer (motion) ST LSM303DLHC, ambient light sensor, hardware power metering.
    - Vibration motor: you can pulse it to create vibration patterns (not sure about the vibration strength - driven with PWM). No piezo/speaker or microphone.
    - Other: Qi wireless charging (no wall adapter included), no NFC, custom LED backlight. No GPS in the watch - it uses the GPS in your phone. AGENT watch apps are deployed and debugged wirelessly from your PC via Bluetooth. RoHS, Pb-free.

    Battery
    - Expected to use a CR2430-sized rechargeable battery - replaceable (Mouser, Amazon).
    - Estimated charging time from empty is 2 hours with the provided charger.
    - 7 days typical with Bluetooth on, 30 days with Bluetooth off (watch-face-only mode).
    - The battery should last at least 2 years, with 100s of charge cycles.

    Physical dimensions
    - Roughly 38mm top-to-bottom on the front face, 35mm left-to-right on the front face, and around 12mm in depth.
    - 22mm strap; two ~1/16" hex screws attach the watch pin.
    - The top watchcase material candidates are PVD stainless steel, brushed matte ceramic, and high-quality polycarbonate (TBD). The lens is anti-glare mineral glass.

    Strap options
    - Leather and silicon straps will be available, expected in three sizes.

    Display
    - 1.28" Sharp Memory Display, 128x128 pixels. The display stays on 100% of the time.

    Buttons
    - Custom "pusher" buttons; they will not make noise like a mouse click, and are very durable.
    - The top-left button activates the backlight; the bottom-left changes apps; the three buttons on the right are up/select/down and can be used for custom purposes by apps.
    - The backup reset procedure is currently activated by holding the home/menu button and the top-right user button for about ten seconds.

    Device Support
    - Android 2.3 or newer, iPhone 4S or newer, Windows Phone 8 or newer.
    - Heart rate monitors: Bluetooth SPP or Bluetooth LE (GATT) is what you'll want the heart monitor to support. Almost limitless Bluetooth device support!

    Internationalization & Localization
    - Full UTF8 support from the ground up. AGENT's user interface is in English; your content (caller ID, music tracks, notifications) will be in your native language.
    - We have a plan to cover most major character sets, with Latin characters pre-loaded on the watch. Simplified Chinese will be available.

    Feature overview
    - Phone lost alert; caller ID; music control (possibly volume control).
    - Wireless charging; timer; stopwatch; vibrating alarm (possibly custom vibrations for caller ID).
    - A few default watch faces; airplane mode (on demand or on low power); can be turned off completely.
    - Customizable 3rd-party watch faces and applications, which can be loaded over Bluetooth.
    - Sample apps that may be installed: Weather. Sample apps not installed: Exercise App.

    Other
    - Possible Skype integration over Bluetooth.
    - They will provide an AGENT app for your smartphone (iPhone, Android, Windows Phone). You'll be able to use it to load apps onto the watch.
    - You will be able to cancel phone calls. With compatible phones you can also answer, end, etc. They are adopting the standard hands-free profile to provide these features and caller ID.

    Read the article

  • Fixing Robocopy for SQL Server Jobs

    - by Most Valuable Yak (Rob Volk)
    Robocopy is one of, if not the, best life-saving/greatest-thing-since-sliced-bread command line utilities ever to come from Microsoft.  If you're not using it already, what are you waiting for? Of course, being a Microsoft product, it's not exactly perfect. ;)  Specifically, it sets the ERRORLEVEL to a non-zero value even if the copy is successful.  This causes a problem in SQL Server job steps, since non-zero ERRORLEVELs report as failed. You can work around this by having your SQL job go to the next step on failure, but then you can't determine if there was a genuine error.  Plus you still see annoying red X's in your job history.  One way I've found to avoid this is to use a batch file that runs Robocopy, and I add some commands after it (the SET and EXIT lines below):

        robocopy d:\backups \\BackupServer\BackupFolder *.bak

        rem suppress successful robocopy exit statuses, only report genuine errors (bitmask 16 and 8 settings)
        set /A errlev="%ERRORLEVEL% & 24"

        rem exit batch file with errorlevel so SQL job can succeed or fail appropriately
        exit /B %errlev%

    (The REM statements are simply comments and don't need to be included in the batch file.) The SET command lets you use expressions when you use the /A switch.  So I set an environment variable "errlev" to a bitwise AND with the ERRORLEVEL value. Robocopy's exit codes use a bitmap/bitmask to specify its exit status.  The bits for 1, 2, and 4 do not indicate any kind of failure, but 8 and 16 do.  So by adding 16 + 8 to get 24, and doing a bitwise AND, I suppress any of the other bits that might be set, and allow either or both of the error bits to pass. The next step is to use the EXIT command with the /B switch to set a new ERRORLEVEL value, using the "errlev" variable.  This will now return zero (unless Robocopy had real errors) and allow your SQL job step to report success. This technique should also work for other command-line utilities.  The only issue I've found is that it requires the commands to be part of a batch file, so if you use Robocopy directly in your SQL job step you'd need to place it in a batch.  If you also have multiple Robocopy calls, you'll need to place the SET /A command ONLY after the last one.  You'd therefore lose any errors from previous calls, unless you use multiple "errlev" variables and combine them. (I'll leave this as an exercise for the reader; see the sketch below.) The SET /A syntax also permits other kinds of expressions to be calculated.  You can get a full list by running "SET /?" on a command prompt.
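
    A sketch of the multi-call case left as an exercise above (paths are illustrative). Note it uses bitwise OR rather than AND to combine the masked statuses, so that either call's error bits survive into the final exit code:

        robocopy d:\backups \\BackupServer\BackupFolder1 *.bak
        set /A err1="%ERRORLEVEL% & 24"

        robocopy d:\logs \\BackupServer\BackupFolder2 *.trn
        set /A err2="%ERRORLEVEL% & 24"

        rem combine the masked statuses; non-zero if either call had a real error
        set /A errlev="%err1% | %err2%"
        exit /B %errlev%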

    Read the article

  • SharpDX: best practice for multiple RenderForms?

    - by Rob Jellinghaus
    I have an XNA app, but I really need to add multiple render windows, which XNA doesn't do. I'm looking at SharpDX (both for multi-window support and for DX11 / Metro / many other reasons). I decided to hack up the SharpDX DX11 MiniCubeTexture sample to see if I could make it work. My changes are pretty trivial. The original sample had:

        [STAThread]
        private static void Main()
        {
            var form = new RenderForm("SharpDX - MiniCubeTexture Direct3D11 Sample");
            ...

    I changed this to:

        struct RenderFormWithActions
        {
            internal readonly RenderForm Form;
            // should just be Action but it's not in System namespace?!
            internal readonly Action<int> RenderAction;
            internal readonly Action<int> DisposeAction;

            internal RenderFormWithActions(RenderForm form, Action<int> renderAction, Action<int> disposeAction)
            {
                Form = form;
                RenderAction = renderAction;
                DisposeAction = disposeAction;
            }
        }

        [STAThread]
        private static void Main()
        {
            // hackity hack
            new Thread(new ThreadStart(() =>
            {
                RenderFormWithActions form1 = CreateRenderForm();
                RenderLoop.Run(form1.Form, () => form1.RenderAction(0));
                form1.DisposeAction(0);
            })).Start();

            new Thread(new ThreadStart(() =>
            {
                RenderFormWithActions form2 = CreateRenderForm();
                RenderLoop.Run(form2.Form, () => form2.RenderAction(0));
                form2.DisposeAction(0);
            })).Start();
        }

        private static RenderFormWithActions CreateRenderForm()
        {
            var form = new RenderForm("SharpDX - MiniCubeTexture Direct3D11 Sample");
            ...

    Basically, I split out all the Main() code into a separate method which creates a RenderForm and two delegates (a render delegate, and a dispose delegate), and bundles them all together into a struct. I call this method twice, each time from a separate, new thread. Then I just have one RenderLoop on each new thread. I was thinking this wouldn't work because of the [STAThread] declaration -- I thought I would need to create the RenderForm on the main (STA) thread, and run only a single RenderLoop on that thread. Fortunately, it seems I was wrong. This works quite well -- if you drag one of the forms around, it stops rendering while being dragged, but starts again when you drop it; and the other form keeps chugging away. My questions are pretty basic: Is this a reasonable approach, or is there some lurking threading issue that might make trouble? My code simply duplicates all the setup code -- it makes a duplicate SwapChain, Device, Texture2D, vertex buffer, everything. I don't have a problem with this level of duplication -- my app is not intensive enough to suffer resource issues -- but nonetheless, is there a better practice? Is there any good reference for which DirectX structures can safely be shared, and which can't? It appears that RenderLoop.Run calls the render delegate in a tight loop. Is there any standard way to limit the frame rate of RenderLoop.Run, if you don't want a 400FPS app eating 100% of your CPU? Should I just Thread.Sleep(30) in the render delegate? (I asked on the sharpdx.org forums as well, but Alexandre is on vacation for two weeks, and my sister wants me to do a performance with my app at her wedding in three and a half weeks, so I'm mighty incented here! http://robjsoftware.org for details of what I'm building....)
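
    On the frame-rate question, a sketch of the simple Thread.Sleep throttle suggested above, made slightly steadier by sleeping only the remainder of the frame budget (16 ms is roughly 60 FPS; Stopwatch is System.Diagnostics, Thread.Sleep is System.Threading):

        var timer = System.Diagnostics.Stopwatch.StartNew();
        const long frameBudgetMs = 16; // ~60 FPS
        RenderLoop.Run(form1.Form, () =>
        {
            long start = timer.ElapsedMilliseconds;
            form1.RenderAction(0);
            long elapsed = timer.ElapsedMilliseconds - start;
            if (elapsed < frameBudgetMs)
                Thread.Sleep((int)(frameBudgetMs - elapsed)); // yield the rest of the frame
        });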

    Read the article

  • Nepotism In The SQL Family

    - by Rob Farley
    There’s a bunch of sayings about nepotism. It’s unpopular, unless you’re the family member who is getting the opportunity. But of course, so much in life (and career) is about who you know. From the perspective of the person who doesn’t get promoted (when the family member is), nepotism is simply unfair; even more so when the promoted one seems less than qualified, or incompetent in some way. We definitely get a bit miffed about that. But let’s also look at it from the other side of the fence – the person who did the promoting. To them, their son/daughter/nephew/whoever is just another candidate, but one in whom they have more faith. They’ve spent longer getting to know that person. They know their weaknesses and their strengths, and have seen them in all kinds of situations. They expect them to stay around in the company longer. And yes, they may have plans for that person to inherit one day. Sure, they have a vested interest, because they’d like their family members to have strong careers, but it’s not just about that – it’s often best for the company as well. I’m not announcing that the next LobsterPot employee is one of my sons (although I wouldn’t be opposed to the idea of getting them involved), but actually, admitting that almost all the LobsterPot employees are SQLFamily members… …which makes this post good for T-SQL Tuesday, this month hosted by Jeffrey Verheul (@DevJef). You see, SQLFamily is the concept that the people in the SQL Server community are close. We have something in common that goes beyond ordinary friendship. We might only see each other a few times a year, at events like the PASS Summit and SQLSaturdays, but the bonds that are formed are strong, going far beyond typical professional relationships. And these are the people that I am prepared to hire. People that I have got to know. I get to know their skill level, how well they explain things, how confident people are in their expertise, and what their values are. Of course there are people that I wouldn’t hire, but I’m a lot more comfortable hiring someone that I’ve already developed a feel for. I need to trust the LobsterPot brand to people, and that means they need to have a similar value system to me. They need to have a passion for helping people and doing what they can to make a difference. Above all, they need to have integrity. Therefore, I believe in nepotism. All the people I’ve hired so far are people from the SQL community. I don’t know whether I’ll always be able to hire that way, but I have no qualms admitting that the things I look for in an employee are things that I can recognise best in those that are referred to as SQLFamily. …like Ted Krueger (@onpnt), LobsterPot’s newest employee and the guy who is representing our brand in America. I’m completely proud of this guy. He’s everything I want in an employee. He’s an experienced consultant (even wrote a book on it!), loving husband and father, genuine expert, and incredibly respected by his peers. It’s not favouritism, it’s just choosing someone I’ve been interviewing for years. @rob_farley

    Read the article

  • Security in OBIEE 11g, Part 2

    - by Rob Reynolds
    Continuing the series on OBIEE 11g, our guest blogger this week is Pravin Janardanam. Here is Part 2 of his overview of Security in OBIEE 11g. OBIEE 11g Security Overview, Part 2 by Pravin Janardanam In my previous blog on Security, I discussed the OBIEE 11g changes regarding Authentication mechanism, RPD protection and encryption. This blog will include a discussion about OBIEE 11g Authorization and other Security aspects. Authorization: Authorization in 10g was achieved using a combination of Users, Groups and association of privileges and object permissions to users and Groups. Two keys changes to Authorization in OBIEE 11g are: Application Roles Policies / Permission Groups Application Roles are introduced in OBIEE 11g. An application role is specific to the application. They can be mapped to other application roles defined in the same application scope and also to enterprise users or groups, and they are used in authorization decisions. Application roles in 11g take the place of Groups in 10g within OBIEE application. In OBIEE 10g, any changes to corporate LDAP groups require a corresponding change to Groups and their permission assignment. In OBIEE 11g, Application roles provide insulation between permission definitions and corporate LDAP Groups. Permissions are defined at Application Role level and changes to LDAP groups just require a reassignment of the Group to the Application Roles. Permissions and privileges are assigned to Application Roles and users in OBIEE 11g compared to Groups and Users in 10g. The diagram below shows the relationship between users, groups and application roles. Note that the Groups shown in the diagram refer to LDAP Groups (WebLogic Groups by default) and not OBIEE application Groups. The following screenshot compares the permission windows from Admin tool in 10g vs 11g. Note that the Groups in the OBIEE 10g are replaced with Application Roles in OBIEE 11g. The same is applicable to OBIEE web catalog objects.    The default Application Roles available after OBIEE 11g installation are BIAdministrator, BISystem, BIConsumer and BIAuthor. Application policies are the authorization policies that an application relies upon for controlling access to its resources. An Application Role is defined by the Application Policy. The following screenshot shows the policies defined for BIAdministrator and BISystem Roles. Note that the permission for impersonation is granted to BISystem Role. In OBIEE 10g, the permission to manage repositories and Impersonation were assigned to “Administrators” group with no control to separate these permissions in the Administrators group. Hence user “Administrator” also had the permission to impersonate. In OBI11g, BIAdministrator does not have the permission to impersonate. This gives more flexibility to have multiple users perform different administrative functions. Application Roles, Policies, association of Policies to application roles and association of users and groups to application roles are managed using Fusion Middleware Enterprise Manager (FMW EM). They reside in the policy store, identified by the system-jazn-data.xml file. The screenshots below show where they are created and managed in FMW EM. The following screenshot shows the assignment of WebLogic Groups to Application Roles. The following screenshot shows the assignment of Permissions to Application Roles (Application Policies). Note: Object level permission association to Applications Roles resides in the RPD for repository objects. 
Permissions and Privilege for web catalog objects resides in the OBIEE Web Catalog. Wherever Groups were used in the web catalog and RPD has been replaced with Application roles in OBIEE 11g. Following are the tools used in OBIEE 11g Security Administration: ·       Users and Groups are managed in Oracle WebLogic Administration console (by default). If WebLogic is integrated with other LDAP products, then Users and Groups needs to managed using the interface provide by the respective LDAP vendor – New in OBIEE 11g ·       Application Roles and Application Policies are managed in Oracle Enterprise Manager - Fusion Middleware Control – New in OBIEE 11g ·       Repository object permissions are managed in OBIEE Administration tool – Same as 10g but the assignment is to Application Roles instead of Groups ·       Presentation Services Catalog Permissions and Privileges are managed in OBI Application administration page - Same as 10g but the assignment is to Application Roles instead of Groups Credential Store: Credential Store is a single consolidated service provider to store and manage the application credentials securely. The credential store contains credentials that either user supplied or system generated. Credential store in OBIEE 10g is file based and is managed using cryptotools utility. In 11g, Credential store can be managed directly from the FMW Enterprise Manager and is stored in cwallet.sso file. By default, the Credential Store stores password for deployed RPDs, BI Publisher data sources and BISystem user. In addition, Credential store can be LDAP based but only Oracle Internet Directory is supported right now. As you can see OBIEE security is integrated with Oracle Fusion Middleware security architecture. This provides a common security framework for all components of Business Intelligence and Fusion Middleware applications.

    Read the article

  • A Week of DNN – March 19, 2010

    - by Rob Chartier
    DotNetNuke 5.3.0 Released! New Features Templated User Profiles - User profile pages are now publicly viewable, and layout is controlled by the Admin. Photo field in User Profile - Users can upload a photo to their profile.  We also added support for User Specific data storage.  User Messaging - Users can send direct messages to other system users.  This also includes an out-of-the-box asynchronous, provider based, message platform.  You will see more of this in future releases. Search Engine Sitemap Provider - The sitemap now allows module admins to plug in sitemap logic for individual modules. Taxonomy Manager - Administrators can create flat or hierarchical taxonomies that can be shared and used across modules.  Supporting SEO and Social features at the core is an important piece for DotNetNuke moving forward. (Last Minute Update: 5.3.1 will be released with some last minute updates early next week) DotNetNuke as a Scalable Content management System (CMS) Power, Reliability & Feature Richness – DotNetNuke an Open Source Framework How to Search Engine Optimize dotnetnuke dotnetnuke Training Video – Setting DNN Security DotNetNuke Module Template [CS] (Free) XsltDb - DotNetNuke XSLT module with database and ajax support (Free) Create a non-Award Winning DotNetNuke Skin (part 1, part 2, part 3) Test Driven example module nearly refactored to Web Forms MVP Ajax Search v1.0.0 Released! (Live Demo) Tutorials: Backup DNN, Restore DNN, Move DNN from Backup (By Mitchel Sellers) A tag cloud based on the new 5.3 Taxonomy Engage: Tell-a-Friend 1.1 released (FREE module)  549 DotNetNuke Videos: DNN Creative Magazine Issue 54 Out Now  http://www.dotnetnuke.com/Community/Forums/tabid/795/forumid/112/threadid/355615/scope/posts/Default.aspx

    Read the article

  • MySQL 5.5 is GA!

    - by rob.young(at)oracle.com
    It is my pleasure to announce that MySQL 5.5 is now GA and ready for production deployment.  You can read Oracle's official press release here. I am excited about 5.5 because of the performance and scalability gains, new replication enhancements and overall improved technical efficiencies.  Congratulations and a sincere "Thanks!" go out to the entire MySQL Community and product engineering teams for making 5.5 the best release of MySQL to date.Please join us for today's MySQL Technology Update webcast where Tomas Ulin and I will cover what's new in MySQL 5.5 and provide an update on the other technologies we are working on. You can download MySQL 5.5 here.  All of the documentation and what's new information is here.  There is also a great article on MySQL 5.5 and the MySQL community here.Thanks for reading, and as always, THANKS for your support of MySQL!

    Read the article

  • Analytic functions – they’re not aggregates

    - by Rob Farley
    SQL 2012 brings us a bunch of new analytic functions, together with enhancements to the OVER clause. People who have known me over the years will remember that I’m a big fan of the OVER clause and the types of things that it brings us when applied to aggregate functions, as well as the ranking functions that it enables. The OVER clause was introduced in SQL Server 2005, and remained frustratingly unchanged until SQL Server 2012. This post is going to look at a particular aspect of the analytic functions though (not the enhancements to the OVER clause). When I give presentations about the analytic functions around Australia as part of the tour of SQL Saturdays (starting in Brisbane this Thursday), and in Chicago next month, I’ll make sure it’s sufficiently well described. But for this post – I’m going to skip that and assume you get it. The analytic functions introduced in SQL 2012 seem to come in pairs – FIRST_VALUE and LAST_VALUE, LAG and LEAD, CUME_DIST and PERCENT_RANK, PERCENTILE_CONT and PERCENTILE_DISC. Perhaps frustratingly, they take slightly different forms as well. The ones I want to look at now are FIRST_VALUE and LAST_VALUE, and PERCENTILE_CONT and PERCENTILE_DISC. The reason I’m pulling these ones out is that they always produce the same result within their partitions (if you’re applying them to the whole partition). Consider the following query:

        SELECT
            YEAR(OrderDate),
            FIRST_VALUE(TotalDue)
                OVER (PARTITION BY YEAR(OrderDate)
                      ORDER BY OrderDate, SalesOrderID
                      RANGE BETWEEN UNBOUNDED PRECEDING
                                AND UNBOUNDED FOLLOWING),
            LAST_VALUE(TotalDue)
                OVER (PARTITION BY YEAR(OrderDate)
                      ORDER BY OrderDate, SalesOrderID
                      RANGE BETWEEN UNBOUNDED PRECEDING
                                AND UNBOUNDED FOLLOWING),
            PERCENTILE_CONT(0.95)
                WITHIN GROUP (ORDER BY TotalDue)
                OVER (PARTITION BY YEAR(OrderDate)),
            PERCENTILE_DISC(0.95)
                WITHIN GROUP (ORDER BY TotalDue)
                OVER (PARTITION BY YEAR(OrderDate))
        FROM Sales.SalesOrderHeader
        ;

    This is designed to get the TotalDue for the first order of the year, the last order of the year, and also the 95th percentile, using both the continuous and discrete methods (‘discrete’ means it picks the closest one from the values available – ‘continuous’ means it will happily use something in between, similar to what you would do for a traditional median of four values). I’m sure you can imagine the results – a different value for each field, but within each year, all the rows the same. Notice that I’m not grouping by the year. Nor am I filtering. This query gives us a result for every row in the SalesOrderHeader table – 31465 in this case (using the original AdventureWorks that dates back to the SQL 2005 days). The RANGE BETWEEN bit in FIRST_VALUE and LAST_VALUE is needed to make sure that we’re considering all the rows available. If we don’t specify that, it assumes we only mean “RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW”, which means that LAST_VALUE ends up being the row we’re looking at. At this point you might think about other environments such as Access or Reporting Services, and remember aggregate functions like FIRST.
    We really should be able to do something like:

        SELECT
            YEAR(OrderDate),
            FIRST_VALUE(TotalDue)
                OVER (PARTITION BY YEAR(OrderDate)
                      ORDER BY OrderDate, SalesOrderID
                      RANGE BETWEEN UNBOUNDED PRECEDING
                                AND UNBOUNDED FOLLOWING)
        FROM Sales.SalesOrderHeader
        GROUP BY YEAR(OrderDate)
        ;

    But you can’t. You get that age-old error:

        Msg 8120, Level 16, State 1, Line 5
        Column 'Sales.SalesOrderHeader.OrderDate' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
        Msg 8120, Level 16, State 1, Line 5
        Column 'Sales.SalesOrderHeader.SalesOrderID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

    Hmm. You see, FIRST_VALUE isn’t an aggregate function. None of these analytic functions are. There are too many things involved for SQL to realise that the values produced might be identical within the group. Furthermore, you can’t even surround it in a MAX. Then you get a different error, telling you that you can’t use windowed functions in the context of an aggregate. And so we end up grouping by doing a DISTINCT:

        SELECT DISTINCT
            YEAR(OrderDate),
            FIRST_VALUE(TotalDue)
                OVER (PARTITION BY YEAR(OrderDate)
                      ORDER BY OrderDate, SalesOrderID
                      RANGE BETWEEN UNBOUNDED PRECEDING
                                AND UNBOUNDED FOLLOWING),
            LAST_VALUE(TotalDue)
                OVER (PARTITION BY YEAR(OrderDate)
                      ORDER BY OrderDate, SalesOrderID
                      RANGE BETWEEN UNBOUNDED PRECEDING
                                AND UNBOUNDED FOLLOWING),
            PERCENTILE_CONT(0.95)
                WITHIN GROUP (ORDER BY TotalDue)
                OVER (PARTITION BY YEAR(OrderDate)),
            PERCENTILE_DISC(0.95)
                WITHIN GROUP (ORDER BY TotalDue)
                OVER (PARTITION BY YEAR(OrderDate))
        FROM Sales.SalesOrderHeader
        ;

    I’m sorry. It’s just the way it goes. Hopefully it’ll change in the future, but for now, it’s what you’ll have to do. If we look in the execution plan, we see that it’s incredibly ugly, and actually works out the results of these analytic functions for all 31465 rows, finally performing the distinct operation to convert it into the four rows we get in the results. You might be able to achieve a better plan using things like TOP, or the kind of calculation that I used in http://sqlblog.com/blogs/rob_farley/archive/2011/08/23/t-sql-thoughts-about-the-95th-percentile.aspx (which is how PERCENTILE_CONT works), but it’s definitely convenient to use these functions, and in time, I’m sure we’ll see good improvements in the way that they are implemented. Oh, and this post should be good for fellow SQL Server MVP Nigel Sammy’s T-SQL Tuesday this month.
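
    A sketch of the TOP idea mentioned above, for the FIRST_VALUE case - group first, then pick each year's first TotalDue with a correlated TOP (1) subquery (not tested here against the actual plans, so treat it as a starting point rather than a guaranteed win):

        SELECT YEAR(OrderDate) AS OrderYear,
               (SELECT TOP (1) o2.TotalDue
                FROM Sales.SalesOrderHeader AS o2
                WHERE YEAR(o2.OrderDate) = YEAR(o.OrderDate)
                ORDER BY o2.OrderDate, o2.SalesOrderID) AS FirstTotalDue
        FROM Sales.SalesOrderHeader AS o
        GROUP BY YEAR(OrderDate)
        ;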

    Read the article

  • Handling special characters with FOR XML PATH('')

    - by Rob Farley
    Because I hate seeing &gt; or &amp; in my results… Since SQL Server 2005, we’ve been able to use FOR XML PATH('') to do string concatenation. I’ve blogged about it before several times. But I don’t think I’ve blogged about the fact that it all goes a bit wrong if you have special characters in the strings you’re concatenating. Generally, I don’t even worry about this. I should, but I don’t, particularly when the solution is so easy. Suppose I want to concatenate the list of user databases...(read more)
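
    A sketch of the usual shape of the fix for those entities, against the user-database example mentioned above: keep the concatenation typed as XML with the TYPE directive, then unwrap it with .value(), which decodes &gt; and &amp; back into > and & (the delimiter and the STUFF trim are illustrative):

        SELECT STUFF(
            (SELECT ', ' + name
             FROM sys.databases
             WHERE database_id > 4   -- user databases only
             ORDER BY name
             FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'),
            1, 2, '');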

    Read the article

  • 3D terrain map with Hexagon Grids

    - by Rob
    I'm working on a hobby project (I'm a web/backend developer by day) and I want to create a 3D tile (terrain) engine. I'm using XNA, but I can use MonoGame, OpenGL, or straight DirectX, so the answer does not have to be XNA specific. I'm looking for some high-level advice on how to approach this problem. I know about creating height maps and such; there are thousands of references out there on the net for that. This is a bit more specific: what I'm more concerned with is the approach to get a 3D hexagon tile grid out of my terrain (since the terrain, and all 3D objects, are basically triangles). The first approach I thought about is to draw the triangles on the screen in the following order (blue numbers) to give me the triangles for terrain (black triangles) and then make hexes out of the triangles (red hex). This approach seems complicated to me since I'm basically having to draw 4 different types of triangles. The next approach I thought of was to use the existing triangles like I did for a square grid and get my hexes from 6 triangles, as follows. This seems like the easier approach to me since there are only 2 types of triangles (I would have to play with the heights and widths to get a "perfect" hexagon, but the idea is the same). So I'm looking for: 1) Any suggestions on which approach I should take, and why. 2) How would I translate mouse position to a hexagon grid position (especially when moving the camera around)? For example, in the second image, if the mouse pointer were the green circle, how would I determine to highlight that hexagon and then translate that into grid coordinates (assuming it is 0,0)? 3) Any references, articles, books, etc. to get me going in the right direction. Note: I've done hex grids and mouse-grid coordinate conversion before in 2D; I'm looking for some pointers on how to do the same in 3D. The result I would like to achieve is something similar to this video.
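
    On point 2, one common approach (a sketch, not tied to any particular engine): ray-cast the mouse into the terrain to get a world-space (x, z) point, then convert that point to axial hex coordinates with the standard fractional-coordinate rounding trick. Pointy-top hexes assumed; size is the hex radius; all names here are illustrative:

        // Convert a world-space point on the terrain plane to axial hex coords (q, r).
        static void PointToHex(float x, float z, float size, out int q, out int r)
        {
            // fractional axial coordinates for a pointy-top layout
            float qf = ((float)Math.Sqrt(3.0) / 3f * x - 1f / 3f * z) / size;
            float rf = (2f / 3f * z) / size;

            // cube-round to the nearest hex: the cube coords must satisfy x + y + z = 0
            float cx = qf, cz = rf, cy = -cx - cz;
            int rx = (int)Math.Round(cx), ry = (int)Math.Round(cy), rz = (int)Math.Round(cz);
            float dx = Math.Abs(rx - cx), dy = Math.Abs(ry - cy), dz = Math.Abs(rz - cz);
            if (dx > dy && dx > dz)      rx = -ry - rz;  // fix the component with the largest rounding error
            else if (dz > dy)            rz = -rx - ry;

            q = rx;
            r = rz;
        }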

    Read the article

  • My new laptop - with a really nice battery option

    - by Rob Farley
    It was about time I got a new laptop, and so I made a phone-call to Dell to discuss my options. I decided not to get an SSD from them, because I’d rather choose one myself – the sales guy tells me that changing the HD doesn’t void my warranty, so that’s good (incidentally, I’d love to hear people’s recommendations for which SSD to get for my laptop). Unfortunately this machine only has one HD slot, but I figure that I’ll put lots of stuff onto external disks anyway. The machine I got was a Dell Studio XPS 16. It’s red (which suits my company), but also has the Intel® Core™ i7-820QM Processor, which is 4 Cores/8 Threads. Makes for a pretty Task Manager, but nothing like the one I saw at SQLBits last year (at 96 cores), or the one that my good friend James Rowland-Jones writes about here. But the reason for this post is actually something in the software that comes with the machine – you know, the stuff that most people uninstall at the earliest opportunity. I had just reinstalled the operating system, and was going through the utilities to get the drivers up-to-date, when I noticed that one of the Dell applications included an option to disable battery charging. So I installed it. And sure enough, I can tell the battery not to charge now. Clearly Dell see it as a temporary option, and one that’s designed for when you’re on a plane. But for me, I most often use my laptop with the power plugged in, which means I don’t need to have my battery continually topping itself up. So I really love this option, but I feel like it could go a little further. I’d like “Not Charging” to be the default option, and let me set it when I want to charge it (which should theoretically make my battery last longer). I also intend to work out how this option works, so that I can script it and put it into my StartUp options (so it can be the default setting). Actually – if someone has already worked this out and can tell me what it does, then please feel free to let me know. Even better would be an external switch. I had a switch on my old laptop (a Dell Latitude) for WiFi, so that I could turn that off before I turned on the computer (this laptop doesn’t give me that option – no physical switch for flight mode). I guess it just means I’ll get used to leaving the WiFi off by default, and turning it on when I want it – might save myself some battery power that way too. Soon I’ll need to take the plunge and sync my iPhone with the new laptop. I’m a little worried that I might lose something – Apple’s messages about how my stuff will be wiped and replaced with what’s on the PC don’t fill me with confidence, as it’s a new PC that doesn’t have stuff on it. But having a new machine is definitely a nice experience, and one that I can recommend. I’m sure when I get around to buying an SSD I’ll feel like it’s shiny and new all over again!

    Read the article

  • When is a SQL function not a function?

    - by Rob Farley
    Should SQL Server even have functions? (Oh yeah – this is a T-SQL Tuesday post, hosted this month by Brad Schulz) Functions serve an important part of programming, in almost any language. A function is a piece of code that is designed to return something, as opposed to a piece of code which isn’t designed to return anything (which is known as a procedure). SQL Server is no different. You can call stored procedures, even from within other stored procedures, and you can call functions and use these in other queries. Stored procedures might query something, and therefore ‘return data’, but a function in SQL is considered to have the type of the thing returned, and can be used accordingly in queries. Consider the internal GETDATE() function. SELECT GETDATE(), SomeDatetimeColumn FROM dbo.SomeTable; There’s no logical difference between the field that is being returned by the function and the field that’s being returned by the table column. Both are the datetime field – if you didn’t have inside knowledge, you wouldn’t necessarily be able to tell which was which. And so as developers, we find ourselves wanting to create functions that return all kinds of things – functions which look up values based on codes, functions which do string manipulation, and so on. But it’s rubbish. Ok, it’s not all rubbish, but it mostly is. And this isn’t even considering the SARGability impact. It’s far more significant than that. (When I say the SARGability aspect, I mean “because you’re unlikely to have an index on the result of some function that’s applied to a column, so try to invert the function and query the column in an unchanged manner”) I’m going to consider the three main types of user-defined functions in SQL Server: Scalar Inline Table-Valued Multi-statement Table-Valued I could also look at user-defined CLR functions, including aggregate functions, but not today. I figure that most people don’t tend to get around to doing CLR functions, and I’m going to focus on the T-SQL-based user-defined functions. Most people split these types of function up into two types. So do I. Except that most people pick them based on ‘scalar or table-valued’. I’d rather go with ‘inline or not’. If it’s not inline, it’s rubbish. It really is. Let’s start by considering the two kinds of table-valued function, and compare them. These functions are going to return the sales for a particular salesperson in a particular year, from the AdventureWorks database. 
CREATE FUNCTION dbo.FetchSales_inline(@salespersonid int, @orderyear int) RETURNS TABLE AS  RETURN (     SELECT e.LoginID as EmployeeLogin, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101') ) ; GO CREATE FUNCTION dbo.FetchSales_multi(@salespersonid int, @orderyear int) RETURNS @results TABLE (     EmployeeLogin nvarchar(512),     OrderDate datetime,     SalesOrderID int     ) AS BEGIN     INSERT @results (EmployeeLogin, OrderDate, SalesOrderID)     SELECT e.LoginID, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')     ;     RETURN END ; GO You’ll notice that I’m being nice and responsible with the use of the DATEADD function, so that I have SARGability on the OrderDate filter. Regular readers will be hoping I’ll show what’s going on in the execution plans here. Here I’ve run two SELECT * queries with the “Show Actual Execution Plan” option turned on. Notice that the ‘Query cost’ of the multi-statement version is just 2% of the ‘Batch cost’. But also notice there’s trickery going on. And it’s nothing to do with that extra index that I have on the OrderDate column. Trickery. Look at it – clearly, the first plan is showing us what’s going on inside the function, but the second one isn’t. The second one is blindly running the function, and then scanning the results. There’s a Sequence operator which is calling the TVF operator, and then calling a Table Scan to get the results of that function for the SELECT operator. But surely it still has to do all the work that the first one is doing... To see what’s actually going on, let’s look at the Estimated plan. Now, we see the same plans (almost) that we saw in the Actuals, but we have an extra one – the one that was used for the TVF. Here’s where we see the inner workings of it. You’ll probably recognise the right-hand side of the TVF’s plan as looking very similar to the first plan – but it’s now being called by a stack of other operators, including an INSERT statement to be able to populate the table variable that the multi-statement TVF requires. And the cost of the TVF is 57% of the batch! But it gets worse. Let’s consider what happens if we don’t need all the columns. We’ll leave out the EmployeeLogin column. Here, we see that the inline function call has been simplified down. It doesn’t need the Employee table. The join is redundant and has been eliminated from the plan, making it even cheaper. But the multi-statement plan runs the whole thing as before, only removing the extra column when the Table Scan is performed. A multi-statement function is a lot more powerful than an inline one. An inline function can only be the result of a single sub-query. It’s essentially the same as a parameterised view, because views demonstrate this same behaviour of extracting the definition of the view and using it in the outer query. A multi-statement function is clearly more powerful because it can contain far more complex logic. But a multi-statement function isn’t really a function at all. It’s a stored procedure. 
A multi-statement function is clearly more powerful because it can contain far more complex logic. But a multi-statement function isn't really a function at all. It's a stored procedure. It's wrapped up like a function, but it behaves like a stored procedure. It would be completely unreasonable to expect that a stored procedure could be simplified down to recognise that not all the columns might be needed, and yet this is part of the pain associated with this procedural 'function' situation.

The biggest clue that a multi-statement function is more like a stored procedure than a function is the BEGIN and END statements that surround the code. If you try to create a multi-statement function without these statements, you'll get an error – they are very much required. When I used to present on this kind of thing, I even used to call it "The Dangers of BEGIN and END", and yes, I've written about this type of thing before in a similarly-named post over at my old blog.

Now how about scalar functions... Suppose we wanted a scalar function to return the count of these sales.

CREATE FUNCTION dbo.FetchSales_scalar(@salespersonid int, @orderyear int)
RETURNS int
AS
BEGIN
    RETURN (
        SELECT COUNT(*)
        FROM Sales.SalesOrderHeader AS o
        LEFT JOIN HumanResources.Employee AS e
        ON e.EmployeeID = o.SalesPersonID
        WHERE o.SalesPersonID = @salespersonid
        AND o.OrderDate >= DATEADD(year, @orderyear-2000, '20000101')
        AND o.OrderDate < DATEADD(year, @orderyear-2000+1, '20000101')
    );
END
;
GO

Notice the evil words? They're required. Try to remove them, and you just get an error. That's right – any scalar function is procedural, despite the fact that you wrap a sub-query up inside that RETURN statement. It's as ugly as anything. Hopefully this will change in future versions.

Let's have a look at how this is reflected in an execution plan. Here's a query, its Actual plan, and its Estimated plan:

SELECT e.LoginID, y.year, dbo.FetchSales_scalar(p.SalesPersonID, y.year) AS NumSales
FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
CROSS JOIN Sales.SalesPerson AS p
LEFT JOIN HumanResources.Employee AS e
ON e.EmployeeID = p.SalesPersonID;

We see here that the cost of the scalar function is about twice that of the outer query. Nicely, the query optimizer has worked out that it doesn't need the Employee table, but that's a bit of a red herring here. There's actually something way more significant going on.

If I look at the properties of that UDF operator, it tells me that the Estimated Subtree Cost is 0.0337999. If I just run the query

SELECT dbo.FetchSales_scalar(281, 2003);

we see that the UDF cost is still unchanged. You see, this 0.0337999 is the cost of running the scalar function ONCE. But when we ran that query with the CROSS JOIN in it, we returned quite a few rows. 68, in fact. It could've been a lot more, if we'd had more salespeople or more years.

And so we come to the biggest problem. This procedure (I don't want to call it a function) is getting called 68 times – each call roughly twice as expensive as the outer query. And because each call happens in a separate context, there is even more overhead that I haven't considered here. The cheek of it, to say that the Compute Scalar operator here costs 0%! I know a number of IT projects that could've used that kind of costing method, but that's another story that I'm not going to go into here.
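Incidentally, if you want to see where that 68 comes from, a quick sketch: count the rows of the driving query. In the AdventureWorks sample, Sales.SalesPerson holds 17 salespeople, and 4 years × 17 salespeople = 68 – the UDF fires once per row:

SELECT COUNT(*) AS UdfInvocations
FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
CROSS JOIN Sales.SalesPerson AS p;
-- returns 68 in AdventureWorks: one call to dbo.FetchSales_scalar per row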
Let's look at a better way. Suppose our scalar function had been implemented as an inline one. Then it could have been expanded out like a sub-query. It could've run something like this:

SELECT e.LoginID, y.year,
    (SELECT COUNT(*)
     FROM Sales.SalesOrderHeader AS o
     LEFT JOIN HumanResources.Employee AS e
     ON e.EmployeeID = o.SalesPersonID
     WHERE o.SalesPersonID = p.SalesPersonID
     AND o.OrderDate >= DATEADD(year, y.year-2000, '20000101')
     AND o.OrderDate < DATEADD(year, y.year-2000+1, '20000101')
    ) AS NumSales
FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
CROSS JOIN Sales.SalesPerson AS p
LEFT JOIN HumanResources.Employee AS e
ON e.EmployeeID = p.SalesPersonID;

Don't worry too much about the Scan of the SalesOrderHeader underneath a Nested Loop. If you remember from plenty of other posts on the matter, execution plans don't push the data through. That Scan only runs once. The Index Spool sucks the data out of it and populates a structure that is used to feed the Stream Aggregate. The Index Spool operator gets called 68 times, but the Scan only once (the Number of Executions property demonstrates this).

Here, the Query Optimizer has a full picture of what's being asked, and can make the appropriate decision about how it accesses the data. It can simplify it down properly.

To get this kind of behaviour from a function, we need it to be inline. But without inline scalar functions, we need to make our function table-valued. Luckily, that's ok.

CREATE FUNCTION dbo.FetchSales_inline2(@salespersonid int, @orderyear int)
RETURNS table
AS
RETURN (
    SELECT COUNT(*) AS NumSales
    FROM Sales.SalesOrderHeader AS o
    LEFT JOIN HumanResources.Employee AS e
    ON e.EmployeeID = o.SalesPersonID
    WHERE o.SalesPersonID = @salespersonid
    AND o.OrderDate >= DATEADD(year, @orderyear-2000, '20000101')
    AND o.OrderDate < DATEADD(year, @orderyear-2000+1, '20000101')
);
GO

But we can't use this as a scalar. Instead, we need to use it with the APPLY operator.

SELECT e.LoginID, y.year, n.NumSales
FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year)
CROSS JOIN Sales.SalesPerson AS p
LEFT JOIN HumanResources.Employee AS e
ON e.EmployeeID = p.SalesPersonID
OUTER APPLY dbo.FetchSales_inline2(p.SalesPersonID, y.year) AS n;

And now, we get the plan that we want for this query. All we've done is tell the function that it's returning a table instead of a single value, and removed the BEGIN and END statements. We've had to name the column being returned, but what we've gained is an actual inline, simplifiable function. And if we wanted it to return multiple columns, it could do that too.

I really consider this function to be superior to the scalar function in every way. It does need to be handled differently in the outer query, but in many ways it's a more elegant method there too. The function calls can be put in the FROM clause, where their results can then be used in the WHERE or GROUP BY clauses without fear of calling the function multiple times (another horrible side effect of functions).

So please. If you see BEGIN and END in a function, remember it's not really a function, it's a procedure. And then fix it.

@rob_farley

    Read the article
