Search Results

Search found 33585 results on 1344 pages for 'sql execution plan'.

Page 415 of 1344

  • spring mvc simpleformcontroller - how to stop execution when exception thrown

    - by alan t
    Hi, I am using SimpleFormController and an exception is thrown in my onSubmit method. This does not seem to stop the SimpleFormController's processing, as it redisplays my form. I can see from the log4j output that the exception is caught and forwarded to my error page as expected. I can also tell that the onSubmit method does not continue after the exception, which is all good. But I do not see the error page, because the SimpleFormController starts up again and goes through the processing for a new form (I can see in the log4j output the Spring log statements 'Displaying new form' and 'Creating new command of class ...'). The normal form page is then displayed again. So the problem is that the exception does not seem to terminate the SimpleFormController; it carries on to display the form again. Can anyone help? Thanks, Alan

    Read the article

  • Speaking at Atlanta.MDF on March 12

    - by RickHeiges
    I am fortunate enough to be speaking to a user group with a really cool name - Atlanta.MDF (Microsoft Database Forum). Although I visit Atlanta often, it usually involves running from one concourse to another, and rarely do I get the chance to visit the user group. I have made it to the user group on several occasions in the past, but it has been several years. This will be my first presentation to the group. I will be speaking about Database Consolidation - something I have been doing for years....(read more)

    Read the article

  • Security Goes Underground

    - by BuckWoody
    You might not have heard of as many data breaches recently as in the past. As you're probably aware, I call them out here as often as I can, especially the big ones in government and medical institutions, because I believe those can have lasting implications on a person's life. I think that my data is personal, and I've seen the impact of someone having their identity stolen; it's a brutal experience that I wouldn't wish on anyone. So with all of that, it stands to reason that I hold data professionals to the highest standards on security. I think your first role is to secure the data you have: number one because it can be so harmful, and number two because it isn't yours. It belongs to the person that data describes. You might think I'm happy about that downturn in reported data losses. Well, I was, until I learned that companies have realized their stock suffers when they report a breach, but not when they don't. So, since we all do what we are measured on, they don't report. So now, not only are they not protecting your information, they are hiding the fact that they are losing it. So take this as a personal challenge: make sure you have a security audit on your data, and treat any breach like a personal failure. We're the gatekeepers, so let's keep the gates.

    Read the article

  • Why I don't use SSIS checkpoint files

    - by jamiet
    In a recent discussion regarding general ETL best practices, the subject of checkpoint files as a means for package restartability came up, and I stated that I was dead against using them. For anyone that may care, here is why:

    - Configuring them is distinctly unintuitive (that's a matter of opinion, but if you follow the link I'll wager that you will agree).
    - They don't make any allowance for loop iterations.
    - They cannot store variables of type Object.
    - They are limited in ability. There are many scenarios where you may want to execute certain containers regardless of whether the package is started from a checkpoint file, but the current usage model does not allow for this.
    - They are ignored by event handlers, which wouldn't be so bad if there were a way to toggle this behaviour in certain scenarios.
    - They don't work properly.

    I'll expand on the last bullet point. I have encountered situations where the behaviour for tasks executing concurrently is unpredictable. That is, sometimes the completion of a task that executes concurrently with a failed/failing task will make it into the checkpoint file, and sometimes it won't. This is near-impossible to reproduce, but it does happen, as my good friend John Welch will hopefully concur (if he is reading). Is anyone out there making successful use of checkpoint files within SSIS? I would be interested in knowing about that if so. @Jamiet
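    For anyone who wants to experiment with checkpoints anyway, here is a minimal C# sketch (not from Jamie's post; the package paths are placeholders) of the three package properties that drive the behaviour, set through the SSIS runtime API:

    using Microsoft.SqlServer.Dts.Runtime;

    class CheckpointDemo
    {
        static void Main()
        {
            var app = new Application();
            // Placeholder path; point this at your own package.
            Package pkg = app.LoadPackage(@"C:\packages\LoadDW.dtsx", null);

            pkg.CheckpointFileName = @"C:\packages\LoadDW.chk"; // where progress is recorded
            pkg.SaveCheckpoints = true;                         // write the file as containers complete
            pkg.CheckpointUsage = DTSCheckpointUsage.IfExists;  // restart from the file when one exists

            // Individual tasks also need FailPackageOnFailure=True to register in the file.
            app.SaveToXml(@"C:\packages\LoadDW.dtsx", pkg, null);
        }
    }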

    Read the article

  • Merge Join component sorted outputs [SSIS]

    - by jamiet
    One question that I have been asked a few times of late in regard to performance tuning SSIS data flows is this: why isn't the Merge Join output sorted (i.e. IsSorted=True)? This is a fair question. After all, both of the Merge Join inputs are sorted, so why wouldn't the output be sorted as well? Well, here's a little secret: the Merge Join output IS sorted! There's a caveat though – it is only under certain circumstances and SSIS itself doesn't do a good job of informing you of it. Let's take a look at an example. Here we have a dataflow that consumes data from the [AdventureWorks2008].[Sales].[SalesOrderHeader] & [AdventureWorks2008].[Sales].[SalesOrderDetail] tables, then joins them using a Merge Join component. Inside the editor of the Merge Join we are joining on the [SalesOrderId] field (which is what the two inputs just happen to be sorted upon), and we are also putting [SalesOrderHeader].[SalesOrderId] into the output. Believe it or not, the output from this Merge Join component is sorted (i.e. has IsSorted=True), but unfortunately the Merge Join component does not have an Advanced Editor, hence it is hidden away from us. There are a couple of ways to prove that this is the case; I could open up the package XML inside the .dtsx file and show you the metadata, but there is an easier way than that – I can attach a Sort component to the output, attempting to sort on the [SalesOrderId] column. This gives us the following warning:

    Validation warning. DFT Get raw data: {992B7C9A-35AD-47B9-A0B0-637F7DDF93EB}: The data is already sorted as specified so the transform can be removed.

    The warning proves that the output from the Merge Join is sorted! It must be noted that the Merge Join output will only have IsSorted=True if at least one of the join columns is included in the output. So there you go, the Merge Join component can indeed produce a sorted output, and that's very useful in order to avoid unnecessary expensive Sort operations downstream. Hope this is useful to someone out there! @Jamiet

    P.S. Thank you to Bob Bojanic on the SSIS product team who pointed this out to me!
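    If you would rather not open the package XML by hand, a small sketch like this one (the package path is a placeholder, and it only walks top-level tasks) can read the hidden flag through the pipeline API:

    using System;
    using Microsoft.SqlServer.Dts.Runtime;
    using Microsoft.SqlServer.Dts.Pipeline.Wrapper;

    class InspectIsSorted
    {
        static void Main()
        {
            var app = new Application();
            Package pkg = app.LoadPackage(@"C:\packages\MergeJoinDemo.dtsx", null);

            foreach (Executable exec in pkg.Executables)
            {
                var host = exec as TaskHost;
                if (host != null && host.InnerObject is MainPipe)
                {
                    var pipe = (MainPipe)host.InnerObject;
                    foreach (IDTSComponentMetaData100 comp in pipe.ComponentMetaDataCollection)
                        foreach (IDTSOutput100 output in comp.OutputCollection)
                            // IsSorted is the flag the Merge Join editor never shows you.
                            Console.WriteLine(comp.Name + " / " + output.Name + ": IsSorted=" + output.IsSorted);
                }
            }
        }
    }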

    Read the article

  • Managing execution priorities and request expiry time in your web application

    - by Dan
    Some installations that run our applications can be under hefty stress on a busy day. Our clients ask us if there is a way to manage priorities in our application. For example, in a typical internet banking application, banks are interested in having the "Transfer money" form responsive, while the "Statement" page is a lot less critical. Not being able to transfer money is a direct loss for the bank, while not being able to produce a statement or something similar can be fixed with an apology. AFAIK you cannot manage different request or session timeouts in a typical web application either; there is one value for the whole of your web app. Managing message priority and expiry time is a typical feature in many middleware platforms, and something like this could be useful for a web front end as well. Do any web servers (either Java or .NET) or web frameworks provide these features? How would you go about implementing it if you had to roll your own?
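    For the roll-your-own option, here is a minimal sketch under classic ASP.NET (the path and the limit are illustrative assumptions, not a recommendation): low-priority pages wait briefly for a worker slot and are shed with a 503, while everything else proceeds untouched.

    using System;
    using System.Threading;
    using System.Web;

    public class PriorityThrottleModule : IHttpModule
    {
        // Allow at most 20 concurrent low-priority requests.
        private static readonly SemaphoreSlim LowPrioritySlots = new SemaphoreSlim(20);
        private const string SlotKey = "LowPrioritySlotHeld";

        public void Init(HttpApplication app)
        {
            app.BeginRequest += (s, e) =>
            {
                HttpContext ctx = ((HttpApplication)s).Context;
                // Only throttle the non-critical area, e.g. /statement; /transfer runs freely.
                if (!ctx.Request.Path.StartsWith("/statement", StringComparison.OrdinalIgnoreCase))
                    return;

                if (LowPrioritySlots.Wait(TimeSpan.FromSeconds(2)))
                {
                    ctx.Items[SlotKey] = true; // release it again at end of request
                }
                else
                {
                    ctx.Response.StatusCode = 503; // the request has effectively "expired"
                    ((HttpApplication)s).CompleteRequest();
                }
            };

            app.EndRequest += (s, e) =>
            {
                HttpContext ctx = ((HttpApplication)s).Context;
                if (ctx.Items.Contains(SlotKey))
                    LowPrioritySlots.Release();
            };
        }

        public void Dispose() { }
    }

    Registered in web.config like any other module, this gives you crude per-URL priorities and an expiry time without touching the framework itself.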

    Read the article

  • PASS By-Law Changes

    - by RickHeiges
    Over the past year, the PASS Board of Directors (BoD) has been looking at changing the by-laws. We've had in-depth in-person discussions about how the by-laws could/should be changed. Here is the link to the documents that I am referring to: http://www.sqlpass.org/Community/PASSBlog/entryid/300/Amendments-to-PASS-Bylaws.aspx One of the changes that I believe addresses more perception than reality is the rule of "No more than two from a single organization". While I personally do not believe that...(read more)

    Read the article

  • Click No Browse: How to Navigate Objects Without Opening Them

    - by thatjeffsmith
    Oracle SQL Developer by default automatically opens the object editor when you click on an object in your connection tree or schema browser. For most folks this is very convenient. But if you are selecting objects to drag them to a model or to the worksheet, this can get annoying, as the focus of the screen changes when you don't want it to. The other scenario where this feature might disrupt more than delight is when you want to click around the database in the tree, and every time you click on an object the object editor automatically changes to the selected object. You can disable this automatic browsing behavior in SQL Developer by modifying this preference:

    Tools > Preferences > Database > ObjectViewer > 'Open Object on Single Click' – disable this if you don't want an object to open when you click on it.

    OK, I do realize my description of the problem may have confused the heck out of you just now. So instead of more words, how about a couple of animations of the object-click behavior with the option ON and OFF?

    Preference disabled: click, no open; double-click, open.
    Preference enabled (default): as you click on objects, they are automatically opened.

    Read the article

  • Backup File Naming Convention

    - by Andrew Kelly
    I have been asked this many times before, and again just recently, so I figured why not blog about it. None of the information outlined here is rocket science or even new, but it is an area that I don't think people put enough thought into before implementing. Sure, everyone chooses some format, but it often doesn't go far enough in my opinion to get the most bang for the buck. This is the format I prefer to use: ServerName_InstanceName_BackupType_DBName_DateTimeStamp.xxx ServerName_InstanceName...(read more)
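    As a quick illustration of assembling that convention (a sketch; the server and instance names are placeholders):

    using System;

    class BackupNaming
    {
        // Builds ServerName_InstanceName_BackupType_DBName_DateTimeStamp.xxx
        static string BackupFileName(string server, string instance, string backupType,
                                     string database, DateTime when, string extension)
        {
            string stamp = when.ToString("yyyyMMdd_HHmmss"); // sortable and filesystem-safe
            return string.Format("{0}_{1}_{2}_{3}_{4}.{5}",
                                 server, instance, backupType, database, stamp, extension);
        }

        static void Main()
        {
            // e.g. PRODSQL01_MSSQLSERVER_FULL_AdventureWorks_20240101_023000.bak
            Console.WriteLine(BackupFileName("PRODSQL01", "MSSQLSERVER", "FULL",
                                             "AdventureWorks", DateTime.Now, "bak"));
        }
    }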

    Read the article

  • Working with data and meta data that are separated on different servers

    - by afuzzyllama
    While developing a product, I've come across a situation where my group wants to store the meta data for data-entry forms (questions, layout, etc.) in a different database than the database where the collected data is stored. This is mostly for security, because we want to be able to have our meta data public facing while keeping the collected data as secure as possible. I was thinking about writing a web service that provides the meta information that the data collection program could access. The only issue I see with this approach is that the front end is going to have to match the meta data with the collected data, which would be more efficient as a join on the back end. Currently, this system is slated to run on .NET and MSSQL. I haven't played around with .NET libraries running in SQL, but I'm considering trying to create logic that would pull from the web service, convert the meta data into a table that SQL can join on, and return the combined data and meta data that way. Is this solution the wrong way to approach the problem? Is there a pattern or "industry standard" way of bringing together two datasets that don't live in the same database?
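    To picture the front-end join being described, here is an illustrative sketch; the types and the sample data are invented stand-ins for the web-service call and the SQL query:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class QuestionMeta { public int QuestionId; public string Text; }  // from the public metadata service
    class Answer { public int QuestionId; public string Value; }       // from the secured MSSQL database

    class MetaDataJoin
    {
        static void Main()
        {
            var meta = new List<QuestionMeta>
            {
                new QuestionMeta { QuestionId = 1, Text = "Age?" },
                new QuestionMeta { QuestionId = 2, Text = "City?" }
            };
            var answers = new List<Answer>
            {
                new Answer { QuestionId = 1, Value = "42" },
                new Answer { QuestionId = 2, Value = "Oslo" }
            };

            // The in-memory equivalent of the back-end join the poster wants:
            var combined = from a in answers
                           join m in meta on a.QuestionId equals m.QuestionId
                           select new { m.Text, a.Value };

            foreach (var row in combined)
                Console.WriteLine(row.Text + " -> " + row.Value);
        }
    }

    If the join really must happen inside SQL Server, bulk-loading the fetched meta data into a temporary table (for example with SqlBulkCopy) and joining there is a common alternative to running .NET libraries inside SQL.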

    Read the article

  • ASP.NET MVC 2: How to write this Linq SQL as a Dynamic Query (using strings)?

    - by Dr. Zim
    Skip to the "specific question" as needed. Some background: The scenario: I have a set of products with a "drill down" filter (Query Object) populated with DDLs. Each progressive DDL selection will further limit the product list as well as what options are left for the DDLs. For example, selecting a hammer out of tools limits the Product Sizes to only show hammer sizes. Current setup: I created a query object, sent it to a repository, and fed each option to a SQL "table valued function" where null values represent "get all products". I consider this a good effort, but far from DDD acceptable. I want to avoid any "programming" in SQL, hopefully doing everything with a repository. Comments on this topic would be appreciated. Specific question: How would I rewrite this query as a Dynamic Query? A link to something like 101 Linq Examples would be fantastic, but with a Dynamic Query scope. I really want to pass to this method the field in quotes "" for which I want a list of options and how many products have that option. (from p in db.Products group p by p.ProductSize into g select new Category { PropertyType = g.Key, Count = g.Count() }).Distinct(); Each DDL option will have "The selection (21)" where the (21) is the quantity of products that have that attribute. Upon selecting an option, all other remaining DDLs will update with the remaining options and counts.
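    For reference, with the Dynamic LINQ sample library (System.Linq.Dynamic, the "Dynamic Query" sample that ships with Visual Studio) the string-based equivalent would look roughly like this sketch; db is the DataContext from the question, and the trailing Distinct() becomes unnecessary because GroupBy already yields one row per key:

    using System.Linq.Dynamic; // the Dynamic Query sample library

    // The column name arrives as a string, which is the point of the exercise.
    string field = "ProductSize";

    var options = db.Products
        .GroupBy(field, "it")                                         // key selector as a string
        .Select("new (it.Key as PropertyType, it.Count() as Count)"); // string projection

    // 'options' enumerates objects carrying PropertyType and Count, one per
    // distinct ProductSize, ready to feed the "The selection (21)" DDL text.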

    Read the article

  • ArgumentOutOfRangeException at MySQL execution (MySQL Connector/NET)

    - by Lazlo
    I am getting this exception from a MySqlCommand.ExecuteNonQuery(): Index and length must refer to a location within the string. Parameter name: length The command text is as follows: INSERT INTO accounts (username, password, salt, pin, banned, staff, logged_in, points_a, points_b, points_c, birthday) VALUES ('adminb', 'aea785fbcac7f870769d30226ad55b1aab850fb0979ee00481a87bc846744a646a649d30bca5474b59e4292095c74fa47ae6b9b3a856beef332ff873474cc0d3', 'cb162ef55ff7c58c7cb9f2a580928679', '', '0, '0', '0', '0', '0', '0', '2010-04-18') Sorry for the long string, it is a SHA512 hash. I tried manually adding this data in the table from MySQL GUI tools, and it worked perfectly. I see no "out of range" problem in these strings. Does anybody see something wrong?
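    Whatever the root cause inside the connector, a parameterized command sidesteps quoting problems entirely (note the unbalanced quote in the '0, '0' portion of the statement above); here is a sketch with a placeholder connection string:

    using System;
    using MySql.Data.MySqlClient;

    class AccountInsert
    {
        static void InsertAccount(string passwordHash)
        {
            using (var conn = new MySqlConnection("server=localhost;database=game;uid=app;pwd=secret"))
            using (var cmd = new MySqlCommand(
                @"INSERT INTO accounts
                    (username, password, salt, pin, banned, staff, logged_in,
                     points_a, points_b, points_c, birthday)
                  VALUES (@username, @password, @salt, @pin, @banned, @staff, @logged_in,
                          @points_a, @points_b, @points_c, @birthday)", conn))
            {
                cmd.Parameters.AddWithValue("@username", "adminb");
                cmd.Parameters.AddWithValue("@password", passwordHash); // the SHA512 hash string
                cmd.Parameters.AddWithValue("@salt", "cb162ef55ff7c58c7cb9f2a580928679");
                cmd.Parameters.AddWithValue("@pin", "");
                cmd.Parameters.AddWithValue("@banned", 0);
                cmd.Parameters.AddWithValue("@staff", 0);
                cmd.Parameters.AddWithValue("@logged_in", 0);
                cmd.Parameters.AddWithValue("@points_a", 0);
                cmd.Parameters.AddWithValue("@points_b", 0);
                cmd.Parameters.AddWithValue("@points_c", 0);
                cmd.Parameters.AddWithValue("@birthday", new DateTime(2010, 4, 18));

                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }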

    Read the article

  • Adding Actions to a Cube in SQL Server Analysis Services 2008

    Actions are a powerful way of extending the value of SSAS cubes for end users. They can click on a cube or a portion of a cube to start an application with the selected item as a parameter, or to retrieve information about the selected item. Actions haven't been well documented until now; Robert Sheldon once more makes everything clear.

    Read the article

  • ASP.NET MVC and Ajax async callback execution order

    - by lrb
    I have been sorting through this issue all day and hope someone can help pinpoint my problem. I have created an "asynchronous progress callback" type of functionality in my app using Ajax. When I strip the functionality out into a test application I get the desired results (see the "Desired Functionality" screenshot).

    When I tie the functionality into my single-page application using the same code, I get a sort of blocking issue where all requests are responded to only after the last task has completed. In the test app above, all requests are responded to in order. The server reports a "pending" state for all requests until the controller method has completed. Can anyone give me a hint as to what could cause the change in behavior? (A "Not Desired" screenshot was attached here.)

    Desired Fiddler request/response:

    GET http://localhost:12028/task/status?_=1383333945335 HTTP/1.1
    X-ProgressBar-TaskId: 892183768
    Accept: */*
    X-Requested-With: XMLHttpRequest
    Referer: http://localhost:12028/
    Accept-Language: en-US
    Accept-Encoding: gzip, deflate
    User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)
    Connection: Keep-Alive
    DNT: 1
    Host: localhost:12028

    HTTP/1.1 200 OK
    Cache-Control: private
    Content-Type: text/html; charset=utf-8
    Vary: Accept-Encoding
    Server: Microsoft-IIS/8.0
    X-AspNetMvc-Version: 3.0
    X-AspNet-Version: 4.0.30319
    X-SourceFiles: =?UTF-8?B?QzpcUHJvamVjdHNcVEVNUFxQcm9ncmVzc0Jhclx0YXNrXHN0YXR1cw==?=
    X-Powered-By: ASP.NET
    Date: Fri, 01 Nov 2013 21:39:08 GMT
    Content-Length: 25

    Iteration completed...

    Not desired Fiddler request/response:

    GET http://localhost:60171/_Test/status?_=1383341766884 HTTP/1.1
    X-ProgressBar-TaskId: 838217998
    Accept: */*
    X-Requested-With: XMLHttpRequest
    Referer: http://localhost:60171/Report/Index
    Accept-Language: en-US
    Accept-Encoding: gzip, deflate
    User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)
    Connection: Keep-Alive
    DNT: 1
    Host: localhost:60171
    Pragma: no-cache
    Cookie: ASP.NET_SessionId=rjli2jb0wyjrgxjqjsicdhdi; AspxAutoDetectCookieSupport=1; TTREPORTS_1_0=CC2A501EF499F9F...; __RequestVerificationToken=6klOoK6lSXR51zCVaDNhuaF6Blual0l8_JH1QTW9W6L-3LroNbyi6WvN6qiqv-PjqpCy7oEmNnAd9s0UONASmBQhUu8aechFYq7EXKzu7WSybObivq46djrE1lvkm6hNXgeLNLYmV0ORmGJeLWDyvA2

    HTTP/1.1 200 OK
    Cache-Control: private
    Content-Type: text/html; charset=utf-8
    Vary: Accept-Encoding
    Server: Microsoft-IIS/8.0
    X-AspNetMvc-Version: 4.0
    X-AspNet-Version: 4.0.30319
    X-SourceFiles: =?UTF-8?B?QzpcUHJvamVjdHNcSUxlYXJuLlJlcG9ydHMuV2ViXHRydW5rXElMZWFybi5SZXBvcnRzLldlYlxfVGVzdFxzdGF0dXM=?=
    X-Powered-By: ASP.NET
    Date: Fri, 01 Nov 2013 21:37:48 GMT
    Content-Length: 25

    Iteration completed...

    The only difference in the two request headers, besides the auth tokens, is "Pragma: no-cache" in the request and the ASP.NET MVC version in the response.
    Thanks.

    Update - code posted (I probably need to indicate this code originated from an article by Dino Esposito):

    var ilProgressWorker = function () {
        var that = {};
        that._xhr = null;
        that._taskId = 0;
        that._timerId = 0;
        that._progressUrl = "";
        that._abortUrl = "";
        that._interval = 500;
        that._userDefinedProgressCallback = null;
        that._taskCompletedCallback = null;
        that._taskAbortedCallback = null;

        that.createTaskId = function () {
            var _minNumber = 100,
                _maxNumber = 1000000000;
            return _minNumber + Math.floor(Math.random() * _maxNumber);
        };

        // Set progress callback
        that.callback = function (userCallback, completedCallback, abortedCallback) {
            that._userDefinedProgressCallback = userCallback;
            that._taskCompletedCallback = completedCallback;
            that._taskAbortedCallback = abortedCallback;
            return this;
        };

        // Set frequency of refresh
        that.setInterval = function (interval) {
            that._interval = interval;
            return this;
        };

        // Abort the operation
        that.abort = function () {
            // if (_xhr !== null)
            //     _xhr.abort();
            if (that._abortUrl != null && that._abortUrl != "") {
                $.ajax({
                    url: that._abortUrl,
                    cache: false,
                    headers: { 'X-ProgressBar-TaskId': that._taskId }
                });
            }
        };

        // INTERNAL FUNCTION
        that._internalProgressCallback = function () {
            that._timerId = window.setTimeout(that._internalProgressCallback, that._interval);
            $.ajax({
                url: that._progressUrl,
                cache: false,
                headers: { 'X-ProgressBar-TaskId': that._taskId },
                success: function (status) {
                    if (that._userDefinedProgressCallback != null)
                        that._userDefinedProgressCallback(status);
                },
                complete: function (data) {
                    var i = 0;
                }
            });
        };

        // Invoke the URL and monitor its progress
        that.start = function (url, progressUrl, abortUrl) {
            that._taskId = that.createTaskId();
            that._progressUrl = progressUrl;
            that._abortUrl = abortUrl;
            // Place the Ajax call
            _xhr = $.ajax({
                url: url,
                cache: false,
                headers: { 'X-ProgressBar-TaskId': that._taskId },
                complete: function () {
                    if (_xhr.status != 0)
                        return;
                    if (that._taskAbortedCallback != null)
                        that._taskAbortedCallback();
                    that.end();
                },
                success: function (data) {
                    if (that._taskCompletedCallback != null)
                        that._taskCompletedCallback(data);
                    that.end();
                }
            });
            // Start the progress callback (if any)
            if (that._userDefinedProgressCallback == null || that._progressUrl === "")
                return this;
            that._timerId = window.setTimeout(that._internalProgressCallback, that._interval);
        };

        // Finalize the task
        that.end = function () {
            that._taskId = 0;
            window.clearTimeout(that._timerId);
        };

        return that;
    };
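    One detail worth noticing in the failing trace is the ASP.NET_SessionId cookie: ASP.NET serializes concurrent requests from the same session whenever the handler has read-write session access, which would produce exactly this everything-completes-at-the-end pattern. That is a guess rather than a confirmed diagnosis, but it is cheap to test by relaxing the controller's session access (the controller name here is a stand-in for whichever controller serves /_Test/status):

    using System.Web.Mvc;
    using System.Web.SessionState;

    // With ReadOnly (or Disabled) session state, ASP.NET no longer takes the
    // exclusive per-session lock, so the polling requests can be answered
    // while the long-running action is still executing.
    [SessionState(SessionStateBehavior.ReadOnly)]
    public class TestController : Controller
    {
        public ActionResult Status()
        {
            // ... look up progress for the X-ProgressBar-TaskId header ...
            return Content("Iteration completed...");
        }
    }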

    Read the article

  • SQL MDS - Updating the Name attribute of member using Staging Table

    - by Randy Aldrich Paulo
    Creating a member is usually done by populating the member staging table (tblStgMember); during this process you assign a value for the member code and member name. Now if you want to update the member Name attribute, you can do this by adding a record to the attribute staging table (tblStgMemberAttribute) with AttributeName = "Name". If you try populating the tblStgMember table instead, it will say that the member code already exists.

    INSERT INTO mdm.tblStgMemberAttribute (ModelName, EntityName, MemberType_ID, MemberCode, AttributeName, AttributeValue)
    VALUES (N'Product', N'Product', 1, N'BK-M101', N'Name', N'Updated Member Name Description')
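    If the staging tables are being driven from application code, the same insert might look like this parameterized sketch (the connection string is a placeholder; the staging batch still has to be processed afterwards):

    using System.Data.SqlClient;

    class MdsStagingDemo
    {
        static void UpdateMemberName()
        {
            using (var conn = new SqlConnection(@"Server=.;Database=MDS;Integrated Security=true"))
            using (var cmd = new SqlCommand(
                @"INSERT INTO mdm.tblStgMemberAttribute
                    (ModelName, EntityName, MemberType_ID, MemberCode, AttributeName, AttributeValue)
                  VALUES (@model, @entity, 1, @code, N'Name', @value)", conn))
            {
                cmd.Parameters.AddWithValue("@model", "Product");
                cmd.Parameters.AddWithValue("@entity", "Product");
                cmd.Parameters.AddWithValue("@code", "BK-M101");
                cmd.Parameters.AddWithValue("@value", "Updated Member Name Description");

                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }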

    Read the article

  • How to avoid loading a LINQ to SQL object twice when editing it on a website.

    - by emzero
    Hi guys, I know you are all tired of these LINQ to SQL questions, but I'm barely starting to use it (never used an ORM before) and I've already found some "ugly" things. I'm pretty used to old-school ASP.NET WebForms development, but I want to leave that behind and learn the new stuff (I've just started to read an ASP.NET MVC book and a .NET 3.5/4.0 one). So here is one thing I didn't like and for which I couldn't find a good alternative. In most examples of editing a LINQ object I've seen, the object is loaded (hitting the db) first to fill the current values on the form page. Then the user modifies some fields, and when the "Save" button is clicked, the object is loaded a second time and then updated. Here is a simplified example from ScottGu's NerdDinner site:

    //
    // GET: /Dinners/Edit/5

    [Authorize]
    public ActionResult Edit(int id) {
        Dinner dinner = dinnerRepository.GetDinner(id);
        return View(new DinnerFormViewModel(dinner));
    }

    //
    // POST: /Dinners/Edit/5

    [AcceptVerbs(HttpVerbs.Post), Authorize]
    public ActionResult Edit(int id, FormCollection collection) {
        Dinner dinner = dinnerRepository.GetDinner(id);
        UpdateModel(dinner);
        dinnerRepository.Save();
        return RedirectToAction("Details", new { id = dinner.DinnerID });
    }

    As you can see, the dinner object is loaded twice for every modification. Unless I'm missing something about LINQ to SQL caching the last queried objects or something like that, I don't like getting it twice when it should be retrieved only one time, modified and then committed back to the database. So again, am I really missing something? Or is it really hitting the database twice? (In the example above it won't hurt, but there could be cases where getting an object or set of objects is heavy stuff.) If so, what alternative do you think is best to avoid double-loading the object? Thank you so much, greetings!
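    For what it's worth, LINQ to SQL can update without that second read if the table carries a ROWVERSION/TIMESTAMP column: build the entity from the posted values and attach it as modified. A sketch against the NerdDinner shapes (assuming the controller can reach the DataContext, here called db):

    // POST: /Dinners/Edit/5 - no initial SELECT. Requires a timestamp column
    // on the Dinners table so LINQ to SQL can run optimistic concurrency
    // without knowing the original values.
    [AcceptVerbs(HttpVerbs.Post), Authorize]
    public ActionResult Edit(int id, FormCollection collection)
    {
        var dinner = new Dinner { DinnerID = id };
        UpdateModel(dinner);             // copy the posted form values onto the new instance

        db.Dinners.Attach(dinner, true); // attach as modified: every field marked changed
        db.SubmitChanges();              // a single UPDATE, no prior SELECT

        return RedirectToAction("Details", new { id = dinner.DinnerID });
    }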

    Read the article

  • SSDT gotcha – Moving a file erases code analysis suppressions

    - by jamiet
    I discovered a little wrinkle in SSDT today that is worth knowing about if you are managing your database schemas using SSDT. In short, if a file is moved to a different folder in the project then any code analysis suppressions that reference that file will disappear from the suppression file. This makes sense if you think about it because the paths stored in the suppression file are no longer valid, but you probably won’t be aware of it until it happens to you. If you don’t know what code analysis is or you don’t know what the suppression file is then you can probably stop reading now, otherwise read on for a simple short demo. Let’s create a new project and add a stored procedure to it called sp_dummy. Naming stored procedures with a sp_ prefix is generally frowned upon and hence SSDT static code analysis will look for occurrences of this and flag them. So, the next thing we need to do is turn on static code analysis in the project properties: A subsequent build causes a code analysis warning as we might expect: Let’s suppose we actually don’t mind stored procedures with sp_ prefixes, we can just right-click on the message to suppress and get rid of it: That causes a suppression file to get created in our project: Notice that the suppression file contains a relative path to the file that has had the suppression placed upon it. Now if we simply move the file within our project to a new folder notice that the suppression that we just created gets removed from the suppression file: As I alluded above this behaviour is intuitive because the path originally stored in the suppression file is no longer relevant but you’re probably not going to be aware of it until it happens to you and messages that you thought you had suppressed start appearing again. Definitely one to be aware of. @Jamiet   

    Read the article

  • Is it possible to update an old database from a dbml file? (C#, .NET 4, LINQ, SQL Server)

    - by Emil
    Hi all, I recently began a new job on a very interesting project (C#, .NET 4, LINQ, VS 2010 and SQL Server), and immediately I got a very exciting challenge. I must implement either a new tool or some logic that runs at program start, or whatever, but what must happen is the following: the customers have a previous application and database (full of their specific data). Now a new version is ready and the customer gets the update. In the meantime we made some modifications to the DB (a new table, new columns, maybe an old column deleted, or whatever). I'm pretty new to LINQ and also to SQL databases, and my first solution would be: check the application/database version and implement all the changes step by step, comparing all tables, columns, keys, constraints, etc. (all the new information I have in my dbml; the old I can query from the existing DB), and do this each time the version changes. But somehow I feel this is NOT a smart solution, so I am looking for a general solution to this problem. Is there a way to update the customer's DB from the dbml file? Creating a new one is not a problem (CreateDatabase with DataContext), but is there any update/alter database method? I guess I'm not the only one searching for such a solution (I found nothing on the internet, or I looked for the wrong keywords). How did you solve this problem? I am also open to an external tool, but would prefer a solution with C#, LINQ or something similar. Thanks in advance for any ideas! Best regards, Emil

    Read the article

  • Optimising Database Mirroring over WAN

    - by blakmk
    I recently got asked by our network guys about bottlenecks in the WAN link that is used for mirroring to our DR site. They asked me to turn off encryption of database mirroring so that the Riverbed software they were using could optimise the packets sent over the WAN. I was a bit sceptical at first about the security risks, but it seems the Riverbed software has its own form of obfuscation, making the packets difficult to read. After reading an article by Rusanu I realised that it could be done with minimal downtime, potentially reducing network traffic by 5-10% on its own. After turning off encryption I was pleasantly surprised to see that overall network traffic for mirroring dropped by a whopping 75%!

    Read the article

  • jquery control execution of the callback functions

    - by user253530
    Function socialbookmarksTableData(data) is called by another function to generate the content of a table; data is a JSON object. Inside the function I call 2 other functions that use getJSON and POST (with JSON as the return type) to get some data. The problem is: though the functions execute correctly, I get undefined values for the 2 variables (bookmarkingSites, bookmarkCategories). Help with a solution, please.

    function socialbookmarksGetBookmarkCategories(bookmarkID) {
        var toReturn = '';
        $.post("php/socialbookmark-get-bookmark-categories.php", {
            bookmarkID: bookmarkID
        }, function (data) {
            $.each(data, function (i, categID) {
                toReturn += '<option value="' + data[i].categID + '">' + data[i].categName + '</option>';
            });
            return toReturn;
        }, "JSON");
    }

    function socialbookmarksGetBookmarkSites() {
        var bookmarkingSites = '';
        $.getJSON("php/socialbookmark-get-bookmarking-sites.php", function (bookmarks) {
            for (var i = 0; i < bookmarks.length; i++) {
                //alert(bookmarks[i].id);
                bookmarkingSites += '<option value="' + bookmarks[i].id + '">' + bookmarks[i].title + '</option>';
            }
            return bookmarkingSites;
        });
    }

    function socialbookmarksTableData(data) {
        var toAppend = '';
        var bookmarkingSites = socialbookmarksGetBookmarkSites();
        $.each(data.results, function (i, id) {
            var bookmarkCategories = socialbookmarksGetBookmarkCategories(data.results[i].bookmarkID);
            // rest of the code is not important
        });
        $("#searchTable tbody").append(toAppend);
    }

    Read the article

  • [C#] OnPaint events (invalidated) changing execution order after a period of normal operation (runtime)

    - by Luke Mcneice
    Hi all, I have 3 data graphs that are painted via their Paint events. When I have data that I need to insert into a graph, I call the control's Invalidate() method. The first control's Paint event actually creates a bitmap buffer for the other 2 graphs, to avoid repeating a long loop, so the Invalidate calls are made in a specific order (1, 2, 3). This works well; however, when the graphed data reaches the end of the graph window (PictureBox), where the data would normally start scrolling, the Paint events begin firing in the wrong order (2, 3, 1). Has anyone come across this before? Why might this be happening?
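    One way to remove the ordering dependency altogether (a suggested sketch, not the poster's code; graph1-graph3 stand in for the three PictureBox controls) is to build the shared bitmap when the data arrives, outside any Paint handler, so each Paint merely blits a finished buffer:

    using System.Drawing;
    using System.Windows.Forms;

    public partial class GraphForm : Form
    {
        private Bitmap _buffer; // shared by all three graphs

        private void OnNewData(/* incoming samples */)
        {
            var fresh = new Bitmap(graph1.Width, graph1.Height);
            using (var g = Graphics.FromImage(fresh))
            {
                // ... run the expensive drawing loop once, into 'fresh' ...
            }
            if (_buffer != null) _buffer.Dispose();
            _buffer = fresh;

            graph1.Invalidate();
            graph2.Invalidate();
            graph3.Invalidate();
        }

        // Wired to the Paint event of all three PictureBoxes.
        private void graph_Paint(object sender, PaintEventArgs e)
        {
            if (_buffer != null)
                e.Graphics.DrawImageUnscaled(_buffer, 0, 0); // cheap blit, order no longer matters
        }
    }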

    Read the article

  • SQL Server 2008 R2: StreamInsight changes at RTM: Access to grouping keys via explicit typing

    - by Greg Low
    One of the problems that existed in the CTP3 edition of StreamInsight was an error that occurred if you tried to access the grouping key from within your projection expression. That was a real issue as you always need access to the key. It's a bit like using a GROUP BY in TSQL and then not including the columns you're grouping by in the SELECT clause. You'd see the results but not be able to know which results are which. Look at the following code: var laneSpeeds = from e in vehicleSpeeds group e...(read more)

    Read the article

  • What should PASS be?

    - by RickHeiges
    Recently, there have been some blog posts about what PASS should be? It is great to see these posts because it gives the BoD feedback on how we are doing and where we can improve. When I first started to get involved in PASS back in 2001, PASS was little more than a conference and some loosely affiliated chapters. It wanted to be more and claimed to be more, but it wasn't. The conference was (and still is) our main source of revenue. The website was essentially a brochure for the conference. The...(read more)

    Read the article

  • What are SQL Profiler eventclass numbers 65527, 65528, 65533, 65534?

    - by simonsabin
    I’ve been trying to use RML to process some files and couldn’t figure out why the numbers were all so much smaller than they should be. I then found a line in the RML output: “Found [TRACE_STOP] event indicating the end of the trace files”. This causes RML to stop processing further data, oh. In my case I had stopped the trace to add some error events because the client was experiencing errors. How do I get RML to process all the other data, I wondered? This led me to the eventclasses in the trace...(read more)

    Read the article

  • Including BLOB images in your PDF Reports

    - by thatjeffsmith
    Earlier this year we walked through how to work with BLOBs in Oracle SQL Developer, so you already know how to INSERT, UPDATE and view the BLOBs stored in your tables. But now I want to show you how to include those images in your PDF reports. You know how to work with SQL Developer reports, right? No? OK, let's take a quick run down memory lane then:

    - How to Build a Bar Chart
    - Child reports – click on a parent record for on-the-fly children records

    Alright, so if you have a GRID report that contains a BLOB column, you have the option of including the BLOB contents when you create a PDF export. At design time, specify how you want the BLOB content to be treated when you export to PDF. Note that you must specify the treatment of the BLOBs in the report design; you won't be prompted when you launch the Export wizard dialog. When you open your PDF, there will be a link to the image. Click it, then confirm, and it will launch the default image viewer on your machine. I hope your pictures are more exciting than mine.

    Read the article
