Search Results

Search found 36387 results on 1456 pages for 'query extender control'.


  • Version control: delete branches after merging?

    - by Nathan Long
    When you branch some code, finish working with the branch, and merge it back to the trunk, what do you do with the branch? Delete it from the repository? Keep it for reference? It seems like you would keep it for reference, but I imagine the /branches directory could get pretty cluttered. (If this isn't something people generally agree on, please comment and I'll make it a community wiki.)
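
    One subtlety for Subversion specifically: deleting a merged branch does not lose its history, since the branch can always be resurrected from the revision at which it was deleted. A minimal sketch of the "merge, then delete" routine (branch name and URLs are hypothetical):

        # Merge the branch into trunk, then commit the merge.
        cd trunk-working-copy
        svn merge ^/branches/feature-x .
        svn commit -m "Merge feature-x into trunk"

        # Remove the branch directory to keep /branches uncluttered;
        # the history remains in the repository.
        svn rm ^/branches/feature-x -m "Remove feature-x after merging to trunk"

        # It can be restored later from the revision just before the delete:
        # svn copy ^/branches/feature-x@REV ^/branches/feature-x -m "Restore feature-x"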

    Read the article

  • sending control+c (SIGINT) to NSPIPE in objective-c

    - by Ron
    Hello, I am trying to terminate an openvpn task spawned via NSTask. My question: should I send Ctrl+C (SIGINT) to the input NSPipe for my NSTask? inputPipe = [NSPipe pipe]; taskInput = [inputPipe fileHandleForWriting]; NSString *dataString = @"\cC"; [taskInput writeData:[dataString dataUsingEncoding:[NSString defaultCStringEncoding]]]; Alternatively, I was thinking about using kill( pid, SIGINT ); but that would be much more complicated, since the process ID has to be determined via a detour ([task processIdentifier] does not help here) - the original NSTask calls: /bin/bash -c sudo -S | mypassword .... That's not nice, I know, but it is only called once and the sudo password has already been entered in that case. Thanks for any suggestions/opinions/etc. Ron
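
    For what it's worth, NSTask can deliver the signal itself, so writing control characters to the input pipe may not be needed at all; a hedged Objective-C sketch (this signals the NSTask's own child, which here is the /bin/bash wrapper rather than openvpn itself):

        #include <signal.h>

        // NSTask exposes signal delivery directly:
        [task interrupt];    // sends SIGINT to the task
        // [task terminate]; // sends SIGTERM instead

        // Equivalent, using the PID the task exposes:
        kill([task processIdentifier], SIGINT);

    Because the command is run through /bin/bash and sudo, the PID above belongs to the shell; the signal may still need to reach the real openvpn process (for example by having the script print its own PID, or by looking it up with pgrep).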

    Read the article

  • WPF Issues with Control Layout

    - by Brett Powell
    I am making an application that connects to our billing software using its API, and I am running into a few issues getting the layout working properly. I want to make it so that when one of the expanders is minimized, the other window fills the gap, and when it is expanded again the other expander goes back to where it was. Right now when the arrow is clicked on one, there is just an empty gap. I used a DockPanel as the parent which I assumed would automatically do this, but it isn't working. Second question, is there a way to make these areas resizable? I don't want to try and get too frisky with allowing the user to undock the menus (don't even know if that is possible with just straight WPF) but it would be nice if they could change the width/height of them. Also, just a newbie question to C#, but what is the equivalent of a C++ header file? It looks like you just use .cs files, but I am not sure. I want to extract all of my functions that pull the data from the billing software and put them into a different file to clean up the code. Here is my XAML... <Window x:Class="WpfApplication3.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Billing Management" Height="550" Width="754" xmlns:shared="http://schemas.actiprosoftware.com/winfx/xaml/shared" WindowStartupLocation="CenterScreen" WindowStyle="ThreeDBorderWindow"> <Grid> <Grid.RowDefinitions> <RowDefinition Height="22" /> <RowDefinition /> </Grid.RowDefinitions> <Menu Height="22" Name="menu1" Margin="0" HorizontalAlignment="Stretch" VerticalAlignment="Top" HorizontalContentAlignment="Left" IsEnabled="True" IsMainMenu="True"> <MenuItem Header="_File"> <MenuItem Header="_Open" /> <MenuItem Header="_Close" /> <Separator/> <MenuItem Header="_Exit" /> </MenuItem> </Menu> <TabControl Name="tabControl1" HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" BorderThickness="1" Padding="0" TabStripPlacement="Bottom" UseLayoutRounding="False" FlowDirection="LeftToRight" Grid.Row="1"> <TabItem Header="Main" Name="tabItem1" Margin="0"> <DockPanel Name="dockPanel1" LastChildFill="True"> <ListBox Height="100" Name="listBox3" DockPanel.Dock="Top" /> <ListBox Name="listBox4" Width="200" DockPanel.Dock="Right" /> <DockPanel Height="Auto" Name="dockPanel2" Width="Auto" VerticalAlignment="Stretch" LastChildFill="True"> <shared:AnimatedExpander Header="Staff Online" Width="200" Name="expanderStaffOnline" IsExpanded="True" Height="194" BorderThickness="0" DockPanel.Dock="Top" VerticalContentAlignment="Stretch"> <ListBox Name="listboxStaffOnline" Width="Auto" Height="Auto" Margin="0" VerticalAlignment="Stretch" Loaded="listboxStaffOnline_Loaded" /> </shared:AnimatedExpander> <shared:AnimatedExpander Header="Test Menu 2" Height="Auto" Name="animatedExpander1" BorderThickness="1" Margin="0,0,0,0" IsExpanded="True" VerticalContentAlignment="Stretch"> <ListBox Height="Auto" HorizontalAlignment="Stretch" Name="listBox6" VerticalAlignment="Stretch" Margin="0" BorderThickness="1" /> </shared:AnimatedExpander> </DockPanel> <ListBox Height="100" Name="listboxAdminLogs" DockPanel.Dock="Bottom" Loaded="listboxAdminLogs_Loaded" /> <ListBox Name="listBox5" /> </DockPanel> </TabItem> <TabItem Header="Support" Name="tabItem2" Margin="0"> </TabItem> <TabItem Header="Clients" /> <TabItem Header="Billing" /> <TabItem Header="Orders" /> </TabControl> </Grid> </Window>
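
    Two small notes, offered as sketches rather than definitive fixes. For the resizable areas: a DockPanel does not give user-resizable regions, but a Grid with a GridSplitter does; an illustrative snippet (names are made up, not taken from the markup above):

        <Grid>
          <Grid.RowDefinitions>
            <RowDefinition Height="2*" />
            <RowDefinition Height="Auto" />
            <RowDefinition Height="*" />
          </Grid.RowDefinitions>
          <ListBox Grid.Row="0" x:Name="topList" />
          <!-- Draggable divider: lets the user change the two row heights at runtime -->
          <GridSplitter Grid.Row="1" Height="5" HorizontalAlignment="Stretch"
                        ResizeDirection="Rows" ResizeBehavior="PreviousAndNext" />
          <ListBox Grid.Row="2" x:Name="bottomList" />
        </Grid>

    On the header-file question: C# has no headers; the usual approach is simply to move the data-access methods into their own class in a separate .cs file (or into another partial class of the same window) and call it from the code-behind.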

    Read the article

  • Version Control: multiple version hell, file synchronization

    - by SigTerm
    Hello. I would like to know how you normally deal with this situation: I have a set of utility functions - say 5 to 10 files. Technically they are a static library, cross-platform - SConscript/SConstruct plus a Visual Studio project (not a solution). Those utility functions are used in multiple small projects (15+, and the number grows over time). Each project has a copy of a few files or of the entire library, not a link into one central place. Sometimes a project uses one file, two files; some use everything. Normally the utility functions are included as a copy of every file plus the SConscript/SConstruct or Visual Studio project (depending on the situation). Each project has a separate git repository. Sometimes one project is derived from another, sometimes it isn't. You work on every one of them, in random order. There are no other people (to make things simpler). The problem arises when, while working on one project, you modify those utility function files. Because each project has its own copy of a file, this introduces a new version, which leads to a mess when you later (a week later, for example) try to work out which version has the most complete functionality (i.e. you added a function to a.cpp in one project, and added another function to a.cpp in another project, which created a version fork). How would you handle this situation to avoid "version hell"? One way I can think of is using symbolic/hard links, but it isn't perfect - if you delete the one central copy, it all goes to hell. And hard links won't work on a dual-boot system (although symbolic links will). It looks like what I need is something like an advanced git repository, where code for the project is stored in one local repository but is synchronized with multiple external repositories. But I'm not sure how to do that, or whether it is possible with git. So, what do you think?
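
    One common way to get "one central utility repository, synchronized into many projects" with git is a submodule (or a subtree merge); a rough sketch with hypothetical paths:

        # Put the utility files in their own repository once, then reference it
        # from each project instead of copying files around:
        cd project-a
        git submodule add ../utils-lib.git libs/utils
        git commit -m "Add shared utils as a submodule"

        # In a fresh clone of any project:
        git submodule update --init

        # To pick up utility changes that were pushed from another project:
        cd libs/utils && git pull origin master
        cd ../.. && git commit -am "Bump utils to latest"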

    Read the article

  • Common Lisp condition system for transfer of control

    - by Ken
    I'll admit right up front that the following is a pretty terrible description of what I want to do. Apologies in advance. Please ask questions to help me explain. :-) I've written ETLs in other languages that consist of individual operations that look something like: // in class CountOperation IEnumerable<Row> Execute(IEnumerable<Row> rows) { var count = 0; foreach (var row in rows) { row["record number"] = count++; yield return row; } } Then you string a number of these operations together, and call The Dispatcher, which is responsible for calling Operations and pushing data between them. I'm trying to do something similar in Common Lisp, and I want to use the same basic structure, i.e., each operation is defined like a normal function that inputs a list and outputs a list, but lazily. I can define-condition a condition (have-value) to use for yield-like behavior, and I can run it in a single loop, and it works great. I'm defining the operations the same way, looping through the inputs: (defun count-records (rows) (loop for count from 0 for row in rows do (signal 'have-value :value `(:count ,count @,row)))) The trouble is if I want to string together several operations, and run them. My first attempt at writing a dispatcher for these looks something like: (let ((next-op ...)) ;; pick an op from the set of all ops (loop (handler-bind ((have-value (...))) ;; records output from operation (setq next-op ...) ;; pick a new next-op (call next-op))) But restarts have only dynamic extent: each operation will have the same restart names. The restart isn't a Lisp object I can store, to store the state of a function: it's something you call by name (symbol) inside the handler block, not a continuation you can store for later use. Is it possible to do something like I want here? Or am I better off just making each operation function explicitly look at its input queue, and explicitly place values on the output queue?

    Read the article

  • Setting up Inversion of Control (IoC) in ASP.NET MVC with Castle Windsor

    - by Lirik
    I'm going over Sanderson's Pro ASP.NET MVC Framework, and in Chapter 4 he discusses creating a custom controller factory. It seems that the original method used to register controllers, AddComponentLifeStyle or AddComponentWithLifeStyle, is deprecated now: public class WindsorControllerFactory : DefaultControllerFactory { IWindsorContainer container; public WindsorControllerFactory() { container = new WindsorContainer(new XmlInterpreter(new ConfigResource("castle"))); // register all the controller types as transient var controllerTypes = from t in Assembly.GetExecutingAssembly().GetTypes() where typeof(IController).IsAssignableFrom(t) select t; //[Obsolete("Use Register(Component.For<I>().ImplementedBy<T>().Named(key).Lifestyle.Is(lifestyle)) instead.")] //IWindsorContainer AddComponentLifeStyle<I, T>(string key, LifestyleType lifestyle) where T : class; foreach (Type t in controllerTypes) { container.Register(Component.For<IController>().ImplementedBy<???>().Named(t.FullName).LifeStyle.Is(LifestyleType.Transient)); } } // Constructs the controller instance needed to service each request protected override IController GetControllerInstance(Type controllerType) { return (IController)container.Resolve(controllerType); } } The new suggestion is to use Register(Component.For<I>().ImplementedBy<T>().Named(key).Lifestyle.Is(lifestyle)), but I can't figure out how to provide the implementing controller type in the ImplementedBy<???>() method. I tried ImplementedBy<t>() and ImplementedBy<typeof(t)>(), but I can't find the appropriate way to pass in the implementing type. Any ideas?
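
    Since t is only known at runtime, the generic ImplementedBy<T>() cannot be used there; the non-generic overloads that take a System.Type are the usual way out. A hedged sketch (member names mirror the code in the question, so check them against your Windsor version):

        // Component.For and ImplementedBy both have overloads accepting System.Type,
        // so the reflected controller type can be passed directly:
        foreach (Type t in controllerTypes)
        {
            container.Register(
                Component.For(typeof(IController))
                         .ImplementedBy(t)
                         .Named(t.FullName)
                         .LifeStyle.Is(LifestyleType.Transient));
        }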

    Read the article

  • Adding a search box to a windows forms web browser control that highlights text

    - by evan
    I have a Windows Forms (C#) application. It displays a webpage and has a textbox/button combination which can be used to search for text displayed to the user. Currently I search the inner text of all elements for the text in the textbox, and then I weed out the elements that are redundant (for example, a word could be in a 'p' and a 'b' element where the 'b' is a child element of the 'p', so the element returned should be the 'b'). Finally I run the ScrollIntoView(true) method on the found element. I'd now like to add a function that highlights the text (like when you search for a term in a real web browser). My first thought was to just inject HTML and/or JavaScript code around the text, but that seems like a messy solution. Any ideas on how I should do this? Thanks for your help!
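
    A rough sketch of the "inject markup" idea using only the WinForms WebBrowser API; it is deliberately naive (a plain string replace on the body HTML can mangle the page if the term also appears inside tag names or attributes), but it shows the shape of the approach:

        // Wrap every literal occurrence of `term` in a highlight span (sketch).
        private void HighlightTerm(string term)
        {
            if (webBrowser1.Document == null || webBrowser1.Document.Body == null)
                return;

            string highlighted =
                "<span style=\"background-color: yellow\">" + term + "</span>";
            webBrowser1.Document.Body.InnerHtml =
                webBrowser1.Document.Body.InnerHtml.Replace(term, highlighted);
        }

    A more robust variant walks the DOM text nodes, or injects a small JavaScript highlighter and calls it through Document.InvokeScript, which avoids corrupting the markup.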

    Read the article

  • Agile version control?

    - by Paul Dixon
    I'm trying to work out a good method to manage code changes on a large project with multiple teams. We use Subversion at the moment, but I want more flexibility in building a new release than I seem to be able to get with Subversion. Here's roughly what I want: each developer creates easily identifiable patches for the project, where each patch delivers a complete user story (a releasable feature or fix) and might encompass many changes to many files; developers are able to easily apply and remove their own and other patches to facilitate testing; the release manager selects the patches to be used in the next release into a new branch; the branch is tested, fixes are merged in, and it is ultimately merged into live; teams can then pull these changes back down into their sandboxes. I'm looking at stacked git as a way of achieving this, but what other tools or techniques can deliver this sort of workflow?
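
    For comparison, with git (or git-svn layered over the existing Subversion repository) the workflow above maps fairly directly onto one branch per user story plus a release branch assembled by the release manager; a sketch with hypothetical branch names:

        # Developer: one identifiable branch per user story
        git checkout -b story/1234-password-reset master
        # ...commit the story's changes...

        # Anyone: apply/remove stories in a local sandbox for testing
        git checkout -b sandbox master
        git merge --no-ff story/1234-password-reset   # apply
        git reset --hard master                       # drop it again

        # Release manager: select the stories for the next release
        git checkout -b release/next master
        git merge --no-ff story/1234-password-reset
        git merge --no-ff story/1301-audit-log
        # test, merge fixes, then merge release/next into the live branch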

    Read the article

  • Understanding Git's version control

    - by georgeliquor
    Is there a way to go through different commits on a file? Say I modified a file 5 times and I want to go back to change 2, after I already committed and pushed to a repository. In my understanding the only way is to keep many branches - have I got that right? If so, I'm going to have hundreds of branches in a few days, so I'm probably misunderstanding something. Could anyone clear that up, please?
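
    Branches are not needed for this: every commit already records each file's content, so an older version of a single file can be inspected or restored directly. A sketch (path and hash are placeholders):

        # List the commits that touched the file
        git log --oneline -- path/to/file.txt

        # Show the file exactly as it was at one of those commits
        git show abc1234:path/to/file.txt

        # Put that old version back into the working tree and commit it
        git checkout abc1234 -- path/to/file.txt
        git commit -m "Restore file.txt to the version from abc1234"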

    Read the article

  • Version control for subtitle creation

    - by user3635
    We make subtitles for a TV series and I plan to use a VCS for the work. The structure of the project directory is like this: series/ episode1/nameofepisode1.str episode2/nameofepisode2.str episode3/nameofepisode3.str ... Question: when I finish the subtitles for an episode, I want to assign a release tag to that episode (episode1_v1). I wanted to use git for this, but in git a tag applies to the whole repository. What should I do so that I can view each episode's progress separately? Or maybe there is a more suitable VCS for this?
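
    Git tags do apply to the whole repository, but because each tag name encodes the episode, that is usually enough in practice: history and releases for one episode can be viewed by restricting commands to its directory or tag prefix. A sketch:

        # Tag the repository state when episode 1's subtitles are finished
        git tag -a episode1_v1 -m "Episode 1 subtitles, release 1"

        # Progress of a single episode, ignoring the rest of the series
        git log --oneline -- episode1/

        # All releases made for episode 1
        git tag -l 'episode1_*'

        # The subtitle file exactly as it was released
        git show episode1_v1:episode1/nameofepisode1.str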

    Read the article

  • Inversion of control domain objects construction problem

    - by Andrey
    Hello! As I understand it, an IoC container is helpful for creating application-level objects like services and factories, but domain-level objects should be created manually. Spring's manual tells us: "Typically one does not configure fine-grained domain objects in the container, because it is usually the responsibility of DAOs and business logic to create/load domain objects." Fine. But what if my "fine-grained" domain object depends on some application-level object? For example, I have a UserViewer(User user, UserConstants constants) class. Here user is a domain object which cannot be injected, but UserViewer also needs UserConstants, which is a high-level object managed by the IoC container. I want to inject UserConstants from the IoC container, but I also need the transient runtime parameter User. What is wrong with the design? Thanks in advance! UPDATE It seems I was not precise enough with my question. What I really need is an example of how to do this: create an instance of class UserViewer(User user, UserService service), where user is passed as a parameter and service is injected from IoC. If I inject UserViewer viewer, then how do I pass user to it? If I create UserViewer viewer manually, then how do I pass service to it?
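
    The usual answer is to let the container inject a small factory and pass the transient User as a method argument; a minimal Java-flavoured sketch (the factory class is hypothetical, the other names follow the question):

        // Container-managed factory: UserConstants is injected once,
        // while the transient User is supplied per call.
        public class UserViewerFactory {
            private final UserConstants constants;

            public UserViewerFactory(UserConstants constants) { // injected by the IoC container
                this.constants = constants;
            }

            public UserViewer create(User user) {               // runtime parameter
                return new UserViewer(user, constants);
            }
        }

        // In business logic that already holds a User:
        // UserViewer viewer = userViewerFactory.create(user);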

    Read the article

  • Validation Control

    - by James
    I placed some validation controls on my grid view template. The only problem is that they take up a lot of space vertically. Is there a property I can set to solve this?
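
    If the space is being reserved even when no error is shown, the property to look at is Display on each validator; a hedged markup sketch (control names are illustrative):

        <%-- Display="Dynamic" renders nothing (and takes no space) until validation fails;
             the default, Display="Static", always reserves room for the message. --%>
        <asp:RequiredFieldValidator ID="rfvName" runat="server"
            ControlToValidate="txtName"
            ErrorMessage="Name is required"
            Display="Dynamic" />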

    Read the article

  • Source control issue with deploying versions

    - by Bonefisher
    Hi all, we are discussing how to deploy to production the revisions that are UAT-closed without also deploying revisions that are still UAT not-closed. We are using SVN, and we have found that we cannot simply take a revision without the prior revisions made to the same file. Let me explain with an example: we have 3 revisions made on the same file: r1: UAT closed (ready to deploy) r2: UAT not-closed (not ready) r3: UAT closed (ready to deploy) Now I want to deploy only the changes for which UAT is closed (i.e. r1 and r3). In SVN this is not possible, because r3 also contains the r2 changes. How have you made this work? Maybe branching? Or just take r1 and wait until r2 is UAT closed? Thanks
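
    In Subversion this is normally handled by cherry-picking individual revisions onto a release branch with svn merge -c, rather than deploying trunk wholesale; a sketch (paths are hypothetical, the revision numbers stand in for r1/r3):

        # Cut a release branch from the last known-good deployed state
        svn copy ^/trunk ^/branches/release-candidate -m "Create release branch"

        # In a working copy of the release branch, pull in only the UAT-closed changes
        svn merge -c 101 ^/trunk .
        svn merge -c 103 ^/trunk .
        svn commit -m "Cherry-pick r101 and r103 (UAT closed) for release"

    This only works cleanly when the later revision does not textually depend on the skipped one; if r3 builds on r2's lines the merge will conflict, and the practical options are waiting for r2 or isolating each change on its own feature branch.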

    Read the article

  • version control projects in eclipse

    - by Stan
    Say I have Eclipse installed at the office and at home. Both are version 3.5 but may differ slightly, e.g. in plugin versions. I'd like to commit the code to an online repo when I leave work and then check it out at home. What would be a good solution - GitHub? SourceForge? Are they free? Would those slight differences in Eclipse cause any problems, given that I might commit the whole project folder, which contains some configuration files? Can the community explain a bit or suggest some keywords? I will look up more online. Thanks.
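
    GitHub and SourceForge both offer free hosting (for public repositories at minimum), and the slight Eclipse differences usually only matter if workspace metadata gets committed, which an ignore file can prevent. A hedged sketch of a typical .gitignore for an Eclipse project:

        # Build output and per-machine Eclipse state that should not be versioned
        bin/
        target/
        .metadata/
        .settings/
        *.class

        # Keep .project and .classpath under version control only if both machines
        # use the same setup; otherwise list them here as well.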

    Read the article

  • (g)Vim with version control like Eclipse

    - by Somebody still uses you MS-DOS
    I was an Eclipse user; now I have to use Vim on my machine. I used to "compare" a file I had edited with the CVS repository to do merges and commit the files, using a context menu and my mouse. Is this possible in Vim - opening a vimdiff for a file before committing, and committing it from Vim itself? And how is that supposed to work? I'm supposing I would be editing a file. Then I want to see the modifications: I run vimdiff in gvim and a new window (or buffer) is opened. I review the modifications, keep what is applicable (using vimdiff commands), and commit by running another command. Is this all transparent in Vim? Do I have to keep dropping out of Vim to my terminal, or can it all be done inside it? Do I need to use some plugins, or just fairly simple functions inside my vimrc?

    Read the article

  • get ubuntu terminal to send an escape sequence (control+shift+up)

    - by user62046
    This problem starts when I use Emacs (with the -nw option). Let me first explain it. I tried to define a hotkey (for Emacs) as follows: (global-set-key [(control shift up)] 'other-window) but it doesn't work (no error, it just doesn't work), and neither does (global-set-key [(control shift down)] 'other-window). But (global-set-key [(control shift right)] 'other-window) and (global-set-key [(control shift left)] 'other-window) do work! Because those last two key combinations are used by Emacs by default, though, I don't want to reassign them to other functions. So how can I make control-shift-up and control-shift-down work? I have googled "(control shift up)", and it seems that control-shift-up is used by other people (though there are not many results). On Stack Overflow, Gille answered me as follows: Ctrl+Shift+Up does send a signal to your computer, but your terminal emulator is apparently not transmitting any escape sequence for it. So your problem is in two parts. First you must get your terminal emulator to send an escape sequence, which depends on your terminal emulator, and is Super User material, or Unix.SE if you're using a unix system. Then you need to declare the escape sequence in Emacs, and my answer explains that part. So I come here with this question: how do I get my terminal (I use Ubuntu 10.04 and the built-in terminal) to send an escape sequence for Control+Shift+Up and Control+Shift+Down?
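
    The usual recipe has two halves: make the terminal emit an xterm-style modified-arrow sequence (Ctrl+Shift+Up is conventionally "\e[1;6A"), and then tell Emacs what that sequence means. The terminal half depends on the emulator: xterm emits it out of the box, while gnome-terminal may keep Ctrl+Shift combinations for its own shortcuts. The Emacs half, as a hedged sketch for ~/.emacs (assuming the terminal really does transmit those sequences):

        ;; Teach Emacs (running with -nw) which keys these escape sequences encode.
        (define-key input-decode-map "\e[1;6A" [(control shift up)])
        (define-key input-decode-map "\e[1;6B" [(control shift down)])

        ;; After that, the original bindings behave as intended:
        (global-set-key [(control shift up)]   'other-window)
        (global-set-key [(control shift down)] 'other-window)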

    Read the article

  • Is there a way to explore my WHS files via Xbox 360's Media Center app?

    - by bangoker
    I know it's a "console" question, but it's really more of a Media Center Extender question! I can access the data through the Video/Pictures options, but not through the Media Center app. Also, I can only see the videos that are directly in the Videos folder on my WHS, but not in any of the subfolders. Same thing on my laptop: if I connect to my WHS through Media Player it will only show the videos in Videos but not in any other sub-folder, yet if I open Media Center I can connect to the server and see all the folders.

    Read the article

  • SQL SERVER – Maximize Database Performance with DB Optimizer – SQL in Sixty Seconds #054

    - by Pinal Dave
    Performance tuning is an interesting concept, and everybody evaluates it differently. Every developer and DBA has a different opinion about how to do performance tuning. I personally believe performance tuning is a three-step process: understanding the query, identifying the bottleneck, and implementing the fix. When we are working with a large database application and it suddenly starts to slow down, we are all under stress to get the database back to normal speed. Most of the time we do not have enough time for a deep analysis of what is going wrong, nor of what will fix it; our primary goal at that moment is simply to fix the problem as fast as we can. However, one very important thing to keep in mind is that a quick fix should not create further issues in other parts of the system. When time is of the essence and we still want a deep analysis of the system to find the best solution, we tend to make mistakes, often because there simply isn't time to analyze the entire system. Here is what I do when I face such a situation - I take the help of DB Optimizer. It is a fantastic tool and does superlative performance tuning of the system. Every time I talk about a performance tuning tool, people's initial reaction is that they do not want to try it, because they believe it requires a lot of learning before they can use it. That is absolutely not true of DB Optimizer: it is a very easy-to-use, intuitive tool, and one can get going with it in no time. Here is a quick video I have built where I demonstrate how to identify which index is missing for a query and how to quickly create that index. All three steps of query tuning are completed in less than 60 seconds. If you are into performance tuning and query optimization, you should download DB Optimizer and give it a go. Let us see the same concept in the following SQL in Sixty Seconds video: You can download DB Optimizer and reproduce the same Sixty Seconds experience. Related Tips in SQL in Sixty Seconds: Performance Tuning – Part 1 of 2 – Getting Started and Configuration Performance Tuning – Part 2 of 2 – Analysis, Detection, Tuning and Optimizing What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Interview Questions and Answers, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Identity
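
    Independent of any particular tool, the same "which index is this query missing?" question can also be put to SQL Server itself through the missing-index DMVs; a hedged sketch (the suggestions are hints from the optimizer, not commands to apply blindly):

        -- Top missing-index suggestions recorded since the last SQL Server restart
        SELECT TOP (10)
               mid.statement           AS [table],
               mid.equality_columns,
               mid.inequality_columns,
               mid.included_columns,
               migs.user_seeks,
               migs.avg_user_impact
        FROM sys.dm_db_missing_index_details     AS mid
        JOIN sys.dm_db_missing_index_groups      AS mig
             ON mig.index_handle = mid.index_handle
        JOIN sys.dm_db_missing_index_group_stats AS migs
             ON migs.group_handle = mig.index_group_handle
        ORDER BY migs.user_seeks * migs.avg_user_impact DESC;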

    Read the article

  • SQL SERVER – Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday post of Prox ‘n’ Funx. This is a very interesting subject. By the way, Brad Schulz is my favorite guy when it comes to blogging; I respect him and learn a lot from him. Since everybody is writing something new on this subject, I decided to start a SQL Server 2012 analytic functions series. SQL Server 2012 introduces the new analytic function CUME_DIST(), which returns the cumulative distribution of a value. It is difficult to explain in words alone, so I will use a small example. Instead of creating a new table, I will use the AdventureWorks sample database, since most developers use it for experiments. Let us run the following query. USE AdventureWorks GO SELECT SalesOrderID, OrderQty, CUME_DIST() OVER(ORDER BY SalesOrderID) AS CDist FROM Sales.SalesOrderDetail WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY CDist DESC GO The above query gives the following result. Now let us understand the formula behind CUME_DIST and why the values for SalesOrderID = 43670 are 1: CUME_DIST for a row is the number of rows ordered at or before it (ties included) divided by the total number of rows. The rows for SalesOrderID = 43670 come last in the ordering, so their value is 1, while the rows for SalesOrderID = 43667 fall at the halfway point of the result set, which is why their value is 0.5. Now let us enhance the same example by adding PARTITION BY to the OVER clause and see the results. Run the following query in SQL Server 2012. USE AdventureWorks GO SELECT SalesOrderID, OrderQty, ProductID, CUME_DIST() OVER(PARTITION BY SalesOrderID ORDER BY ProductID ) AS CDist FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID DESC, CDist DESC GO Now let us see the result of this query. We have changed the ORDER BY clause and are now partitioning by SalesOrderID. You can see that the CUME_DIST() function produces different results, and the value 1 now appears multiple times: because we are partitioning, CUME_DIST() is computed separately within each SalesOrderID group. CUME_DIST() was a long-awaited analytic function and I am glad to see it in SQL Server 2012. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
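
    For reference, the value can be reproduced by hand: CUME_DIST is the count of rows ordered at or before the current row (ties included) divided by the total row count of the partition. A hedged verification query that computes the same value next to the built-in function:

        -- Manual CUME_DIST: cumulative row count (peers included) / total rows
        SELECT SalesOrderID,
               OrderQty,
               CUME_DIST() OVER (ORDER BY SalesOrderID) AS CDist,
               1.0 * COUNT(*) OVER (ORDER BY SalesOrderID)  -- default RANGE frame counts peer rows
                   / COUNT(*) OVER ()                       AS ManualCDist
        FROM Sales.SalesOrderDetail
        WHERE SalesOrderID IN (43670, 43669, 43667, 43663)
        ORDER BY CDist DESC;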

    Read the article

  • SQL SERVER – Validating Spatial Object with IsValidDetailed Function

    - by pinaldave
    What do you prefer - an error, or a warning that an error may happen, together with the reason for it? While writing the previous sentence I was reminded of the movie "Minority Report". This blog post is not about that movie, but I will still sum up its concept in a single statement: "Let us predict the future and prevent the crime which is about to happen." (Please feel free to correct me if I am wrong about the movie's concept; I really do not want to hurt your feelings if you are a dedicated fan.) Let us switch to the SQL Server world. Spatial data types are an interesting concept, and I love writing about them because they allow me to be creative with shapes (just like toddlers). When working with spatial data types, everything is fine as long as the spatial object is valid. However, when the object has an issue or is created with invalid coordinates, SQL Server used to give only a simple error saying there was a problem with the object, without much further information, which made it very difficult to debug. If such a spatial object was used in a big procedure and that procedure errored out because of the invalid object, it was indeed very hard to debug. I always wished that more information were provided about what exactly is wrong with the spatial value. SQL Server 2012 introduces the new function IsValidDetailed(), and it has made my life very easy. In simple words, this function checks whether the spatial object passed to it is valid or not. If it is valid, it says so; if it is not valid, it returns the answer that it is not valid along with the reason, which makes it very easy to debug the issue and make the necessary correction. DECLARE @p GEOMETRY = 'Polygon((2 2, 6 6, 4 2, 2 2))' SELECT @p.IsValidDetailed() GO DECLARE @p GEOMETRY = 'Polygon((2 2, 3 3, 4 4, 5 5, 6 6, 2 2))' SELECT @p.IsValidDetailed() GO DECLARE @p GEOMETRY = 'Polygon((2 2, 4 4, 4 2, 2 3, 2 2))' SELECT @p.IsValidDetailed() GO DECLARE @p GEOMETRY = 'CIRCULARSTRING(2 2, 4 4, 0 0)' SELECT @p.IsValidDetailed() GO DECLARE @p GEOMETRY = 'CIRCULARSTRING(2 2, 4 4, 0 0)' SELECT @p.IsValidDetailed() GO DECLARE @p GEOMETRY = 'LINESTRING(2 2, 4 4, 0 0)' SELECT @p.IsValidDetailed() GO Here is the resultset of the above queries. You can see some valid objects and some invalid ones; where an object is invalid, the reason is given along with the error message. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Spatial Database, SQL Spatial

    Read the article

  • SQL SERVER – Fix: Error: 147 An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference

    - by pinaldave
    Everybody was a beginner once, and I always like to get involved with questions from beginners. There is a big difference between a question from a beginner and one from an advanced user. I have noticed that if an advanced user gets an error, they usually need just a small hint to resolve the problem. However, when a beginner gets an error, he sometimes sits on it for a long time, having no idea how to solve the problem or what the product is even capable of. I recently received a very novice-level question, and when I read it I quickly saw where the user was stuck. When I replied with the solution, he wrote a long email explaining how he had not been able to solve the problem, and thanked me multiple times. This whole exchange inspired me to write this quick blog post. I have modified the user's question to use AdventureWorks and simplified it down to the core content I want to discuss. Problem statement: find all the details of SalesOrderHeader for the latest ShipDate. He came up with the following T-SQL query: SELECT * FROM [Sales].[SalesOrderHeader] WHERE ShipDate = MAX(ShipDate) GO When he executed the above script it gave him the following error: Msg 147, Level 15, State 1, Line 3 An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference. He was not able to resolve this problem, even though the solution is suggested in the error message itself. Due to lack of experience he came up with another version of the query based on the error message: SELECT * FROM [Sales].[SalesOrderHeader] HAVING ShipDate = MAX(ShipDate) GO When he ran the above query it produced another error: Msg 8121, Level 16, State 1, Line 3 Column 'Sales.SalesOrderHeader.ShipDate' is invalid in the HAVING clause because it is not contained in either an aggregate function or the GROUP BY clause. What he actually wanted was all the SalesOrderHeader rows for sales shipped on the latest day. Based on the problem statement, the right solution is the following, which does not generate an error: SELECT * FROM [Sales].[SalesOrderHeader] WHERE ShipDate = (SELECT MAX(ShipDate) FROM [Sales].[SalesOrderHeader]) Well, that's it! Very simple. With SQL Server there are always multiple solutions to a single problem. Is there any other solution to the stated problem? Please share in the comments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
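
    Since the post asks for other solutions: one alternative that avoids the subquery is TOP ... WITH TIES, which keeps every row sharing the latest ShipDate.

        -- All orders shipped on the latest ShipDate, without a subquery
        SELECT TOP (1) WITH TIES *
        FROM [Sales].[SalesOrderHeader]
        ORDER BY ShipDate DESC;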

    Read the article

  • Custom DataSource Extender

    - by Brian
    I dream of creating a control which works something like this: <asp:SqlDataSource id="dsFoo" runat="server" ConnectionString="<%$ ConnectionStrings:conn %>" SelectCommandType="StoredProcedure" SelectCommand="cmd_foo"> </asp:SqlDataSource> <Custom:DataViewSource id="dvFoo" runat="server" rowfilter="colid &gt; 10" datasourceid="dsFoo"> </Custom:DataViewSource> I can accomplish the same thing in the code-behind by executing cmd_foo, loading the results into a DataTable, then loading them into a DataView with a RowFilter. The goal would be to have multiple DataViews for one DataSource, with whatever special filters I wish to apply to the select portion of the DataSource. I could imagine extending this to be more powerful. I tried peeking at this and this but am a bit confused on a few points. Currently, my main issue is being unsure where to grab the output data of the DataSource so I can stick it into a DataTable.
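
    On that last point: when the SqlDataSource runs in its default DataSourceMode="DataSet", calling Select() in the code-behind returns a DataView, which can be filtered without re-querying; a hedged sketch (the GridView name is made up):

        // Pull the SqlDataSource's result and apply an extra row filter (sketch).
        var view = dsFoo.Select(DataSourceSelectArguments.Empty) as DataView;
        if (view != null)
        {
            view.RowFilter = "colid > 10";  // the filter the custom DataViewSource would apply
            gridFoo.DataSource = view;      // e.g. bind a GridView named "gridFoo"
            gridFoo.DataBind();
        }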

    Read the article
