Search Results

Search found 40581 results on 1624 pages for 'mysql select db'.


  • SQL SERVER – Simple Example of Snapshot Isolation – Reduce the Blocking Transactions

    - by pinaldave
    To learn any technology and move to a more advanced level, it is very important to understand the fundamentals of the subject first. Today, we will be talking about something that was introduced a long time ago but is still not properly explored when it comes to isolation levels. Snapshot Isolation was introduced in SQL Server 2005. However, the reality is that there are still many software shops using SQL Server 2000, and therefore they cannot use Snapshot Isolation. Many software shops have upgraded to a later version of SQL Server, but their developers have not spent enough time bringing themselves up to date with the newer technology. “It works!” is a very common answer from many when they are asked about using the new technology instead of backward-compatibility commands. In a recent consulting project, I had the same experience: the developers had “heard about it” but had no idea about Snapshot Isolation. They were thinking it is the same as Snapshot Replication – which is plain wrong. Included here is the same demo I created for them.

    In Snapshot Isolation, the updated row versions for each transaction are maintained in TempDB. Once a transaction has begun, it ignores all the newer rows inserted or updated in the table. Let us examine this example, which shows a simple demonstration. This isolation level works on an optimistic concurrency model. Since a reading transaction does not block a writing transaction, and a writing transaction does not block a reading transaction, blocking is reduced.

    First, enable the database to work with Snapshot Isolation. Additionally, check the existing values in the table HumanResources.Shift.

    ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON
    GO
    SELECT ModifiedDate FROM HumanResources.Shift
    GO

    Now, we will need two different sessions to prove this example.

    First Session: Set the transaction isolation level to snapshot and begin the transaction. Update the column “ModifiedDate” to today’s date.

    -- Session 1
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT
    BEGIN TRAN
    UPDATE HumanResources.Shift
    SET ModifiedDate = GETDATE()
    GO

    Please note that we have not yet committed the transaction. Now, open the second session and run the following SELECT statement, then check the values of the table. Please pay attention to setting the isolation level of the second session to SNAPSHOT as well, and to starting its transaction with BEGIN TRAN.

    -- Session 2
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT
    BEGIN TRAN
    SELECT ModifiedDate FROM HumanResources.Shift
    GO

    You will notice that the values in the table are still the original values. They have not been modified yet. Once again, go back to session 1 and commit the transaction.

    -- Session 1
    COMMIT

    After that, go back to Session 2 and see the values of the table.

    -- Session 2
    SELECT ModifiedDate FROM HumanResources.Shift
    GO

    You will notice that the values are still not changed; they are the same old values that were there at the beginning of session 2's transaction. Now, let us commit the transaction in session 2. Once committed, run the same SELECT statement once more and see what the result is.

    -- Session 2
    COMMIT
    SELECT ModifiedDate FROM HumanResources.Shift
    GO

    You will notice that it now reflects the new updated value. I hope this example is clear enough to give you a good idea of how the Snapshot Isolation level works.
    There is much more to write about a related option, READ_COMMITTED_SNAPSHOT, which we will be discussing in another post soon. If you use this isolation level in your production database, I would appreciate your comments about its performance on your servers. I have included the complete script used in this example for your quick reference.

    ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON
    GO
    SELECT ModifiedDate FROM HumanResources.Shift
    GO
    -- Session 1
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT
    BEGIN TRAN
    UPDATE HumanResources.Shift
    SET ModifiedDate = GETDATE()
    GO
    -- Session 2
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT
    BEGIN TRAN
    SELECT ModifiedDate FROM HumanResources.Shift
    GO
    -- Session 1
    COMMIT
    -- Session 2
    SELECT ModifiedDate FROM HumanResources.Shift
    GO
    -- Session 2
    COMMIT
    SELECT ModifiedDate FROM HumanResources.Shift
    GO

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Transaction Isolation
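
    Since the post points ahead to READ_COMMITTED_SNAPSHOT, here is a minimal, hedged sketch (not part of the original demo) of how that separate database option is switched on and how both settings can be verified; enabling it typically requires that no other connections are active in the database.

    ALTER DATABASE AdventureWorks SET READ_COMMITTED_SNAPSHOT ON
    GO
    -- verify both snapshot-related settings for the database
    SELECT name, snapshot_isolation_state_desc, is_read_committed_snapshot_on
    FROM sys.databases
    WHERE name = 'AdventureWorks'
    GO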

    Read the article

  • How to retrieve the Identity (@@IDENTITY) of a record you just inserted into a table.

    - by Edward Boyle
    SELECT @@IDENTITY will retrieve the last generated identity value from the current connection.

    int thisid = (int)cmd.ExecuteScalar("SELECT @@IDENTITY", conn);

    If there is another write on another connection, you do not have to worry. Again, @@IDENTITY retrieves the last identity value generated on the current connection, and is null if no identity value was generated on this connection. Another method is to append ;SELECT @@IDENTITY to your SQL INSERT and use ExecuteScalar().

    What was:

    INSERT INTO STUFF(Field) VALUES(1)
    ...
    cmd.ExecuteNonQuery();

    Becomes:

    string cstring = "INSERT INTO STUFF(Field) VALUES(1);SELECT @@IDENTITY";
    int thisid = (int)cmd.ExecuteScalar(cstring, conn);

    In SQL Server Compact Edition you must send your commands one at a time; you cannot append ;SELECT @@IDENTITY to an insert.

    Read the article

  • Performing a clean database creation using msbuild

    - by Robert May
    So I’m taking a break from writing about other Agile stuff for a post. :)  I’m still going to get back to the other subjects, but this is fun too. Something I’ve done quite a bit of is MSBuild and CI work.  I’m experimenting with ways to improve what I’ve done in the past, particularly around database CI. Today, I developed a mechanism for starting from scratch with your database.  By scratch, I mean blowing away the existing database and creating it again from a single command line call.  I’m a firm believer that developers should be able to get to a known clean state at the database level with a single command and that they should be operating off of their own isolated database to improve productivity.  These scripts will help that. Here’s how I did it.  First, we have to disconnect users.  I did so using the help of a script from sql server central.  Note that I’m using sqlcmd variable replacement. -- kills all the users in a particular database -- dlhatheway/3M, 11-Jun-2000 declare @arg_dbname sysname declare @a_spid smallint declare @msg varchar(255) declare @a_dbid int set @arg_dbname = '$(DatabaseName)' select @a_dbid = sdb.dbid from master..sysdatabases sdb where sdb.name = @arg_dbname declare db_users insensitive cursor for select sp.spid from master..sysprocesses sp where sp.dbid = @a_dbid open db_users fetch next from db_users into @a_spid while @@fetch_status = 0 begin select @msg = 'kill '+convert(char(5),@a_spid) print @msg execute (@msg) fetch next from db_users into @a_spid end close db_users deallocate db_users GO Once all users are booted from the database, we can commence with recreating the database.  I generated the script that is used to create a database from SQL Server management studio, so I’m only going to show the bits that weren’t generated that are important.  There are a bunch of Alter Database statements that aren’t shown. First, I had to find the default location of the database files in the install, since they can be in many different locations.  I used Method 1 from a technet blog and then modified it a bit to do what I needed to do.  I ended up using dynamic SQL because for the life of me, I couldn’t get the “Filename” property to not return an error when I used anything besides a string.  I’m dropping the database first, if it exists.  Here’s the code:   IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = N'$(DatabaseName)') BEGIN drop database $(DatabaseName) END; go IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath') BEGIN DROP DATABASE zzTempDBForDefaultPath END; -- Create temp database. Because no options are given, the default data and --- log path locations are used CREATE DATABASE zzTempDBForDefaultPath; DECLARE @Default_Data_Path VARCHAR(512), @Default_Log_Path VARCHAR(512); --Get the default data path SELECT @Default_Data_Path = ( SELECT LEFT(physical_name,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1) FROM sys.master_files mf INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id] WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 0); --Get the default Log path SELECT @Default_Log_Path = ( SELECT LEFT(physical_name,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1) FROM sys.master_files mf INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id] WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 1); --Clean up. 
IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath') BEGIN DROP DATABASE zzTempDBForDefaultPath END; DECLARE @SQL nvarchar(max) SET @SQL= 'CREATE DATABASE $(DatabaseName) ON PRIMARY ( NAME = N''$(DatabaseName)'', FILENAME = N''' + @Default_Data_Path + N'$(DatabaseName)' + '.mdf' + ''', SIZE = 2048KB , FILEGROWTH = 1024KB ) LOG ON ( NAME = N''$(DatabaseName)Log'', FILENAME = N''' + @Default_Log_Path + N'$(DatabaseName)' + '.ldf' + ''', SIZE = 1024KB , FILEGROWTH = 10%) ' exec (@SQL) GO And with that, your database is created.  You can run these scripts on any server and on any database name.  To do that, I created an MSBuild script that looks like this: <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0"> <PropertyGroup> <DatabaseName>MyDatabase</DatabaseName> <Server>localhost</Server> <SqlCmd>sqlcmd -v DatabaseName=$(DatabaseName) -S $(Server) -i </SqlCmd> <ScriptDirectory>.\Scripts</ScriptDirectory> </PropertyGroup> <Target Name ="Rebuild"> <ItemGroup> <ScriptFiles Include="$(ScriptDirectory)\*.sql"/> </ItemGroup> <Exec Command="$(SqlCmd) &quot;%(ScriptFiles.Identity)&quot;" ContinueOnError="false"/> </Target> </Project> Note that the Scripts directory is underneath the directory where I’m running the msbuild command and is relative to that directory.  Note also that the target is using batching to run each script in the scripts subdirectory, one after the other.  Each script is passed to the sqlcmd command line execution using the .Identity property on the itemgroup that is created.  This target file is saved in the file “Database.target”. To make this work, you’ll need msbuild in your path, and then run the following command: msbuild database.target /target:Rebuild Once you’ve got your virgin database setup, you’d then need to use a tool like dbdeploy.net to determine that it was a virgin database, build a change script based on the change scripts, and then you’d want another sqlcmd call to update the database with the appropriate scripts.  I’m doing that next, so I’ll post a blog update when I’ve got it working. Technorati Tags: MSBuild,Agile,CI,Database
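
    As a side note (not from the original post): on SQL Server 2012 and later, the default data and log locations can be read directly from SERVERPROPERTY, which would avoid creating and dropping the zzTempDBForDefaultPath throwaway database. A rough sketch:

    SELECT CONVERT(varchar(512), SERVERPROPERTY('InstanceDefaultDataPath')) AS DefaultDataPath,
           CONVERT(varchar(512), SERVERPROPERTY('InstanceDefaultLogPath'))  AS DefaultLogPath;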

    Read the article

  • How to Force Graphics Options in PC Games with NVIDIA, AMD, or Intel Graphics

    - by Chris Hoffman
    PC games usually have built-in graphics options you can change. But you’re not limited to the options built into games — the graphics control panels bundled with graphics drivers allow you to tweak options from outside PC games. For example, these tools allow you to force-enabling antialiasing to make old games look better, even if they don’t normally support it. You can also reduce graphics quality to get more performance on slow hardware. If You Don’t See These Options If you don’t have the NVIDIA Control Panel, AMD Catalyst Control Center, or Intel Graphics and Media Control Panel installed, you may need to install the appropriate graphics driver package for your hardware from the hardware manufacturer’s website. The drivers provided via Windows Update don’t include additional software like the NVIDIA Control Panel or AMD Catalyst Control Center. Drivers provided via Windows Update are also more out of date. If you’re playing PC games, you’ll want to have the latest graphics drivers installed on your system. NVIDIA Control Panel The NVIDIA Control Panel allows you to change these options if your computer has NVIDIA graphics hardware. To launch it, right-click your desktop background and select NVIDIA Control Panel. You can also find this tool by performing a Start menu (or Start screen) search for NVIDIA Control Panel or by right-clicking the NVIDIA icon in your system tray and selecting Open NVIDIA Control Panel. To quickly set a system-wide preference, you could use the Adjust image settings with preview option. For example, if you have old hardware that struggles to play the games you want to play, you may want to select “Use my preference emphasizing” and move the slider all the way to “Performance.” This trades graphics quality for an increased frame rate. By default, the “Use the advanced 3D image settings” option is selected. You can select Manage 3D settings and change advanced settings for all programs on your computer or just for specific games. NVIDIA keeps a database of the optimal settings for various games, but you’re free to tweak individual settings here. Just mouse-over an option for an explanation of what it does. If you have a laptop with NVIDIA Optimus technology — that is, both NVIDIA and Intel graphics — this is the same place you can choose which applications will use the NVIDIA hardware and which will use the Intel hardware. AMD Catalyst Control Center AMD’s Catalyst Control Center allows you to change these options on AMD graphics hardware. To open it, right-click your desktop background and select Catalyst Control Center. You can also right-click the Catalyst icon in your system tray and select Catalyst Control Center or perform a Start menu (or Start screen) search for Catalyst Control Center. Click the Gaming category at the left side of the Catalyst Control Center window and select 3D Application Settings to access the graphics settings you can change. The System Settings tab allows you to configure these options globally, for all games. Mouse over any option to see an explanation of what it does. You can also set per-application 3D settings and tweak your settings on a per-game basis. Click the Add option and browse to a game’s .exe file to change its options. Intel Graphics and Media Control Panel Intel integrated graphics is nowhere near as powerful as dedicated graphics hardware from NVIDIA and AMD, but it’s improving and comes included with most computers. 
Intel doesn’t provide anywhere near as many options in its graphics control panel, but you can still tweak some common settings. To open the Intel graphics control panel, locate the Intel graphics icon in your system tray, right-click it, and select Graphics Properties. You can also right-click the desktop and select Graphics Properties. Select either Basic Mode or Advanced Mode. When the Intel Graphics and Media Control Panel appears, select the 3D option. You’ll be able to set your Performance or Quality setting by moving the slider around or click the Custom Settings check box and customize your Anisotropic Filtering and Vertical Sync preference. Different Intel graphics hardware may have different options here. We also wouldn’t be surprised to see more advanced options appear in the future if Intel is serious about competing in the PC graphics market, as they say they are. These options are primarily useful to PC gamers, so don’t worry about them — or bother downloading updated graphics drivers — if you’re not a PC gamer and don’t use any intensive 3D applications on your computer. Image Credit: Dave Dugdale on Flickr     

    Read the article

  • Unable to mount smb share. "Please select another viewer and try again". Please help. Serious smb/nautilus foo needed

    - by oznah
    I don't think this is the typical "I can't mount a Windows share" post. I am using stock Ubuntu 12.04. I am pretty sure this is a Nautilus issue, but I have reached a dead end. I have one share that I can't mount using smb://server/share via Nautilus. I get the following error:

    Error: Failed to mount Windows share
    Please select another viewer and try again

    I can mount this share from other machines (non-Ubuntu) using the same credentials, so I know I have perms on the destination share. I can mount other shares on other servers from my Ubuntu box, so I am pretty sure I have all the smb packages I need on my Ubuntu box. To make things more interesting, if I use smbclient from the command line, I mount this share with no problems from my Ubuntu box. So here's what we know:

    destination share perms are ok (no problem accessing from other machines)
    smb is set up correctly on the Ubuntu box (access other windows shares no problem)
    I only get the error when using Nautilus
    smbclient in terminal works, no problem

    Any help would be greatly appreciated. Googling turned up simple mount/perms issues, and I don't think that is what is going on here. Let me know if you need more information. Hugh

    Read the article

  • Oracle Sequences

    - by jkrebsbach
    Reminder to myself - SQL Server has nice identity columns directly tied to their tables. Oracle has sequences that are islands unto themselves.

    select seq_name.currval from dual;
    select seq_name.nextval from dual;

    currval - return the current number at the top of the sequence
    nextval - increment the sequence by 1, return the new number

    Therefore, to create functionality in Oracle similar to an identity column:

    OPTION A) - Create an insert trigger:

    CREATE OR REPLACE TRIGGER dept_bir
    BEFORE INSERT ON departments
    FOR EACH ROW
    WHEN (new.id IS NULL)
    BEGIN
      SELECT dept_seq.NEXTVAL INTO :new.id FROM dual;
    END;

    This will handle creating a unique identity, but will not necessarily inform the process flow of the identity without additional logic.

    OPTION B) - Select the identity into a temp variable, insert the whole item into the table.

    **** When attempting to query currval, the error below was being thrown:

    SELECT seq_name.currval from dual;
    ERROR : TABLE OR VIEW DOES NOT EXIST

    *** Although Oracle sys tables may have access to the sequences, that isn't to say the Oracle user has access to those sequences - verify permissions when the system can't see objects that are being reported in the object explorer.
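
    For the permissions point above, a hedged sketch (the schema and user names are placeholders, not from the original note): the "table or view does not exist" error on currval/nextval is commonly just a missing grant on a sequence owned by another schema.

    -- assumes the sequence already exists in owner_schema
    GRANT SELECT ON owner_schema.dept_seq TO app_user;

    -- app_user then references it with the owner prefix (or via a synonym)
    SELECT owner_schema.dept_seq.NEXTVAL FROM dual;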

    Read the article

  • How do I rotate my monitor using xorg?

    - by user1106405
    I have just installed KUbuntu 12.10, and I am attempting to rotate my monitor 90 deg to the left. When I add the option to rotate, the monitor seems to ignore the directive. I'm currently using dual 02:00.0 VGA compatible controller: NVIDIA Corporation GF104 [GeForce GTX 460] (rev a1) 03:00.0 VGA compatible controller: NVIDIA Corporation GF104 [GeForce GTX 460] (rev a1) and NVidia driver version 310 My xorg.conf is as follows: # nvidia-settings: X configuration file generated by nvidia-settings # nvidia-settings: version 304.51 (buildd@komainu) Fri Oct 12 12:53:49 UTC 2012 # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 310.14 ([email protected]) Tue Oct 9 13:04:01 PDT 2012 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" 1280 0 Screen 1 "Screen1" RightOf "Screen0" Screen 2 "Screen2" 0 0 InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "1" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Samsung SyncMaster" HorizSync 30.0 - 81.0 VertRefresh 56.0 - 60.0 Option "DPMS" EndSection Section "Monitor" Identifier "Monitor1" VendorName "Unknown" ModelName "DELL 1908WFP" HorizSync 30.0 - 83.0 VertRefresh 56.0 - 75.0 EndSection Section "Monitor" Identifier "Monitor2" VendorName "Unknown" ModelName "DELL 1907FP" HorizSync 30.0 - 81.0 VertRefresh 56.0 - 76.0 EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 460" BusID "PCI:2:0:0" Screen 0 EndSection Section "Device" Identifier "Device1" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 460" BusID "PCI:2:0:0" Screen 1 EndSection Section "Device" Identifier "Device2" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "GeForce GTX 460" BusID "PCI:3:0:0" EndSection Section "Screen" # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+0; DFP-0: nvidia-auto-select +0+0; DFP-0: 1920x1200 +0+0; DFP-0: 1920x1200_60 +0+0; DFP-0: 1600x1200 +0+0; DFP-0: 1600x1200_60 +0+0; DFP-0: 1280x1024 +0+0; DFP-0: 1280x1024_60 +0+0; DFP-0: 1280x960 +0+0; DFP-0: 1280x960_60 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 800x600_56 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_60 +0+0; DFP-0: nvidia-auto-select @1920x1080 +0+0; DFP-0: nvidia-auto-select @1920x720 +0+0" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "DFP-0: nvidia-auto-select +0+0; DFP-0: nvidia-auto-select +0+0; DFP-0: 1920x1200 +0+0; DFP-0: 1920x1200_60 +0+0; DFP-0: 1600x1200 +0+0; DFP-0: 1600x1200_60 +0+0; DFP-0: 1280x1024 +0+0; DFP-0: 1280x1024_60 +0+0; DFP-0: 1280x960 +0+0; DFP-0: 1280x960_60 +0+0; DFP-0: 1024x768 +0+0; DFP-0: 1024x768_60 +0+0; DFP-0: 800x600 +0+0; DFP-0: 800x600_60 +0+0; DFP-0: 800x600_56 +0+0; DFP-0: 640x480 +0+0; DFP-0: 640x480_60 +0+0; DFP-0: nvidia-auto-select +0+0; DFP-0: nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen1" Device "Device1" Monitor "Monitor1" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "DFP-2: nvidia-auto-select +0+0" 
Option "Rotate" "left" SubSection "Display" Depth 24 EndSubSection EndSection Section "Screen" Identifier "Screen2" Device "Device2" Monitor "Monitor2" DefaultDepth 24 Option "Stereo" "0" Option "metamodes" "nvidia-auto-select +0+0" SubSection "Display" Depth 24 EndSubSection EndSection Section "Extensions" Option "Composite" "Enable" EndSection Edit: If I delete the xorg.conf and reboot, I am able to rotate my monitor, however, my third monitor is not recognized: Screen 0: minimum 8 x 8, current 3360 x 1200, maximum 16384 x 16384 DVI-I-0 disconnected (normal left inverted right x axis y axis) DVI-I-1 disconnected (normal left inverted right x axis y axis) DVI-I-2 connected 1920x1200+0+0 (normal left inverted right x axis y axis) 518mm x 324mm 1920x1200 60.0*+ 1600x1200 60.0 1280x1024 60.0 1280x960 60.0 1024x768 60.0 800x600 60.3 56.2 640x480 59.9 HDMI-0 disconnected (normal left inverted right x axis y axis) DVI-I-3 connected 1440x900+1920+0 (normal left inverted right x axis y axis) 408mm x 255mm 1440x900 59.9*+ 75.0 1280x1024 75.0 60.0 1280x800 59.8 1152x864 75.0 1024x768 75.0 70.1 60.0 800x600 75.0 72.2 60.3 56.2 640x480 75.0 72.8 59.9

    Read the article

  • SQL Concatenate

    - by Bunch
    Concatenating output from a SELECT statement is a pretty basic thing to do in SQL. The main ways to perform this are to use the CONCAT() function, the || operator, or the + operator. It really all depends on which version of SQL you are using. The following examples use T-SQL (MS SQL Server 2005), so they use the + operator, but other SQL dialects have similar syntax.

    If you wanted to join two fields together for a full name:

    SELECT (lname + ', ' + fname) AS Name FROM tblCustomers

    To add some static text to a value:

    SELECT (lname + ' - SS') AS Name FROM tblPlayers WHERE PlayerPosition = 6

    Or to select some text and an integer together:

    SELECT (lname + cast(playerNumber as varchar)) AS Name FROM tblPlayers

    Technorati Tags: SQL
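
    As a hedged aside (not in the original post): on SQL Server 2012 and later the CONCAT() function mentioned above is available in T-SQL as well; it implicitly converts numeric arguments and treats NULLs as empty strings, so the explicit cast is not needed.

    SELECT CONCAT(lname, ', ', fname) AS Name FROM tblCustomers;
    SELECT CONCAT(lname, ' #', playerNumber) AS Name FROM tblPlayers;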

    Read the article

  • asp.net mvc & jquery dialog: What approach do I take to add items to a dropdownlist/select list with

    - by Mark Redman
    Hi, I am new to MVC and have a grasp of the basic model, but still doing everything with postbacks etc. One aspect of the UI I want to build is to have a drop-down-list of items with a button to add an item to the database and refresh the list. Achieving this with WebForms was straight forward as everything was wrapped in UpdatePanels, but what is the best approach to achieve this using MVC? Part of the markup for the list and button look like this: <table> <tr> <td><%=Html.DropDownList("JobTitleSelectList", Model.JobTitleSelectList, "(select job title)", new { @class = "data-entry-field" })%></td> <td>&nbsp;</td> <td><a id="AddJobTitleDialogLink" href="javascript: addJobTitleDialog();" title="Add Job Title"><img id="AddJobTitleButtonImage" src="/Content/Images/Icons/16x16/PlusGrey.png" border="0" /></a></td> </tr> </table> The Dialog is a standard jquery Ui dialog, looks like this: <div id="SingleTextEntryDialog" style="display:none"> <table> <tr> <td>Name:</td> <td><input id="SingleTextEntryDialogText" type="text" size="25" /></td> </tr> </table> </div> I am guessing I need to put this into a UserControl / PartialView (are they the same thing?) Also with the strongly typed View how do I pass the Model or just the SelectList Property to the UserControl or is this not the case? Nor sure if there should be form in the dialog div? or how that is going to postback via ajax. Some examples show a lot of ajax code in the page something like: $.ajax({...}); I assume doing this using jquery is more code than asp.net webforms, but there is just more code to see doing a "View Source" on a page? Your comments appreciated.

    Read the article

  • How to select number of lines from large text files?

    - by MiNdFrEaK
    I was wondering how to select number of lines from a certain text file. As an example: I have a text file containing the following lines: branch 27 : rect id 23400 rect: -115.475609 -115.474907 31.393650 31.411301 branch 28 : rect id 23398 rect: -115.474907 -115.472282 31.411301 31.417351 branch 29 : rect id 23396 rect: -115.472282 -115.468033 31.417351 31.427151 branch 30 : rect id 23394 rect: -115.468033 -115.458733 31.427151 31.438181 Non-Leaf Node: level=1 count=31 address=53 branch 0 : rect id 42 rect: -115.768539 -106.251556 31.425039 31.717550 branch 1 : rect id 50 rect: -109.559479 -106.009361 31.296721 31.775299 branch 2 : rect id 51 rect: -110.937401 -106.226143 31.285870 31.771971 branch 3 : rect id 54 rect: -109.584412 -106.069092 31.285240 31.775230 branch 4 : rect id 56 rect: -109.570961 -106.000954 31.296721 31.780769 branch 5 : rect id 58 rect: -115.806213 -106.366188 31.400450 31.687519 branch 6 : rect id 59 rect: -113.173859 -106.244057 31.297440 31.627750 branch 7 : rect id 60 rect: -115.811478 -106.278252 31.400450 31.679470 branch 8 : rect id 61 rect: -109.953888 -106.020111 31.325319 31.775270 branch 9 : rect id 64 rect: -113.070969 -106.015968 31.331841 31.704750 branch 10 : rect id 68 rect: -113.065689 -107.034576 31.326300 31.770809 branch 11 : rect id 71 rect: -112.333344 -106.059860 31.284081 31.662920 branch 12 : rect id 73 rect: -115.071083 -106.309677 31.267879 31.466850 branch 13 : rect id 74 rect: -116.094414 -106.286308 31.236290 31.424770 branch 14 : rect id 75 rect: -115.423264 -106.286308 31.229691 31.415510 branch 15 : rect id 76 rect: -116.111656 -106.313110 31.259390 31.478300 branch 16 : rect id 77 rect: -116.247467 -106.309677 31.240231 31.451799 branch 17 : rect id 78 rect: -116.170792 -106.094543 31.156429 31.391781 branch 18 : rect id 79 rect: -116.225723 -106.292709 31.239960 31.442850 branch 19 : rect id 80 rect: -116.268013 -105.769913 31.157240 31.378111 branch 20 : rect id 82 rect: -116.215424 -105.827202 31.198441 31.383421 branch 21 : rect id 83 rect: -116.095734 -105.826439 31.197460 31.373819 branch 22 : rect id 84 rect: -115.423264 -105.815018 31.182640 31.368891 branch 23 : rect id 85 rect: -116.221527 -105.776512 31.160931 31.389830 branch 24 : rect id 86 rect: -116.203369 -106.473831 31.168350 31.367611 branch 25 : rect id 87 rect: -115.727631 -106.501587 31.189100 31.395941 branch 26 : rect id 88 rect: -116.237289 -105.790756 31.164780 31.358959 branch 27 : rect id 89 rect: -115.791344 -105.990044 31.072620 31.349529 branch 28 : rect id 90 rect: -115.736847 -106.495079 31.187969 31.376900 branch 29 : rect id 91 rect: -115.721710 -106.000130 31.160351 31.354601 branch 30 : rect id 92 rect: -115.792236 -106.000793 31.166620 31.378811 Leaf Node: level=0 count=21 address=42 branch 0 : rect id 18312 rect: -106.412270 -106.401367 31.704750 31.717550 branch 1 : rect id 18288 rect: -106.278252 -106.253387 31.520321 31.548361 I just want those lines which are in between Non-Leaf Node level=1 to Leaf Node Level=0 and also there are a lot of segments like this and I need them all.

    Read the article

  • rpm rollback ignoring rpms - no error output

    - by John H
    Issue rpm rollback is not working with a set of repackaged rpms created in the last couple days, but does work with more recent ones. [root@host1 repackage]# ls -l zsh-4.2.6-* -rw-r--r-- 1 root root 1788283 Apr 10 2011 zsh-4.2.6-3.el5.i386.rpm -rw-r--r-- 1 root root 1788691 Aug 18 04:38 zsh-4.2.6-5.el5.i386.rpm [root@host1 repackage]# rpm -q zsh zsh-4.2.6-6.el5 [root@host1 repackage]# rpm --test -Uvh --rollback 'Aug 18 01:00' [root@host1 repackage]# rpm -e zsh [root@host1 repackage]# [root@host1 repackage]# ls -l zsh* -rw-r--r-- 1 root root 1788283 Apr 10 2011 zsh-4.2.6-3.el5.i386.rpm -rw-r--r-- 1 root root 1788691 Aug 18 04:38 zsh-4.2.6-5.el5.i386.rpm -rw-r--r-- 1 root root 1789064 Aug 20 09:06 zsh-4.2.6-6.el5.i386.rpm [root@host1 repackage]# cp zsh-4.2.6-6.el5.i386.rpm /tmp [root@host1 repackage]# rpm --test -Uvh --rollback 'Aug 18 01:00' Rollback packages (+1/-0) to Mon Aug 20 09:02:16 2012 (0x50323558): Preparing... ########################################### [100%] Cleaning up repackaged packages: Removing /var/spool/repackage/zsh-4.2.6-6.el5.i386.rpm: [root@host1 repackage]# ls -l zsh-4.2.6-* -rw-r--r-- 1 root root 1788283 Apr 10 2011 zsh-4.2.6-3.el5.i386.rpm -rw-r--r-- 1 root root 1788691 Aug 18 04:38 zsh-4.2.6-5.el5.i386.rpm [root@host1 repackage]# cp /tmp/zsh-4.2.6-6.el5.i386.rpm . [root@host1 repackage]# rpm -Uvh --rollback 'Aug 18 01:00' Rollback packages (+1/-0) to Mon Aug 20 09:06:05 2012 (0x5032363d): Preparing... ########################################### [100%] 1:zsh ########################################### [ 50%] Cleaning up repackaged packages: Removing /var/spool/repackage/zsh-4.2.6-6.el5.i386.rpm: [root@host1 repackage]# rpm --test -Uvh --rollback 'April 9' [root@host1 repackage]# Now, if I run my test commands with -Uvvh I get debug messages to stderror which shows me that rpm reads each of the rpm files in /var/spool/repackage. The only interesting bit is the "expected size" but after searching, the expected size should be different, as it records the files as they are on the filesystem. D: opening db environment /var/lib/rpm/Packages joinenv D: opening db index /var/lib/rpm/Packages rdonly mode=0x0 D: locked db index /var/lib/rpm/Packages D: opening db index /var/lib/rpm/Installtid rdonly mode=0x0 D: opening db index /var/lib/rpm/Pubkeys rdonly mode=0x0 D: read h# 769 Header sanity check: OK D: ========== DSA pubkey id 53268101 37017186 (h#769) D: read h# 32 Header V3 DSA signature: OK, key ID 37017186 D: read h# 40 Header V3 DSA signature: OK, key ID 37017186 ... D: read h# 1753 Header V3 DSA signature: OK, key ID 37017186 D: Expected size: 3628918 = lead(96)+sigs(344)+pad(0)+data(3628478) D: Actual size: 3583695 D: /var/spool/repackage/Deployment_Guide-en-US-5.2-11.noarch.rpm: Header V3 DSA signature: OK, key ID 37017186 D: Expected size: 1100789 = lead(96)+sigs(344)+pad(0)+data(1100349) D: Actual size: 1109281 D: /var/spool/repackage/NetworkManager-0.7.0-10.el5_5.2.i386.rpm: Header V3 DSA signature: OK, key ID 37017186 D: Expected size: 1098167 = lead(96)+sigs(344)+pad(0)+data(1097727) D: Actual size: 1106179 D: /var/spool/repackage/NetworkManager-0.7.0-9.el5.i386.rpm: Header V3 DSA signature: OK, key ID 37017186 D: Expected size: 84351 = lead(96)+sigs(344)+pad(0)+data(83911) D: Actual size: 85378 ... 
D: Expected size: 1788276 = lead(96)+sigs(344)+pad(0)+data(1787836) D: Actual size: 1788691 D: /var/spool/repackage/zsh-4.2.6-5.el5.i386.rpm: Header V3 DSA signature: OK, key ID 37017186 D: --- erase h#1758 D: closed db index /var/lib/rpm/Pubkeys D: closed db index /var/lib/rpm/Installtid D: closed db index /var/lib/rpm/Packages D: closed db environment /var/lib/rpm/Packages D: May free Score board((nil)) I am able to copy these rpms out of the repackage directory and if I run them through cpio, extract the files. I also tried backing up and rebuilding the rpm database - no change. System Information: RHEL 5.8 rpm 4.4.2.3 /etc/yum.conf tsflags=repackage /etc/rpm/macros %_repackage_all_erasures 1

    Read the article

  • Magento hosting on a budget

    - by spa
    I have to do a setup for Magento. My constraint is primarily ease of setup and fault tolerance/fail over. Furthermore costs are an issue. I have three identical physical servers to get the job done. Each server node has an i7 quad core, 16GB RAM, and 2x3TB HD in a software RAID 1 configuration. Each node runs Ubuntu 12.04. right now. I have an additional IP address which can be routed to any of these nodes. The Magento shop has max. 1000 products, 50% of it are bundle products. I would estimate that max. 100 users are active at once. This leads me to the conclusion, that performance is not top priority here. My first setup idea One node (lb) runs nginx as a load balancer. The additional IP is used with domain name and routed to this node by default. Nginx distributes the load equally to the other two nodes (shop1, shop2). Shop1 and shop2 are configured equally: each server runs Apache2 and MySQL. The Mysqls are configured with master/slave replication. My failover strategy: Lb fails = Route IP to shop1 (MySQL master), continue. Shop1 fails = Lb will handle that automatically, promote MySQL slave on shop2 to master, reconfigure Magento to use shop2 for writes, continue. Shop2 fails = Lb will handle that automatically, continue. Is this a sane strategy? Has anyone done a similar setup with Magento? My second setup idea Another way to do it would be to use drbd for storing the MySQL data files on shop1 and shop2. I understand that in this scenario only one node/MySQL instance can be active and the other is used as hot standby. So in case shop1 fails, I would start up MySQL on shop2, route the IP to shop2, and continue. I like that as the MySQL setup is easier and the nodes can be configured 99% identical. So in this case the load balancer becomes useless and I have a spare server. My third setup idea The third way might be master-master replication of MySQL databases. However, in my optinion this might be tricky, as Magento isn't build for this scenario (e.g. conflicting ids for new rows). I would not do that until I have heard of a working example. Could you give me an advice which route to follow? There seems not one "good" way to do it. E.g. I read blog posts which describe a MySQL master/slave setup for Magento, but elsewhere I read, that data might get duplicated when the slave lags behind the master (e.g. when an order is placed, a customer might get created twice). I'm kind of lost here.

    Read the article

  • How to recover data files from xampp-windows to xampp-linux after crash?

    - by David Buehler
    My Windows box died after I developed a database in xampp on it; fortunately I have a backup of the entire F:/TestWeb/Xampp partition. Unfortunately, I did not do an Export (nor dump) of the "Lws2" database before the crash. I have replaced the defunct machine with one running Mint7 (based on Ubuntu 9.04 "Jaunty Jackalope") and installed xampp-linux into the /opt partition, so the new xampp now runs fine in /opt/lampp, and says all the elements are secured by passwords (which I just assigned during this installation.) I assumed that Xamp-Windows installed in November would migrate easily to xampp-linux installed iin February -- a bad assumption. It apparently would have been simple if I had known enough to do an Export or a Dump before the crash, but.... The backup was done to a Network Attached Storage drive, which is formatted as "vfat" so the backup does not carry with it any valid ownership permissions from MySql on NTFS. I now see from my backup that the old data resided in \TestWeb\Xampp\Mysql\Data\Lws2\ and consists of 7 ".frm" files which define my tables. The actual data -- I suppose a ".sql" file or files -- has disappeared, and I am resigning myself to two days of retyping it. But I do not wish to do the table layouts all over again. So I copied Data tree to /opt/lampp/Data -- PhpMyAdmin does not see it. So I copied Lws2 tree to /opt/lampp/Lws2 -- PhpMyAdmin does not see it. So I copied Data tree to /opt/lampp/var/mysql/Data -- PhpMyAdmin does not see it. So I copied Lws2 tree to /opt/lampp/var/mysql/Lws2 -- PhpMyAdmin does not see it. So I adjusted all the permissions to stop saying owner "nobody" to owner "root" and gave full permissions to all groups and to all others, with permissions percolating down, in all 4 trees. You guessed it -- PhpMyAdmin does not see any database named Lws2, only its 4 default ones. I double-checked the permissions and rebooted Linux and repeated the tests. At some point in that process I did see PhpMyAdmin showing "lws2(7)" but when I clicked on it I saw a "no table found" message. I have not been able to recreate that experience. Apparently there are some setup files for MySql and for PhpMyAdmin which need to be set up by running a wizard or two or by editing the files directly. I grepped the TestWeb tree and found an old "ldir = "C:TestWeb\Xampp\MySql\" and a "DataDir = C:TestWeb\Xampp\MySql\" in a .php file and in a .bat file, but I cannot find the corresponding config file names on the /opt partition/ -- so it looks as if these wizards have not been run to create them. What config files files does Linux use to setup MySql config files for PhpMyAdmin? What wizards do I need to run to point the MySql engine and the PhpMyAdmin at the folder /opt/lampp/data/ with its lws2 folder inside it? Or which files do I need to edit, with a sample of what it normally says under Linux? Incidentally, I remember I converted from MyISAM with its .MYD and .MYI files to InnoDB after entering only a small amount of the data -- and I do not know what file types to look for -- perhaps my data is still there but under another guise or in another place? Is it something as simple as linux needing to see "/data/" instead of /Data? I will check that out while waiting for a response. If anyone can point me to documentation that discusses this level of detail -- I will read it avidly! In any case, thanks for any clarification you can give on this thorny problem. wizdum

    Read the article

  • Setting up home DNS with Ubuntu Server

    - by Zeophlite
    I have a webserver (with static IP 192.168.1.5), and I want to have my machines on my local network to be able to access it without modifying /etc/hosts (or equivalent for Windows/OSX). My router has Primary DNS server 192.168.1.5 Secondary DNS server 8.8.8.8 (Google's public DNS). Nginx is set up to server websites externally as *.example.com Internally, I want *.example.local to point to the server. My webserver has BIND9 installed, but I'm unsure of the settings. I've been through various contradicting tutorials, and so most of my settings have been clobbered. I've stripped out the lines which I'm confused about. The tutorials I looked at are http://tech.surveypoint.com/blog/installing-a-local-dns-server-behind-a-hardware-router/ and http://ubuntuforums.org/showthread.php?t=236093 . They mostly differ on what should be put in /etc/bind/zones/db.example.local and /etc/bind/zones/db.192, so I've left the conflicting lines out below. Can someone suggest what the correct lines are to give my above behaviour (namely *.example.local pointing to 192.168.1.5)? /etc/network/interfaces auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 192.168.1.5 netmask 255.255.255.0 broadcast 192.168.1.255 gateway 192.168.1.254 /etc/hostname avalon /etc/resolv.conf # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN /etc/bind/named.conf.options options { directory "/var/cache/bind"; forwarders { 8.8.8.8; 8.8.4.4; }; dnssec-validation auto; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; /etc/bind/named.conf.local zone "example.local" { type master; file "/etc/bind/zones/db.example.local"; }; zone "1.168.192.in-addr.arpa" { type master; file "/etc/bind/zones/db.192"; }; /etc/bind/zones/db.example.local $TTL 604800 @ IN SOA avalon.example.local. webadmin.example.local. ( 5 ; Serial, increment each edit 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL /etc/bind/zones/db.192 $TTL 604800 @ IN SOA avalon.example.local. webadmin.example.local. ( 4 ; Serial, increment each edit 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; What do I need to add to the above files so that on a laptop on the internal network, I can type in webapp.example.local, and be served by my webserver? EDIT I made several changes to the above files on the webserver. /etc/network/interfaces (end of file) dns-nameservers 127.0.0.1 dns-search example.local /etc/bind/zones/db.example.local (end of file) @ IN NS avalon.example.local. @ IN A 192.168.1.5 avalon IN A 192.168.1.5 webapp IN A 192.168.1.5 www IN CNAME 192.168.1.5 /etc/bind/zones/db.192 (end of file) IN NS avalon.example.local. 73 IN PTR avalon.example.local. As a side note, my spare Win7 machine was able to connect directly to webapp.example.local, but for a Ubuntu 13.10 machine, I had to make the following changes as well (not on the webserver, but on a separate machine): /etc/nsswitch.conf before hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 after hosts: files dns /etc/NetworkManager/NetworkManager.conf before dns=dnsmasq after #dns=dnsmasq The issue remains that its not wildcard DNS, and so I have to add entries to /etc/bind/zones/db.example.local for webapp1, webapp2, ...

    Read the article

  • T-sql Common expression query as subquery

    - by ase69s
    I have the following query:

    WITH Orders(Id) AS
    (
      SELECT DISTINCT anfrageid FROM MPHotlineAnfrageAnhang
    )
    SELECT Id,
      ( SELECT CONVERT(VARCHAR(255), anfragetext) + ' | '
        FROM MPHotlineAnfrageAnhang
        WHERE anfrageid = Id
        ORDER BY anfrageid, erstelltam
        FOR XML PATH('')
      ) AS Descriptions
    FROM Orders

    It concatenates varchar values from different rows grouped by an id. But now I want to include it as a subquery, and it gives some errors I can't solve. Simplified example of use:

    select descriptions from (
      WITH Orders(Id) AS
      (
        SELECT DISTINCT anfrageid FROM MPHotlineAnfrageAnhang
      )
      SELECT Id,
        ( SELECT CONVERT(VARCHAR(255), anfragetext) + ' | '
          FROM MPHotlineAnfrageAnhang
          WHERE anfrageid = Id
          ORDER BY anfrageid, erstelltam
          FOR XML PATH('')
        ) AS Descriptions
      FROM Orders
    ) as tx
    where id = 100012

    Errors (approximate translation from Spanish):

    - Incorrect syntax near 'WITH'.
    - Incorrect syntax near 'WITH'. If the statement is a common table expression or an xmlnamespaces clause, the previous statement must end with a semicolon.
    - Incorrect syntax near ')'.

    What am I doing wrong?
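
    A likely fix (a hedged sketch, not an answer from the original thread): in SQL Server a WITH clause must start its statement, so a CTE cannot sit inside a derived table; move it to the top and filter there instead.

    WITH Orders(Id) AS
    (
      SELECT DISTINCT anfrageid FROM MPHotlineAnfrageAnhang
    )
    SELECT Id,
      ( SELECT CONVERT(VARCHAR(255), anfragetext) + ' | '
        FROM MPHotlineAnfrageAnhang
        WHERE anfrageid = Id
        ORDER BY anfrageid, erstelltam
        FOR XML PATH('')
      ) AS Descriptions
    FROM Orders
    WHERE Id = 100012;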

    Read the article

  • tsql - using internal stored procedure as parameter is where clause

    - by vondip
    Hi all, I'm trying to build a stored procedure that makes use of another stored procedure, taking its result and using it as part of its WHERE clause, but for some reason I receive an error: Invalid object name 'dbo.GetSuitableCategories'. Here is a copy of the code:

    select distinct top 6 * from
    (
      SELECT TOP 100 * FROM [dbo].[products] products
      where products.categoryId in
        (select top 10 categories.categoryid
         from [dbo].[GetSuitableCategories]
         ( -- @Age
           -- ,@Sex
           -- ,@Event
           1, 1, 1
         ) categories
         ORDER BY NEWID()
        )
      --and products.Price <=@priceRange
      ORDER BY NEWID()
    ) as d
    union
    select * from
    (
      select TOP 1 * FROM [dbo].[products] competingproducts
      where competingproducts.categoryId = -2
      --and competingproducts.Price <=@priceRange
      ORDER BY NEWID()
    ) as d

    and here is [dbo].[GetSuitableCategories]:

    if (@gender = 0)
    begin
      select * from categoryTable categories where categories.gender = 3
    end
    else
    begin
      select * from categoryTable categories
      where categories.gender = @gender or categories.gender = 3
    end

    Thank you very much!~
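
    One possible explanation (hedged, not from the original thread): a stored procedure cannot be referenced in a FROM clause, which is exactly what produces "Invalid object name" there. Rewriting GetSuitableCategories as an inline table-valued function keeps the call site unchanged; a rough sketch, with parameter names assumed from the commented call:

    CREATE FUNCTION dbo.GetSuitableCategories (@Age int, @Gender int, @Event int)
    RETURNS TABLE
    AS RETURN
    (
      SELECT categories.*
      FROM categoryTable categories
      WHERE categories.gender = 3
         OR (@Gender <> 0 AND categories.gender = @Gender)
    );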

    Read the article

  • How to add additional rows to result set by condition

    - by Puzzled
    I have a table like this:

    ObjId  Date         Value
    100    '20100401'   12
    200    '20100501'   45
    200    '20100401'   37
    300    '20100501'   75
    300    '20100401'   69
    400    '20100401'   87

    I have to add additional rows to the result set for objIds where there is no data at '20100501':

    **100    '20100501'   null**
    100    '20100401'   12
    200    '20100501'   45
    200    '20100401'   37
    300    '20100501'   75
    300    '20100401'   69
    **400    '20100501'   null**
    400    '20100401'   87

    What is the best way to do this? Here is the T-SQL script for the initial table:

    declare @datesTable table (objId int, date smalldatetime, value int)
    insert @datesTable
    select 100, '20100401', 12 union all
    select 200, '20100501', 45 union all
    select 200, '20100401', 37 union all
    select 300, '20100501', 75 union all
    select 300, '20100401', 69 union all
    select 400, '20100401', 87

    select * from @datesTable
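
    One hedged way to do it (a sketch, not a canonical answer): build the full objId/date grid with a cross join and left-join the real data back onto it, so missing '20100501' rows come through with a null value.

    select o.objId, d.date, t.value
    from (select distinct objId from @datesTable) o
    cross join (select cast('20100501' as smalldatetime) as date
                union all
                select cast('20100401' as smalldatetime)) d
    left join @datesTable t
      on t.objId = o.objId and t.date = d.date
    order by o.objId, d.date desc;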

    Read the article

  • code duplication in sql case statements

    - by NS
    Hi I'm trying to output something like the following but am finding that there is a lot of code duplication going on. | australian_has_itch | kiwi_has_itch | | yes | no | | no | n/a | | n/a | no | ... My query looks like this with two case statements that do the same thing but flip the country (my real query has 5 of these case statements): SELECT CASE WHEN NOT EXISTS ( SELECT person_id FROM people_with_skin WHERE people_with_skin.person_id = people.person_id AND people.country = "Australia" ) THEN 'N/A' WHEN EXISTS ( SELECT person_id FROM itch_none_to_report WHERE people.country = "Australia" AND person_id = people.person_id ) THEN 'None to report' WHEN EXISTS ( SELECT person_id FROM itchy_people WHERE people.country = "Australia" AND person_id = people.person_id ) THEN 'Yes' ELSE 'No' END australian_has_itch, CASE WHEN NOT EXISTS ( SELECT person_id FROM people_with_skin WHERE people_with_skin.person_id = people.person_id AND people.country = "NZ" ) THEN 'N/A' WHEN EXISTS ( SELECT person_id FROM itch_none_to_report WHERE people.country = "NZ" AND person_id = people.person_id ) THEN 'None to report' WHEN EXISTS ( SELECT person_id FROM itchy_people WHERE people.country = "NZ" AND person_id = people.person_id ) THEN 'Yes' ELSE 'No' END kiwi_has_itch, FROM people Is there a way for me to condense this somehow and not have so much code duplication? Thanks!
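
    One hedged way to factor out the duplication (a sketch assuming SQL Server; the function name and shape are assumptions, not from the question): put the shared CASE logic into an inline table-valued function that takes the target country, then CROSS APPLY it once per output column.

    CREATE FUNCTION dbo.ItchStatus (@person_id int, @person_country varchar(50), @target_country varchar(50))
    RETURNS TABLE
    AS RETURN
    ( SELECT CASE
        WHEN @person_country <> @target_country
          OR NOT EXISTS (SELECT 1 FROM people_with_skin ps WHERE ps.person_id = @person_id)
          THEN 'N/A'
        WHEN EXISTS (SELECT 1 FROM itch_none_to_report n WHERE n.person_id = @person_id)
          THEN 'None to report'
        WHEN EXISTS (SELECT 1 FROM itchy_people i WHERE i.person_id = @person_id)
          THEN 'Yes'
        ELSE 'No'
      END AS status
    );
    GO

    SELECT au.status AS australian_has_itch,
           nz.status AS kiwi_has_itch
    FROM people
    CROSS APPLY dbo.ItchStatus(people.person_id, people.country, 'Australia') au
    CROSS APPLY dbo.ItchStatus(people.person_id, people.country, 'NZ') nz;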

    Read the article

  • Need to speed up the results of this SQL statement. Any advice?

    - by jeffself
    I've got the following SQL Statement that needs some major speed up. The problem is I need to search on two fields, where each of them is calling several sub-selects. Is there a way to join the two fields together so I call the sub-selects only once? SELECT billyr, billno, propacct, vinid, taxpaid, duedate, datepif, propdesc FROM trcdba.billspaid WHERE date(datepif) > '01/06/2009' AND date(datepif) <= '01/06/2010' AND custno in (select custno from cwdba.txpytaxid where taxpayerno in (select taxpayerno from cwdba.txpyaccts where accountno in (select accountno from rtadba.reasacct where controlno = 1234567))) OR custno2 in (select custno from cwdba.txpytaxid where taxpayerno in (select taxpayerno from cwdba.txpyaccts where accountno in (select accountno from rtadba.reasacct where controlno = 1234567)))
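
    A hedged rewrite (a sketch, assuming the database supports common table expressions, e.g. DB2 or SQL Server): compute the custno list once, reuse it for both columns, and put parentheses around the OR so the date range applies to both branches.

    WITH matched_custno AS (
      SELECT custno
      FROM cwdba.txpytaxid
      WHERE taxpayerno IN (SELECT taxpayerno
                           FROM cwdba.txpyaccts
                           WHERE accountno IN (SELECT accountno
                                               FROM rtadba.reasacct
                                               WHERE controlno = 1234567))
    )
    SELECT billyr, billno, propacct, vinid, taxpaid, duedate, datepif, propdesc
    FROM trcdba.billspaid
    WHERE date(datepif) > '01/06/2009'
      AND date(datepif) <= '01/06/2010'
      AND (custno  IN (SELECT custno FROM matched_custno)
        OR custno2 IN (SELECT custno FROM matched_custno));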

    Read the article

  • Padding a string in Postgresql with rpad without truncating it

    - by dmoebius
    Using Postgresql 8.4, how can I right-pad a string with blanks without truncating it when it's too long? The problem is that rpad truncates the string when it is actually longer than number of characters to pad. Example: SELECT rpad('foo', 5); ==> 'foo ' -- fine SELECT rpad('foo', 2); ==> 'fo' -- not good, I want 'foo' instead. The shortest solution I found doesn't involve rpad at all: SELECT 'foo' || repeat(' ', 5-length('foo')); ==> 'foo ' -- fine SELECT 'foo' || repeat(' ', 2-length('foo')); ==> 'foo' -- fine, too but this looks ugly IMHO. Note that I don't actually select the string 'foo' of course, instead I select from a column: SELECT colname || repeat(' ', 30-length(colname)) FROM mytable WHERE ... Is there a more elegant solution?
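
    Another hedged option that keeps rpad (a sketch of the same idea, arguably a bit cleaner than the repeat() trick): let GREATEST() make sure the pad length never drops below the string's own length, so long values pass through untouched.

    SELECT rpad('foo', GREATEST(length('foo'), 5));   -- 'foo  '
    SELECT rpad('foo', GREATEST(length('foo'), 2));   -- 'foo'

    SELECT rpad(colname, GREATEST(length(colname), 30)) FROM mytable WHERE ...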

    Read the article

  • Procedure Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32).

    - by Nick
    The stored proc is failing at the location below. Thanks for all your help.

    --Insert MSOrg Information
    DECLARE @PersonnelNumber int, @MSOrg varchar(255)

    DECLARE csr CURSOR FAST_FORWARD FOR
      SELECT PersonnelNumber FROM Person

    OPEN csr
    FETCH NEXT FROM csr INTO @PersonnelNumber
    WHILE @@FETCH_STATUS = 0
    BEGIN
      EXEC GetMSOrg @PersonnelNumber, @MSOrg out

      INSERT INTO PersonSubject
      ( PersonnelNumber
       ,SubjectID
       ,SubjectValue
       ,Created
       ,Updated
      )
      SELECT @PersonnelNumber
            ,SubjectID
            ,@MSOrg
            ,getDate()
            ,getDate()
      FROM Subject
      WHERE DisplayName = 'MS Org'

      FETCH NEXT FROM csr INTO @PersonnelNumber
    END
    CLOSE csr
    DEALLOCATE csr

    Below is the stored proc definition for GetMSOrg; it fails at the third condition.

    CREATE PROCEDURE [dbo].[GetMSOrg]
    ( @PersonnelNumber int
     ,@OrgTerm varchar(200) out
    )
    AS
    DECLARE @MDRTermID int
           ,@ReportsToPersonnelNbr int

    --Check to see if we have reached the top of the chart
    SELECT @ReportsToPersonnelNbr = ReportsToPersonnelNbr
    FROM ReportsTo
    WHERE PersonnelNumber = @PersonnelNumber

    IF (@ReportsToPersonnelNbr IS NULL) --Reached the Top of the Org Ladder
    BEGIN
      SET @OrgTerm = 'Non-standard rollup'
    END
    ELSE IF (@PersonnelNumber IN (SELECT PersonnelNumber FROM OrgTermMap))
    BEGIN
      SELECT @OrgTerm = s.Term
      FROM OrgTermMap tm
      JOIN Taxonomy..StaticHierarchy s ON tm.OrgTermID = s.TermID
      WHERE tm.PersonnelNumber = @PersonnelNumber
    END
    ELSE
    BEGIN
      SELECT @MDRTermID = tm.OrgTermID
      FROM ReportsTo r
      JOIN OrgTermMap tm ON r.ReportsToPersonnelNbr = tm.PersonnelNumber
      WHERE r.PersonnelNumber = @PersonnelNumber

      IF (@MDRTermID IS NULL)
      BEGIN
        EXEC GetMSOrg @ReportsToPersonnelNbr, @OrgTerm out
      END
      ELSE
      BEGIN
        SELECT @OrgTerm = Term
        FROM Taxonomy..StaticHierarchy
        WHERE VocabID = 118 AND TermID = @MDRTermID
      END
    END
    GO
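
    One hedged way around the 32-level limit (a rough sketch of the procedure body, not a drop-in replacement): replace the recursive EXEC inside GetMSOrg with a WHILE loop that walks up the ReportsTo chain. This also makes it easier to add a guard against cycles in the reporting data, which would otherwise be exactly what blows past 32 nested calls.

    DECLARE @current int, @ReportsTo int, @TermID int
    SET @current = @PersonnelNumber
    SET @OrgTerm = NULL

    WHILE @OrgTerm IS NULL
    BEGIN
      SET @ReportsTo = NULL
      SET @TermID = NULL

      SELECT @ReportsTo = ReportsToPersonnelNbr
      FROM ReportsTo
      WHERE PersonnelNumber = @current

      IF @ReportsTo IS NULL          -- top of the org ladder
      BEGIN
        SET @OrgTerm = 'Non-standard rollup'
      END
      ELSE IF EXISTS (SELECT 1 FROM OrgTermMap WHERE PersonnelNumber = @current)
      BEGIN
        SELECT @OrgTerm = s.Term
        FROM OrgTermMap tm
        JOIN Taxonomy..StaticHierarchy s ON tm.OrgTermID = s.TermID
        WHERE tm.PersonnelNumber = @current
      END
      ELSE
      BEGIN
        SELECT @TermID = tm.OrgTermID
        FROM ReportsTo r
        JOIN OrgTermMap tm ON r.ReportsToPersonnelNbr = tm.PersonnelNumber
        WHERE r.PersonnelNumber = @current

        IF @TermID IS NULL
        BEGIN
          SET @current = @ReportsTo   -- climb one level and loop again
        END
        ELSE
        BEGIN
          SELECT @OrgTerm = Term
          FROM Taxonomy..StaticHierarchy
          WHERE VocabID = 118 AND TermID = @TermID
        END
      END
    END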

    Read the article

  • IF/ELSE makes stored procedure not return a result set

    - by Brendan Long
    I have a stored procedure that needs to return something from one of two databases:

    IF @x = 1
      SELECT @y FROM Table_A
    ELSE IF @x = 2
      SELECT @y FROM Table_B

    Either SELECT alone will return what I want, but adding the IF/ELSE makes it stop returning anything. I tried:

    IF @x = 1
      RETURN SELECT @y FROM Table_A
    ELSE IF @x = 2
      RETURN SELECT @y FROM Table_B

    But that causes a syntax error. The two options I see are both horrible:

    Do a UNION and make sure that only one side has any results:

    SELECT @y FROM Table_A WHERE @x = 1
    UNION
    SELECT @y FROM Table_B WHERE @x = 2

    Create a temporary table to store one row in, and create and delete it every time I run this procedure (lots).

    Neither solution is elegant, and I assume they would both be horrible for performance (unless MS SQL is smart enough not to search the tables when the WHERE clause is always false). Is there anything else I can do? Is option 1 not as bad as I think?
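
    One hedged guess at what is going on (not from the original thread): if @y actually holds a column name, SELECT @y only returns the variable's value, and RETURN SELECT is never valid T-SQL; in that case the statement has to be built dynamically. A sketch reusing the question's names:

    DECLARE @sql nvarchar(max)
    SET @sql = N'SELECT ' + QUOTENAME(@y) + N' FROM '
             + CASE WHEN @x = 1 THEN N'Table_A' ELSE N'Table_B' END
    EXEC sp_executesql @sql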

    Read the article

  • Oracle SQL: Multiple Subqueries Unioned Without Running Original Query Multiple Times.

    - by Bob
    So I've got a very large database, and need to work on a subset ~1% of the data to dump into an excel spreadsheet to make a graph. Ideally, I could select out the subset of data and then run multiple select queries on that, which are then UNION'ed together. Is this even possible? I can't seem to find anyone else trying to do this and would improve the performance of my current query quite a bit. Right now I have something like this: SELECT ( SELECT ( SELECT( long list of requirements ) UNION SELECT( slightly different long list of requirements ) ) ) and it would be nice if i could group the commonalities of the two long requirements and have simple differences between the two select statements being unioned.
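
    One hedged approach (a sketch with placeholder table and condition names; Oracle's WITH clause has been available since 9i): factor the ~1% subset into a subquery-factoring clause so the long requirement list is written once and every UNION branch reads from it. Whether Oracle materializes it once or inlines it is up to the optimizer, but it at least removes the duplicated requirements.

    WITH subset AS (
      SELECT *
      FROM   big_table
      WHERE  /* the long list of requirements shared by both branches */
             1 = 1
    )
    SELECT *            -- first variation
    FROM   subset
    WHERE  extra_condition_1 = 'Y'
    UNION
    SELECT *            -- slightly different variation
    FROM   subset
    WHERE  extra_condition_2 = 'Y';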

    Read the article

  • SQL query problem

    - by Brisonela
    Hi, I'm new to StackOverflow, and new to SQL Server. I'd like you to help me with a troublesome query. This is my database structure (it's half Spanish, I hope that doesn't matter):

    Database

    My problem is that I don't know how to make a query that states which team is the local team and which is the visitor (using table TMatch, knowing that the stadium belongs to only one team). This is as far as I can get:

    Select P.NroMatch,
      (select * from fnTeam (P.TeamA)) as TeamA,
      (select * from fnTeam (P.TeamB)) as TeamB,
      (select * from fnEstadium (P.CodEstadium)) as Estadium,
      (cast(P.GolesTeamA as varchar)) + '-' + (cast(P.GolesTeamA as varchar)) as Score,
      P.Fecha
    from TMatch P

    Using these functions:

    If object_id ('fnTeam','fn') is not null
      drop function fnTeam
    go
    create function fnTeam(@CodTeam varchar(5))
    returns table
    return (Select Name from TTeam where CodTeam = @CodTeam)
    go
    select * from fnTeam ('Eq001')
    go
    ----****
    If object_id ('fnEstadium','fn') is not null
      drop function fnEstadium
    go
    create function fnEstadium(@CodEstadium varchar(5))
    returns table
    return (Select Name from TEstadium where CodEstadium = @CodEstadium)
    go

    I hope I've explained myself well, and I thank you for your help in advance.

    Read the article

  • Query returning an ascending group number

    - by Dougman
    I have a query like below that has groups (COL1) and that group's values (COL2). select col1, col2 from (select 'A' col1, 1 col2 from dual union all select 'A' col1, 2 col2 from dual union all select 'B' col1, 1 col2 from dual union all select 'B' col1, 2 col2 from dual union all select 'C' col1, 1 col2 from dual union all select 'C' col1, 2 col2 from dual ) order by col1, col2; The output of this query looks like: COL1 COL2 ---- ---- A 1 A 2 B 1 B 2 C 1 C 2 What I need is a query that will return an ordered number increasing for each different group (COL1). It seems like there would be a simple way to accomplish this (maybe with analytics) but for some reason it is escaping me. GRPNUM COL1 COL2 ------ ---- ---- 1 A 1 1 A 2 2 B 1 2 B 2 3 C 1 3 C 2 I am running Oracle 10gR2.
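
    A hedged answer-style sketch (analytic functions are available in 10gR2): DENSE_RANK() over the group column produces exactly that increasing group number.

    select dense_rank() over (order by col1) as grpnum,
           col1, col2
    from  (select 'A' col1, 1 col2 from dual union all
           select 'A' col1, 2 col2 from dual union all
           select 'B' col1, 1 col2 from dual union all
           select 'B' col1, 2 col2 from dual union all
           select 'C' col1, 1 col2 from dual union all
           select 'C' col1, 2 col2 from dual)
    order by col1, col2;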

    Read the article
